On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
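For anyone who wants to play with this, here is a minimal Python sketch of the mapping and nearest-amplitude decision above (the "received" values are just the ones from the example):

```python
# Minimal sketch of the 8-ASK example above. The Gray-coded mapping means
# neighboring amplitudes differ in exactly one bit, so the most likely
# detection error corrupts only a single bit.
MAPPING = {
    "000": -7, "001": -5, "011": -3, "010": -1,
    "110":  1, "111":  3, "101":  5, "100":  7,
}
DEMAPPING = {amp: bits for bits, amp in MAPPING.items()}
LEVELS = sorted(DEMAPPING)

def modulate(bits: str) -> int:
    """Map one 3-bit group to one amplitude (one modulation symbol)."""
    return MAPPING[bits]

def demodulate(received: float) -> str:
    """Decide on the nearest valid amplitude, then map back to bits."""
    nearest = min(LEVELS, key=lambda amp: abs(received - amp))
    return DEMAPPING[nearest]

print(demodulate(2.1))  # '111': 3 is the closest amplitude to 2.1
print(demodulate(1.8))  # '110': if 111 was sent, only the last bit is wrong
```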
I am well aware of the terminology and the basic concept of QAM, and you are probably right about the number of symbols per period; as I said before, I do RF, not baseband. Everything I work with is well past modulation; I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
It is FDMA. I was referring to all the modulation mumbo jumbo as baseband; to RF people, QAM looks like PSK, etc. We take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my master's in EE, but I guess I remembered a few of those details incorrectly, as I don't normally worry about that stuff a whole lot.
I guess some TDMA system might technically be more efficient if I remember that stuff right, but it would be expensive as heck to change things over.
It is well known that it only costs ISPs 2 to 3 cents per GB of internet upload/download. Your example only works for peak hours; at all other times, the only reason the limit exists is greed.
I don't really understand what you are trying to explain. Are you talking about a speed cap or a usage cap? A usage cap is totally a way to make people pay more. How is it that they make insane amounts of money if, as you say, upgrading the system is so costly? They could surely still make money while upgrading the infrastructure little by little. Also, if they keep investing little by little, the upgrade costs would surely go down over time.
On another point, don't you think investing in upgrading the North American network would create jobs, stimulate the economy, and make the internet more available to everyone? I'm not really talking only about companies (mostly governments, although companies should definitely invest too; don't let the gov't pay for everything).
On October 18 2011 11:59 Feartheguru wrote: It is well known that it only costs ISPs 2 to 3 cents per GB of internet upload/download. Your example only works for peak hours; at all other times, the only reason the limit exists is greed.
Um, that cost figure might work from the CMTS onward, but not from home->CMTS. That is where the problem lies.
I don't really understand what you are trying to explain. Are you talking about a speed cap or a usage cap? A usage cap is totally a way to make people pay more. How is it that they make insane amounts of money if, as you say, upgrading the system is so costly? They could surely still make money while upgrading the infrastructure little by little. Also, if they keep investing little by little, the upgrade costs would surely go down over time.
On another point, don't you think investing in upgrading the North American network would create jobs, stimulate the economy, and make the internet more available to everyone? I'm not really talking only about companies (mostly governments, although companies should definitely invest too; don't let the gov't pay for everything).
The usage cap exists because the speed caps are much larger than the capacity of the infrastructure, were everyone to use it fully at all times. The idea is that most people just use the internet for email etc., and they want it to be fast, so you offer far higher speed caps than the network could handle if everyone maxed them out simultaneously. You trade bandwidth around. The usage cap tries to ensure that this trading does not fail because some users are using a ton of bandwidth and clogging things up for everyone else; imposing the cap discourages them from continually doing it.
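To put toy numbers on that trade (all of these are made up for illustration, not any real ISP's figures):

```python
# Toy oversubscription math; every number here is hypothetical.
node_capacity_mbps = 1000   # shared link behind one neighborhood node
speed_cap_mbps = 50         # per-subscriber speed cap
subscribers = 400

oversub = subscribers * speed_cap_mbps / node_capacity_mbps
print(f"oversubscription ratio: {oversub:.0f}:1")           # 20:1

# The plan works as long as average demand stays low...
avg_demand_mbps = 2   # a typical light user (email, browsing)
print(subscribers * avg_demand_mbps <= node_capacity_mbps)  # True

# ...but a handful of users saturating their caps fills the whole node.
heavy_users = 20
print(heavy_users * speed_cap_mbps >= node_capacity_mbps)   # True
```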
I agree that it would create a ton of jobs, and fully support it. I know the plant I work at could hire a ton of skilled assemblers were that the case.
How does QAM look like PSK, unless you're talking about very specific QAM constellations like 4-QAM being identical to 4-PSK? Those are different things.
edit: nevermind, I see what you're saying. You don't care where the symbols are placed on the constellation diagram, if you're designing other parts of the system.
Regardless of whether it's FDMA or TDMA, if you have the same channel, you have the same (information-theoretic) capacity. FDMA and TDMA are both useful for different things, and many systems do joint FDMA and TDMA. Switching to TDMA won't by itself improve things.
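A quick sanity check of that claim, under the usual idealized assumptions (equal SNR in every slot and subband, no guard times, guard bands, or overhead; the numbers themselves are arbitrary):

```python
# Idealized per-user Shannon rate: splitting one AWGN channel by
# frequency (FDMA) or by time (TDMA) gives the same result.
from math import log2, isclose

bandwidth_hz = 6e6   # one shared channel
snr = 100.0          # linear SNR, assumed identical for every user
users = 60

# FDMA: each user gets 1/N of the bandwidth, all of the time.
fdma_rate = (bandwidth_hz / users) * log2(1 + snr)

# TDMA: each user gets all of the bandwidth, 1/N of the time.
tdma_rate = (1 / users) * bandwidth_hz * log2(1 + snr)

print(isclose(fdma_rate, tdma_rate))   # True: identical per-user rate
```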
These systems use OFDMA actually, which has many practical advantages (e.g. reducing intersymbol interference, lower-complexity channel equalization), so you wouldn't want to get rid of that anyway.
Sorry, if you make a post like this, I must sidetrack you by nitpicking the details.
Exactly: I don't care when I am designing other parts of the system. I am generally given something resembling a third-order intermod spec, a loss/gain spec, and an (RF) bandwidth spec.
As far as TDMA is concerned, I believe you get a lot higher throughput versus the number of users, as you don't have to worry as much about interference from out-of-band communications (the white-space problem). The graph I am looking at, from Pratt's satellite comm textbook (it's what I had lying around), shows something like 85% throughput for TDMA at 60 users versus somewhere below 50% for FDMA, but maybe I am missing something, as I don't normally worry about that stuff. And while some systems may use OFDM and combined TDMA/FDMA systems, AFAIK DOCSIS is plain old FDMA. NM: I guess it has some TDMA mixed in for burst access.
Maybe I shouldn't have posted about details regarding multiplexing, as they are irrelevant to my point, but it's always nice to learn new details.
There must be something in practice that is not being accounted for in the difference between the FDMA and TDMA results. If you make a whole lot of simplifying assumptions (no guard periods, no preambles/headers/other overhead, and so on) and assume that everything is just subjected to AWGN, FDMA and TDMA should give you the same result.
Well, you do waste a lot of bandwidth by doing FDMA that is not tightly spaced like in OFDM, so that could be it. I was assuming orthogonality of the subcarriers like in OFDM. If you do not use OFDM, you need guard bands in frequency between subcarriers, which sucks.
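For anyone following along, here is a minimal numpy sketch of that subcarrier orthogonality (sizes and data are arbitrary): the IFFT overlaps the subcarriers with no guard bands between them, and the cyclic prefix is what buys the ISI resistance and one-tap equalization mentioned earlier.

```python
# Minimal OFDM round trip: IFFT at the transmitter, cyclic prefix,
# FFT at the receiver. On an ideal channel the data comes back exactly.
import numpy as np

n_subcarriers = 64
cp_len = 16   # cyclic prefix; absorbs intersymbol interference

# One QPSK symbol per subcarrier (random data for illustration).
rng = np.random.default_rng(0)
data = ((rng.integers(0, 2, n_subcarriers) * 2 - 1)
        + 1j * (rng.integers(0, 2, n_subcarriers) * 2 - 1))

# Transmitter: IFFT packs the overlapping-but-orthogonal subcarriers
# into one time-domain symbol; the last cp_len samples are prepended.
time_symbol = np.fft.ifft(data)
tx = np.concatenate([time_symbol[-cp_len:], time_symbol])

# Receiver: drop the prefix and FFT back. With a real channel, a one-tap
# equalizer per subcarrier would go here (the low-complexity part).
rx = np.fft.fft(tx[cp_len:])
print(np.allclose(rx, data))   # True on an ideal channel
```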
Yeah, it's hard to say what DOCSIS or any of these standards do, now that they've got several million revisions each and different modes...
edit: btw I should stop ragging on you, since it's RF guys that do a lot of the dirty work. I'll just go back to pretending IMD is 0, and you can go back to making that happen.
You are indeed not orthogonal like OFDM, so that is the problem: filtering. With FDMA you end up with what is basically an increased noise floor from the other channels if you place them too closely, due to filter roll-off. If you had ideal filters, then I guess FDMA would be the same. The overhead of TDMA is just less than the cost you pay in increased BER with FDMA in typical systems, due to what is effectively an increased noise floor; at least, that is what I think the graph was showing.
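A back-of-the-envelope version of that effect (every number here is hypothetical): adjacent-channel power leaking through finite filter rejection adds to the thermal noise and raises the effective floor.

```python
# Hypothetical numbers only: how filter roll-off raises the noise floor.
from math import log10

def db_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

noise_floor_dbm = -100.0     # thermal noise in the channel
adjacent_power_dbm = -40.0   # neighboring carrier at the band edge
filter_rejection_db = 50.0   # finite (non-ideal) stopband rejection

# Leakage after filtering, summed with thermal noise in linear units.
leakage_mw = db_to_mw(adjacent_power_dbm - filter_rejection_db)
total_mw = db_to_mw(noise_floor_dbm) + leakage_mw
effective_floor_dbm = 10 * log10(total_mw)

print(f"{effective_floor_dbm:.1f} dBm")   # about -89.6 dBm: a ~10 dB rise
```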
I learned quite a bit from reading this, although I have to admit I skimmed through some areas, as this is not a field I know much about at all. Thanks for explaining it as best you could, though.
A shorter version would be that we have terrible internet infrastructure in the States and most areas are serviced by one company with a local monopoly, so people have little choice of their provider and the providers have little incentive to improve.
Solution: Government subsidies for improving broadband access, especially to rural areas.
Problem: That would be "socialism" and therefore evil to most politicians.
Hm yeah, well I consider OFDM(A) to be a kind of FDMA, and the kind that makes the most sense to use (in most contexts, or at least in certain ones).
I think in industry, a lot of jargon starts diverging from technical definitions and begins to mean something different, sometimes much broader or more specific than the original meaning. At least that's my impression.
The worst offender is "waveform" meaning everything from the waveform to the RF, hardware, coding, link control, medium access, etc. The first time somebody mentioned "waveforms" to me, I thought he was talking about squiggly things...
Recently I've been confused about what different people mean by SC-FDMA.
hahaha, as an RF guy I think the worst offender is "bandwidth". I start thinking in terms of frequency bands, not the bitrate that the digital guys mean.
Another great offender is dBs of loss and gain. 32 dB of loss is -32 dB of gain, of course, but so many documents make that simple concept way too confusing by being nonspecific.
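A trivial sketch of that convention (values made up): treat everything as gain in dB, with loss as negative gain, and cascaded stages just add.

```python
# Loss/gain sign convention: 32 dB of loss is -32 dB of gain, and the
# gains of cascaded stages in dB simply add.
def loss_db_to_gain_db(loss_db: float) -> float:
    return -loss_db

# A cable run with 32 dB of loss followed by a 20 dB amplifier:
stages_db = [loss_db_to_gain_db(32.0), 20.0]
print(sum(stages_db))   # -12.0 dB net gain, i.e. 12 dB of net loss
```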
And indeed industry jargon often means different things. Sometimes I have to deal with customers in academia and have little to no idea what they actually want compared to customers in industry.
It's not that you have really written false information in the OP, but it only covers one aspect of cable internet and doesn't chime in on the different types of DSL, although a lot of the same general concepts are shared.
It doesn't really explain why monthly usage caps are implemented, though; what you explained is more why the companies that provide internet service don't give us the data rates that a place like Korea has. You mostly explained the limiting factor on the speeds that traverse the metropolitan area network. Putting a cap on monthly usage doesn't stop everyone from maxing out the provider's link speeds during the busiest hours. Maybe it inclines them to use the internet less, which could in turn cut down on bandwidth during the busy hours, but they could always just do away with the cap and throttle your speeds during those hours. Maybe I'm missing something; it's been a while since I had my data telecom courses, but that's how I've always understood it.
On October 18 2011 11:23 Alventenie wrote: I'm curious about why you say bandwidth caps are a regular thing for us, even though we pay more than other countries that have faster internet than we do.
I will be insanely biased here, but whatever. South Korea pays around $30 a month for broadband (going off last year's numbers at $28.80, but I rounded up a dollar just in case) vs. our $45 a month. They also get faster internet than we do, and I haven't heard much about bandwidth caps from them (although I did not search for this specifically, I never hear about it in the news).
Why do you think we have to pay more for a worse system that isn't going to support the internet needs of the future? You say caps are needed, but how are we to eliminate them if the future is going to have us getting internet to more people, making the problem even bigger?
The Korean government subsidizes a lot of the costs, though, which is why Korea's internet is so good and relatively cheap. Until the US government decides to start investing heavily in the internet, the US will always be more expensive and slower.
Also, Korea is much smaller than the US, so for the US to invest in internet infrastructure would be incredibly expensive (but a better use of money than that damned military budget...).
There are tons of other countries without caps (or at least with caps that don't matter as long as you're not downloading everything you can find). Caps, to me, are just a thing of the late '90s and early '00s...
So... why should the users be billed because the ISPs are willing to offer more than they can handle? If that's not 100% greed, I don't know what is. You can say with a straight face, "Oh, it's to improve our technology, etc." It might be partially true, but the truth behind the caps is greed.
I'm sure the majority of people would understand if you throttled their speed during peak hours, kind of like rush hour on your way to work. Everyone uses the highway; sure, they could've built a 20-lane highway, but that's way too expensive (and there are actually more issues with land), so we have to deal with it. However, I'm not given a limit on how long or how far I can use the highway; I just use it as I want.
The internet should be the same: sure, you can only pass so many internets through the tubes at one time, but when it's not congested, there's no issue... It's almost as if they tried to create something out of nothing, using the average consumer's lack of knowledge...
Thanks for the techplanations, I might have even understood some of it
So, I decided to do some research myself into the financial viability of that. The profits of the bandwidth cappers, year ending 31-12-2010, in millions USD:
Comcast: 22,687.00
Bell: 13,120.00
Rogers: 10,666.00
Telus: 5,506.00
Shaw: 3,717.58
So here's what I think: the big telcos can afford to upgrade their networks with their sizeable profits. Of course, their shareholders and the companies themselves won't want to do that when they can make that kind of money without spending anything, since they hold mono/oligopolies due to the enormous sunk costs of being an ISP. Therefore, government incentives are needed if we want these companies to keep improving their internet services. I believe the internet is an essential service in today's world and is becoming more like a utility than a luxury, so governments should incentivize/force these companies to spend more of their profits on infrastructure rather than giving it all back to their shareholders.
As a heavy user, I dislike bandwidth caps, but I understand the economics behind them, as long as they are reasonable, especially with the increasing importance of the internet and the rise of streaming and digital distribution channels.