Recently, following this MLG, I read a rather interesting discussion in which people complained about their bandwidth caps being exceeded. People understandably expressed frustration about their caps, and seemed to blame it more on the cable companies being greedy than on any real need. As someone who designs telecommunications infrastructure equipment for a living, I was a little shocked to see the lack of understanding about the state of the infrastructure and why it is so hard and expensive to increase capacity.
How your cable modem works:
Your cable modem connects to the internet through a system called "hybrid fiber cable". First, the data bits from your router or PC are modulated into something called 64-QAM (this is the most common; you may have 128-QAM or 32-QAM). This is the same type of modulation used by many cell phones and a number of other modern telecommunications systems. Your modem is assigned a frequency and a bandwidth window by a piece of equipment back at your local regional center called a cable modem termination system (CMTS). 64-QAM is a clever way of encoding data: a group of 6 bits is encoded into a sine wave by storing the information in both the amplitude and the phase of the wave, not just the amplitude like AM or the frequency like FM. This lets you transfer 6 bits per symbol (versus just one for a simple binary AM scheme), but because the signals for different combinations of bits are very similar, you are more susceptible to noise, so you need a higher signal to noise ratio. Each person on a physical channel of a cable modem termination system (within the same neighborhood) has a different frequency, and it is all combined onto a common line. The system tries to be smart about allocating your bandwidth in order to make sure everyone can get internet, but isn't that smart about it: it's pretty hard to finely control such a thing.
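To make the 6-bits-per-symbol idea concrete, here's a rough Python sketch of a square 64-QAM mapper. This is purely illustrative: the level values and bit ordering below are arbitrary choices of mine, not the actual constellation or Gray mapping defined in the DOCSIS spec.

```python
# Illustrative square 64-QAM mapper: 6 bits -> one complex symbol.
# The first 3 bits pick the in-phase (I) level, the last 3 pick the quadrature (Q) level.
# Level values and bit ordering are arbitrary here; real DOCSIS constellations and
# Gray mappings are defined in the spec.

LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]  # 8 levels per axis -> 8 * 8 = 64 points

def bits_to_symbol(bits):
    """Map 6 bits (list of 0s and 1s) to one complex 64-QAM symbol."""
    assert len(bits) == 6
    i_index = bits[0] * 4 + bits[1] * 2 + bits[2]   # 3 bits -> index 0..7
    q_index = bits[3] * 4 + bits[4] * 2 + bits[5]
    return complex(LEVELS[i_index], LEVELS[q_index])

# Six bits become one point that carries information in both amplitude and phase.
print(bits_to_symbol([1, 0, 1, 0, 0, 1]))  # -> (3-5j)
```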
*I have been corrected by a more knowledgeable poster about my understanding of QAM symbol rates, for anyone who cares: *
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example for anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
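To make that mapping runnable, here's a minimal Python sketch of the 8-ASK example above, including the nearest-amplitude detection rule. It's purely illustrative; a real receiver does this in hardware/DSP with equalization, coding, and so on.

```python
# Minimal sketch of the 8-ASK example: 3 bits per symbol, Gray-coded onto
# 8 amplitude levels, with nearest-level detection at the receiver.

MAPPING = {
    (0, 0, 0): -7, (0, 0, 1): -5, (0, 1, 1): -3, (0, 1, 0): -1,
    (1, 1, 0):  1, (1, 1, 1):  3, (1, 0, 1):  5, (1, 0, 0):  7,
}
REVERSE = {amp: bits for bits, amp in MAPPING.items()}

def send(bits):
    """Map 3 bits to the amplitude that gets transmitted."""
    return MAPPING[tuple(bits)]

def detect(received_amplitude):
    """Pick the bits belonging to the valid amplitude closest to what was received."""
    nearest = min(REVERSE, key=lambda amp: abs(amp - received_amplitude))
    return REVERSE[nearest]

print(detect(2.1))   # (1, 1, 1): the closest valid level is 3
print(detect(1.8))   # (1, 1, 0): if 111 was actually sent, the last bit is in error
```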
Because you need a lot of power (which generally leads to a high signal to noise ratio), you need to be very close to the transmitter, or you will lose a lot of power to attenuation over the cables. In order to solve this problem cost effectively, a system called hybrid fiber cable was devised. In this system, at a point somewhere in your neighborhood, the QAM signals carried over coax cable are modulated onto fiber. This is not the same as connecting to fiber optic internet: it is still an analog QAM signal, but it is modulated up to optical frequencies so it can travel a long distance through fiber with low attenuation. The thing in your neighborhood is not a router; it is a simple non-regenerative modulator: it just multiplies the incoming signal by a light wave to carry it along the fiber optic cable.
When your signal gets to the cable company, it is demodulated back to regular QAM over coax. This is because we don't really have the technology yet to directly process analog signals carried over fiber. It is then routed to the cable modem termination system. The cable modem termination system is basically a router that takes in QAM-modulated cable at one end and feeds out to the internet on the other.
Why it is so hard to upgrade:
We are basically maxed out in capacity. Typical cable infrastructure has a usable bandwidth of roughly 5 MHz to 1 GHz, but that is shared across any number of homes. In order to increase capacity, one of two things needs to be done:
-Increased signal to noise ratio: this can be done either by reducing the noise (i.e. more expensive modems and receivers) or by using more power. If you can increase the signal to noise ratio, you can increase the number of bits per symbol, but you have to upgrade the equipment both in the home and in the CMTS. An example of this is DOCSIS 3.0, which can support up to 256-QAM. Because it is darn expensive to increase the SNR any other way, this basically boils down to having more neighborhood sites closer to homes to boost the power (you can't just increase the transmit power, or you will start blowing out components). For 128-QAM you typically need to receive a little less than 1 milliwatt of RF power, which is quite a lot when talking about telecommunications systems. As a point of reference, your cell phone receives somewhere around 10^-9 to 10^-12 watts.
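For a sense of scale, here's a quick sketch converting those power levels into dBm (dB referenced to 1 milliwatt, the usual RF convention). The wattages are just the ballpark figures quoted above, not spec values.

```python
import math

def watts_to_dbm(power_watts):
    """Convert a power in watts to dBm (dB relative to 1 milliwatt)."""
    return 10 * math.log10(power_watts / 1e-3)

# Ballpark figures from the paragraph above, not spec values:
print(watts_to_dbm(1e-3))    #   0.0 dBm: roughly the cable receive level mentioned
print(watts_to_dbm(1e-9))    # -60.0 dBm: upper end of the cell phone example
print(watts_to_dbm(1e-12))   # -90.0 dBm: lower end of the cell phone example
```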
-Increased number of physical channels: in order to do this, you must increase the number of physical cable channels running to the regional center. The regional center must purchase more cable modem termination systems (which run around 3 million dollars apiece), must serve fewer homes on a single modulator, and must run appropriate fiber to the modulators.
Both options boil down to putting the fiber modulators closer to the home, and both are extremely expensive to implement. The ISPs have found that the majority of people are not willing to pay more for more speed, and with the huge expense involved it then becomes hardly worth it.
Conclusion:
When looking at the modern cable modem infrastructure, you will notice that it is basically one giant kludge. This is because it evolved incrementally to minimize cost while increasing speed. If anyone were to design a new system from scratch, it would look nothing like the current one, but installing an entirely new system is incredibly expensive (hence why most people don't have FiOS access, and why it is not much cheaper than cable).
ISPs are greedy. They offer internet speeds that they "can't afford" on their current infrastructure, then they charge the clients up the ass for a fake bandwidth cap issue. Why is it I'm being charged as much for a GB during off hours as I am during prime time? In Canada, ISPs have stated they might throttle your internet speed during prime time if not enough bandwidth is available, which is understandable... but then they go ahead and claim it's a precious resource that should be used sparingly, because everyone should have their fair share.
Bandwidth is only a big deal during prime hours, when most people use it; the rest of the time, it's freely available to just about anyone who uses it.
Also, if ISPs charge you for bandwidth caps, shouldn't they at least use it on upgrading their infrastructure, rather than increasing your speed?
People are upset (and justifiably so) over usage caps, commonly and erroneously referred to as bandwidth caps, not over literal digital bandwidth and the limitations thereof.
This allows you to transfer 64 bits per symbol versus just one for AM,
6 bits, not 64. It's 64 different symbols, each representing 6 bits of data. 2^6 = 64
btw, where do you work?
Yeah, you are right, I will change it; I do RF stuff, not your silly base-band stuff. I work for Aeroflex; we ODM a lot of the components that other companies then mark up and sell.
I'm curious as to why you're telling us that bandwidth caps are just a regular thing we have to live with, even though we pay more than other countries that have faster internet than we do.
I will be insanely biased here, but whatever. South Korea pays around $30 a month for broadband (going off last year's numbers at 28.80, but I rounded up a dollar just in case) vs our $45 a month on broadband. They also get faster internet than we do, and I haven't heard much about a bandwidth cap from them (although I did not do a search for this specifically, I never hear about it in the news).
Why do you think that we have to pay more for a worse system that isn't going to support the demand for internet in the future? You say caps are needed, but how are we to eliminate them if the future is going to have us getting internet to more people, thus making this a bigger problem?
On October 18 2011 11:21 floor exercise wrote: People are upset (and justifiably) over usage caps, commonly and erroneously referred to as bandwidth caps, not literal digital bandwidth and the limitations thereof
Well they are one and the same. The majority of people don't ever get near their capacity, but want their internet to be snappy fast when they do use it. If you can increase bandwidth capacity you have no need for usage caps.
Very quality post. There are a lot of technological upgrades coming from places like 3M that improve on a lot of your points, but yeah, it's still a huge problem. Great insight, loved the read.
On October 18 2011 11:23 Alventenie wrote: I'm curious as to why you tell us why bandwidth caps are a regular thing that occurs to us, even though we pay more than other countries that have faster internet than us.
I will be insanely biased here, but whatever. South Korea pays around $30 a month for broadband (going off last years numbers at 28.80, but i rounded up a dollar just in case) vs our $45 a month on broadband. They also get faster internet than we do, and I haven't heard much about a bandwidth cap from them (although I did not do a search for this specifically, I never hear about it in the news).
Why do you think that we have to pay more, for a worse system that isn't going to support the need of internet in the future. You say caps are needed, but how are we to eliminate them if the future is going to have us getting internet to more people, thus making this a big problem.
The big difference is that, as far as I know, they are not working off of a hybrid fiber cable infrastructure. I believe they are basically pure fiber (correct me if I am wrong, I know very little about what they have in place). It's a lot cheaper to upgrade pure fiber than to upgrade hybrid fiber cable. We already had the cable infrastructure in place, so it was cheaper to go to hybrid fiber cable than to pure fiber, and most people don't want to pay a lot more for more speed or usage capacity.
Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
On October 18 2011 11:21 floor exercise wrote: People are upset (and justifiably) over usage caps, commonly and erroneously referred to as bandwidth caps, not literal digital bandwidth and the limitations thereof
Well they are one and the same. The majority of people don't ever get near their capacity, but want their internet to be snappy fast when they do use it. If you can increase bandwidth capacity you have no need for usage caps.
Not really, it's just the convenient excuse used to fleece customers. The only possible time for bandwidth capacity to reach peak is in peak usage hours. Usage based billing does not in any way curtail that, it's clearly an exploitative 'solution' to what may or may not be a problem at the end of the day.
How are they one and the same? I don't deny the existence of bandwidth limitations or even network congestion in certain hours, but how does usage based billing in any way effectively tackle the supposed issue of everyone coming home and using the internet at the same time?
Very greedy people decided to arbitrarily assign a cost to transmitting data and charge us this fee regardless of when or how we use this data, because there might be a problem in certain areas between the hours of 8 and 11 due to the technical limitations of our current infrastructure.
I appreciate your post for what it is but it completely misses the mark as to why Canadians are upset
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am well aware of the terminology and the basic concept of QAM, and you are probably right as far as the number of symbols per period; like I said before, I do RF, not baseband. Everything I work with is well past modulation; I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
@op: are you kidding me? Then how come the rest of the fuckin developed world has way better and cheaper internet than us? It's a fucking joke. We've been monopolized for far too long, and instead of increasing speeds over time, they smack on more and more restrictions. I know that distance is an issue since our countries (Canada and the States) are huge, but companies could easily focus their efforts on where traffic is highest (i.e. major cities). No need to set up networks over the prairies and shit. Start somewhere and expand.
On October 18 2011 11:23 Alventenie wrote: I'm curious as to why you tell us why bandwidth caps are a regular thing that occurs to us, even though we pay more than other countries that have faster internet than us.
I will be insanely biased here, but whatever. South Korea pays around $30 a month for broadband (going off last years numbers at 28.80, but i rounded up a dollar just in case) vs our $45 a month on broadband. They also get faster internet than we do, and I haven't heard much about a bandwidth cap from them (although I did not do a search for this specifically, I never hear about it in the news).
Why do you think that we have to pay more, for a worse system that isn't going to support the need of internet in the future. You say caps are needed, but how are we to eliminate them if the future is going to have us getting internet to more people, thus making this a big problem.
The Korean government subsidizes a lot of the costs though, which is why Korea's internet is so good and relatively cheap. Until the US government decides to start investing heavily in internet, the US will always be more expensive and slower.
Also, Korea is much smaller than the US, so for the US to invest in internet infrastructure would be incredibly expensive (but a better use than that damned military budget...).
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am aware well of the terminology, and the basic concept of QAM and you are probably right, as far as the number of symbols per period, I said before I do RF not baseband. Everything I work with is well past modulation, I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
You described what I deal with at my job on a daily basis. I'm guessing we do somewhat of the same work. Well put, I'd say, as a general brush-over, even without referencing dB, dBm, dBc, Eb/N0, etc.
It's amazing how long technology has been around before it's even brought mainstream, only to then be seen as that "hot, new technology product".
Really funky system, thanks for the information. Gotta admit, though, I would trade my internet for that. I'm paying 60 a month, plus I have to have a phone line with my ISP, and I only get 3 down and .5 up. I live in a small town in Texas, but still, DSL basically costing 85 dollars a month for such shit is ridiculous. Hope one day all those ISPs out there take a big fucking hit that jostles their comfy little industry, cheapskate bastards. -_-
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am aware well of the terminology, and the basic concept of QAM and you are probably right, as far as the number of symbols per period, I said before I do RF not baseband. Everything I work with is well past modulation, I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
It is FDMA. I was referring to all the modulation mumbo jumbo as base-band; to RF people, QAM looks like PSK, etc. We take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my masters in EE, but I guess I remembered a few details about that stuff incorrectly, as I don't normally worry about it a whole lot.
I guess some TDMA system might technically be more efficient, if I remember that stuff right, but it would be expensive as heck to change things over.
It is well known that it only costs ISPs 2 to 3 cents per GB of internet upload/download. Your example only works for peak hours. All other times, the only reason the limit exists is to be greedy.
I don't really understand what you are trying to explain. Are you talking about a speed cap or a usage cap? A usage cap is totally a way to make people pay more. How is it that they make insane amounts of money if, as you say, upgrading the system is so costly? They could surely still make money while upgrading the infrastructure little by little. Also, if they keep investing little by little, the upgrading costs would surely go down over time.
On another point, don't you think investing in upgrading the North American network would create jobs, stimulate the economy, and make the internet more available to everyone? I'm not really talking only about companies (mostly governments, although companies should definitely invest too; don't let the gov't pay for everything).
On October 18 2011 11:59 Feartheguru wrote: It is well known that it only costs ISPs 2 to 3 cents per gb of internet upload/download. Your example only works for peak hours. All other times, the only reason the limit exists is to be greedy.
Um, that cost figure might work from the CMTS on, but not from home->CMTS. That is where the problem lies.
I don't really understand what you are trying to explain. Are you talking about a speed cap or a usage cap. Usage cap is totally a way to make people pay more. How is it that they make insane amounts of money even if, as you say, upgrading the system is so costly. They could surely still make money while upgrading the infrastructure little by little. Also, if they keep investing little by little, the money upgrading costs would surely go down over time.
On another point, don't you think investing in upgrading the North american network create jobs, stimulate the economy and make the internet more available to everyone? I'm not really talking only about companies (mostly governments, although companies should definitely invest too, don't let the gov't pay for everything).
The usage cap exists because the speed caps are much larger than the capacity of the infrastructure, were everyone to use it fully at all times. The idea is that most people just use the internet for email etc., and they want it to be fast, so you get speed caps far higher than the network could handle if everyone maxed them out simultaneously. You trade bandwidth around. The usage cap tries to ensure that this trading around of bandwidth does not fail because some users are using a ton of bandwidth and clogging things up for everyone else; they get discouraged from continually doing it by imposing the cap.
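As a toy illustration of that trade-off (all numbers invented for the example, not any ISP's actual provisioning):

```python
# Toy oversubscription example -- every number here is made up for illustration.
homes_on_segment = 200        # homes sharing one downstream channel group
speed_tier_mbps = 50          # advertised speed per home
segment_capacity_mbps = 1000  # usable shared capacity on the segment

oversubscription = homes_on_segment * speed_tier_mbps / segment_capacity_mbps
print(f"Oversubscription ratio: {oversubscription:.0f}:1")
# -> 10:1. Everyone can burst to 50 Mbps, but only about 1 in 10 can do it at the
# same time, which is why a few heavy sustained users can degrade the whole segment.
```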
I agree that it would create a ton of jobs, and fully support it. I know the plant I work at could hire a ton of skilled assemblers were that the case.
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am aware well of the terminology, and the basic concept of QAM and you are probably right, as far as the number of symbols per period, I said before I do RF not baseband. Everything I work with is well past modulation, I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
It is FDMA. I was referring to all the modulation mumbo jumbo as base-band, for RF people QAM looks like PSK etc, we take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my masters in EE, but I guess I remembered a few details about that stuff incorrectly as I don't normally worry about it a whole lot.
I guess some TDMA system might technically be more efficient if I remember that stuff right, but it would be expensive as heck to change things over.
How does QAM look like PSK, unless you're talking about very specific QAM constellations like 4-QAM being identical to 4-PSK? Those are different things.
edit: nevermind, I see what you're saying. You don't care where the symbols are placed on the constellation diagram, if you're designing other parts of the system.
Regardless if it's FDMA or TDMA, if you have the same channel, you have the same (information-theoretic) capacity. FDMA and TDMA are both useful for different things. Many systems do joint FDMA and TDMA. Switching to TDMA won't by itself improve things.
These systems use OFDMA actually, which has many practical advantages (e.g. reducing intersymbol interference, lower-complexity channel equalization), so you wouldn't want to get rid of that anyway.
Sorry, if you make a post like this, I must sidetrack you by nitpicking the details.
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am aware well of the terminology, and the basic concept of QAM and you are probably right, as far as the number of symbols per period, I said before I do RF not baseband. Everything I work with is well past modulation, I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
It is FDMA. I was referring to all the modulation mumbo jumbo as base-band, for RF people QAM looks like PSK etc, we take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my masters in EE, but I guess I remembered a few details about that stuff incorrectly as I don't normally worry about it a whole lot.
I guess some TDMA system might technically be more efficient if I remember that stuff right, but it would be expensive as heck to change things over.
How does QAM look like PSK, unless you're talking about very specific QAM constellations like 4-QAM being identical to 4-PSK? Those are different things.
edit: nevermind, I see what you're saying. You don't care where the symbols are placed on the constellation diagram, if you're designing other parts of the system.
Regardless if it's FDMA or TDMA, if you have the same channel, you have the same (information-theoretic) capacity. FDMA and TDMA are both useful for different things. Many systems do joint FDMA and TDMA. Switching to TDMA won't by itself improve things.
These systems use OFDMA actually, which has many practical advantages (e.g. reducing intersymbol interference, lower-complexity channel equalization), so you wouldn't want to get rid of that anyway.
Sorry, if you make a post like this, I must sidetrack you by nitpicking the details.
Exactly: I don't care when I am designing other parts of the system. I am generally given something resembling a third-order intermod spec, a loss/gain spec, and an (RF) bandwidth spec.
As far as TDMA is concerned, I believe that you get a lot higher throughput versus number of users, as you don't have to worry as much about interference from out-of-band communications (the white-space problem). The graph I am looking at in Pratt's satellite comm textbook (it's what I had lying around) shows something like 85% throughput for TDMA at 60 users versus somewhere below 50% for FDMA, but maybe I am missing something, as I don't normally worry about that stuff. And while some systems may use OFDM and combined TDMA/FDMA, AFAIK DOCSIS is plain old FDMA. NM: I guess it has some TDMA mixed in for burst access.
Maybe I shouldn't have posted about details regarding multiplexing, as they are irrelevant to my point, but it's always nice to learn new details.
There must be something in practice that is not being accounted for, in the difference between the FDMA and TDMA result. If you make a whole lot of simplifying assumptions about guard periods not existing, preambles/headers/overheads/etc. not existing, and so on, and assume that everything just is subjected to AWGN, FDMA and TDMA should give you the same result.
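A quick numerical sanity check of that claim, under exactly those idealized assumptions (AWGN only, no guard bands or overhead, same average power per user); the numbers themselves are arbitrary:

```python
import math

# Idealized AWGN check that FDMA and TDMA give each user the same Shannon rate
# once you ignore guard bands, guard times, and all protocol overhead.
B = 6e6        # total channel bandwidth in Hz (arbitrary)
N0 = 1e-20     # noise power spectral density in W/Hz (arbitrary)
P = 1e-12      # average power per user in W (arbitrary)
users = 60

# FDMA: each user gets B/users of bandwidth all of the time.
fdma_rate = (B / users) * math.log2(1 + P / (N0 * B / users))

# TDMA: each user gets the whole band for 1/users of the time, bursting at
# users*P so the average power is still P.
tdma_rate = (1 / users) * B * math.log2(1 + users * P / (N0 * B))

print(fdma_rate, tdma_rate)   # identical, up to floating point
```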
Well, you do waste a lot of bandwidth by doing FDMA that is not tightly spaced like in OFDM, so that could be it. I was assuming orthogonality of the subcarriers like in OFDM. If you do not use OFDM, you need guard bands in frequency between subcarriers, which sucks.
Yeah, it's hard to say what DOCSIS or any of these standards do, now that they've got several million revisions each, and different modes...
edit: btw I should stop ragging on you, since it's RF guys that do a lot of the dirty work. I'll just go back to pretending IMD is 0, and you can go back to making that happen.
On October 18 2011 12:30 Myrmidon wrote: There must be something in practice that is not being accounted for, in the difference between the FDMA and TDMA result. If you make a whole lot of simplifying assumptions about guard periods not existing, preambles/headers/overheads/etc. not existing, and so on, and assume that everything just is subjected to AWGN, FDMA and TDMA should give you the same result.
Well, you do waste a lot of bandwidth by doing FDMA that is not tightly spaced like in OFDM, so that could be it. I was assuming orthogonality of the subcarriers like in OFDM. If you do not use OFDM, you need guard bands in frequency between subcarriers, which sucks.
Yeah, it's hard to say what DOCSIS or any of these standards do, now that they're got several million revisions each, and different modes...
You are indeed not orthogonal like OFDM, so that is the problem: filtering. You end up with what is basically equivalent to an increased noise floor in FDMA from the other channels if you place them too closely, due to filter roll-off. If you had ideal filters then I guess FDMA would be the same. The overhead of TDMA is just less than the cost you take in increased BER from FDMA in typical systems due to what is effectively an increased noise floor: at least that is what I think the graph was showing.
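Roughly what that raised noise floor looks like in numbers: a toy SINR calculation with made-up leakage figures, nothing measured from real equipment.

```python
import math

# Toy example of adjacent-channel leakage acting like a raised noise floor.
# All numbers are invented for illustration.
signal_mw = 1.0                  # desired channel power at the receiver, in mW
thermal_noise_mw = 1e-4          # thermal noise in the channel, in mW
leakage_per_adjacent_mw = 5e-4   # power leaking in from each neighbor (filter roll-off)
adjacent_channels = 2

interference_mw = adjacent_channels * leakage_per_adjacent_mw
snr_db = 10 * math.log10(signal_mw / thermal_noise_mw)
sinr_db = 10 * math.log10(signal_mw / (thermal_noise_mw + interference_mw))

print(f"SNR without leakage: {snr_db:.1f} dB")   # 40.0 dB
print(f"SINR with leakage:   {sinr_db:.1f} dB")  # ~29.6 dB -- the 'raised noise floor'
```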
I learned quite a bit from reading this, although I have to admit I skimmed through some areas, as this is not a topic I know much about at all. Thanks for explaining it as best as you could, though.
A shorter version would be that we have terrible internet infrastructure in the States and most areas are serviced by one company with a local monopoly, so people have little choice of their provider and the providers have little incentive to improve.
Solution: Government subsidies for improving broadband access, especially to rural areas.
Problem: That would be "socialism" and therefore evil to most politicians.
On October 18 2011 12:30 Myrmidon wrote: There must be something in practice that is not being accounted for, in the difference between the FDMA and TDMA result. If you make a whole lot of simplifying assumptions about guard periods not existing, preambles/headers/overheads/etc. not existing, and so on, and assume that everything just is subjected to AWGN, FDMA and TDMA should give you the same result.
Well, you do waste a lot of bandwidth by doing FDMA that is not tightly spaced like in OFDM, so that could be it. I was assuming orthogonality of the subcarriers like in OFDM. If you do not use OFDM, you need guard bands in frequency between subcarriers, which sucks.
Yeah, it's hard to say what DOCSIS or any of these standards do, now that they're got several million revisions each, and different modes...
You are indeed not orthogonal like OFDM so that is the problem: filtering. You end up with what is basically equivalent to an increased noise floor in FDMA from the other channels if you place them too closely due to filter roll-off. If you had ideal filters then I guess FDMA would be the same. The overhead of TDMA is just less then the cost you take in increased BER from FDMA in typical systems due to what is effectively an increased noise floor: at least that is what I think the graph was showing.
Hm yeah, well I consider OFDM(A) to be a kind of FDMA and the type that makes most sense to use (in most or certain contexts).
I think in industry, a lot of jargon starts diverging from technical definitions and begins to mean something different, and sometimes much broader or more specific than the original meaning. At least that's my impression.
The worst offender is "waveform" meaning everything from the waveform to the RF, hardware, coding, link control, medium access, etc. The first time somebody mentioned "waveforms" to me, I thought he was talking about squiggly things...
Recently I've been confused about what different people mean by SC-FDMA.
On October 18 2011 12:30 Myrmidon wrote: There must be something in practice that is not being accounted for, in the difference between the FDMA and TDMA result. If you make a whole lot of simplifying assumptions about guard periods not existing, preambles/headers/overheads/etc. not existing, and so on, and assume that everything just is subjected to AWGN, FDMA and TDMA should give you the same result.
Well, you do waste a lot of bandwidth by doing FDMA that is not tightly spaced like in OFDM, so that could be it. I was assuming orthogonality of the subcarriers like in OFDM. If you do not use OFDM, you need guard bands in frequency between subcarriers, which sucks.
Yeah, it's hard to say what DOCSIS or any of these standards do, now that they're got several million revisions each, and different modes...
You are indeed not orthogonal like OFDM so that is the problem: filtering. You end up with what is basically equivalent to an increased noise floor in FDMA from the other channels if you place them too closely due to filter roll-off. If you had ideal filters then I guess FDMA would be the same. The overhead of TDMA is just less then the cost you take in increased BER from FDMA in typical systems due to what is effectively an increased noise floor: at least that is what I think the graph was showing.
Hm yeah, well I consider OFDM(A) to be a kind of FDMA and the type that makes most sense to use (in most or certain contexts).
I think in industry, a lot of jargon starts diverging from technical definitions and begins to mean something different and sometimes much larger or specific than the original meaning. At least that's my impression.
The worst offender is "waveform" meaning everything from the waveform to the RF, hardware, coding, link control, medium access, etc. The first time somebody mentioned "waveforms" to me, I thought he was talking about squiggly things...
Recently I've been confused about what different people mean by SC-FDMA.
hahaha as an RF guy I think the worst offender is "bandwidth". I start thinking in terms of frequency bands not in terms of what the digital guys mean by bitrate.
Also, another great offender is dB of loss versus gain. 32 dB of loss is -32 dB of gain, of course, but so many documents make that simple concept way too confusing by being nonspecific.
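Here's a trivial sketch of that sign convention, just to spell it out (plain arithmetic, nothing vendor-specific):

```python
import math

def gain_db(p_out_watts, p_in_watts):
    """Gain in dB; a negative gain is a loss."""
    return 10 * math.log10(p_out_watts / p_in_watts)

p_in = 1.0
p_out = p_in * 10 ** (-32 / 10)   # a component with "32 dB of loss"

print(gain_db(p_out, p_in))        # -32.0 dB of gain, i.e. 32 dB of loss
```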
And indeed industry jargon often means different things. Sometimes I have to deal with customers in academia and have little to no idea what they actually want compared to customers in industry.
It's not that you have really written false information in the OP, but that only covers an aspect of cable internet and doesn't chime in on the different types of DSL, although a lot of the same general concept is shared.
It doesn't really explain why monthly usage caps are implemented, though; what you explained is more why companies that provide internet service don't give us the data rates that a place like Korea has. You mostly explained the limiting factor on the speeds that traverse the metropolitan area network. Putting a cap on your monthly usage doesn't stop everyone from maxing out the link speeds of the provider during the busiest hours. Maybe it inclines them to use the internet less, which could in turn cut down on bandwidth during the busy hours, but they could always just do away with the cap and throttle your speeds during those hours. Maybe I'm missing something; it's been a while since I had my data telecom courses, but that's how I've always understood it.
On October 18 2011 11:23 Alventenie wrote: I'm curious as to why you tell us why bandwidth caps are a regular thing that occurs to us, even though we pay more than other countries that have faster internet than us.
I will be insanely biased here, but whatever. South Korea pays around $30 a month for broadband (going off last years numbers at 28.80, but i rounded up a dollar just in case) vs our $45 a month on broadband. They also get faster internet than we do, and I haven't heard much about a bandwidth cap from them (although I did not do a search for this specifically, I never hear about it in the news).
Why do you think that we have to pay more, for a worse system that isn't going to support the need of internet in the future. You say caps are needed, but how are we to eliminate them if the future is going to have us getting internet to more people, thus making this a big problem.
Korean government subsidizes a lot of the costs though, which is why Korea's internet is so good and relatively cheap. Until the US government decides to start investing heavily into internet, US will always be more expensive and slower
also, Korea is much smaller than US .. so for US to invest into internet infrastructure would be incredibly expensive .. (but better use than that damned military budget ..)
There are tons of other countries without caps (or at least caps that don't matter as long as you're not downloading everything you can find)? Caps to me are just a thing of the late '90s and early '00s...
On October 18 2011 11:59 Feartheguru wrote: It is well known that it only costs ISPs 2 to 3 cents per gb of internet upload/download. Your example only works for peak hours. All other times, the only reason the limit exists is to be greedy.
Um, that cost figure might work from the CTMS on, but not from home->ctms. That is where the problem lies.
I don't really understand what you are trying to explain. Are you talking about a speed cap or a usage cap. Usage cap is totally a way to make people pay more. How is it that they make insane amounts of money even if, as you say, upgrading the system is so costly. They could surely still make money while upgrading the infrastructure little by little. Also, if they keep investing little by little, the money upgrading costs would surely go down over time.
On another point, don't you think investing in upgrading the North american network create jobs, stimulate the economy and make the internet more available to everyone? I'm not really talking only about companies (mostly governments, although companies should definitely invest too, don't let the gov't pay for everything).
The usage cap exists because the speed caps are much larger then the capacity of the infrastructure, were you to use it fully at all times. The idea is that most people just use the internet for email etc, and they want it to be fast, so you have far higher speed caps then they can handle if everyone is maxed on it simultaneously. You trade around bandwidth. The usage cap tries to ensure that the trading around bandwidth does not fail because some users are using a ton of bandwidth and clogging things up for everyone else. They get discouraged from continually doing it by imposing the cap.
I agree that it would create a ton of jobs, and fully support it. I know the plant I work at could hire a ton of skilled assemblers were that the case.
So... why should the users be billed because the ISPs are willing to offer more than they can handle? If that's not 100% greed, I don't know what is. You can say with a straight face, "Oh, it's to improve our technology, etc..." It might be partially true, but the truth behind the caps is greed.
I'm sure the majority of people would understand if you throttled their speed during peak hours, kind of like rush hour on your way to work. Everyone uses the highway; sure, they could've built a 20-lane highway, but that's way too expensive (and there are actually more issues with land), so we have to deal with it. However, I'm not being given a limit of time or distance to use the highway, I just use it as I want.
Internet should be the same: sure, you can only pass so much internets through the tubes at one time, but when it's not congested, there's no issue... It's almost as if they tried to create something out of nothing, using the lack of knowledge of the average consumer...
Thanks for the techplanations, I might have even understood some of it
So, I decided to do some research myself into the financial viability of that. The profit of the bandwidth cappers, year ending 31-12-2010, in millions USD:
Comcast: 22,687.00
Bell: 13,120.00
Rogers: 10,666.00
Telus: 5,506.00
Shaw: 3,717.58
So here's what I think: the big telcos can afford to upgrade their networks with their sizeable profits. Of course, their shareholders and the companies themselves won't want to do that when they can make that amount of money without spending anything, since they hold mono/oligopolies due to the enormous sunk costs of being an ISP. Therefore, government incentives are needed if we want these companies to continue to improve their internet services. Now, I believe internet access is an essential service in today's world, one that is becoming more and more like a utility rather than a luxury; therefore, I believe governments should incentivise/force these companies to spend more of their profits on infrastructure rather than giving it back to their shareholders.
As a heavy user, I dislike bandwidth caps, but as long as they are reasonable I understand the economics behind them, especially with the increasing importance of the internet and the rise of streaming and digital distribution channels.
Regardless, waveform and bandwidth are widely misused and misunderstood outside of, and sometimes even within, the industry, I find. I have seen more posts on TL misusing the term "bandwidth" than I have seen using it correctly.
Also, being in my third year of EE, I remember a colleague being amazed that a dB can actually be a negative number, and totally unaware that the base unit was a reference to the milliwatt. Poor guy.
Question for Invalid: Do you like your job/the work you do? If you could change your line of work, would you? And if so, what to? Just curious for personal reasons.
Question for Myrmidon: You know the finer, nitpicky details that oldies would often have to look up in a book as a quick refresher. Are you a current student? If not, what work are you involved in, and where would you like to eventually end up as your ideal job in telecom?
On October 18 2011 14:54 Grobyc wrote: It's not that you have really written false information in the OP, but that only covers an aspect of cable internet and doesn't chime in on the different types of DSL, although a lot of the same general concept is shared.
It doesn't really explain why monthly usage caps are implemented though; what you explained is more so why companies that provide internet service don't give us the data rates that a place like Korea has. You more so explained that limiting factor of the speeds that traverse the metropolitan area network. Putting a cap on your monthly usage doesn't stop everyone from maxing out the link speeds of the provider during the busiest hours. Maybe it inclines them to use the internet less, which could in turn cut down on bandwidth during the busy hours, but they could always just do away with the cap and throttle your speeds during those hours. Maybe I'm missing something; it's been a while since I had my data telecom courses, but that's how I've always understood it.
The point of the usage cap is to discourage people from overusing the network. Throttling might make more sense, but it is not fair to the majority of users who just check their email. It probably would make more sense to only meter usage during peak hours or something, but that might be hard for people to understand.
I wasn't aware anyone with DSL had caps, that is a pretty different situation.
Question for Invalid: Do you like your job/the work you do? If you could change your line of work, would you? And if so, what to? Just curious for personal reasons.
Indeed I do enjoy my job. I find it satisfying to work on component design: when you finish, you actually have a physical thing you designed.
I'd rather believe that American companies didn't invest enough early enough to avoid the "we would need to pay massive amounts to accommodate everyone's usage and increase prices accordingly" than that they're simply incompetent.
The US has a pretty shitty standing on internet infrastructure, and it's not because we don't have the technology to keep up with the rest of the world.
The telecom companies have been given monopolies by the government because (theoretically) it's more efficient to have one entity provide centralized service. Of course, they are expected and allowed to make reasonable profits in return for providing service at reasonable rates. Instead, they've charged more than necessary and invested less than is equitable. And now they want the customers to pay for the infrastructure debt while cutting the least possible out of their margins.
I'm not a network engineer and I don't know the specifics, but I'm pretty damn sure that most of Europe and parts of Asia don't have some special and regional specific resource that allow them to outperform and underprice the US telecom companies.
On October 18 2011 20:52 Horrde wrote: Regardless, waveform and bandwidth are widely misused and misunderstood out and even in the industry sometimes I find. I have seen more posts on TL misusing the term "bandwidth" than I have seen using it correctly.
Also, being in my third year of EE, I remember a colleague being amazed at how a dB can actually be a negative number and totally unaware that the base unit was a reference to the milliwatt. Poor guy.
Question for Myrmidon: You know the finer and nit pick details that oldies would often look into a book as a quick refresher. Are you a current student? If not, what work are you involved in, and where would you like to eventually be as your ideal job in telecom?
Yeah, I don't know, after a certain point of acceptance of some kind of aberrant definition, words like "bandwidth" start having multiple definitions...in the same field. That gets me too.
And I think you mean to say dBm. dB referenced to mW. I guess this kind of expression is more useful when the noise is more or less constant, and you just need to measure some kind of signal level.
I'm a second-year graduate student, doing a Ph.D. in EE (skipping masters). I do wireless communications research when I'm not wasting time on other things.
At this point I don't think much about real jobs. Most of my advisor's students end up doing research in industry, some with defense/communications contractors, others in research labs, a few at universities while teaching. I guess that'll be me one day, doing one of those things. Worst comes to worst, I figure with some small additional training, I'm way overqualified to teach high school math, and I probably wouldn't mind doing that.
On October 18 2011 11:58 InvalidID wrote: [spoiler=more off topic tech talk]
On October 18 2011 11:55 Myrmidon wrote:
On October 18 2011 11:48 InvalidID wrote:
On October 18 2011 11:43 Myrmidon wrote: Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency.
Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on in-phase component and 8-ASK on quadrature component). Though in practice, non-square QAM constellations are often used.
Here is a very simple example to anyone following, for sending 3 bits per symbol via 8-ASK:
Send 000: amplitude -7
Send 001: amplitude -5
Send 011: amplitude -3
Send 010: amplitude -1
Send 110: amplitude 1
Send 111: amplitude 3
Send 101: amplitude 5
Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111 since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error.
Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.
I am aware well of the terminology, and the basic concept of QAM and you are probably right, as far as the number of symbols per period, I said before I do RF not baseband. Everything I work with is well past modulation, I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.
I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things.
Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.
It is FDMA. I was referring to all the modulation mumbo jumbo as base-band, for RF people QAM looks like PSK etc, we take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my masters in EE, but I guess I remembered a few details about that stuff incorrectly as I don't normally worry about it a whole lot.
I guess some TDMA system might technically be more efficient if I remember that stuff right, but it would be expensive as heck to change things over.
How does QAM look like PSK, unless you're talking about very specific QAM constellations like 4-QAM being identical to 4-PSK? Those are different things.
edit: nevermind, I see what you're saying. You don't care where the symbols are placed on the constellation diagram, if you're designing other parts of the system.
Regardless if it's FDMA or TDMA, if you have the same channel, you have the same (information-theoretic) capacity. FDMA and TDMA are both useful for different things. Many systems do joint FDMA and TDMA. Switching to TDMA won't by itself improve things.
These systems use OFDMA actually, which has many practical advantages (e.g. reducing intersymbol interference, lower-complexity channel equalization), so you wouldn't want to get rid of that anyway.
Sorry, if you make a post like this, I must sidetrack you by nitpicking the details.
Exactly: I don't care when I am designing other parts of the system . I am generally given something resembling a Third order inter-mod spec a loss/gain spec, and a (RF) bandwidth spec.
On second thought, it often does matter what signaling is being done, for the hardware design. I guess this is just part of the spec that gets handed down the chain.
Ex 1) Obviously noncoherent and coherent modulation schemes have different needs.
Ex 2) A system that uses only BPSK, QPSK, 8-PSK -- an amplitude reference is not needed, so corners can be cut in places or a different design used in the hardware (but accurate phase is extremely important, of course)
Ex 3) A system that uses traditional OFDM with relatively high peak-to-average power ratio -- high linearity of the amplifier is required to keep the subcarriers actually orthogonal and thus not causing interchannel interference
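For anyone curious what that peak-to-average problem looks like, here's a rough numpy sketch measuring the PAPR of one OFDM symbol with random QPSK subcarriers (toy parameters, not tied to any particular standard):

```python
import numpy as np

# Toy PAPR measurement for one OFDM symbol with random QPSK data on every subcarrier.
rng = np.random.default_rng(0)
num_subcarriers = 256

qpsk = (2 * rng.integers(0, 2, num_subcarriers) - 1) \
       + 1j * (2 * rng.integers(0, 2, num_subcarriers) - 1)

# The OFDM time-domain symbol is the IFFT of the subcarrier values.
time_signal = np.fft.ifft(qpsk)
power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())

print(f"PAPR of this symbol: {papr_db:.1f} dB")
# Typically on the order of 8-12 dB for random data: the amplifier has to stay linear
# over that whole range, or the subcarriers stop being orthogonal and leak into each other.
```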
In research you see a lot of proposed interactions between hardware and other parts of the stack, that would theoretically improve performance. In practice there is a lot more compartmentalization of tasks, though, and one working group just delivers a product that can meet their required specs.
Do you see any interest in dynamic spectrum access, cognitive radio, etc. in industry? It used to be that network coding was all the rage, but that fad's died down quite a bit.
Indeed it does get handed down in the spec. What is needed or optimal for each stage is defined by the systems engineer; an RF components guy like me is supposed to design a black box that meets the spec. Systems engineering is cool stuff, but it would sure be a stressful job having to be knowledgeable (and liable) about all stages of the design. Obviously some general understanding of the system is important: it is impossible to optimize everything simultaneously, so knowing the system tells you what is important.
The rage in industry is risk reduction right now; it seems so many people are too worried about their jobs to go out on a limb.
I am not totally sure what happened with cognitive radio, but it seems like things didn't pan out for some reason; I remember hearing all the time a few years ago about how it was going to be the next big thing. I think it's a bit like MEMS in that respect: if you talk to engineers who started about 10-15 years ago, they will tell you how at that time everyone was in a rage about using MEMS for everything. Relays (you would be surprised how many high-end things use relay switching, for cases where switching time doesn't matter much and you expect fewer than a billion or so cycles; the loss is lower and you can have latching between power-offs) were supposedly going to be obsolete, and 10 years later we are still using relays. But maybe it just hasn't had time to escape academia; telecom standards are notoriously dated by the time they get implemented.
In terms of the biggest new thing that has really taken off, it is mixed-signal stuff. You are starting to see really complicated digital blocks mixed into MMICs, and gone are the days of a stand-alone MMIC component for pretty much any application that is not extremely demanding. And of course SDR is getting higher and higher in frequency; I mean, check out the type of DACs that are on the market: http://www.fujitsu.com/emea/services/microelectronics/dataconverters/mb86064/index.html
So why can't they implement some sort of time-based billing like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital shouldn't it be easy to keep track of?
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital shouldn't it be easy to keep track of?
Because they can get away with it since they aren't regulated and the masses don't know any better.
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital shouldn't it be easy to keep track of?
Good question. I think that would be an excellent solution. Some cable companies sort of do; they use traffic shaping to try to choke things like BitTorrent traffic during peak hours.
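For anyone curious what "traffic shaping during peak hours" amounts to mechanically, here is a toy token-bucket sketch; the peak window, rates, and the idea of swapping bucket sizes by time of day are all invented for illustration, not how any particular ISP does it:
[code]
import time

class TokenBucket:
    """Toy token-bucket shaper: sustained rate `rate` bytes/s, burst of `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True   # packet passes immediately
        return False      # packet gets queued or dropped by the shaper

def peak_hours(hour):
    # Hypothetical peak window, 5 pm to 11 pm.
    return 17 <= hour < 23

# During peak hours, hand bulk traffic (e.g. BitTorrent) a much smaller bucket.
bulk_shaper = (TokenBucket(rate=50_000, burst=100_000) if peak_hours(20)
               else TokenBucket(rate=1_000_000, burst=2_000_000))
print(bulk_shaper.allow(1500))  # one 1500-byte packet against the current budget
[/code]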
Well, I think most of the ideas are about cooperation at a higher level than hardware and RF, e.g. for a wireless mesh network, making routing decisions based on some kind of link metrics like SINR, modulation and coding used, etc., or maybe based on application data. A lot of current algorithms and protocols don't work well outside the environments they were originally intended for. Black boxes are good for simplicity, meeting budgets, etc., but in some scenarios (i.e. not those things that most people in industry are currently working on), there is more motivation to open them up, I think.
Yep, can't get rid of relays.
Like a lot of other things, dynamic spectrum access sounds like a good idea to me. The idea is to allow unlicensed users to opportunistically scan for unused licensed spectrum and communicate at that frequency band. Then if the licensed owners of the spectrum start up, the unlicensed secondary users quit and find some other spectrum. This is great from an efficiency perspective, as ISM band etc. are all super-crowded and pretty much all spectrum is already allocated--yet a lot of it goes unused most of the time in most places. But doing this in practice is difficult, to say the least. This is not to mention the bureaucratic and regulatory hurdles.
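To make that concrete, here is a minimal energy-detection sketch of what an opportunistic secondary user would do: scan some channels, judge which look idle, and use those. Everything here (the channel list, the SNR, the threshold) is a made-up toy, not a real sensing algorithm:
[code]
import numpy as np

rng = np.random.default_rng(1)

def sense_channel(occupied, n_samples=1000, snr_linear=10.0):
    """Return the measured average power on one channel: noise only, or signal plus noise."""
    noise = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) / np.sqrt(2)
    signal = np.sqrt(snr_linear) * np.exp(2j * np.pi * rng.random(n_samples)) if occupied else 0
    return np.mean(np.abs(signal + noise) ** 2)

# True occupancy of the licensed channels (unknown to the secondary user).
occupancy = [True, False, True, True, False, False]
threshold = 2.0  # energy threshold separating "busy" from "idle" (arbitrary here)

measurements = [sense_channel(o) for o in occupancy]
idle = [i for i, p in enumerate(measurements) if p < threshold]
print("measured powers:", [round(p, 2) for p in measurements])
print("channels judged idle, usable by the secondary user:", idle)
[/code]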
I'm not sure if it's dead yet, as much as waiting on the hardware (err, there's a mountain of work to be done outside of hardware as well). Being able to switch frequencies and communication modes like that requires a great deal of reconfiguration on the RF and signals side. The more hardware that can be reused for each mode, the better; SDR would be great for that. The more signals you have to work with, the better.
edit:
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital shouldn't it be easy to keep track of?
As far as I know, there are no bandwidth (where by "bandwidth" I mean throughput) caps, just total data caps? You mean to limit the data rate during peak hours?
(I know that "bandwidth cap" is the terminology that has been used...don't mean to single out anybody)
I think that throughput caps during peak hours make sense (it makes things more fair), but then even more people are going to complain about not getting their rated max speeds. Encouraging off-peak usage is a very good idea, for both network resources and electric power.
But with more queuing up packets and traffic shaping, network latency may suffer, which isn't what you want for say online gaming, or VOIP.
IMHO this gets back to breaking up the black boxes, or at least introducing a few new ones, at least with respect to some parts of the 7-layer OSI model. It would be nice to have prioritization of some time-sensitive traffic over bulk traffic on the Internet. But if everything is riding on TCP/IP, you get no such thing without some clever hacks. That's because these things weren't designed with these usage models in mind.
I don't mind if my torrent or download gets throttled like mad during peak hours, but if a game gets laggy because of congestion on the edge network...
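As a toy illustration of the "time-sensitive traffic before bulk traffic" idea, here is a generic strict-priority scheduler sketch; the traffic classes and priorities are invented, and this is not how any real CMTS or router is actually configured:
[code]
import heapq

# Lower number = higher priority. The classes here are invented for illustration.
PRIORITY = {"voip": 0, "gaming": 1, "web": 2, "bulk": 3}

class StrictPriorityScheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

sched = StrictPriorityScheduler()
for cls, pkt in [("bulk", "torrent-1"), ("voip", "call-1"), ("web", "page-1"),
                 ("bulk", "torrent-2"), ("gaming", "update-1")]:
    sched.enqueue(cls, pkt)

# Latency-sensitive packets drain first; bulk transfers wait out the congestion.
print([sched.dequeue() for _ in range(5)])
[/code]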
There was an earlier post briefly mentioning how Korea runs a pure fiber system, the reason for their cheaper/better internet.
My question remains however: Why are we (Canada and the U.S.) paying much more, for slower/limited internet than most, if not all other developed nations (and some developing nations)?
So I guess the real question is: Why didn't we start off / upgrade to pure fiber when it's faster AND cheaper than hybrid fiber when so many other countries did?
On October 19 2011 12:03 HardMacro wrote: There was an earlier post briefly mentioning how Korea runs a pure fiber system, the reason for their cheaper/better internet.
My question remains however: Why are we (Canada and the U.S.) paying much more, for slower/limited internet than most, if not all other developed nations (and some developing nations)?
So I guess the real question is: Why didn't we start off / upgrade to pure fiber when it's faster AND cheaper than hybrid fiber when so many other countries did?
We didn't in the past because it was always more cost-effective to get a little more speed by upgrading the hybrid fiber cable than it was to build a totally new network. If you build a totally new network, you have to finance the cost in subscriber fees. If Comcast can offer a service a little slower for a lot less, more people are going to choose that, and you are going to have trouble breaking even.
I think those other countries (and correct me if I am wrong, I am not well versed in how those networks came to be) did not have the cable infrastructure in the first place, making the upgrade more viable, not to mention the oft-mentioned and often erroneous population density argument.
As far as I know, there are no bandwidth (where by "bandwidth" I mean throughput) caps, just total data caps? You mean to limit the data rate during peak hours?
Yes. I guess I used "bandwidth cap" (we have those too) when a lot of the complaints were really directed at throughput caps. The throughput caps, however, exist because of the limited bandwidth.
On October 19 2011 12:29 a176 wrote: I don't understand what you are trying to correlate. How does effective capacity at given moment effect a monthly transfer limit?
The system has a lot less effective capacity at any given moment than could be offered to all users were they to use it simultaneously. This works because the vast majority of users don't use their internet a ton all the time, but when they do use it, they want the pages they view to load super fast. A small minority of users like us use the internet a ton more, to do things like view streams. If everyone were to view streams all day, we would not have enough bandwidth to accommodate them all. So usage caps were implemented to ensure that the bandwidth remains open for the 99% of users who just check email and occasionally watch a YouTube video of a cat, by trying to discourage that small minority from over-using the system.
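Some rough numbers to show the shape of the problem (every figure here is invented for illustration, not any real cable plant's engineering numbers):
[code]
# All figures below are invented for illustration, not any real cable plant's numbers.
node_capacity_mbps = 400       # usable downstream capacity shared by one neighborhood node
subscribers_per_node = 300     # households sharing that node
advertised_speed_mbps = 25     # per-subscriber advertised rate
stream_rate_mbps = 3           # one continuous HD stream

# If everyone demanded the advertised rate at once, the node would need far more capacity.
print("demand if all max out:", subscribers_per_node * advertised_speed_mbps, "Mbit/s",
      "vs node capacity:", node_capacity_mbps, "Mbit/s")

# The node only works because average utilization is low; here is roughly how many
# simultaneous continuous streams it could actually carry.
print("simultaneous streams supported:", node_capacity_mbps // stream_rate_mbps,
      "out of", subscribers_per_node, "households")
[/code]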
Your last name doesn't happen to be Finckenstein, does it?
On October 18 2011 16:27 Canucklehead wrote: I would agree with the op if the cable companies/isps weren't rolling in money already, so they have no excuse to not upgrade the infrastructure.
What figures do you have to support their apparent money-swimming activities?