|
Regardless, "waveform" and "bandwidth" are widely misused and misunderstood, even within the industry sometimes, I find. I have seen more posts on TL misusing the term "bandwidth" than I have seen using it correctly.
Also, being in my third year of EE, I remember a colleague being amazed that a dB can actually be a negative number, totally unaware that the base unit was a reference to the milliwatt. Poor guy.
Question for Invalid: Do you like your job/the work you do? If you could change your line of work, would you? And if so, what to? Just curious for personal reasons.
Question for Myrmidon: You know the finer, nitpicky details that old-timers would often have to look up in a book as a quick refresher. Are you a current student? If not, what work are you involved in, and where would you eventually like to end up as your ideal job in telecom?
|
On October 18 2011 14:54 Grobyc wrote: It's not that you have really written false information in the OP, but that it only covers an aspect of cable internet and doesn't chime in on the different types of DSL, although a lot of the same general concepts are shared.
It doesn't really explain why monthly usage caps are implemented, though; what you explained is more about why companies that provide internet service don't give us the data rates that a place like Korea has. You mostly explained the limiting factors on the speeds that traverse the metropolitan area network. Putting a cap on your monthly usage doesn't stop everyone from maxing out the link speeds of the provider during the busiest hours. Maybe it inclines them to use the internet less, which could in turn cut down on traffic during the busy hours, but they could always just do away with the cap and throttle your speeds during those hours. Maybe I'm missing something; it's been a while since I had my data telecom courses, but that's how I've always understood it.
The point of the usage cap is to discourage people from overusing the network. Throttling might make more sense, but it is not fair to the majority of users who just check their email. It probably would make more sense to only meter usage during peak hours or something, but that might be hard for people to understand.
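To make that peak-hour metering idea concrete, here's a minimal sketch in Python (the peak window and the billing logic are hypothetical, purely for illustration):

```python
from datetime import datetime

PEAK_START, PEAK_END = 18, 23  # assumed peak window: 6 PM to 11 PM local time

def metered_bytes(transfer_bytes, timestamp):
    """Count a transfer toward the monthly cap only if it happens during peak hours."""
    if PEAK_START <= timestamp.hour < PEAK_END:
        return transfer_bytes
    return 0  # off-peak traffic doesn't count against the cap

# Example: a 500 MB download at 8 PM counts; the same download at 3 AM doesn't.
print(metered_bytes(500_000_000, datetime(2011, 10, 19, 20, 0)))  # 500000000
print(metered_bytes(500_000_000, datetime(2011, 10, 19, 3, 0)))   # 0
```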
I wasn't aware anyone with DSL had caps; that is a pretty different situation.
Question for Invalid: Do you like your job/the work you do? If you could change your line of work, would you? And if so, what to? Just curious for personal reasons.
Indeed I do enjoy my job. I find it satisfying to work on component design: when you finish, you actually have a physical thing you designed.
|
I'd rather believe that American companies didn't invest enough, early enough, to avoid the "we would need to pay massive amounts to accommodate everyone's usage and increase prices accordingly" situation than that they're simply incompetent.
The US has a pretty shitty standing on internet infrastructure, and it's not because we don't have the technology to keep up with the rest of the world.
The telecom companies have been given monopolies by the government because (theoretically) it's more efficient to have one entity provide centralized service. Of course, they are expected and allowed to make reasonable profits in return for providing service at reasonable rates. Instead, they've charged more than necessary and invested less than is equitable. And now they want the customers to pay for the infrastructure debt while cutting the least possible out of their margins.
I'm not a network engineer and I don't know the specifics, but I'm pretty damn sure that most of Europe and parts of Asia don't have some special, region-specific resource that allows them to outperform and underprice the US telecom companies.
|
On October 18 2011 20:52 Horrde wrote: Regardless, "waveform" and "bandwidth" are widely misused and misunderstood, even within the industry sometimes, I find. I have seen more posts on TL misusing the term "bandwidth" than I have seen using it correctly.
Also, being in my third year of EE, I remember a colleague being amazed that a dB can actually be a negative number, totally unaware that the base unit was a reference to the milliwatt. Poor guy.
Question for Myrmidon: You know the finer, nitpicky details that old-timers would often have to look up in a book as a quick refresher. Are you a current student? If not, what work are you involved in, and where would you eventually like to end up as your ideal job in telecom?
Yeah, I don't know; after a certain point of acceptance of some kind of aberrant definition, words like "bandwidth" start having multiple definitions... in the same field. That gets me too.
And I think you mean to say dBm: dB referenced to 1 mW. I guess this kind of expression is more useful when the noise is more or less constant and you just need to measure some kind of signal level.
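For anyone following along, the conversion is just dBm = 10·log10(P / 1 mW), so any power below a milliwatt comes out negative. A quick sketch:

```python
import math

def mw_to_dbm(p_mw):
    """Power in milliwatts -> dBm (dB referenced to 1 mW)."""
    return 10 * math.log10(p_mw)

def dbm_to_mw(p_dbm):
    """dBm -> power in milliwatts."""
    return 10 ** (p_dbm / 10)

print(mw_to_dbm(1))      #  0.0 dBm: exactly 1 mW
print(mw_to_dbm(100))    # 20.0 dBm: 100 mW
print(mw_to_dbm(0.001))  # -30.0 dBm: 1 microwatt, negative and perfectly valid
```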
I'm a second-year graduate student, doing a Ph.D. in EE (skipping the master's). I do wireless communications research when I'm not wasting time on other things.
At this point I don't think much about real jobs. Most of my advisor's students end up doing research in industry, some with defense/communications contractors, others in research labs, a few at universities while teaching. I guess that'll be me one day, doing one of those things. Worst comes to worst, I figure that with a small amount of additional training I'm way overqualified to teach high school math, and I probably wouldn't mind doing that.
|
edit: oops, double post, but this is an entirely different topic
On October 18 2011 12:16 InvalidID wrote:
On October 18 2011 12:07 Myrmidon wrote:
On October 18 2011 11:58 InvalidID wrote:
On October 18 2011 11:55 Myrmidon wrote:
On October 18 2011 11:48 InvalidID wrote:
On October 18 2011 11:43 Myrmidon wrote:
Uh, there is NOT one modulation symbol per cycle of the carrier (sine wave). The modulation symbol rate is usually much lower than the carrier frequency. Also, you can get more than 1 bit per modulation symbol when doing just amplitude modulation or frequency modulation. Normal square 64-QAM is just 64-QASK (8-ASK on the in-phase component and 8-ASK on the quadrature component), though in practice, non-square QAM constellations are often used. QAM = quadrature amplitude modulation; QASK = quadrature amplitude-shift keying. Here is a very simple example for anyone following, for sending 3 bits per symbol via Gray-coded 8-ASK:
- Send 000: amplitude -7
- Send 001: amplitude -5
- Send 011: amplitude -3
- Send 010: amplitude -1
- Send 110: amplitude 1
- Send 111: amplitude 3
- Send 101: amplitude 5
- Send 100: amplitude 7
If you are the receiver and detect an amplitude of 2.1, assume that 111 was sent (most likely it was 111, since that is the closest value on the list). If 111 was sent and you detect an amplitude of 1.8 and assume 110 was sent, then the last bit was in error. Each time you want to send a new set of 3 bits, send a different amplitude value corresponding to the above mapping.

I am well aware of the terminology and the basic concept of QAM, and you are probably right as far as the number of symbols per period; I said before, I do RF, not baseband. Everything I work with is well past modulation; I was just repeating what I remembered from classes as far as the modulation goes. I will update it to be accurate.

I'm much more familiar with wireless communications systems, though I focus less on the RF hardware, filtering, DSP, etc. side of things. Anyway, I thought cable telecommunications systems did not use baseband, for better propagation characteristics? Also, if there's some kind of frequency-division multiple access as you're describing (and there is), that implies modulating information onto different carrier frequencies, not baseband.

It is FDMA. I was referring to all the modulation mumbo jumbo as baseband; for RF people, QAM looks like PSK, etc. We take a bandwidth spec and live with it. I took a few classes on wireless/wired communications systems for my master's in EE, but I guess I remembered a few details about that stuff incorrectly, as I don't normally worry about it a whole lot. I guess some TDMA system might technically be more efficient, if I remember that stuff right, but it would be expensive as heck to change things over.

How does QAM look like PSK, unless you're talking about very specific QAM constellations like 4-QAM being identical to 4-PSK? Those are different things. edit: never mind, I see what you're saying. You don't care where the symbols are placed on the constellation diagram if you're designing other parts of the system. Regardless of whether it's FDMA or TDMA, if you have the same channel, you have the same (information-theoretic) capacity. FDMA and TDMA are both useful for different things; many systems do joint FDMA and TDMA. Switching to TDMA won't by itself improve things. These systems actually use OFDMA, which has many practical advantages (e.g., reducing intersymbol interference, lower-complexity channel equalization), so you wouldn't want to get rid of that anyway. Sorry, if you make a post like this, I must sidetrack you by nitpicking the details.

Exactly: I don't care when I am designing other parts of the system. I am generally given something resembling a third-order intermod spec, a loss/gain spec, and an (RF) bandwidth spec.
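For anyone who wants to play with the 8-ASK example quoted above, here's a minimal sketch of the Gray-coded mapping and the nearest-amplitude decision rule:

```python
# Gray-coded 8-ASK: adjacent amplitudes differ in exactly one bit,
# so the most likely detection error corrupts only a single bit.
MAPPING = {
    '000': -7, '001': -5, '011': -3, '010': -1,
    '110':  1, '111':  3, '101':  5, '100':  7,
}
DEMAPPING = {amp: bits for bits, amp in MAPPING.items()}

def modulate(bits):
    """Three bits in, one amplitude out."""
    return MAPPING[bits]

def detect(amplitude):
    """Decide on the constellation point nearest to the received amplitude."""
    nearest = min(DEMAPPING, key=lambda a: abs(a - amplitude))
    return DEMAPPING[nearest]

print(modulate('111'))  # 3
print(detect(2.1))      # '111' (2.1 is closest to +3)
print(detect(1.8))      # '110' (1.8 is closest to +1: a single-bit error if '111' was sent)
```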
On second thought, it often does matter what signaling is being done, for the hardware design. I guess this is just part of the spec that gets handed down the chain.
Ex 1) Obviously noncoherent and coherent modulation schemes have different needs.
Ex 2) A system that uses only BPSK, QPSK, or 8-PSK: no amplitude reference is needed, so corners can be cut in places or a different hardware design used (but accurate phase is extremely important, of course)
Ex 3) A system that uses traditional OFDM with a relatively high peak-to-average power ratio: high amplifier linearity is required to keep the subcarriers actually orthogonal and thus not causing interchannel interference (quick numerical sketch below)
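Here's that quick numerical sketch for Ex 3 (64 subcarriers with random QPSK, both chosen arbitrarily for illustration; exact numbers vary per run):

```python
import numpy as np

N = 64  # number of subcarriers (chosen arbitrarily for illustration)
rng = np.random.default_rng(0)

# Random QPSK symbols on each subcarrier; the IFFT gives the time-domain OFDM symbol.
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
time_signal = np.fft.ifft(qpsk) * np.sqrt(N)  # scaled so average power is ~1

power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
# Typically several dB: the amplifier must stay linear over this whole range,
# or the subcarriers stop being orthogonal and leak into each other.
print(f"PAPR: {papr_db:.1f} dB")
```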
In research you see a lot of proposed interactions between hardware and other parts of the stack that would theoretically improve performance. In practice there is a lot more compartmentalization of tasks, though, and one working group just delivers a product that can meet their required specs.
Do you see any interest in dynamic spectrum access, cognitive radio, etc. in industry? It used to be that network coding was all the rage, but that fad's died down quite a bit.
|
On October 19 2011 09:46 Myrmidon wrote: [...] I guess this is just part of the spec that gets handed down the chain. [...] Do you see any interest in dynamic spectrum access, cognitive radio, etc. in industry? It used to be that network coding was all the rage, but that fad's died down quite a bit.
Indeed, it does get handed down in the spec. What is needed or optimal for each stage is defined by the systems engineer. An RF components guy like me is supposed to design a black box that meets the spec. Systems engineering is cool stuff, but it would sure be a stressful job, having to be knowledgeable (and liable) about all stages of the design. Obviously some general understanding of the system is important: it is impossible to optimize everything simultaneously, so knowing the system lets you know what is important.
The rage in industry right now is risk reduction; it seems so many people are too worried about their jobs to go out on a limb.
I am not totally sure what happened with cognitive radio, but it seems like things didn't pan out for some reason. I remember hearing all the time, a few years ago, about how it was going to be the next big thing. I think it's a bit like MEMS in that respect: if you talk to engineers who started about 10-15 years ago, they will tell you how everyone back then was raging about using MEMS for everything. Relays were supposedly going to be obsolete, and 10 years later we are still using relays. (You would be surprised how many high-end things use relay switching; for things where switching time doesn't matter much and you have fewer than a billion or so expected cycles, the loss is lower and you can have latching between power-offs.) But maybe it just hasn't had time to escape academia; telecom standards are notoriously dated by the time they get implemented.
In terms of the biggest new thing that has really taken off, it is mixed-signal stuff. You are starting to see really complicated digital blocks mixed into MMICs, and gone are the days of a standalone MMIC component for pretty much any application that is not extremely demanding. And of course SDR is reaching higher and higher frequencies; I mean, check out the kinds of DACs that are on the market: http://www.fujitsu.com/emea/services/microelectronics/dataconverters/mb86064/index.html
|
So why can't they implement some sort of time-based billing, like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital, shouldn't it be easy to keep track of?
|
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing, like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital, shouldn't it be easy to keep track of?
Because they can get away with it since they aren't regulated and the masses don't know any better.
|
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing, like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital, shouldn't it be easy to keep track of?
Good question. I think that would be an excellent solution. Some cable companies sort of do this: they use traffic shaping to try to choke things like BitTorrent traffic during peak hours.
|
Well, I think most of the ideas are about cooperation at a higher level than hardware and RF, e.g., for a wireless mesh network, making routing decisions based on link metrics like SINR, the modulation and coding used, etc., or maybe based on application data. A lot of current algorithms and protocols don't work well outside the environments they were originally intended for. Black boxes are good for simplicity, meeting budgets, etc., but in some scenarios (i.e., not the things most people in industry are currently working on), there is more motivation to open them up, I think.
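As a toy illustration of metric-based routing (the topology and costs below are made up; real protocols use measured metrics like ETX or airtime), here's a shortest-path search where each link's cost reflects its quality:

```python
import heapq

def best_route(links, src, dst):
    """Dijkstra over per-link costs (e.g., airtime that scales with 1/log2(1 + SINR))."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry
        for nbr, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Hypothetical mesh: worse links (lower SINR) get higher costs.
links = {
    'A': {'B': 0.4, 'C': 1.2},
    'B': {'A': 0.4, 'C': 0.3, 'D': 1.5},
    'C': {'A': 1.2, 'B': 0.3, 'D': 0.5},
    'D': {},
}
print(best_route(links, 'A', 'D'))  # path A-B-C-D at total cost ~1.2
```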
Yep, can't get rid of relays.
Like a lot of other things, dynamic spectrum access sounds like a good idea to me. The idea is to allow unlicensed users to opportunistically scan for unused licensed spectrum and communicate in that frequency band. Then if the licensed owners of the spectrum start up, the unlicensed secondary users quit and find some other spectrum. This is great from an efficiency perspective, as the ISM bands etc. are all super-crowded and pretty much all spectrum is already allocated, yet a lot of it goes unused most of the time in most places. But doing this in practice is difficult, to say the least. And that's not to mention the bureaucratic and regulatory hurdles.
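The simplest version of that "scan for unused spectrum" step is an energy detector; a minimal sketch (the 3 dB threshold is arbitrary here; real detectors set it from a target false-alarm probability):

```python
import numpy as np

rng = np.random.default_rng(1)

def band_occupied(samples, noise_power, threshold_db=3.0):
    """Declare the band busy if measured energy exceeds the noise floor by threshold_db."""
    measured = np.mean(np.abs(samples) ** 2)
    return 10 * np.log10(measured / noise_power) > threshold_db

noise = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)  # unit noise power
signal = 2.0 * np.exp(2j * np.pi * 0.1 * np.arange(1000))                  # a strong carrier

print(band_occupied(noise, 1.0))           # False: only noise, band looks free
print(band_occupied(noise + signal, 1.0))  # True: primary user present, so back off
```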
I'm not sure if it's dead yet, so much as waiting on the hardware (err, there's a mountain of work to be done outside of hardware as well). Being able to switch frequencies and communications modes like that requires a great deal of reconfiguration on the RF and signals side. The more hardware that can be reused for each mode, the better; SDR would be great for that. The more of the signals side you can work with in software, the better.
edit:
On October 19 2011 11:24 Redmark wrote: So why can't they implement some sort of time-based billing, like they do in Ontario with electricity? Have bandwidth caps only for peak usage periods? Since it's all digital, shouldn't it be easy to keep track of? As far as I know, there are no bandwidth caps (where by "bandwidth" I mean throughput), just total data caps? You mean to limit the data rate during peak hours?
(I know that "bandwidth cap" is the terminology that has been used... I don't mean to single out anybody.)
I think that throughput caps during peak hours make sense (it's more fair), but then even more people are going to complain about not getting their rated max speeds. Encouraging off-peak usage is a very good idea, for both network resources and electric power.
But with more queuing of packets and traffic shaping, network latency may suffer, which isn't what you want for, say, online gaming or VoIP.
IMHO this gets back to breaking up the black boxes, or at least introducing a few new ones, at least with respect to some parts of the 7-layer OSI model. It would be nice to have prioritization of some time-sensitive traffic over bulk traffic on the Internet. But if everything is riding on TCP/IP, you get no such thing without some clever hacks, because these things weren't designed with these usage models in mind.
I don't mind if my torrent or download gets throttled like mad during peak hours, but if a game gets laggy because of congestion on the edge network...
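Conceptually, something like this is all I'm asking for: a strict-priority scheduler sketch (the traffic classes are hypothetical, not anything a real ISP's gear exposes; real QoS schemes like DiffServ are more involved):

```python
from collections import deque

class PriorityShaper:
    """Strict-priority scheduler: game/VoIP packets always go out before bulk ones."""
    def __init__(self):
        self.realtime = deque()  # latency-sensitive: game traffic, VoIP
        self.bulk = deque()      # throughput-hungry: torrents, big downloads

    def enqueue(self, packet, latency_sensitive):
        (self.realtime if latency_sensitive else self.bulk).append(packet)

    def dequeue(self):
        if self.realtime:
            return self.realtime.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

shaper = PriorityShaper()
shaper.enqueue('torrent-chunk-1', latency_sensitive=False)
shaper.enqueue('game-update', latency_sensitive=True)
print(shaper.dequeue())  # 'game-update' leaves first, even though it arrived second
```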
|
There was an earlier post briefly mentioning how Korea runs a pure fiber system, the reason for their cheaper/better internet.
My question remains, however: why are we (Canada and the U.S.) paying much more for slower/limited internet than most, if not all, other developed nations (and some developing nations)?
So I guess the real question is: why didn't we start off with, or upgrade to, pure fiber when it's faster AND cheaper than hybrid fiber and so many other countries did?
|
On October 19 2011 12:03 HardMacro wrote: There was an earlier post briefly mentioning how Korea runs a pure fiber system, the reason for their cheaper/better internet.
My question remains, however: why are we (Canada and the U.S.) paying much more for slower/limited internet than most, if not all, other developed nations (and some developing nations)?
So I guess the real question is: why didn't we start off with, or upgrade to, pure fiber when it's faster AND cheaper than hybrid fiber and so many other countries did?
We didn't in the past because it was always more cost-effective to get a little more speed by upgrading the hybrid fiber cable than it was to build a totally new network. If you build a totally new network, you have to finance the cost through subscriber fees. If Comcast can offer a slightly slower service for a lot less, more people are going to choose that, and you are going to have trouble breaking even.
I think those other countries (and correct me if I am wrong; I am not well versed in how those networks came to be) did not have the cable infrastructure in the first place, making the upgrade more viable, not to mention the oft-mentioned and often erroneous population density argument.
|
As far as I know, there are no bandwidth caps (where by "bandwidth" I mean throughput), just total data caps? You mean to limit the data rate during peak hours?
Yes. I guess I said "bandwidth cap" (we have those too) when a lot of the complaints were really directed at throughput caps. The throughput caps, however, exist because of the limited bandwidth.
|
I don't understand what you are trying to correlate. How does effective capacity at a given moment affect a monthly transfer limit?
|
On October 19 2011 12:29 a176 wrote: I don't understand what you are trying to correlate. How does effective capacity at a given moment affect a monthly transfer limit?
The system has a lot less effective capacity at any given moment than could be offered to all users were they to use it simultaneously. This works because the vast majority of users don't use their internet a ton all the time; but when they do use it, they want the pages they view to load super fast. A small minority of users like us use the internet a ton more, to do things like watch streams. If everyone were to watch streams all day, we would not have enough bandwidth to accommodate them all. So usage caps were implemented to ensure that the bandwidth remains open for the 99% of users who just check email and occasionally watch a YouTube video of a cat, by trying to discourage that small minority from over-using the system.
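To put made-up numbers on that (all hypothetical, just to show the arithmetic of oversubscription):

```python
users = 500                 # subscribers sharing one node (hypothetical)
rate_sold_mbps = 100        # advertised speed per subscriber
node_capacity_mbps = 1000   # actual shared capacity of the node

oversubscription = users * rate_sold_mbps / node_capacity_mbps
print(f"{oversubscription:.0f}:1 oversubscription")  # 50:1

# Fine if only ~1% of users are pulling their full rate at any instant:
peak_demand = 0.01 * users * rate_sold_mbps    # 500 Mbps -> fits in the node
# But if 30% of them stream all day, demand blows past capacity by 15x:
stream_demand = 0.30 * users * rate_sold_mbps  # 15000 Mbps
print(peak_demand, stream_demand)
```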
|
On October 19 2011 12:35 InvalidID wrote: [...] So usage caps were implemented to ensure that the bandwidth remains open for the 99% of users who just check email and occasionally watch a YouTube video of a cat, by trying to discourage that small minority from over-using the system.
Your last name doesn't happen to be Finckenstein, does it?
|
On October 18 2011 16:27 Canucklehead wrote: I would agree with the OP if the cable companies/ISPs weren't rolling in money already, so they have no excuse not to upgrade the infrastructure.
What figures do you have to support their apparent money-swimming activities?
|