|
On May 20 2010 13:12 Ian Ian Ian wrote: Forget about the subtraction, then.
Like it just seems to me that if you measure something to 4 decimal places or whatever, it should still have the same number of significant digits, regardless of whether it's 0.000# or #.000#.
No. It does not have the same number of significant digits. It has the same level of precision, however.
Imagine I have a scale that can only read to 1 decimal place. I place a penny on it, and find the reading is 4.5 grams. I have 2 significant digits, and 1 decimal place precision.
I then put 10 pennies on the scale. The reading I get is 45.4 grams. I have 3 significant digits, and 1 decimal place precision.
I then put 100 pennies on the scale. The reading I get is 454.3 grams. I have 4 significant digits, and 1 decimal place precision.
I put 1000 pennies on the scale. I get 4543.2 grams. I have 5 significant digits, and 1 decimal place precision.
Because I have a known count of the pennies, I can divide the 4543.2 grams by 1000 pennies to find the weight of the average penny: 4.5432 grams. This is using a scale with 1 decimal place of precision to find a result with 4 decimal places of precision.
Precision and significant digits are two completely different, but related, concepts.
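If it helps to see the penny arithmetic spelled out, here's a rough Python sketch (the sig_digits helper is just something written for illustration, and it ignores the trailing-zero ambiguity of whole numbers like 1000):

```python
# Scale readings with 1 decimal place of precision, for 1, 10, 100 and 1000 pennies.
readings_g = [4.5, 45.4, 454.3, 4543.2]

def sig_digits(text):
    """Count significant digits in a decimal string (leading zeros don't count)."""
    return len(text.replace("-", "").replace(".", "").lstrip("0"))

for grams in readings_g:
    reading = f"{grams:.1f}"
    print(f"{reading:>7} g -> {sig_digits(reading)} significant digits, 1 decimal place")

# Dividing by an *exact* count (1000 pennies) doesn't cost any significant digits:
average = 4543.2 / 1000
print(f"average penny: {average:.4f} g")   # 4.5432 g -- 5 sig figs, 4 decimal places
```

The count of pennies is exact, so only the scale reading limits the precision of the result.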
|
I just finished a first-year physics course and there was no talk at all about significant digits. All we did was error analysis with those cool little plus/minus signs and a fair bit of tedious calculation to find exactly what the plus/minus was. Are significant digits ever actually used, and for what?
|
On May 20 2010 13:46 Kwidowmaker wrote: I just finished a first-year physics course and there was no talk at all about significant digits. All we did was error analysis with those cool little plus/minus signs and a fair bit of tedious calculation to find exactly what the plus/minus was. Are significant digits ever actually used, and for what?
They're used in Chemistry pretty often. Physics tends to ignore sig figs and units.
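For what it's worth, the "tedious calculation to find the plus/minus" is usually just the standard propagation rules: absolute uncertainties add in quadrature for sums and differences, relative uncertainties for products and quotients. A rough Python sketch, with made-up readings and uncertainties:

```python
import math

def add(x, dx, y, dy):
    """Sum/difference: absolute uncertainties add in quadrature."""
    return x + y, math.hypot(dx, dy)

def multiply(x, dx, y, dy):
    """Product/quotient: relative uncertainties add in quadrature."""
    value = x * y
    return value, abs(value) * math.hypot(dx / x, dy / y)

total, dtotal = add(4.5, 0.05, 45.4, 0.05)
print(f"{total:.1f} +/- {dtotal:.2f} g")      # 49.9 +/- 0.07 g

area, darea = multiply(2.50, 0.01, 4.00, 0.02)
print(f"{area:.2f} +/- {darea:.2f} cm^2")     # 10.00 +/- 0.06 cm^2
```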
|
Bill307
I remember being all confused the first time we were taught significant digits in high school. Then I realized it was just jargon for something I already understood intuitively.
Intuitively, you know that 0001 and 1 are the same thing. Intuitively, you know that 0.001 kg and 1 g are the same thing. So leading zeroes don't change anything.
Intuitively, if I tell you I bought 1 kg of peanuts, you know that I probably didn't buy exactly 1 kg of peanuts: I'm just rounding it off. You also have no idea how precise I was: did I round it to the nearest 10 g and it just happened to come out to 1 kg? Did I round it to the nearest 100 g and the real weight is something like 1.043 kg? You don't know: it's ambiguous.
So when you're working in the field of science, where precision is very important, you know that there has to be a system of telling people how precise your measurements are. One such system is scientific notation.
See, normally (not talking about sig figs or scientific notation here...) there's no reason to add trailing zeroes after the decimal point. E.g. if I write 0.400, the trailing zeroes normally serve no purpose: I might as well write 0.4 and it'd be the same thing. So science says, let's use those trailing zeroes for something: let's have them indicate that our measurement is more precise than just 0.4.
Say I've measured out 1.000 kg of calcium chloride with precision to the gram, but we're writing it in grams to be consistent with our other figures, so we write 1000 g. How will other scientists know that those zeroes are significant? How do they know we didn't just round it to the nearest 100 grams or something? That's where scientific notation comes in. If we write it as 1.000 x 10^3 grams, there is no question that the trailing zeroes must be significant, otherwise we would've simply written 1 x 10^3 instead.
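As a quick illustration of that last point, Python's standard scientific-notation formatting makes the same distinction (the 1000 g figure is the one from above):

```python
mass_g = 1000.0   # measured to the gram, i.e. 4 significant digits

# Written plainly, the trailing zeros are ambiguous:
print(f"{mass_g:.0f} g")    # "1000 g"      -- 1, 2, 3 or 4 sig figs? Can't tell.

# Scientific notation makes the precision explicit.  ".3e" keeps 3 digits after
# the leading one, i.e. 4 significant digits total:
print(f"{mass_g:.3e} g")    # "1.000e+03 g" == 1.000 x 10^3 g, clearly 4 sig figs

# Rounded to the nearest 100 g, we'd honestly write only 2 of them:
print(f"{mass_g:.1e} g")    # "1.0e+03 g"
```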
And that's all there is to significant digits. It's all very practical, designed to allow scientists to communicate with each other more clearly and with less confusion. Schools just fail to teach it from a practical standpoint, in my experience. =P
|
On May 20 2010 12:37 Ian Ian Ian wrote: Basically I need someone to convince me as to why leading zeroes are not counted as significant digits.
I've been listening to this bullshit in school for forever. And I've never had someone that has really explained it to my understanding..
As I see it, significant digits are a way of showing how much accuracy you took in your measurements. If I weigh something and get, let's say, 10.000405 grams, it is considered to have 8 significant digits. Let's say I weigh the same thing, but it loses ten grams and is now 0.000405 grams. I used the same tool to obtain this result and am measuring to the same degree of accuracy. But now I only have 3 significant digits. This does not make sense to me whatsoever.
Well, if you have 1001 grams and take away 1000, you still get 1 gram. You subtracted two quantities with 4 significant digits and got one with 1 significant digit. This problem has nothing to do with leading zeros.
If you want to you can think of significant figures as relative precision. Measuring your mass to kilograms is less precise than measuring the mass of the Moon to kilograms, even though both measurements are in kilograms.
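A rough sketch of both points in Python (the masses and the +/- values are made up for illustration; the Moon's mass is roughly 7.34 x 10^22 kg):

```python
# Subtracting two 4-significant-digit quantities can leave only 1 significant digit:
a, b = 1001.0, 1000.0          # say each is known to +/- 0.5 g
print(a - b)                   # 1.0 g -- the absolute uncertainty didn't shrink

# Significant figures as *relative* precision: the same +/- 1 kg means very
# different things depending on what you're weighing.
for label, mass_kg in [("a person", 80.0), ("the Moon", 7.34e22)]:
    print(f"{label}: +/- 1 kg is a relative uncertainty of ~{1.0 / mass_kg:.0e}")
```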
|
Your scale probably has a variance of 0.01 mg or so. When you measure something relatively large compared to the variance of the scale, you get a lot of significant figures, because the scale is pretty sure of those last digits in 10.000405 g. When you measure something small, and you're hoping to get 0.0004050182 g, the scale really has no idea whether those last digits of 0.0004050182 are even close to correct. You may be able to find a scale with a small enough variance to measure that, but its max capacity will probably be 10 mg or so.
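In other words, a fixed absolute uncertainty on the scale turns into wildly different relative uncertainty depending on the size of the reading. A quick Python sketch, assuming the 0.01 mg figure above:

```python
scale_sigma_g = 1e-5   # +/- 0.01 mg, the assumed read-out uncertainty

for reading_g in (10.000405, 0.000405):
    relative = scale_sigma_g / reading_g
    print(f"{reading_g} g -> relative uncertainty ~ {relative:.1e}")

# 10.000405 g -> ~1.0e-06  (those trailing digits carry real information)
# 0.000405  g -> ~2.5e-02  (the last digits are mostly noise)
```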
|
On May 20 2010 14:09 Bill307 wrote: I remember being all confused the first time we were taught significant digits in high school. Then I realized it was just jargon for something I already understood intuitively. [...]
Couldn't have said it better.
On a side note, it's nice to know what the expression is in English. (For leading zeroes, that is.)
|