|
Thread Rules

1. This is not a "do my homework for me" thread. If you have specific questions, ask, but don't post an assignment or homework problem and expect an exact solution.
2. No recruiting for your cockamamie projects (you won't replace Facebook with 3 dudes you found on the internet and $20).
3. If you can't articulate why a language is bad, don't start slinging shit about it. Just remember that nothing is worse than making CSS IE6 compatible.
4. Use [code] tags to format code blocks.
On October 31 2015 18:10 supereddie wrote:
On October 31 2015 03:54 Manit0u wrote:
On October 29 2015 04:39 supereddie wrote:
Even if you round, the binary representation of fractions is still a problem. To avoid other problems, use a delta in comparisons.

double delta = 0.00001; // max deviation/fault tolerance
Math.Abs(0.3 - (0.1 + 0.2)) < delta

All cool and dandy, but what do you want to compare here? You have two doubles and you want to see if the result of an operation performed on them is within delta. How do you determine what you want to compare the result against? It's nice when talking about 0.1, 0.2 and 0.3 for the sake of examples, but what if you don't know any of them beforehand?

The delta should be an application or business rule. The others can just be variables or something. Basically, anywhere you do an equals comparison between floats/doubles you can replace it with this. Useless example:

// Somewhere else
internal static readonly double FaultTolerance = 0.001;

private bool IsOneThird(double someNumber)
{
    // any someNumber between 0.329 and 0.331 is counted as 1/3
    return (Math.Abs(0.33 - someNumber) < Constants.FaultTolerance);
}
This still doesn't answer my question. Where did 0.33 come from in this example?
Let's say someNumber is 9268.27. How do you know which method to use to see if it's within fault tolerance?
|
On November 02 2015 16:30 Manit0u wrote:
[...] This still doesn't answer my question. Where did 0.33 come from in this example? Let's say someNumber is 9268.27. How do you know which method to use to see if it's within fault tolerance?
What are you asking?
All arithmetic operations follow whatever IEEE spec the language/compiler supports; that's where your guarantees about the precision of operations come from.

When you have floats/doubles you want to compare, you can only compare them up to the precision of that IEEE spec, because multiple operations lose precision - but you already know that.

Say you have a budget of $100.00 (or whatever a user enters) and you need to check whether you reached that budget, and you used floats for some poor reason. You can have three items costing 10.0 + 10.0 + 80.0, which equals 100.00. But because you represented them in floats, the sum is actually something like 100.0000001, and that's higher than your budget, which was actually stored as something like 100.0000000001, so your check fails when it obviously should pass. So to fix that, you don't check 10.0 + 10.0 + 80.0 <= 100.00, you check Math.Abs(100.00 - (10.0 + 10.0 + 80.0)) < tolerance.
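To make the budget example concrete, here's a minimal sketch in JavaScript (where every number is an IEEE 754 double; the withinBudget name and the 1e-9 tolerance are just illustrative choices, not anything from the posts above):

```javascript
// Check whether a list of item prices fits a budget, allowing the
// total to overshoot by at most `tolerance` (representation error).
function withinBudget(prices, budget, tolerance) {
  var total = prices.reduce(function (acc, p) { return acc + p; }, 0);
  return total - budget <= tolerance;
}

// 0.1 + 0.2 is not exactly 0.3 in binary floating point,
// so the naive check wrongly reports the budget as exceeded:
var naive = (0.1 + 0.2) <= 0.3;                     // false
var tolerant = withinBudget([0.1, 0.2], 0.3, 1e-9); // true
```

A genuine overrun still fails the check, since it exceeds the budget by far more than the tolerance.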
|
On November 02 2015 17:08 Blisse wrote:
[...] So to fix that you don't check 10.0 + 10.0 + 80.0 <= 100.00 but Math.Abs(100.00 - (10.0 + 10.0 + 80.0)) < tolerance.
I'm asking how do you know what to compare your value against?
Unless you set some pre-defined constants you really can't do it, since you'd have to calculate the comparison value using floats, which could also be imprecise. That's why you can't really use delta comparison in a system where you don't know how many or how large your input values will be and you don't have pre-set data to compare against. Unless I'm mistaken, but that's why I'm asking the question.
|
I'm not sure I fully understand the point of it. This allows you to develop web layouts without having to touch JS/CSS/HTML? You can just write it in, what, C#, and it compiles to a web suite version?
On November 03 2015 04:24 Manit0u wrote:
[...] I'm asking how do you know what to compare your value against? Unless you set some pre-defined constants you really can't do it, since you'd have to calculate it using floats, which could also be imprecise. Unless I'm mistaken, but that's why I'm asking the question.
Right, which is why earlier he mentioned that you need to define the constants ahead of time for your delta. They may be business constants, etc. If you're using money, it could be 2 digits(i.e. 20.00 and 20.01 are different, but 20.00 and 20.0049 are the same, so your delta would be 0.005). If you don't know anything about your system, then you can't really use a delta effectively, but you can just make one up that sounds like it might work.
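A hedged sketch of that money rule in JavaScript (the sameAmount name and the half-cent delta just illustrate the convention described above):

```javascript
// Two money amounts count as equal if they agree to the cent,
// i.e. differ by less than half a cent.
var MONEY_DELTA = 0.005;

function sameAmount(a, b) {
  return Math.abs(a - b) < MONEY_DELTA;
}

sameAmount(20.00, 20.0049); // true  - same cent
sameAmount(20.00, 20.01);   // false - a cent apart
```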
|
On November 03 2015 04:24 Manit0u wrote:
[...] I'm asking how do you know what to compare your value against? Unless you set some pre-defined constants you really can't do it, since you'd have to calculate it using floats, which could also be imprecise. Unless I'm mistaken, but that's why I'm asking the question.
You have to have some sort of predetermined value. If you sum any number of values and want to verify the answer, you have to know something beforehand, whether you are working with ints or doubles. Typically you'd have some predetermined total that you'd want to compare against.
|
Not posted in this thread before, but I'm doing some JavaScript, I'm new to it, and I am failing hard. I feel like an idiot because I can't get it to work. I'm trying to do a few things with functions on a webpage. What I'm stuck on at the moment is creating a function that will get the day's date. I can figure out how to get the date, but when I try to make it a function it breaks :/ I know I'm doing something stupid, but I can't figure out what.
I have this which returns the date correctly:
var currentdate = new Date();
var dd = currentdate.getDate();
var mm = currentdate.getMonth() + 1; // getMonth() is zero-based
var yyyy = currentdate.getFullYear();

if (dd < 10) { dd = '0' + dd; }
if (mm < 10) { mm = '0' + mm; }

today = mm + '/' + dd + '/' + yyyy;
alert(today);
But the requirement for the project I'm doing says it needs to be in a function today(). Why I don't know because it would work without that but whatever. Right now I am trying this which doesn't work:
function today() {
    var currentdate = new Date();
    var dd = currentdate.getDate();
    var mm = currentdate.getMonth() + 1;
    var yyyy = currentdate.getFullYear();

    if (dd < 10) { dd = '0' + dd; }
    if (mm < 10) { mm = '0' + mm; }

    return mm + '/' + dd + '/' + yyyy;
}
I've tried a bunch of other things as well but to no avail.
Any help would be appreciated, or even pointing me to a resource that will clear things up so I am less clueless.
|
On November 03 2015 04:24 Manit0u wrote:
[...] I'm asking how do you know what to compare your value against? Unless you set some pre-defined constants you really can't do it, since you'd have to calculate it using floats, which could also be imprecise. Unless I'm mistaken, but that's why I'm asking the question.
Floating point representation is not "erroneous". It's imprecise, which is not the same as erroneous or inaccurate. For your question, see this wiki page: https://en.wikipedia.org/wiki/Machine_epsilon. Rounding modes are also important for understanding where the imprecision comes from, as different floating point implementations may round differently, which is why floats come out slightly off from what you expect. See https://en.wikipedia.org/wiki/Floating_point#Rounding_modes.

For example, many implementations of the C software floating point libraries first compare the bit fields for the exponent, then compare the significands using the machine epsilon. This is a sufficient way to compare floats without needing to know beforehand what their values might be.
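A sketch of that idea in JavaScript, which exposes the double's machine epsilon as Number.EPSILON: scaling the epsilon by the operands' magnitude gives a comparison that needs no predetermined constants (the safety factor of 4 is an assumption of mine; pick it for your own error budget):

```javascript
// Relative comparison: equal if the difference is within a few
// machine epsilons at the operands' own magnitude.
function nearlyEqual(a, b) {
  var scale = Math.max(Math.abs(a), Math.abs(b), 1.0);
  return Math.abs(a - b) <= Number.EPSILON * scale * 4;
}

nearlyEqual(0.1 + 0.2, 0.3);         // true, even though 0.1 + 0.2 !== 0.3
nearlyEqual(1000000000, 1000000001); // false - a genuine difference
```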
|
On November 03 2015 04:24 Manit0u wrote:
[...] I'm asking how do you know what to compare your value against? Unless you set some pre-defined constants you really can't do it, since you'd have to calculate it using floats, which could also be imprecise. Unless I'm mistaken, but that's why I'm asking the question.
No, that's wrong.
If you have a spreadsheet and you need to compare the sums of two columns of doubles/floats then you don't have pre-defined constants but you still need to use epsilon checks to compare the two.
The point is that if you ever need to compare two numbers represented as double/floats, you cannot do so directly because they're imprecise and operations accumulate errors, so you need to deal with the poor precision by doing an epsilon comparison.
I don't really understand how you can't think of an example where you need to compare two numbers that are double/floats so I'm still really lost.
---
If you're talking about how the errors in precision can become unclear/unbounded if you have a large number of double/float operations, your error is bounded by the IEEE implementation of the type and internal to the language - use another number representation if you need more precision.
http://stackoverflow.com/questions/747470/what-is-the-meaning-of-numeric-limitsdoubledigits10
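A sketch of the spreadsheet scenario in JavaScript (the column values are invented): neither sum is known ahead of time, yet the epsilon comparison between the two computed sums still works.

```javascript
function sum(column) {
  return column.reduce(function (acc, x) { return acc + x; }, 0);
}

// Errors accumulate: ten 0.1s do not sum to exactly 1.0.
var colA = [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1];
var colB = [0.5, 0.5];

sum(colA) === sum(colB);                // false - direct comparison fails
Math.abs(sum(colA) - sum(colB)) < 1e-9; // true  - epsilon comparison works
```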
|
On November 03 2015 07:39 Nesserev wrote:
On November 03 2015 06:53 Kickstart wrote:
[...] Any help would be appreciated, or even pointing me to a resource that will clear things up so I am less clueless.

I tried the second piece of code, and it seems to work as intended; it returns the string "11/02/2015" (today's date in MURRICAN format).

For a small project it probably won't hurt that you initialize global variables like that, but it's better to avoid them anyway. As projects get bigger, those variables are accessible from anywhere else in your code, and thus any part of your code might introduce an error. Also, you determine the date once, but are you sure that every subsequent time today() is called, the date is still the same? It's better to keep logical statements that belong together in the same function.
Yeah, you are right. The second piece is actually working correctly; I was just calling "alert(today);" instead of "alert(today());" Whoops.
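For anyone else hitting this: in JavaScript a function name without parentheses is a reference to the function object, not a call, so alert(today) shows the function itself rather than the date. A tiny illustration (the body here is just a stand-in for the real date formatting):

```javascript
function today() {
  return "11/02/2015"; // stand-in for the real date-formatting logic
}

typeof today;   // "function" - the function object itself
typeof today(); // "string"   - the value the call returns
```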
|
On November 03 2015 05:01 WarSame wrote:I'm not sure I fully understand the point of it. This allows you to develop web layouts without having to touch JS/CSS/HTML? You can just write it in, what, C#, and it compiles to a web suite version?
I think the point is supposed to be that C# and XAML are a lot nicer interfaces for UI development than JS/CSS/HTML, which is probably true.
|
If that's the case then I'm fully behind it. My experience with the web suite is painful so far. I just wasn't sure if I was reading it right. It can be hard to understand some descriptions of products. There's a whole ton of lingo, other products' names, and unclear explanations.
|
On November 03 2015 07:56 Kickstart wrote:
[...] Yeah, you are right. The second piece is actually working correctly; I was just calling "alert(today);" instead of "alert(today());" Whoops.
Here's another way of doing it that should show you the way:
function printDate() {
    var temp = new Date();
    var dateStr = padStr(temp.getFullYear()) +
                  padStr(1 + temp.getMonth()) +
                  padStr(temp.getDate()) +
                  padStr(temp.getHours()) +
                  padStr(temp.getMinutes()) +
                  padStr(temp.getSeconds());
    console.log(dateStr);
}

function padStr(i) {
    return (i < 10) ? "0" + i : "" + i;
}
|
On November 03 2015 07:16 Blisse wrote:
[...] The point is that if you ever need to compare two numbers represented as doubles/floats, you cannot do so directly because they're imprecise and operations accumulate errors, so you need to deal with the poor precision by doing an epsilon comparison.

I'm confused, aren't you two actually saying the same thing? He's pointing out that you can't compare floating point numbers without first setting a tolerance. Which is exactly what you are saying, right?
|
To me, it sounded like he was saying that for A - B < e, you have to have a pre-defined/constant A or B or the comparison doesn't work, which doesn't make sense to me.
I'm asking how do you know what to compare your value against?
Specifically this...
---
OH are you (manitou) asking how do you determine the tolerance?
The fault tolerance is just a number you decide to choose depending on how much precision you want to be able to guarantee with each operation - you choose it yourself based on the numeric type and your system requirements.
---
Also I was watching X-Files on Netflix and they mentioned Manitou and I was like oh damn that's where your name comes from....
|
On November 04 2015 03:19 Blisse wrote:To me, it sounded like he was saying that during A - B < e, that you have to have a pre-defined/constant A or B otherwise the comparison doesn't, which doesn't make sense to me. Specifically this... --- OH are you (manitou) asking how do you determine the tolerance? The fault tolerance is just a number you decide to choose depending on how much precision you want to be able to guarantee with each operation - you choose it yourself based on the numeric type and your system requirements. --- Also I was watching X-Files on Netflix and they mentioned Manitou and I was like oh damn that's where your name comes from....
Haha. Well, I didn't ask about the tolerance (that's obviously a const). I was asking about A - B where, if you have to calculate both A and B, you can get imprecision in both of them, as opposed to A (or B) being a pre-set value.
// CASE 1
const double tolerance = 0.001;
const double n = 0.03;

bool all_is_fine(double x, double y)
{
    return fabs(n - (x + y)) <= tolerance;
}

// CASE 2
const double tolerance = 0.001;

bool all_is_fine(double a, double b, double c, double d)
{
    return fabs((a + b) - (c + d)) <= tolerance;
}
I was asking about case 1 vs. case 2 specifically, since all the examples posted previously only showed case 1. What if you have to do it the case 2 way? Won't the imprecision skew both additions, making the tolerance check moot?

Might be a dumb question but it intrigued me (at my company we try to avoid floats like the plague so I don't have much experience with them).
|
I believe in that case you may want to double the tolerance. Each computed sum can pick up some maximum rounding error; with two computed sums instead of one you can get roughly twice as much, so doubling the tolerance covers it. It's like propagating uncertainty in physics: for addition and subtraction the absolute errors add, while for multiplication and division it's the relative errors that add. (You don't square or square-root the tolerance; that would be mixing up absolute and relative error.)
|
This guy here blogged a lot about floating point numbers: https://randomascii.wordpress.com/
Here's his article about comparisons: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
Well, I didn't ask about the tolerance (that's obviously a const).
That's actually a thing that's weird about floating point: the precision of the numbers is not constant. It changes with the magnitude of the values, because the exponent part of the floating point number moves the point around. In C, the "float.h" header defines an "epsilon" (FLT_EPSILON, DBL_EPSILON) that tells you the precision of float and double in the range 1.0 to 2.0. If you move away from that range, the precision changes.
What's going on there is, the main bits of the data structure are used for fractions like this:
1 + (1/2) + (1/4) + (1/8) + (1/16) + (1/32) + ...
Those bits build values between 1.0 and <2.0. Then there's a second, smaller set of bits that moves the point around to get to smaller and larger ranges of numbers (a binary point, not decimal point).
Here's something fun about Patriot missiles being more imprecise the longer the computers were running:
http://www.gao.gov/products/IMTEC-92-26
There's something about the 24-bit precision of their computers in the PDF. The system counted time in tenths of a second since boot and converted it to seconds by multiplying with a 24-bit approximation of 1/10; the small error in that constant grew with uptime, and after about 100 hours the computed time was off by roughly a third of a second, enough to put the range gate hundreds of meters off target. The precision for comparing two time values then went to shit as the numbers got large.
|