Ask and answer stupid questions here! - Page 712
Ghostcom
Denmark4782 Posts
EDIT: A false positive can often be far more dangerous than a false negative. See the Cochrane review on breast cancer.
Artisreal
Germany9235 Posts
On November 02 2018 06:26 Uldridge wrote: What do you generally prefer? High significance or high power? It's sort of sad that you need to have a tradeoff between the two. Nvm, entirely misread the comment without taking into account the posts before it.
Uldridge
Belgium4786 Posts
I know there are formulas to find the sample size etc. needed to keep your type II error low enough while you set your significance level suuuuuper low (or high, whatever, more stringent), but surely there should be some way to figure out what the best alpha and 1-beta levels are for a certain experimental setup and the data that is available. I mean, how do you know it's relevant when you can arbitrarily choose your stuff, right? Sometimes it all feels like tinkering until something seemingly relevant pops out. I know I'm a statistics rookie though, so don't bother replying if this is way too full of mistakes.
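Those formulas can be sketched in a few lines. This is a minimal, stdlib-only sketch using the standard normal approximation for a two-sided two-sample z-test (the function name and the choice of effect size are illustrative, not from the thread); it shows concretely how tightening alpha inflates the required sample size at fixed power:

```python
import math
from statistics import NormalDist

def per_group_n(alpha: float, power: float, d: float) -> int:
    """Approximate per-group sample size for a two-sided two-sample z-test
    detecting a standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_beta = z.inv_cdf(power)           # quantile for the desired power (1 - beta)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Tightening alpha from 0.05 to 0.001 at 80% power, medium effect (d = 0.5):
print(per_group_n(0.05, 0.8, 0.5))   # roughly 63 per group
print(per_group_n(0.001, 0.8, 0.5))  # more than double that
```

The tradeoff is visible directly in the formula: a smaller alpha pushes z_alpha up, and the required n grows with the square of the sum.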
DarkPlasmaBall
United States44359 Posts
On November 02 2018 08:18 Uldridge wrote: Well, if you lower your rate of type I errors, i.e. changing your significance level from 0.05 to 0.001 for example, you'll need extreme amounts of evidence (data) in order to show something is truly deviating from the null hypothesis. This also means your type II error rate skyrockets, because being so stringent will also cause you to miss real effects. I know there are formulas to find the sample size etc. needed to keep your type II error low enough while you set your significance level suuuuuper low (or high, whatever, more stringent), but surely there should be some way to figure out what the best alpha and 1-beta levels are for a certain experimental setup and the data that is available. I mean, how do you know it's relevant when you can arbitrarily choose your stuff, right? Sometimes it all feels like tinkering until something seemingly relevant pops out. I know I'm a statistics rookie though, so don't bother replying if this is way too full of mistakes. Yup, that's statistics (and the art behind a lot of scientific studies in general). As Ghostcom noted, accepting more Type 1 errors vs. accepting more Type 2 errors is usually based on context. Sometimes, a false positive leads to a "better safe than sorry" approach, like how you're supposed to treat every gun like a loaded gun (even if it's not). In terms of a false positive vs. false negative for a disease, it might depend on how costly (financially, physically, etc.) an unnecessary treatment (medicine, chemotherapy) might be vs. not treating it at all. In general, a simple solution to a lot of these very serious ethical dilemmas is to just repeat the test a few times, as it's exponentially less likely to receive multiple errors of the same type in a row. A common example of this is when women use 2 or 3 different pregnancy tests when checking whether or not they're pregnant.
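The "repeat the test" point can be made exact. Assuming each repetition has false-positive rate alpha and the errors are independent (an idealization; real repeated tests often share error sources), the chance that k tests all come back falsely positive is alpha to the power k:

```python
# Chance that k independent repetitions of a test ALL give a false positive,
# assuming each has false-positive rate alpha and errors are independent.
alpha = 0.05
for k in (1, 2, 3):
    print(f"{k} false positive(s) in a row: {alpha ** k:.6f}")
```

At alpha = 0.05, three positives in a row from a true null happen about once in eight thousand runs, which is why a couple of pregnancy tests in agreement is so much more convincing than one.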
Uldridge
Belgium4786 Posts
I mean, I understand why it's used and I understand the invaluable insight it might give us, but it still feels wrong, if you know what I mean. It's like knowing the raft you've made to escape the island is made out of crappy driftwood, which might make you drown at any point, but you're forced to use it anyway if you want to chance it off the island. I just wish there were hard boundaries instead of all this fuzzy stuff, but I guess that's our human limitation.
Simberto
Germany11517 Posts
It only means that you get different types of statements from statistics compared to the logic of other parts of maths. In other parts of maths, you usually prove whether something is true or not (or possibly that it is impossible to prove that it is true). And the same type of results hold in the applications of that maths; any uncertainty is usually based in the uncertainty of the input data, not in the maths itself.

In statistics, you use the same type of logic and proofs for the maths itself, but in the applications, this leads to statements with a margin of error. That does not make the statements any less valuable, you just need to understand what the statement actually is, and that is a bit more complex. Because it is usually not "x is true" or "x is untrue", but "x has a 96% chance of being true and a 4% chance of being untrue". This statement itself is still mathematically 100% true, but the problem is that people don't understand it, and think it actually means "x is true". The problem is not with statistics, but with people not understanding what statistics tells them.

As a clear example of that, take a look at the 2016 election in the US. There were predictions that Hillary Clinton had a 70% chance of winning. After Donald Trump won, people claimed that these predictions were wrong, because Clinton didn't win. The predictions may have been wrong, but they were definitely not wrong for that reason. Because if Hillary Clinton had a 70% chance of winning, that means she had a 30% chance of losing. Just because the percentage is higher than 50% does not mean the outcome is certain, or that the prediction was wrong if it doesn't happen.
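A tiny Monte Carlo sketch of that election point (the seed and trial count are arbitrary choices for illustration): if a candidate genuinely has a 70% win chance, simulating that world many times shows the "unexpected" outcome is entirely routine.

```python
import random

random.seed(0)
trials = 100_000
# Simulate many elections in which one candidate truly has a 70% win chance.
wins = sum(random.random() < 0.7 for _ in range(trials))
print(wins / trials)  # close to 0.7, i.e. the favorite still loses ~30% of the time
```

A single loss tells you almost nothing about whether "70%" was a good probability; you would need many such forecasts to judge whether the forecaster is calibrated.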
Uldridge
Belgium4786 Posts
The fact that you can arbitrarily pick a significance level, or do some p-hacking to make your results look promising, seems like an inherent flaw. We shouldn't be able to do that. There should be ways to circumvent it, using rigorous rules on what you can and can't do. For instance, why is a 0.05 significance level so widely used in the life sciences? Why not 0.01? Why not a custom significance level suited to your experimental setup, based on clearly defined ways to get to that custom significance level?
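One reason p-hacking works is plain arithmetic: under independent tests of true null hypotheses at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m, which grows quickly with the number of tests m. A quick sketch:

```python
# Chance of at least one false positive among m independent tests of
# true null hypotheses, each run at significance level alpha.
alpha = 0.05
for m in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** m
    print(f"m = {m:3d}: P(at least one false positive) = {p_any:.3f}")
```

With 20 tests at alpha = 0.05 you are more likely than not to "find" something, which is exactly why trying analyses until one comes out significant is so misleading.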
Ghostcom
Denmark4782 Posts
In fact, all major journals have agreed to ban p-values from their papers (though they don't really do that in practice). This paper by Greenland et al. is probably one of the more crucial ones in this regard: https://link.springer.com/article/10.1007/s10654-016-0149-3
bo1b
Australia12814 Posts
When are German men going to step their game up, sexually? https://www.yourtango.com/200938569/worlds-10-best-worst-lovers
_fool
Netherlands678 Posts
On November 02 2018 17:17 bo1b wrote: When are German men going to step their game up, sexually? https://www.yourtango.com/200938569/worlds-10-best-worst-lovers The Germans (and the whole of Northern Europe, for that matter) are still waiting for an EU standard for this. Length of foreplay, satisfaction rates, etc. The whole climate thing has been taking top priority, though, so the topic has been moved to somewhere in the mid-2020s. A shame, really. We really could have made a mark here. Of course, individual countries are free to implement their own preliminary standards, so not all is lost.
Artisreal
Germany9235 Posts
On November 02 2018 17:43 _fool wrote: The Germans (and the whole of Northern Europe, for that matter) are still waiting for an EU standard for this. Length of foreplay, satisfaction rates, etc. The whole climate thing has been taking top priority, though, so the topic has been moved to somewhere in the mid-2020s. A shame, really. We really could have made a mark here. Of course, individual countries are free to implement their own preliminary standards, so not all is lost. Holy shit, this would make for a perfect guideline for incels to finally "get" women.
bo1b
Australia12814 Posts
On November 02 2018 17:43 _fool wrote: The Germans (and the whole of Northern Europe, for that matter) are still waiting for an EU standard for this. Length of foreplay, satisfaction rates, etc. The whole climate thing has been taking top priority, though, so the topic has been moved to somewhere in the mid-2020s. A shame, really. We really could have made a mark here. Of course, individual countries are free to implement their own preliminary standards, so not all is lost. I'm just imagining a bureaucrat in Geneva putting an ISO standard together for all of this now lmao.
DarkPlasmaBall
United States44359 Posts
On November 02 2018 21:11 bo1b wrote: Instead of worrying about Incels, you should worry about Gercels when this info leaks, step it up buddy. I'm just imagining a bureaucrat in Geneva putting an ISO standard together for all of this now lmao. So instead of metrosexuals, we'll have metric-sexuals?