Winner's Advantage in Grand Finals - Page 2
Cheren
United States, 2911 Posts
Every defense of double elimination I've read is tautological. "Double elimination works because teams that lose twice are eliminated." "Double elimination works because everyone gets a second chance." It's a system with horrible flaws that isn't used in real sports and needs to get out of esports. It is to tournament formats what Instant Runoff is to voting.
motbob
United States, 12546 Posts
On March 15 2015 08:57 itsjustatank wrote:
Your generated Elo predictions based on arbitrary distribution choices resulted in differences that do not seem statistically significant. You do no test to prove that they are statistically significant, you just give the differences in observed percentages.

Null hypothesis: there is no statistically significant difference between starting a double-elimination finals 1-0 versus 0-0.
Alternate hypothesis: there is a statistically significant difference between starting a double-elimination finals 1-0 versus 0-0.

You have not proven whether or not what you got is noise and whether or not there really is a difference between a 1-0 start and a 0-0 start. You just want one of the two, clearly, and think this is enough to want to make a change. Your argument is completely non-falsifiable right now. Sure, it may work for the internet, but unless you do that extra work you are pissing in the wind with a cloak of statistics making your advocacy look smart to people who do not know what they are reading.

What is the point of worrying about null/alternate hypotheses, usually? The normal case is this: we sat on the side of the curb all day and observed 200 people passing by. 120 of those people were male. Assuming (liberally) that this has been a completely typical day in terms of the composition of people passing by, can we take our 120/200 number and say that people who walk past the curb are more likely to be male than not? Or was what we saw dictated by random chance? We have to use statistical tests to get a P-value and thereby answer that question and see if we can reject the null.

In Excel, those considerations don't really make any sense because we can just increase the sample size to some absurd number. Imagine I simulate my exercise: I generate a random number and create a cell that returns 1 (for male) 51% of the time and 0 (female) 49% of the time. I then run the test 200 times. The test gives me 54.5%; a test with a 1000 "sample size" gave 52.8%; 10000, 51.5%; 50000, 51.036%. As the sample size gets larger and larger, the value observed converges to the "true" value of 51%.

So in this context, an appropriate objection isn't "you didn't do a proper statistical test" because we don't care about inferences and P-values here. We can get the true value, or approach it, just by cranking up the number of simulation runs.
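To make the convergence claim concrete, here is a minimal sketch of the same 51%/49% exercise in Python rather than Excel (the sample sizes and seed are illustrative, not motbob's actual workbook):

```python
import random

random.seed(0)
TRUE_P = 0.51  # probability that a simulated passer-by is "male"

def observed_rate(n_trials):
    """Draw n_trials weighted coin flips and return the fraction of 1s observed."""
    hits = sum(1 for _ in range(n_trials) if random.random() < TRUE_P)
    return hits / n_trials

for n in (200, 1_000, 10_000, 50_000, 1_000_000):
    print(f"n = {n:>9,}: observed = {observed_rate(n):.4f} (true = {TRUE_P})")
```

As n grows the estimate drifts toward 0.51, which is the point being made: in a simulation you buy accuracy with more runs rather than with inference.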
GeckoXp
Germany, 2016 Posts
On March 15 2015 09:27 motbob wrote:
What is the point of worrying about null/alternate hypotheses, usually? The normal case is this: we sat on the side of the curb all day and observed 200 people passing by. 120 of those people were male. Assuming (liberally) that this has been a completely typical day in terms of the composition of people passing by, can we take our 120/200 number and say that people who walk past the curb are more likely to be male than not? Or was what we saw dictated by random chance? We have to use statistical tests to get a P-value and thereby answer that question and see if we can reject the null. In Excel, those considerations don't really make any sense because we can just increase the sample size to some absurd number. Imagine I simulate my exercise: I generate a random number and create a cell that returns 1 (for male) 51% of the time and 0 (female) 49% of the time. I then run the test 200 times. The test gives me 54.5%; a test with a 1000 "sample size" gave 52.8%; 10000, 51.5%; 50000, 51.036%. As the sample size gets larger and larger, the value observed converges to the "true" value of 51%. So in this context, an appropriate objection isn't "you didn't do a proper statistical test" because we don't care about inferences and P-values here. We can get the true value, or approach it, just by cranking up the number of simulation runs.

You do realize you're not flipping a simulated coin, but you're using estimators with assumptions, right?
motbob
United States, 12546 Posts
On March 15 2015 09:44 GeckoXp wrote:
You do realize you're not flipping a simulated coin, but you're using estimators with assumptions, right?

From my perspective a tournament is just a series of specifically weighted coin flips.
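A minimal sketch of that framing, under the assumption of a fixed per-game win probability (the 60% favourite and the function name are just illustrative):

```python
import random

def best_of_series(n_games, p_win):
    """Play a best-of-n series as weighted coin flips; return True if the favourite takes it."""
    needed = n_games // 2 + 1
    wins = losses = 0
    while wins < needed and losses < needed:
        if random.random() < p_win:
            wins += 1
        else:
            losses += 1
    return wins == needed

random.seed(0)
trials = 100_000
rate = sum(best_of_series(5, 0.6) for _ in range(trials)) / trials
print(f"Bo5 series win rate for a 60% per-game favourite: {rate:.3f}")
```

The exact Bo5 value for a 60% per-game favourite is 0.6^3 * (1 + 3*0.4 + 6*0.4^2), about 0.683, so the simulated rate should land close to that.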
GeckoXp
Germany, 2016 Posts
Edit: the "a" in "a toss" / "a throw" means a single one. 8[
itsjustatank
Hong Kong, 9145 Posts
At the point where you even admit this in your OP, there isn't much else to say:

On March 14 2015 17:17 motbob wrote:
But the difference between formats seems small enough that, if I were an organizer, I would just keep doing what spectators want (no advantage).

At best you win that in your perfect little infinite computing boxes of imaginary players, it is perhaps a tiny bit better to have a 1-0 start in the finals of a double-elimination tournament for the winners bracket player. If it were significant, though, then you would be doing more than just cloaking uncertainties with claims of certainties. You'd have a solid basis to go to every tournament designer and have them unfuck their systems. As it is, you don't.
motbob
United States, 12546 Posts
On March 15 2015 09:13 Cheren wrote:
Every defense of double elimination I've read is tautological. "Double elimination works because teams that lose twice are eliminated." "Double elimination works because everyone gets a second chance." It's a system with horrible flaws that isn't used in real sports and needs to get out of esports. It is to tournament formats what Instant Runoff is to voting.

In the absence of perfect seeding, double elim has obvious advantages if you care about more teams than just the winner. People sometimes talk about the "real finals" in tournaments like the GSL; sometimes the two best players land on one side of the bracket. If that's a problem, double elim fixes it.
Cascade
Australia, 5405 Posts
On March 15 2015 09:27 motbob wrote:
So in this context, an appropriate objection isn't "you didn't do a proper statistical test" because we don't care about inferences and P-values here. We can get the true value, or approach it, just by cranking up the number of simulation runs.

Umm, yeah, you kinda have to do some kind of statistical test, or at least convince us in some way that your numbers are accurate enough, so that we feel confident that the differences you quote are more than random noise. We can never get the true value by simulation (infinite-accuracy computer simulations with infinite computing time have some practical issues, unfortunately, especially in Excel), but we can often get close enough with enough computing time. It is incredibly important that you make sure you are actually putting in enough computing time to get sufficiently accurate numbers out. Did you?

For example, take your first result of 51.6% vs 52.2% from 10k runs. This is close enough to flipping a coin, which will have an error of around 1/sqrt(N); for 10k runs that is 1% relative uncertainty, which is exactly the size of the difference you are seeing. So I think I need some convincing that the differences you are quoting are more than just numerical noise. Let me know if you need help.

Nonetheless, the idea of the simulation is great! I love the approach.
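A quick check of Cascade's back-of-the-envelope estimate, assuming the quoted 51.6% and 52.2% each come from 10,000 independent simulated finals (a sketch, not motbob's actual spreadsheet):

```python
from math import sqrt

def binomial_se(p, n):
    """Standard error of an observed proportion p estimated from n independent trials."""
    return sqrt(p * (1 - p) / n)

n = 10_000
for p in (0.516, 0.522):
    print(f"p = {p:.3f} +/- {binomial_se(p, n):.4f}")

# The gap between the two estimates and its combined uncertainty
gap = 0.522 - 0.516
gap_se = sqrt(binomial_se(0.516, n) ** 2 + binomial_se(0.522, n) ** 2)
print(f"gap = {gap:.3f} +/- {gap_se:.4f} ({gap / gap_se:.1f} standard errors)")
```

Under that assumption each estimate carries roughly a +/-0.005 standard error, so the 0.006 gap sits within about one standard error of zero, which is exactly Cascade's noise concern.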
Lucumo
6850 Posts
On March 14 2015 17:17 motbob wrote:
The conclusion I derive from these results is this: if tournament organizers are concerned solely with creating a format where the best team wins, they should have GF with a 1-0 advantage. But the difference between formats seems small enough that, if I were an organizer, I would just keep doing what spectators want (no advantage).

Nope, the team from the winners' side should need to win one Bo3, and the team from the losers' side two. It's not called "double elimination" for nothing.
eonrulz
United Kingdom, 225 Posts
On March 15 2015 17:39 Cascade wrote:
Umm, yeah, you kinda have to do some kind of statistical test, or at least convince us in some way that your numbers are accurate enough, so that we feel confident that the differences you quote are more than random noise. We can never get the true value by simulation (infinite-accuracy computer simulations with infinite computing time have some practical issues, unfortunately, especially in Excel), but we can often get close enough with enough computing time. It is incredibly important that you make sure you are actually putting in enough computing time to get sufficiently accurate numbers out. Did you?

For example, take your first result of 51.6% vs 52.2% from 10k runs. This is close enough to flipping a coin, which will have an error of around 1/sqrt(N); for 10k runs that is 1% relative uncertainty, which is exactly the size of the difference you are seeing. So I think I need some convincing that the differences you are quoting are more than just numerical noise. Let me know if you need help.

Nonetheless, the idea of the simulation is great! I love the approach.

I actually made exactly the same remark on the LiquidDota version of this blog. Errors and standard deviation are important, regardless of how many toys you run, at the very least so we can see how significant the result is. I'd also be interested in seeing the correlation between, say, the Elo difference between the top two teams and the top team's win rate. You'd definitely expect some correlation, but if it's too strongly correlated (or the reverse, I guess), then I'd say there's a bias there that you'd have to take into account when assessing the significance of the results. Or do some reweighting in your Monte Carlo. I mean, maybe it's a small thing, but it'd be nice to see.

Edit: my knowledge of statistics comes from particle physics, where we do some weird stuff that isn't necessarily, rigorously, mathematically correct. And our Monte Carlo samples are often >500k events, and we still worry about statistical uncertainties (not to mention systematics, which might come into play here as part of your Elo definitions). Still want to see the errors, though.
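A minimal sketch of the cross-check eonrulz describes, assuming the standard Elo expectation formula and a toy Bo3 simulation (the rating gaps and run counts are illustrative, not taken from motbob's setup):

```python
import random
from statistics import correlation  # Python 3.10+

def elo_win_prob(gap):
    """Standard Elo expected score for the higher-rated side, given a rating gap."""
    return 1 / (1 + 10 ** (-gap / 400))

def bo3_win(p):
    """Simulate a Bo3 for a side with per-game win probability p."""
    return sum(random.random() < p for _ in range(3)) >= 2

random.seed(1)
gaps, rates = [], []
for gap in range(0, 401, 50):
    p = elo_win_prob(gap)
    trials = 20_000
    rate = sum(bo3_win(p) for _ in range(trials)) / trials
    gaps.append(gap)
    rates.append(rate)
    print(f"Elo gap {gap:3d}: per-game p = {p:.3f}, simulated Bo3 win rate = {rate:.3f}")

print(f"correlation(gap, Bo3 win rate) = {correlation(gaps, rates):.3f}")
```

Tabulating how steeply the simulated series win rate tracks the rating gap is one way to see whether the format, or a modelling bug, is adding bias on top of the underlying Elo assumption.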
Liquid`Drone
Norway, 28443 Posts
On March 15 2015 09:13 Cheren wrote:
Every defense of double elimination I've read is tautological. "Double elimination works because teams that lose twice are eliminated." "Double elimination works because everyone gets a second chance." It's a system with horrible flaws that isn't used in real sports and needs to get out of esports. It is to tournament formats what Instant Runoff is to voting.

I'm sorry, I actually completely agree that double elimination shouldn't be used for serious competition. But when I started reading about Instant Runoff, it immediately struck me as a pretty sweet voting system. Why does it suck?
Cascade
Australia, 5405 Posts
On March 15 2015 22:34 eonrulz wrote:
I actually made exactly the same remark on the LiquidDota version of this blog. Errors and standard deviation are important, regardless of how many toys you run, at the very least so we can see how significant the result is. I'd also be interested in seeing the correlation between, say, the Elo difference between the top two teams and the top team's win rate. You'd definitely expect some correlation, but if it's too strongly correlated (or the reverse, I guess), then I'd say there's a bias there that you'd have to take into account when assessing the significance of the results. Or do some reweighting in your Monte Carlo. I mean, maybe it's a small thing, but it'd be nice to see.

Edit: my knowledge of statistics comes from particle physics, where we do some weird stuff that isn't necessarily, rigorously, mathematically correct. And our Monte Carlo samples are often >500k events, and we still worry about statistical uncertainties (not to mention systematics, which might come into play here as part of your Elo definitions). Still want to see the errors, though.

Ahaha, I'm an (ex) particle physicist myself. :D I wrote a minimum-bias event generator; QCD phenomenology, essentially. Good to see the particle-physics kind of thinking around. What exactly are you doing (or did do)? Your location is Switzerland, so I guess LHC?
itsjustatank
Hong Kong, 9145 Posts
On March 16 2015 03:04 Liquid`Drone wrote:
I'm sorry, I actually completely agree that double elimination shouldn't be used for serious competition. But when I started reading about Instant Runoff, it immediately struck me as a pretty sweet voting system. Why does it suck?

IRV does not pick the Condorcet winner. Here, an example from Wikipedia:

IRV uses a process of elimination to assign each voter's ballot to their first choice among a dwindling list of remaining candidates until one candidate receives an outright majority of ballots. It does not comply with the Condorcet criterion. Consider, for example, the following vote count of preferences with three candidates {A, B, C}:

35: A > B > C
34: C > B > A
31: B > C > A

In this case, B is preferred to A by 65 votes to 35, and B is preferred to C by 66 to 34; hence B is strongly preferred to both A and C. B must then win according to the Condorcet criterion. Using the rules of IRV, B is ranked first by the fewest voters and is eliminated, and then C wins with the transferred votes from B.

In cases where there is a Condorcet winner and IRV does not choose it, a majority would by definition prefer the Condorcet winner to the IRV winner. STV (single transferable vote) does a better job.
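A small sketch that replays that profile, assuming plain single-winner IRV and a brute-force head-to-head (Condorcet) comparison; the function names are just illustrative:

```python
# (count, ranking) for the three-candidate example above
ballots = [(35, ["A", "B", "C"]), (34, ["C", "B", "A"]), (31, ["B", "C", "A"])]

def irv_winner(ballots):
    """Eliminate the candidate with the fewest first-choice votes until someone has a majority."""
    remaining = {c for _, ranking in ballots for c in ranking}
    while True:
        tally = {c: 0 for c in remaining}
        for count, ranking in ballots:
            top = next(c for c in ranking if c in remaining)
            tally[top] += count
        leader = max(tally, key=tally.get)
        if tally[leader] * 2 > sum(tally.values()):
            return leader
        remaining.remove(min(tally, key=tally.get))

def condorcet_winner(ballots):
    """Return the candidate who beats every other candidate head-to-head, if one exists."""
    candidates = {c for _, ranking in ballots for c in ranking}
    def prefers(r, x, y):
        return r.index(x) < r.index(y)
    for c in candidates:
        if all(
            sum(n for n, r in ballots if prefers(r, c, d)) > sum(n for n, r in ballots if prefers(r, d, c))
            for d in candidates if d != c
        ):
            return c
    return None

print("IRV winner:      ", irv_winner(ballots))        # C
print("Condorcet winner:", condorcet_winner(ballots))  # B
```

IRV eliminates B first and elects C, while B beats both A and C head-to-head, which is exactly the failure being described.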
deliberate
Germany, 5 Posts
Assume we are running a double-elimination bracket where all sets are best-of-threes. In the grand final, the winner of the winners bracket and the winner of the losers bracket meet. As pointed out earlier, the consistent choice of format would be a Bo3, and in the case of the participant from the winners bracket losing, another Bo3. A more common choice is a Bo5 with a 1-0 advantage for the participant from the winners bracket.

Assuming further that between the two competitors the chance of one of them winning each game is constant (say, team A has a 60% chance of winning against B in every game), we can calculate the probabilities for the whole set. The following graph shows the chance of the team from the winners bracket winning the whole set, as a function of their chance of winning the individual games against the team from the losers bracket. The different curves show a standard Bo3 and Bo5, as well as the double-elimination Bo3 and the Bo5 with winners bracket advantage.

The first observation is that the Bo5 with 1-0 advantage probability curve is similar to the double-elimination Bo3 curve, which makes it a viable choice as the final set in terms of consistency. The second observation is the huge advantage for the team from the winners bracket: even with a 40% win chance against the team from the losers bracket in the individual games, the overall chance of winning is still >50%.
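The same numbers can be computed exactly; here is a minimal sketch under deliberate's constant-win-probability assumption, modelling the double-elimination final as "win at least one of two Bo3s" and the advantage final as a race to 3 wins starting from 1-0:

```python
from math import comb

def best_of(n, p):
    """P(win a best-of-n) with a constant per-game win probability p."""
    need = n // 2 + 1
    q = 1 - p
    # win the deciding game after k losses, k = 0 .. need-1
    return sum(comb(need - 1 + k, k) * p**need * q**k for k in range(need))

def double_elim_bo3(p):
    """Winners-bracket team wins if it takes at least one of two Bo3s."""
    return 1 - (1 - best_of(3, p)) ** 2

def bo5_with_advantage(p):
    """Bo5 starting 1-0: the winners-bracket team needs 2 wins before the opponent gets 3."""
    q = 1 - p
    return sum(comb(1 + k, k) * p**2 * q**k for k in range(3))

for p in (0.4, 0.5, 0.6):
    print(f"p = {p:.1f}:  Bo3 {best_of(3, p):.3f}  Bo5 {best_of(5, p):.3f}  "
          f"double-elim Bo3 {double_elim_bo3(p):.3f}  Bo5 with 1-0 {bo5_with_advantage(p):.3f}")
```

At p = 0.4 the two advantage formats still give the winners-bracket team better than even odds (about 0.52 for the Bo5 with a 1-0 lead and 0.58 for the double Bo3), which is the point about the size of the winners bracket advantage.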
itsjustatank
Hong Kong, 9145 Posts
A team may be more likely to win, but the field of predicting human action is not currently reducible to numbers, as much as we would love it to be and keep trying. To pretend that we can is the height of arrogance, and to tell others we can is to lie with statistics. We can talk about likelihoods, but we must qualify that with a lot of uncertainty. If it is not qualified, it is lying.
micronesia
United States, 24449 Posts
You can use a simplified model and say the odds of bowling a 300 are about 1% if you throw strikes with a consistent success rate of about 68 percent. If you try to argue that the model does not fully account for the other variables described above, you are correct, but then the only reasonable thing you can do is say there's no point in doing any calculation at all. Instead, we perform the calculation anyway and just acknowledge what was and was not modeled. It is still interesting to determine that you need about a 68% chance of getting a strike to roll a 300 one game in 100.

Edit: tank, the edit you made to your post while I was typing seems to already address what I was getting at.
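For reference, that 68% figure falls out of the simplification directly, since a 300 game requires twelve consecutive strikes (a sketch under the same constant-strike-rate assumption):

```python
# P(300 game) = p ** 12 under a constant per-throw strike probability p
p = 0.01 ** (1 / 12)                          # solve p**12 == 0.01
print(f"strike rate needed: {p:.3f}")         # ~0.681
print(f"check: 0.68**12 = {0.68 ** 12:.4f}")  # ~0.0098, i.e. about 1 in 100
```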