GSL Code S Membership statistical analysis - Page 2
Koshi
Belgium38797 Posts
I always appreciate any sort of data collecting. I always check first if it is legit, and the Bayesian Bradley-Terry model is. Then I just read it as it is presented, and remember the things that seem useful. Do not forget that data is always misleading and sometimes even misrepresented. Just try to gather as much as possible and compare.
Mip
United States63 Posts
I don't even know why this is such an issue for you. We are talking about the same assumption that is built into the current ranking system. They give no points for any round prior to the Ro64 and make no attempt to adjust their rankings for it; they treat only the Ro64 onward as rank-worthy. Because I have no data other than what they give me, I inherit that assumption implicitly, and there is nothing I can do about that.

This isn't the tremendous problem you are making it out to be. There are a couple of players it affects adversely, I'll give you that, but all the top players are ranked appropriately, with reasonable skill levels estimated for each. It's no worse than what the point system imposes, so there's no loss compared to the current alternative.

There is also no feasible way for me to gather data on the >100 players who played in the three seasons. There is no central source listing which players didn't sign up and which players failed to qualify. So even if I wanted to make your irresponsible, arbitrary data additions, I could not do so. If you'd like to start making house calls to all the players to find out which tournaments they played in, then we can start to talk about measuring that sort of effect.
nath
United States1788 Posts
Also, whoever talked about Jinro "trying the other two times, he entered more than once": LOL, it's clearly only Ro64 and beyond... there's nothing about qualifications in this model. Good job man, I love the model you chose; I'm not a stats major, but I've worked with a lot of statistical methods in research and I liked what you did.

On December 10 2010 16:14 Plutonium wrote:
Talk all the statistical academics you want, It doesn't change the fact that your results are massively flawed by your unsound assumptions. This took a lot of work, and I applaud you for that. However, your model is wrong. I understand that you do not have the data required to separate between failing to qualify and not registering. All I'm suggesting is that you at least attempt to compensate for that, instead of trying to handwave it away.

He doesn't want to take that into account; it would cause more problems. Better to let the issue resolve itself over time as more data is collected, so untwist your panties.
Mip
United States63 Posts
@skipgamer Let me clarify what I mean by "not ideally suited for prediction." What I mean is that I only gathered data on the GSL. If I wanted prediction accuracy, I should gather as many outside tournaments as I can and add that data to the GSL data, so that I have as much information about each player as possible. On the other hand, if I want a fair ranking system for just the GSL (which is so far what I've been after), I absolutely must base my results only on GSL data. So when I created these results, it was under the context of "let's only rate these players based on their GSL performance." The next step is to pull in more data, and since no one has volunteered to help me yet, it might be a slow process. When I have all possible available data, then we can start talking in terms of "ideal prediction accuracy" (ideal according to our current state of knowledge, anyway).

People keep calling these results "inaccurate." That may have some truth to it, but against the benchmark of the GSL point system, these results are at worst on par, so I really wouldn't worry about that at this point. The model and methodology are sound, and the accuracy of the results will get better as more data comes in.

Think about what we're trying to do: we have a bunch of progamers and they are all playing SC2. We want to know their skill levels and how they compare to the other players. We can't measure skill directly; it's not something I can stick a ruler on. What I can see is what happens when they play a game: whether they win or lose. That provides hints at their skill level. Take Season 2, for example. We know FruitDealer is amazing because he won Season 1. FruitDealer loses to Foxer in the Ro32; that tells me Foxer is at least skillful enough to beat FruitDealer, which is quite substantial. Then Foxer goes on to lose against NesTea in a nearly dead-even match. That hints that Foxer and NesTea are at about the same level. When you take all the data together, you get hundreds of hints about how the players compare to each other. You'll still be uncertain exactly how much their skills differ, but you should have an idea. In simplistic terms, that is how this model works: the wins and losses point toward the skill a player has compared to whoever they are playing.

So the issue with not having pre-Ro64 data is that everyone who loses in early rounds may get unfairly ranked downward, because there's no data showing that they won their way through the Ro128, Ro256, etc. This is a real problem for players like IdrA and JookToJung, as we've said before, but there's no available data to fix it. I'm sure someone at GomTV has it, but it's not publicly available.
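The "wins as hints" idea above is exactly the Bradley-Terry likelihood. Here is a minimal sketch using invented toy results in the spirit of the Season 2 examples (the model in the thread is Bayesian; this stand-in uses plain maximum likelihood with a small ridge penalty so the skills stay identifiable):

```python
import math

# Hypothetical toy results, (winner, loser) pairs -- illustrative only,
# not the real GSL dataset.
games = [
    ("Foxer", "FruitDealer"),
    ("NesTea", "Foxer"),
    ("FruitDealer", "PlayerD"),
    ("NesTea", "PlayerD"),
]

players = sorted({p for g in games for p in g})
skill = {p: 0.0 for p in players}  # log-skill; start everyone equal

def p_win(a, b):
    """Bradley-Terry: P(a beats b) = 1 / (1 + exp(skill_b - skill_a))."""
    return 1.0 / (1.0 + math.exp(skill[b] - skill[a]))

# Gradient ascent on the log-likelihood; the -0.01*skill term is a small
# ridge penalty (skills are only defined up to a constant without it).
for _ in range(500):
    grad = {p: 0.0 for p in players}
    for w, l in games:
        g = 1.0 - p_win(w, l)  # d(log-lik)/d(skill of the winner)
        grad[w] += g
        grad[l] -= g
    for p in players:
        skill[p] += 0.1 * grad[p] - 0.01 * skill[p]

print(sorted(skill, key=skill.get, reverse=True))
```

Even on four toy games, the fit recovers the intuition in the post: NesTea on top, the player with no wins on the bottom, and Foxer edging FruitDealer because of their head-to-head result.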
Plutonium
United States2217 Posts
He's taken a look at your data and analysis and assures me that everything you say is correct, and that I am "so wrong." However, he says that releasing this data can be misleading, because us people are stupid and don't understand what you're doing. A few key points he makes:

1) The amount of uncertainty in this analysis needs to be emphasized. The results are going to be extremely noisy, and should not be taken too seriously.

2) Treating "didn't make the Ro64" and "didn't register" as the same thing is an acceptable assumption, given that we do not have the data to tell them apart.

3) It would be nice to "reward" those who qualify for the Ro64 multiple times. How much to reward this, however, is unclear. There are mechanisms for it - increasing initial means and reducing standard errors - but the question is what values to assign, unless one wants to be more arbitrary about it.

4) He then talks about reaching these values by integrating the chances of getting into the Ro64 for GSL 1, 2, and 3, and using that to set initial skill levels for players. This is a tricky proposition, and it may overcomplicate the model.
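The mechanism in point 3 - raising the initial mean and shrinking the standard error for repeat qualifiers - could be sketched like this. The base prior and the per-qualification bumps are made-up knobs, which is precisely the "what values should be assigned" problem the post raises:

```python
# Hypothetical prior-adjustment sketch: nudge a player's Normal prior on
# log-skill by how many times they reached the Ro64. All numbers are
# placeholders, not fitted values.
def prior_for(qualifications, base_mean=0.0, base_sd=1.0,
              mean_bump=0.2, sd_shrink=0.9):
    """Return (mean, sd) of a Normal prior on log-skill."""
    mean = base_mean + mean_bump * (qualifications - 1)
    sd = base_sd * sd_shrink ** (qualifications - 1)
    return mean, sd
```

A three-time qualifier would start at a higher mean with a tighter sd than a first-timer, but nothing in the data pins down how much higher or tighter - hence the arbitrariness concern.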
roadrunner343
148 Posts
On the serious side, you already knew that (And admitted it) and I applaud the work you've done and love the data. Obviously we can't take it as 100% truth, but it gives us a lot of good data to work with. Thanks.
Mip
United States63 Posts
But yeah, since posting these last two threads, I've definitely experienced people misunderstanding me. It's hard to be perfectly clear on a forum, where you kind of have to throw it all out there to see what people do and do not understand right away and what needs to be clarified, especially when I mostly converse with people who have been working for years on the same educational course and who all speak the same language as me, so to speak.

When I came up with this idea, I thought it would be a good way to show how much uncertainty there is in the model. It's like: yeah, Jinro is awesome, but based on what we know about him from the data, there's a 25% chance that he might not actually be in the top 32 best players in our pool. Maybe we'll know better after the match that starts in an hour and a half =). Compare this to NesTea, where we think there's a 1 in 200 chance that he might not actually be in the top 32, because he's won games against top players across multiple seasons.

The beautiful thing about Bayesian statistics is that you can actually speak in the language of probability, unlike in classical statistics, where we speak in terms of meaningless confidence intervals. When you look at the spreadsheet for this post, you can say that, based on our current state of knowledge, assuming the model with its assumptions (everyone cringe a little bit at the assumptions), Player Whoever has a whatever % chance of being in the top 32 most skilled. Then you look over the list and say, "But player X doesn't have a 20% chance of being top 32," and to you I say, "You're probably right, but the data doesn't know that." The data doesn't troll the TL forums analyzing the crap out of everyone's play; it just considers the wins and losses that it's given.

Once the GSL gets going with Code S tournaments, we will see a great refinement in our estimates for all Code S players, because every tournament will feature all of them, so we will get much more data about each one.
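The "X% chance of being top 32" numbers come straight out of the posterior: in each joint draw of everyone's skill, check whether the player ranks in the top 32, and average over draws. A sketch with an invented field of 64 players (the Normal means and sds are placeholders, not the thread's actual posterior):

```python
import random

random.seed(0)

# Stand-in posterior: name -> (mean, sd) of a Normal on log-skill.
# Values are fabricated for illustration.
players = {f"player{i}": (i * 0.05, 0.5) for i in range(64)}

def prob_top_k(target, k=32, draws=2000):
    """Fraction of joint posterior draws in which `target` ranks in the top k."""
    hits = 0
    for _ in range(draws):
        sample = {p: random.gauss(m, s) for p, (m, s) in players.items()}
        ranked = sorted(sample, key=sample.get, reverse=True)
        hits += target in ranked[:k]
    return hits / draws
```

A player whose posterior mean sits well above the 32nd-place cutoff comes out near 1, one well below it near 0, and borderline players land in between - exactly the 25%-for-Jinro, 1-in-200-for-NesTea kind of statement in the post.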
Mip
United States63 Posts
   Names   ProbWinFinal
1  Rain    0.5055279
2  MC      0.4944721

Yeah, so how do you like that: 50/50, the data has no idea who to favor for these two. Had it been NesTea instead of Rain, it would give only a 35% chance to MC, but vs Rain, a dead even split. So go flip your coin: heads bet Rain, tails bet MC.
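A head-to-head number like that 50.6/49.4 split can be read as a posterior predictive: average the Bradley-Terry win probability over draws of each player's skill. A sketch, assuming Normal skill posteriors with hypothetical means and sds:

```python
import math
import random

random.seed(1)

def win_prob(mean_a, sd_a, mean_b, sd_b, draws=5000):
    """Posterior predictive P(A beats B): average the Bradley-Terry
    probability over draws from each player's (assumed Normal) skill posterior."""
    total = 0.0
    for _ in range(draws):
        gap = random.gauss(mean_a, sd_a) - random.gauss(mean_b, sd_b)
        total += 1.0 / (1.0 + math.exp(-gap))  # logistic of the skill gap
    return total / draws
```

Two players with equal posterior means come out at essentially 50/50 no matter how uncertain the estimates are; a clear skill gap pushes the number toward one side.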
roadrunner343
148 Posts
Is there any way to make the algorithm take into account which race each player is playing? For example, I'm sure some pros have much higher win rates against certain races than others. Is this simply impossible, way too much work, or is it doable? Either way, I like your spreadsheet.
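One way to fold race into a Bradley-Terry-style model is to add a per-matchup offset on top of the player skill gap. This is a sketch of the idea only; the offset values below are placeholders, not estimates from any data:

```python
import math

# Hypothetical race-matchup offsets on the log-odds scale; a fitted model
# would estimate these from the games rather than hard-code them.
matchup_bonus = {("T", "Z"): 0.1, ("Z", "T"): -0.1}  # example: slight T edge in TvZ

def p_win(skill_a, race_a, skill_b, race_b):
    """P(A beats B) with a race-matchup adjustment; 0 offset for mirrors
    or matchups not listed."""
    offset = matchup_bonus.get((race_a, race_b), 0.0)
    return 1.0 / (1.0 + math.exp(-(skill_a - skill_b + offset)))
```

The practical catch is data volume: each matchup-specific parameter needs enough non-mirror games to estimate, which is exactly the kind of extra data the thread says is still being collected.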
aristarchus
United States652 Posts
Why not add another fake player, something like "average person who lost in the final round of qualifying." Anytime someone qualified, give them a win over that player. Anytime they failed, give them a loss. It might be better to use a different fake player for each season, since the qualifying pool's difficulty presumably changed. Obviously that wouldn't be ideal, but it might be less bad than ignoring the qualifying rounds entirely. (I have a math background, but no stat, so I trust your judgement, but thought I'd throw out the suggestion.)
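This suggestion amounts to augmenting the game list with pseudo-games against a per-season ghost opponent. A sketch with invented records (the real blocker, per the thread, is that the qualified-versus-didn't-register data isn't publicly available):

```python
# (winner, loser) pairs from the real bracket -- one illustrative entry here.
games = [("NesTea", "Foxer")]

# Hypothetical qualifier records: player -> (season, qualified?).
qualifier_results = {
    "NesTea": (2, True),
    "PlayerX": (2, False),
}

# One fake opponent per season stands in for "average final-round qualifier";
# qualifying counts as a win over the ghost, failing as a loss to it.
for player, (season, qualified) in sorted(qualifier_results.items()):
    ghost = f"qualifier-bar-S{season}"
    games.append((player, ghost) if qualified else (ghost, player))
```

The augmented list then feeds the same Bradley-Terry fit as the real games, with the ghost estimated like any other player.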
Mip
United States63 Posts
At the end of the day, I think I'm much more interested in how the races and maps are balanced than in which players seem to be better right now.

@enjoyx and Mitosis: Thanks for pointing that out; it's a pain in the butt to catch all of those. The TL brackets are inconsistent about how they capitalize names, whether or not they include clan tags, etc., and it's caused a lot of players to get split into two people. I wish I had a data source that didn't need cleaning, but alas, no.

@GeorgeForeman and confusedcrib: I'm glad you paid attention in your intro stats classes, but in Bayesian statistics you can integrate over the uncertainty in your estimates to obtain a single number that takes all of that uncertainty into account. We can say with Bayesian statistics that, based on our current state of knowledge (priors + data provided), the probability of Player X actually being top 32 is Y%. That you would bring up a t-test for this model immediately puts you at an intro stats level in my brain. Your instinct is correct for that level of stats knowledge, but in this case it should not be a concern. Think of those percentages in terms of what I described at the end of the paragraph above. However, to appease you guys, I added a column of standard errors. If you apply your intro stats knowledge here, however, you will misinterpret them, because they mean different things when your data are not from a normal/Gaussian distribution. For a binary outcome, the variance is prob * (1 - prob), and the standard error is the square root of that, but you have to throw away any thought that, for example, 3 standard errors gives you a confidence interval, or any nonsense like that from intro stats. For NesTea, if you tried that, you'd get a confidence interval that included probabilities greater than 1. To do it properly, you'd have to convert to an odds ratio, compute confidence intervals there, then convert back to a probability scale.

@aristarchus: I could definitely make an indicator-variable adjustment for that problem, but I don't even have data on who registered and didn't qualify versus who didn't register. For some players that information is easy to find - if they are on a major team, the Liquipedia site has it all - but for the less known people, I don't even know where to look, and I really don't want to look in a hundred different places to track it down, because if I start, I don't know that I can find everyone, and if I can't find everyone, the whole thing is a waste.
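The odds-ratio route described here - build the interval on the log-odds scale, where it cannot escape (0, 1), then map back - can be sketched as follows, using the post's sqrt(p * (1 - p)) standard error for a binary outcome:

```python
import math

def se_binary(p):
    """The post's standard-error column for a binary outcome: sqrt(p*(1-p))."""
    return math.sqrt(p * (1 - p))

def logit_interval(p, se, z=1.96):
    """Map a probability and its SE to the log-odds scale, build the interval
    there, and map back, so both endpoints stay strictly inside (0, 1)."""
    logit = math.log(p / (1 - p))
    se_logit = se / (p * (1 - p))  # delta method: d(logit)/dp = 1/(p*(1-p))
    inv = lambda x: 1.0 / (1.0 + math.exp(-x))
    return inv(logit - z * se_logit), inv(logit + z * se_logit)
```

For a NesTea-like probability near 1, the naive p ± z·se interval pokes above 1, while the logit-scale interval stays inside (0, 1) by construction; that is the point the post is making about misapplying intro-stats rules here.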