|
As an economist I understand all too well how frustrating it can be to have assumptions block the practical implementation of your model, so I definitely feel you there. 
I don't know if you've thought of this yet, but if you want to test the validity of your model, why not use SC1 data instead of SC2? Yes, the differences between SC1 and SC2 will introduce more uncertainty into your model; however, the wealth of data points you'll gain from it might be worth it. Just a thought.
|
My problem with a point system is that it doesn't take into account the skill of the players you play against. The base of the system I used is the same as the Elo system used for chess ranking, except that I took a Bayesian approach.
The rankings that I posted are the posterior means of the skill parameters minus 1 standard deviation. It's somewhat arbitrary. If I made it 2 standard deviations, you'd see players like LiveForever drop down a lot. For people like FruitDealer and NesTea, there are a lot more games used to estimate their skill, so if you penalize uncertainty, players who have played a lot of GSL games will float to the top.
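As a tiny sketch of that rule in R (the numbers here are made up for illustration; only the mean-minus-k-standard-deviations idea is the actual model):

  # Conservative ranking: posterior mean minus k standard deviations.
  # Skill numbers are made up; a wide sd means few GSL games observed.
  players <- data.frame(
    name      = c("FruitDealer", "NesTea", "LiveForever"),
    post_mean = c(1.8, 1.6, 1.7),
    post_sd   = c(0.3, 0.3, 0.9)
  )
  k <- 1  # the posted rankings use k = 1; with k = 2, LiveForever drops further
  players$rank_score <- players$post_mean - k * players$post_sd
  players[order(-players$rank_score), ]  # players with lots of games float up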
Here's a Google spreadsheet of the full ranking results: GSL Ranking Results
|
Updated rankings in the original post; see if you find them more agreeable. I think most will.
|
I totally agree with you that a point system with random tournament seeding does not tell you very much. However, large elimination tournaments with huge skill differences between players, like the GSL and MLG, seed their tournament brackets. (Note: just like in tennis, these seedings can be independent of the ATP-style rankings, and it won't change the story.) These seedings will get better over time and thus reduce the luck aspect. In the end, good players will get farther in tournaments more often, and thus accumulate more points in a point system. Thus, a point system with NON-random tournament seeding should be a good approximation of skill, given the sparseness of games compared to all the possible 1v1 match-ups.
Again, I agree that a point system cannot account for everything, and a richer model would be preferable. All I am saying is that a point system can be informative.
|
Oh, I totally agree. I think seeding is always valuable so as to maximize the opportunity to gather data from the players. I see a point system as an approximation to a dynamic Bayesian system, however. It's not that it doesn't work or that it's not valuable, but the Bayesian approach lets the data inform the rankings entirely, whereas a point system is only informed by the round reached.
For example, in GSL Season 2, FruitDealer lost to MarineKing in the Ro32. Under the point system, FruitDealer lost out on a lot of points by not getting further in the tournament. The Bayesian model takes into account that MarineKing is friggen good, so losing to him isn't really that big of an upset. The point system also gives no way to quantify uncertainty about the players' skill.
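To make that concrete with the classic (non-Bayesian) Elo update that my system builds on (the ratings and K-factor here are made-up chess-style numbers, just for illustration):

  # Classic Elo update: losing to a strong opponent costs far less than
  # losing to a weak one. Ratings and K are made up for illustration.
  elo_update <- function(r_player, r_opp, outcome, K = 32) {
    expected <- 1 / (1 + 10^((r_opp - r_player) / 400))  # expected score
    r_player + K * (outcome - expected)                  # outcome: 1 = win, 0 = loss
  }
  elo_update(2400, 2450, 0)  # lose to a stronger player: drop of about 14
  elo_update(2400, 2100, 0)  # lose to a much weaker player: drop of about 27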
Realistically, either approach works fairly well, the Bayesian approach is just more dynamic in the way it ranks.
|
Can't access the Google spreadsheet in the OP, just to let you know.
Keep up the good work.
|
However, if it is true that FruitDealer and MarineKing are both "good" players, then under a "good" seeding system they would not be meeting in the Ro32. For example, no tennis tournament would ever allow the possibility of Roger Federer and Rafael Nadal meeting in the 2nd round, as both are considered good players, and thus the points-per-round system makes sense for tennis. Unfortunately, SC2 is not well developed enough yet to make clean seedings. In this respect, your Bayesian approach adds value in these early stages of the game, and I would be very curious to see the details of your analysis.
On a slightly different tack, on the State of the Game podcast a couple of weeks ago there was a big discussion about MLG's extended-series tiebreaker. The crux of the argument centered on whether different rounds of a tournament should be considered different; that is, does defeating someone in the Ro32 mean something different than defeating someone in the Ro8? In my opinion, yes, and thus the ending point of a tournament is important to incorporate. For instance, if MarineKing beat two great players in the Ro64 and Ro32, that is good, but not as good as beating them in the Ro16 and Ro8. I believe proper seeding plus points for final tournament placement takes this situation into account.
|
On December 08 2010 19:10 Mip wrote: Time effects are something I definitely have in mind for future use. I mean, it's pretty clear that a year from now, no one will care what happened in GSL Season 1 as far as predictions are concerned.
Here is a paper for accounting for time effects: "Whole-History Rating: A Bayesian Rating System for Players of Time-Varying Strength" http://remi.coulom.free.fr/WHR/
I thought it sounded like a cool concept and I'd like to see it used. On a different game server I play on (KGS, a Go server) they use a Bayesian system, but to account for time variation they use a simple weight decay, and it has some strange side effects.
|
To Solon TLG: About the State of the Game podcast, my thought is that the only thing that matters is the skill of the players involved. Whether MarineKing beats FruitDealer depends only on how skillful they are; I don't think it matters which round they are in. I don't see that being in the round of 32 vs. the finals will make a difference. Since they are both comfortable under pressure, I think it's reasonable to assume that the round affects them both in the same way. If that is not true, who is favored? If neither is favored, we should be able to treat the data as if the round doesn't matter.
To KillerDucky: Thanks for the article. My thought for a time parameter was to have some measure of the time passed and have the likelihood of past events shrink toward 50/50 as the data becomes older, so that past significant upsets shrink toward non-significance as time passes.
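As a rough sketch of what I have in mind (the exponential form and the decay rate lambda are placeholders I haven't settled on):

  # Shrink each game's win probability toward 50/50 as it ages, so old
  # upsets fade toward non-significance. Decay form and lambda are placeholders.
  decayed_loglik <- function(skill_w, skill_l, age_days, lambda = 0.01) {
    p <- exp(skill_w) / (exp(skill_w) + exp(skill_l))  # model win probability
    w <- exp(-lambda * age_days)                        # 1 for new games, -> 0 for old
    p_shrunk <- 0.5 + w * (p - 0.5)                     # old games look like coin flips
    log(p_shrunk)                                       # this game's log-likelihood term
  }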
|
Is this somewhat like TrueSkill on the Xbox?
|
@beat farm: They are both Bayesian approaches... so probably.
|
I think the predictions could be made more accurate if you take into account each player's strength in each matchup. The problem is that it may require more games to become accurate (as each matchup is only one third of the games, and for a Random player even worse). Still, I think once enough data is available, it would be more accurate to give the players separate rankings for each match-up.
|
On December 09 2010 02:31 Cel.erity wrote:

On December 08 2010 22:24 kazansky wrote:

On December 08 2010 22:20 aka_star wrote: I don't honestly know how you can model the probability of the players; it just blows my mind how complex putting a value on a player could be. It says nothing about a winning strategy or the countless variables of real-world events, but it seems to me that this system focuses more on averaging out past performance, which, as with following a market or a horse over its career, is no guarantee, and is even more sporadic the less data there is. I suppose it's a better guide than nothing, but I'm convinced this method would in itself require a probability of being right.

You would be surprised. There are several professional booking companies in the UK that have specialized in betting on football matches. Their model incorporates only past match data and hits almost 90% for win tendencies, which is unbelievably high for football. The model is secret for obvious reasons, but German journalist Christoph Biermann wrote a book about it.

The difference between football and Starcraft is variance, especially in SC2. Football teams have a lot of players, so the impact of one player having a bad/good day is relatively low compared to a team of one. If the solo player has a bad/good day, it skews the results immensely. Also, football teams have faced each other many times in the professional arena, so there is a lot more data to draw upon. SC2 is also a new game with evolving strategies, and nobody is at the top level yet, making the data even more inconsistent. Finally, I don't believe the formula properly accounts for player skill difference. In SC2, a player who is just slightly better than another will almost never lose on a favorable map, even though the data says it's 60/40. I think it's a good effort, but I don't believe there is any formula that can rate SC2 players right now with any degree of accuracy. This would be better applied to BW, where the data, players, and maps are more consistent.
I just wanted to point out that it is possible to build very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-) I totally agree with you that for it to be at all accurate, only a well-researched game with at least 5 years of history could fit something like that.
|
@Sandermatt Yeah, I would like to add something like that. It would take more data (which is already a problem). The way I would do it is to have a skill rating for each player and then an adjustment for the opponent's race. It would be very easy to add if I had more data.
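As a rough sketch of what I mean (the additive form and parameter names are placeholders, not a fitted model):

  # Each player's effective skill = base skill + an adjustment for the
  # race they're facing. Form and names are placeholders for illustration.
  win_prob <- function(skill1, skill2, adj1_vs_race2 = 0, adj2_vs_race1 = 0) {
    s1 <- skill1 + adj1_vs_race2
    s2 <- skill2 + adj2_vs_race1
    exp(s1) / (exp(s1) + exp(s2))  # same formula as the base model
  }
  win_prob(1.2, 1.0)              # base model, no race effects
  win_prob(1.2, 1.0, -0.3, 0.1)   # player 1 struggles against this race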
|
Where are you going to school for statistics?
|
On December 09 2010 07:41 kazansky wrote: I just wanted to point out that it is possible to build very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-) I totally agree with you that for it to be at all accurate, only a well-researched game with at least 5 years of history could fit something like that.
I think you guys are kind of off base; I already have a model that can rate Starcraft players with a decent amount of accuracy from only 400-something games. Is it perfect? No. But it has a lot of strength and will learn as it gets more data.
Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.
As for this model and map imbalance, this model averages over all maps. Its primary function is to rate the players objectively based on their performance, which I believe it does quite nicely. If we want to optimize it for prediction (and I believe there is enough data out there that we could start), we need to pull together more data, which I would like help with if anyone out there is good at parsing webpages.
Like I said in the original post, my data look like this:

  [2343,] "MC"    "MarineKing"
  [2344,] "MC"    "MarineKing"
  [2345,] "MC"    "MarineKing"
  [2346,] "Jinro" "Choya"
  [2347,] "Jinro" "Choya"
  [2348,] "Jinro" "Choya"
  [2349,] "Choya" "Jinro"
  [2350,] "Choya" "Jinro"
It actually starts out like this:

  MarineKing 1 MC 3
  Jinro 3 Choya 2

and then I convert it.
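The conversion itself is simple; roughly like this, assuming winner-first column order as in the rows above:

  # Expand a series score like "Jinro 3 Choya 2" into one winner/loser
  # row per game. Winner-first column order is assumed, as above.
  expand_series <- function(p1, s1, p2, s2) {
    rbind(
      matrix(c(rep(p1, s1), rep(p2, s1)), ncol = 2),  # the s1 games p1 won
      matrix(c(rep(p2, s2), rep(p1, s2)), ncol = 2)   # the s2 games p2 won
    )
  }
  expand_series("Jinro", 3, "Choya", 2)  # 3 "Jinro Choya" rows, 2 "Choya Jinro" rows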
If instead I could get my data to look more like this:

  MC Protoss MarineKing Terran Lost Temple
  MC Protoss MarineKing Terran Blistering Sands
  MC Protoss MarineKing Terran Jungle Basin
I could then start adjusting for those kinds of things. There should already be enough data to start something like this. So long as I have more data than the effective number of parameters that I'm trying to estimate, I can do it no problem.
|
@PROJECTILE I'm going to school at BYU in Provo, UT. They have a pretty good statistics program, but no PhD option; they stop at Master's degrees.
|
On December 09 2010 15:58 Mip wrote: Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.
As I said, yes, statistical models of this kind are able to give very high prediction accuracy. I didn't say yours will yet; I think if you keep the work up, yours will in about 5 years, or let's say 2 years. That is at least what I meant. To provide high accuracy for the complete outcome of a tournament, and very reliable predictions, you need a huge amount of data to weight, for one thing.
Why I said 5 years: if you knew every result of the SC2 players right now to base your assumptions on, or every result of the BW players, you would very likely choose the Brood War players to predict, because the game is far more figured out, so your variation is narrowed down by far, since a new cheese doesn't appear every week.
You can start making predictions whenever you want, but if you want to hit 95%+ over a whole GSL (every game) based purely on a statistical model, I think you will have to rely on 5 years of tactical development and 2 years of data :-)
I didn't want to spoil your fun; I love your work and totally appreciate it.
|
Interesting and fun project, though, as you said, you don't have enough data to actually make that strong of predictions. As others have said, you probably need to include a time factor as well.
|
@Kazansky Small variance and prediction accuracy are not the same thing in this kind of model.
Each player has an unmeasurable skill parameter that we can get glimpses of when they win or lose. So the more wins and losses I observe, the more I can nail down exactly what a player's skill parameter is. Over time, I can hope to achieve fairly high precision on many players' skill levels.
But a player's skill is only the parameter that feeds my function for the probability that a player will win, which from the first post is exp(skill1)/(exp(skill1)+exp(skill2)). If in 5 years I have 2 players of the same skill, then according to this formula, the probability of either winning is 50/50, which makes sense for players of identical skill. So right now I might say there's a 30-70% chance player 1 wins (centered at 50-50, but I'm uncertain about exactly what it is); 5 years from now I'll be able to say there's a 49-51% chance player 1 wins (still 50-50, but now I'm certain it's about 50-50). I'll be able to narrow in only on the probability that a specific player beats another, not on the actual outcome.
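You can check this numerically; here's a quick simulation, with normal posteriors and sd values picked only to roughly match the intervals above:

  # Posterior uncertainty in the skills widens the interval on the win
  # probability itself. The normal posteriors and sd values are made up.
  set.seed(1)
  sim_interval <- function(sd) {
    s1 <- rnorm(1e5, 0, sd)  # posterior draws for skill1
    s2 <- rnorm(1e5, 0, sd)  # posterior draws for skill2
    p  <- exp(s1) / (exp(s1) + exp(s2))
    quantile(p, c(0.025, 0.975))
  }
  sim_interval(0.30)   # now: roughly 30%-70%
  sim_interval(0.015)  # after years of data: roughly 49%-51%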
What you are saying is that in 5 years, there will be <5% upsets and >95% perfect predictability. According to any paired-comparison model, that would imply that all players' skill levels are tremendously far apart, which is not likely to be the case. It would mean that no rivalry could exist, no excitement in wondering who will come out on top in any match-up, because 95%+ of the time you'd know the victor in advance.
I don't understand how one could ever have high predictability between evenly matched opponents. I think that would, by definition, make them not evenly matched.
|