|
On January 11 2013 08:06 RainmanMP wrote:
How is KT so low despite dominating Proleague?

On January 11 2013 08:29 TheBB wrote:
Remember that the team rank is not based on results in team leagues, but rather on simulations, which are based on the ratings of the individual players on the team. The question should then be: why are players on KT ranked so low, in spite of dominating in Proleague?

The obvious answer is probably that the Kespa players play fewer games than the ESF players do, so their ratings don't update as quickly. (This also explains why Baby is still ranked so high. He did great in MvP, and then he keeps playing just enough games to stay active, but not enough to really drop down quickly.) Another explanation could be that the KT players are doing well in Proleague but poorly in other leagues. (Certainly possible, considering the recent Up and Downs, for example. I'm not familiar enough with each of them to say.)

Feel free to investigate. You can see the matches that caused each rating adjustment by going to a player page, clicking "rating history" and then clicking the little arrow to the right of the entry you are interested in.

On January 11 2013 08:34 Hier wrote:
Why are rating update speed and number of games related?

On January 11 2013 08:41 TheBB wrote:
Please, read the FAQ. I wrote it for exactly these kinds of questions. Relevant section:

Then, the new rating is adjusted somewhat in the direction of the maximal likelihood rating found above. How much it's adjusted depends on how certain the original rating was, and how certain the maximal likelihood rating is (how consistent the results were). The adjustment will be biased towards whichever of these two is most certain.

Basically, the more games a player plays over a shorter time period, the more accurately we can pinpoint his or her current skill. More weight will then be given to the recent results, and less weight to the rating from the previous list. If a player plays fewer games, those games gauge his or her current skill very inaccurately, and so the system will "prefer" the relative certainty of the already established rating.

The uncertainty of a player's rating will grow over time if he or she doesn't play enough games. At the moment Baby's rating has an estimated standard deviation of about 93 points. After his MvP run, it was about 72. If it grows much larger, his rating will adjust quicker.

So the system does not reflect current player ratings, rather past ratings?
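To make the FAQ passage above concrete, here is a minimal sketch of a certainty-weighted blend of this kind. It is not Aligulac's actual code; the function name, the numbers, and the simple inverse-variance weighting are all assumptions for illustration.

```python
# A minimal sketch of the mechanism the FAQ describes, not Aligulac's actual
# code: the new rating is a blend of the old rating and the maximum-likelihood
# rating from the recent period, each weighted by its certainty (1 / sigma^2).
# Function names and all numbers are made up for illustration.

def blend(old_rating, old_sd, mle_rating, mle_sd):
    """Pull the established rating toward the rating the recent games suggest,
    favouring whichever of the two estimates is more certain."""
    w_old = 1.0 / old_sd ** 2
    w_mle = 1.0 / mle_sd ** 2
    rating = (w_old * old_rating + w_mle * mle_rating) / (w_old + w_mle)
    sd = (w_old + w_mle) ** -0.5
    return rating, sd

# Many recent games: the period estimate is precise and dominates.
print(blend(1500, 80, 1700, 40))    # rating lands near 1660

# Only a handful of games: the established rating barely moves.
print(blend(1500, 80, 1700, 200))   # rating lands near 1528
```

This is the sense in which a big batch of recent games (a small mle_sd) overrides the old list, while a few games barely nudge it.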
|
Aligulac makes me sad when I look at the UP race list. Actually, it makes me even more depressed to see which race is almost entirely missing from the OP race list.
|
According to this, Flash has the best TvT in the world... I do like the effort; I've been following these threads for a while now.
|
On January 11 2013 09:08 Hier wrote:
So the system does not reflect current player ratings, rather past ratings?

What am I supposed to say? We use games to estimate a player's skill, and if there aren't enough games, what then? I can't just pull numbers out of thin air.
If a player goes on a tear and establishes a very high and very accurate rating, then disappears and returns four months later to play four games with middling results, what would you say about his current skill? It seems reasonable to me to say that we're not very sure, but he's probably still a pretty good player, just maybe not as good as we originally thought.
The system tries as well as it can to estimate current skill. For some players, this is easy (they play a lot), and for others this is hard (they play little). This difficulty is reflected in the standard deviation, which is high for some and low for others. This standard deviation also directly influences how quickly ratings adjust.
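TheBB's point about the standard deviation can be sketched in the same spirit. This is an illustration only, not Aligulac's formula: the inflation constant c, the function names, and the specific numbers are assumptions; the point is just that a rating which has grown uncertain moves further on the same evidence.

```python
# Illustrative sketch only, not Aligulac's actual formula. inflate_sd, the
# constant c, and all numbers are assumptions made for this example.

def inflate_sd(sd, idle_periods, c=25.0):
    """Rating uncertainty grows for every list a player sits out."""
    return (sd ** 2 + c ** 2 * idle_periods) ** 0.5

def adjustment(old_rating, old_sd, evidence_rating, evidence_sd):
    """How far the rating moves toward what the new games suggest;
    the step grows as old_sd grows."""
    k = old_sd ** 2 / (old_sd ** 2 + evidence_sd ** 2)
    return k * (evidence_rating - old_rating)

sd_active = 72.0                      # right after a run of many games
sd_idle = inflate_sd(sd_active, 6)    # after several idle lists, roughly 95

# Same four middling games (a noisy estimate, sd ~150), different prior certainty:
print(adjustment(1800, sd_active, 1600, 150))  # small step, about -37
print(adjustment(1800, sd_idle, 1600, 150))    # bigger step, about -57
```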
|
What's Brown been up to recently?
|
Also, Bomber's TvP rating decreased despite him winning 3-0 against Protoss in GSL; in fact, he has only 3 map losses in his last 21 against Protoss...
|
On January 11 2013 09:15 TheBB wrote:
What am I supposed to say? We use games to estimate a player's skill, and if there aren't enough games, what then? I can't just pull numbers out of thin air.

If a player goes on a tear and establishes a very high and very accurate rating, then disappears and returns four months later to play four games with middling results, what would you say about his current skill? It seems reasonable to me to say that we're not very sure, but he's probably still a pretty good player, just maybe not as good as we originally thought.

The system tries as well as it can to estimate current skill. For some players, this is easy (they play a lot), and for others this is hard (they play little). This difficulty is reflected in the standard deviation, which is high for some and low for others. This standard deviation also directly influences how quickly ratings adjust.

But why does it put less weight on recent games, even if they are less frequent than before? How does the system handle players with different, but steady, game frequencies when updating their ratings?
|
8 of the top 10 foreigners are Zerg. What a surprise.
|
On January 11 2013 09:23 Hier wrote:
But why does it put less weight on recent games, even if they are less frequent than before? How does the system handle players with different, but steady, game frequencies when updating their ratings?
It doesn't; more recent games are weighted more heavily. To use a hypothetical example: if a player plays 100 games in one week and then one game per fortnight (just enough to stay in TheBB's rankings) for three months, you've got 6 games in the past three months and 100 in the week before that. And (not knowing how fast games decay in TheBB's algorithm) say the old games are each worth half as much as a recent one: that's still a total weighting of 50 against 6, and in reality the 6 recent games will have decayed slightly too.
EDIT: You could make a game's weighting decay faster, but then the rankings would become even more volatile.
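dainbramage's back-of-the-envelope arithmetic can be written out like this. The exponential decay and the 90-day half-life are assumptions (he explicitly says he doesn't know how fast games decay in TheBB's algorithm); the takeaway is only that 100 slightly-decayed games still outweigh a handful of fresh ones.

```python
# A rough sketch of the arithmetic above. The decay function and half-life
# are assumptions for illustration, not Aligulac's actual weighting.

def game_weight(age_in_days, half_life_days=90.0):
    """Assumed decay: a game loses half its weight every half_life_days."""
    return 0.5 ** (age_in_days / half_life_days)

# 100 games played about three months ago, then one game per fortnight since.
old_block = 100 * game_weight(100)                         # each old game worth ~0.46
recent_block = sum(game_weight(14 * i) for i in range(6))  # ages 0, 14, ..., 70 days

print(round(old_block, 1), round(recent_block, 1))         # roughly 46.3 vs 4.7
```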
|
On January 11 2013 09:23 Hier wrote:
But why does it put less weight on recent games, even if they are less frequent than before? How does the system handle players with different, but steady, game frequencies when updating their ratings?

It doesn't put less weight on recent games. They are weighted equally with all the others (actually a bit more, as dainbramage said); there just happen to be fewer of them.
If I have five gold doubloons in my left hand and fifty in the other, the five in the left hand will weigh less because there are fewer doubloons, not because each doubloon individually weighs less than those in the right hand.
|
Yay, thanks for the hard work =)
|
How can they weigh more when their weight is determined by a change in the certainty of a player's rating? That's why I was asking how a player's rating changes after a sudden swing in win rate, given the different certainties that come from different, but steady, game frequencies.
Also, I didn't really see anything on decay.
|
Interesting concept!
I don't agree with your judgements, but it's a fun read^^ Keep it up plz!
|
I agree with most of the Korean ones. The foreigner ones are surprising though.
|
I laughed so hard when I saw the OP and UP race list. Sad zealot fan club indeed.
|
truly an era of patch zergs
User was warned for this post
|
On January 11 2013 08:36 jinorazi wrote:
the term "patchzerg" is real? i thought it was just a whining term

Got another explanation for why patch 1.5 mysteriously coincided with the latter half of 2012 being a giant ZvZ fest?
It's real. Only those in deep denial will say otherwise.
|
The Hall of Fame list is so bad... Bomber being number 1 by so much doesn't make any sense to me at all. He's never even won a GSL. The foreigners and Puma are strange as well.
|