GSL Season 3 predictions using statistics - Page 3

RoboBob
Profile Blog Joined September 2010
United States798 Posts
December 08 2010 18:52 GMT
#41
As an economist I understand all too well how frustrating it can be to have assumptions block the practical implementation of your model, so I definitely feel you there.

I don't know if you've thought of this yet, but if you want to test the validity of your model, why not use SC1 data instead of SC2? Yes, the differences between SC1 and SC2 will introduce more uncertainty into your model; however, the wealth of data points you'll gain from it might be worth it. Just a thought.
Mip
Profile Joined June 2010
United States63 Posts
Last Edited: 2010-12-08 20:39:39
December 08 2010 19:35 GMT
#42
My problem with a point system is that it doesn't take into account the skill of the players you play against. The base of the system I used is the same as the Elo system used for chess ranking, except that I took a Bayesian approach.

The rankings that I posted are the posterior means of the skill parameters minus one standard deviation. It's somewhat arbitrary. If I made it two standard deviations, you'd see players like LiveForever drop down a lot. For people like FruitDealer and NesTea, there are a lot more games used to estimate their skill, so if you penalize uncertainty, players who have played a lot of GSL games will float to the top.

Here's a Google spreadsheet of the full ranking results: GSL Ranking Results
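For readers curious what the "posterior mean minus one standard deviation" ranking looks like in practice, here is a minimal sketch in Python, assuming you already have posterior draws of each player's skill from a Bayesian fit. The player names, the numbers, and the conservative_rank helper are illustrative only, not Mip's actual code; with k=2 the more uncertain players (like LiveForever) drop further, exactly as described above.

```python
import numpy as np

# Hypothetical posterior draws of each player's skill parameter, e.g. from an
# MCMC fit of a Bayesian Elo / Bradley-Terry style model (values made up here).
posterior_draws = {
    "FruitDealer": np.random.normal(1.2, 0.3, size=5000),   # many games, tight posterior
    "NesTea":      np.random.normal(1.1, 0.3, size=5000),
    "LiveForever": np.random.normal(1.4, 0.9, size=5000),   # few games, wide posterior
}

def conservative_rank(draws, k=1.0):
    """Rank players by posterior mean minus k standard deviations, so that
    players whose skill is still very uncertain get pushed down the list."""
    scores = {p: d.mean() - k * d.std() for p, d in draws.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for player, score in conservative_rank(posterior_draws, k=1.0):
    print(f"{player:12s} {score:+.2f}")
```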
Mip
Profile Joined June 2010
United States63 Posts
December 08 2010 19:40 GMT
#43
Updated rankings are in the original post; see if you find them more agreeable. I think most will.
SolonTLG
Profile Joined November 2010
United States299 Posts
December 08 2010 19:57 GMT
#44
I totally agree with you that a point system with random tournament seeding does not tell you very much. However, large elimination tournaments with huge skill differences between players, like GSL and MLG, seed their tournament brackets. (Note: just like in tennis, these seedings can be independent of the ATP-style rankings and it won't change the story.) These seedings will get better over time and thus reduce the luck aspect. In the end, good players will get farther in tournaments more often, and thus accumulate more points in a point system. Thus, a point system with NON-random tournament seeding should be a good approximation of skill, given the sparseness of games compared to all the possible 1v1 match-ups.

Again, I agree that a point system cannot account for everything, and a richer model would be preferable. All I am saying is that a point system can be informative.

The Law Giver
Mip
Profile Joined June 2010
United States63 Posts
December 08 2010 20:24 GMT
#45
Oh, I totally agree. I think seeding is always valuable so as to maximize the opportunity to gather data from the players. I see a point system as an approximation to a dynamic Bayesian system, however. It's not that it doesn't work or that it's not valuable, but the Bayesian approach just lets the data inform the rankings entirely, whereas a point system is only informed by the round reached.

For example, in GSL Season 2, FruitDealer lost to MarineKing in the Ro32. By the point system, FruitDealer lost out on a lot of points by not getting further in the tournament. The Bayesian model takes into account that MarineKing is friggen good, so losing to him isn't really that big of an upset. The point system also gives no way to quantify uncertainty about a player's skill.

Realistically, either approach works fairly well, the Bayesian approach is just more dynamic in the way it ranks.
CrAzEdBaDgEr
Profile Joined August 2010
Canada166 Posts
December 08 2010 20:30 GMT
#46
Can't access the Google spreadsheet in the OP, just to let you know.

Keep up the good work.
SolonTLG
Profile Joined November 2010
United States299 Posts
Last Edited: 2010-12-08 21:02:41
December 08 2010 20:47 GMT
#47
However, if it is true that FruitDealer and MarineKing are both "good" players, then under a "good" seeding system they would not be meeting in the Ro32. For example, no tennis tournament would ever allow the possibility of Roger Federer and Rafael Nadal meeting in the 2nd round, as both are considered good players, and thus the point system per round makes sense for tennis. Unfortunately, SC2 is not well developed enough yet to make clean seedings. In this respect, your Bayesian approach adds value in these early stages of the game, and I would be very curious to see the details of your analysis.

On a slightly different tack, on the State of the Game podcast a couple of weeks ago there was a big discussion about MLG's extended series tiebreaker. The crux of the argument centered around whether different rounds of a tournament should be considered different; that is, does defeating someone in the Ro32 mean something different than defeating someone in the Ro8? In my opinion, yes, and thus the ending point of a tournament is important to incorporate. For instance, if MarineKing beat two great players in the Ro64 and Ro32, that is good, but not as good as beating them in the Ro16 and Ro8. I believe proper seeding and points for tournament ending spot take this situation into account.
The Law Giver
KillerDucky
Profile Blog Joined July 2010
United States498 Posts
December 08 2010 21:06 GMT
#48
On December 08 2010 19:10 Mip wrote:
Time effects are something I definitely have in mind for future use. I mean, it's pretty clear that a year from now, no one will care what happened in GSL Season 1 as far as predictions are concerned.


Here is a paper for accounting for time effects:
"Whole-History Rating: A Bayesian Rating System for Players of Time-Varying Strength"
http://remi.coulom.free.fr/WHR/

I thought it sounded like a cool concept and I'd like to see it used. On a different game server I play on (KGS, a Go server) they use a Bayesian system, but to account for time variation they use a simple weight decay, and it has some strange side effects.
MarineKingPrime Forever!
Mip
Profile Joined June 2010
United States63 Posts
December 08 2010 21:21 GMT
#49
To SolonTLG: About the State of the Game podcast, my thought is that the only thing that matters is the skill of the players involved. Whether MarineKing beats FruitDealer depends only on how skillful they are. I don't think it matters which round they are in; I don't see that being in the round of 32 vs the finals will make a difference. Since they are both comfortable under pressure, I think it's reasonable to assume that the round affects them both in the same way. If that is not true, who is favored? So if neither is favored, we should be able to treat the data as if the round doesn't matter.

To KillerDucky: Thanks for the article. My thought for a time parameter was to have some measurement of the time passed and have the likelihood of past events shrink toward 50/50 as the data becomes older, so that past significant upsets shrink toward non-significance as time passes.
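A minimal sketch of the kind of time down-weighting described here, assuming the Bradley-Terry-style win probability from the original post; the exponential decay and the half_life_days value are illustrative choices, not something specified in the thread.

```python
import math

def win_prob(skill_winner, skill_loser):
    # Win probability from the original post: exp(s1) / (exp(s1) + exp(s2))
    return math.exp(skill_winner) / (math.exp(skill_winner) + math.exp(skill_loser))

def decayed_log_likelihood(games, skills, today, half_life_days=180.0):
    """Sum of per-game log-likelihoods, each weighted by a factor that decays
    with the game's age. As the weight goes to zero an old game contributes
    almost nothing, so an old upset no longer pulls the skill estimates."""
    total = 0.0
    for winner, loser, played_on in games:          # played_on measured in days
        age = today - played_on
        weight = 0.5 ** (age / half_life_days)      # 1.0 for today's games, -> 0 as they age
        total += weight * math.log(win_prob(skills[winner], skills[loser]))
    return total

# Hypothetical usage: maximize this over `skills` instead of the unweighted likelihood.
skills = {"FruitDealer": 1.2, "MarineKing": 1.1}
games = [("MarineKing", "FruitDealer", 100), ("FruitDealer", "MarineKing", 460)]
print(decayed_log_likelihood(games, skills, today=500))
```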
beat farm
Profile Joined October 2010
United States478 Posts
December 08 2010 22:07 GMT
#50
Is this somewhat like TrueSkill on the Xbox?
Mip
Profile Joined June 2010
United States63 Posts
December 08 2010 22:23 GMT
#51
@beat farm:
They are both Bayesian approaches... so probably.
Sandermatt
Profile Joined December 2010
Switzerland1365 Posts
December 08 2010 22:27 GMT
#52
I think the predictions could be made more accurate if you take into account the player's strength in each matchup. The problem is that it may require more games to become accurate (as each matchup is only one third of the games, and for a random player it's even worse).
Still, I think that once enough data is available it would be more accurate to give the players separate rankings for each match-up.
kazansky
Profile Blog Joined February 2010
Germany931 Posts
December 08 2010 22:41 GMT
#53
On December 09 2010 02:31 Cel.erity wrote:
On December 08 2010 22:24 kazansky wrote:
On December 08 2010 22:20 aka_star wrote:
I don't honestly know how you can model the probability of the players; it just blows my mind how complex putting a value on a player could be. It would say nothing about a winning strategy or the countless variables of real-day events, but it seems to me that this system focuses more on averaging out past performance, which, as with following a market or a horse over its career, is no guarantee, and it gets even more sporadic the less data there is. I suppose it's a better guide than anything, but I'm convinced this method would in itself require a probability of being right.


You would be surprised. There are several professional bookmaking companies in the UK that have specialized in betting on football matches.
Their model only incorporates past match data and hits almost 90% for win tendencies, which is unbelievably high for football.
The model is secret for obvious reasons, but German journalist Christoph Biermann wrote a book about it.


The difference between football and Starcraft is variance, especially in SC2. Football teams have a lot of players, so the impact of one player having a bad/good day is relatively low compared to a team of one. If the solo player has a bad/good day, it skews the results immensely. Also, football teams have faced each other many times in the professional arena, so there is a lot more data to draw upon. SC2 is also a new game with evolving strategies and nobody is at the top level yet, making the data even more inconsistent. Finally, I don't believe the formula accounts properly for player skill differences. In SC2, a player who is just slightly better than another will almost never lose on a favorable map, even though the data says it's 60/40.

I think it's a good effort, but I don't believe there is any formula that can rate SC2 players right now with any degree of accuracy. This would be better applied to BW where the data, players, and maps are more consistent.



I just wanted to point out that it is possible to build up very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that, for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Mip
Profile Joined June 2010
United States63 Posts
December 08 2010 23:20 GMT
#54
@Sandermatt Yeah, I would like to add something like that. It would take more data (which is already a problem). The way I would do it is have a skill rating for each player, and then an adjustment for the opponent's race. It would be very easy to add if I had more data.
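A minimal sketch of the "one skill rating per player, plus an adjustment for the opponent's race" idea, assuming the same exp-ratio win probability as the original post; all parameter names and values below are made up for illustration.

```python
import math

# Illustrative parameters only: one overall skill per player, plus a small
# offset for how that player performs against each opposing race.
skill = {"MC": 1.3, "MarineKing": 1.1}
race_adj = {("MC", "Terran"): 0.15, ("MarineKing", "Protoss"): -0.05}

def win_prob(p1, p2, race1, race2):
    """P(p1 beats p2) when p1 plays race1 and p2 plays race2."""
    s1 = skill[p1] + race_adj.get((p1, race2), 0.0)   # p1's skill vs p2's race
    s2 = skill[p2] + race_adj.get((p2, race1), 0.0)   # p2's skill vs p1's race
    return math.exp(s1) / (math.exp(s1) + math.exp(s2))

print(f"P(MC beats MarineKing): {win_prob('MC', 'MarineKing', 'Protoss', 'Terran'):.2f}")
```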
PROJECTILE
Profile Joined April 2010
United States226 Posts
December 09 2010 00:02 GMT
#55
Where are you going to school for statistics?
Mip
Profile Joined June 2010
United States63 Posts
December 09 2010 06:58 GMT
#56
On December 09 2010 07:41 kazansky wrote:
I just wanted to point out that it is possible to build up very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that, for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.


I think you guys are kind of off base; I already have a model that can rate Starcraft players with a decent amount of accuracy with only 400-something games. Is it perfect? No. But it has a lot of strength and will learn as it gets more data.

Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.

As for this model and map imbalance, this model averages over all maps. Its primary function is to rate the players objectively based on their performance, which I believe it does quite nicely. If you want to optimize this for prediction, which I believe there is enough data out there to start doing, we need to pull together more data, which I would like help with if there is anyone out there good at parsing webpages.

Like I said in the original post, my data look like this:
[2343,] "MC" "MarineKing"
[2344,] "MC" "MarineKing"
[2345,] "MC" "MarineKing"
[2346,] "Jinro" "Choya"
[2347,] "Jinro" "Choya"
[2348,] "Jinro" "Choya"
[2349,] "Choya" "Jinro"
[2350,] "Choya" "Jinro"

It actually starts out like this:

MarineKing 1
MC 3

Jinro 3
Choya 2

and then I convert it.

If instead I could get my data to look more like this:
MC Protoss MarineKing Terran Lost Temple
MC Protoss MarineKing Terran Blistering Sands
MC Protoss MarineKing Terran Jungle Basin

I could then start adjusting for those kinds of things. There should already be enough data to start something like this. So long as I have more data than the effective number of parameters that I'm trying to estimate, I can do it no problem.
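A minimal sketch of the series-score-to-game-rows conversion described above; the series_to_games helper and the tuples are illustrative, and the richer per-game format with races and maps would just mean carrying extra columns through the same expansion.

```python
def series_to_games(p1, score1, p2, score2):
    """Expand a series score (e.g. MarineKing 1, MC 3) into one
    (winner, loser) row per game, like the matrix shown above."""
    return [(p1, p2)] * score1 + [(p2, p1)] * score2

games = []
games += series_to_games("MarineKing", 1, "MC", 3)
games += series_to_games("Jinro", 3, "Choya", 2)

for i, (winner, loser) in enumerate(games, start=1):
    print(f'[{i},] "{winner}" "{loser}"')
```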
Mip
Profile Joined June 2010
United States63 Posts
December 09 2010 06:59 GMT
#57
@PROJECTILE I'm going to school at BYU in Provo, UT. They have a pretty good statistics program, but no PhD option; they stop at Master's degrees.
kazansky
Profile Blog Joined February 2010
Germany931 Posts
December 09 2010 07:36 GMT
#58
On December 09 2010 15:58 Mip wrote:
Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.


As I said, yes, statistical models of this kind are able to give very high prediction accuracy.
I didn't say yours will yet; I think if you keep the work up, yours will in about 5 years, or let's say 2 years. That is at least what I meant. To provide high accuracy for the complete outcome of a tournament, and very reliable predictions, you need a huge amount of data to weigh.

Why I said 5 years is this: if you knew every result of the SC2 players right now to base your assumptions on, or every result of the BW players, you would most likely choose the Brood War players to predict, because the game is far more figured out, so your variation is narrowed down by far; it's not like a new cheese appears every week.

You can start making predictions whenever you want, but if you want to hit 95%+ over a whole GSL (every game) just based on a statistical model, I think you will have to rely on 5 years of tactical development and 2 years of data :-)

I didn't want to spoil your fun, I love your work and totally appreciate it.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Darkstar_X
Profile Joined May 2010
United States197 Posts
December 09 2010 07:50 GMT
#59
Interesting and fun project, though, as you said, you don't have enough data yet to actually make very strong predictions. As others have said, you probably need to include a time factor as well.
Mip
Profile Joined June 2010
United States63 Posts
Last Edited: 2010-12-09 08:26:05
December 09 2010 08:16 GMT
#60
@Kazansky Small variance and prediction accuracy are not the same thing in this kind of model.

Each player has an unmeasurable skill parameter that we can get glimpses of when they win or lose. So the more wins and losses I observe, the more I can nail down exactly what a player's skill parameter is. Over time, I can hope to achieve fairly high precision on many players' skill levels.

But a player's skill is only the parameter that feeds the function that tells me the probability that a player will win, which from the first post is exp(skill1)/(exp(skill1)+exp(skill2)). If in 5 years I have 2 players of the same skill, then according to this formula the probability of either winning is 50/50, which makes sense for players of identical skill. So right now I might say, well, there's a 30-70% chance player 1 wins (centered at 50-50, but I'm uncertain about exactly what it is); 5 years from now I can say that there's a 49-51% chance player 1 wins (still 50-50, but I'm certain it's about 50-50 at this point). I'll be able to narrow in only on the probability that a specific player beats another, not on the actual outcome.
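A minimal sketch of the point being made here, using hypothetical posterior draws: for two players of identical true skill, more data narrows the interval on the win probability toward 50/50 without ever making the outcome itself predictable. The standard deviations below are invented purely to roughly reproduce the "30-70% now, 49-51% later" contrast.

```python
import numpy as np

def win_prob(s1, s2):
    # exp(skill1) / (exp(skill1) + exp(skill2)), as in the original post
    return np.exp(s1) / (np.exp(s1) + np.exp(s2))

rng = np.random.default_rng(0)

for label, sd in [("now, few games", 0.35), ("5 years later, many games", 0.02)]:
    # Two players with the same true skill, known only up to posterior sd.
    s1 = rng.normal(1.0, sd, size=10_000)
    s2 = rng.normal(1.0, sd, size=10_000)
    lo, hi = np.percentile(win_prob(s1, s2), [5, 95])
    print(f"{label}: 90% interval for P(player 1 wins) = [{lo:.2f}, {hi:.2f}]")
```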

What you are saying is that in 5 years there will be <5% upsets and >95% perfect predictability. According to any paired-comparison model, that would imply that all players' skill levels are tremendously far apart, which is not likely to be the case. It would also imply that no rivalry would exist, no excitement in wondering who will come out on top in any match-up, because 95%+ of the time you'd know the victor in advance.

I don't understand how one could ever have high predictability for evenly matched opponents. I think that would, by definition, make them not evenly matched.