GSL Season 3 predictions using statistics - Page 3

RoboBob
Profile Blog Joined September 2010
United States 798 Posts
December 08 2010 18:52 GMT
#41
As an economist I understand all too well how frustrating it can be to have assumptions block the practical implementation of your model, so I definitely feel you there.

I don't know if you've thought of this yet, but if you want to test the validity of your model, why not use SC1 data instead of SC2? Yes, the differences between SC1 and SC2 will introduce more uncertainty into your model; however, the wealth of data points you'll gain from it might be worth it. Just a thought.
Mip
Profile Joined June 2010
United States 63 Posts
Last Edited: 2010-12-08 20:39:39
December 08 2010 19:35 GMT
#42
My problem with a point system is that it doesn't take into account the skill of the players you play against. The base of the system I used is the same as the Elo system that is used for chess ranking, except that I took a Bayesian approach.

The rankings that I posted are the posterior means of the skill parameters minus 1 standard deviation. That choice is somewhat arbitrary: if I made it 2 standard deviations, you'd see players like LiveForever drop down a lot. For players like FruitDealer and NesTea, there are a lot more games used to estimate their skill, so if you penalize uncertainty, players who have played a lot of GSL games will float to the top.

Here's a Google spreadsheet of the full ranking results: GSL Ranking Results
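
For concreteness, here is a minimal sketch (in Python, with made-up numbers) of what a "posterior mean minus one standard deviation" ranking looks like. The player names, the simulated posterior draws, and the conservative_score helper are illustrative assumptions, not the actual fitted model.

import numpy as np

# Hypothetical posterior draws of each player's skill parameter (in the real
# model these would come from the Bayesian fit, not from a random generator).
rng = np.random.default_rng(0)
posterior = {
    "FruitDealer": rng.normal(1.2, 0.15, 5000),  # many games -> narrow posterior
    "NesTea":      rng.normal(1.1, 0.18, 5000),
    "LiveForever": rng.normal(1.9, 0.60, 5000),  # few games -> wide posterior
}

def conservative_score(draws, k=1.0):
    # Posterior mean minus k standard deviations: uncertain ratings are penalized.
    return draws.mean() - k * draws.std()

for player in sorted(posterior, key=lambda p: conservative_score(posterior[p]), reverse=True):
    print(player, round(conservative_score(posterior[player]), 3))

Raising k from 1 to 2 drops a high-uncertainty player like LiveForever down the list, which is the effect described above.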
Mip
Profile Joined June 2010
United States 63 Posts
December 08 2010 19:40 GMT
#43
I've updated the rankings in the original post; see if you find them more agreeable. I think most will.
SolonTLG
Profile Joined November 2010
United States 299 Posts
December 08 2010 19:57 GMT
#44
I totally agree with you that a point system with random tournament seeding does not tell you very much. However, large elimination tournaments with huge skill differences between players, like GSL and MLG, seed their tournament brackets. (Note: just as in tennis, these seedings can be independent of the ATP-style rankings without changing the story.) These seedings will get better over time and thus reduce the luck aspect. In the end, good players will get farther in tournaments more often, and thus accumulate more points in a point system. Thus, a point system with NON-random tournament seeding should be a good approximation of skill, given the sparseness of games compared to all the possible 1v1 match-ups.

Again, I agree that a point system cannot account for everything, and a richer model would be preferable. All I am saying is that a point system can be informative.

The Law Giver
Mip
Profile Joined June 2010
United States 63 Posts
December 08 2010 20:24 GMT
#45
Oh, I totally agree. I think seeding is always valuable so as to maximize the opportunity to gather data from the players. I see a point system as an approximation to a dynamic Bayesian system, however. It's not that it doesn't work or that it's not valuable, but the Bayesian approach lets the data inform the rankings entirely, whereas a point system is only informed by the round reached.

For example, in GSL Season 2, FruitDealer lost to MarineKing in the Ro32. By the point system, FruitDealer lost out on a lot of points by not getting further in the tournament. The Bayesian model takes into account that MarineKing is friggen good, so losing to him isn't really that big of an upset. The point system also gives no way to quantify uncertainty about the players' skill.

Realistically, either approach works fairly well; the Bayesian approach is just more dynamic in the way it ranks.
CrAzEdBaDgEr
Profile Joined August 2010
Canada 166 Posts
December 08 2010 20:30 GMT
#46
Can't access the Google spreadsheet in the OP, just to let you know.

Keep up the good work.
SolonTLG
Profile Joined November 2010
United States 299 Posts
Last Edited: 2010-12-08 21:02:41
December 08 2010 20:47 GMT
#47
However, if it is true that FruitDealer and MarineKing are both "good" players, then under a "good" seeding system they would not be meeting in the Ro32. For example, no tennis tournament would ever allow the possibility of Roger Federer and Rafael Nadal meeting in the 2nd round, as both are considered good players, and thus the point-per-round system makes sense for tennis. Unfortunately, SC2 is not well developed enough yet to make clean seedings. In this respect, your Bayesian approach adds value in these early stages of the game, and I would be very curious to see the details of your analysis.

On a slightly different tack, on the State of the Game podcast a couple of weeks ago there was a big discussion about MLG's extended-series tiebreaker. The crux of the argument centered on whether different rounds of a tournament should be treated differently; that is, does defeating someone in the Ro32 mean something different than defeating someone in the Ro8? In my opinion, yes, and thus the point at which a player exits a tournament is important to incorporate. For instance, if MarineKing beat two great players in the Ro64 and Ro32, that is good, but not as good as beating them in the Ro16 and Ro8. I believe proper seeding plus points for the final tournament placing takes this situation into account.
The Law Giver
KillerDucky
Profile Blog Joined July 2010
United States 498 Posts
December 08 2010 21:06 GMT
#48
On December 08 2010 19:10 Mip wrote:
Time effects are something I definitely have in mind for future use. I mean, it's pretty clear that a year from now, no one will care what happened in GSL Season 1 as far as predictions are concerned.


Here is a paper on accounting for time effects:
"Whole-History Rating: A Bayesian Rating System for Players of Time-Varying Strength"
http://remi.coulom.free.fr/WHR/

I think it sounds like a cool concept and I'd like to see it used. On a different game server I play on (KGS, a Go server), they use a Bayesian system, but to account for time variation they use a simple weight decay, and it has some strange side effects.
MarineKingPrime Forever!
Mip
Profile Joined June 2010
United States 63 Posts
December 08 2010 21:21 GMT
#49
To SolonTLG: About the State of the Game podcast, my thought is that the only thing that matters is the skill of the players involved. Whether MarineKing beats FruitDealer depends only on how skillful they are; I don't think it matters which round they are in. I don't see that being in the round of 32 vs. the finals will make a difference. Since they are both comfortable under pressure, I think it's reasonable to assume that the round affects them both in the same way. If that is not true, who is favored? And if neither is favored, we should be able to treat the data as if the round doesn't matter.

To KillerDucky: Thanks for the article. My thought for a time parameter was to have some measure of the time passed and to have the likelihood of past events shrink toward 50/50 as the data becomes older, so that past significant upsets shrink toward non-significance as time passes.
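
To illustrate the idea (purely as a sketch, not the implemented model): one way to shrink old games toward 50/50 is to give each game a weight that decays with its age and to blend its likelihood contribution with a coin flip. The half_life value and the function names below are illustrative assumptions.

import math

def time_weight(days_old, half_life=180.0):
    # Weight in (0, 1]: recent games count fully, old games fade out.
    return 0.5 ** (days_old / half_life)

def win_probability(skill_winner, skill_loser):
    # Same exp(skill1)/(exp(skill1)+exp(skill2)) form as in the original post.
    return math.exp(skill_winner) / (math.exp(skill_winner) + math.exp(skill_loser))

def weighted_log_likelihood(games, skills):
    # games: list of (winner, loser, days_old); skills: dict of skill parameters.
    total = 0.0
    for winner, loser, days_old in games:
        w = time_weight(days_old)
        p = win_probability(skills[winner], skills[loser])
        # As w -> 0 the term tends to log(0.5), so an old game stops informing the fit.
        total += math.log(w * p + (1.0 - w) * 0.5)
    return total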
beat farm
Profile Joined October 2010
United States 478 Posts
December 08 2010 22:07 GMT
#50
Is this somewhat like TrueSkill on the Xbox?
Mip
Profile Joined June 2010
United States 63 Posts
December 08 2010 22:23 GMT
#51
@beat farm:
They are both Bayesian approaches... so probably.
Sandermatt
Profile Joined December 2010
Switzerland 1365 Posts
December 08 2010 22:27 GMT
#52
I think the predictions could be made more accurate if you take into account the players' strength in each matchup. The problem is that it may require more games to become accurate (as each matchup is only one third of a player's games, and even less for a Random player).
Still, I think that once enough data is available it would be more accurate to give the players separate rankings for each match-up.
kazansky
Profile Blog Joined February 2010
Germany 931 Posts
December 08 2010 22:41 GMT
#53
On December 09 2010 02:31 Cel.erity wrote:
On December 08 2010 22:24 kazansky wrote:
On December 08 2010 22:20 aka_star wrote:
I honestly don't know how you can model the probability of the players; it just blows my mind how complex putting a value on a player could be. It would say nothing about a winning strategy or the countless variables of real-world events, but it seems to me that this system focuses more on averaging out past performance, which, as with following a market or a horse over its career, is no guarantee, and becomes even more sporadic the less data there is. I suppose it's a better guide than anything else, but I'm convinced this method would itself require a probability of being right.


You would be surprised. There are several professional bookmaking companies in the UK that specialize in betting on football matches.
Their model incorporates only past match data and hits almost 90% on win tendencies, which is unbelievably high for football.
The model is secret for obvious reasons, but the German journalist Christoph Biermann wrote a book about it.


The difference between football and Starcraft is variance, especially in SC2. Football teams have a lot of players, so the impact of one player having a bad/good day is relatively low compared to a team of one. If the solo player has a bad/good day, it skews the results immensely. Also, football teams have faced each other many times in the professional arena, so there is a lot more data to draw upon. SC2 is also a new game with evolving strategies and nobody is at the top level yet, making the data even more inconsistent. Finally, I don't believe the formula accounts properly for player skill difference. In SC2, a player who is just slightly better than another will almost never lose on a favorable map, even though the data says it's 60/40.

I think it's a good effort, but I don't believe there is any formula that can rate SC2 players right now with any degree of accuracy. This would be better applied to BW where the data, players, and maps are more consistent.



I just wanted to point out that it is possible to build very good models on match histories alone, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that, for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Mip
Profile Joined June 2010
United States 63 Posts
December 08 2010 23:20 GMT
#54
@Sandermatt Yeah, I would like to add something like that. It would take more data (which is already a problem). The way I would do it is to have a skill rating for each player, and then an adjustment for the opponent's race. It would be very easy to add if I had more data.
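
As a sketch of how such an adjustment might enter the win-probability formula from the original post (the parameter names here are illustrative, not part of the posted model):

import math

def win_probability(skill_a, skill_b, adj_a_vs_race_b=0.0, adj_b_vs_race_a=0.0):
    # Same exp(skill1)/(exp(skill1)+exp(skill2)) form as the original post, with an
    # additive per-player adjustment for the opponent's race (e.g. A's vs-Zerg bonus).
    a = skill_a + adj_a_vs_race_b
    b = skill_b + adj_b_vs_race_a
    return math.exp(a) / (math.exp(a) + math.exp(b))

Each player would then need a base skill plus a small adjustment per opposing race, which is why noticeably more data is required to estimate it.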
PROJECTILE
Profile Joined April 2010
United States 226 Posts
December 09 2010 00:02 GMT
#55
where are you going to school for statistics?
Mip
Profile Joined June 2010
United States 63 Posts
December 09 2010 06:58 GMT
#56
On December 09 2010 07:41 kazansky wrote:
I just wanted to point out that it is possible to build very good models on match histories alone, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that, for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.


I think you guys are kind of off base; I already have a model that can rate StarCraft players with a decent amount of accuracy with only 400-something games. Is it perfect? No. But it has a lot of strength and will learn as it gets more data.

Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.

As for this model and map imbalance, this model averages over all maps. Its primary function is to rate the players objectively based on their performance, which I believe it does quite nicely. If you want to optimize it for prediction, and I believe there is enough data out there that we could start, we need to pull together more data, which I would like help with if anyone out there is good at parsing webpages.

Like I said in the original post, my data look like this:
[2343,] "MC" "MarineKing"
[2344,] "MC" "MarineKing"
[2345,] "MC" "MarineKing"
[2346,] "Jinro" "Choya"
[2347,] "Jinro" "Choya"
[2348,] "Jinro" "Choya"
[2349,] "Choya" "Jinro"
[2350,] "Choya" "Jinro"

It actually starts out like this:

MarineKing 1
MC 3

Jinro 3
Choya 2

and then I convert it.

If instead I could get my data to look more like this:
MC Protoss MarineKing Terran Lost Temple
MC Protoss MarineKing Terran Blistering Sands
MC Protoss MarineKing Terran Jungle Basin

I could then start adjusting for those kinds of things. There should already be enough data to start something like this. So long as I have more data than the effective number of parameters that I'm trying to estimate, I can do it no problem.
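
For anyone who wants to help with the data gathering, here is a rough sketch of the series-score-to-game-rows conversion described above; the series_to_games name and the exact input format are illustrative assumptions based on the example in this post.

def series_to_games(player1, score1, player2, score2):
    # Expand a series score (e.g. MC 3, MarineKing 1) into one (winner, loser) row per game.
    return [(player1, player2)] * score1 + [(player2, player1)] * score2

rows = series_to_games("MC", 3, "MarineKing", 1) + series_to_games("Jinro", 3, "Choya", 2)
for winner, loser in rows:
    print(winner, loser)   # MC MarineKing (x3), Jinro Choya (x3), Choya Jinro (x2)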
Mip
Profile Joined June 2010
United States 63 Posts
December 09 2010 06:59 GMT
#57
@PROJECTILE I'm going to school at BYU in Provo, UT. They have a pretty good statistics program, but no PhD option; they stop at master's degrees.
kazansky
Profile Blog Joined February 2010
Germany 931 Posts
December 09 2010 07:36 GMT
#58
On December 09 2010 15:58 Mip wrote:
Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.


As I said, yes, statistical models of this kind are able to give very high prediction accuracy.
I didn't say yours will yet, and I think if you keep the work up, yours will in about 5 years, or let's say 2 years; that is at least what I meant. To provide high accuracy for the complete outcome of a tournament, and very reliable predictions, you need a huge amount of data to weigh.

Why I said 5 years: if you could base your assumptions on every result of the SC2 players right now, or on every result of the BW players, you would very likely choose the Brood War players to predict, because the game is far more figured out, so your variation is narrowed down by far; a new cheese doesn't appear every week.

You can start making predictions whenever you want, but if you want to hit 95%+ over an entire GSL (every game) based purely on a statistical model, I think you will have to rely on 5 years of tactical development and 2 years of data :-)

I didn't want to spoil your fun, I love your work and totally appreciate it.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Darkstar_X
Profile Joined May 2010
United States 197 Posts
December 09 2010 07:50 GMT
#59
Interesting and fun project, though, as you said, you don't have enough data yet to make particularly strong predictions. As others have said, you probably need to include a time factor as well.
Mip
Profile Joined June 2010
United States 63 Posts
Last Edited: 2010-12-09 08:26:05
December 09 2010 08:16 GMT
#60
@Kazansky Small variance and prediction accuracy are not the same thing in this kind of model.

Each player has an unmeasurable skill parameter that we can get glimpses of when they win or lose. So the more wins and losses I observe, the more I can nail down exactly what a player's skill parameter is. Over time, I can hope to achieve fairly high precision on many players' skill levels.

But a player's skill is only the parameter that feeds the function that tells me the probability that a player will win, which from the first post is exp(skill1)/(exp(skill1)+exp(skill2)). If in 5 years I have 2 players of the same skill, then according to this formula the probability of either winning is 50/50, which makes sense for players of identical skill. So right now I might say there's a 30-70% chance player 1 wins (centered at 50-50, but I'm uncertain about exactly where it is), whereas 5 years from now I can say there's a 49-51% chance player 1 wins (still 50-50, but now I'm certain it's about 50-50). I'll only be able to narrow in on the probability that a specific player beats another, not on the actual outcome.

What you are saying is that in 5 years there will be fewer than 5% upsets and more than 95% predictability. According to any paired-comparison model, that would imply that all players' skill levels are tremendously far apart, which is not likely to be the case. It would also imply that no rivalry would exist and no excitement in wondering who will come out on top in any match-up, because 95%+ of the time you'd know the victor in advance.

I don't understand how one could ever have high predictability for evenly matched opponents. I think that would, by definition, make them not evenly matched.
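
To make the 30-70% vs. 49-51% contrast concrete: given posterior draws of two skill parameters, the implied win probability exp(skill1)/(exp(skill1)+exp(skill2)) has its own posterior interval, and for equally skilled players more data narrows that interval around 50/50 without moving it. A small illustration with made-up numbers, not the fitted model:

import numpy as np

def win_probability(s1, s2):
    # exp(skill1)/(exp(skill1)+exp(skill2)), as in the original post.
    return np.exp(s1) / (np.exp(s1) + np.exp(s2))

rng = np.random.default_rng(1)
# Two equally skilled players: early on (wide posteriors) vs. years later (narrow posteriors).
for label, sd in [("few games", 0.40), ("many games", 0.02)]:
    s1 = rng.normal(1.0, sd, 10000)
    s2 = rng.normal(1.0, sd, 10000)
    lo, hi = np.percentile(win_probability(s1, s2), [2.5, 97.5])
    print(f"{label}: 95% interval for P(player 1 wins) is {lo:.2f} to {hi:.2f}")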