GSL Season 3 predictions using statistics - Page 3

RoboBob
Profile Blog Joined September 2010
United States, 798 Posts
December 08 2010 18:52 GMT
#41
As an economist I understand all too well how frustrating it can be to have assumptions block the practical implementation of your model, so I definitely feel you there.

I don't know if you've thought of this yet, but if you want to test the validity of your model, why not use SC1 data instead of SC2? Yes, the differences between SC1 and SC2 will introduce more uncertainty into your model; however, the wealth of data points you'll gain from it might be worth it. Just a thought.
Mip
Profile Joined June 2010
United States, 63 Posts
Last Edited: 2010-12-08 20:39:39
December 08 2010 19:35 GMT
#42
My problem with a point system is that it doesn't take into account the skill of the players you play against. The base of the system I used is the same as the Elo system used for chess ranking, except that I took a Bayesian approach.

The rankings that I posted are the posterior means of the skill parameters minus 1 standard deviation. It's somewhat arbitrary: if I made it 2 standard deviations, you'd see players like LiveForever drop down a lot. For people like FruitDealer and NesTea, there are a lot more games used to estimate their skill, so if you penalize uncertainty, players who have played a lot of GSL games will float to the top.

Here's a Google spreadsheet of the full ranking results: GSL Ranking Results
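A minimal sketch of the uncertainty-penalized ranking described above, assuming each player's skill posterior has already been summarized by a mean and a standard deviation; the names reference players from the thread, but the numbers are made up for illustration, not the actual GSL results:

# Hypothetical posterior summaries of each player's skill parameter
post <- data.frame(
  player = c("FruitDealer", "NesTea", "LiveForever"),
  mean   = c(1.20, 1.15, 1.35),
  sd     = c(0.25, 0.28, 0.70)   # fewer observed games -> larger posterior sd
)

k <- 1  # penalty in standard deviations; k = 2 pushes uncertain players down further
post$score <- post$mean - k * post$sd

# Players with many games and solid results float to the top of the ranking
post[order(post$score, decreasing = TRUE), ]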
Mip
Profile Joined June 2010
United States, 63 Posts
December 08 2010 19:40 GMT
#43
I've updated the rankings in the original post; see if you find them more agreeable. I think most will.
SolonTLG
Profile Joined November 2010
United States, 299 Posts
December 08 2010 19:57 GMT
#44
I totally agree with you that a point system with random tournament seeding does not tell you very much. However, large elimination tournaments with huge skill differences between players, like the GSL and MLG, seed their tournament brackets. (Note: just like in tennis, these seedings can be independent of the ATP-style rankings and it won't change the story.) These seedings will get better over time and thus reduce the luck aspect. In the end, good players will get farther in tournaments more often, and thus accumulate more points in a point system. Thus, a point system with NON-random tournament seeding should be a good approximation of skill, given the sparseness of games compared to all the possible 1v1 match-ups.

Again, I agree that a point system cannot account for everything, and a richer model would be preferable. All I am saying is that a point system can be informative.

The Law Giver
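A toy sketch of the points-per-round system being discussed above, assuming a hypothetical point table keyed by the furthest round a player reaches; the round values and player names are illustrative placeholders, not any real GSL or MLG scheme:

# Hypothetical points awarded for the furthest round reached
points_for_round <- c(Ro64 = 10, Ro32 = 25, Ro16 = 50, Ro8 = 100,
                      Ro4 = 200, Final = 350, Winner = 600)

# One row per player per tournament, recording the last round reached
results <- data.frame(
  player = c("PlayerA", "PlayerB", "PlayerA", "PlayerB"),
  round  = c("Ro8", "Ro16", "Final", "Ro4"),
  stringsAsFactors = FALSE
)

# With non-random seeding, stronger players reach late rounds more often,
# so their accumulated totals approximate skill over many tournaments
standings <- tapply(points_for_round[results$round], results$player, sum)
sort(standings, decreasing = TRUE)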
Mip
Profile Joined June 2010
United States, 63 Posts
December 08 2010 20:24 GMT
#45
Oh, I totally agree. I think seeding is always valuable so as to maximize the opportunity to gather data from the players. I see a point system as an approximation to a dynamic Bayesian system, however. It's not that it doesn't work or that it's not valuable, but the Bayesian approach just lets the data inform the rankings entirely, whereas a point system is only informed by the round reached.

For example, in GSL Season 2, FruitDealer lost to MarineKing in the Ro32. By the point system, FruitDealer lost out on a lot of points by not getting further in the tournament. The Bayesian model takes into account that MarineKing is friggen good, so losing to him isn't really that big of an upset. The point system also gives no way to quantify uncertainty about the players' skill.

Realistically, either approach works fairly well, the Bayesian approach is just more dynamic in the way it ranks.
CrAzEdBaDgEr
Profile Joined August 2010
Canada, 166 Posts
December 08 2010 20:30 GMT
#46
Can't access the Google spreadsheet in the OP, just to let you know.

Keep up the good work.
SolonTLG
Profile Joined November 2010
United States, 299 Posts
Last Edited: 2010-12-08 21:02:41
December 08 2010 20:47 GMT
#47
However, if it is true that FruitDealer and MarineKing are both "good" players, then under a "good" seeding system they would not be meeting in the Ro32. For example, no tennis tournament would ever have the possibility of Roger Federer and Rafael Nadal meeting in the 2nd round, as both are considered good players, and thus the point system per round makes sense for tennis. Unfortunately, SC2 is not well developed enough yet to make clean seedings. In this respect, your Bayesian approach adds value in these early stages of the game, and I would be very curious to see the details of your analysis.

On a slightly different tack, on the State of the Game podcast a couple of weeks ago there was a big discussion about MLG's extended series tiebreaker. The crux of the argument centered on whether different rounds of a tournament should be considered different; that is, does defeating someone in the Ro32 mean something different than defeating someone in the Ro8? In my opinion, yes, and thus the ending point of a tournament is important to incorporate. For instance, if MarineKing beat two great players in the Ro64 and Ro32, that is good, but not as good as beating them in the Ro16 and Ro8. I believe proper seeding and points for tournament ending spot take this situation into account.
The Law Giver
KillerDucky
Profile Blog Joined July 2010
United States, 498 Posts
December 08 2010 21:06 GMT
#48
On December 08 2010 19:10 Mip wrote:
Time effects are something I definitely have in mind for future use. I mean, it's pretty clear that a year from now, no one will care what happened in GSL Season 1 as far as predictions are concerned.


Here is a paper for accounting for time effects:
"Whole-History Rating: A Bayesian Rating System for Players of Time-Varying Strength"
http://remi.coulom.free.fr/WHR/

I thought it sounded like a cool concept and I'd like to see it used. On a different game server I play on (KGS, a Go server), they use a Bayesian rating system, but to account for time variation they use a simple weight decay, and it has some strange side effects.
MarineKingPrime Forever!
Mip
Profile Joined June 2010
United States, 63 Posts
December 08 2010 21:21 GMT
#49
To SolonTLG: About the State of the Game podcast, my thought is that the only thing that matters is the skill of the players involved. Whether MarineKing beats FruitDealer depends only on how skillful they are; I don't think it matters which round they are in. I don't see that being in the round of 32 vs. the finals will make a difference. Since they are both comfortable under pressure, I think it's reasonable to assume that the round affects them both in the same way. If that is not true, who is favored? If neither is favored, we should be able to treat the data as if the round doesn't matter.

To KillerDucky: Thanks for the article. My thought for a time parameter was to have some measurement of the time passed and have the likelihood of past events shrink toward 50/50 as the data becomes older, so past significant upsets shrink toward non-significance as time passes.
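A rough sketch of the time-decay idea described above, where an old game's contribution is shrunk toward a 50/50 coin flip as it ages; the win-probability form is the exp(skill1)/(exp(skill1)+exp(skill2)) from the original post, while decayed_prob, the half-life value, and the skill numbers are illustrative assumptions:

# Win probability from the original post's model
p_win <- function(skill1, skill2) exp(skill1) / (exp(skill1) + exp(skill2))

# Blend an old game's probability toward 0.5 according to how long ago it was played
decayed_prob <- function(skill1, skill2, days_ago, half_life = 180) {
  w <- 0.5 ^ (days_ago / half_life)          # weight 1 for today's games, near 0 for ancient ones
  w * p_win(skill1, skill2) + (1 - w) * 0.5  # shrink toward a coin flip
}

decayed_prob(1.2, 0.8, days_ago = 0)    # recent game: full-strength contribution
decayed_prob(1.2, 0.8, days_ago = 365)  # old upset: shrunk most of the way to 50/50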
beat farm
Profile Joined October 2010
United States, 478 Posts
December 08 2010 22:07 GMT
#50
Is this somewhat like TrueSkill on the Xbox?
Mip
Profile Joined June 2010
United States, 63 Posts
December 08 2010 22:23 GMT
#51
@beat farm:
They are both Bayesian approaches... so probably.
Sandermatt
Profile Joined December 2010
Switzerland, 1365 Posts
December 08 2010 22:27 GMT
#52
I think the predictions could be made more accurate if you take into account the players' strength in each matchup. The problem is that it may require more games to become accurate (as each matchup is only one third of the games, and for a random player even worse).
Still, I think once enough data is available it would be more accurate to give the players separate rankings for each matchup.
kazansky
Profile Blog Joined February 2010
Germany, 931 Posts
December 08 2010 22:41 GMT
#53
On December 09 2010 02:31 Cel.erity wrote:
On December 08 2010 22:24 kazansky wrote:
On December 08 2010 22:20 aka_star wrote:
I don't honestly know how you can model the probability of the players; it just blows my mind how complex putting a value on a player could be. It would say nothing about a winning strategy or the countless variables of real-world events, but it seems to me that this system focuses more on averaging out past performance, which, as with following a market or a horse over its career, is no guarantee, and becomes even more sporadic the less data there is. I suppose it's a better guide than anything else, but I'm convinced this method would in itself require a probability of being right.


You would be surprised. There are several professional bookmaking companies in the UK that have specialized in betting on football matches.
Their model incorporates only past match data and hits almost 90% for win tendencies, which is unbelievably high for football.
The model is secret for obvious reasons, but German journalist Christoph Biermann wrote a book about it.


The difference between football and Starcraft is variance, especially in SC2. Football teams have a lot of players, so the impact of one player having a bad/good day is relatively low compared to a team of one. If the solo player has a bad/good day, it skews the results immensely. Also, football teams have faced each other many times in the professional arena, so there is a lot more data to draw upon. SC2 is also a new game with evolving strategies and nobody is at the top level yet, making the data even more inconsistent. Finally, I don't believe the formula accounts properly for player skill difference. In SC2, a player who is just slightly better than another will almost never lose on a favorable map, even though the data says it's 60/40.

I think it's a good effort, but I don't believe there is any formula that can rate SC2 players right now with any degree of accuracy. This would be better applied to BW where the data, players, and maps are more consistent.



I just wanted to point out that it is possible to build up very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Mip
Profile Joined June 2010
United States, 63 Posts
December 08 2010 23:20 GMT
#54
@Sandermatt Yeah, I would like to add something like that. It would take more data (which is already a problem). The way I would do it is have a skill rating for each player, and then an adjustment for the opponent's race. It would be very easy to add if I had more data.
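A sketch of the race-adjustment idea in the post above, keeping the exp(skill)/(exp+exp) win probability from the original post and adding a per-opposing-race offset to each player's skill; p_win_vs_race and all names and numbers here are placeholder assumptions, not fitted values:

# Base skill plus an adjustment for the race of the opponent (made-up placeholders)
skills <- c(MC = 1.4, MarineKing = 1.2)
race_adj <- list(
  MC         = c(Terran = 0.10, Zerg = -0.05, Protoss = 0.00),
  MarineKing = c(Terran = 0.00, Zerg =  0.05, Protoss = 0.15)
)

p_win_vs_race <- function(p1, race1, p2, race2) {
  s1 <- skills[p1] + race_adj[[p1]][race2]  # p1's effective skill against an opponent of race2
  s2 <- skills[p2] + race_adj[[p2]][race1]
  unname(exp(s1) / (exp(s1) + exp(s2)))
}

p_win_vs_race("MC", "Protoss", "MarineKing", "Terran")  # P(MC wins the PvT)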
PROJECTILE
Profile Joined April 2010
United States, 226 Posts
December 09 2010 00:02 GMT
#55
Where are you going to school for statistics?
Mip
Profile Joined June 2010
United States, 63 Posts
December 09 2010 06:58 GMT
#56
On December 09 2010 07:41 kazansky wrote:
I just wanted to point out that it is possible to build up very good models just on match histories, not that it is in any way comparable; I'm sorry if I didn't make that clear enough :-)
I totally agree with you that for it to be at all accurate, only a very well-researched game with at least 5 years of history could fit something like that.


I think you guys are kind of off base; I already have a model that can rate StarCraft players with a decent amount of accuracy with only 400-something games. Is it perfect? No. But it has a lot of strength and will learn as it gets more data.

Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.

As for this model and map imbalance, this model averages over all maps. Its primary function is to rate the players objectively based on their performance, which I believe it does quite nicely. If we want to optimize this for prediction, and I believe there is enough data out there that we could start, we need to pull together more data, which I would like help with if there is anyone out there good at parsing webpages.

Like I said in the original post, my data look like this
[2343,] "MC" "MarineKing"
[2344,] "MC" "MarineKing"
[2345,] "MC" "MarineKing"
[2346,] "Jinro" "Choya"
[2347,] "Jinro" "Choya"
[2348,] "Jinro" "Choya"
[2349,] "Choya" "Jinro"
[2350,] "Choya" "Jinro"

It actually starts out like this:

MarineKing 1
MC 3

Jinro 3
Choya 2

and then I convert it.

If instead I could get my data to look more like this:
MC Protoss MarineKing Terran Lost Temple
MC Protoss MarineKing Terran Blistering Sands
MC Protoss MarineKing Terran Jungle Basin

I could then start adjusting for those kinds of things. There should already be enough data to start something like this. So long as I have more data than the effective number of parameters that I'm trying to estimate, I can do it no problem.
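A small sketch of the conversion described above, turning the series scores into one winner/loser row per game; expand_series is a hypothetical helper written for illustration, not the script actually used for the rankings:

# Expand a series score into one (winner, loser) row per individual game
expand_series <- function(p1, wins1, p2, wins2) {
  rbind(
    if (wins1 > 0) matrix(rep(c(p1, p2), wins1), ncol = 2, byrow = TRUE),
    if (wins2 > 0) matrix(rep(c(p2, p1), wins2), ncol = 2, byrow = TRUE)
  )
}

games <- rbind(
  expand_series("MC", 3, "MarineKing", 1),
  expand_series("Jinro", 3, "Choya", 2)
)
colnames(games) <- c("winner", "loser")
games  # same winner/loser layout as the rows shown in the post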
Mip
Profile Joined June 2010
United States, 63 Posts
December 09 2010 06:59 GMT
#57
@PROJECTILE I'm going to school at BYU in Provo, UT. They have a pretty good statistics program, but no PhD option; they stop at Master's degrees.
kazansky
Profile Blog Joined February 2010
Germany, 931 Posts
December 09 2010 07:36 GMT
#58
On December 09 2010 15:58 Mip wrote:
Statistical models of this sort are not going to ever give very high prediction accuracy. If you take players with similar skills, you are always going to have difficulty predicting the outcome. But to say that I need 5 years of "research" to start making predictions is just absurd.


As I said, yes, statistical models of this kind are able to give very high prediction accuracy.
I didn't say yours will yet, and I think if you keep the work up, yours will in about 5 years, or let's say 2 years. That is at least what I meant. To provide high accuracy for the complete outcome of a tournament, and very reliable predictions, you need a huge amount of data to weigh, for one thing.

Why I said 5 years was: if you knew every result of the SC2 players right now to base your assumptions on, or every result of the BW players, you would very likely choose the Brood War players to predict, because the game is far more figured out, so your variation is narrowed down considerably, because a new cheese doesn't appear every week.

You can start making predictions whenever you want, but if you want to hit 95%+ over an entire GSL (every game) based just on a statistical model, I think you will have to rely on 5 years of tactical development and 2 years of data :-)

I didn't want to spoil your fun, I love your work and totally appreciate it.
"Mathematicians don't understand mathematics, they get used to it." - Prof. Kredler || "That was more one-sided that a mobius strip." - Tasteless
Darkstar_X
Profile Joined May 2010
United States, 197 Posts
December 09 2010 07:50 GMT
#59
Interesting and fun project, though, as you said, you don't have enough data to actually make predictions that strong. As others have said, you probably need to include a time factor as well.
Mip
Profile Joined June 2010
United States, 63 Posts
Last Edited: 2010-12-09 08:26:05
December 09 2010 08:16 GMT
#60
@Kazansky Small variance and prediction accuracy are not the same thing in this kind of model.

Each player has an unmeasurable skill parameter that we can get glimpses of when they win or lose. So the more wins and losses I observe, the more I can nail down exactly what a player's skill parameter is. Over time, I can hope to achieve fairly high precision for many players' skill levels.

But a player's skill is only the parameter that feeds the function that tells me the probability that a player will win, which from the first post is exp(skill1)/(exp(skill1)+exp(skill2)). If in 5 years I have 2 players of the same skill, then according to this formula the probability of either winning is 50/50, which makes sense for players of identical skill. So right now I might say there's a 30-70% chance player 1 wins (centered at 50-50, but I'm uncertain about exactly what it is), and 5 years from now I can say there's a 49-51% chance player 1 wins (still 50-50, but I'm certain it's about 50-50 at this point). I'll be able to narrow in only on the probability that a specific player can beat another, not on the actual outcome.

What you are saying is that in 5 years, there will be only <5% upsets and >95% predictability. According to any paired comparison model, that would imply that all players' skill levels are tremendously far apart, which is not likely to be the case. It would also imply that no rivalry would exist, no excitement in wondering who will come out on top in any match-up, because 95%+ of the time you'd know the victor in advance.

I don't understand how one could ever have high predictability for evenly matched opponents. I think that would, by definition, make them not evenly matched.
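A quick sketch of the point above, that more games narrow the interval around the win probability rather than pushing it away from 50/50; the posterior draws below are made up, with the same mean skill for both players and a standard deviation that shrinks as more games are observed:

# Win probability from the original post's model
p_win <- function(s1, s2) exp(s1) / (exp(s1) + exp(s2))

set.seed(1)
# Few games observed: wide posteriors around the same mean skill
early <- p_win(rnorm(10000, 1.0, 0.50), rnorm(10000, 1.0, 0.50))
# Many games observed: tight posteriors around the same mean skill
later <- p_win(rnorm(10000, 1.0, 0.05), rnorm(10000, 1.0, 0.05))

quantile(early, c(0.1, 0.9))  # a wide, roughly 30-70% style interval, centered near 0.5
quantile(later, c(0.1, 0.9))  # a narrow interval hugging 50/50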