Statistical Analysis of StarCraft 2 Balance - Page 5

SlipperySnake
Profile Blog Joined November 2010
248 Posts
May 06 2011 16:01 GMT
#81
I really enjoyed your model and I look forward to you improving it, maybe adding variables to better estimate match outcomes. It would be great to see more than just the GSL run through this sort of model, but I understand it would be a ton of work. One solution might be to have people email you data in a form you can use, or to partner with a few spectators to keep track of game stats.

I mean, someone just needs to have an Excel workbook open and type in the things you measured so that you wouldn't have to go through it all. Anyway, I look forward to any future analysis; I feel like this was a damn good start at estimating balance. Thanks.
Mactator
Profile Joined March 2011
109 Posts
Last Edited: 2011-05-06 16:54:51
May 06 2011 16:49 GMT
#82
The imbalance issue is not necessarily related to the probability of a player winning. The usual notion of "imbalance" refers to specific issues rather than to XvY being imbalanced as a whole.

Consider the notorious example of the Protoss death ball: many people complain that Zerg players can't win against it. Let's assume that is correct. Then the obvious thing to do as a Zerg player is to avoid getting into the late game against a Protoss. This may be a very effective strategy, and you may even measure a high probability of Zerg players winning. The game would still be imbalanced, though!

Therefore the question of imbalance is a matter of strategies. To quantify it you need to consider a specific case of imbalance. If, for example, you can prove statistically that ZvP rarely goes into the late game, and that when it does Protoss has an extreme win-loss ratio, then you can conclude either that
1) there is an imbalance issue
or
2) Zerg players are bad at playing the late game. (A sketch of this conditional win-rate test follows below.)
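
To make the proposed test concrete: it amounts to comparing win rates conditioned on game length. A minimal sketch with made-up numbers; the records and the 15-minute cutoff are illustrative assumptions, not data from the thread:

```python
import numpy as np

# Hypothetical ZvP records: (game_length_minutes, zerg_won). Made-up data.
games = [(9, 1), (11, 1), (14, 0), (22, 0), (25, 0), (8, 1), (19, 0), (30, 0)]
cutoff = 15  # minutes separating the early/mid game from the late game

early = [won for mins, won in games if mins < cutoff]
late = [won for mins, won in games if mins >= cutoff]

print(f"Z win rate before {cutoff} min: {np.mean(early):.2f}")  # 0.75
print(f"Z win rate from {cutoff} min on: {np.mean(late):.2f}")  # 0.00
print(f"share of games reaching the late game: {len(late) / len(games):.2f}")  # 0.50
```

A collapsing late-game win rate combined with few games even reaching the late game is exactly the pattern the two numbered conclusions above are trying to distinguish.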

ffdestiny
Profile Joined September 2010
United States773 Posts
May 06 2011 17:01 GMT
#83
Quoting Day9: "You can't really talk about balance before you take a hell of a lot of time analyzing the data." Unfortunately, your article has five references, uses a small sample size, and jumps to conclusions based on your model. Obviously you're industrious and want to prove a point, but without gathering the entire database of games played (on this patch) there is no room for an argument about balance. Also, balance is so tied to maps that it almost becomes a moot point to measure racial imbalances rather than map imbalances. There are just so many factors.

How do you measure balance in terms of the whole game? If we measure balance using data from pro players, that doesn't reflect the whole game, only a subset of it.

How do you measure balance in terms of a race? If we measure by race, how do we correlate that data to maps?

How do you measure balance in terms of games? If we measure imbalance by games, how do we account for strategies that are intended to kill the opponent before he or she has expansions, i.e. cheese or all-ins?

How do you measure balance by all of the above? If we measure imbalance by the whole game, the race, and the games, how do those relate to one another? If we analyze all of the data and it comes up with a statistical win ratio favoring Zerg, but our subsets of data then show that Zerg is weaker on certain maps, against certain strategies, etc., that totally negates our first assumption.

You see how it's almost pointless to try to argue imbalance?
Lingy
Profile Joined December 2010
England201 Posts
May 06 2011 17:03 GMT
#84
IMO there is no way Toss is better than Zerg; I don't care what the stats say.
Hydraliskuuuuhh
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 06 2011 17:25 GMT
#85
On May 07 2011 02:01 ffdestiny wrote: […]

First of all, the analysis takes the effect of the map into account, so it can also be thought of as asking "DO WE HAVE BALANCED MAPS?" and seeing how many P>Z imba or T>Z imba maps there are, and so on.

Secondly, I understand you feel uncomfortable with statistical analysis. Say there are 50 students in a class, and the mean height is 170cm. What does that tell you about any individual? Nothing. Any student in the class could be 150cm tall, or 200cm tall. However, the mean itself is still not meaningless. To gain information, we sometimes have to find a clever way of summarizing things. Of course, the more complex the situation, the harder and less intuitive the statistics become.

If you think statistical analysis explains the details of EVERY GAME, I think you are misled; that is not the point of conducting an analysis. The point is to find out whether there is an overall trend. In one game, a Terran gamer can cheese a Zerg gamer. But can he do it in every game? Absolutely not. There are, however, maps on which a cheese succeeds with high probability (e.g. Steppes of War). In such a case, it is not hard to see there is a balance issue (e.g. the infamous Mercury map in BW).
My Life for IU!
Argolis
Profile Joined August 2010
Canada211 Posts
May 06 2011 17:26 GMT
#86
Well done. Stats are always fun: not so much as proof of anything, because they can always be argued, but because numbers are fun.
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 06 2011 17:37 GMT
#87
On May 06 2011 22:25 Elean wrote:
It looks like this model assumes that Protoss players are extremely skilled (six Protoss among the top 10 most skilled players), and comes to the conclusion that Protoss is underpowered. Basically, it has exactly the same value as Idra saying "I'm the best player, I don't win, thus there is an imbalance."

(Actually, this model can converge to different solutions. The particular solution the author got was "Protoss players are skilled and Protoss is underpowered..."; it could very well have converged to "Protoss players have no skill and Protoss is overpowered.")

All the people reading this should understand that this is not a scientific peer-reviewed paper.

There is no way this would be accepted as it is now. If I were to review this paper I would ask for several modifications, and I would actually reject it unless the author answered this question: how can you tell there is no offset on the "skill parameter" of all the players of one race?

I would also ask for a plot of the "skill parameter" distribution for each race.


First of all, I think you read it very carefully. Thank you very much for your interest. I'll talk in a technical sense, since it seems like you have a good background in statistics.

The problem you're worried about can happen in "unidentifiable" cases, that is, when there are multiple parameter settings that represent the same model. This is not the case for this problem, since I:

1) used LASSO as an L_1 regularizer, and
2) used non-informative gamers as a baseline.

Therefore, things like what you described cannot happen. The regularizer avoids positing extraordinary gamers as much as it can, unless a player wins too many games.

It is very important to check the identifiability of a model before conducting an analysis, and it is good that you raised this issue. I understand why you missed this point, since 1) I agree that the document is poorly written (it would be rejected at every journal/conference), and 2) you shouldn't have had to read it as a professional reviewer.

And it is also good to point out that THIS IS NOT A SCIENTIFIC PEER-REVIEWED PAPER. I DID IT FOR FUN, and the fact that I am a Statistics major does not guarantee that the analysis is correct. I didn't worry much about this at the time of posting, but people without the proper background could've been misled. Thanks.
My Life for IU!
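
For readers who want to experiment, here is a minimal sketch of an L1-penalized logistic (Bradley-Terry-style) fit of the kind being debated; it is not the author's actual code. The synthetic data, the four-player roster, and the sklearn regularization strength C are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data: players 0-1 are Protoss, players 2-3 are Zerg.
# One feature column per player plus one PvZ matchup column, encoded so that
#   logit(P[player 1 wins]) = beta_player1 - beta_player2 + beta_matchup.
rng = np.random.default_rng(0)
n_players, n_games = 4, 2000
true_skill = np.array([1.0, 0.5, 1.2, 0.3])  # per-player skill
true_pvz = -0.4                              # matchup offset, P side listed first

X = np.zeros((n_games, n_players + 1))
y = np.zeros(n_games, dtype=int)
for g in range(n_games):
    p = rng.integers(0, 2)   # a random Protoss player
    z = rng.integers(2, 4)   # a random Zerg player
    X[g, p], X[g, z] = 1.0, -1.0
    X[g, -1] = 1.0           # PvZ indicator
    logit = true_skill[p] - true_skill[z] + true_pvz
    y[g] = rng.random() < 1.0 / (1.0 + np.exp(-logit))

# L1-penalized ("LASSO"-style) logistic fit with no intercept, as in the thread.
model = LogisticRegression(penalty="l1", C=1.0, solver="liblinear",
                           fit_intercept=False)
model.fit(X, y)
print("player betas:", model.coef_[0, :n_players].round(2))
print("PvZ beta:    ", model.coef_[0, -1].round(2))
```

Note that a design matrix built this way is rank-deficient in exactly the way debated below: subtracting a constant from both Protoss columns while adding it to the matchup column leaves every row's logit unchanged, so only the penalty (or a baseline constraint) picks one solution out of the degenerate family.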
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 06 2011 17:42 GMT
#88
On May 07 2011 01:49 Mactator wrote: […]

When it comes to making a change to the game via a patch, you are definitely correct. However, there ARE imbalances sometimes. If you watched BW for a long time, do you remember the infamous map Mercury? What was the P score? Did P win more than two games on that map?

From game to game, yes, there are differences. Even July was defeated on Mercury in an OSL final. However, everyone who has been playing SC1/SC2 for a long time KNOWS that certain maps REQUIRE PLAYERS of a certain race to do things x, y, z, ..., and that this leads to imbalance issues.
My Life for IU!
latan
Profile Joined July 2010
740 Posts
May 06 2011 17:50 GMT
#89
I like your initiative, but this analysis is almost a joke: badly written, poorly justified, and pretty naive for something that tries to pass as a scientific paper. I only say this because I don't like that things like this end up on arXiv.

I would rather it were limited to discussing possible statistical models and methods for approaching the problem.
Elean
Profile Joined October 2010
689 Posts
Last Edited: 2011-05-06 18:03:26
May 06 2011 18:02 GMT
#90
On May 07 2011 02:37 d_ijk_stra wrote: […]

Your model is:

logit(P) = beta_player1 - beta_player2 + beta_matchup

You use the LASSO method to fit the values of beta_player and beta_matchup.

You get ONE fit, but there are other degenerate solutions. Here is the proof: take the values of your solution, then decrease all the beta_player of Protoss players by 10000, and increase beta_PvZ and beta_PvT by 10000. If you do that, you get another fit that is exactly as good as the one you first had (i.e. all the logit(P) are unchanged). However, now beta_PvZ and beta_PvT are extremely high, and Protoss looks clearly overpowered.

Your model is probably good for estimating how likely a player is to win a match, but it is 100% blind to balance.

The problem is that all the players play only one race, and you will never be able to tell the difference between "all the Protoss players are way better than the others, but Protoss is underpowered" and "all the Protoss players are noobs, but it's OK since Protoss is way overpowered". There is absolutely nothing you can do about it, not with this sample of data.
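
Elean's shift argument is easy to verify numerically. A minimal sketch with hypothetical beta values (one Protoss player, one Zerg player, one matchup term); it only checks that the predicted logit is unchanged:

```python
# Hypothetical fitted values, not numbers from the paper.
beta = {"P_player": 1.3, "Z_player": 0.9, "PvZ": -0.2}

def logit_p_wins(b):
    # P listed first: logit(P wins) = beta_P - beta_Z + beta_PvZ
    return b["P_player"] - b["Z_player"] + b["PvZ"]

# Elean's shift: lower every Protoss skill by c, raise the P matchup terms by c.
c = 10000.0
shifted = {"P_player": beta["P_player"] - c,
           "Z_player": beta["Z_player"],
           "PvZ": beta["PvZ"] + c}

print(logit_p_wins(beta), logit_p_wins(shifted))  # both ≈ 0.2, identical predictions
```

Under the L1 penalty the shifted copy costs far more, which is the defence offered in the posts that follow; the dispute below is over whether the cheapest copy is also the truthful one.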
Cheerio
Profile Blog Joined August 2007
Ukraine3178 Posts
Last Edited: 2011-05-06 18:40:50
May 06 2011 18:17 GMT
#91
On May 05 2011 10:21 professorjoak wrote:
The data set had only ~620 non-mirror games in it. It would be interesting to use this methodology on the Brood War TSL Season 1 and 2 full ladder replay packs, which have several times more data in them.

I looked into trying a statistical analysis of TSL Season 1 at one point, to see if the distribution of build orders on a map had any correlation with win percentage. A first glance at the data showed every matchup on any map where I had 100+ games in that specific map and matchup balanced within 52-48. (This is different from the Korean results in the TLPD, which usually split 60-40 or 55-45, though those are based on far fewer games.) However, I then realized the data set had many duplicate games, from a game between two top ladder players being counted in each player's replay pack, and decided it would be too much trouble to properly sort them out, so I quit there and didn't take the analysis much further.

Well, what's wrong with duplicates? It's not as if the winner would somehow change in the replay from the opposing player. Even if many replays are duplicated and many are not, it is still OK as long as the duplication is random (it can hurt the result, but it's much more likely that the difference would be minor).
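
One caveat worth adding: even random duplication leaves the win-rate point estimates roughly unchanged, but it inflates the apparent sample size, so any confidence intervals come out too narrow. Sorting duplicates out is mechanical; a minimal sketch, where the record fields (players, map, date, winner) are hypothetical:

```python
def dedup_games(games):
    """Drop duplicate records of the same game from merged replay packs.

    Each record is a dict with hypothetical fields 'players' (two names),
    'map', 'date' and 'winner'; two records are treated as the same game
    when they agree on all four after normalizing the player order.
    """
    seen, unique = set(), []
    for g in games:
        key = (frozenset(g["players"]), g["map"], g["date"], g["winner"])
        if key not in seen:
            seen.add(key)
            unique.append(g)
    return unique
```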
Mactator
Profile Joined March 2011
109 Posts
Last Edited: 2011-05-06 19:59:34
May 06 2011 19:38 GMT
#92
On May 07 2011 02:42 d_ijk_stra wrote: […]

You are right that maps are important. Some maps can be abused if you are playing a specific race, but I don't think that is the issue that frustrates people.

It would be nice to have a homepage where, for a specific patch, you could see things like 1) the average game time (perhaps with standard deviation) for a specific map and matchup (X vs Y), 2) the most popular units/army compositions in the early, mid and late game, i.e. at specific times, and 3) correlation plots, etc. It would also be good to have the division or tournament (GSL, MLG, etc.) as a variable. Like sc2ranks, although with different data.

This would add some useful data to the discussion about imbalance and strategy.
tdt
Profile Joined October 2010
United States3179 Posts
Last Edited: 2011-05-06 20:05:35
May 06 2011 19:41 GMT
#93
I don't know stats, but I believe it. When Blizzard used to release numbers, they showed the same thing, with P on the short end. When you look at the very top of the ladders, Terran just dominates everywhere. When you combine bunches of tournaments, Terran is on top.

Maybe Terrans are just more skilled, though? How do you know?

Saying Terran is imba is like saying basketball is imbalanced toward the USA, rather than that the USA has better players. No?

I prefer to look at individual strategies instead. If something cannot be beaten, like three 50-DPS void rays in a Zerg's base early on with nothing you can do about it, that's imbalanced, so it was patched.

Everything else, including these stats, is IMO just whining, and could just as well be attributed to superior/inferior play if we step back and look objectively with neutral glasses on.
MC for president
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 06 2011 20:54 GMT
#94
On May 07 2011 03:02 Elean wrote: […]

By LASSO, you mean the existence of the (L_1) regularizer. When you add 10,000 to a parameter, you are penalized a lot. I suspect you don't understand the concept of regularization, sorry.
My Life for IU!
Elean
Profile Joined October 2010
689 Posts
Last Edited: 2011-05-07 06:45:47
May 07 2011 06:31 GMT
#95
On May 07 2011 05:54 d_ijk_stra wrote: […]

As far as I can tell, LASSO is a least-squares method that puts a constraint on the L1 norm, a constraint that has no justification in this case.

You have to understand that if two models give the exact same results for every match, there is no way to tell which one is better. I explained to you that there is an infinite number of models that give the same results with different "balance between two races". This means you cannot tell whether there is an imbalance.

I will explain with an example why the L1 constraint has no justification.

For simplicity's sake, let's consider only two races, T and Z, and let's assume all the players of one race have the same skill. Suppose TvZ is imbalanced, and the actual value of beta_TvZ is 500. Since all the players made it into the tournaments, they are likely to have roughly the same strength (skill + balance). This means the Z players likely have a beta_player that is 500 above that of the T players.

Now run your model with an extremely large sample size. You get the solution beta_TvZ = 0, beta_playerZ = 0 and beta_playerT = 0. This solution clearly minimizes the L1 norm, and it also gives the exact probability of each match. However, it is completely wrong and fails to see the imbalance.

Your method fails to catch any imbalance for the exact same reason nobody else can tell the balance: we don't know whether the Zerg players are more or less skilled than the Terran players.
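
A minimal numeric sketch of this example, with the 500 scaled down to 5 for readability; the win_prob helper is an assumption matching the thread's logit model:

```python
import math

def win_prob(skill1, skill2, matchup):
    # Thread's model: logit(P[player 1 wins]) = skill1 - skill2 + matchup
    return 1.0 / (1.0 + math.exp(-(skill1 - skill2 + matchup)))

# "True" world: TvZ favors T by 5, and the Z players who survived
# qualification are 5 skill points better.
print(win_prob(0.0, 5.0, 5.0))  # T vs Z: 0.5
# Degenerate all-zeros alternative, which has the smaller L1 norm:
print(win_prob(0.0, 0.0, 0.0))  # T vs Z: 0.5, same prediction, reads as "balanced"
```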
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 07 2011 14:47 GMT
#96
On May 07 2011 15:31 Elean wrote: […]

That point was already (implicitly) made by other users. If there were NO MIRROR MATCHES, you would be right. The existence of mirror matches is what enables such an analysis.

Of course, as another user already pointed out, you can question that: every gamer may have a different level of skill depending on the race of his/her opponent. But I don't think this assumption is so strong that it makes everything nonsense: we know that most top-level players are also good at mirror matches.
My Life for IU!
d_ijk_stra
Profile Joined March 2011
United States36 Posts
May 07 2011 14:56 GMT
#97
On May 07 2011 15:31 Elean wrote: […]

Oh, and it seems you missed this point: the beta_player of every user is ALSO penalized by LASSO. This is very important, but I thought that when I said LASSO, everyone would assume every variable is penalized. Isn't that the usual case? I think leaving certain variables unpenalized is the exceptional case when using LASSO.
My Life for IU!
Elean
Profile Joined October 2010
689 Posts
May 07 2011 15:06 GMT
#98
On May 07 2011 23:47 d_ijk_stra wrote: […]

Obviously, mirror matches change nothing. My example still stands with an extremely large number of mirror matches.

I didn't say that everything was nonsense. Your model is probably good for estimating the odds of a match, or for telling which player is the best within one race. However, it is completely blind to balance.
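
The reason mirror matches change nothing under this model can be made explicit: in a mirror game both skill terms carry any race-wide offset, so the offset cancels out of the logit. A minimal sketch with hypothetical numbers:

```python
import math

def win_prob(skill1, skill2, matchup=0.0):
    # Thread's model: logit(P[player 1 wins]) = skill1 - skill2 + matchup
    return 1.0 / (1.0 + math.exp(-(skill1 - skill2 + matchup)))

# Mirror game between two Zerg players; c is any race-wide skill offset.
a, b, c = 5.3, 4.8, 10000.0
print(win_prob(a, b))          # ~0.62
print(win_prob(a + c, b + c))  # same value: the offset cancels
# Mirror matches identify skill differences *within* a race, but a constant
# offset between races drops out of every mirror logit, so mirror games
# cannot anchor the cross-race balance terms.
```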
Elean
Profile Joined October 2010
689 Posts
Last Edited: 2011-05-07 15:08:28
May 07 2011 15:07 GMT
#99
On May 07 2011 23:56 d_ijk_stra wrote: […]

Yeah, of course every parameter is constrained. This is why in my example, where all the players have the same strength (skill + race balance), your model will set all the parameters to 0, despite the imbalance.
FoxNews
Profile Joined February 2011
1 Post
May 07 2011 15:18 GMT
#100
Nice work! I've always been interested in doing a statistical study myself, but I have yet to take stats (lol, in high school it was either stats or calc). It's also refreshing to see another Cornellian on here. I'm a freshman undergrad myself, planning on majoring in physics. Anyway, nice work, and don't listen to the haters, who couldn't have done a study like this in the first place.
Keep up the good work!
Also, did you go see Nelly? lol, he's so bad.