Computer Simulation to Understand Arena Results

MarcoBrei
Profile Joined May 2011
Brazil66 Posts
Last Edited: 2014-03-26 00:46:00
March 25 2014 01:45 GMT
#1
Something has always bothered me about the way people evaluate skill based on the number of wins in the Arena. The mode gets treated like a "challenge against the AI" where everyone can succeed simultaneously, forgetting that it is a game that creates duels between actual players and - guess what - someone has to lose.
When I saw people complaining about getting "only" 4 or 5 wins in Arena, I started wondering what the distribution of players by number of victories would look like. How many can achieve 12-0? What about 0-3?
Without access to the real numbers, I decided to write a computer simulation.

About the program I made:

I wrote a small program which performs the following tasks (a rough sketch in code follows the list):
  1. Generates a population of 100,000 players with different skill levels, approximating a normal distribution across the entire set.
  2. Pairs players up using matchmaking by similar record. Note that matchmaking matters for the final result: tests with completely random opponent selection were also run, and although the macro results remain very similar, the runs with matchmaking show important differences, such as considerably fewer players finishing 0-3.
  3. Resolves each duel based on the players' skills, applying a small margin of RNG to each game; that margin matters more for lower-skilled players.
  4. Retires players who reach 12 wins or 3 losses.
  5. Once all players are retired, collects the percentage of players who reached each possible final result.
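Purely to make the description above concrete, here is a minimal Python sketch of such a loop. It is not the original program: the 0-10 skill scale, the uniform +/-10% RNG window, and the pairing-by-win-balance details are assumptions pieced together from the explanations later in this thread.

```python
import random
from collections import Counter

N_PLAYERS = 100_000
MAX_WINS, MAX_LOSSES = 12, 3

def new_player():
    # Skill roughly normal on a 0-10 scale, clamped at the ends (assumption).
    return max(0.0, min(10.0, random.gauss(5.0, 1.7)))

def play_match(skill_a, skill_b):
    # Effective skill is drawn uniformly within +/-10% of base skill
    # (assumption matching the 6.5 -> [5.85, 7.15] example given later).
    roll_a = random.uniform(0.9 * skill_a, 1.1 * skill_a)
    roll_b = random.uniform(0.9 * skill_b, 1.1 * skill_b)
    return roll_a > roll_b  # True if player A wins

# Each active entry is [skill, wins, losses].
active = [[new_player(), 0, 0] for _ in range(N_PLAYERS)]
results = Counter()

while len(active) > 1:
    # Matchmaking by similar record: sort by win balance and pair neighbours.
    active.sort(key=lambda p: p[1] - p[2])
    next_round = []
    for a, b in zip(active[0::2], active[1::2]):
        if play_match(a[0], b[0]):
            a[1] += 1
            b[2] += 1
        else:
            b[1] += 1
            a[2] += 1
        for p in (a, b):
            if p[1] == MAX_WINS or p[2] == MAX_LOSSES:
                results[(p[1], p[2])] += 1   # retire the player
            else:
                next_round.append(p)
    if len(active) % 2:          # odd pool: carry the unpaired player over
        next_round.append(active[-1])
    active = next_round          # (at most one leftover player is dropped)

for (w, l), n in sorted(results.items()):
    print(f"{w}-{l}: {100 * n / N_PLAYERS:.2f}%")
```

As the update further down notes, the exact RNG margin and skill distribution turn out to matter much less than the matchmaking step.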



Note that from here on, the conclusions and opinions I present are based entirely on the results of the computer simulation. If real numbers published in the future show it to be excessively flawed, some or all of the following arguments may become invalid.

The results were stable in several different executions of the program. Almost 300,000 matches are played in each run, and the variations in results between different runs are quite small.

Data:

Total population: 100,000


Main Graph:
The first graph shows the percentage of players at each one of the possible outcomes.

Additional Graph:

The second graph shows the cumulative percentage of players from 0-3 up to 12-0, with the total increasing until it reaches 1, i.e. 100%. Note that the line bends strongly after 4-3, by which point we have already passed 78% of the population and fewer players can move on to better results.


Some interesting data:
  • The number of players who won 7 or more games is equal to the number of players who finish 0-3.
  • Approximately 50% of the players do not reach their third victory.
  • Winning 10 or more games means being in the top 2%; to make a comparison with StarCraft, it's like being in Masters league.
  • Winning 5 matches already puts you among the top 22%.


My Conclusion:
  • I believe the way the Hearthstone community evaluates results may be bad for people's motivation.
  • You can't give advice expecting everyone to reach 12 wins, because there will always have to be many people with low scores so that others can achieve better results.
  • I think Arena may not be the best place to earn gold for some players. It is a fun way to play, for sure, because the player gets access to new cards, but if the results are bad (and for many people they are), spending the gold from daily quests directly on packs seems better.
  • We can't consider a score of 5 wins in Arena mediocre, since it leaves almost 80% of the population behind.
  • Teaching and learning strategies, and having some ambition to improve, are very important, but this needs to be done with certain limits in mind. Not everyone will be able to reach the top, because by the rules of the game a lot of people NEED to get poor results in order to sustain the few with very good results.



UPDATE - from relevant inputs:

General questions about the program:

"general questions about the program"
How did you make the normal distribution of skill?
How did you make the matchmaking?
How did you decide the winner?
How did you make the RNG?


I really don't want to discuss the details of the program, mostly because it's tiresome. There is no famous algorithm behind it; I wrote every step of it myself.
Some overview:
Skill is a number; the greater, the better.
Normal distribution: more players have a skill around 5, fewer players have a skill around 10 or 0, and it increases and decreases gradually.
Matchmaking: I don't know how Blizzard does it; I just pick a random player and try to find another with the same (or as close as possible) "win balance" (wins minus losses). So a player at 7-0 will play against a 9-2 rather than a 9-0.
The winner is decided by comparing skills, but I introduced some RNG to give the lower-skilled player a chance to win. Let's say player X with skill 6.5 faces player Y with skill 7.0. The skill considered in the match is something like this:
X: a random number from 5.85 to 7.15
Y: a random number from 6.3 to 7.7
Player Y is more likely to win, but player X still has a chance.
If the players' skills are too far apart, the better player's victory is certain.
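For the concrete X = 6.5 vs Y = 7.0 example, a tiny script can estimate how often Y actually wins under this rule. This is my reading of the description as a uniform +/-10% window, not the original code:

```python
import random

def match_roll(skill):
    # Effective skill drawn uniformly within +/-10% of base skill
    # (assumption: this reproduces the 6.5 -> [5.85, 7.15] window above).
    return random.uniform(0.9 * skill, 1.1 * skill)

trials = 1_000_000
y_wins = sum(match_roll(7.0) > match_roll(6.5) for _ in range(trials))
print(f"Y (skill 7.0) beats X (skill 6.5) in about {100 * y_wins / trials:.0f}% of games")  # ~80
```

Under that particular window the 0.5 skill gap translates into roughly an 80/20 matchup.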

Important update: I just ran some more simulations varying the amount of RNG and found that it does not make that big of a difference. I also changed the normal distribution of skill and still got similar results. On the other hand, matchmaking is much more relevant and affects the results a lot.



Comparing this result with...

On March 25 2014 21:14 obesechicken13 wrote:
I think the actual distributions may be slightly different for arena runs.
http://www.arenamastery.com/sitewide.php

I don't think arenamastery has all the necessary data, just the runs people submit. If so, several results, especially the bad ones, may not be present, which ruins the metric.
The only way to confirm the results would be comparing with real data from Blizzard, and I would love to do that.

I know this simulation is not perfect, but I think it can give us a clue about what happens. Some methods could be done differently, but so far I think nothing is "wrong enough" to invalidate the results. What I'm saying is that we can't conclude here that exactly 77.96% of the players reach no more than 4 victories, but we can imagine this number is neither 30% nor 90%.


Some aspects the simulation does not consider:

On March 25 2014 17:23 RenSC2 wrote:
One further consideration is that each "player" in the simulation is not actually an individual, but instead a single run of the arena. A person who averages 3 wins or less in a run will likely only be able to play in the arena once every couple days. A person in that 4-6 range will be able to run arena about 1-2 times per day. People who average 7+ get to run the arena as much as they want. Some of the really top notch arena people are full time streamers/players and so do a very large number of runs every day and account for many runs in the 7+ club.

So the actual % of people who will average 7+ is less than what the simulation will tell you because a larger portion of those runs are being filled by the same people. Meanwhile, the less than 3 club will actually be a much larger % of people since those people don't get to make nearly as many runs as even the 4-6 people.

Essentially, averaging 5 wins or more per run actually puts you in better standing than the top 22% that the initial post calculated.

The only caveat is that real $ infusions are probably highest in the less than 3 club and those people could be balancing things out a bit.

Still, we live on a forum where being "only" in Masters in Starcraft makes you a "bad" player. Masters was initially supposed to be the top 2% of the active playerbase. We're a very elitist community.



Other simulation

On March 26 2014 06:27 obesechicken13 wrote:
I don't think anyone's linked it yet, but OP maybe you should take a read at this other simulation:
http://www.liquidhearth.com/forum/hearthstone/393-monte-carlo-simulation-of-hearthstone-ranking

That one simulates the ladder, which is interesting, but I wanted to try Arena, which I find more interesting.


Inspiration

On March 26 2014 02:33 mikeymoo wrote:
I have issues with this methodology, but you've inspired me to construct my own simulation.


On March 25 2014 11:02 Came Norrection wrote:
You just made me want to write my own simulation.


Feel free to try your own tests! When blizzard reveals the real numbers we may see who was more accurate


Came Norrection
Profile Joined March 2011
Canada168 Posts
March 25 2014 02:02 GMT
#2
You just made me want to write my own simulation. There is something rather strange about your data, since the expected value is not remotely close to 3, which is what I would have guessed. You seem to have too many 12s; there are more 12-win runs than 11-win runs, and I am not sure of the reason for that.
"The lie is just a great story ruined by the truth."
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 25 2014 02:04 GMT
#3
^

I think that there are more 12s because the 11-2s face each other, so one person is bound to end up 11-3 and the other is bound to end up 12-2. Then there are the cases of people hitting 12-0 and 12-1, which add a bit more percentage.

Overall cool data though, pretty nice to see.
@Drayxs | Drayxs.221 | Drayxs#1802
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
March 25 2014 02:35 GMT
#4
On March 25 2014 11:02 Came Norrection wrote:
You just made me want to write my own simulation. There is something rather strange about your data, since the expected value is not remotely close to 3, which is what I would have guessed. You seem to have too many 12s; there are more 12-win runs than 11-win runs, and I am not sure of the reason for that.


I didn't exactly understand what you mean by expected value. In my simulation I have (sort of) 50% of the population below 3 wins, and 50% above 3 wins. Wasn't that expected?
About the "total 12s", I think we must consider 12-2 independently from 12-1, the same way it is independent of 11-3. They are distinct outcomes.
Came Norrection
Profile Joined March 2011
Canada168 Posts
March 25 2014 03:27 GMT
#5
On March 25 2014 11:35 MarcoBrei wrote:
I didn't exactly understand what you mean by expected value. In my simulation I have (sort of) 50% of the population below 3 wins, and 50% above 3 wins. Wasn't that expected?
About the "total 12s", I think we must consider 12-2 independently from 12-1, the same way it is independent of 11-3. They are distinct outcomes.

70% of the runs are 3-3 or below, which is somewhat counter-intuitive to me. The average run from your data is 2/3 and not 3/3, which is very strange to me.
"The lie is just a great story ruined by the truth."
kingjames01
Profile Blog Joined April 2009
Canada1603 Posts
March 25 2014 04:09 GMT
#6
It's actually very easy to do a quick calculation to show that you expect 50% of players to achieve a record of 2/3 or worse.

Suppose that you have a population of N = 100 000 players.

Suppose that all players have an equal probability of winning a game against any other player. This is a simplification for the purposes of estimation (the OP assumed a distribution in skill).

Finally, we impose a condition that players only face other players with the same record.

Round 1:
50 000 - 1/0
50 000 - 0/1

Round 2:
25 000 - 2/0
25 000 - 1/1
25 000 - 1/1
25 000 - 0/2
---------------------------
25 000 - 2/0
50 000 - 1/1
25 000 - 0/2

Round 3:
12 500 - 3/0
12 500 - 2/1
25 000 - 2/1
25 000 - 1/2
12 500 - 1/2
12 500 - 0/3
---------------------------
12 500 - 3/0
37 500 - 2/1
37 500 - 1/2
12 500 - 0/3

Round 4:
6 250 - 4/0
6 250 - 3/1
18 750 - 3/1
18 750 - 2/2
18 750 - 2/2
18 750 - 1/3
12 500 - 0/3
---------------------------
6 250 - 4/0
25 000 - 3/1
37 500 - 2/2
18 750 - 1/3
12 500 - 0/3

Round 5:
3 125 - 5/0
3 125 - 4/1
12 500 - 4/1
12 500 - 3/2
18 750 - 3/2
18 750 - 2/3
18 750 - 1/3
12 500 - 0/3
---------------------------
3 125 - 5/0
15 625 - 4/1
31 250 - 3/2
18 750 - 2/3
18 750 - 1/3
12 500 - 0/3

Thus, we expect 50% of the population to have a record of 2/3 or worse under the conditions specified above.
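Carrying that same halving all the way to 12 wins takes only a few lines of code. Here is a sketch (my own, under the same equal-probability, matched-record assumptions) that produces the exact distribution, matching the table GuntherBovine posts further down the thread:

```python
from fractions import Fraction
from collections import defaultdict

MAX_WINS, MAX_LOSSES = 12, 3

# pool maps a live record (wins, losses) to the fraction of players holding it.
pool = {(0, 0): Fraction(1)}
final = defaultdict(Fraction)

while pool:
    nxt = defaultdict(Fraction)
    for (w, l), p in pool.items():
        # With matched records and a 50% win chance, half advance, half fall.
        for rec in ((w + 1, l), (w, l + 1)):
            if rec[0] == MAX_WINS or rec[1] == MAX_LOSSES:
                final[rec] += p / 2      # run is over
            else:
                nxt[rec] += p / 2        # still alive
    pool = nxt

cumulative = Fraction(0)
for w, l in sorted(final, key=lambda r: (r[0], -r[1])):
    cumulative += final[(w, l)]
    print(f"{w}-{l}: {float(final[(w, l)]):8.4%}   cumulative {float(cumulative):8.4%}")
```

It prints 12.5% for 0-3, 50% cumulative at 2-3, and about 0.024% for 12-0, exactly the figures derived above.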
Who would sup with the mighty, must walk the path of daggers.
Jinault
Profile Joined February 2014
Ireland0 Posts
Last Edited: 2014-03-25 06:37:31
March 25 2014 06:36 GMT
#7
Why do you need a computer simulation when you can just calculate exact probabilities?

All you need 'access' to is that you're 50% likely to win each game.
Siggen
Profile Joined November 2011
143 Posts
March 25 2014 06:44 GMT
#8
Wow! Great job! You always have the perception that you need to have 6+ wins to keep your gold stack high enough for another arena run. I will keep this in mind when I "only" get 4 wins or less.
tomnov
Profile Blog Joined January 2011
Israel148 Posts
March 25 2014 08:11 GMT
#9
On March 25 2014 11:02 Came Norrection wrote:
You just made me want to write my own simulation. There is something rather strange about your data, since the expected value is not remotely close to 3, which is what I would have guessed. You seem to have too many 12s; there are more 12-win runs than 11-win runs, and I am not sure of the reason for that.

I calculated the expected value and it's 2.98 - just as expected (no pun intended), a little bit below 3. This is because the total number of wins must be the same as the total number of losses, and the number of losses in a run is always 3, except for runs that end with 12 wins.
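To spell that argument out under the simplest symmetric model (every game a 50/50 coin flip with matched records, as in the analytical posts above, rather than the OP's skill-based one), the only correction to 3 comes from the runs that stop early at 12 wins:

```latex
E[\text{wins}] = E[\text{losses}]
             = 3 - 3\,P(12\text{-}0) - 2\,P(12\text{-}1) - P(12\text{-}2)
             = 3 - \tfrac{3}{4096} - \tfrac{24}{8192} - \tfrac{78}{16384}
             \approx 2.99
```

The same identity holds in the OP's skill-based pool (every game produces exactly one win and one loss); only the 12-win probabilities shift, which is roughly why the simulated value lands slightly lower, at 2.98.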
I reject your reality and substitute my own
RenSC2
Profile Blog Joined August 2011
United States1074 Posts
March 25 2014 08:23 GMT
#10
One further consideration is that each "player" in the simulation is not actually an individual, but instead a single run of the arena. A person who averages 3 wins or less in a run will likely only be able to play in the arena once every couple days. A person in that 4-6 range will be able to run arena about 1-2 times per day. People who average 7+ get to run the arena as much as they want. Some of the really top notch arena people are full time streamers/players and so do a very large number of runs every day and account for many runs in the 7+ club.

So the actual % of people who will average 7+ is less than what the simulation will tell you because a larger portion of those runs are being filled by the same people. Meanwhile, the less than 3 club will actually be a much larger % of people since those people don't get to make nearly as many runs as even the 4-6 people.

Essentially, averaging 5 wins or more per run actually puts you in better standing than the top 22% that the initial post calculated.

The only caveat is that real $ infusions are probably highest in the less than 3 club and those people could be balancing things out a bit.

Still, we live on a forum where being "only" in Masters in Starcraft makes you a "bad" player. Masters was initially supposed to be the top 2% of the active playerbase. We're a very elitist community.
Playing better than standard requires deviation. This divergence usually results in sub-standard play.
ashara
Profile Joined July 2008
France22 Posts
March 25 2014 08:25 GMT
#11
Nice job. It would be interesting to see what the average number of wins is for the players with the highest/lowest skill level in your simulation.

Otherwise I agree that not everyone can reach 7+ wins consistently, since across all players the average will be a bit less than 3 wins. But since some players are able to maintain a 7+ win average consistently with the current player pool, if you don't reach that kind of number, either you are very unlucky (unlikely) or you have things to improve in your draft / play.

It would be cool if you could simulate deck strength as well, since a player can have a strong deck that is likely to get a high number of wins even if the player's skill is low, and vice-versa.
flamewheel
Profile Blog Joined December 2009
FREEAGLELAND26781 Posts
March 25 2014 08:43 GMT
#12
Love these sorts of threads.
Writerdamn, i was two days from retirement
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
March 25 2014 12:02 GMT
#13
On March 25 2014 15:36 Jinault wrote:
Why do you need a computer simulation when you can just calculate exact probabilities?

All you need 'access' to is that you're 50% likely to win each game.


So you could tell the number of players at each level without a simulation, just with math, and you could assess the "exact" probabilities? I'd love to see how you would make those calculations.



On March 25 2014 17:23 RenSC2 wrote:
One further consideration is that each "player" in the simulation is not actually an individual, but instead a single run of the arena. A person who averages 3 wins or less in a run will likely only be able to play in the arena once every couple days. A person in that 4-6 range will be able to run arena about 1-2 times per day. People who average 7+ get to run the arena as much as they want. Some of the really top notch arena people are full time streamers/players and so do a very large number of runs every day and account for many runs in the 7+ club.

So the actual % of people who will average 7+ is less than what the simulation will tell you because a larger portion of those runs are being filled by the same people. Meanwhile, the less than 3 club will actually be a much larger % of people since those people don't get to make nearly as many runs as even the 4-6 people.

Essentially, averaging 5 wins or more per run actually puts you in better standing than the top 22% that the initial post calculated.

The only caveat is that real $ infusions are probably highest in the less than 3 club and those people could be balancing things out a bit.

Still, we live on a forum where being "only" in Masters in Starcraft makes you a "bad" player. Masters was initially supposed to be the top 2% of the active playerbase. We're a very elitist community.


This actually makes sense. The simulation does not consider the recurrence of players; it just assumes that each of the 100,000 players has 1 ticket to play in the Arena. In practice, the number of "distinct people" at the top of the pyramid must be even smaller.
obesechicken13
Profile Blog Joined July 2008
United States10467 Posts
March 25 2014 12:14 GMT
#14
I think the actual distributions may be slightly different for arena runs.

http://www.arenamastery.com/sitewide.php
Arena mastery shows some great players so their sitewide stats sit around 4 wins average, but they have a lot of 12s compared to your simulation. This could be because some players legitimately have a greater than 50% chance of winning a certain match.

The average # of wins in a run should be around 3 though. 2.9-3.1 ish.
I think in our modern age technology has evolved to become more addictive. The things that don't give us pleasure aren't used as much. Work was never meant to be fun, but doing it makes us happier in the long run.
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
March 25 2014 12:28 GMT
#15
On March 25 2014 21:14 obesechicken13 wrote:
I think the actual distributions may be slightly different for arena runs.

http://www.arenamastery.com/sitewide.php
Arena mastery shows some great players so their sitewide stats sit around 4 wins average, but they have a lot of 12s compared to your simulation. This could be because some players legitimately have a greater than 50% chance of winning a certain match.

The average # of wins in a run should be around 3 though. 2.9-3.1 ish.


Arena Mastery does not have all the information, right? Just the runs people submit, if I'm correct. If so, several results, especially the bad ones, may not be present, which ruins the metric.
About the "50% chance", let me clarify that my simulation does not have this rule. Players win games based on their skills, and people have different skills, simulating a normal distribution over the population.
obesechicken13
Profile Blog Joined July 2008
United States10467 Posts
March 25 2014 12:44 GMT
#16
On March 25 2014 21:28 MarcoBrei wrote:
Arena Mastery does not have all the information, right? Just the runs people submit, if I'm correct. If so, several results, especially the bad ones, may not be present, which ruins the metric.
About the "50% chance", let me clarify that my simulation does not have this rule. Players win games based on their skills, and people have different skills, simulating a normal distribution over the population.

I agree with the first point.

How did you assign skill levels to players? did you give each a point rating and use that to determine their probability of winning? What did the distribution of skill levels look like? Did you use something like Elo distributions for chess or lol?
I think in our modern age technology has evolved to become more addictive. The things that don't give us pleasure aren't used as much. Work was never meant to be fun, but doing it makes us happier in the long run.
Osyrul
Profile Joined February 2012
257 Posts
March 25 2014 15:08 GMT
#17
Just because you are in top 20% doesn't mean you are good, it means the other 80% are bad.
Masters league doesn't mean much, even grandmaster to some extent.
xTeiwazx
Profile Joined January 2014
0 Posts
March 25 2014 15:26 GMT
#18
Depends what you define as good. If good > average -> top20% = good.

@topic
there is a thread on reddit with a similar topic. Don't know if he made calculations or simulations but there were some really good results based on the skill of a player.
mikeymoo
Profile Blog Joined October 2006
Canada7170 Posts
March 25 2014 15:52 GMT
#19
Scattered thoughts:

Excellent work. As a complete newb to HS, I generally get 1-4 wins in arena.

I'm curious about your Monte Carlo sim: you mention that skill levels are normally distributed. What is the variance of this distribution? How is player skill determined? I would assume that something like an Elo rating would make sense in this context.

I don't know if applying a small RNG bonus is mathematically sound - shouldn't the "RNG factor" be modeled as a function of skill? Maybe there is some use for genetic algorithms to first generate your pool of players and their "skill/luck" level, and then perform the runs?

Could someone enlighten me to how matchmaking in arena works?

This is quite interesting to me and I may come up with my own simulation method, but I doubt it would end up with significantly different results.
o_x | Ow. | 1003 ESPORTS dollars | If you have any questions about bans please PM Kennigit
HardlyNever
Profile Blog Joined July 2011
United States1258 Posts
March 25 2014 16:09 GMT
#20
I noticed this same trend. I think it is the way you "feel" if you go less than about 7 wins.

Technically, 4 wins should "feel" like a victory, because you won more than you lost, right? But the reality is that going 4-3 feels like losing. I think there are 2 main contributors to this, maybe more:

1. The rewards at the 4-5 win level are pretty mediocre/bad. You barely get 150 gold worth of "stuff." So at 4 wins, sometimes you don't even really break even, in terms of what it cost you to enter.

2. Most of the popular streamers play a lot of hearthstone, and as such tend to go 7+ wins. This creates the perception that "everyone" gets 7+ wins, and you should, too.
Out there, the Kid learned to fend for himself. Learned to build. Learned to break.
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
March 25 2014 17:18 GMT
#21
On March 25 2014 21:44 obesechicken13 wrote:
I agree with the first point.

How did you assign skill levels to players? did you give each a point rating and use that to determine their probability of winning? What did the distribution of skill levels look like? Did you use something like Elo distributions for chess or lol?


Skill is a number. Players have skills varying from 0 to 10. As expected in a normal distribution, there are a lot more people with a skill around 5 than people with a skill around 0 or 10, and the counts change gradually in between. I also know the distribution I made is not perfectly realistic, but I think it is good enough for this test.
When a player faces another player, basically the one with the higher skill wins, but I introduced some RNG to give the lower-skilled player a chance to win. Let's say player X with skill 6.5 faces player Y with skill 7.0. The skill considered in the match is something like this:
X: a random number from 5.85 to 7.15
Y: a random number from 6.3 to 7.7
Player Y is more likely to win, but player X still has a chance.
If the players' skills are too far apart, the better player's victory is certain.


mikeymoo
Profile Blog Joined October 2006
Canada7170 Posts
March 25 2014 17:33 GMT
#22
On March 26 2014 02:18 MarcoBrei wrote:
Skill is a number. Players have skills varying from 0 to 10. As expected in a normal distribution, there are a lot more people with a skill around 5 than people with a skill around 0 or 10, and the counts change gradually in between. I also know the distribution I made is not perfectly realistic, but I think it is good enough for this test.
When a player faces another player, basically the one with the higher skill wins, but I introduced some RNG to give the lower-skilled player a chance to win. Let's say player X with skill 6.5 faces player Y with skill 7.0. The skill considered in the match is something like this:
X: a random number from 5.85 to 7.15
Y: a random number from 6.3 to 7.7
Player Y is more likely to win, but player X still has a chance.
If the players' skills are too far apart, the better player's victory is certain.



I have issues with this methodology, but you've inspired me to construct my own simulation.
o_x | Ow. | 1003 ESPORTS dollars | If you have any questions about bans please PM Kennigit
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
March 25 2014 17:42 GMT
#23
On March 26 2014 02:33 mikeymoo wrote:
I have issues with this methodology, but you've inspired me to construct my own simulation.


Even if my simulation has some flaws, it inspired someone to try similar work, so that's a good thing!
figq
Profile Blog Joined May 2010
12519 Posts
Last Edited: 2014-03-27 09:58:48
March 25 2014 18:16 GMT
#24
Thank you. You did something I was considering doing, but was too busy to do. Basically, I suspected the way certain people (who are good at arena) advertise arena may be a bit over the top. Like poker, and essentially most things that depend on experience, this looked a bit like a pyramid scheme, in which they always claim your best value is in arena, but if we actually take *all* players in arena and just average out their performance, it turns out that the "bank" (in our case Blizzard) wins more from us, compared to what "we" (that is, the average of all players) could just win from constructed. So yeah, the arguments still stand: you could keep playing arena (do the pyramid work) until you get good enough to actually pull better gold from it than you would from constructed. Individually - for those who are good or become good - there is great value in arena, the best value. But as a whole, I suspected the entirety of players do not benefit from it, and your results pretty much confirm it. Of course, the difference is very small, and arena is fun, so it's still okay. EDIT: also, of course, there's a 100 gold limit in constructed, which will be an issue if you grind a lot.
If you stand next to my head, you can hear the ocean. - Day[9]
obesechicken13
Profile Blog Joined July 2008
United States10467 Posts
Last Edited: 2014-03-25 21:27:30
March 25 2014 21:27 GMT
#25
I don't think anyone's linked it yet, but OP maybe you should take a read at this other simulation:
http://www.liquidhearth.com/forum/hearthstone/393-monte-carlo-simulation-of-hearthstone-ranking
I think in our modern age technology has evolved to become more addictive. The things that don't give us pleasure aren't used as much. Work was never meant to be fun, but doing it makes us happier in the long run.
MarcoBrei
Profile Joined May 2011
Brazil66 Posts
Last Edited: 2014-03-26 00:46:32
March 26 2014 00:45 GMT
#26
Original post updated - see the UPDATE section in the OP above.
C[h]ili
Profile Joined December 2011
Germany167 Posts
April 12 2014 17:49 GMT
#27
This is a well done analysis. Thank you very much for the effort you have put into this, and for sharing the results with us. I have two comments:

1. Please share the computer code you have used for your simulation. This allows us to more deeply understand what you have done.

2. My guess would be that the implied distribution of players across "number-of-win categories" (0-3, 1-3, and so on) depends to some degree on the assumed probability distribution for "skill". If you have time, I would be interested in seeing the implied distribution for different assumptions on "skill".
BenJamesBen
Profile Joined May 2014
0 Posts
Last Edited: 2014-05-09 12:34:34
May 09 2014 03:53 GMT
#28
I was inspired by MarcoBrei's work to write my own simulation. There are some differences between those numbers and mine, but they mostly agree:
  • 50% of players end up with 2 wins or less. Only 50% have 3+ wins. (Compare to MarcoBrei's 51.57%/48.43%)
  • 65.6% end up with 3 wins or less. Only 34.4% have 4+ wins. (MarcoBrei: 66.91%/33.09%)
  • Only about 9% have 7+ wins. (MarcoBrei: 9.01%)
  • I did not see significantly different results when match outcomes were determined randomly (50/50 chance to win) vs. determined by skill (higher skilled player always wins).
Data:
0-3: 12.500% (12.500% 0-0 wins, 87.500% 1+ wins)
1-3: 18.750% (31.250% 0-1 wins, 68.750% 2+ wins)
2-3: 18.750% (50.001% 0-2 wins, 49.999% 3+ wins)
3-3: 15.625% (65.626% 0-3 wins, 34.374% 4+ wins)
4-3: 11.719% (77.345% 0-4 wins, 22.655% 5+ wins)
5-3: 8.203% (85.548% 0-5 wins, 14.452% 6+ wins)
6-3: 5.469% (91.017% 0-6 wins, 8.983% 7+ wins)
7-3: 3.516% (94.532% 0-7 wins, 5.468% 8+ wins)
8-3: 2.197% (96.729% 0-8 wins, 3.271% 9+ wins)
9-3: 1.343% (98.072% 0-9 wins, 1.928% 10+ wins)
10-3: 0.806% (98.877% 0-10 wins, 1.123% 11+ wins)
11-3: 0.476% (99.353% 0-11 wins, 0.647% 12+ wins)
12-2: 0.476% (99.829% less or equal, 0.171% higher)
12-1: 0.146% (99.976% less or equal, 0.024% higher)
12-0: 0.024% (100.000% less or equal, 0.000% higher)


I believe that my simulation differs in that it uses "perfect matchmaking", without any randomness. For example, an 0-2 player will only ever get matched with another 0-2 player, and the result is that there will always be one of those players ending with an 0-3 run.

I've also played around with different player distributions - for example, assuming that more higher-skilled players tend to play Arena (because they find it profitable), that lower-skilled players avoid Arena (buying packs is more efficient), and that there's a crop of new, unskilled players playing their free Arena run. The other distribution types I've tried are normal (bell curve) and equal. So far, these different distributions don't seem to affect the results much in the long run, in a situation where there is perfect matchmaking.

I agree that the matchmaking algorithm is the most important factor in the distribution of arena win rates.

I will make my computer source code available for people to view and run. I'm currently looking into where best to host the code. Thanks, MarcoBrei for inspiring this effort.

Edit: added: The source code and sample run output.
Amui
Profile Blog Joined August 2010
Canada10567 Posts
Last Edited: 2014-05-09 21:32:27
May 09 2014 21:26 GMT
#29
Interesting simulation.

I do agree with the analysis that the people who can consistently hit 7+ wins will represent a significantly lower percentage of players, though. There is a small subset of users who can average 7+, and that means every one of their arena runs hands out about seven losses - knocking out the equivalent of 2 1/3 players (at 3 losses each) - not to mention the fact that they'll get enough gold to go right back and do it again.

While John Pub might win 2-3 games most of the time, they won't get enough gold for more than one arena run a day, in comparison to a higher level player who might do 5 or more, averaging more than double the wins.

I also think getting to 12 wins requires an element of luck in addition to skill. Some good decks that could hit 6-7 most of the time occasionally run into 3 very strong decks, and then sometimes an average deck will hit 12. Unless you've drafted something like 3 consecrate 5 truesilver triple aldor tirion pally, or 4 frostbolt 3 manawyrm 5 fireball 3 flamestrike double poly pyro mage with solid supporting cards (which is an element of luck as well), you can predict a good range for a deck, but never be certain.
Porouscloud - NA LoL
BenJamesBen
Profile Joined May 2014
0 Posts
May 11 2014 00:50 GMT
#30
I updated the simulation program: source code and sample output.

We already know what distribution of results to expect overall. However, what people are probably more interested in is the question, what Arena result can an individual player expect to receive based on their skill level? The simulation program now tries to tally this information. There are 14 player skill levels, from 1 (worst) to 14 (best).

Average Skill Level per Result:
0-3: n=125000, average level=4.1
1-3: n=187500, average level=6.5
2-3: n=187500, average level=7.6
3-3: n=156250, average level=8.2
4-3: n=117187, average level=8.9
5-3: n=82031, average level=9.7
6-3: n=54687, average level=10.6
7-3: n=35155, average level=11.3
8-3: n=21971, average level=11.8
9-3: n=13426, average level=12.3
10-3: n=8055, average level=12.6
11-3: n=4759, average level=12.9
12-2: n=4759, average level=13.2
12-1: n=1464, average level=13.4
12-0: n=244, average level=13.5

Average Result per Skill Level (whale2 distribution):
Level 1: n=70044, average wins=0.3
Level 2: n=4995, average wins=0.4
Level 3: n=9944, average wins=0.5
Level 4: n=15038, average wins=0.8
Level 5: n=19942, average wins=1.2
Level 6: n=89749, average wins=1.6
Level 7: n=200544, average wins=2.1
Level 8: n=199102, average wins=2.5
Level 9: n=110207, average wins=3.2
Level 10: n=99744, average wins=4.0
Level 11: n=80368, average wins=5.1
Level 12: n=50238, average wins=6.2
Level 13: n=30051, average wins=7.3
Level 14: n=20022, average wins=8.6

The above is using a non-standard distribution of players that I've named "whale2". It assumes that Arena has more higher-skilled players than the normal population, with low-skilled players avoiding Arena but with a number of new, inexperienced players trying out their free run.

Player skill level distribution (whale2):
L1: 7.00%, L2: 0.50%, L3: 0.99%, L4: 1.50%, L5: 1.99%,
L6: 8.97%, L7: 20.05%, L8: 19.91%, L9: 11.02%, L10: 9.97%,
L11: 8.04%, L12: 5.02%, L13: 3.01%, L14: 2.00%,
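If anyone wants to reuse these weights, drawing skill levels from a listed distribution like this is a one-liner with the standard library (a sketch using the percentages above; the sample size is arbitrary):

```python
import random

# Percentages from the "whale2" skill distribution listed above.
whale2_weights = [7.00, 0.50, 0.99, 1.50, 1.99,
                  8.97, 20.05, 19.91, 11.02, 9.97,
                  8.04, 5.02, 3.01, 2.00]

# Draw 1,000,000 player skill levels (1 = worst, 14 = best).
levels = random.choices(range(1, 15), weights=whale2_weights, k=1_000_000)
print(f"share at level 7: {levels.count(7) / len(levels):.2%}")  # ~20%
```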

With a normal distribution of players, the results instead look like:
Average Result per Skill Level (normal distribution):
Level 1: n=1016, average wins=0.2
Level 2: n=5053, average wins=0.3
Level 3: n=16623, average wins=0.5
Level 4: n=43939, average wins=0.8
Level 5: n=91984, average wins=1.3
Level 6: n=150681, average wins=1.8
Level 7: n=191184, average wins=2.4
Level 8: n=190992, average wins=3.1
Level 9: n=150072, average wins=3.9
Level 10: n=91477, average wins=5.0
Level 11: n=44057, average wins=6.4
Level 12: n=16932, average wins=7.9
Level 13: n=4965, average wins=9.5
Level 14: n=1013, average wins=10.8
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
May 11 2014 02:22 GMT
#31
On May 09 2014 12:53 BenJamesBen wrote:
[...]
I believe that my simulation differs in that it uses "perfect matchmaking", without any randomness. For example, an 0-2 player will only ever get matched with another 0-2 player, and the result is that there will always be one of those players ending with an 0-3 run.
[...]

If you take your assumption, there is no need for a computer simulation, since it can be solved analytically. As has been done here:

On May 08 2014 02:49 GuntherBovine wrote:
If you assume that every arena match is between two decks with the same record, you can calculate how many decks achieve a certain record. Here are the chances of getting a particular record, the cumulative chance of getting that many wins and the inverse of the first column, which gives you that one out of X decks gets that many wins:
0-3 - 12.50% - 12.50% - 8.0
1-3 - 18.75% - 31.25% - 5.3
2-3 - 18.75% - 50.00% - 5.3
3-3 - 15.63% - 65.63% - 6.4
4-3 - 11.72% - 77.34% - 8.5
5-3 - 8.20% - 85.55% - 12.2
6-3 - 5.47% - 91.02% - 18.3
7-3 - 3.52% - 94.53% - 28.4
8-3 - 2.20% - 96.73% - 45.5
9-3 - 1.34% - 98.07% - 74.5
10-3 - 0.81% - 98.88% - 124.1
11-3 - 0.48% - 99.35% - 210.1
12-2 - 0.48% - 99.83% - 210.1
12-1 - 0.15% - 99.98% - 682.7
12-0 - 0.02% - 100.00% - 4096.0

Countdown to victory: 1 200!
Came Norrection
Profile Joined March 2011
Canada168 Posts
Last Edited: 2014-05-11 04:08:39
May 11 2014 04:06 GMT
#32
I ran my own simulation as well, and I don't think my results matter too much, but I did find something interesting about how luck and skill are distributed in the game.

One run of my simulation is 10,000 players, and all players finish their runs unless the pairings end up odd. My algorithm uses a flat distribution of skill across 100 bins, which means an equal chance of players having any skill from 0-99, about 100 players per bin. I did NOT use a normal distribution, mostly because skill is only determined relative to others and I already use a normal distribution for deciding who wins. Matchmaking pairs players with a number of wins within 1 of each other, so a 7-win person can play a 6-, 7-, or 8-win person; I think this is close to what Blizzard uses, since it will match you against people who have a different record than yours in arena.

For deciding who wins a game, I use the following formula:
winner = A*s + r
A is a factor of how much skill comes into play
s is the difference in skill level of the players
r is a normally distributed (Gaussian) random variable
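Here is a minimal Python sketch of that decision rule (my own reading of the formula; the skill values, the standard deviation of r, and the skill_weight constants are illustrative placeholders, not the values used in the runs below):

```python
import random

def decide_winner(skill_a, skill_b, skill_weight):
    # Winner decided by the sign of A*s + r, with s = skill_a - skill_b
    # and r a standard normal "luck" draw (sigma = 1 is a placeholder choice).
    s = skill_a - skill_b
    r = random.gauss(0.0, 1.0)
    return skill_weight * s + r > 0   # True -> player A wins

# Illustrative regimes for a 55-vs-45 skill matchup (placeholder numbers):
for label, weight in [("pure skill", 10.0), ("even skill/luck", 0.1), ("pure luck", 1e-4)]:
    wins = sum(decide_winner(55, 45, weight) for _ in range(100_000))
    print(f"{label:>16}: higher-skill player wins {wins / 1000:.1f}% of games")
```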


For a game of pure skill, where As >>>> r:
average run length is 2.89
5.3% makes it to 12 wins
the average run length of the top 1% of players is 11.89 with a 79.86% win rate

For a game of about even skill and luck, where As ~ r:
average run length is 2.972
1.92% makes it to 12 wins
the average run length of the top 1% of players is 7.5 with a 71.49% win rate

For a game of pure luck, where As <<<<< r:
average run length is 2.994
0.62% makes it to 12 wins
the average run length of the top 1% of players is 2.835 with a 48.59% win rate

I just find it interesting to see how much luck is perceived to be in the game versus what my simulation shows. My guess is that skill is a lot more important than luck, given how the win rates of players are distributed.
"The lie is just a great story ruined by the truth."