DeepMind sets AlphaGo's sights on SCII - Page 6

HellHound
Profile Joined September 2014
Bulgaria5962 Posts
March 28 2016 17:19 GMT
#101
On March 28 2016 13:12 Ilikestarcraft wrote:
On March 28 2016 13:04 Fran_ wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?


And if you don't allow the machine to have arbitrary APM, what APM will you allow? The choice is completely arbitrary.

I don't think an apm cap would be completely arbitrary. Somewhere around the average apm of top pros or maybe a little higher I think is reasonable.

EPM not APM.
Classic GosoO |sOs| Everyone has to give in, let Life win | Zest Is The Best | Roach Cultist | I recognize the might and wisdom of my Otherworldly overlord | Air vs Air 200/200 SC2 is best SC2 | PRIME has been robbed | Fuck prime go ST | ROACH ROACH ROACH
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
March 28 2016 17:39 GMT
#102
On March 29 2016 02:19 HellHound wrote:
On March 28 2016 13:12 Ilikestarcraft wrote:
On March 28 2016 13:04 Fran_ wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?


And if you don't allow the machine to have arbitrary APM, what APM will you allow? The choice is completely arbitrary.

I don't think an apm cap would be completely arbitrary. Somewhere around the average apm of top pros or maybe a little higher I think is reasonable.

EPM not APM.


Even EPM is "inflated" since many of the actions everyone does are tiny corrections on earlier clicks (due to misclicks and whatnot) which a computer wouldn't need.
diabcockiful
Profile Joined January 2016
22 Posts
March 28 2016 17:39 GMT
#103
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?



Yeah, but SC2 requires a little more creativity than Go or chess... I think eventually an AI could be better, of course, but I don't think we're anywhere near that point yet. There aren't predetermined tiles or spots for pieces to move... SC2 units don't have simple rules, and there are almost infinite possibilities. I suppose you could teach an AI to macro like an animal and then A-move over someone, or maybe even attempt to set up concaves. But man, there are a lot more factors involved in winning an SC2 engagement than taking someone's rook in chess. And while I don't know the rules of Go, it looks like you have one type of unit and a grid to move on... No setting up army compositions, or gaining vision of your opponent, etc.

Maybe I'll be surprised as to how far AIs have come, but methinks they will be mechanically perfect but fairly retarded in many ways.
Pseudorandom
Profile Joined April 2010
United States120 Posts
March 28 2016 17:58 GMT
#104
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.
"This is scissors, paper is fine, paper just needs to learn how to play. Paper needs to stop complaining." - richlol
Incognoto
Profile Blog Joined May 2010
France10239 Posts
March 28 2016 18:22 GMT
#105
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that DeepMind wins solely off of micro.

Unless they implement an APM limiter or something
maru lover forever
Hexe
Profile Joined August 2014
United States332 Posts
March 28 2016 18:36 GMT
#106
On March 29 2016 03:22 Incognoto wrote:
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that DeepMind wins solely off of micro.

Unless they implement an APM limiter or something

And all the zerg human hero needs to do is force a bunch of wasted stims and threaten to counter
Clonester
Profile Joined August 2014
Germany2808 Posts
March 28 2016 18:40 GMT
#107
On March 29 2016 03:36 Hexe wrote:
On March 29 2016 03:22 Incognoto wrote:
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that DeepMind wins solely off of micro.

Unless they implement an APM limiter or something

And all the zerg human hero needs to do is force a bunch of wasted stims and threaten to counter


DeepMind will just send over its starting SCVs and win a 12-SCV vs 15-drone fight.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
The_Masked_Shrimp
Profile Joined February 2012
425 Posts
Last Edited: 2016-03-28 18:47:58
March 28 2016 18:45 GMT
#108
For people wondering if Blizzard will "tolerate" Google developing this since it is like a 3rd-party program/mod: there isn't a single company that would turn down the amount of exposure a company like Google can bring. A good AlphaSC2 would save Blizzard millions of dollars in advertising and would probably bring more players onto the scene.

Oh and also, an AI cannot waste stims. You can program it so that it only stims when it knows it can reach your units for sure even if they run away, and there is no turning back from individually controlled stimmed units, lol.

The AI won't need a fancy late-game strategy to win; simple rush builds with perfect micro should be enough to defeat humans. Doing the same in a long macro game would be a lot more difficult.
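A minimal sketch of that kind of "only stim when the chase is guaranteed" check; the speeds, range, and stim duration below are illustrative parameters of my own choosing, not exact game values.

```python
# Hypothetical helper: decide whether stimming lets a marine catch a fleeing target
# before the stim wears off. All numbers are placeholder parameters, not guaranteed
# to match actual SC2 values.

def should_stim(distance_to_target: float,
                weapon_range: float,
                stimmed_speed: float,
                target_speed: float,
                stim_duration: float) -> bool:
    """Return True if a stimmed unit closes to weapon range within the stim duration."""
    gap = distance_to_target - weapon_range
    if gap <= 0:
        return True  # already in range; stim just for the attack-speed bonus
    closing_speed = stimmed_speed - target_speed
    if closing_speed <= 0:
        return False  # target is as fast or faster; stim would be wasted
    return gap / closing_speed <= stim_duration


# Example: marines chasing retreating lings off creep (illustrative numbers only)
print(should_stim(distance_to_target=8.0, weapon_range=5.0,
                  stimmed_speed=4.72, target_speed=4.13, stim_duration=11.0))
```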
The Bottle
Profile Joined July 2010
242 Posts
Last Edited: 2016-03-28 19:09:49
March 28 2016 18:49 GMT
#109
On March 29 2016 02:14 Mendelfist wrote:
On March 29 2016 01:33 The Bottle wrote:
I'm not talking about the number of possible game states. That's not important for a machine learning algorithm. Number of possible moves you can make is important. That is, the way you encode a particular action. This is essential for making a training data set for your algorithm to learn.

Why do you assume that the best way to solve this problem is to throw every single game state variable or pixel at a neural net and then hope that it somehow works out?


Your billiard example doesn't work here, because there's no self-learning AI algorithm for billiard, at least not that I know of.

Then imagine one. One that learns for example by self play. Do you really think that it would have a hard time finding "the right moves" just because they are infinite in number? Edit: And the question is not if it can be better than a script, which can be near perfect. The question is if you think it's a hard problem.

They will have to find clever ways to transform the game input data in order to remove redundancies, and coarsen the scale of discrete moves. I'm sure they did something like that with Go already, but it will be substantially harder for SC2. You say it's just a different problem, sure. But a much, much harder problem, one I'm not quite sure they'll solve, even knowing their success with Go.

Yes, THIS is the problem, and once you have done this, the "number of possible moves" in the original problem is irrelevant. That only tells you that you have THIS problem on your hands and that you can't solve it by ordinary search algorithms. You have to find a way to reduce the original problem by levels of abstraction. I don't know if there is a way for an AI to find these abstractions by itself. Maybe that's how AlphaGo works. In any case, I'm not in the least convinced that it's as hard as you are trying to make it sound. In the simplest form you can have an ordinary scripted bot that asks an AI for advice: "Attack now", "build sentries", "expand there", etc. Or you could throw every pixel at it, like you want. I don't think that would work. Or something in between. How about that?


What you explained at the beginning is actually similar to how they train the initial state of the Go algorithm, before they get into the reinforcement learning. It's not sufficient to make the algorithm as intelligent as it is, but that is what they do in the initial stages.

To clarify, they don't feed it all possible permutations of the board, because that's obviously intractable. But they do feed it a large set of board positions from many games, with a target variable of which side won the game, and train a neural network on that. The input data for that is not hard to encode at all. It's simply a data set of 361 ternary points (black, white, or blank) and a binary target variable (which side won). Then in practice, given a board state, you calculate the probability of victory for all possible subsequent board states (all positions reachable in a single move from the given state) using the NN trained by the above process. Such a method was used as the initial stage of the Go algorithm, as explained in this paper
https://vk.com/doc-44016343_437229031?dl=56ce06e325d42fbc72
before they started the reinforcement learning, but this would be impossible to do for Starcraft. (I mean this particular method, not any supervised learning method.)
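For anyone curious what that initial supervised stage looks like in code, here is a rough sketch in that spirit; the board encoding follows the description above, but the network size, framework choice, and placeholder data are my own assumptions, not AlphaGo's actual architecture.

```python
# Illustrative only: a tiny supervised "value network" in the spirit described above.
import torch
import torch.nn as nn

# Each position: 361 points, encoded -1 (white), 0 (empty), +1 (black).
# Label: 1 if black eventually won that game, else 0. Random placeholder data here.
positions = torch.randint(-1, 2, (1024, 361)).float()
winners = torch.randint(0, 2, (1024, 1)).float()

net = nn.Sequential(
    nn.Linear(361, 256), nn.ReLU(),
    nn.Linear(256, 1), nn.Sigmoid(),   # outputs P(black wins | position)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(net(positions), winners)
    loss.backward()
    opt.step()

# In play, you would score every legal follow-up position with net(...) and
# prefer the move leading to the highest predicted win probability.
```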

But listen, because I think you're still misunderstanding me. I know that they're taking shortcuts to greatly reduce the search space of possible moves in Go; the paper states this pretty clearly. But the problem is still that, because of the sheer number of possible moves you can make, and the stark difference in outcome between those moves, it is incredibly difficult to reduce the space in a way intelligent enough that it loses as little information as possible about the best moves. The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft. Yes, they will have to take new shortcuts, of the sort that they didn't take in Go. But every time they do such a thing, they have to be incredibly careful not to remove certain specific crucial moves, or coarse-grain them in with other moves that have drastically different results. (An example of this is ZvZ ling/bane wars, where a couple of pixels' difference in a move command can be the difference between 2 dead lings and 20 dead lings, or aiming a disruptor shot, or things like that.)

I should clarify, I don't think this task is impossible. For sure, in principle it's very feasible to imagine a self trained Starcraft algorithm that can beat any human. But I'm trying to explain why it's monumentally more difficult than training a Go algorithm. And why the actual difference in depth of strategy between the two games from a heuristic standpoint is not nearly as important as the complexity of move permutations. You say it's a different problem. Well it's a different problem in the same sense that doing long division and proving Fermat's Last Theorem are different problems.

As for the billiard example: I did explain how the training set for a deep-learning NN for billiards can be encoded, and why it's incredibly easy to do in comparison to the other problems. I can clarify, but from your response I feel like you didn't read that bit. In your defense, it was sneakily put in beside my other point, so maybe I'll let it sit a little longer.
Karis Vas Ryaar
Profile Blog Joined July 2011
United States4396 Posts
March 28 2016 18:50 GMT
#110
I want to know, if they do end up doing this, whether they go with a famous name to play against it or with whoever's the best at the time.
"I'm not agreeing with a lot of Virus's decisions but they are working" Tasteless. Ipl4 Losers Bracket Virus 2-1 Maru
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 28 2016 19:15 GMT
#111
Terran drops could be abused heavily by this bot, if we assume it's not restricted.
Mendelfist
Profile Joined September 2010
Sweden356 Posts
March 28 2016 19:32 GMT
#112
On March 29 2016 03:49 The Bottle wrote:
The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft.

And I'm saying that you're just making things up. An intractably large number of possible moves (or number of input variables) doesn't necessarily mean that the problem is hard (although it is a requirement), and reducing Starcraft to a problem at a higher level of abstraction than pixels or coordinates isn't necessarily very hard either. At least you haven't shown any arguments for it. Once you have moved to a high abstraction level, Starcraft IS simple compared to Go, which cannot be reduced to builds or strategies in any similar way. This is the reason why I think it's at least possible that Starcraft is even easier to master than Go for an AI.

it is incredibly difficult to reduce the space in a way intelligent enough that it loses as little information as possible about the best moves

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.
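A toy sketch of the "scripted bot that asks an AI for advice" idea from the quoted post: the learned part only picks among a handful of abstract actions, and scripted code turns each one into concrete orders. All names and the interface here are hypothetical, not from any existing SC2 bot API.

```python
from enum import Enum, auto

class MacroAction(Enum):
    ATTACK = auto()
    EXPAND = auto()
    BUILD_ARMY = auto()
    BUILD_TECH = auto()
    DEFEND = auto()

def choose_action(features: list) -> MacroAction:
    """Placeholder for the learned policy: map a summarized game state
    (supply, income, scouted army value, ...) to one abstract action."""
    # A real implementation would call a trained model here.
    return MacroAction.BUILD_ARMY

def execute(action: MacroAction) -> None:
    """Scripted layer: each abstract action expands into concrete orders."""
    handlers = {
        MacroAction.ATTACK: lambda: print("a-move army to enemy base"),
        MacroAction.EXPAND: lambda: print("send worker, build expansion"),
        MacroAction.BUILD_ARMY: lambda: print("queue units at production"),
        MacroAction.BUILD_TECH: lambda: print("add a tech structure"),
        MacroAction.DEFEND: lambda: print("pull army to threatened base"),
    }
    handlers[action]()

execute(choose_action([0.4, 0.7, 0.2]))
```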
endy
Profile Blog Joined May 2009
Switzerland8970 Posts
March 28 2016 19:41 GMT
#113
does sc2 even have a public API in order to code an AI?
ॐ
andrewlt
Profile Joined August 2009
United States7702 Posts
March 28 2016 19:42 GMT
#114
On March 29 2016 02:39 diabcockiful wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?



Yeah, but SC2 requires a little more creativity than Go or chess... I think eventually an AI could be better, of course, but I don't think we're anywhere near that point yet. There aren't predetermined tiles or spots for pieces to move... SC2 units don't have simple rules, and there are almost infinite possibilities. I suppose you could teach an AI to macro like an animal and then A-move over someone, or maybe even attempt to set up concaves. But man, there are a lot more factors involved in winning an SC2 engagement than taking someone's rook in chess. And while I don't know the rules of Go, it looks like you have one type of unit and a grid to move on... No setting up army compositions, or gaining vision of your opponent, etc.

Maybe I'll be surprised as to how far AIs have come, but methinks they will be mechanically perfect but fairly retarded in many ways.


It seems like there is a huge divide in this thread between the people who know Go and those who don't...
Musicus
Profile Joined August 2011
Germany23576 Posts
Last Edited: 2016-03-28 19:45:06
March 28 2016 19:44 GMT
#115
On March 29 2016 04:41 endy wrote:
does sc2 even have a public API in order to code an AI?


Since they are already talking, I'm sure Blizzard will provide Google with whatever they need. The publicity for sc2 will be insane.
Maru and Serral are probably top 5.
kingjames01
Profile Blog Joined April 2009
Canada1603 Posts
March 28 2016 19:48 GMT
#116
On March 29 2016 03:49 The Bottle wrote:
But listen, because I think you're still misunderstanding me. I know that they're taking shortcuts to greatly reduce the search space of possible moves in Go; the paper states this pretty clearly. But the problem is still that, because of the sheer number of possible moves you can make, and the stark difference in outcome between those moves, it is incredibly difficult to reduce the space in a way intelligent enough that it loses as little information as possible about the best moves. The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft. Yes, they will have to take new shortcuts, of the sort that they didn't take in Go. But every time they do such a thing, they have to be incredibly careful not to remove certain specific crucial moves, or coarse-grain them in with other moves that have drastically different results. (An example of this is ZvZ ling/bane wars, where a couple of pixels' difference in a move command can be the difference between 2 dead lings and 20 dead lings, or aiming a disruptor shot, or things like that.)


Though I think you have an understanding of the workings of AlphaGo beyond that of the average layperson, I just want to point out that some of the language you use is incorrect. You seem to imply that DeepMind will influence AlphaGo's decisions, which is not the case.

Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.
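For readers wondering what "stronger weights on specific paths" means concretely: AlphaGo's tree search scores candidate moves by combining the move's learned prior with its visit count and observed value. A simplified, schematic version of that selection rule (constants and bookkeeping are placeholders, not the paper's exact implementation):

```python
import math

# Schematic PUCT-style selection in the spirit of the AlphaGo paper.
def select_child(children, c_puct=1.0):
    """children: list of dicts with prior P, visit count N, and total value W."""
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0                      # mean value so far
        u = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])  # prior-weighted bonus
        return q + u
    return max(children, key=score)

# Moves with a high learned prior get explored more until their results say otherwise.
best = select_child([
    {"P": 0.6, "N": 10, "W": 5.5},
    {"P": 0.3, "N": 2, "W": 1.4},
    {"P": 0.1, "N": 0, "W": 0.0},
])
```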
Who would sup with the mighty, must walk the path of daggers.
Mendelfist
Profile Joined September 2010
Sweden356 Posts
March 28 2016 20:00 GMT
#117
On March 29 2016 04:48 kingjames01 wrote:
Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.

I'm going off on a slight tangent here, but someone on the DeepMind team spoke of a possible future development: doing AlphaGo again, but without the first step of training the neural nets on lots of human games. While they didn't teach it any specific tricks, those games may have taught it bad habits. Go is actually a very little-researched game, which Go Seigen proved in the middle of the last century by turning everything upside down. I would very, VERY much want them to do this instead of trying Starcraft. It could have vast implications for our understanding of Go. Maybe the best starting move is right in the middle?
purakushi
Profile Joined August 2012
United States3300 Posts
Last Edited: 2016-03-28 20:12:33
March 28 2016 20:08 GMT
#118
On March 29 2016 04:44 Musicus wrote:
On March 29 2016 04:41 endy wrote:
does sc2 even have a public API in order to code an AI?


Since they are already talking, I'm sure Blizzard will provide Google with whatever they need. The publicity for sc2 will be insane.


Yep, we will probably see an influx of "ded gaem" comments.

Unless AlphaStar is using computer vision, Blizzard should just release to everyone whatever SC2 API will be required for this to happen.
T P Z sagi
Incognoto
Profile Blog Joined May 2010
France10239 Posts
March 28 2016 20:11 GMT
#119
Go has nothing to do with chess. Chess can be brute-forced; Go cannot.
maru lover forever
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 28 2016 20:13 GMT
#120
On March 29 2016 05:00 Mendelfist wrote:
On March 29 2016 04:48 kingjames01 wrote:
Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.

I'm going off on a slight tangent here, but someone on the DeepMind team spoke of a possible future development: doing AlphaGo again, but without the first step of training the neural nets on lots of human games. While they didn't teach it any specific tricks, those games may have taught it bad habits. Go is actually a very little-researched game, which Go Seigen proved in the middle of the last century by turning everything upside down. I would very, VERY much want them to do this instead of trying Starcraft. It could have vast implications for our understanding of Go. Maybe the best starting move is right in the middle?


The issue with removing the first 'supervised' learning process for the policy network - namely, training AlphaGo without copying human moves at the very start - is that it might then take months and months before converging. Arguably the policies learned that way might be stronger, but the networks might take too long, or simply fail to converge. So that approach depends on future progress in unsupervised machine learning methods.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira