DeepMind sets AlphaGo's sights on SCII - Page 6

Forum Index > SC2 General
HellHound
Profile Joined September 2014
Bulgaria, 5962 Posts
March 28 2016 17:19 GMT
#101
On March 28 2016 13:12 Ilikestarcraft wrote:
On March 28 2016 13:04 Fran_ wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?


And if you don't allow the machine to have arbitrary APM, what APM will you allow? The choice is completely arbitrary.

I don't think an apm cap would be completely arbitrary. Somewhere around the average apm of top pros or maybe a little higher I think is reasonable.

EPM not APM.
Classic GosoO |sOs| Everyone has to give in, let Life win | Zest Is The Best | Roach Cultist | I recognize the might and wisdom of my Otherworldly overlord | Air vs Air 200/200 SC2 is best SC2 | PRIME has been robbed | Fuck prime go ST | ROACH ROACH ROACH
ZigguratOfUr
Profile Blog Joined April 2012
Iraq, 16955 Posts
March 28 2016 17:39 GMT
#102
On March 29 2016 02:19 HellHound wrote:
On March 28 2016 13:12 Ilikestarcraft wrote:
On March 28 2016 13:04 Fran_ wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?


And if you don't allow the machine to have arbitrary APM, what APM will you allow? The choice is completely arbitrary.

I don't think an apm cap would be completely arbitrary. Somewhere around the average apm of top pros or maybe a little higher I think is reasonable.

EPM not APM.


Even EPM is "inflated" since many of the actions everyone does are tiny corrections on earlier clicks (due to misclicks and whatnot) which a computer wouldn't need.
diabcockiful
Profile Joined January 2016
22 Posts
March 28 2016 17:39 GMT
#103
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?



Yeah, but SC2 requires a little more creativity than GO or Chess...I think eventually an AI could be better of course, but I don't think we're anywhere near that point yet. There aren't predetermined tiles or spots for pieces to move...SC2 units don't have simple rules, and there are almost infinite possibilities. I suppose you could teach an AI to macro like an animal and then A-move over someone, or maybe even attempt to setup concaves. But man, there are a lot more factors involved in winning an SC2 engagement than taking someone's rook in chess. And while I don't know the rules of Euro.Go, it looks like you have one type of unit and a grid to move on... No setting up army compositions, or gaining vision of your opponent, etc.

Maybe I'll be surprised as to how far AIs have come, but methinks they will be mechanically perfect but fairly retarded in many ways.
Pseudorandom
Profile Joined April 2010
United States, 120 Posts
March 28 2016 17:58 GMT
#104
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.
"This is scissors, paper is fine, paper just needs to learn how to play. Paper needs to stop complaining." - richlol
Incognoto
Profile Blog Joined May 2010
France, 10239 Posts
March 28 2016 18:22 GMT
#105
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that deepmind wins solely off of micro.

Unless they implement an APM limiter or something
maru lover forever
Hexe
Profile Joined August 2014
United States, 332 Posts
March 28 2016 18:36 GMT
#106
On March 29 2016 03:22 Incognoto wrote:
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that deepmind wins solely off of micro.

Unless they implement an APM limiter or something

And all the zerg human hero needs to do is force a bunch of wasted stims and threaten to counter
Clonester
Profile Joined August 2014
Germany, 2808 Posts
March 28 2016 18:40 GMT
#107
On March 29 2016 03:36 Hexe wrote:
On March 29 2016 03:22 Incognoto wrote:
On March 29 2016 02:58 Pseudorandom wrote:
perfect macro and perfect micro will solve a lot of the 'strategical' issues by simply winning fights that, as humans, we believe you should lose. Easiest is TvZ, off creep perfect micro means you shouldn't lose any marines to banelings as long as you have stim.


Yeah I think it's possible that deepmind wins solely off of micro.

Unless they implement an APM limiter or something

And all the zerg human hero needs to do is force a bunch of wasted stims and threaten to counter


Deepmind will just send over its starting SCVs and win a 12-SCV vs 15-Drone fight.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
The_Masked_Shrimp
Profile Joined February 2012
425 Posts
Last Edited: 2016-03-28 18:47:58
March 28 2016 18:45 GMT
#108
For people wondering whether Blizzard will "tolerate" Google developing this, since it is effectively a 3rd-party program/mod: there isn't a single company that would turn down the kind of exposure Google can bring. A good AlphaSC2 would save Blizzard millions of dollars in advertising and would probably bring more players to the scene.

Oh, and also: an AI cannot waste stims. You can program it so that it only stims when it knows it can reach your units for sure even if they run away, and there is no turning back on individually controlled stimmed units lol.

The AI won't need a fancy late-game strategy to win; simple rush builds with perfect micro should be enough to defeat humans. Doing the same in a long macro game would be a lot more difficult.
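The "an AI cannot waste stims" point above boils down to a simple reachability check before committing. A minimal sketch, with invented unit stats and names (not real SC2 values or API calls):

```python
# Illustrative constants only -- not actual SC2 values.
MARINE_SPEED = 3.15        # assumed base move speed
STIM_SPEED_BONUS = 1.5     # assumed speed multiplier while stimmed
STIM_DURATION = 11.0       # assumed seconds the stim effect lasts

def should_stim(distance_to_target: float, target_speed: float) -> bool:
    """Stim only if stimmed marines can close the gap before stim wears off."""
    closing_speed = MARINE_SPEED * STIM_SPEED_BONUS - target_speed
    if closing_speed <= 0:
        return False  # the target outruns even stimmed marines
    return distance_to_target / closing_speed <= STIM_DURATION
```

A human baiting stims is exploiting exactly the cases this check rules out: targets the marines can never actually catch.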
The Bottle
Profile Joined July 2010
242 Posts
Last Edited: 2016-03-28 19:09:49
March 28 2016 18:49 GMT
#109
On March 29 2016 02:14 Mendelfist wrote:
On March 29 2016 01:33 The Bottle wrote:
I'm not talking about the number of possible game states. That's not important for a machine learning algorithm. Number of possible moves you can make is important. That is, the way you encode a particular action. This is essential for making a training data set for your algorithm to learn.

Why do you assume that the best way to solve this problem is to throw every single game state variable or pixel at a neural net and then hope that it somehow works out?


Your billiard example doesn't work here, because there's no self-learning AI algorithm for billiard, at least not that I know of.

Then imagine one. One that learns for example by self play. Do you really think that it would have a hard time finding "the right moves" just because they are infinite in number? Edit: And the question is not if it can be better than a script, which can be near perfect. The question is if you think it's a hard problem.

They will have to find clever ways to transform the game input data in order to remove redundancies, and coarsen the scale of discrete moves. I'm sure they did something like that with Go already, but it will be substantially harder for SC2. You say it's just a different problem, sure. But a much, much harder problem, one I'm not quite sure they'll solve, even knowing their success with Go.

Yes, THIS is the problem, and once you have done this the "number of possible moves" in the original problem is irrelevant. That only tells you that you have THIS problem on your hands and that you can't solve it by ordinary search algorithms. You have to find a way to reduce the original problem by levels of abstraction. I don't know if there is a way for an AI to find these abstractions by itself. Maybe that's how AlphaGo works. In any case, I'm not in the least convinced that it's as hard as you are trying to make it sound. In the simplest form you can have an ordinary scripted bot that asks an AI for advice: "Attack now", "build sentries", "expand there", etc. Or you could throw every pixel at it, like you want. I don't think that would work. Or something in between. How about that?


What you explained at the beginning is actually similar to how they train the initial state of the Go algorithm, before they get into the reinforcement learning. It's not sufficient on its own to make the algorithm as intelligent as it is, but that is what they do in their initial stages.

To clarify, they don't feed all possible permutations of the board, because that's obviously intractable. But they do feed it a large set of board positions from many games, with a target variable of which side won the game, and train a neural network on that. The input data for that is not hard to encode at all. It's simply a data set of 361 trinary points (black, white, or blank) and a binary target variable (which side won). Then in practice, given a board state, you calculate the probability of victory of all possible subsequent board states (all single turn moves you can make from the given state) using the NN trained by the above process. Such a method was used as the initial stage of the Go algorithm, as explained in this paper
https://vk.com/doc-44016343_437229031?dl=56ce06e325d42fbc72
before they started the reinforcement learning, but this would be impossible to do for Starcraft. (I mean this particular method, not any supervised learning method.)

But listen, because I think you're still misunderstanding me. I know that they're taking shortcuts to greatly reduce the search space of possible moves in Go; the paper states this pretty clearly. But the problem is still that, because of the sheer number of possible moves you can make, and the stark difference in outcome between those moves, it is incredibly difficult to reduce the space in an intelligent enough way to minimally reduce the information of best moves possible. The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft. Yes, they will have to take new shortcuts, of the sort that they didn't take in Go. But every time they do such a thing, they have to be incredibly careful not to remove certain specific crucial moves, or coarse-grain them together with other moves that have drastically different results. (An example of this is in ZvZ ling/bane wars, where a couple of pixels' difference in motion can be the difference between 2 dead lings and 20 dead lings, or aiming a disruptor shot, or things like that.)

I should clarify, I don't think this task is impossible. For sure, in principle it's very feasible to imagine a self trained Starcraft algorithm that can beat any human. But I'm trying to explain why it's monumentally more difficult than training a Go algorithm. And why the actual difference in depth of strategy between the two games from a heuristic standpoint is not nearly as important as the complexity of move permutations. You say it's a different problem. Well it's a different problem in the same sense that doing long division and proving Fermat's Last Theorem are different problems.

As for the billiard example. I did explain how the training set of a deep learning NN algorithm of billiard can be encoded, and why it's incredibly easy to do this in comparison to the other problems. I can clarify, but from your response I feel like you didn't read that bit. In your defense, it was sneakily put in beside my other point, so maybe I'll let it sit a little longer.
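The supervised value stage described above (positions in, eventual winner out, then pick the successor position with the highest predicted win probability) can be sketched in miniature. This shows only the shape of the idea: the real AlphaGo value network is a deep convolutional net trained on real game records, while this toy uses a linear model, synthetic labels, and hypothetical `legal_moves`/`apply_move` game-engine hooks:

```python
import numpy as np

# Positions as 361 ternary features (-1 white, 0 empty, +1 black), labeled
# by the eventual winner. Labels here are synthetic: black "wins" when it
# holds more points, a stand-in for real game outcomes.
rng = np.random.default_rng(0)
X = rng.integers(-1, 2, size=(1000, 361)).astype(float)
y = (X.sum(axis=1) > 0).astype(float)

w = np.zeros(361)
for _ in range(200):  # plain gradient descent on the logistic loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / len(y)

def win_prob(board: np.ndarray) -> float:
    """Predicted probability that black wins from this position."""
    return float(1.0 / (1.0 + np.exp(-board @ w)))

def best_move(board, legal_moves, apply_move):
    """Move selection as the post describes: score every successor position
    with the value model and play the highest-probability one.
    (`legal_moves` and `apply_move` are hypothetical engine hooks.)"""
    return max(legal_moves, key=lambda m: win_prob(apply_move(board, m)))
```

The encoding problem the post emphasizes is visible even here: the Go input is a fixed 361-slot vector, while a StarCraft "position" has no comparably small, fixed encoding.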
Karis Vas Ryaar
Profile Blog Joined July 2011
United States, 4396 Posts
March 28 2016 18:50 GMT
#110
I want to know, if they do end up doing this, whether they'll go with a famous name to play or with whoever's the best at the time.
"I'm not agreeing with a lot of Virus's decisions but they are working" Tasteless. Ipl4 Losers Bracket Virus 2-1 Maru
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 28 2016 19:15 GMT
#111
Terran drops could be abused hard by this bot, assuming it's not restricted.
Mendelfist
Profile Joined September 2010
Sweden, 356 Posts
March 28 2016 19:32 GMT
#112
On March 29 2016 03:49 The Bottle wrote:
The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft.

And I'm saying that you're just making things up. An intractably large number of possible moves (or number of input variables) doesn't necessarily mean that the problem is hard (although it is a requirement), and reducing Starcraft to a problem at a higher level of abstraction than pixels or coordinates isn't necessarily very hard either. At least you haven't shown any arguments for it. Once you have moved to a high abstraction level, Starcraft IS simple compared to Go, which cannot be reduced to builds or strategies in any similar way. This is the reason why I think it's at least possible that Starcraft is even easier to master than Go for an AI.

it is incredibly difficult to reduce the space in an intelligent enough way to minimally reduce the information of best moves possible

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.
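The higher level of abstraction Mendelfist argues for (earlier phrased as "a scripted bot that asks an AI for advice") can be made concrete with a tiny abstract action space. Every intent name and script string below is invented for illustration:

```python
from enum import Enum, auto

# Hypothetical abstract action space: the learner picks an intent, and
# hand-written scripts expand it into concrete low-level orders.
class Intent(Enum):
    ATTACK = auto()
    EXPAND = auto()
    BUILD_ARMY = auto()
    DEFEND = auto()
    SCOUT = auto()

def execute(intent: Intent) -> str:
    """Scripted layer: turn one abstract intent into concrete orders."""
    scripts = {
        Intent.ATTACK: "rally army, a-move toward the enemy base",
        Intent.EXPAND: "send a worker, place an expansion at the nearest base",
        Intent.BUILD_ARMY: "queue units until supply-capped",
        Intent.DEFEND: "pull the army to the ramp, hold position",
        Intent.SCOUT: "send one unit to each unseen start location",
    }
    return scripts[intent]
# The AI's choices shrink from "any pixel, any frame" to len(Intent)
# options per decision tick.
```

This is the crux of the disagreement in the thread: whether such an abstraction loses the pixel-level micro decisions The Bottle considers crucial.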
endy
Profile Blog Joined May 2009
Switzerland, 8970 Posts
March 28 2016 19:41 GMT
#113
does sc2 even have a public API in order to code an AI?
ॐ
andrewlt
Profile Joined August 2009
United States, 7702 Posts
March 28 2016 19:42 GMT
#114
On March 29 2016 02:39 diabcockiful wrote:
On March 28 2016 12:53 Circumstance wrote:
The real-time aspect will be critical. Go, being turn-based, is purely a matchup of mind against "mind". If they allow the machine unlimited APM, then this won't be much of a match.

Also, you'd need representatives from all 3 races, wouldn't you?



Yeah, but SC2 requires a little more creativity than GO or Chess...I think eventually an AI could be better of course, but I don't think we're anywhere near that point yet. There aren't predetermined tiles or spots for pieces to move...SC2 units don't have simple rules, and there are almost infinite possibilities. I suppose you could teach an AI to macro like an animal and then A-move over someone, or maybe even attempt to setup concaves. But man, there are a lot more factors involved in winning an SC2 engagement than taking someone's rook in chess. And while I don't know the rules of Euro.Go, it looks like you have one type of unit and a grid to move on... No setting up army compositions, or gaining vision of your opponent, etc.

Maybe I'll be surprised as to how far AIs have come, but methinks they will be mechanically perfect but fairly retarded in many ways.


It seems like there is a huge divide in this thread between the people who know Go and those who don't...
Musicus
Profile Joined August 2011
Germany, 23576 Posts
Last Edited: 2016-03-28 19:45:06
March 28 2016 19:44 GMT
#115
On March 29 2016 04:41 endy wrote:
does sc2 even have a public API in order to code an AI?


Since they are already talking, I'm sure Blizzard will provide Google with whatever they need. The publicity for sc2 will be insane.
Maru and Serral are probably top 5.
kingjames01
Profile Blog Joined April 2009
Canada, 1603 Posts
March 28 2016 19:48 GMT
#116
On March 29 2016 03:49 The Bottle wrote:
But listen, because I think you're still misunderstanding me. I know that they're taking shortcuts to greatly reduce the search space of possible moves in Go; the paper states this pretty clearly. But the problem is still that, because of the sheer number of possible moves you can make, and the stark difference in outcome between those moves, it is incredibly difficult to reduce the space in an intelligent enough way to minimally reduce the information of best moves possible. The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft. Yes, they will have to take new shortcuts, of the sort that they didn't take in Go. But every time they do such a thing, they have to be incredibly careful not to remove certain specific crucial moves, or coarse-grain them together with other moves that have drastically different results. (An example of this is in ZvZ ling/bane wars, where a couple of pixels' difference in motion can be the difference between 2 dead lings and 20 dead lings, or aiming a disruptor shot, or things like that.)


Though I think you have an understanding of the workings of AlphaGo beyond the average layperson's, I just want to point out that some of the language you use is imprecise. You seem to imply that the DeepMind team steers AlphaGo's decisions, which is incorrect.

Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.
Who would sup with the mighty, must walk the path of daggers.
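The "stronger weights on specific paths to search" that kingjames01 describes corresponds to how a learned policy prior biases tree search. A minimal sketch of the PUCT selection rule used in AlphaGo-style MCTS; the constant `c` and the tuple-based node format are illustrative choices, not DeepMind's actual code:

```python
import math

# PUCT: a move's exploration bonus is scaled by the prior probability the
# policy network assigned to it, so high-prior paths get searched more.
def puct_score(q: float, prior: float, parent_visits: int,
               child_visits: int, c: float = 1.5) -> float:
    """Mean value plus a prior-weighted exploration bonus."""
    return q + c * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_move(children):
    """children: list of (move, mean_value, prior, visit_count) tuples."""
    parent_visits = sum(v for (_, _, _, v) in children) + 1
    return max(children,
               key=lambda ch: puct_score(ch[1], ch[2], parent_visits, ch[3]))[0]
```

No human tells the search where to look; the priors come from the trained network, which is the sense in which AlphaGo "learned to place stronger weights on specific paths."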
Mendelfist
Profile Joined September 2010
Sweden, 356 Posts
March 28 2016 20:00 GMT
#117
On March 29 2016 04:48 kingjames01 wrote:
Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.

I'm going off on a slight tangent here, but someone on the DeepMind team spoke of a possible future development: doing AlphaGo again, but without the first step of training the neural nets on lots of human games. While they didn't teach it any specific tricks, those games may have taught it bad habits. Go is actually a very under-researched game, which Go Seigen proved in the middle of the last century by turning everything upside down. I would very, VERY much want them to do this instead of trying Starcraft. It could have vast implications for our understanding of Go. Maybe the best starting move is right in the middle?
purakushi
Profile Joined August 2012
United States, 3301 Posts
Last Edited: 2016-03-28 20:12:33
March 28 2016 20:08 GMT
#118
On March 29 2016 04:44 Musicus wrote:
On March 29 2016 04:41 endy wrote:
does sc2 even have a public API in order to code an AI?


Since they are already talking, I'm sure Blizzard will provide Google with whatever they need. The publicity for sc2 will be insane.


Yep, we will probably see an influx of "ded gaem" comments.

Unless AlphaStar is using computer vision, Blizzard should just release to everyone the SC2 API that this will require.
T P Z sagi
Incognoto
Profile Blog Joined May 2010
France, 10239 Posts
March 28 2016 20:11 GMT
#119
Go has nothing to do with chess: chess can be brute-forced, Go cannot.
maru lover forever
MyLovelyLurker
Profile Joined April 2007
France, 756 Posts
March 28 2016 20:13 GMT
#120
On March 29 2016 05:00 Mendelfist wrote:
On March 29 2016 04:48 kingjames01 wrote:
Beyond a ladder (a large board-scale trap) calculator, the DeepMind team did not provide AlphaGo any hints, tips or tricks. Through many iterations, AlphaGo learned to place stronger weights on specific paths to search.

I'm going off on a slight tangent here, but someone on the DeepMind team spoke of a possible future development: doing AlphaGo again, but without the first step of training the neural nets on lots of human games. While they didn't teach it any specific tricks, those games may have taught it bad habits. Go is actually a very under-researched game, which Go Seigen proved in the middle of the last century by turning everything upside down. I would very, VERY much want them to do this instead of trying Starcraft. It could have vast implications for our understanding of Go. Maybe the best starting move is right in the middle?


The issue with removing the first 'supervised' learning process for the policy network - namely, training AlphaGo without copying human moves at the very start - is that it might then take months and months to converge. Arguably the policies learned that way might be stronger, but the networks might take too long, or simply fail to converge. So that approach depends on future progress in unsupervised machine learning methods.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira