BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 22

Forum Index > SC2 General
568 Comments
cutha
Profile Joined April 2017
2 Posts
Last Edited: 2017-05-26 15:45:18
May 26 2017 15:40 GMT
#421
I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make for a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data, that's an empirical question, but given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. But that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be looking at this from the wrong perspective here, as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural at X min, I need to sacrifice an overlord to scout. If I see Y amounts of certain units, I need to adopt plan B," etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning active actions like build-order "strategy" and mechanics.

To elaborate more: for the double medivac drop in TvZ, the Zerg AI can precisely keep track of the exact number of marines and any other units/SCVs and form an optimized defense strategy based on map length, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other circumstances, which is impossible even for top human players to keep track of.
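The conditional scouting policy described in this post ("if I didn't see a natural at X min, sacrifice an overlord...") amounts to a small decision rule. A toy Python sketch, with all thresholds and plan names purely hypothetical:

```python
# Toy sketch of a hand-written scouting/reaction policy of the kind the
# post describes. All thresholds and plan names are hypothetical.

def zerg_policy(game_time_min, saw_natural, marines_seen, overlords):
    """Return a high-level action given limited scouting information."""
    # No natural expansion spotted by the expected timing: pay for information.
    if game_time_min >= 3.5 and not saw_natural and overlords > 0:
        return "sacrifice_overlord_to_scout"
    # Enough marines sighted to suggest a drop or push: switch plans.
    if marines_seen >= 16:
        return "plan_b_defensive_lings"
    # Default: keep droning.
    return "maximize_drones"

print(zerg_policy(4.0, saw_natural=False, marines_seen=0, overlords=2))
# prints "sacrifice_overlord_to_scout"
```

A learned policy would replace these hand-tuned rules with a function fit from many games, but the input/output contract is the same.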
Heartland
Profile Blog Joined May 2012
Sweden24602 Posts
May 26 2017 15:42 GMT
#422
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 15:50 GMT
#423
On May 27 2017 00:42 Heartland wrote:
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!


We're in mourning

World's best Go player flummoxed by Google’s ‘godlike’ AlphaGo AI
https://www.theguardian.com/technology/2017/may/23/alphago-google-ai-beats-ke-jie-china-go

After his defeat, a visibly flummoxed Ke – who last year declared he would never lose to an AI opponent – said AlphaGo had become too strong for humans, despite the razor-thin half-point winning margin.

“I feel like his game is more and more like the ‘Go god’. Really, it is brilliant,” he said.

Ke vowed never again to subject himself to the “horrible experience”.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 15:59 GMT
#424
On May 27 2017 00:40 cutha wrote:
I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make for a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data, that's an empirical question, but given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. But that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be looking at this from the wrong perspective here, as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural at X min, I need to sacrifice an overlord to scout. If I see Y amounts of certain units, I need to adopt plan B," etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning active actions like build-order "strategy" and mechanics.

To elaborate more: for the double medivac drop in TvZ, the Zerg AI can precisely keep track of the exact number of marines and any other units/SCVs and form an optimized defense strategy based on map length, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other circumstances, which is impossible even for top human players to keep track of.


I don't think we really disagree here at a fundamental level. I agree that the AI can learn a lot of the things that are needed. At a general level, I was disagreeing with two ideas that I've seen presented. First, that an AI learning Starcraft is a "lots of data" question, which is the answer to a lot of learning problems but for various reasons I contest that in this case. Second, that it's in the margins of mechanics or strategic insight that the AI will win games. It's going to have to win games just like everybody else: making inferences from limited information. I think we probably agree on both of these points.

I think where we probably disagree is that I think the training probably isn't going to be best done by a careful sample. I just really, really don't think that Starcraft is the kind of problem that can be solved the way games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess, if not Go, the whole board is completely known to both players. The AI doesn't have to make inferences about the actual state of affairs, because the actual state of affairs is known. When it has to start making those judgments, even if they are high-reliability judgments like "if I didn't see a natural at X minutes, then do Y", you're opening up a brand new world of complexity.
Writersator arepo tenet opera rotas
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 16:05 GMT
#425
Neural networks are already known to be strong classifiers of X vs. not-X (e.g. spam vs. not spam). Thus, they already make inferences from limited information.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
loginn
Profile Blog Joined January 2011
France815 Posts
Last Edited: 2017-05-26 17:07:21
May 26 2017 17:05 GMT
#426
While it's true that AIs have a harder time in partially observable environments, I don't think it'll take more than a decade for AIs to beat humans at SC2. And that's a conservative timeline in my opinion. Just two years ago, Go AIs weren't predicted to beat humans for another 30 years.

But if I were to build a NN to determine whether a mail is spam, I would feed it the whole email instead of a few binary values on whether a word is present or not. That sounds more like a naive Bayes approach.
Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
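The word-presence approach loginn contrasts with feeding in the whole email is essentially naive Bayes over a bag of words. A minimal sketch with toy data and add-one smoothing (illustrative only, not any real spam filter):

```python
import math
from collections import Counter

# Minimal naive Bayes spam classifier over word features: the
# "few binary values on whether a word is there" approach.
spam_docs = [["free", "money", "now"], ["win", "money", "free"]]
ham_docs  = [["meeting", "at", "noon"], ["project", "update", "now"]]

def train(docs):
    counts = Counter(w for d in docs for w in d)
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(words, counts, total):
    # Add-one (Laplace) smoothing so unseen words don't zero out the product.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(words):
    # Equal class priors are assumed for this toy example.
    spam_ll = log_likelihood(words, spam_counts, spam_total)
    ham_ll = log_likelihood(words, ham_counts, ham_total)
    return "spam" if spam_ll > ham_ll else "ham"

print(classify(["free", "money"]))       # prints "spam"
print(classify(["project", "meeting"]))  # prints "ham"
```

A neural network fed the raw text instead would learn its own features rather than relying on hand-chosen word presence, which is loginn's point.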
Charoisaur
Profile Joined August 2014
Germany16062 Posts
May 26 2017 17:16 GMT
#427
I heard Google's new AI "AlphaSC2" is ready and will be tested tomorrow in the GSL.
Many of the coolest moments in sc2 happen due to worker harassment
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 17:38 GMT
#428
I don't know why it's played up as some mystical bonjwa-level inference mastery, predicting the possibilities of your opponent's build order and strategy. I don't think it's all that complicated a decision tree.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Blardy
Profile Joined January 2011
United States290 Posts
May 26 2017 17:38 GMT
#429
If the AI is allowed unlimited or 1000+ APM at all times, then no human will beat it within a year. If it were given a cap of 400, then I don't see an AI beating a human for a long time.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 17:48:12
May 26 2017 17:44 GMT
#430
Nah, the AI will adapt. It might even use its extra computational power to, in 1 ms, assess which of 10-100 potential actions are likely to have the most effect on its chances of winning. Sort of a real-time Most Effective Actions calculator.

This would be interesting as it could be tuned to always maintain its APM lower than its opponent.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
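The "Most Effective Actions calculator" idea can be sketched as a ranking problem: estimate each candidate action's effect on win probability, then spend a capped action budget on the top ones. Everything below (action names, deltas, the evaluator itself) is hypothetical; a real agent would use a learned value function:

```python
# Toy sketch of the "Most Effective Actions" idea: given a limited
# action budget (an APM cap), spend it on the candidates with the
# largest estimated effect on win probability.

def most_effective_actions(candidates, budget):
    """candidates: list of (action_name, estimated_win_prob_delta)."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:budget]]

candidates = [
    ("micro_front_lings", 0.04),
    ("inject_larva",      0.03),
    ("move_overlord",     0.01),
    ("rally_reinforce",   0.02),
]
# A 400 APM cap leaves roughly 6-7 actions per second to allocate.
print(most_effective_actions(candidates, budget=3))
# prints ['micro_front_lings', 'inject_larva', 'rally_reinforce']
```

Tuning the budget below the opponent's APM, as suggested above, just means shrinking `budget` while keeping the ranking.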
cutha
Profile Joined April 2017
2 Posts
May 26 2017 17:45 GMT
#431
I think where we probably disagree is that I think the training probably isn't going to be best done by a careful sample. I just really, really don't think that Starcraft is the kind of problem that can be solved the way games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess, if not Go, the whole board is completely known to both players. The AI doesn't have to make inferences about the actual state of affairs, because the actual state of affairs is known. When it has to start making those judgments, even if they are high-reliability judgments like "if I didn't see a natural at X minutes, then do Y", you're opening up a brand new world of complexity.


I did misinterpret you in the previous post. But I think what I said still stands: all winning strategies, regardless of form, be it reactive defense, aggressive all-ins, or pure superior mechanics, are very reasonably trainable knowledge. What you are basically saying is that it is impossible to make a "perfect" judgement due to the fog of war, so there always has to be some kind of educated guessing and gambling involved in the game. And this is different from chess/Go, since all the pieces are always visible on the board.

However, even knowing exactly the current "state" of the game, AlphaGo plays by its trained neural network, which is based on human experience plus its own reinforcement learning. There is no way to play perfectly from the current state of the game, because there is an unimaginably large number of variations for future moves. In this regard, the unknown factor due to the large number of variations is similar to the unknown factor in StarCraft 2 due to the fog of war. If you compare the strategic complexity one Go player can employ given a certain state of the board with the number of popular choices any top SC2 player would make given an in-game situation, it seems to me SC2 is complete child's play. Think of it from another perspective: a top SC2 player needs to decide his reactive actions based on scouting information within seconds, but a top Go player may often need minutes for a single turn.

The hard part of SC2 for an AI is achieving balanced performance across a multitude of different aspects like mechanics, micro under restricted APM, reactive actions, etc. But for the strategic part, if AlphaGo can conquer Go, SC2 is a no-brainer in my opinion.
niteReloaded
Profile Blog Joined February 2007
Croatia5282 Posts
May 26 2017 18:52 GMT
#432
this is laughable.

It would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.

fishjie
Profile Blog Joined September 2010
United States1519 Posts
May 26 2017 18:56 GMT
#433
Depends - would the AI be able to have unlimited APM? Or would there be a cap on APM? If there is an APM cap, then strategy would be more important, and the AI would have a tougher time.

One of the key ideas that made AlphaGo work is that they looked at the probability that either side would win from a given board position if the rest of the game were played out using random moves. They ran Monte Carlo simulations to play those out and used the results to evaluate how good a position was. That assumption won't work in a game like StarCraft.

https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/
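The random-playout evaluation fishjie describes can be demonstrated on a toy game: estimate a position's value as the fraction of random playouts won by the player to move. This is a bare sketch of plain Monte Carlo evaluation, not AlphaGo's network-guided version:

```python
import random

# Monte Carlo position evaluation: play a position out many times with
# random moves and count wins. Toy game: players alternately remove 1-3
# stones; whoever takes the last stone wins.

def random_playout(stones, to_move):
    player = to_move
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player          # this player took the last stone
        player = 1 - player
    return player

def mc_value(stones, to_move, n=20000):
    """Estimated win probability for the player to move."""
    wins = sum(random_playout(stones, to_move) == to_move for _ in range(n))
    return wins / n

print(mc_value(1, 0))   # prints 1.0: taking the last stone wins on the spot
print(mc_value(21, 0))  # some estimate in [0, 1]
```

The assumption doing the work here is that random continuations are informative about a position's quality, which holds reasonably well in Go but, as fishjie says, breaks down when "random moves" in an RTS mean incoherent clicking.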
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 19:28 GMT
#434
So, in the article:

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level of intuition to the game-play."

The Monte Carlo method that you mention is the tree search procedure, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for StarCraft; otherwise the notion of a 'move' isn't even well defined.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
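How the networks "guide" the tree search can be illustrated with the AlphaGo-style selection rule (often called PUCT): each child node is scored by its mean value so far plus an exploration bonus weighted by the policy network's prior probability for that move. A minimal sketch with made-up numbers:

```python
import math

# Sketch of network-guided MCTS selection: score = mean value + an
# exploration bonus scaled by the policy network's prior (PUCT rule).

def puct_score(q, prior, parent_visits, child_visits, c=1.5):
    # q: mean value estimate for this child so far
    # prior: policy network's probability for the move leading here
    return q + c * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(children, parent_visits):
    """children: list of (move, q, prior, visits); pick the best PUCT score."""
    return max(children,
               key=lambda ch: puct_score(ch[1], ch[2], parent_visits, ch[3]))[0]

children = [("move_a", 0.55, 0.10, 40),
            ("move_b", 0.50, 0.40, 10),   # weaker value so far, strong prior
            ("move_c", 0.30, 0.05, 5)]
print(select_child(children, parent_visits=55))  # prints "move_b"
```

This is the sense in which the convolutional networks provide "intuition": the prior pulls search effort toward moves the network likes before the playouts have proven them.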
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
May 26 2017 20:04 GMT
#435
On May 27 2017 03:52 niteReloaded wrote:
this is laughable.

It would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.



That defeats the entire exercise of making the AI. It's supposed to try to outsmart humans, so the APM will be limited.
Moderator
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 20:17:47
May 26 2017 20:07 GMT
#436
Can't wait to see what race the AI favors. This might even change depending on what APM setting it's on. Well, and the map come to think of it.

Apparently in Go, it gives a slight edge to the white stones (playing 2nd).

Unlike in the first round, AlphaGo played the black stones, which means it played first, something it views as a small handicap. "It thinks there is a just a slight advantage to the player taking the white stones,” AlphaGo’s lead researcher, David Silver, said just before the game. And as match commentator Andrew Jackson pointed out, Ke Jie is known for playing well with white.


Oh, it also defeated a team of 5 Champions today
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 20:28 GMT
#437
Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click.


Ya, considering 400 APM gives it an average of 150 milliseconds per click and modern processors run at around 4 GHz, that's 600 million raw CPU cycles per click, and "AlphaGo ran on 48 CPUs and 8 GPUs and the distributed version of AlphaGo ran on 1202 CPUs and 176 GPUs."
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
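The per-click budget works out as follows (assuming a single 4 GHz core, which is only a rough stand-in for the hardware quoted above):

```python
# Back-of-envelope: compute the thinking budget per action at a capped APM.
def per_action_budget(apm, clock_hz=4_000_000_000):
    seconds_per_action = 60.0 / apm
    return seconds_per_action * 1000, seconds_per_action * clock_hz

ms, cycles = per_action_budget(400)
print(ms)      # prints 150.0: milliseconds between actions at 400 APM
print(cycles)  # prints 600000000.0: cycles per action on a 4 GHz core
```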
KungKras
Profile Joined August 2008
Sweden484 Posts
May 26 2017 20:38 GMT
#438
4 pool vs the computer. All that counts is micro. No macro can save it
"When life gives me lemons, I go look for oranges"
fishjie
Profile Blog Joined September 2010
United States1519 Posts
Last Edited: 2017-05-26 21:34:46
May 26 2017 21:32 GMT
#439
On May 27 2017 04:28 mishimaBeef wrote:
So, in the article:

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level of intuition to the game-play."

The Monte Carlo method that you mention is the tree search procedure, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for StarCraft; otherwise the notion of a 'move' isn't even well defined.


Also in the article:
value of a state = value network output + simulation result

I'd be interested to see how much they weighted the Monte Carlo simulations vs the value network (the convolutional neural net). It sounds like trying either one solo did worse than the combination, so both are needed. But I don't think the Monte Carlo part would work in StarCraft, because you can't just play random moves in an RTS. Furthermore, in a turn-based game you can only make one move per turn, so you can easily simulate the resulting positions from a current position. In an RTS you can move multiple units with different abilities, and the combinatorial explosion would be disastrous.

Still, if I understand the article correctly, the neural net was used to evaluate positions, classifying them as "good" or "bad", and it was trained by playing games against itself. The input to the neural net would presumably be the positions of the pieces. Currently, neural networks take a long time to train, and with every hidden layer you add, they get slower. In a game like StarCraft, far more inputs would be needed to represent a given position than in Go, and getting the NN to converge would take much longer.
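fishjie's combinatorial-explosion point is easy to make concrete: with n units each choosing among k actions in the same game tick, the joint action space is k^n, dwarfing the few hundred legal moves available in a Go position (the unit and action counts below are illustrative):

```python
# Why "one move per turn" search breaks down in an RTS: the number of
# joint actions grows exponentially in the number of units.

def joint_actions(units, actions_per_unit):
    return actions_per_unit ** units

for units in (5, 20, 100):
    print(units, joint_actions(units, 10))
# 5 units  -> 10**5  joint actions per tick
# 20 units -> 10**20
# 100 units -> 10**100, far beyond anything enumerable
```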
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 21:46 GMT
#440
Yeah, if you consider move = click, then it explodes. But usually you think in terms of high-level "moves" (tech to vessel, pump marine medic, deflect muta) and use clicks to implement those higher-level strategic "moves".
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
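The macro-move hierarchy described here can be sketched as a lookup from strategic "moves" to primitive commands, so search happens over the small macro space while clicks remain the execution layer (all names below are illustrative, not from any real API):

```python
# Hierarchical action abstraction: plan over a handful of macro "moves",
# each compiled down to click-level commands. Names are made up.

MACRO_MOVES = {
    "pump_marine_medic": ["select_barracks", "train_marine", "train_medic"],
    "deflect_muta":      ["select_scv", "build_turret", "pull_marines_home"],
}

def compile_move(move):
    """Expand one strategic 'move' into primitive click-level actions."""
    return MACRO_MOVES[move]

print(compile_move("pump_marine_medic"))
# prints ['select_barracks', 'train_marine', 'train_medic']
```

Searching over two macro moves instead of every unit-level click combination is exactly the reduction mishimaBeef is pointing at.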
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.