BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 22

Forum Index > SC2 General
568 Comments
cutha
Profile Joined April 2017
2 Posts
Last Edited: 2017-05-26 15:45:18
May 26 2017 15:40 GMT
#421
I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make for a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data, that's an empirical question, but given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. But that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be looking at this from the wrong perspective as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural at X min, I need to sacrifice an overlord to scout. If I see Y amounts of certain units, I need to adopt plan B" etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning active actions like build-order "strategy" and mechanics.

To elaborate more: for the double medivac drop in TvZ, the Zerg AI can precisely keep track of the exact number of marines and any other units/SCVs, build an optimized defense based on map size, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other conditions, which even top human players cannot keep track of.
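The "if I didn't see a natural at X min, sacrifice an overlord" rule described above can be written down directly. A minimal sketch, with invented thresholds and action names (nothing here comes from a real bot):

```python
# Hypothetical scouting policy of the kind described in the post.
# Thresholds (minute 4, 16 marines) and action names are illustrative only.

def scouting_policy(saw_natural: bool, enemy_marines_seen: int, minute: int) -> str:
    """Return a high-level reaction for a Zerg player or agent."""
    if minute >= 4 and not saw_natural:
        return "sacrifice_overlord"      # one-base play suspected: buy information
    if enemy_marines_seen >= 16:
        return "plan_b_defensive_lings"  # large bio count spotted: switch to defense
    return "continue_droning"            # default: keep maximizing economy

print(scouting_policy(False, 0, 5))   # sacrifice_overlord
print(scouting_policy(True, 20, 6))   # plan_b_defensive_lings
print(scouting_policy(True, 2, 3))    # continue_droning
```

Whether such rules are hand-written or generalized from training games, they live in the same space of learnable state-to-action mappings the post describes.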
Heartland
Profile Blog Joined May 2012
Sweden24591 Posts
May 26 2017 15:42 GMT
#422
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 15:50 GMT
#423
On May 27 2017 00:42 Heartland wrote:
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!


We're in mourning

World's best Go player flummoxed by Google’s ‘godlike’ AlphaGo AI
https://www.theguardian.com/technology/2017/may/23/alphago-google-ai-beats-ke-jie-china-go

After his defeat, a visibly flummoxed Ke – who last year declared he would never lose to an AI opponent – said AlphaGo had become too strong for humans, despite the razor-thin half-point winning margin.

“I feel like his game is more and more like the ‘Go god’. Really, it is brilliant,” he said.

Ke vowed never again to subject himself to the “horrible experience”.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 15:59 GMT
#424
On May 27 2017 00:40 cutha wrote:


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be looking at this from the wrong perspective as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural at X min, I need to sacrifice an overlord to scout. If I see Y amounts of certain units, I need to adopt plan B" etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning active actions like build-order "strategy" and mechanics.

To elaborate more: for the double medivac drop in TvZ, the Zerg AI can precisely keep track of the exact number of marines and any other units/SCVs, build an optimized defense based on map size, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other conditions, which even top human players cannot keep track of.


I don't think we really disagree here at a fundamental level. I agree that the AI can learn a lot of the things that are needed. At a general level, I was disagreeing with two ideas that I've seen presented. First, that an AI learning Starcraft is a "lots of data" question, which is the answer to a lot of learning problems but for various reasons I contest that in this case. Second, that it's in the margins of mechanics or strategic insight that the AI will win games. It's going to have to win games just like everybody else: making inferences from limited information. I think we probably agree on both of these points.

I think where we probably disagree is that I don't think the training is best done with a carefully chosen sample. I just really don't think that Starcraft is the kind of problem that can be solved in the way that games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess if not Go, the whole board is known completely to both players. The AI doesn't have to make inferences about what the actual state of affairs is, because the actual state of affairs is known. When it has to start making those judgments, even if they are high-reliability judgments like "if I didn't see a natural at X minutes, then do Y", you're opening up a brand new world of complexity.
Writersator arepo tenet opera rotas
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 16:05 GMT
#425
Neural networks are already known to be strong classifiers of X or not X (ex. spam or not spam). Thus, they already make inferences from limited information.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
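The "X or not X" classification the post mentions can be made concrete with a tiny logistic model. This is a sketch with hand-set, made-up weights, just to show inference from a few binary features:

```python
import math

# Minimal logistic "spam or not spam" classifier with hand-picked weights.
# The feature names and weight values are invented for illustration.
WEIGHTS = {"contains_link": 1.5, "all_caps_subject": 2.0, "known_sender": -3.0}
BIAS = -0.5

def p_spam(features: dict) -> float:
    """Probability of spam via the logistic (sigmoid) function."""
    z = BIAS + sum(WEIGHTS[name] for name, present in features.items() if present)
    return 1 / (1 + math.exp(-z))

sketchy = {"contains_link": True, "all_caps_subject": True, "known_sender": False}
trusted = {"contains_link": False, "all_caps_subject": False, "known_sender": True}
print(p_spam(sketchy) > 0.5)  # True
print(p_spam(trusted) > 0.5)  # False
```

In a trained network the weights come from data rather than by hand, but the inference step, a score from limited evidence, is the same.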
loginn
Profile Blog Joined January 2011
France815 Posts
Last Edited: 2017-05-26 17:07:21
May 26 2017 17:05 GMT
#426
While it's true that AIs have a harder time in partially observable environments, I don't think it'll take more than a decade for AIs to beat humans at SC2. And that's a conservative timeline in my opinion. Just two years ago, Go AIs weren't predicted to beat humans for another 30 years.

But if I were to build a NN to determine whether a mail is spam, I would feed it the whole email instead of a few binary values on whether a word is there or not. That sounds more like a naive Bayes approach.
Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
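For contrast with the whole-email approach, here is the per-word-feature style the post says "sounds more like naive Bayes": a toy multinomial naive Bayes over word counts. The training documents are invented:

```python
import math
from collections import Counter

# Toy multinomial naive Bayes: classify by comparing per-class
# log-likelihoods of the words in a message. Corpus is made up.
spam_docs = ["win money now", "free money win"]
ham_docs = ["meeting at noon", "project notes attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(text, counts):
    # add-one (Laplace) smoothing over the shared vocabulary
    total = sum(counts.values())
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

msg = "win free money"
print(log_likelihood(msg, spam_counts) > log_likelihood(msg, ham_counts))  # True
```

Each word contributes independently, which is exactly the "a few values per word" simplification being contrasted with feeding the network the raw email.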
Charoisaur
Profile Joined August 2014
Germany16008 Posts
May 26 2017 17:16 GMT
#427
I heard Google's new AI "AlphaSC2" is ready and will be tested tomorrow in the GSL.
Many of the coolest moments in sc2 happen due to worker harassment
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 17:38 GMT
#428
I don't know why it's put up as some mystical bonjwa inference mastery on predicting possibilities of your opponent's build order and strategy. I don't think it's all that complicated of a decision tree.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Blardy
Profile Joined January 2011
United States290 Posts
May 26 2017 17:38 GMT
#429
If AI is allowed unlimited or 1000+ APM at all times then no human will beat it within a year. If they were given a cap of 400 then I don't see an AI beating a human for a long time.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 17:48:12
May 26 2017 17:44 GMT
#430
Nah, the AI will adapt. It might even use its extra computational power to, in 1 ms, assess which of 10-100 potential actions are likely to have the most effect on its chances of winning. Sort of a real-time Most Effective Actions calculator.

This would be interesting as it could be tuned to always maintain its APM lower than its opponent.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
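The "Most Effective Actions calculator" idea above amounts to greedy selection under an action budget. A sketch, with invented action names and win-probability gains:

```python
# Each frame, rank candidate actions by estimated win-probability gain and
# spend a limited APM budget on the best ones. All numbers are placeholders.

def choose_actions(candidates, budget_this_frame):
    """candidates: list of (action_name, estimated_win_gain) pairs."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:budget_this_frame]]

candidates = [
    ("inject_larva", 0.004),
    ("move_army", 0.001),
    ("micro_vs_drop", 0.010),
    ("build_drone", 0.002),
]
print(choose_actions(candidates, 2))  # ['micro_vs_drop', 'inject_larva']
```

Keeping its APM below the opponent's, as suggested, would just mean setting the budget from the opponent's observed action rate.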
cutha
Profile Joined April 2017
2 Posts
May 26 2017 17:45 GMT
#431
I think where we probably disagree is that I don't think the training is best done with a carefully chosen sample. I just really don't think that Starcraft is the kind of problem that can be solved in the way that games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess if not Go, the whole board is known completely to both players. The AI doesn't have to make inferences about what the actual state of affairs is, because the actual state of affairs is known. When it has to start making those judgments, even if they are high-reliability judgments like "if I didn't see a natural at X minutes, then do Y", you're opening up a brand new world of complexity.


I did misinterpret you in the previous post. But I think what I said still stands: all winning strategies, regardless of form (reactive defense, aggressive all-ins, or pure superior mechanics), are reasonably trainable knowledge. What you are basically saying is that it is impossible to make a "perfect" judgement due to the fog of war, so there always has to be some educated guessing and gambling involved, and this differs from chess/Go, where all the pieces are always visible on the board. However, even knowing exactly the current "state" of the game, AlphaGo plays by its trained neural network, which is based on human experience plus its own reinforcement learning. There is no way to play perfectly from the current state because there is an unimaginably large number of variations for future moves. In this regard, the unknown factor from that huge number of variations is similar to the unknown factor in StarCraft 2 due to the fog of war. If you compare the strategic complexity one Go player can employ given a certain board state with the number of popular choices any top SC2 player would make in a given in-game situation, SC2 seems to me to be complete child's play. Think of it from another perspective: a top SC2 player needs to decide his reactive actions from scouting information within seconds, but a top Go player may need minutes on any turn. The hard part of SC2 for an AI is achieving balanced performance across a multitude of aspects: mechanics, micro under restricted APM, reactive actions, etc. But for the strategic part, if AlphaGo can conquer Go, SC2 is a no-brainer in my opinion.
niteReloaded
Profile Blog Joined February 2007
Croatia5282 Posts
May 26 2017 18:52 GMT
#432
this is laughable.

it would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.

fishjie
Profile Blog Joined September 2010
United States1519 Posts
May 26 2017 18:56 GMT
#433
Depends: would the AI be able to have unlimited APM, or would there be a cap? If there is an APM cap, then strategy would be more important, and the AI would have a tougher time.

One of the key ideas that made AlphaGo work is that they looked at the probability either side would win from a given board position if the rest of the game were played out with random moves. They ran Monte Carlo simulations to play those games out, and used the results to evaluate how good a position was. That assumption won't work in a game like StarCraft.

https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/
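The random-rollout evaluation described above can be shown on a toy perfect-information game. A sketch using one-heap Nim (take 1 or 2 stones; whoever takes the last stone wins), which stands in for Go here purely as illustration:

```python
import random

# Estimate a position's value by finishing the game many times with
# uniformly random moves and averaging the results (the Monte Carlo
# rollout idea from the linked article, on a trivial game).

def random_playout(stones, to_move):
    while stones > 0:
        stones -= random.randint(1, min(2, stones))
        to_move = 1 - to_move
    return 1 - to_move  # the player who took the last stone wins

def estimate_win_prob(stones, player, n=20000):
    return sum(random_playout(stones, player) == player for _ in range(n)) / n

random.seed(0)
# 4 stones is a winning position for the player to move (take 1, leave 3);
# under random play the estimate comes out near 0.625, clearly above 0.5.
print(estimate_win_prob(4, 0) > 0.5)  # True
```

The catch the post identifies: this only makes sense when "play random moves to the end" is well defined and cheap, which is true on a Go board and not in an RTS.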
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 19:28 GMT
#434
So, in the article

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level on intuition to the game-play."

The Monte Carlo method that you mention is the tree searching, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for StarCraft, otherwise the notion of a 'move' isn't even well defined.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
May 26 2017 20:04 GMT
#435
On May 27 2017 03:52 niteReloaded wrote:
this is laughable.

it would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.



That defeats the entire exercise of making the AI. It's supposed to try to outsmart humans, so the APM will be limited.
Moderator
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 20:17:47
May 26 2017 20:07 GMT
#436
Can't wait to see what race the AI favors. This might even change depending on what APM setting it's on. Well, and the map come to think of it.

Apparently in Go, it gives a slight edge to the white stones (playing 2nd).

Unlike in the first round, AlphaGo played the black stones, which means it played first, something it views as a small handicap. "It thinks there is a just a slight advantage to the player taking the white stones,” AlphaGo’s lead researcher, David Silver, said just before the game. And as match commentator Andrew Jackson pointed out, Ke Jie is known for playing well with white.


Oh, it also defeated a team of 5 Champions today
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 20:28 GMT
#437
Let's not forget that even tho the computer can only use a limited amout of APM, it can still 'think' a LOT about every single click.


Ya, considering 400 APM gives it an average of 150 milliseconds per click, and modern processors run at around 4 GHz, that's 600 million raw CPU cycles per click. And "AlphaGo ran on 48 CPUs and 8 GPUs and the distributed version of AlphaGo ran on 1202 CPUs and 176 GPUs."
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
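The back-of-envelope arithmetic above (APM is actions per minute, so 400 APM means one action every 60/400 seconds) checks out as follows, assuming a 4 GHz clock:

```python
# Cycles available per action at a capped APM, assuming a 4 GHz core.
APM = 400
CLOCK_HZ = 4e9

seconds_per_action = 60 / APM          # APM is per minute, hence the 60
cycles_per_action = seconds_per_action * CLOCK_HZ

print(seconds_per_action)              # 0.15
print(f"{cycles_per_action:.0f}")      # 600000000
```

So even a heavily throttled agent gets hundreds of millions of single-core cycles between clicks, before counting any GPUs or extra machines.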
KungKras
Profile Joined August 2008
Sweden484 Posts
May 26 2017 20:38 GMT
#438
4 pool vs the computer. All that counts is micro. No macro can save it
"When life gives me lemons, I go look for oranges"
fishjie
Profile Blog Joined September 2010
United States1519 Posts
Last Edited: 2017-05-26 21:34:46
May 26 2017 21:32 GMT
#439
On May 27 2017 04:28 mishimaBeef wrote:
So, in the article

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level on intuition to the game-play."

The monte carlo method that you mention is the tree searching, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for starcraft, otherwise the notion of a 'move' isn't well defined even.


Also in the article:
value of a state = value network output + simulation result

I'd be interested to see how much weight they gave the Monte Carlo part vs the value network (the convolutional neural net). It sounds like either one alone did worse than the combination, so both are needed. But I don't think the Monte Carlo part would work in StarCraft, because you can't just play random moves in an RTS. Furthermore, in a turn-based game you can only make one move per turn, so you can easily simulate resulting positions from a current position. In an RTS you can move multiple units with different abilities, and the combinatorial explosion would be disastrous.

Still, if I understand the article correctly, the neural net was used to evaluate positions and classify them as "good" or "bad". It was trained by playing games against itself. The input to the neural net would presumably be the positions of the pieces. Currently, neural networks take a long time to train, and the more hidden layers you add, the slower it gets. In a game like StarCraft, far more inputs would be needed to represent a given position than in Go, and getting the NN to converge would take much longer.
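The "value of a state = value network output + simulation result" line quoted above is, in the AlphaGo paper, a weighted mix with a mixing parameter (lambda, set to 0.5). Schematically, with placeholder numbers:

```python
# Leaf evaluation as a weighted mix of the value network's estimate and
# the rollout outcome, as in AlphaGo's V = (1-lambda)*v + lambda*z.
# The inputs below are placeholder values, not real network outputs.

def leaf_value(value_net_output, rollout_result, lam=0.5):
    """Both inputs in [-1, 1]; lam weights the rollout term."""
    return (1 - lam) * value_net_output + lam * rollout_result

# Network mildly likes the position, but the rollout ended in a loss:
print(round(leaf_value(0.6, -1.0, lam=0.5), 2))  # -0.2
```

The weighting question raised in the post is exactly the choice of lam: lam=0 trusts the network alone, lam=1 trusts the rollouts alone.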
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 21:46 GMT
#440
Yeah, if you consider move = click, then it explodes. But usually you think in terms of high-level "moves" (tech to vessels, pump marine/medic, deflect mutas) and use clicks to implement the higher-level strategic "moves".
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
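The macro-action idea above, where search happens over a handful of high-level "moves" that each expand into many clicks, can be sketched as a simple table. The macro names echo the post; the low-level command names are invented:

```python
# Search over a small set of strategic "moves"; each expands into a
# sequence of low-level commands. Command names are illustrative only.
MACRO_ACTIONS = {
    "tech_to_vessel": ["build_starport", "attach_control_tower", "train_vessel"],
    "pump_marine_medic": ["train_marine", "train_marine", "train_medic"],
    "deflect_muta": ["build_turrets", "pull_marines_home", "stim_on_contact"],
}

def expand(macro: str) -> list:
    """Translate one strategic move into its click-level command sequence."""
    return MACRO_ACTIONS[macro]

# The strategic branching factor is the number of macros, not of clicks:
print(len(MACRO_ACTIONS))         # 3
print(expand("deflect_muta")[0])  # build_turrets
```

This is the standard way to tame the combinatorial explosion mentioned earlier in the thread: plan over a few macros, not over every possible unit command.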