Flash on DeepMind: "I think I can win" - Page 8
Haukinger
Profile Joined June 2012
Germany 131 Posts
Last Edited: 2016-03-11 08:36:08
March 11 2016 08:28 GMT
#141
You can have that today with human players if you remove the mechanical stress, leaving more room for actual thinking.

That's the core problem, and why StarCraft is boring to play and boring to watch for most people: mechanics play an overwhelming part in winning. You can get to GM just by cannon rushing or 4-gating mechanically well, and I'm sure a bot would win GSL just by worker rushing. That means players have to know their maps completely and choose more or less static build orders, because there's no time in the game to think.
sertas
Profile Joined April 2012
Sweden 881 Posts
March 11 2016 08:47 GMT
#142
You can't get GM by cannon rushing or 4-gating, wtf, not in this expansion at least.
heqat
Profile Joined October 2011
Switzerland 96 Posts
March 11 2016 09:18 GMT
#143
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or the 3D scene). For instance, it should not be able to just read off the positions of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen, with a chance of missing some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI, this is not a true test from my point of view.
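A minimal sketch of the constraint heqat describes, against an entirely hypothetical wrapper (RawGame, render_viewport, and the other names here are invented, not a real API): the agent observes only the rendered pixels of the current viewport and has to spend actions moving the camera, rather than querying unit positions from the game state.

```python
class ScreenOnlyInterface:
    """Expose a game only through rendered pixels and camera moves.

    Wraps a hypothetical raw_game object; the agent never touches
    raw_game's unit list, only the pixels currently on screen.
    """

    def __init__(self, raw_game, screen_w=640, screen_h=480):
        self.game = raw_game
        self.cam_x, self.cam_y = 0, 0
        self.screen_w, self.screen_h = screen_w, screen_h

    def observe(self):
        # The only observation: a pixel array of the current viewport.
        # Off-screen units are simply not visible, so information can
        # be missed, exactly as it is by a human.
        return self.game.render_viewport(self.cam_x, self.cam_y,
                                         self.screen_w, self.screen_h)

    def move_camera(self, dx, dy):
        # Looking elsewhere costs an action, just as a keypress or a
        # minimap click costs a human one.
        self.cam_x += dx
        self.cam_y += dy
```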
NiHiLuSsc2
Profile Blog Joined November 2012
United States 50 Posts
March 11 2016 10:28 GMT
#144
if anyone can do it, it's God himself
PBJT
sakurazawanamo
Profile Joined March 2016
Korea (South) 1 Post
March 11 2016 10:40 GMT
#145
i wonder how an AI will react to fakes and misdirection in builds
DwD
Profile Joined January 2010
Sweden 8621 Posts
March 11 2016 10:42 GMT
#146
After seeing some of those micro bots with like 50,000 APM (or whatever) in the SC2 map editor, I'm pretty sure Flash would get smoked pretty hard.
~ T-ARA ~ DREAMCATCHER ~ EVERGLOW ~ OH MY GIRL ~ DIA ~ BOL4 ~ CHUNGHA ~
coolprogrammingstuff
Profile Joined December 2015
906 Posts
March 11 2016 11:17 GMT
#147
Why are people talking about insane micro? Give it some unique quirks, perhaps, but "hurrr insane micro AI" is fucking stupid -- it completely defeats the point if you give it perfect mechanics where it macros exactly on point and micros 10 stacks of 11 mutas at once. Pointless and stupid. I'm cringing reading comments discussing the micro mechanics and it being unstoppable.

Make it play like a human. Don't restrict the APM -- that's not how algorithms operate; they'd have EAPM close to 100%. Restrict that instead, to a human level. Make it so it's actually a contest of natural ability -- see if it can out-micro a human through logic, splitting, positioning, and general human-tier control, rather than by just maneuvering ridiculously. Make it execute build orders, rather than a 2-hatch muta all-in every game with impossible micro. Making it play like a human and compete in a way that's human-esque is what makes it interesting; otherwise no human can stop even a perfect 4-pool.

Besides that, I think at this stage it'd be close if it went up against Flash soon, with Flash pulling ahead. However, if it faced a peak Flash in 2 years, hypothetically, as mentioned before, with the bot just fed Brood War, I think he'd have no chance. And it'd be fascinating to watch how it plays.
Dromar
Profile Blog Joined June 2007
United States 2145 Posts
March 11 2016 11:26 GMT
#148
On March 11 2016 18:18 heqat wrote:
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or the 3D scene). For instance, it should not be able to just read off the positions of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen, with a chance of missing some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI, this is not a true test from my point of view.

Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?
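For contrast, a toy sketch of the direct-access style Dromar describes, again against a made-up engine API: the controller "hotkeys" every unit the moment it is produced and issues orders by unit ID, so it never needs the camera at all.

```python
class DirectAccessController:
    """Command units by ID, no camera required (hypothetical API)."""

    def __init__(self):
        self.units = {}  # unit_id -> last known (x, y)

    def on_unit_created(self, unit_id, x, y):
        # The programmatic equivalent of hotkeying every unit as it
        # is produced: remember the handle and its location.
        self.units[unit_id] = (x, y)

    def issue_order(self, game, unit_id, order, target):
        # Select-and-command without ever looking at the screen;
        # game.order stands in for whatever the engine exposes.
        game.order(unit_id, order, target)
        if order == "move":
            self.units[unit_id] = target
```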
heqat
Profile Joined October 2011
Switzerland 96 Posts
March 11 2016 11:49 GMT
#149
On March 11 2016 20:26 Dromar wrote:
Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?


Sorry, yes, it would be BW. Regarding your point, what I mean is that for a perfect test, the AI should use the same user interface as a human. It should make decisions using a flat 2D picture and control the game using hotkeys, scrolling, etc. (no need for a physical robot, just wire the data to the AI software). In regular game AI (such as the SC2 AI), the software has access to the complete internal game state and can make decisions at every step by simply checking unit positions, states, etc., with some specific rules to avoid cheating (like preventing the AI from accessing non-visible units).

Now I guess it would become much more difficult for the AI if it had to play from the exact same user interface as a human (which makes sense for a true SC human/machine match, contrary to Go/chess, where the user interface does not change the result of the performance). It would require some very advanced real-time visual recognition algorithms, for instance.

ETisME
Profile Blog Joined April 2011
12352 Posts
Last Edited: 2016-03-11 12:03:15
March 11 2016 11:52 GMT
#150
After reading some interviews, I think the DeepMind team just used StarCraft as a point of reference because it is a famous strategy game, not aware that mechanics play a huge part in it.

Anyway, I really don't think it is going to pose any challenge for the AI.
I am not an expert, but surely it can just scout every once in a while, deduce the most probable and threatening strategy/timing coming, and then win with perfect attention to everything, perfect micro, perfect reactionary decisions, etc.

Each harass/engagement just removes more and more uncertainty for the AI.
Swift as the wind, silent as a forest, fierce as fire, immovable as a mountain, inscrutable as shadow, sudden as thunder.
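ETisME's "scout and deduce" idea is essentially Bayesian belief updating. A toy sketch with invented priors and likelihoods: each scouted observation rescales the probability of each opponent strategy, and the bot prepares for the most probable remaining threat.

```python
# Toy Bayesian update over opponent strategies; the numbers are invented.
priors = {"4pool": 0.2, "2hatch_muta": 0.5, "3hatch_macro": 0.3}

# P(observation | strategy) for one scouted fact: an early gas geyser taken.
likelihood_early_gas = {"4pool": 0.05, "2hatch_muta": 0.9, "3hatch_macro": 0.4}

def update(beliefs, likelihoods):
    # Bayes' rule: rescale each hypothesis by how well it explains
    # the observation, then renormalize.
    posterior = {s: beliefs[s] * likelihoods[s] for s in beliefs}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

beliefs = update(priors, likelihood_early_gas)
print(max(beliefs, key=beliefs.get))  # -> "2hatch_muta"
```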
rockslave
Profile Blog Joined January 2008
Brazil 318 Posts
Last Edited: 2016-03-11 13:00:42
March 11 2016 12:57 GMT
#151
On March 11 2016 13:01 ZAiNs wrote:
Deep learning needs a dataset for the AI to be trained on, though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; for StarCraft, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.


That is a fair point. But I think you can break the game into several mini-games, with a little algorithm to guess who has the advantage based on material, positioning, etc. (just as you said they did for Go).

While Go can be perfectly modelled, the number of possible states is intractable. Just as you need heuristics to cut the search tree in board games, you can also "cheat" in SC by having a sort of hash function on states. That's what I meant by parametrization earlier: a lot of the work involved in building neural nets is choosing what the inputs are.

By the way: I don't really know anything about what I'm saying. I just played with machine learning, never studied it seriously.

Edit: if anyone is interested, here's a great free book about it: http://neuralnetworksanddeeplearning.com. You gotta love mathematics, though.
What qxc said.
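A sketch of rockslave's "choosing the inputs" point, assuming hand-picked scalar features rather than raw pixels; the two heads mirror the policy/value split ZAiNs describes for AlphaGo. PyTorch is used for concreteness, and the feature list is invented.

```python
import torch
import torch.nn as nn

N_FEATURES = 8    # e.g. worker diff, army value diff, bases, upgrades...
N_STRATEGIES = 4  # a small, hand-chosen repertoire of responses

class TwoHeadedNet(nn.Module):
    """Hand-chosen features in; (move preferences, win probability) out."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU())
        self.policy_head = nn.Linear(64, N_STRATEGIES)  # what to do next
        self.value_head = nn.Linear(64, 1)              # who is ahead

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), torch.sigmoid(self.value_head(h))

net = TwoHeadedNet()
features = torch.randn(1, N_FEATURES)  # stand-in for a featurized game state
logits, win_prob = net(features)
```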
Vasoline73
Profile Blog Joined February 2008
United States 7799 Posts
Last Edited: 2016-03-11 13:27:17
March 11 2016 13:14 GMT
#152
People are severely underestimating the difficulty of achieving an effective AI for BW. As someone has pointed out, it's not going to have access to the game state beyond seeing an 800x600 2D image in real time. It may see dots on the minimap, but it's not going to know what they are or how to properly react without moving its "screen" there. Obviously it will have speed, but...

...stuff like: how does the AI react to a map (building placement, etc.) it has never played on before? What if there's no immediate natural and it typically fast expands? When it sends its scout out onto the map, goes down the ramp, and sees no natural... does it start looking for one? Scout for the enemy first? Does it change its build order to one-base play when it may just not have scouted an expansion spot yet? The clock is ticking and supply is going up. How does it play on Monty Hall or some crazy shit for the first time?

Etc., etc. That stuff will make an "all-around" BW AI that beats top humans the way chess engines do, or as AlphaGo is very likely to continue doing, very difficult.

Now, if they make the AI just a one-base BBS or 4-pool-plus-drones killing machine on standard maps it recognizes, then I see success being plausible quickly... probably even now, but I don't think Google is trying to win that way. I'm guessing they have loftier ideas for their AI and what they want it to symbolize/accomplish.

All that said, it's more than possible and it would be cool to see it happen someday sooner than expected.
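One concrete version of Vasoline73's "where is my natural?" question, under an invented map representation: breadth-first search over the walkable tile grid to find the mineral cluster nearest the main by ground distance. Real bots would use proper region analysis, but the flavor is the same.

```python
from collections import deque

def nearest_expansion(walkable, start, clusters):
    """BFS over a tile grid: return the mineral cluster with the
    shortest ground path from start, or None if none is reachable.

    walkable: 2D list of booleans (rows are y); clusters: (x, y) tiles.
    """
    targets = set(clusters)
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) in targets:
            return (x, y)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= ny < len(walkable) and 0 <= nx < len(walkable[0])
                    and walkable[ny][nx] and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return None  # island map, or no natural at all: replan
```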
reminisce12
Profile Joined March 2012
Australia 318 Posts
March 11 2016 13:35 GMT
#153
Perfect macro and micro ain't gonna matter when Flash's siege tanks rain fire down on ya.
MyLovelyLurker
Profile Joined April 2007
France 756 Posts
March 11 2016 13:47 GMT
#154
I've been watching Brood War for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years' time:

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft*, unlike the Atari 2600 titles tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in Starcraft (a wrapper sketch after this post makes this point and point 3 concrete).

2. Starcraft II is an imperfect-information game, as opposed to chess or Go, where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published on the subject right now.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are butting up against 120 APM, pretty much. It is not impossible to imagine operating several policy networks in parallel to enable strong multitasking (think Korean multiple drops), but it is a new area that needs to be explored - the connections between networks and their interactions would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet; the games addressed so far are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse-driven game has been tackled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc.).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys (moving to different bases and engagements) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this being overcome in the future; it is just harder to do right now.

6. The combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start with full games on ladder and without an instruction manual, which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non-stationary', which adds to the list of problems encountered by a machine that learns by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too; they have to make a conscious effort to get out of a slump, learn new information, and forget the old. Some work on policy distillation and on optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2D games such as the Atari titles, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pac-Man. It is hence entirely possible that a Starcraft DeepMind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide reference articles list if required.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
BeStFAN
Profile Blog Joined April 2015
483 Posts
March 12 2016 00:06 GMT
#155
On March 11 2016 22:47 MyLovelyLurker wrote:
[MyLovelyLurker's full post, quoted above]


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words, comparing before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?
❤ BeSt... ༼ つ ◕_◕༽つ #YEAROFKOMA #YEAROFKOMA #YEAROFKOMA ༼ つ ◕_◕༽つ
rockslave
Profile Blog Joined January 2008
Brazil 318 Posts
March 12 2016 12:55 GMT
#156
On March 11 2016 22:47 MyLovelyLurker wrote:
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* [...]


I don't think your first hypothesis holds: the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).

Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).
What qxc said.
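rockslave's training-phase idea as a sketch: parse replays into (state, action) pairs for supervised pre-training before any self-play. The parse_replay function here is hypothetical; real Brood War replays would need a dedicated parser that replays the recorded action stream and featurizes the game state at each decision.

```python
import glob

def replay_dataset(replay_dir, parse_replay):
    """Yield (state_features, action) pairs from a folder of replays.

    parse_replay is a stand-in for a real parser that reconstructs
    the game state at each decision point and featurizes it.
    """
    for path in glob.glob(f"{replay_dir}/*.rep"):
        for state, action in parse_replay(path):
            yield state, action

# Supervised pre-training loop (model/optimizer/loss left abstract):
# for state, action in replay_dataset("replays/", parse_replay):
#     loss = cross_entropy(model(state), action)
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```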
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-12 13:08:32
March 12 2016 13:08 GMT
#157
Hard? 10 years? Are you kidding?
Just put INnoVation in a box and call it a day
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
Makro
Profile Joined March 2011
France 16890 Posts
March 12 2016 13:29 GMT
#158
On March 12 2016 22:08 Superbanana wrote:
Hard? 10 years? Are you kidding?
Just put INnoVation in a box and call it a day

haha
Matthew 5:10 "Blessed are those who are persecuted because of shitposting, for theirs is the kingdom of heaven".
TL+ Member
OtherWorld
Profile Blog Joined October 2013
France 17333 Posts
March 12 2016 13:30 GMT
#159
On March 12 2016 22:08 Superbanana wrote:
Hard? 10 years? Are you kidding?
Just put INnoVation in a box and call it a day

Didn't know choking was an integral part of being an AI
Used Sigs - New Sigs - Cheap Sigs - Buy the Best Cheap Sig near You at www.cheapsigforsale.com
thezanursic
Profile Blog Joined July 2011
5478 Posts
March 12 2016 14:11 GMT
#160
Is this a joke article, or is this legit?
http://i45.tinypic.com/9j2cdc.jpg Let it be so!