Flash on DeepMind: "I think I can win" - Page 9

thezanursic
Profile Blog Joined July 2011
5497 Posts
March 12 2016 14:13 GMT
#161
On March 10 2016 23:36 Pandemona wrote:
Yeah, I think AI would struggle in an RTS game. Yet I am still open to being surprised. Imagine God losing a BW series to an AI!

I think a lot of programming would be required to make it work, but it is definitely possible.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 14:18 GMT
#162
On March 12 2016 09:06 BeStFAN wrote:
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Brood War for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years' time:

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective (the score) to be maximized by the playing agent. With a score, the act of playing becomes, to first order, a calculus problem: set the gradients of the score function to zero. Not impossible in Starcraft, but harder.

2. Starcraft II is an imperfect information game, as opposed to chess or Go, where the board contains the whole information available to both players. While it is possible to do reinforcement learning in that setting, it is a relatively new field, which adds to the difficulty - articles are being published on the subject right now.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM, pretty much. It is not impossible to think about operating several policy networks in parallel to enable strong multitasking (think Korean multiple drops), but it is a new area that needs to be explored - the connections between networks and their interactions would need to be thought through carefully. Some cutting-edge research on asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet; the games played so far are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse-driven game has been handled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes drawing straight lines, positioning the cursor close to a Nexus or a pylon, etc.).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys (moving to different bases and engagements) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this being overcome in the future; it is just harder to do right now.

6. The combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering, without an instruction manual - which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non-stationary' - which adds to the list of problems for a machine that learns by playing on ladder: some of the strats and playstyles learned early on could well be obsolete - and hard-countered - by the time they are assimilated. This happens to human players too; they have to make a conscious effort to get out of a slump, take in new information, and forget the old. Some work on policy distillation and 'optimal brain damage' in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to play against a very easy computer AI in a setting with only workers and maybe a unit list restricted to just a couple of types, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2D games such as the Atari ones, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pac-Man. It is hence entirely possible that a Starcraft DeepMind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words, with the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The advancements from AlphaGo are mainly relevant to point 6: combinatorial explosion is something you have to deal with in Go as well (see the sketch below).
If you cannot win with 100 apm, win with 100 cpm.
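A toy way to see the combinatorial explosion behind point 6 and this answer: count unordered army compositions as multisets of a fixed supply. The 20-supply army and the cost-free, tech-free model are simplifying assumptions made purely for illustration:

```python
from math import comb

def compositions(supply: int, unit_types: int) -> int:
    """Number of unordered armies filling `supply` slots from `unit_types`
    distinct unit types (multisets), ignoring costs and tech requirements."""
    return comb(supply + unit_types - 1, unit_types - 1)

# Each extra unit type multiplies the space the agent has to generalize over.
for k in (2, 5, 10):
    print(f"{k:2d} unit types -> {compositions(20, k):,} possible 20-supply armies")
```

With two unit types there are 21 possible 20-supply armies; with ten types there are over ten million.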
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 14:42 GMT
#163
https://xkcd.com/1002/

This is probably relevant.
shid0x
Profile Joined July 2012
Korea (South)5014 Posts
Last Edited: 2016-03-12 15:20:04
March 12 2016 15:16 GMT
#164
He's just saying that because he wants to hype the event.
I highly doubt anyone could be cocky enough to even think about beating an AI made by Google, unless you take some brain enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read another being's thoughts with brain implants.)

Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.

He's going to get his ass handed to him in a not-so-pretty fashion.

As someone who follows transhumanism very closely, I can't help but laugh at how much of an idiot he is (but that's because he has probably never really looked into Google's projects - he would shit his pants).
RIP MKP
75
Profile Joined December 2012
Germany4057 Posts
March 12 2016 15:58 GMT
#165
No way, when there is no APM cap.

Another question: can AIs beat top-level poker players?
yo twitch, as long as I can watch 480p lagfree I'm happy
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 15:59 GMT
#166
On March 13 2016 00:58 75 wrote:
No way, when there is no APM cap.

Another question: can AIs beat top-level poker players?


Is this asking: 'can an AI make a calculated bluff'?
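For what it's worth on the poker question: the strongest poker programs are built on counterfactual regret minimization (CFR), which is precisely about learning when and how often to bluff under hidden information. Below is a sketch of regret matching, the update rule at CFR's core, demonstrated on rock-paper-scissors rather than poker for brevity; the constants and names are illustrative:

```python
import random

ACTIONS = 3  # rock, paper, scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
for _ in range(20000):
    strat = strategy_from_regrets(regrets)
    strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]
    me, opp = random.choices(range(ACTIONS), weights=strat, k=2)  # self-play
    # Regret of not having played each alternative action instead.
    for a in range(ACTIONS):
        regrets[a] += PAYOFF[a][opp] - PAYOFF[me][opp]

total = sum(strategy_sum)
print([round(s / total, 2) for s in strategy_sum])  # averages toward [0.33, 0.33, 0.33]
```

In self-play the average strategy converges toward the Nash equilibrium mix, which for rock-paper-scissors is uniform; in poker the same machinery yields balanced bluffing frequencies.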
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 16:19:07
March 12 2016 16:11 GMT
#167
Flash thinks he would win? Well, so did Lee Sedol, who went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.

If Google actually proceeds with a serious project to make an AI that can beat Flash, he won't have a chance. The only possibility is if they lower its effective APM to realistic high-level human standards; then maybe there's a way to win. Although in hindsight, I suppose that's exactly what they would do if they were to challenge him, since everyone knows it's pointless if it can play with thousands of APM spent on useful actions. They would want to test the intelligence, not the brute force. It would also be important to make it unable to do more than one thing at the same time, since humans can't do that.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:36 GMT
#168
On March 12 2016 21:55 rockslave wrote:
[quotes point 1 of MyLovelyLurker's post above]


I don't think your first hypothesis holds: the AI would be able to read the data in the replay files and judge plays accordingly (if only in the training phase).

Also, there is a natural language for describing the moves: the one people use to describe AIs in BW (stuff like GTAI).


This is the approach taken so far by the DeepMind team when they came up with their general algorithm to play 2D Atari games. In particular, the same algorithm was used to play 40 or so different games simply from the pixels on the screen and the score as input. This precludes looking at any game-specific files. Learning was done from self-play only.

Source : www.nature.com

'We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.'
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:41 GMT
#169
On March 12 2016 09:06 BeStFAN wrote:
[quotes MyLovelyLurker's seven-point post in full; see #162 above]

Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words, with the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The Lee Sedol match showcases, in a Go context, an AI technique of learning to play a game through self-play, from the data of a board game or from screen pixels only. This has already been applied to quasi-8-bit games on the Atari 2600; see the relevant Nature article: www.nature.com

Much more research is required to generalize that algorithm enough to make it play Brood War efficiently (Jeff Dean from Google is already singling it out as a next goal). My guess would be 3 to 10 years' time. My earlier post was about the specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.


"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
evilfatsh1t
Profile Joined October 2010
Australia8801 Posts
March 13 2016 04:22 GMT
#170
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to believe humans will be able to defeat AI at anything in the future. What Flash and Boxer are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 07:33 GMT
#171
On March 13 2016 13:22 evilfatsh1t wrote:
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to believe humans will be able to defeat AI at anything in the future. What Flash and Boxer are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.

I think people are discussing how hard it'll be. I don't think anyone is seriously arguing that it's impossible if you give skilled people unlimited time.

People also discuss exactly what restrictions to set on the computer, if any.

And some discuss whether these announcements are just publicity stunts riding the AlphaGo wave.
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
Last Edited: 2016-03-13 10:06:22
March 13 2016 09:56 GMT
#172
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind - I don't know what some of these words mean!! HELP

[image: screenshot of the Fish bot's message]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 10:25 GMT
#173
On March 13 2016 18:56 WinterViewbot420 wrote:
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind - I don't know what some of these words mean!! HELP

[image: screenshot of the Fish bot's message]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol

It's gained consciousness!!! :o :o
RUN FOR THE HILLS!
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
Last Edited: 2016-03-13 11:02:44
March 13 2016 11:02 GMT
#174
[quotes MyLovelyLurker's seven-point post in full; see #162 above]

I think the learning algorithm itself might also need some thought. So far the computer has played itself and learned from that, but certain tactics are more effective against someone with a delayed reaction time (see the sketch below).
For example: a human player might not be able to beat an AI-microed rush/all-in, but the AI might be able to hold it by itself, and would thus discard this line of play.
Countdown to victory: 1 200!
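Hryul's concern can be made concrete at evaluation time: in self-play both sides react instantly, so a tactic that only works against roughly 200 ms human reaction time looks worthless and gets discarded. One hedged way to check for that is to evaluate against a deliberately delayed copy of the policy; `policy` here is a hypothetical object with an `act(observation)` method:

```python
from collections import deque

class DelayedPolicy:
    """Wraps a policy so it acts on the observation from `delay_frames` ago,
    roughly approximating human reaction time during evaluation."""
    def __init__(self, policy, delay_frames):
        self.policy = policy
        self.buffer = deque(maxlen=delay_frames + 1)

    def act(self, observation):
        self.buffer.append(observation)
        stale = self.buffer[0]  # oldest buffered observation
        return self.policy.act(stale)

# At ~24 rendered frames per second, 5 frames is roughly 200 ms of lag:
# human_like_opponent = DelayedPolicy(trained_policy, delay_frames=5)
```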
evilfatsh1t
Profile Joined October 2010
Australia8801 Posts
March 13 2016 11:42 GMT
#175
It says the AI didn't lose; AlphaGo lost.
Poopi
Profile Blog Joined November 2010
France12908 Posts
March 13 2016 12:28 GMT
#176
On March 13 2016 01:11 DuckloadBlackra wrote:
Flash thinks he would win? Well, so did Lee Sedol, who went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.


Nice sophism
WriterMaru
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 13 2016 13:03 GMT
#177
The AI is able to simultaneously micro at 2, 3, 4, n places on the map. No way a human will stop that.
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:13:05
March 13 2016 15:08 GMT
#178
It's funny to me that people think the human could win. Even with capped APM (see the sketch below), the AI would use its APM in the most efficient way (no spamming); it could probably win with something like 90-100 APM easily.
It could probably win with any style of game as well: a worker rush; 3 marines, 1 medic, 1 dropship; or late game, where, microing a big army, the AI would crush a human with almost no losses while keeping perfect macro (going back to its base for a split second at the perfect time, every time), plus perfect minimap awareness and reaction time, able to tell which units it sees from their speed on the minimap and pick the best response without delay. It would also spend its minerals and gas in the most efficient way.
All this with perfectly timed and positioned scouting, while extrapolating the opponent's build from unit composition and timing.
IMHO the AI would utterly crush any human, even if it told the human ahead of time what it was going to do:

"Now I will do a mid-game two-or-three-base attack."
"This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harassment. GLHF"

I would like to see the AI learn to BM - that would probably be the only real challenge.
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
TelecoM
Profile Blog Joined January 2010
United States10691 Posts
Last Edited: 2016-03-13 16:18:01
March 13 2016 16:16 GMT
#179
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol
AKA: TelecoM[WHITE] Protoss fighting
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 13 2016 23:55 GMT
#180
On March 14 2016 01:16 GGzerG wrote:
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol

We only have one master and he was not online.