Flash on DeepMind: "I think I can win"

thezanursic
Profile Blog Joined July 2011
5489 Posts
March 12 2016 14:13 GMT
#161
On March 10 2016 23:36 Pandemona wrote:
Yea, I think AI would struggle in an RTS game. Yet I am still open to being surprised. Imagine God losing a BW series to an AI!!!

I think a lot of programming would be required to make it work, but it is definitely possible.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 14:18 GMT
#162
On March 12 2016 09:06 BeStFAN wrote:
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 games tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong (think Korean multiple drops) multitasking, but it is a new area that needs to be explored - the connections between networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled yet by RL; existing ones are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc.).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non-stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too; they have to make a conscious effort to get out of a slump, learn new information, and forget the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would be an incredible achievement already to have a Starcraft deep reinforcement learning AI that can teach itself to play against a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2D games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pac-Man. It is hence entirely possible that a Starcraft DeepMind agent would play mechanically correctly, but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide reference articles list if required.


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words, what tools has AI gained from the developments required to beat Sedol, in relation to its ability to play SC?


The advancements from AlphaGo are mainly relevant to point 6. Combinatorial explosion is something that you have to deal with in Go as well.
If you cannot win with 100 apm, win with 100 cpm.
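To make point 6 concrete: if an army is treated as a multiset of units drawn from n available unit types, the number of distinct compositions follows from stars and bars and grows combinatorially with n. A toy sketch (the sizes and type counts here are illustrative only, ignoring cost, tech, and supply constraints):

```python
from math import comb

def compositions(n_unit_types: int, army_size: int) -> int:
    """Distinct armies of exactly `army_size` units drawn with repetition
    from `n_unit_types` unit types: stars and bars, C(s + n - 1, n - 1)."""
    return comb(army_size + n_unit_types - 1, n_unit_types - 1)

# A 60-unit army: each additional available unit type multiplies the
# space the learner has to explore.
for n in (2, 4, 8):
    print(n, compositions(n, 60))
# 2 61
# 4 39711
# 8 869648208
```

Go has a similar blow-up in board positions, which is why the AlphaGo-style combination of learned evaluation and search is the relevant advance here.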
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 14:42 GMT
#163
https://xkcd.com/1002/

This is probably relevant.
shid0x
Profile Joined July 2012
Korea (South)5014 Posts
Last Edited: 2016-03-12 15:20:04
March 12 2016 15:16 GMT
#164
He's just saying that because he wants to hype the event.
I highly doubt anyone could be cocky enough to even think about beating an AI made by Google, unless you take some brain enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read another being's thoughts with brain implants.)

Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.

He's gonna get his ass handed to him in a not so pretty fashion.

As someone who follows transhumanism very closely, I can't help but laugh at how much of an idiot he is (but that's because he probably never really looked into Google's projects; he would shit his pants).
RIP MKP
75
Profile Joined December 2012
Germany4057 Posts
March 12 2016 15:58 GMT
#165
No way when there is no APM cap.

Another question: can AIs beat top-level poker players?
yo twitch, as long as I can watch 480p lagfree I'm happy
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 15:59 GMT
#166
On March 13 2016 00:58 75 wrote:
No way when there is no APM cap.

Another question: can AIs beat top-level poker players?


Is this asking: 'can an AI do an estimated bluff'?
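On the poker question: heads-up limit hold'em was essentially solved in early 2015 by the University of Alberta's Cepheus bot, built on counterfactual regret minimization, and in that framework a bluff is not mind reading but simply part of an equilibrium mixed strategy. A minimal sketch of the core idea, regret matching, applied to rock-paper-scissors as a toy stand-in (not the actual poker engines):

```python
import random

ACTIONS = 3  # rock, paper, scissors

def payoff(a: int, b: int) -> int:
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    return [0, 1, -1][(a - b) % 3]

def strategy_from_regret(regret):
    """Regret matching: play actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(100_000):
    strats = [strategy_from_regret(r) for r in regrets]
    acts = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
    for p in range(2):
        me, opp = acts[p], acts[1 - p]
        for alt in range(ACTIONS):
            # How much better would the alternative action have done?
            regrets[p][alt] += payoff(alt, opp) - payoff(me, opp)
        for a in range(ACTIONS):
            strategy_sums[p][a] += strats[p][a]

total = sum(strategy_sums[0])
print([round(s / total, 3) for s in strategy_sums[0]])
# -> roughly [0.333, 0.333, 0.333]: the unexploitable mixed strategy
```

In poker the same machinery, run over the game tree with hidden cards, learns to bet strong and weak hands at equilibrium frequencies, which from the outside looks exactly like "an estimated bluff".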
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 16:19:07
March 12 2016 16:11 GMT
#167
Flash thinks he would win? Well, so did Lee Sedol, who even went as far as to say he would win 4-1 or 5-0, and who now trails 0-3, seemingly unable to win a single game.

If Google actually proceeds with a serious project to make an AI that can beat Flash, he won't have a chance. The only possibility is if they lower its effective APM to realistic high-level human standards. Then maybe there's a way to win. Although in hindsight I suppose that's exactly what they would do if they were to challenge him, since everyone knows it's pointless if it can play with thousands of APM spent on useful things. They would want to test the intelligence, not the brute force. It would also be important to make it unable to do more than one thing at the same time, since humans can't do that.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:36 GMT
#168
On March 12 2016 21:55 rockslave wrote:
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 games tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in Starcraft.


I don't think your first hypothesis is true: the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).

Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).


This is the approach taken so far by the DeepMind team when they came up with their general algorithm to play 2D Atari games. In particular, the same algorithm was used to play 49 different games simply from the pixels on the screen and the score as input. This precludes looking at any game-specific files. Learning was done from self-play only.

Source : www.nature.com

'We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.'
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:41 GMT
#169
On March 12 2016 09:06 BeStFAN wrote:
Show nested quote +


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words, what tools has AI gained from the developments required to beat Sedol, in relation to its ability to play SC?


The Lee Sedol match is showcasing, in a Go context, an AI technique of learning to play a game through self-play and the data of a board game or screen pixels only. This has already been applied to quasi-8-bit games on the Atari 2600; see the relevant Nature article: www.nature.com

Much more research is required to generalize that algorithm enough to make it play Broodwar efficiently (Jeff Dean from Google is already singling it out as a next goal). My guess would be 3 to 10 years. My post earlier was about specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.


"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
evilfatsh1t
Profile Joined October 2010
Australia8691 Posts
March 13 2016 04:22 GMT
#170
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to think humans will be able to defeat AI in something in the future. What Flash and BoxeR are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 07:33 GMT
#171
On March 13 2016 13:22 evilfatsh1t wrote:
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to think humans will be able to defeat AI in something in the future. What Flash and BoxeR are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.

I think people are discussing how hard it'll be. Don't think anyone is seriously arguing that it is impossible if you give skilled people unlimited time.

People also discuss exactly what restriction to set on the computer, if any.

And some discuss whether these announcements are just publicity stunts, riding on the AlphaGo wave.
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
Last Edited: 2016-03-13 10:06:22
March 13 2016 09:56 GMT
#172
I have never seen the official Fish bot say anything before, I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 10:25 GMT
#173
On March 13 2016 18:56 WinterViewbot420 wrote:
I have never seen the official Fish bot say anything before, I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol

It's gained consciousness!!! :o :o
RUN FOR THE HILLS!
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
Last Edited: 2016-03-13 11:02:44
March 13 2016 11:02 GMT
#174
+ Show Spoiler +
On March 11 2016 22:47 MyLovelyLurker wrote:

I think the learning algorithm might also need some thought. So far the computer has played against itself and learned from that. But certain tactics are more effective against someone with delayed reaction time.
For example: a human player might not be able to beat an AI-microed rush/all-in, but the AI might be able to hold it itself, and would thus discard this line of play.
Countdown to victory: 1 200!
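Hryul's point, that pure self-play can prune lines which only work against slow reactions, is easy to state in code: wrap one side of the evaluation in an observation delay approximating human reaction time, and score candidate strategies against both the delayed and the undelayed opponent. A minimal sketch (the Agent interface and all names here are hypothetical):

```python
from collections import deque

class Agent:
    """Stub: a real agent would map an observation to an in-game action."""
    def act(self, observation: str) -> str:
        return f"respond_to({observation})"

class DelayedAgent:
    """Sees the world `delay_frames` late, approximating human reaction
    time (~250 ms is roughly 6 frames at 24 fps)."""
    def __init__(self, agent: Agent, delay_frames: int):
        self.agent = agent
        self.buffer = deque(maxlen=delay_frames + 1)

    def act(self, observation: str) -> str:
        self.buffer.append(observation)
        return self.agent.act(self.buffer[0])  # react to the oldest buffered view

# A cheese line pruned by self-play against an instant opponent might
# still beat the delayed, human-like one.
instant = Agent()
humanlike = DelayedAgent(Agent(), delay_frames=6)
for obs in ("scout", "rush_incoming", "rush_hits"):
    print(instant.act(obs), "|", humanlike.act(obs))
```

The delayed wrapper keeps responding to the stale "scout" observation while the rush lands, which is exactly the gap a self-play-only learner would never see.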
evilfatsh1t
Profile Joined October 2010
Australia8691 Posts
March 13 2016 11:42 GMT
#175
It says the AI didn't lose; AlphaGo lost.
Poopi
Profile Blog Joined November 2010
France12897 Posts
March 13 2016 12:28 GMT
#176
On March 13 2016 01:11 DuckloadBlackra wrote:
Flash thinks he would win? Well, so did Lee Sedol, who even went as far as to say he would win 4-1 or 5-0, and who now trails 0-3, seemingly unable to win a single game.


Nice sophism
WriterMaru
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 13 2016 13:03 GMT
#177
The AI is able to simultaneously micro at 2, 3, 4, n places on the map. No way a human will stop that.
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:13:05
March 13 2016 15:08 GMT
#178
It's funny to me that people think the human could win. Even with capped APM, the AI would use its APM in the most efficient way (no spamming); it could probably win with something like 90-100 APM easily.
It could probably win with any style of game as well: a worker rush; 3 marines, 1 medic, 1 dropship; or late game, where, microing a big army, the AI would crush a human with almost no losses while keeping perfect macro (going back to its base for a split second at the perfect time, every time). It would also have perfect minimap awareness and reaction time, able to tell which units it sees based on their speed on the minimap and determine the best response without delay, and it would spend its minerals/gas in the most efficient way.
All this with perfectly timed positional scouting, while extrapolating the opponent's build from unit composition and timing.
IMHO the AI would utterly crush any human, even if it told the human ahead of time when it would do it.

"Now I will do a mid-game 2-or-3-base attack."
"This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harass. GLHF"

I would like to see the AI learn to BM; that would probably be the only real challenge.
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
TelecoM
Profile Blog Joined January 2010
United States10675 Posts
Last Edited: 2016-03-13 16:18:01
March 13 2016 16:16 GMT
#179
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol
AKA: TelecoM[WHITE] Protoss fighting
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 13 2016 23:55 GMT
#180
On March 14 2016 01:16 GGzerG wrote:
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol

We only have one master and he was not online.