Flash on DeepMind: "I think I can win" - Page 9

thezanursic
Profile Blog Joined July 2011
5497 Posts
March 12 2016 14:13 GMT
#161
On March 10 2016 23:36 Pandemona wrote:
Yea, I think AI would struggle in an RTS game. Yet I am still open to being surprised. Imagine God losing a BW series to an AI!!!

I think a lot of programming would be required to make it work, but it is definitely possible.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 14:18 GMT
#162
On March 12 2016 09:06 BeStFAN wrote:
Show nested quote +
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Brood War for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years' time:

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 games tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect-information game, as opposed to chess or Go, where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published on the subject right now.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong (think Korean multiple drops) multitasking, but it is a new area that needs to be explored - the connections between networks and their interactions would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet; the games handled so far are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc.).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys (moving to different bases and engagement battles) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future; it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non-stationary', which adds to the list of problems encountered by a machine learning by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too; they have to make a conscious effort to get out of a slump, learn new information, and forget the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to play against a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2D games such as Atari, 'mechanical' games like Pong or Breakout get to much higher skill levels than games that require planning, such as Pacman. It is hence entirely possible that a Starcraft DeepMind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.


could anyone answer this? What is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The advancements from AlphaGo are mainly relevant to point 6. Combinatorial explosion is something that you have to deal with in Go as well.
If you cannot win with 100 apm, win with 100 cpm.
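To make the quoted point 1 concrete: with no in-game score, the only honest training signal is the terminal win/loss, so every action in a long game receives identical credit. Below is a minimal sketch of that sparse-credit problem using REINFORCE on a toy stand-in environment (nothing here is DeepMind's code; the two-action "game" is invented purely for illustration):

```python
# Toy REINFORCE sketch of point 1: no per-step score, only a terminal
# win/loss. The environment and actions are made-up stand-ins.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)  # policy logits over two abstract actions: [macro, attack]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def play_episode(logits, horizon=20):
    """If most actions were "attack", the win probability is 0.6, else 0.4.
    The reward is a single +1/-1 revealed only when the episode ends."""
    p = softmax(logits)
    actions = [rng.choice(2, p=p) for _ in range(horizon)]
    win_prob = 0.6 if np.mean(actions) > 0.5 else 0.4
    return actions, (1.0 if rng.random() < win_prob else -1.0)

# With a sparse terminal reward, every action in the episode is credited
# with the same +1/-1, so the gradient estimate is high-variance -- the
# handicap point 1 describes relative to dense per-frame Atari scores.
for _ in range(2000):
    actions, R = play_episode(theta)
    p = softmax(theta)
    for a in actions:
        grad_log_pi = -p.copy()
        grad_log_pi[a] += 1.0        # d/dtheta of log softmax(theta)[a]
        theta += 0.01 * R * grad_log_pi

print("action probabilities after training:", softmax(theta))
```

With a dense arcade score the same update would receive feedback thousands of times per game; with win/loss only, it gets one noisy bit per episode.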
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 14:42 GMT
#163
https://xkcd.com/1002/

This is probably relevant.
shid0x
Profile Joined July 2012
Korea (South)5014 Posts
Last Edited: 2016-03-12 15:20:04
March 12 2016 15:16 GMT
#164
Just saying that because he wants to hype the event.
I highly doubt anyone could be cocky enough to even think about beating an AI made by Google, unless you take some brain enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read other beings' thoughts with brain implants.)

Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.

He's gonna get his ass handed to him in a not-so-pretty fashion.

As someone who follows transhumanism very closely, I can't help but laugh at how much of an idiot he is (but that's because he probably never really looked into Google's projects; if he had, he would shit his pants).
RIP MKP
75
Profile Joined December 2012
Germany4057 Posts
March 12 2016 15:58 GMT
#165
No way when there is no APM cap.

Another question: can AIs beat top-level poker players?
yo twitch, as long as I can watch 480p lagfree I'm happy
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 15:59 GMT
#166
On March 13 2016 00:58 75 wrote:
No way when there is no APM cap.

Another question: can AIs beat top-level poker players?


Is this asking: 'can an AI do an estimated bluff'?
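For what it's worth, the bluffing half of the question has a crisp answer: regret-minimizing self-play in two-player zero-sum games converges to mixed strategies that bluff at a calibrated frequency. A minimal sketch on a standard one-card toy poker game follows; the payoff numbers are textbook toy values and have nothing to do with DeepMind:

```python
# Regret matching on a one-card bluffing game. Both players ante 1.
# Player 1 is dealt a strong or weak hand (50/50) and always bets strong
# hands; with a weak hand it may Bluff (bet 1) or Check (forfeit the ante).
# Player 2, seeing only a bet, may Call or Fold.
import numpy as np

def strategy_from_regrets(regrets):
    positive = np.maximum(regrets, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.full(len(regrets), 0.5)

r1, r2 = np.zeros(2), np.zeros(2)      # cumulative regrets
s1_sum, s2_sum = np.zeros(2), np.zeros(2)

for _ in range(20000):
    s1 = strategy_from_regrets(r1)     # P1 weak hand: [Bluff, Check]
    s2 = strategy_from_regrets(r2)     # P2 facing a bet: [Call, Fold]
    b, c = s1[0], s2[0]

    # Expected payoffs per action (constant branches cancel in regrets):
    u1 = np.array([1.0 - 3.0 * c,      # Bluff: +1 if folded, -2 if called
                   -1.0])              # Check: forfeit the ante
    u2 = np.array([-0.5 + 0.5 * b,     # Call: -2 vs strong bet, +2 vs bluff
                   -b])                # Fold: -1 whenever P1 bets

    r1 += u1 - s1 @ u1                 # regret = action value - strategy value
    r2 += u2 - s2 @ u2
    s1_sum += s1
    s2_sum += s2

print("bluff frequency with weak hands:", (s1_sum / s1_sum.sum())[0])  # ~1/3
print("call frequency against a bet:   ", (s2_sum / s2_sum.sum())[0])  # ~2/3
```

The averaged strategies approach the Nash equilibrium: bluff a third of weak hands, call two thirds of bets. Scaling the same idea up via counterfactual regret minimization is what the strongest poker bots of this era are built on.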
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 16:19:07
March 12 2016 16:11 GMT
#167
Flash thinks he would win? Well, so did Lee Sedol, who even went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.

If Google actually proceeds with a serious project to make an AI that can beat Flash, he won't have a chance. The only possibility is if they lower its effective APM to realistic high-level human standards. Then maybe there's a way to win. Although in hindsight I suppose that's exactly what they would do if they were to challenge him, since everyone knows it's pointless if it can play with thousands of APM spent on useful things. They would want to test intelligence, not brute force. It would also be important to make it unable to do more than one thing at the same time, since humans can't do that.
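The handicap being proposed is easy to state in code: ration the agent's actions per in-game minute and force everything beyond the budget to a no-op. A sketch follows; the gym-style `env` interface and the ~24 fps frame rate are assumptions, not a real Brood War API:

```python
# Hypothetical wrapper that caps an agent's effective APM. Once the
# budget for the current in-game minute is spent, further actions are
# replaced by no-ops. `env` is a made-up stand-in interface.
class APMCappedEnv:
    NOOP = 0

    def __init__(self, env, apm_cap=120, frames_per_minute=24 * 60):
        self.env = env                        # assumes ~24 frames/second
        self.apm_cap = apm_cap
        self.frames_per_minute = frames_per_minute
        self.frame = 0
        self.actions_this_minute = 0

    def reset(self):
        self.frame = 0
        self.actions_this_minute = 0
        return self.env.reset()

    def step(self, action):
        if self.frame % self.frames_per_minute == 0:
            self.actions_this_minute = 0      # fresh budget each minute
        self.frame += 1

        if action != self.NOOP:
            if self.actions_this_minute >= self.apm_cap:
                action = self.NOOP            # budget exhausted
            else:
                self.actions_this_minute += 1
        return self.env.step(action)
```

Funneling all actions through one channel, one per frame, also enforces the "no more than one thing at the same time" restriction mentioned above.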
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:36 GMT
#168
On March 12 2016 21:55 rockslave wrote:
Show nested quote +
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 games tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in Starcraft.


I don't think your first hypothesis is true; the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).

Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).


This is the approach taken so far by the DeepMind team when they came up with their general algorithm to play 2D Atari games. In particular, the same algorithm was used to play 49 different games simply from the pixels on the screen and the score as input. This precludes looking at any game-specific files. Learning was done from self-play only.

Source: www.nature.com

' We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. '
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:41 GMT
#169
On March 12 2016 09:06 BeStFAN wrote:
Show nested quote + (MyLovelyLurker's seven points, quoted in full in post #162 above)


could anyone answer this? What is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The Lee Sedol match showcases, in a Go context, an AI technique of learning to play a game through self-play, from only the board state or screen pixels. This has already been applied to the quasi-8-bit games of the Atari 2600; see the relevant Nature article: www.nature.com

Much more research is required to generalize that algorithm enough to make it play Brood War efficiently (Jeff Dean from Google is already singling it out as a next goal). My guess would be 3 to 10 years' time. My post earlier was about the specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.


"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
evilfatsh1t
Profile Joined October 2010
Australia8762 Posts
March 13 2016 04:22 GMT
#170
is anyone really debating whether AI will be able to do something better than a human? i don't think anyone is naive enough to think humans will be able to defeat AI at anything in the future. what Flash and Boxer are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. of course, if you gave Google as much time as they wanted, the AI would win. it's literally only a matter of time, given the speed at which technology is advancing
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 07:33 GMT
#171
On March 13 2016 13:22 evilfatsh1t wrote:
is anyone really debating whether AI will be able to do something better than a human? i don't think anyone is naive enough to think humans will be able to defeat AI at anything in the future. what Flash and Boxer are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. of course, if you gave Google as much time as they wanted, the AI would win. it's literally only a matter of time, given the speed at which technology is advancing

I think people are discussing how hard it'll be. Don't think anyone is seriously arguing that it is impossible if you give skilled people unlimited time.

People also discuss exactly what restriction to set on the computer, if any.

And some discuss whether these announcements are just publicity stunts, riding on the AlphaGo wave.
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
Last Edited: 2016-03-13 10:06:22
March 13 2016 09:56 GMT
#172
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 10:25 GMT
#173
On March 13 2016 18:56 WinterViewbot420 wrote:
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol

It's gained consciousness!!! :o :o
RUN FOR THE HILLS!
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
Last Edited: 2016-03-13 11:02:44
March 13 2016 11:02 GMT
#174
+ Show Spoiler + (MyLovelyLurker's seven points, quoted in full in post #162 above)

I think the learning algorithm might also need some thought. So far the computer has played itself and learned from that, but there are certain tactics which are more effective against someone with a delayed reaction time.
For example: a human player might not be able to hold an AI-microed rush/all-in, but the AI might be able to hold it itself, and thus discard this line of play.
Countdown to victory: 1 200!
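Hryul's caveat is directly testable: evaluate the self-play agent against a copy of itself that acts on stale observations, approximating a human's reaction delay. A small sketch, again with a made-up policy interface:

```python
# Hypothetical wrapper that feeds a policy observations delayed by a fixed
# number of frames (~250 ms of human reaction time is roughly 6 frames at
# 24 fps). If a line of play that self-play discarded starts winning
# against the delayed copy, the curriculum was pruning tactics that only
# fail against instant reactions -- exactly the concern in the post above.
from collections import deque

class DelayedPolicy:
    def __init__(self, policy, delay_frames=6):
        self.policy = policy
        self.buffer = deque(maxlen=delay_frames + 1)

    def act(self, obs):
        self.buffer.append(obs)
        stale_obs = self.buffer[0]   # oldest frame still in the window
        return self.policy.act(stale_obs)
```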
evilfatsh1t
Profile Joined October 2010
Australia8762 Posts
March 13 2016 11:42 GMT
#175
it says the AI didn't lose. AlphaGo lost
Poopi
Profile Blog Joined November 2010
France12904 Posts
March 13 2016 12:28 GMT
#176
On March 13 2016 01:11 DuckloadBlackra wrote:
Flash thinks he would win? Well, so did Lee Sedol, who even went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.


Nice sophism
WriterMaru
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 13 2016 13:03 GMT
#177
The AI is able to simultaneously micro at 2, 3, 4, n places on the map. No way a human will stop that.
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:13:05
March 13 2016 15:08 GMT
#178
It's funny to me that people think the human could win. Even with capped APM, the AI would use its APM in the most efficient way (no spamming); it could probably win with something like 90-100 APM easily.
It could probably win with any style of game as well: a worker rush, 3 marines-1 medic-1 dropship, or late game, where, microing a big army, the AI would crush a human with almost no losses while keeping perfect macro (going back to its base for a split second at the perfect time, every time). Add perfect minimap awareness and reaction time, the ability to tell which units it sees based on their speed on the minimap, and the best response chosen without delay. It would also spend its minerals and gas in the most efficient way.
All this with perfectly timed and positioned scouting, while extrapolating the opponent's build from the opponent's unit composition and timings.
IMHO the AI would utterly crush any human, even if it told the human ahead of time when it would attack.

"Now I will do a mid-game 2-or-3-base attack."
"This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harass. GLHF"

I would like to see the AI learn to BM; that would probably be the only real challenge.
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
TelecoM
Profile Blog Joined January 2010
United States10682 Posts
Last Edited: 2016-03-13 16:18:01
March 13 2016 16:16 GMT
#179
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol
AKA: TelecoM[WHITE] Protoss fighting
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 13 2016 23:55 GMT
#180
On March 14 2016 01:16 GGzerG wrote:
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol

We only have one master and he was not online.