Flash on DeepMind: "I think I can win" - Page 9

thezanursic
Profile Blog Joined July 2011
5497 Posts
March 12 2016 14:13 GMT
#161
On March 10 2016 23:36 Pandemona wrote:
Yea, I think AI would struggle in an RTS game. Yet I am still open to being surprised. Imagine God losing a BW series to an AI!!!

I think a lot of programming would be required to make it work, but it is definitely possible.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 14:18 GMT
#162
On March 12 2016 09:06 BeStFAN wrote:
Show nested quote +
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Brood War for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years' time:

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in StarCraft* - unlike the Atari 2600 titles tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in StarCraft.

2. StarCraft II is an imperfect-information game, as opposed to chess or Go, where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published on the subject right now.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong multitasking (think Korean multiple drops), but it is a new area that needs to be explored - the connections between networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled yet by RL; the games addressed so far are joystick- or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges (the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes drawing straight lines, positioning the cursor close to a Nexus or a pylon, etc.).

5. StarCraft is also 'multi-screen' - it requires frequently changing views with your F keys (moving to different bases and engagement battles) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future; it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non-stationary' - which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too; they have to make a conscious effort to get out of a slump, learn new information, and forget the old. Some work on policy distillation and optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a StarCraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple of units, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2D games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pac-Man. It is hence entirely possible that a StarCraft DeepMind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you can get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words: before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The advancements from AlphaGo are mainly relevant to point 6. Combinatorial explosion is something that you have to deal with in Go as well.
If you cannot win with 100 apm, win with 100 cpm.
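
To put a number on the combinatorial explosion from point 6 that LetaBot refers to: an unordered army of k units drawn from n unit types is a multiset, so the count of distinct compositions is C(n+k-1, k). A minimal sketch (the unit-type and army-size figures are illustrative, not from the posts):

```python
# Counting distinct army compositions to illustrate point 6.
# An army of `army_size` units over `n_types` unit types, order irrelevant,
# is a multiset, counted by the binomial C(n_types + army_size - 1, army_size).
from math import comb

def compositions(n_types: int, army_size: int) -> int:
    return comb(n_types + army_size - 1, army_size)

# Each extra unit type multiplies the space a learning agent must explore:
for n in (2, 5, 10):                  # zealot/dragoon only, up to a fuller tech tree
    print(n, compositions(n, 40))     # 40-unit army, illustrative size
# prints:
# 2 41
# 5 135751
# 10 2054455634
```

Go faces a comparable blow-up in board positions, which is why AlphaGo's advances are the relevant ones here.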
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 14:42 GMT
#163
https://xkcd.com/1002/

This is probably relevant.
shid0x
Profile Joined July 2012
Korea (South)5014 Posts
Last Edited: 2016-03-12 15:20:04
March 12 2016 15:16 GMT
#164
Just saying that because he wants to hype the event.
I highly doubt anyone could be cocky enough to even think about beating an AI made by Google, unless you take some brain-enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read another being's thoughts with brain implants.)

Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.

He's gonna get his ass handed to him in a not-so-pretty fashion.

As someone who follows transhumanism very closely, I can't help but laugh at how much of an idiot he is (but that's probably because he has never really looked into Google's projects; he would shit his pants).
RIP MKP
75
Profile Joined December 2012
Germany4057 Posts
March 12 2016 15:58 GMT
#165
No way, when there is no APM cap.

Another question: can AIs beat top-level poker players?
yo twitch, as long as I can watch 480p lagfree I'm happy
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 15:59 GMT
#166
On March 13 2016 00:58 75 wrote:
No way, when there is no APM cap.

Another question: can AIs beat top-level poker players?


Is this asking: 'Can an AI make a calculated bluff?'
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 16:19:07
March 12 2016 16:11 GMT
#167
Flash thinks he would win? Well, so did Lee Sedol, who went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.

If Google actually proceeds with a serious project to make an AI that can beat Flash, he won't have a chance. The only possibility is if they lower its effective APM to realistic high-level human standards; then maybe there's a way to win. Although in hindsight, I suppose that's exactly what they would do if they were to challenge him, since everyone knows it's pointless if the AI can play with thousands of APM spent on useful actions. They would want to test its intelligence, not its brute force. It would also be important to make it unable to do more than one thing at the same time, since humans can't do that.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:36 GMT
#168
On March 12 2016 21:55 rockslave wrote:
Show nested quote +
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in StarCraft* - unlike the Atari 2600 titles tackled so far, which have mostly been arcade games with a clear numerical objective (the score) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem (to first order, set the gradients of the score function to zero). Not impossible, but harder in StarCraft.


I don't think your first hypothesis holds: the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).

Also, there is a natural language for describing the moves: the one people use to describe AIs in BW (stuff like GTAI).


This is the approach taken so far by the DeepMind team when they came up with their general algorithm to play 2D Atari games. In particular, the same algorithm was used to play 49 different games simply from the pixels on the screen and the score as input. This precludes looking at any game-specific files. Learning was done purely from the agent's own play.

Source: www.nature.com

'We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.'
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:41 GMT
#169
On March 12 2016 09:06 BeStFAN wrote:
Show nested quote +


Could anyone answer this: what is the significance of AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

In other words: before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The Lee Sedol match showcases, in a Go context, an AI technique for learning to play a game through self-play, using only the board data or the screen pixels. This has already been applied to the quasi-8-bit games of the Atari 2600; see the relevant Nature article: www.nature.com

Much more research is required to generalize that algorithm enough to make it play Brood War efficiently (Jeff Dean from Google is already singling it out as the next goal). My guess would be 3 to 10 years. My earlier post was about the specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.


"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
evilfatsh1t
Profile Joined October 2010
Australia8778 Posts
March 13 2016 04:22 GMT
#170
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to believe humans will still be able to defeat AI at something in the future. What Flash and BoxeR are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 07:33 GMT
#171
On March 13 2016 13:22 evilfatsh1t wrote:
Is anyone really debating whether AI will be able to do something better than a human? I don't think anyone is naive enough to believe humans will still be able to defeat AI at something in the future. What Flash and BoxeR are probably saying is that if AlphaGo could play StarCraft NOW, the humans would win. Of course, if you gave Google as much time as they wanted, the AI would win. It's literally only a matter of time, given the speed at which technology is advancing.

I think people are discussing how hard it'll be. I don't think anyone is seriously arguing that it's impossible if you give skilled people unlimited time.

People also discuss exactly what restrictions to set on the computer, if any.

And some discuss whether these announcements are just publicity stunts riding the AlphaGo wave.
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
Last Edited: 2016-03-13 10:06:22
March 13 2016 09:56 GMT
#172
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 10:25 GMT
#173
On March 13 2016 18:56 WinterViewbot420 wrote:
I have never seen the official Fish bot say anything before; I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol

It's gained consciousness!!! :o :o
RUN FOR THE HILLS!
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
Last Edited: 2016-03-13 11:02:44
March 13 2016 11:02 GMT
#174
+ Show Spoiler +

I think the learning algorithm might also need some thought. So far the computer has played against itself and learned from that. But certain tactics are more effective against someone with delayed reaction time.
For example: a human player might not be able to hold an AI-microed rush/all-in, but the AI might be able to hold it itself, and would thus discard this line of play (see the sketch below).
Countdown to victory: 1 200!
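
One hypothetical way to probe Hryul's concern in a self-play setup: give one of the copies its observations on a delay comparable to human reaction time, so tactics that only beat slow reactions stay in the training distribution. A toy sketch; the class and the frame figures are assumptions, not from any real training rig:

```python
# Toy sketch: a self-play opponent that sees the game `delay` frames late,
# crudely modelling ~200 ms of human reaction time at 24 frames per second.
from collections import deque

class DelayedObserver:
    def __init__(self, delay: int = 5):
        self._buf = deque()
        self._delay = delay

    def observe(self, frame):
        """Buffer the newest frame; release the one from `delay` steps ago."""
        self._buf.append(frame)
        if len(self._buf) <= self._delay:
            return None               # still "reacting": nothing visible yet
        return self._buf.popleft()

slowpoke = DelayedObserver(delay=5)
for t in range(8):
    print(t, slowpoke.observe(t))     # frame t becomes visible at step t + 5
```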
evilfatsh1t
Profile Joined October 2010
Australia8778 Posts
March 13 2016 11:42 GMT
#175
It says the AI didn't lose. AlphaGo lost.
Poopi
Profile Blog Joined November 2010
France12906 Posts
March 13 2016 12:28 GMT
#176
On March 13 2016 01:11 DuckloadBlackra wrote:
Flash thinks he would win? Well, so did Lee Sedol, who went as far as to say he would win 4-1 or 5-0, and he now trails 0-3, seemingly unable to win a single game.


Nice sophism
WriterMaru
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 13 2016 13:03 GMT
#177
The AI is able to simultaneously micro at 2, 3, 4, n places on the map. No way a human will stop that.
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:13:05
March 13 2016 15:08 GMT
#178
It's funny to me that people think the human could win. Even with capped APM the AI would use its actions in the most efficient way (no spamming); it could probably win easily with something like 90-100 APM (a sketch of such a cap follows below).
It could probably win with any style of game as well: a worker rush; 3 marines, 1 medic, 1 dropship; and in the late game, microing a big army, the AI would crush a human with almost no losses, all while keeping perfect macro (switching back to its base for a split second at the perfect time, every time). Add perfect minimap awareness and reaction time, the ability to tell which units it sees from their speed on the minimap alone, and the best response chosen without delay. It would also spend its minerals and gas in the most efficient way.
All this with perfectly timed and positioned scouting, while extrapolating the opponent's build from unit composition and timing.
IMHO the AI would utterly crush any human, even if it told the human ahead of time what it was going to do.

"Now I will do a mid-game two-or-three-base attack."
"This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harass. GLHF"

I would like to see the AI learn to BM; that would probably be the only real challenge.
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
TelecoM
Profile Blog Joined January 2010
United States10686 Posts
Last Edited: 2016-03-13 16:18:01
March 13 2016 16:16 GMT
#179
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol
AKA: TelecoM[WHITE] Protoss fighting
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 13 2016 23:55 GMT
#180
On March 14 2016 01:16 GGzerG wrote:
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol

We only have one master and he was not online.