Flash on DeepMind: "I think I can win" - Page 7

Forum Index > SC2 General
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 11 2016 02:04 GMT
#121
I hope this happens.
@Drayxs | Drayxs.221 | Drayxs#1802
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 02:06 GMT
#122
This technology is amazing but quite frightening.
chipmonklord17
Profile Joined February 2011
United States11944 Posts
Last Edited: 2016-03-11 02:09:14
March 11 2016 02:08 GMT
#123
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 11 2016 02:08 GMT
#124
On March 11 2016 10:40 Superbanana wrote:
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.


You're right about that. I should've said "they beat Kasparov without a flying penis"

Checkers is solved though.
What qxc said.
Jonoman92
Profile Blog Joined September 2006
United States9107 Posts
March 11 2016 02:10 GMT
#125
I don't think an AI will be able to beat a current top-level BW player within 50 years. It'd be cool to see, though... and terrifying.
Hypertension
Profile Joined April 2011
United States802 Posts
March 11 2016 02:55 GMT
#126
I think DeepMind wins this, no contest, with a few months' training. Nearly perfect micro and macro will make up for a lot of tactical errors and build-order mistakes, especially in Brood War. After the AI builds a medic and marine it gets tough; once a dropship comes out, gg.
Buy boots first. Boots good item.
b0lt
Profile Joined March 2009
United States790 Posts
March 11 2016 03:50 GMT
#127
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. […]


And it'd be completely pointless?
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 04:00 GMT
#128
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. […]

But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.

Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 04:01 GMT
#129
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset to train on, though. For AlphaGo they trained two separate networks (one to predict the next move, the other to predict the final winner) on 30 million discrete moves from games played by human experts. After that it improved by playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of moves describing which point had a stone placed on it each turn; for StarCraft, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without that initial training it will have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
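The backpropagation breakthrough rockslave mentions is easy to see on a toy problem. Below is a minimal sketch (an editor's illustration, nothing to do with AlphaGo's actual architecture): a two-layer network learning XOR by pushing the error gradient backwards through the layers.

```python
import numpy as np

# Toy illustration of backpropagation on XOR: a two-layer network whose
# weights are updated by propagating the error gradient layer by layer.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # backward pass: chain rule, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

The loss falls steadily as training proceeds; scaling this idea up (many more layers, vastly more data) is what the "deep" in deep learning refers to.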
evilfatsh1t
Profile Joined October 2010
Australia8819 Posts
March 11 2016 05:45 GMT
#130
Just imagine an AI following Flash's timing builds advancing towards you. It would siege the exact number of tanks at the exact range needed to destroy your army, whilst advancing with the remaining unsieged units as you back off. Kind of like a tidal wave slowly advancing towards you, but so beautifully smooth that you'd piss your pants trying to find an opening.
Gives me chills just thinking about that possibility.
That said, I don't know enough about how DeepMind is programmed to comment on its ability, but I do know that Go is, at its root, a game that could in theory be solved by maths. The only advantage pros had over AI in past years was that until recently no AI could calculate every single possible move. I'm not sure if this is how DeepMind works now, but if the AI is able to calculate every variable in a game that follows mathematical rules, then a human shouldn't be able to win.
StarCraft, however, doesn't follow these rules, so I don't see AI defeating the decision making of a pro for a long time.
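The "calculate every single possible move" idea can be made concrete on a game small enough to allow it. A hedged sketch: exhaustive game-tree search solving Nim, where players alternately take 1-3 stones and taking the last stone wins; the game choice is an editor's, purely for illustration.

```python
from functools import lru_cache

# Exhaustive search on Nim (take 1-3 stones, taking the last stone wins).
# Any game this small and fully determined can be solved outright.

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player just took the last stone
    # winning iff some move leaves the opponent in a losing position
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# exhaustive search recovers the known theory: multiples of 4 are losses
losing = [n for n in range(1, 13) if not wins(n)]  # [4, 8, 12]
```

Go's tree is astronomically too large for this kind of brute force, which is exactly why AlphaGo needed learned evaluation instead.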
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 05:47 GMT
#131
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning by reading the graphics, so they could try using VODs too.
BronzeKnee
Profile Joined March 2011
United States5219 Posts
Last Edited: 2016-03-11 05:50:52
March 11 2016 05:49 GMT
#132
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash). […]


The thing about SC2, though, is that it is different.

In Poker, Go, or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.

If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.

Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
Wrath
Profile Blog Joined July 2014
3174 Posts
March 11 2016 05:57 GMT
#133
1. It is for BW.

2. The APM will most likely be restricted to around 200. An AI's APM is equal to its EPM: it does not waste clicks the way progamers do when they spam boxing and clicking to inflate their APM. So for players like EffOrt, who can reach around 450-500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI.
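The APM-vs-EPM distinction above can be sketched as a simple filter over an action log. Everything here, the log format and the 0.2-second spam threshold, is invented for illustration.

```python
# Count every action for APM, but drop immediate repeats of the same
# command (spam clicks) for an effective-actions-per-minute figure.

def apm_and_epm(actions, minutes):
    """actions: list of (timestamp_seconds, action_name) tuples."""
    effective = []
    for i, (t, a) in enumerate(actions):
        # a repeat of the previous command within 0.2 s counts as spam
        if i > 0 and a == actions[i - 1][1] and t - actions[i - 1][0] < 0.2:
            continue
        effective.append((t, a))
    return len(actions) / minutes, len(effective) / minutes

# half a minute of (hypothetical) play: 6 raw actions, 3 of them spam
log = [(0.00, "select"), (0.05, "select"), (0.10, "select"),
       (0.50, "move"), (0.55, "move"), (1.00, "build_scv")]
apm, epm = apm_and_epm(log, minutes=0.5)  # apm = 12.0, epm = 6.0
```

Real replay parsers classify actions far more carefully, but the gap between the two numbers is the point: an AI's raw action count would all be effective.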
CursOr
Profile Blog Joined January 2009
United States6335 Posts
March 11 2016 05:58 GMT
#134
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.

I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
CJ forever (-_-(-_-(-_-(-_-)-_-)-_-)-_-)
ETisME
Profile Blog Joined April 2011
12698 Posts
March 11 2016 06:02 GMT
#135
Actually, it makes me wonder what two DeepMinds would do if they played against each other.
We may even see a whole new meta develop.
Swift as the wind, silent as a forest, fierce as fire, immovable as a mountain, inscrutable as shadow, sudden as thunder.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 06:09 GMT
#136
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves, and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any moment; I think a replay is needed so the AI can observe the entire game state at every point in time.
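ZAiNs's arithmetic, spelled out as a quick check:

```python
# AlphaGo's supervised phase used about 30 million moves; at roughly
# 200 moves per game, that works out to ~150,000 high-level games.
total_moves = 30_000_000
moves_per_game = 200
games = total_moves // moves_per_game  # 150,000
```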
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 11 2016 07:24 GMT
#137
On March 11 2016 15:09 ZAiNs wrote:
AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible […]

It would be nice if wherever Koreans play BW, the replay were automatically saved, the names scrambled, and the file sent off to Google. Or imagine people at Google becoming frustrated because, for once, they don't have big data sets available for everything.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lpunatic
Profile Joined October 2011
235 Posts
Last Edited: 2016-03-11 07:59:01
March 11 2016 07:53 GMT
#138
On March 11 2016 15:09 ZAiNs wrote:
AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible […]


AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.

I think if the DeepMind team put their effort into BW, they'd be able to achieve superhuman performance within a few years.

There are some ways in which the problem is harder than Go: partial information, real time, and a much more complex raw game state. On the other hand, there are clear advantages an AI would have over people (APM, multitasking) that are not present in Go. It seems to me that if you can get an AI that makes decisions like a half-decent human player, it will be able to press those advantages well beyond human competition.
lpunatic
Profile Joined October 2011
235 Posts
March 11 2016 08:17 GMT
#139
On March 11 2016 13:01 ZAiNs wrote:
[…] it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself […] it seems to be a lot harder than Go.


On the other hand, evaluating a stone in Go is a very hard problem: its value may depend on the position of every other stone on the board. In StarCraft, the value of a base or a zealot is comparatively simple to evaluate, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.

Point being, you can maybe get away with a simplified game-state representation.
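For what such a simplified representation might look like, here is a hypothetical sketch: a handful of counts and resources instead of a full board. The field names and normalization constants are invented for this illustration.

```python
from dataclasses import dataclass
import numpy as np

# A toy game-state summary: a short feature vector of counts and
# resources, rather than the raw per-pixel or per-tile game state.

@dataclass
class SimpleState:
    minerals: int
    gas: int
    supply_used: int
    supply_cap: int
    zealots: int
    bases: int

    def to_vector(self) -> np.ndarray:
        # scale each feature to roughly [0, 1] so none dominates
        return np.array([self.minerals / 1000, self.gas / 1000,
                         self.supply_used / 200, self.supply_cap / 200,
                         self.zealots / 50, self.bases / 10])

s = SimpleState(minerals=450, gas=120, supply_used=68, supply_cap=86,
                zealots=12, bases=2)
vec = s.to_vector()  # shape (6,)
```

A real agent would need spatial information too, but a compact summary like this is the kind of simplification the post is gesturing at.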
Gluon
Profile Joined April 2011
Netherlands410 Posts
March 11 2016 08:25 GMT
#140
On March 11 2016 15:02 ETisME wrote:
Actually it makes me wonder what would two deepmind do if they were to play against each other.
We may even see a whole new meta developing


Exactly this. With the way the AI learns, the most interesting development will be that it is not constrained to conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.

Other than that, DeepMind should eventually win with stellar macro and micro, just by going 3-rax every game.
Administrator
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.