Flash on DeepMind: "I think I can win" - Page 7

Forum Index > SC2 General
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 11 2016 02:04 GMT
#121
I hope this happens.
@Drayxs | Drayxs.221 | Drayxs#1802
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 02:06 GMT
#122
This technology is amazing but quite frightening.
chipmonklord17
Profile Joined February 2011
United States11944 Posts
Last Edited: 2016-03-11 02:09:14
March 11 2016 02:08 GMT
#123
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 11 2016 02:08 GMT
#124
On March 11 2016 10:40 Superbanana wrote:
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.


You're right about that. I should've said "they beat Kasparov without a flying penis"

Checkers is solved though.
What qxc said.
Jonoman92
Profile Blog Joined September 2006
United States9109 Posts
March 11 2016 02:10 GMT
#125
I don't think an AI will be able to beat a current level top BW player within 50 years. Though it'd be cool to see... and terrifying.
Hypertension
Profile Joined April 2011
United States802 Posts
March 11 2016 02:55 GMT
#126
I think DeepMind wins this, no contest, with a few months of training. Nearly perfect micro and macro will make up for a lot of tactical errors and build order mistakes, especially in Brood War. After the AI builds a medic and marine it gets tough; once a dropship comes out, gg.
Buy boots first. Boots good item.
b0lt
Profile Joined March 2009
United States790 Posts
March 11 2016 03:50 GMT
#127
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports


And it'd be completely pointless?
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 04:00 GMT
#128
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports

But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.

Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 04:01 GMT
#129
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; StarCraft has no such compact representation, so it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
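Rockslave's quoted claim that backpropagation was the breakthrough behind modern character recognition can be made concrete with a toy sketch. The network below is trained by backpropagation in plain NumPy on XOR rather than digit images (same algorithm, much smaller data); the layer sizes, learning rate, and iteration count are arbitrary illustration choices, not anything taken from AlphaGo or DeepMind.

```python
import numpy as np

# Minimal backpropagation demo: a one-hidden-layer network trained on XOR,
# the classic task a single-layer perceptron cannot learn.
rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 16))   # input -> hidden weights
b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 1.0
for _ in range(20_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: apply the chain rule layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

preds = np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)).ravel().tolist()
print(preds)
```

The same propagate-the-error-backwards update, scaled up to convolutional layers and millions of images, is what made the character-recognition leap the post describes.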
evilfatsh1t
Profile Joined October 2010
Australia8865 Posts
March 11 2016 05:45 GMT
#130
Just imagine an AI following Flash's timing builds advancing towards you. It would siege the exact number of tanks at the exact range needed to destroy your army, while advancing with the remaining unsieged units as you back off. Kind of like a tidal wave slowly advancing on you, but so beautifully smooth that you'd piss your pants trying to look for an opening.
Gives me chills just thinking about that possibility.
That said, I don't know enough about how DeepMind is programmed to comment on its ability, but I do know that Go is, at its roots, a game that could in theory be solved by maths. The only advantage pros had over AI in past years was that no AI could calculate every single possible move until recently. I'm not sure if this is how DeepMind works now, but if the AI is able to calculate every single variable in a game that follows mathematical rules, then a human shouldn't be able to win.
StarCraft, however, doesn't follow these rules, so I don't see AI being able to defeat the decision-making of a pro for a long time.
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 05:47 GMT
#131
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
BronzeKnee
Profile Joined March 2011
United States5219 Posts
Last Edited: 2016-03-11 05:50:52
March 11 2016 05:49 GMT
#132
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash). [...]


The thing about Sc2 though is that it is different.

In Poker, or Go or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.

If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.

Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
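BronzeKnee's loaded-or-empty dropship is exactly the kind of imperfect information rockslave says you handle by "throwing some probabilities around". A toy sketch of that idea, with entirely made-up numbers: the AI keeps a belief that the dropship is loaded and updates it with Bayes' rule as observations arrive.

```python
def updated_belief(prior, p_obs_if_loaded, p_obs_if_empty):
    """P(loaded | observation) by Bayes' rule."""
    joint_loaded = p_obs_if_loaded * prior
    joint_empty = p_obs_if_empty * (1.0 - prior)
    return joint_loaded / (joint_loaded + joint_empty)

belief = 0.5  # no scouting information yet: loaded and empty equally likely
# Observation: the opponent's main army is missing from the front.
# Assume (arbitrarily, for illustration) this is seen 80% of the time when
# the drop is real, but only 40% of the time when the dropship is a bluff.
belief = updated_belief(belief, 0.8, 0.4)
print(round(belief, 3))  # 0.667 -- the threat now looks more real
```

A real agent would maintain beliefs like this over many hidden variables at once (hidden expansions, cancelled buildings), but the mechanism is the same: uncertainty becomes a probability to update, not a wall.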
Wrath
Profile Blog Joined July 2014
3174 Posts
March 11 2016 05:57 GMT
#133
1. It is for BW.

2. The APM will most likely be restricted to around 200. An AI's APM is equal to its EPM; it does not waste clicks like progamers, who spam boxing and clicking to inflate their APM. So for guys like EffOrt, who can reach around 450~500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI.
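Wrath's APM-versus-EPM distinction can be sketched on a toy action log (all numbers invented): APM counts every command issued, while EPM drops immediate repeats of the same command, which is one crude way to filter spam clicking.

```python
def apm_and_epm(actions, game_minutes, spam_window=0.5):
    """Return (APM, EPM) for a log of (seconds, command) pairs.

    A command is counted as 'effective' unless it repeats the previous
    effective command within spam_window seconds -- a deliberately crude
    spam filter for illustration only.
    """
    effective = []
    for t, cmd in actions:
        if effective and cmd == effective[-1][1] and t - effective[-1][0] < spam_window:
            continue  # same command spammed back-to-back: changes nothing
        effective.append((t, cmd))
    return len(actions) / game_minutes, len(effective) / game_minutes

# Illustrative log: four rapid "select army" repeats count toward APM
# but collapse to a single effective action.
log = [(0.0, "select army"), (0.1, "select army"), (0.2, "select army"),
       (0.3, "select army"), (1.0, "attack move"), (2.0, "train scv")]
apm, epm = apm_and_epm(log, game_minutes=0.05)
print(apm, epm)  # 120.0 60.0
```

Run over a real pro's replay, a filter like this is how you would estimate whether a 450-APM player's effective rate actually exceeds a proposed 200 cap.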
CursOr
Profile Blog Joined January 2009
United States6335 Posts
March 11 2016 05:58 GMT
#134
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.

I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
CJ forever (-_-(-_-(-_-(-_-)-_-)-_-)-_-)
ETisME
Profile Blog Joined April 2011
12715 Posts
March 11 2016 06:02 GMT
#135
Actually, it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta develop.
Swift as the wind, quiet as the forest, fierce as fire, immovable as a mountain, unfathomable as shadow, striking like thunder. (Sun Tzu, The Art of War)
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 06:09 GMT
#136
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves, and apparently the average Go game has around 200 moves, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial supervised phase. I don't think VODs would even be useful, because they show so little of the game state at any point in time; I think a replay is needed so the network can observe the entire game state at every moment.
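The arithmetic in the post above checks out, and it makes the data gap easy to state. The BW figure below is a hypothetical assumption for illustration, not a measured replay statistic.

```python
# AlphaGo's supervised dataset, per the discussion above.
moves_fed = 30_000_000
avg_moves_per_go_game = 200
go_games = moves_fed // avg_moves_per_go_game
print(go_games)  # 150000 high-level games

# Hypothetical comparison: if a BW replay averaged ~1,000 recorded
# actions per player (assumption for illustration only), matching the
# raw move count would still take tens of thousands of pro replays.
bw_actions_per_replay = 1_000
bw_replays_needed = moves_fed // bw_actions_per_replay
print(bw_replays_needed)  # 30000
```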
Grumbels
Profile Blog Joined May 2009
Netherlands7032 Posts
March 11 2016 07:24 GMT
#137
On March 11 2016 15:09 ZAiNs wrote:

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show such little information about the game state at any point in time, I think a replay is needed so it can observe the entire game state at every point in time.

It would be nice if, wherever Koreans play BW, the client automatically saved the replay, scrambled the names, and sent it off to Google. Or imagine people at Google becoming frustrated because, for once, they do not have big data sets available for everything.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lpunatic
Profile Joined October 2011
235 Posts
Last Edited: 2016-03-11 07:59:01
March 11 2016 07:53 GMT
#138
On March 11 2016 15:09 ZAiNs wrote:

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show such little information about the game state at any point in time, I think a replay is needed so it can observe the entire game state at every point in time.


AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.

I think if the DeepMind team put their effort into BW, they'll be able to achieve superhuman performance within a few years.

There are some ways that the problem is harder than Go - partial information, real time and a much more complex raw game state. On the other hand, there are some clear advantages an AI will have over people (APM, multitasking) which are not present in Go. It seems to me that if you can get an AI that makes decisions like a half decent human player, it will be able to press its advantages well beyond human competition.
lpunatic
Profile Joined October 2011
235 Posts
March 11 2016 08:17 GMT
#139
On March 11 2016 13:01 ZAiNs wrote:
Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by simple list of positions describing which square had a stone placed on it each turn, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.


On the other hand, evaluating a stone in Go is a very hard problem: its value may depend on the position of every other stone on the board. In StarCraft, the value of a base or a zealot is pretty simple to evaluate in comparison, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.

Point being, you maybe can get away with a simplified game state representation.
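lpunatic's "simplified game state representation" could be as coarse as a fixed-length vector of aggregate counts. Everything below — the class name, the fields, the numbers — is a hypothetical sketch of that idea, not any real bot's or API's state format.

```python
from dataclasses import dataclass

@dataclass
class SimpleState:
    """Coarse, fixed-size summary of one player's situation."""
    minerals: int
    gas: int
    supply_used: int
    supply_cap: int
    bases: int
    army_value: int        # summed resource cost of own army units
    enemy_army_seen: int   # same, but only for scouted enemy units

    def features(self):
        """Flatten to a fixed-length vector a value network could consume."""
        return [self.minerals, self.gas, self.supply_used, self.supply_cap,
                self.bases, self.army_value, self.enemy_army_seen]

s = SimpleState(minerals=450, gas=120, supply_used=68, supply_cap=82,
                bases=3, army_value=2400, enemy_army_seen=1800)
print(s.features())
```

The bet implicit in the post is that a handful of aggregates like these already separate winning positions from losing ones far better than a handful of Go stones would, precisely because StarCraft value is less positional.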
Gluon
Profile Joined April 2011
Netherlands421 Posts
March 11 2016 08:25 GMT
#140
On March 11 2016 15:02 ETisME wrote:
Actually it makes me wonder what would two deepmind do if they were to play against each other.
We may even see a whole new meta developing


Exactly this. Given the way the AI learns, the most interesting development will be that it is not constrained to any conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.

Other than that, DeepMind should eventually win with stellar macro and micro, just by going 3 rax every game
Administrator
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.