Flash on DeepMind: "I think I can win" - Page 7

Forum Index > SC2 General
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 11 2016 02:04 GMT
#121
I hope this happens.
@Drayxs | Drayxs.221 | Drayxs#1802
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 02:06 GMT
#122
This technology is amazing but quite frightening.
chipmonklord17
Profile Joined February 2011
United States11944 Posts
Last Edited: 2016-03-11 02:09:14
March 11 2016 02:08 GMT
#123
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 11 2016 02:08 GMT
#124
On March 11 2016 10:40 Superbanana wrote:
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.


You're right about that. I should've said "they beat Kasparov without a flying penis"

Checkers is solved though.
What qxc said.
Jonoman92
Profile Blog Joined September 2006
United States9105 Posts
March 11 2016 02:10 GMT
#125
I don't think an AI will be able to beat a current level top BW player within 50 years. Though it'd be cool to see... and terrifying.
Hypertension
Profile Joined April 2011
United States802 Posts
March 11 2016 02:55 GMT
#126
I think DeepMind wins this no contest with a few months' training. Nearly perfect micro and macro will make up for a lot of tactical errors and build order mistakes, especially in Brood War. After the AI builds a medic and a marine it gets tough; once a dropship comes out, gg.
Buy boots first. Boots good item.
b0lt
Profile Joined March 2009
United States790 Posts
March 11 2016 03:50 GMT
#127
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports


And it'd be completely pointless?
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 04:00 GMT
#128
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports

But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.

Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 04:01 GMT
#129
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; a StarCraft game can't be captured that compactly, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.
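A hedged toy of the supervised stage described above: instead of a deep policy network, a frequency table over a made-up move alphabet stands in for "predict the next expert move from the current state". Everything here (the move names, the three "expert" games) is invented for illustration; real AlphaGo trained a convolutional network, since a lookup table only works when the state space is tiny.

```python
from collections import Counter, defaultdict

def train_policy(games):
    """Count which move followed each prefix of moves in expert games."""
    counts = defaultdict(Counter)  # state (move prefix) -> Counter of next moves
    for game in games:
        state = ()
        for move in game:
            counts[state][move] += 1
            state = state + (move,)
    return counts

def predict(counts, state):
    """Return the most common expert continuation from `state`, if seen."""
    if state not in counts:
        return None
    return counts[state].most_common(1)[0][0]

# Three toy "expert" games over an abstract move alphabet.
games = [["a", "b", "c"], ["a", "b", "d"], ["a", "x", "c"]]
policy = train_policy(games)
print(predict(policy, ()))      # "a" opens every game
print(predict(policy, ("a",)))  # "b" follows "a" in 2 of 3 games
```

The self-play stage then generates fresh games with the current policy and retrains on them, which is the part no amount of human replays can substitute for.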
evilfatsh1t
Profile Joined October 2010
Australia8782 Posts
March 11 2016 05:45 GMT
#130
just imagine an ai that is following flash's timing builds advancing towards you. it would siege the exact amount of tanks at the exact range for it to destroy your army, whilst advancing with the remaining unsieged units as you back off. kind of like a tidal wave slowly advancing to you but so beautifully smooth that youd piss your pants trying to look for an opening.
gives me chills just thinking about that possibility.
that said though, i dont know how deepmind is programmed enough to comment on its ability but i do know that go is at its roots a game that could in theory be solved by maths. the only advantage pros had over ai in past years was there was no ai that could calculate every single possible move until recently. im not sure if this is how deepmind works now, but if the ai is able to calculate every single variable in a game that follows mathematical rules then a human shouldnt be able to win.
starcraft however doesnt follow these rules so i dont see ai being able to defeat the decision making of a pro for a long time
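The "calculate every single possible move" idea above can be shown on a game small enough for it to actually work. This exhaustive negamax sketch solves tic-tac-toe completely; the point is that the same brute force is hopeless for Go's astronomically larger tree (let alone StarCraft's real-time state), which is exactly why DeepMind relies on learned evaluation instead.

```python
def winner(b):
    """Return 'X' or 'O' if that side has three in a row on board string b."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(b, player):
    """Value of board b for `player` to move: +1 win, 0 draw, -1 loss."""
    if winner(b):
        return -1  # the previous player just completed a line, so we lost
    if " " not in b:
        return 0   # board full: draw
    best = -2
    for i, c in enumerate(b):
        if c == " ":
            nb = b[:i] + player + b[i+1:]
            best = max(best, -negamax(nb, "O" if player == "X" else "X"))
    return best

print(negamax(" " * 9, "X"))  # 0: perfect play from both sides is a draw
```

Tic-tac-toe's full tree is only a few hundred thousand nodes; Go's is on the order of 10^170 states, so this approach never gets off the ground there.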
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 05:47 GMT
#131
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
BronzeKnee
Profile Joined March 2011
United States5219 Posts
Last Edited: 2016-03-11 05:50:52
March 11 2016 05:49 GMT
#132
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).


The thing about Sc2 though is that it is different.

In Poker, or Go or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.

If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.

Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
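The loaded-or-empty dropship dilemma above is a small imperfect-information decision, and "throwing some probabilities around" amounts to an expected-value comparison. The payoff numbers here are invented purely for illustration.

```python
# Rows: the defender's choice. Columns: whether the dropship is actually
# loaded. Payoffs are made up to give the dilemma some shape.
payoff = {
    ("defend", "loaded"): +2,   # drop repelled
    ("defend", "empty"):  -1,   # army pulled home for nothing
    ("ignore", "loaded"): -3,   # drop lands unopposed
    ("ignore", "empty"):  +1,   # the bluff costs you nothing
}

def expected_value(action, p_loaded):
    return (p_loaded * payoff[(action, "loaded")]
            + (1 - p_loaded) * payoff[(action, "empty")])

def best_response(p_loaded):
    return max(("defend", "ignore"), key=lambda a: expected_value(a, p_loaded))

print(best_response(0.8))  # defend
print(best_response(0.1))  # ignore
```

The hard part for an AI is not this arithmetic but estimating `p_loaded` from scouting information, and recognising that a good opponent will randomise precisely to keep that estimate uninformative.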
Wrath
Profile Blog Joined July 2014
3174 Posts
March 11 2016 05:57 GMT
#133
1. It is for BW.

2. The APM will most likely be restricted to around 200. An AI's APM is equal to its EPM: it does not waste clicks like progamers, who spam boxing and clicking to inflate their APM. So for guys like EffOrt, who can go to around 450~500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI.
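The APM-vs-EPM distinction can be sketched by filtering repeated commands out of an action log. The 0.4-second spam window and the log format are assumptions for illustration, not how Brood War actually records actions.

```python
def apm_and_epm(actions, game_minutes, window=0.4):
    """Raw and effective actions per minute from a [(time, command)] log.

    A command repeated within `window` seconds of its last occurrence is
    treated as spam: it counts toward APM but not EPM.
    """
    apm = len(actions) / game_minutes
    effective = 0
    last = {}  # command -> timestamp of its most recent occurrence
    for t, cmd in actions:
        if cmd not in last or t - last[cmd] > window:
            effective += 1
        last[cmd] = t  # updating even on spam keeps continuous spam suppressed
    return apm, effective / game_minutes

# One second of play: hotkey 1 mashed five times, then a real attack-move.
log = [(0.0, "select_group_1"), (0.1, "select_group_1"),
       (0.2, "select_group_1"), (0.3, "select_group_1"),
       (0.4, "select_group_1"), (0.9, "attack_move")]
apm, epm = apm_and_epm(log, game_minutes=1/60)
print(round(apm), round(epm))  # 360 120: six raw clicks, two effective actions
```

On this toy log roughly two thirds of the "APM" evaporates once spam is discounted, which is the gap Wrath is pointing at.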
CursOr
Profile Blog Joined January 2009
United States6335 Posts
March 11 2016 05:58 GMT
#134
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.

I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
CJ forever (-_-(-_-(-_-(-_-)-_-)-_-)-_-)
ETisME
Profile Blog Joined April 2011
12631 Posts
March 11 2016 06:02 GMT
#135
Actually it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta developing
"Swift as the wind, silent as a forest, fierce as fire, immovable as a mountain, inscrutable as shadow, striking like thunder." (Sun Tzu)
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 06:09 GMT
#136
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 11 2016 07:24 GMT
#137
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.

It would be nice if, wherever Koreans play BW, the client would automatically save the replay, scramble the names, and send it off to Google. Or imagine people at Google becoming frustrated because, for once, they do not have big data sets available for everything.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lpunatic
Profile Joined October 2011
235 Posts
Last Edited: 2016-03-11 07:59:01
March 11 2016 07:53 GMT
#138
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.


AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.

I think if the DeepMind team put their effort into BW, they'll be able to achieve superhuman performance in a few years' time.

There are some ways that the problem is harder than Go - partial information, real time and a much more complex raw game state. On the other hand, there are some clear advantages an AI will have over people (APM, multitasking) which are not present in Go. It seems to me that if you can get an AI that makes decisions like a half decent human player, it will be able to press its advantages well beyond human competition.
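Self-play in its simplest form can be sketched with flat Monte Carlo on a toy Nim game (take 1 or 2 stones; taking the last stone wins): the program rates each candidate move purely by playing the game out against itself, with no human data at all. AlphaGo's self-play also improves the policy it rolls out with, which this sketch does not; the game and the playout count are invented for illustration.

```python
import random

def playout(pile, rng):
    """Both sides play randomly from `pile`; True if the side to move wins."""
    mover = True  # True while it is the original side-to-move's turn
    while True:
        pile -= rng.randint(1, 2) if pile >= 2 else 1
        if pile == 0:
            return mover  # whoever took the last stone wins
        mover = not mover

def best_move(pile, n=2000, seed=0):
    """Pick 1 or 2 stones by win rate over n random self-play games each."""
    rng = random.Random(seed)
    scores = {}
    for take in (1, 2):
        if take > pile:
            continue
        if take == pile:
            scores[take] = 1.0  # taking the last stone wins outright
            continue
        wins = sum(not playout(pile - take, rng) for _ in range(n))
        scores[take] = wins / n
    return max(scores, key=scores.get)

print(best_move(4))  # 1: leave 3 stones, a lost position for the opponent
print(best_move(5))  # 2: likewise leave 3
```

Even with purely random playouts the statistics recover optimal Nim play here; the catch is that the number of playouts needed explodes with game complexity, which is where learned policies and values come in.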
lpunatic
Profile Joined October 2011
235 Posts
March 11 2016 08:17 GMT
#139
On March 11 2016 13:01 ZAiNs wrote:
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; a StarCraft game can't be captured that compactly, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.


On the other hand, evaluating a stone in Go is a very hard problem - it may depend on the position of every other stone on the board. For StarCraft, the value of a base or a zealot is pretty simple to evaluate in comparison, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.

Point being, you maybe can get away with a simplified game state representation.
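A "simplified game state representation" of the kind suggested above might look like a flat feature vector. The unit types, the resource scaling, and the coarse 4x4 occupancy grid are all invented for illustration; a real encoder would be far richer.

```python
from dataclasses import dataclass

UNIT_TYPES = ["zealot", "dragoon", "probe"]
GRID = 4  # map coarsened to a 4x4 occupancy grid

@dataclass
class Unit:
    kind: str
    x: float  # map coordinates normalised to [0, 1)
    y: float

def encode(minerals, gas, units):
    """Flatten a toy game state into a fixed-length feature vector."""
    features = [minerals / 1000.0, gas / 1000.0]  # two scalar features
    for kind in UNIT_TYPES:
        grid = [0.0] * (GRID * GRID)  # per-type unit counts per grid cell
        for u in units:
            if u.kind == kind:
                cell = int(u.y * GRID) * GRID + int(u.x * GRID)
                grid[cell] += 1.0
        features.extend(grid)
    return features

state = encode(350, 100, [Unit("zealot", 0.1, 0.1), Unit("zealot", 0.12, 0.1),
                          Unit("probe", 0.9, 0.9)])
print(len(state))  # 2 + 3 * 16 = 50 features, regardless of unit count
```

The fixed length is the point: however the armies move, the learner always sees the same-shaped input, which is what makes standard network training applicable.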
Gluon
Profile Joined April 2011
Netherlands405 Posts
March 11 2016 08:25 GMT
#140
On March 11 2016 15:02 ETisME wrote:
Actually it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta developing


Exactly this. With the way the AI learns, the most interesting development will be in the fact that it will not be constrained to any conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.

Other than that, Deepmind should eventually win with stellar macro and micro, just by going 3 rax every game
Administrator
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2026 TLnet. All Rights Reserved.