Flash on DeepMind: "I think I can win" - Page 7

Draconicfire
Profile Joined May 2010
Canada 2562 Posts
March 11 2016 02:04 GMT
#121
I hope this happens.
@Drayxs | Drayxs.221 | Drayxs#1802
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 02:06 GMT
#122
This technology is amazing but quite frightening.
chipmonklord17
Profile Joined February 2011
United States 11944 Posts
Last Edited: 2016-03-11 02:09:14
March 11 2016 02:08 GMT
#123
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
rockslave
Profile Blog Joined January 2008
Brazil 318 Posts
March 11 2016 02:08 GMT
#124
On March 11 2016 10:40 Superbanana wrote:
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.


You're right about that. I should've said "they beat Kasparov without a flying penis"

Checkers is solved though.
What qxc said.
Jonoman92
Profile Blog Joined September 2006
United States 9104 Posts
March 11 2016 02:10 GMT
#125
I don't think an AI will be able to beat a current level top BW player within 50 years. Though it'd be cool to see... and terrifying.
Hypertension
Profile Joined April 2011
United States 802 Posts
March 11 2016 02:55 GMT
#126
I think DeepMind wins this no contest with a few months' training. Nearly perfect micro and macro will make up for a lot of tactical errors and build order mistakes, especially in Brood War. After the AI builds medics and marines it gets tough; once a dropship comes out, gg.
Buy boots first. Boots good item.
b0lt
Profile Joined March 2009
United States 790 Posts
March 11 2016 03:50 GMT
#127
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports


And it'd be completely pointless?
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 04:00 GMT
#128
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports

But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.

Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
ZAiNs
Profile Joined July 2010
United Kingdom 6525 Posts
March 11 2016 04:01 GMT
#129
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; for BW, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, and I could be wrong, but it seems to be a lot harder than Go.
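To make the two-network idea above concrete, here is a minimal Python/PyTorch sketch of a policy network (predicts the next move) and a value network (predicts the winner) for a 19x19 board. The layer sizes and the number of input feature planes are made-up placeholders, not AlphaGo's actual architecture, and only the supervised phase on expert moves is shown.

    # Illustrative sketch only; sizes and feature planes are placeholders.
    import torch
    import torch.nn as nn

    BOARD = 19
    IN_PLANES = 17  # hypothetical number of input feature planes

    class PolicyNet(nn.Module):
        """Predicts the next move as logits over the 361 board points."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(IN_PLANES, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Linear(64 * BOARD * BOARD, BOARD * BOARD)

        def forward(self, x):
            return self.head(self.trunk(x).flatten(1))

    class ValueNet(nn.Module):
        """Predicts the final winner as a score in [-1, 1]."""
        def __init__(self):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(IN_PLANES, 64, 3, padding=1), nn.ReLU(),
            )
            self.head = nn.Sequential(
                nn.Linear(64 * BOARD * BOARD, 256), nn.ReLU(),
                nn.Linear(256, 1), nn.Tanh(),
            )

        def forward(self, x):
            return self.head(self.trunk(x).flatten(1))

    # Supervised phase: fit the policy net to expert moves with cross-entropy.
    policy = PolicyNet()
    states = torch.randn(8, IN_PLANES, BOARD, BOARD)      # stand-in batch
    expert_moves = torch.randint(0, BOARD * BOARD, (8,))  # stand-in labels
    loss = nn.functional.cross_entropy(policy(states), expert_moves)
    loss.backward()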
evilfatsh1t
Profile Joined October 2010
Australia 8691 Posts
March 11 2016 05:45 GMT
#130
Just imagine an AI following Flash's timing builds advancing towards you. It would siege the exact number of tanks at the exact range needed to destroy your army, whilst advancing with the remaining unsieged units as you back off. Kind of like a tidal wave rolling slowly towards you, but so beautifully smooth that you'd piss your pants trying to find an opening.
Gives me chills just thinking about that possibility.
That said, I don't know enough about how DeepMind is programmed to comment on its ability, but I do know that Go is, at its roots, a game that could in theory be solved by maths. The only advantage pros had over AI in past years was that no AI could calculate every possible move until recently. I'm not sure whether that is how DeepMind works now, but if the AI is able to calculate every variable in a game that follows mathematical rules, then a human shouldn't be able to win.
StarCraft, however, doesn't follow these rules, so I don't see AI defeating the decision-making of a pro for a long time.
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 05:47 GMT
#131
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
BronzeKnee
Profile Joined March 2011
United States 5217 Posts
Last Edited: 2016-03-11 05:50:52
March 11 2016 05:49 GMT
#132
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).


The thing about Sc2 though is that it is different.

In Poker, or Go or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.

If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.

Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
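The "throw some probabilities around" idea from the quoted post can be made concrete: an agent keeps a belief distribution over what the opponent might be doing and updates it with Bayes' rule as scouting information arrives. A toy Python sketch, with invented strategy labels and likelihood numbers:

    # Toy belief tracking over opponent intentions using Bayes' rule.
    # Strategy labels and likelihood values are invented for illustration.
    priors = {"drop": 0.3, "hidden_expand": 0.3, "frontal_attack": 0.4}

    # P(observation | strategy): how likely is "dropship spotted near my base"
    # under each hypothesis? (made-up numbers)
    dropship_spotted = {"drop": 0.8, "hidden_expand": 0.2, "frontal_attack": 0.1}

    def update(beliefs, likelihoods):
        """Posterior is prior times likelihood, renormalised to sum to 1."""
        posterior = {s: beliefs[s] * likelihoods[s] for s in beliefs}
        total = sum(posterior.values())
        return {s: p / total for s, p in posterior.items()}

    beliefs = update(priors, dropship_spotted)
    print(beliefs)  # mass shifts toward "drop"; the other options keep some weight

Nothing forces an agent to commit to the most likely hypothesis, either; it can hedge against several at once, much as a human leaves a few units at home while expanding.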
Wrath
Profile Blog Joined July 2014
3174 Posts
March 11 2016 05:57 GMT
#133
1. It is for BW.

2. The APM will most likely be restricted to around 200. An AI's APM is equal to its EPM; it does not waste clicks the way progamers do when they spam boxing or clicking to inflate their APM. So for players like EffOrt, who can reach around 450-500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI (a sketch of how such a cap could be enforced follows below).
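That kind of cap is easy to enforce on the bot side: queue the intended actions and only release as many as a rolling 60-second window allows. A minimal sliding-window sketch in Python, assuming a hypothetical bot loop and the 200-per-minute figure from the post:

    import time
    from collections import deque

    class ApmLimiter:
        """Allow at most `cap` actions in any rolling 60-second window."""
        def __init__(self, cap=200):
            self.cap = cap
            self.timestamps = deque()

        def try_act(self, now=None):
            now = time.monotonic() if now is None else now
            # Forget actions that have left the 60-second window.
            while self.timestamps and now - self.timestamps[0] > 60.0:
                self.timestamps.popleft()
            if len(self.timestamps) < self.cap:
                self.timestamps.append(now)
                return True   # the action may be issued
            return False      # over the cap: wait or re-prioritise

    limiter = ApmLimiter(cap=200)
    if limiter.try_act():
        pass  # issue the highest-priority queued action here (hypothetical bot loop)

The point of capping actions rather than clicks is that every permitted action is a useful one, which is why the post asks about EPM rather than raw APM.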
CursOr
Profile Blog Joined January 2009
United States 6335 Posts
March 11 2016 05:58 GMT
#134
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.

I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
CJ forever (-_-(-_-(-_-(-_-)-_-)-_-)-_-)
ETisME
Profile Blog Joined April 2011
12477 Posts
March 11 2016 06:02 GMT
#135
Actually, it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta developing
Swift as the wind, silent as a forest, fierce as fire, immovable as a mountain, unfathomable as shadow, striking like thunder.
ZAiNs
Profile Joined July 2010
United Kingdom 6525 Posts
March 11 2016 06:09 GMT
#136
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were possible, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful, because they show so little information about the game state at any point in time; I think a replay is needed so the AI can observe the entire game state at every point in time.
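The arithmetic behind those figures, spelled out (only the 30 million moves and roughly 200 moves per game quoted above go in; nothing else is assumed):

    total_moves = 30_000_000    # expert moves in AlphaGo's supervised dataset
    moves_per_game = 200        # rough average length of a Go game
    print(total_moves // moves_per_game)  # 150000 high-level games, as stated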
Grumbels
Profile Blog Joined May 2009
Netherlands 7031 Posts
March 11 2016 07:24 GMT
#137
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were possible, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful, because they show so little information about the game state at any point in time; I think a replay is needed so the AI can observe the entire game state at every point in time.

It would be nice if, wherever Koreans play BW, the client would automatically save the replay, scramble the names, and send it off to Google. Or imagine people at Google becoming frustrated because, for once, they do not have big data sets available for everything.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lpunatic
Profile Joined October 2011
235 Posts
Last Edited: 2016-03-11 07:59:01
March 11 2016 07:53 GMT
#138
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were possible, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful, because they show so little information about the game state at any point in time; I think a replay is needed so the AI can observe the entire game state at every point in time.


AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.

I think if the DeepMind team put their effort into BW, they'll be able to achieve superhuman performance in a few years' time.

There are some ways that the problem is harder than Go - partial information, real time and a much more complex raw game state. On the other hand, there are some clear advantages an AI will have over people (APM, multitasking) which are not present in Go. It seems to me that if you can get an AI that makes decisions like a half decent human player, it will be able to press its advantages well beyond human competition.
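The self-play improvement mentioned above is conceptually simple even though the engineering is not: the current policy plays a copy of itself, and the winner's decisions are reinforced. A bare-bones Python skeleton in which the Game class, policy.choose and policy.reinforce are hypothetical placeholders rather than any real API:

    # Bare-bones self-play loop; every name here is a stand-in, not a real API.
    def self_play_iteration(policy, Game, num_games=1000):
        for _ in range(num_games):
            game, trajectories = Game(), {1: [], 2: []}
            while not game.finished():
                player = game.to_move()
                state = game.observe(player)
                action = policy.choose(state)          # sample a move from the policy
                trajectories[player].append((state, action))
                game.apply(action)
            winner = game.winner()
            # Nudge the policy toward the winner's moves, away from the loser's.
            policy.reinforce(trajectories[winner], reward=+1)
            policy.reinforce(trajectories[3 - winner], reward=-1)
        return policy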
lpunatic
Profile Joined October 2011
235 Posts
March 11 2016 08:17 GMT
#139
On March 11 2016 13:01 ZAiNs wrote:
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn; for BW, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, and I could be wrong, but it seems to be a lot harder than Go.


On the other hand, evaluating a stone in Go is a very hard problem - it may depend on the position of every other stone on the board. For StarCraft, the value of a base or a zealot is pretty simple to evaluate in comparison, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.

Point being, you maybe can get away with a simplified game state representation.
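One way to read that last point: instead of feeding the network every unit and every pixel, you could hand-pick a compact feature vector and let the learning work from that. A toy Python sketch of such a simplified state; the chosen features are arbitrary examples, not a claim about what DeepMind would actually use:

    from dataclasses import dataclass

    @dataclass
    class SimpleState:
        """A deliberately coarse summary of a BW game state (illustrative only)."""
        minerals: int
        gas: int
        supply_used: int
        supply_cap: int
        worker_count: int
        army_value: float        # e.g. summed resource cost of own army
        enemy_army_value: float  # estimated from what has been scouted
        bases: int
        enemy_bases_seen: int

        def to_vector(self):
            """Flatten into the fixed-length input a network would expect."""
            return [self.minerals, self.gas, self.supply_used, self.supply_cap,
                    self.worker_count, self.army_value, self.enemy_army_value,
                    self.bases, self.enemy_bases_seen]

    state = SimpleState(600, 200, 74, 90, 41, 2300.0, 1800.0, 3, 2)
    print(state.to_vector())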
Gluon
Profile Joined April 2011
Netherlands 398 Posts
March 11 2016 08:25 GMT
#140
On March 11 2016 15:02 ETisME wrote:
Actually, it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta developing


Exactly this. Given the way the AI learns, the most interesting development will be that it is not constrained to conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.

Other than that, DeepMind should eventually win with stellar macro and micro, just by going 3 rax every game.
Administrator