Flash on DeepMind: "I think I can win"

Forum Index > SC2 General
Waxangel
Profile Blog Joined September 2002
United States33302 Posts
Last Edited: 2016-03-10 14:00:14
March 10 2016 13:59 GMT
#1
As DeepMind's AlphaGo artificial intelligence continues to shock the Baduk (Go) community with consecutive victories against top pro player Lee Se-Dol, StarCraft has made an unexpected appearance in the spotlight. Google's Jeff Dean singled out StarCraft as a future challenge for DeepMind.



When interviewed by SBS News, Flash responded with guarded confidence.

"Honestly I think I can win. The difference with Baduk(Go) is both sides play in a state where you don't know what's happening, and you collect information—I think that point is a bit different."
Administrator | Hey HP can you redo everything youve ever done because i have a small complaint?
Musicus
Profile Joined August 2011
Germany23576 Posts
March 10 2016 14:12 GMT
#2
So I am pretty sure they will go with StarCraft 1; the question is whether it's vanilla or BW. It will take some time until this happens though.
Maru and Serral are probably top 5.
brickrd
Profile Blog Joined March 2014
United States4894 Posts
March 10 2016 14:13 GMT
#3
this is really cool! i've been following the go match a bit. competitive and learning AI are super fascinating. i also used to LOVE making custom games and watching AIs battle it out in BW and SC2, and the idea of watching strategically intelligent AI play the game excites me

if this gets serious and the AI is legit, i think showmatches would be an awesome way to generate interest in the game
TL+ Member
ThomasjServo
Profile Blog Joined May 2012
15244 Posts
March 10 2016 14:34 GMT
#4
Good to have goals, I'd watch a cast of it, but that is a tall, tall order.
B-royal
Profile Joined May 2015
Belgium1330 Posts
March 10 2016 14:35 GMT
#5
This will be quite scary I think.

AIs are able to have unlimited APM, which allows not only the obvious things such as impeccable micro, but also a significant boost in economy from micro-managing their workers (http://www.teamliquid.net/forum/brood-war/484849-improving-mineral-gathering-rate-in-brood-war).

Furthermore, when you think about it, AIs won't have as much of a problem with the "veiled information". An AI should be able to predict, based on unit build times and worker gathering rates, what an opponent could reasonably have at any point in the game.

What will be most difficult, in my opinion, is having the AI make decisions such as where and when to attack, when to launch multi-pronged attacks, when to get certain units, and how to use spells such as Dark Swarm properly. It seems to me like it would be fairly easy to trick and abuse the behavior of the AI.
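The "predict what the opponent could have" point above can be sketched as a simple feasibility check against known build times. This is a toy illustration only; the tech names and timings below are made up, not real Brood War data:

```python
# Hypothetical sketch: given elapsed game time and the earliest moment each
# tech could possibly be finished, rule opponent options in or out.
# All timings here are illustrative, not real Brood War values.

EARLIEST_READY = {        # seconds from game start (made-up numbers)
    "spawning_pool": 75,
    "lair": 175,
    "spire": 295,
    "mutalisk": 335,
}

def possible_tech(elapsed_seconds):
    """Return every tech the opponent could already have at this time."""
    return sorted(tech for tech, ready_at in EARLIEST_READY.items()
                  if ready_at <= elapsed_seconds)

print(possible_tech(300))  # -> ['lair', 'spawning_pool', 'spire']
```

The same table lookup run in reverse (scout sees a spire at 5:00, therefore the pool must have started by minute X) is how the "veiled information" shrinks for a machine.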
new BW-player (~E rank fish) twitch.tv/crispydrone || What plays 500 games a season but can't get better? => http://imgur.com/a/pLzf9 <= ||
Pandemona *
Profile Blog Joined March 2011
Charlie Sheens House51471 Posts
Last Edited: 2016-03-10 14:36:35
March 10 2016 14:36 GMT
#6
Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!
Moderator | Team Liquid Football Thread Guru! - Chelsea FC ♥
ETisME
Profile Blog Joined April 2011
12351 Posts
March 10 2016 14:38 GMT
#7
If this is done on BW, there is even less chance to win; the AI would essentially be playing without the buggy unit AI and mechanical barriers that make BW hard.
Swift as the wind, steady as a forest, fierce as fire, immovable as a mountain, inscrutable as shadow, sudden as thunder.
brickrd
Profile Blog Joined March 2014
United States4894 Posts
March 10 2016 14:42 GMT
#8
On March 10 2016 23:36 Pandemona wrote:
Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!

skepticism is natural, but i'm sure Go players were saying the same thing, just like chess players :D
Liquid`Zephyr
Profile Blog Joined October 2006
United States996 Posts
Last Edited: 2016-03-10 14:45:38
March 10 2016 14:45 GMT
#9
I had read somewhere that these researchers thought an AI that could beat the best humans in SC1 would take 5-10 years. If that's true, it's unfortunate, since the level of top play likely won't keep up until then (not enough interest, pros getting too old or taking on other responsibilities, wrist problems, etc.).
Team Liquid | PoorUser
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 10 2016 14:45 GMT
#10
I hope they are going to do this. I used to say something like the following: StarCraft AIs will easily crush all opposition once the problem is taken seriously as a research project by someone other than bachelor students. I imagine that is still the case, even though some people curiously claimed it can't be done because game states in StarCraft are harder to quantify.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Glad_
Profile Joined April 2015
France3 Posts
March 10 2016 14:46 GMT
#11
I want to see this on SC2 ! Imagine Innovation vs AI, machine vs machine, it would be so sick.
Life finds a way.
etofok
Profile Blog Joined March 2012
138 Posts
Last Edited: 2016-03-10 14:54:03
March 10 2016 14:47 GMT
#12
Honestly, the outcome will come down to the date: today? Flash. In a couple of years? AI, 100%. Given that Google won't show up until they are ready, they have already won; it just hasn't happened yet.

Even with an APM cap a human will have very little chance. Given it's an RTS with fog of war and such, you could say the AI technically might drop a game once in a while, but winning a long series after the AI is ready to show off is something that I do not expect.

"Time has been on my side, [...] fortunately there is no death sentence in this country"
The king, the priest, the rich man—who lives and who dies? Who will the swordsman obey?
c3rberUs
Profile Blog Joined December 2010
Japan11286 Posts
March 10 2016 14:50 GMT
#13
I'd still bet on pros or high level amateurs but a showmatch with an AI should bring attention to the game which is good.
Writer | Movie, 진영화: "StarCraft will never die".
OtherWorld
Profile Blog Joined October 2013
France17333 Posts
March 10 2016 14:52 GMT
#14
I'm quite confident FlaSh or another top BW player would win, at least if they do this before 2030 or so. It's much harder to determine the optimal move in an RTS than in a board strategy game like chess or Go.
Used Sigs - New Sigs - Cheap Sigs - Buy the Best Cheap Sig near You at www.cheapsigforsale.com
opisska
Profile Blog Joined February 2011
Poland8852 Posts
March 10 2016 14:52 GMT
#15
Aren't there already many bots for BW? Are those easily beatable by good players or not?
"Jeez, that's far from ideal." - Serral, the king of mild trashtalk
TL+ Member
sparklyresidue
Profile Joined August 2011
United States5523 Posts
March 10 2016 14:55 GMT
#16
This would be super cool to watch. I bet an older game like BW would need a lot of case-specific AI just to understand its weird micro tricks.
Like Tinkerbelle, I leave behind a sparkly residue.
etofok
Profile Blog Joined March 2012
138 Posts
Last Edited: 2016-03-10 14:58:24
March 10 2016 14:57 GMT
#17
Aren't there already many bots for BW?


This is not just a bot, mind you; it's a thinking AI that responds to situations according to its accumulated experience. A bot is something that has a decision-making tree predetermined by its developer. This one develops its own decision-making tree. On its own.

Watch 30 seconds of this, guys.
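The distinction being drawn here, a scripted decision tree versus an agent that learns its own policy, can be illustrated with a toy example. This is just a win-rate bandit over hypothetical build orders, nothing like DeepMind's actual reinforcement-learning setup:

```python
# Toy contrast: a scripted bot follows a fixed rule written by its developer,
# while a learning agent updates its own choice from game results.
import random

def scripted_bot(scouted_pool):
    # fixed, developer-authored decision tree
    return "defend" if scouted_pool else "expand"

class LearningBot:
    """Picks a build order by observed win rate (epsilon-greedy bandit)."""
    def __init__(self, options):
        self.wins = {o: 0 for o in options}
        self.games = {o: 0 for o in options}

    def pick(self, explore=0.1):
        if random.random() < explore:           # occasionally try something new
            return random.choice(list(self.wins))
        # otherwise exploit the best win rate seen so far
        return max(self.wins, key=lambda o: self.wins[o] / max(1, self.games[o]))

    def record(self, option, won):
        self.games[option] += 1
        self.wins[option] += int(won)
```

After enough recorded games, the learning bot's "decision tree" is whatever the results taught it, which is the point etofok is making.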
WiSaGaN
Profile Joined July 2006
203 Posts
March 10 2016 14:57 GMT
#18
I don't think this will become a strategy game.
It is more likely that the impeccable micro attack will just dominate the game.
How will Flash counter a perfect Drone hit-and-run?
pc_room_freak
Profile Joined June 2011
United States26 Posts
March 10 2016 14:57 GMT
#19
Can we get Tastosis to cast the games for English speakers on a separate stream?
opisska
Profile Blog Joined February 2011
Poland8852 Posts
March 10 2016 15:00 GMT
#20
On March 10 2016 23:57 etofok wrote:
Aren't there already many bots for BW?


This is not just a bot mind you - it's a thinking AI that responds to situations accordingly to its accumulated experience. A bot is something that just has a decision making tree predetermined by its developer. This one develops its own decision making tree. On its own.

Watch 30 seconds of this guys.


I understand that. My question is whether the existing bots are capable of beating a good player or not, because I just don't know.

On one hand, I can imagine that the bots will be "strategically" stupid, but I can't judge whether that couldn't be compensated for by flawless mechanics.
usopsama
Profile Joined April 2008
6502 Posts
March 10 2016 15:00 GMT
#21
Computers may be able to defeat humans, but computers cannot defeat God.
Monochromatic
Profile Blog Joined March 2012
United States997 Posts
March 10 2016 15:00 GMT
#22
This is a pretty interesting proposition. I think it depends on how much Google will let the AI go beyond human capabilities. Simple micro AIs already exist, and if Google lets its AI micro flawlessly, it shouldn't be a contest. Of course, if they limit it to human capabilities (like a hard cap of 400 APM), then I think a human will always have the ability to win.
MC: "Guys I need your support! iam poor make me nerd baller" __________________________________________RIP Violet
brickrd
Profile Blog Joined March 2014
United States4894 Posts
Last Edited: 2016-03-10 15:06:47
March 10 2016 15:03 GMT
#23
On March 10 2016 23:57 WiSaGaN wrote:
I don't think this will become a strategy game.
It is more likely that the impeccable micro attack will just dominate the game.
How will Flash counter a perfect Drone hit-and-run?

well, the same can be said about harassment by top terrans. how can you counter it? have vision, see it coming and have enough to fight it. even if the mechanics are "perfect," in sc2 not every fight is won by micro, and the kinds of micro that are important (casters, lurker positions etc.) are often quite difficult for AIs to grasp.

it's a fascinating experiment. no one is saying it will be the same as human 1v1. but how can you not be intrigued by the challenge? it's science!

On March 11 2016 00:00 Monochromatic wrote:
This is a pretty interesting proposition. I think it depends on how much google will let the ai go beyond human capabilities. There already exist stupid micro ai, and if google lets the ai micro flawlessly it shouldn't be a contest. Of course, if they limit it to human capabilities (Like a hard cap of 400 APM) , then I think a human will always have the ablity to win.

i think that even an AI with perfect mechanics has the capability to fail strategically by allocating its resources poorly. if it's spending 800 APM on making an overlord patrol in the corner of the map (and i've seen SC2 AI do things like this), APM isn't the issue. as long as it's not "cheater AI" with vision or extra resources it should be a very interesting idea.

i definitely see what you are saying, but there are a lot of nuances to this concept
etofok
Profile Blog Joined March 2012
138 Posts
Last Edited: 2016-03-10 15:08:16
March 10 2016 15:06 GMT
#24
Of course, if they limit it to human capabilities (Like a hard cap of 400 APM)


I guess the AI doesn't need to "check" its different keybinds every second, so it should be drastically more efficient with its APM allocation. I'd rather limit cursor movement speed, so the AI can't micro completely unrealistically, only at "human" speed. Even with this limitation, though, the AI will be more efficient in deciding what exactly to prioritize, given a sufficient amount of practice time.
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 10 2016 15:08 GMT
#25
oh, cool! Wouldn't mind working on that project.
Temporary Happiness
Profile Joined March 2016
Italy11 Posts
March 10 2016 15:11 GMT
#26
I think these 2 videos tell who's gonna win if this is done in SC2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When the opponent micros like that, there is no room to outplay it strategically, I think.
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
March 10 2016 15:11 GMT
#27
Following the Go match a little bit right now, I already asked myself what would happen in real-time games like StarCraft.
Perfect mechanics alone would allow mediocre strategy to win games, I would imagine.
Interesting for sure!
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
OtherWorld
Profile Blog Joined October 2013
France17333 Posts
March 10 2016 15:27 GMT
#28
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When opponent microes like that there is no room for outplay him strategically i think..

That second video is fucking glorious, the zerglings abandoning the targeted ling to its fate to avoid splash is soooo Zerg
Shuffleblade
Profile Joined February 2012
Sweden1903 Posts
Last Edited: 2016-03-10 15:29:55
March 10 2016 15:28 GMT
#29
I believe it would be hard for humans to keep winning; perfectly microed units are enough to basically guarantee success. Imagine harassing a computer that pulls probes perfectly as soon as your oracle/mutas/dropship is in vision, and that perfectly micros every individual probe.

I believe a super safe opening into blink stalkers would win against almost anything: imagine perfect blink micro, perfect macro, and just retreating to regenerate shields when needed. An AI could split perfectly against AoE, split its army up perfectly, and on top of that micro and macro perfectly. Let's be honest, if you can micro perfectly there are many ways to guarantee the game: reapers against Zerg early game, or stalkers against Terran while they only have marines, at least pre-Concussive Shells. This is barely a discussion; Flash can win now, but he can't win forever.

I realize I was automatically thinking of SC2; I'm not sure the argument holds as well for SC1 though.
Maru, Bomber, TY, Dear, Classic, DeParture and Rogue!
Clonester
Profile Joined August 2014
Germany2808 Posts
March 10 2016 15:35 GMT
#30
In the end it is a fight of strategy against 10,000 APM.

Even if you limited the APM, the computer could still do inhuman things. That's the difference between a board game, where your input is extremely minimal while your strategy is maximized, and an RTS. An AI that has learned enough to be strategically on par with very decent players will always win thanks to its unlimited multitasking and inputs.

And oh yeah, what Flash is saying here was also said by the best Go player in the world. He thought it would be a quick $1M win and that a computer would never beat him; now he has lost 2 out of 2 games so far.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
BisuDagger
Profile Blog Joined October 2009
Bisutopia19223 Posts
March 10 2016 15:42 GMT
#31
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600
Moderator | Former Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
WiSaGaN
Profile Joined July 2006
203 Posts
March 10 2016 15:47 GMT
#32
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600

That looks like an interesting proposal.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 10 2016 16:00 GMT
#33
On March 11 2016 00:47 WiSaGaN wrote:
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600

That looks like some interesting proposal.

It is more like you should allow the engine 200 APM. An engine will know not to waste actions, while most of what hero does is spamming. You can split marines perfectly with 600 APM, I'm pretty sure; allowing the engine 10,000 APM won't make that much difference.
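The APM cap debated in the last few posts could be enforced with a token bucket that refills at cap/60 actions per second. A hypothetical sketch, not part of any real bot framework:

```python
# Hypothetical APM limiter: a token bucket refilling at cap/60 actions/sec,
# with at most one second's worth of burst. Purely illustrative.

class ApmLimiter:
    def __init__(self, apm_cap):
        self.rate = apm_cap / 60.0   # actions per second
        self.tokens = 0.0            # start empty: no instant burst at t=0
        self.last = 0.0              # timestamp of the previous check

    def try_act(self, now):
        """Return True if one action is allowed at time `now` (seconds)."""
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With a 600 APM cap this allows roughly 10 actions per second no matter how often the bot asks, which is the "engine doesn't spam" point Grumbels is making.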
Clonester
Profile Joined August 2014
Germany2808 Posts
Last Edited: 2016-03-10 16:06:50
March 10 2016 16:04 GMT
#34
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600


That would make things "more fair", but the machine's reaction time will still be 0 ms, while the human player has a reaction time.

I don't think it is possible to make a "fair" battle between humans and programs in StarCraft, as StarCraft has not only a strategic part but also a mechanical part. When a car covers 100 m faster than Mr. Bolt, we also call that unfair.

The computer is a self-learning AI in this case. As soon as the computer finds out that it can win with its stellar micro, it will go for an SCV rush every game.

We are also not talking about some program running on your local Core 2.

AlphaGo is a neural network running on a supercomputer. It constantly became better by playing itself, and the same would apply to StarCraft. The moment the machine figures out that humans can't micro like it can, it will use its stellar micro to win.
algue
Profile Joined July 2011
France1436 Posts
Last Edited: 2016-03-10 16:15:51
March 10 2016 16:10 GMT
#35
If the AI is godlike and reacts as soon as an enemy unit shows a single pixel, there isn't much a human can do. As long as the AI can out-micro and out-macro the human, strategic thinking won't bring much to the table in an RTS like StarCraft.

However, it'd be cool if Google built a robot with hands and eyes to give it the same constraints as the player. I don't know if it's feasible, but the AI could only see what the robot's eyes see, and the robot would have to hit the keyboard to make things happen instead of just making them happen out of nowhere like current AIs do. It'd be a nice challenge for robot builders to build a humanoid robot that can beat a StarCraft player with equal weapons (keyboard + mouse).
rly ?
Charoisaur
Profile Joined August 2014
Germany15913 Posts
March 10 2016 16:13 GMT
#36
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When opponent microes like that there is no room for outplay him strategically i think..

There are compositions where you can't micro that much, for example roach vs roach ZvZ or roach/ravager vs bio.
In those situations perfect micro doesn't give you that much of an advantage.
Many of the coolest moments in sc2 happen due to worker harassment
beg
Profile Blog Joined May 2010
991 Posts
Last Edited: 2016-03-10 16:17:14
March 10 2016 16:16 GMT
#37
On March 11 2016 01:13 Charoisaur wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When opponent microes like that there is no room for outplay him strategically i think..

There are compositions where you can't micro that much for example roach vs roach zvz or roach ravager vs bio.
In those situations perfect micro doesnt give you that much of an advantage.

Imagine 50 roaches, individually microed to create a perfect arc, pulling back before they die and burrowing, joining the battle again after regenerating.

Yea... no :D
Charoisaur
Profile Joined August 2014
Germany15913 Posts
March 10 2016 16:18 GMT
#38
BTW a bot that plays starcraft perfectly already exists.
It's called INnoVation.
OtherWorld
Profile Blog Joined October 2013
France17333 Posts
March 10 2016 16:19 GMT
#39
On March 11 2016 01:16 beg wrote:
On March 11 2016 01:13 Charoisaur wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When opponent microes like that there is no room for outplay him strategically i think..

There are compositions where you can't micro that much for example roach vs roach zvz or roach ravager vs bio.
In those situations perfect micro doesnt give you that much of an advantage.

Imagine 50 roaches, individually microed to create a perfect arc, pulling back before they die and burrowing, joining the battle again after regenerating.

Yea... no :D

I'm orgasming at that thought
Grizvok
Profile Joined August 2014
United States711 Posts
March 10 2016 16:20 GMT
#40
On March 10 2016 23:36 Pandemona wrote:
Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!


No way. Once it becomes sophisticated enough, it will crush a human.
sCCrooked
Profile Blog Joined April 2010
Korea (South)1306 Posts
Last Edited: 2016-03-10 16:26:03
March 10 2016 16:25 GMT
#41
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

Video 1

Video 2

When opponent microes like that there is no room for outplay him strategically i think..


Reasons like this are why they're considering Brood War and not SC2. BW is the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.

Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed, because in Brood War it's much harder to completely counter someone's micro, unlike its sequel, where things are much more tic-tac-toe and then it's over, instead of constantly contesting areas you have to battle and defend.
Enlightened in an age of anti-intellectualism and quotidian repetitiveness of asinine assumptive thinking. Best lycan guide evar --> "Fixing solo queue all pick one game at a time." ~KwarK-
Salteador Neo
Profile Blog Joined August 2009
Andorra5591 Posts
March 10 2016 16:32 GMT
#42
I think perfect AI micro would crush humans with just pure vultures/lings, really.

No idea what build it would go for if it played Protoss though. Maybe it would just kill you with shuttle+reaver?
Revolutionist fan
Clonester
Profile Joined August 2014
Germany2808 Posts
March 10 2016 16:33 GMT
#43
On March 11 2016 01:25 sCCrooked wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

Video 1

Video 2

When opponent microes like that there is no room for outplay him strategically i think..


Reasons like this are why they're considering Brood War and not SC2. BW is by far the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.

Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed because in Brood War its much harder to completely counter someone's micro unlike its sequel where things are much more tic-tac-toe and then its over instead of constantly stacking areas you have to battle and defend.


Ai picks Terran.

AI moves all SCVs to enemy base.

It is starting SCVs against starting worker + 3.

AI wins the fight.

Game over.
Aron Times
Profile Blog Joined March 2011
United States312 Posts
March 10 2016 16:33 GMT
#44
I honestly wonder if Flash losing to DeepMind would result in simpler RTS games in the future. The problem with StarCraft is not that it is mechanically demanding, but that mechanics are disproportionately effective compared to actual strategy. A player skilled in mind games will lose to a player with better mechanics, since he probably won't have the mechanics to execute his trickery in the first place.

Also, the fact that StarCraft is so mechanics-focused may give rise to new hacks that automate parts of the game without making it obvious. If you remember the CS:GO fiasco last year, a progaming team was VAC-banned for using subtle hacks; they amounted to only a tiny boost in the team's effectiveness, but at those levels even the slightest advantage makes a big difference. It'd be like Flash vs. Flash, except one of them has a hack that automates SCV production in one command center. In the grand scheme of things that's not a huge deal, but when both players are equally skilled, even the slightest advantage can tip the scales.

Games where mechanics matter less and strategy matters more might be the result of a human progamer vs. AI matchup.
"The drums! The drums! The drums! The neverending drumbeat! Open me, you human fool! Open the light and summon me and receive my majesty!"
chrisolo
Profile Joined May 2009
Germany2606 Posts
March 10 2016 16:36 GMT
#45
So God vs Machine?

The creator of humanity against the creation of humanity. It will be a spiritual thing!

It's just a joke, please don't ban me, mods :[
¯\_(ツ)_/¯ - aka cReAtiVee
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 10 2016 16:42 GMT
#46
On March 11 2016 01:25 sCCrooked wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

Video 1

Video 2

When opponent microes like that there is no room for outplay him strategically i think..


Reasons like this are why they're considering Brood War and not SC2. BW is by far the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.

Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed because in Brood War its much harder to completely counter someone's micro unlike its sequel where things are much more tic-tac-toe and then its over instead of constantly stacking areas you have to battle and defend.

You could do similarly ridiculous stuff in BW with infinite APM... Also, LOL at comparing SC2 to tic-tac-toe.
TheDougler
Profile Joined April 2010
Canada8302 Posts
Last Edited: 2016-03-10 16:47:06
March 10 2016 16:44 GMT
#47
On March 11 2016 01:16 beg wrote:
On March 11 2016 01:13 Charoisaur wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

https://www.youtube.com/watch?v=mrbYd4OFrWE

https://www.youtube.com/watch?v=IKVFZ28ybQs

When opponent microes like that there is no room for outplay him strategically i think..

There are compositions where you can't micro that much for example roach vs roach zvz or roach ravager vs bio.
In those situations perfect micro doesnt give you that much of an advantage.

Imagine 50 roaches, individually microed to create a perfect arc, pulling back before they die and burrowing, joining the battle again after regenerating.

Yea... no :D


This is what I've always said. It's usually countered by people saying burrow micro is soooo much less efficient than, say, blink micro. Which is true, but if a bot pushed burrow micro to its limit, I bet we'd see pros do it a bit more often once it's shown that it DOES work.

On March 11 2016 01:42 ZAiNs wrote:
On March 11 2016 01:25 sCCrooked wrote:
On March 11 2016 00:11 Temporary Happiness wrote:
I think these 2 videos tell who's gonna win if this is done in Sc2:

Video 1

Video 2

When opponent microes like that there is no room for outplay him strategically i think..


Reasons like this are why they're considering Brood War and not SC2. BW is by far the only one that displays enough stability to produce a real result. Nobody with a strategically adept mind would pick SC2 for this.

Also, there should be absolutely no limits on the AI. Strategic depth should win out over pure mechanics/speed because in Brood War its much harder to completely counter someone's micro unlike its sequel where things are much more tic-tac-toe and then its over instead of constantly stacking areas you have to battle and defend.

You could do similarly ridiculous stuff in BW with infinite APM... Also, LOL at comparing SC2 to tic-tac-toe.


Indeed. In fact, in BW I think there was one point where a pro microed a single marine to kill a lurker. I could be wrong on that, but I think it was a thing.
I root for Euro Zergs, NA Protoss* and Korean Terrans. (Any North American who has beat a Korean Pro as Protoss counts as NA Toss)
Slayer91
Profile Joined February 2006
Ireland23335 Posts
Last Edited: 2016-03-10 16:47:59
March 10 2016 16:46 GMT
#48
I actually thought it would be pretty easy to get a super good BW AI.

You can teach it all the optimal pro build orders.
You can show it whatever Flash replays you can find.
Have it learn how to play exactly like Flash as a template skill level, as much as possible.
Then implement management and micro subroutines, i.e. for a given strategy it will always be building marines out of X rax, tanks/vultures out of Y factories, vessels out of Z starports, and SCVs if relevant.
Then for micro: TvZ: make sure it can destroy lurkers with marine micro, react instantly to mutas coming into range, always run from swarm, micro perfectly behind minerals from a dropship, perfect irradiate splitting and scourge dodging.
TvP: perfect vulture kiting and target firing on zealots, mine placement that doesn't put your units in danger, perfect target firing of tanks on dragoons while ignoring zealots if you'll friendly fire.
TvT: perfect range calculation for tank placement and scans, etc.

Sure, you might fall behind on decision making and playing against some obscure strats, but with perfect mechanics and copying the best players' style, it shouldn't be that hard for a big project team to handle.
The fact that replays exist gives a template for reaching a high level of play instantly. It can play itself against Flash or Jaedong 1000 times a day with a learning algorithm, for example.

I think to make it fair you'd have to limit it so it has to use a cursor and keyboard, each limited to a certain speed/APM. That way the AI is under the same PHYSICAL limitations as a (very, very fast) human and has to figure out how to win "mentally" from there, with the same limits on spending your attention as a human, and algorithms to decide how to spend that attention.
You could also change the limits and see how certain strategies become better for slower players. (hue hue 100 apm bonjwa DeepProtoss)
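The replay-imitation and self-play idea in this post can be sketched in miniature. This is a toy, not a real learning algorithm: two copies of one agent repeatedly play a made-up build-order matchup, keeping smoothed win-rate estimates per build and picking greedily with occasional exploration. All build names and the payoff table are invented for illustration.

```python
import random

# Toy self-play loop: the agent learns which (hypothetical) build wins
# most often by playing against a copy of itself, epsilon-greedily.
BUILDS = ["bio", "mech", "sky"]
# invented rock-paper-scissors-style payoff: who wins each pairing
BEATS = {("bio", "mech"): "bio", ("mech", "sky"): "mech", ("sky", "bio"): "sky"}

def winner(a, b):
    if a == b:
        return random.choice([a, b])  # mirror matchup: coin flip
    return BEATS.get((a, b)) or BEATS.get((b, a))

def self_play(games=10000, eps=0.1, seed=0):
    random.seed(seed)
    wins = {b: 1 for b in BUILDS}   # smoothed win counts
    plays = {b: 2 for b in BUILDS}
    def pick():
        if random.random() < eps:
            return random.choice(BUILDS)  # explore
        return max(BUILDS, key=lambda b: wins[b] / plays[b])  # exploit
    for _ in range(games):
        a, b = pick(), pick()
        w = winner(a, b)
        for side in (a, b):
            plays[side] += 1
            if side == w:
                wins[side] += 1
    return {b: wins[b] / plays[b] for b in BUILDS}
```

In a symmetric game like this the estimates hover around 50%; the point is only the loop structure: play yourself, record outcomes, update the policy.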
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
March 10 2016 16:46 GMT
#49
It really depends on how you limit the computer. If you give it infinite APM and near perfect micro I think an AI can win easily just by using its superior mechanics. If the AI is constrained to more human mechanical capabilities, then it is a very difficult problem to solve.
Clonester
Profile Joined August 2014
Germany2808 Posts
March 10 2016 16:47 GMT
#50
To give an analogy for how this is going to work:

Think about a game Flash vs Flash.

The one Flash plays Broodwar like it is.

The other Flash plays Brood War with an unlimited unit group cap, with hotkey groups for buildings, with 0ms reaction time, and pixel-perfect minimap awareness. Who wins?

An AI in Brood War plays the game as if engine limitations do not exist. Its macro is perfect without spending 2ms in its base, its control will be stellar, its awareness will be unmatched. The only thing the AI might not be perfect at is strategy and decision making. But first of all, AlphaGo has shown us that an AI is able to improve from "I beat some Euro scrub" to "GSL champion" in only six months by playing itself and learning from those games. And second, the machine learns. And the machine would soon learn that it wins games that do not go into macro. Unlike Go, where AlphaGo becomes better and better with each additional stone on the board.

I don't see any player winning either BW or SC2 against a neural network AI without HARD limitations on the input.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
SmykuToronto
Profile Joined October 2014
Poland269 Posts
March 10 2016 16:50 GMT
#51
One hour against Has and the AI won't be able to play anything more complicated than noughts and crosses for the rest of its existence.
Goolpsy
Profile Joined November 2010
Denmark301 Posts
March 10 2016 17:03 GMT
#52
Even with a 400 APM limit, neural network learning can easily teach it where to focus its micro most efficiently.

It might not seem like it, but Starcraft has far fewer strategies than chess.
iloveav
Profile Joined November 2008
Poland1478 Posts
March 10 2016 17:07 GMT
#53
It's not the first time gamers have proven that they are a force to be reckoned with:

http://www.dailymail.co.uk/sciencetech/article-2039012/AIDS-cure-Gamers-solve-puzzle-stumped-scientists-years.html

(there are a lot more articles out there about this subject).

aka LRM)Cats_Paw.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 10 2016 17:10 GMT
#54
The only way for it to be fair is to have a robot + AI actually playing with mouse and keyboard; otherwise, with perfect micro and such, it'll eventually win really easily, but that's cheating.
WriterMaru
eScaper-tsunami
Profile Blog Joined July 2009
Canada313 Posts
March 10 2016 17:11 GMT
#55
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600


The sad thing is it doesn't take 600 apm to have perfect micro. In the end it boils down to accuracy and efficiency.
RuhRoh is my herO
Monochromatic
Profile Blog Joined March 2012
United States997 Posts
March 10 2016 17:14 GMT
#56
On March 11 2016 02:03 Goolpsy wrote:
Even with a 400 APM limit, neural network learning can easily teach it where to focus its micro most efficiently.

It might not seem like it, but Starcraft has far fewer strategies than chess.


I disagree. Starcraft may have fewer strategies than chess, but due to fog of war limiting information, the AI can only guess half the time. Furthermore, many strategies look the same, and things such as drops could still catch the AI off guard. A Starcraft game has many, many more possibilities than a chess game.

The real problem, though, is that chess is turn-based while Starcraft is real-time. This means the computer would have much less time to think. Additionally, people make mistakes in execution in Starcraft that differentiate the same strategy. Take a rally point, for instance: depending on where it is set, there is a difference in when units get to different places.

Chess (and Go, for that matter) are turn-based, so making a move is the same for a grandmaster or a new player -- the pawn still goes to the square you want it to, perfectly, every time.

Finally, Starcraft has so many orders of magnitude more board states than a chess board. This makes it overwhelmingly harder to brute-force.
MC: "Guys I need your support! iam poor make me nerd baller" __________________________________________RIP Violet
Clonester
Profile Joined August 2014
Germany2808 Posts
March 10 2016 17:20 GMT
#57
On March 11 2016 02:14 Monochromatic wrote:
On March 11 2016 02:03 Goolpsy wrote:
Even with a 400 APM limit, neural network learning can easily teach it where to focus its micro most efficiently.

It might not seem like it, but Starcraft has far fewer strategies than chess.


I disagree. Starcraft may have fewer strategies than chess, but due to fog of war limiting information, the AI can only guess half the time. Furthermore, many strategies look the same, and things such as drops could still catch the AI off guard. A Starcraft game has many, many more possibilities than a chess game.

The real problem, though, is that chess is turn-based while Starcraft is real-time. This means the computer would have much less time to think. Additionally, people make mistakes in execution in Starcraft that differentiate the same strategy. Take a rally point, for instance: depending on where it is set, there is a difference in when units get to different places.

Chess (and Go, for that matter) are turn-based, so making a move is the same for a grandmaster or a new player -- the pawn still goes to the square you want it to, perfectly, every time.

Finally, Starcraft has so many orders of magnitude more board states than a chess board. This makes it overwhelmingly harder to brute-force.


But that's the point. AlphaGo is not like Deep Blue brute forcing its way into the game. It takes a much softer approach by learning to play. The neural network architecture is not meant to brute-force the game and "solve it". It learns the game by millions of games against itself and replays and learns from that, and draws conclusions. Deep Blue just tried out millions of next moves when playing chess. AlphaGo learned to play the game and knows what to do where.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
Zedd
Profile Joined January 2010
Czech Republic107 Posts
March 10 2016 17:23 GMT
#58
Is it for BW or Starcraft 2?

Quite hard to decide who would win, as there are two things:

1. The bot can do insane micro like this, killing things with half the army a progamer would need:

https://www.youtube.com/watch?v=mrbYd4OFrWE



2. There are virtually endless scenarios in Starcraft, and I am not sure it's possible to teach the bot everything. It would have to play/analyze thousands (maybe even more) of games to actually learn how the units interact with each other and how building certain units at certain moments affects the game. There is simply much greater complexity. Also, as Flash pointed out, this is a game of incomplete information.
MrMotionPicture
Profile Joined May 2010
United States4327 Posts
March 10 2016 17:29 GMT
#59
I would love to see this showmatch. Go Flash!
"Elvis Presley" | Ret was looking at my post in the GSL video by Artosis. | MMA told me I look like Juanfran while we shared an elevator with Scarlett
heaveshade
Profile Joined March 2011
China330 Posts
March 10 2016 17:29 GMT
#60
So what's the difference between a normal game AI and Google's work?

I mean, the default AI can cheat. Does the functioning of the AI we've got rely on gimmicky tricks not suitable for a human-wisdom-vs-human-creation scenario?
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 10 2016 17:30 GMT
#61
Flash hit the key point I think.

Go and chess are games of complete information, whereas Starcraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources to get it.

Sure, a computer can calculate possible situations given incomplete information, but matching what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collecting on the computer's part.
64K RAM SYSTEM 38911 BASIC BYTES FREE
NinjaToss
Profile Blog Joined October 2015
Austria1383 Posts
March 10 2016 17:33 GMT
#62
I mean, the problem is whether Flash's wrist can keep up with him. He's playing BW "casually" and he already says that his arms hurt; returning to his former bonjwa self would be hard considering his injury.
I'm sorry for all those that got their hearts broken by Zest | Zest, Bisu, soO, herO, MC, Maru, TY, Rogue, Trap, TaeJa", Favourite foreigners: ShoWTimE, Snute, Serral and Nerchio| KT BEST KT |
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 10 2016 17:36 GMT
#63
On March 11 2016 02:11 eScaper-tsunami wrote:
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600


The sad thing is it doesn't take 600 apm to have perfect micro. In the end it boils down to accuracy and efficiency.



BTW, does Brood War have a hard limit on commands accepted per frame?
Like 12 per frame or something.

I wouldn't be surprised if there was, at least a limit on buffering.
64K RAM SYSTEM 38911 BASIC BYTES FREE
Taf the Ghost
Profile Joined December 2010
United States11751 Posts
March 10 2016 17:37 GMT
#64
On March 11 2016 02:20 Clonester wrote:
On March 11 2016 02:14 Monochromatic wrote:
On March 11 2016 02:03 Goolpsy wrote:
Even with a 400 APM limit, neural network learning can easily teach it where to focus its micro most efficiently.

It might not seem like it, but Starcraft has far fewer strategies than chess.


I disagree. Starcraft may have fewer strategies than chess, but due to fog of war limiting information, the AI can only guess half the time. Furthermore, many strategies look the same, and things such as drops could still catch the AI off guard. A Starcraft game has many, many more possibilities than a chess game.

The real problem, though, is that chess is turn-based while Starcraft is real-time. This means the computer would have much less time to think. Additionally, people make mistakes in execution in Starcraft that differentiate the same strategy. Take a rally point, for instance: depending on where it is set, there is a difference in when units get to different places.

Chess (and Go, for that matter) are turn-based, so making a move is the same for a grandmaster or a new player -- the pawn still goes to the square you want it to, perfectly, every time.

Finally, Starcraft has so many orders of magnitude more board states than a chess board. This makes it overwhelmingly harder to brute-force.


But that's the point. AlphaGo is not like Deep Blue brute forcing its way into the game. It takes a much softer approach by learning to play. The neural network architecture is not meant to brute-force the game and "solve it". It learns the game by millions of games against itself and replays and learns from that, and draws conclusions. Deep Blue just tried out millions of next moves when playing chess. AlphaGo learned to play the game and knows what to do where.


"not like Deep Blue brute forcing its way into the game" vs "It learns the game by millions of games against itself and replays and learns from that". I don't think you quite know what "brute force" means.

I'm only poking you a little. The technology and approach are rather different between the two systems, but both still work by being able to call up almost the entire space of game states and knowing how to process that information. This is what computers are supremely good at, but it also shows their limitations. Though I'd be remiss if I didn't point out that most of these public games against top pros benefit the computer in one very specific way: it can analyze all of the publicly known games of the master, allowing adaptation to the master's play style.
L_Master
Profile Blog Joined April 2009
United States8017 Posts
March 10 2016 17:41 GMT
#65
Just the unit efficiency and micro possibilities for the AI are insane. That alone would make it insanely difficult to beat because you're guaranteed to always be outmacroed and viciously destroyed in any engagements involving similar numbers. There are videos out there of 12 goon vs 12 goon fights where the AI wins with 12 goons alive.

Some form of APM cap seems fair in my book, or else the AI isn't winning because it is making better decisions than the human, it's winning because it's exploiting micro in a way no human could ever imagine.
EffOrt and Soulkey Hwaiting!
Nakajin
Profile Blog Joined September 2014
Canada8989 Posts
March 10 2016 17:46 GMT
#66
It depends what they want to do; capping the APM would be a lot more interesting. If not, of course it would win a game where it dodges every single stalker shot by hopping in and out of a medivac; in fact, just doing a worker rush would probably work, but that is not really extraordinary.
Writerhttp://i.imgur.com/9p6ufcB.jpg
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 10 2016 17:48 GMT
#67
There's no way a human would beat an AI in Starcraft 2. Take the micro bot that someone wrote here on TL as an example. SC2 has a theoretical skill ceiling which is beyond human reach. Dunno about BW.
Savant
Profile Blog Joined October 2009
United States379 Posts
March 10 2016 17:51 GMT
#68
I think the AI can simply BBS every game and win through sheer micro.
Haukinger
Profile Joined June 2012
Germany131 Posts
March 10 2016 17:56 GMT
#69
And that's why there should be a cooldown on each and every action you can do in SC2, so that unit behavior is predictable, independent of the player's ability to exploit certain things. It would be so much easier and more satisfying to balance, for instance, banelings if marines were unable to split.
disciple
Profile Blog Joined January 2008
9070 Posts
March 10 2016 18:09 GMT
#70
This match would have a number of interesting implications, chief among which are BO decisions. If the AI is strictly superior at microing units, there's no reason not to assume that it will try to take advantage of this and go for 1-base all-ins most of the time in order to force micro-intensive early games. It would be cool if the AI had some doubt about its opponent's skill and actually needed to confirm its superiority in micro in order to feel confident about winning and going for all-ins. Humans already do that, as we all know from Bisu being as annoying as possible with his scouting probe. Now imagine an AI controlling this; it will never die by mistake.
Administrator"I'm a big deal." - ixmike88
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2016-03-10 18:31:10
March 10 2016 18:27 GMT
#71
I recall some specific types of AI techniques used for mutalisk micro in one of the BW AI competitions. I wonder if the point of the DeepMind project is to find applications for neural-net-type algorithms, or whether they would be okay with discarding neural networks if other AI techniques worked better. I guess it is the former, since that must be the reason Google funds it, so that eventually they can have self-driving cars and smarter search results.

I think neural networks are used by the Planetary Annihilation AI designer; I used to read some of his explanations for why that was the future of RTS AI, but I don't know if he is still working on similar things or where one can find information on this. IIRC the AI could learn how to micro by playing against itself over and over.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
endy
Profile Blog Joined May 2009
Switzerland8970 Posts
March 10 2016 18:28 GMT
#72
On March 10 2016 23:35 B-royal wrote:
What will be the most difficult in my opinion is to have the AI make decisions such as where to attack, when to attack, multi-pronged attacks, when to get certain units and how to use spells such as dark swarm properly. It seems to me like it would be fairly easy to trick and abuse the behavior of the AI.


I've already played against a bot in BW, and engagements were actually the bot's biggest strength. It's constantly scouting with zerglings and has an incredible overlord spread for perfect map awareness, which allows it to always perfectly surround your army or set up the best possible concave.

As for stuff like swarm: how many times did your defilers die before casting the swarm because you weren't selecting/clicking fast enough?
ॐ
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
March 10 2016 18:44 GMT
#73
See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone.
It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.

Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, I obviously have very little idea about it, and if Google says Starcraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though).
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
Goolpsy
Profile Joined November 2010
Denmark301 Posts
March 10 2016 18:50 GMT
#74
Learning by iteration and brute-forcing are not the same thing. On the same note: chess is far from solved.

There is a continuous evolution and progression of chess engines, the best one currently being 'Komodo'.

From the Wiki: "Komodo heavily relies on evaluation rather than depth". I've been following the TCEC tournaments and heard the developers talk about certain aspects of the making of AI.

Evaluation is derived not from knowing that a certain position wins 58% of the games, but from piece values and piece placement + structures.

If you have a chain of pawns (x = pawn, o = empty space):

oxo   oox   oxo
xoo   xxo   xox

They do not have an evaluation of 3 (1 per pawn), but more than 3. In the same way, an isolated pawn (a pawn with no friendly pawn on an adjacent file) is usually very weak and worth less than 1.

Why is this important?
Being able to evaluate a situation by dividing up the area/map is almost directly transferable to Starcraft. It can be used for unit placement when scouting, defending chokepoints, building placement, or engagement angles (without too much processing, even).

As for incomplete information: it is not all that hard. You don't need to know every unit produced to calculate the possible strategies or tech trees available at a given time. I know it seems otherwise, but in practice you don't have complete information in chess or Go either (not that they aren't games of complete information, but you can't hold every possible line and its payoff in memory).
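The pawn-structure point above maps to a few lines of code. A minimal sketch with invented weights (this is not Komodo's actual evaluation): each pawn counts 1, a pawn defended from an adjacent diagonal behind it gets a chain bonus, and a pawn with no friendly pawn on a neighbouring file pays an isolation penalty.

```python
# Structural pawn evaluation: worth more than "1 point per pawn".
# CHAIN_BONUS and ISOLATION_PENALTY are made-up illustrative weights.
CHAIN_BONUS = 0.25
ISOLATION_PENALTY = 0.4

def evaluate_pawns(pawns):
    """pawns: set of (file, rank) squares occupied by one side's pawns."""
    files = {f for f, _ in pawns}
    score = 0.0
    for f, r in pawns:
        score += 1.0  # base material value
        if (f - 1, r - 1) in pawns or (f + 1, r - 1) in pawns:
            score += CHAIN_BONUS  # defended member of a pawn chain
        if (f - 1) not in files and (f + 1) not in files:
            score -= ISOLATION_PENALTY  # isolated pawn
    return score
```

The same shape of function, fed regions of a Starcraft map instead of files and ranks, is the kind of transfer the post is gesturing at.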
Empirimancer
Profile Joined July 2011
Canada1024 Posts
March 10 2016 18:52 GMT
#75
For this to be an interesting challenge for DeepMind, the AI would have to be limited to 400 APM, and it would have to interact through a virtual keyboard and mouse, i.e. it would have to actually drag the cursor to box units, etc., so it can't do micro that is (in principle) impossible for humans.
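One plausible way to enforce the 400 APM cap suggested here is a sliding-window rate limiter sitting between the agent and the game. The interface below is invented for illustration; a real harness would also have to model the virtual mouse and keyboard.

```python
from collections import deque

class ApmLimiter:
    """Allow at most `max_apm` actions per rolling `window` seconds."""
    def __init__(self, max_apm=400, window=60.0):
        self.max_apm = max_apm
        self.window = window
        self.stamps = deque()  # game-time timestamps of accepted actions

    def try_act(self, now):
        """Return True iff an action at game-time `now` (seconds) is allowed."""
        # drop timestamps that have aged out of the window
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.max_apm:
            return False  # over the cap: refuse the action
        self.stamps.append(now)
        return True
```

The agent would call `try_act` before every command and simply lose any action the limiter refuses, which is roughly what a physical keyboard bottleneck does to a human.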



ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 10 2016 19:01 GMT
#76
On March 11 2016 03:44 The_Red_Viper wrote:
See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone.
It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.

Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, I obviously have very little idea about it, and if Google says Starcraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though).

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human; without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would even want to limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2, and I think it'll be hard to give the AI imperfect minimap awareness or imperfect mouse accuracy without creating too complicated a model, but things like actual keypresses per second and cursor speed will be easy to limit.
Blargh
Profile Joined September 2010
United States2101 Posts
March 10 2016 19:01 GMT
#77
I suspect there would be a variety of cheese tactics where the AI could simply out-micro the players to such an extreme degree that it could always win in the first 10 minutes.
Green_25
Profile Joined June 2013
Great Britain696 Posts
Last Edited: 2016-03-10 19:07:11
March 10 2016 19:06 GMT
#78
I'm very skeptical that this could work in an RTS like Starcraft.

I'm doubtful even the micro could be perfect. I may well be wrong, however, and if I am, this thing scares the shit out of me.
Loccstana
Profile Blog Joined November 2012
United States833 Posts
March 10 2016 19:06 GMT
#79
This would be of interest to people interested in AI for Starcraft:

https://webdocs.cs.ualberta.ca/~cdavid/pdf/starcraft_survey.pdf

A conservative lower bound on the state space of Brood War is 10^1685. This is many orders of magnitude above the state space of Go, which is 10^170. What's more, the branching factor is 10^50 to 10^200, compared to <360 for Go.
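For scale, the gap between those two quoted exponents is itself a number worth looking at:

```python
# Exponents quoted above: a 10^1685 lower bound for Brood War's state
# space versus roughly 10^170 states for Go.
bw_exp, go_exp = 1685, 170
gap = bw_exp - go_exp
print(f"Brood War's bound exceeds Go's state space by a factor of 10^{gap}.")
```

In other words, the *ratio* between the two state spaces is itself vastly larger than Go's entire state space.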
[url]http://i.imgur.com/lw2yN.jpg[/url]
Loccstana
Profile Blog Joined November 2012
United States833 Posts
March 10 2016 19:10 GMT
#80
On March 11 2016 02:30 Cuce wrote:
Flash hit the key point I think.

Go and chess are games of complete information, whereas Starcraft is a game of incomplete information. Furthermore, getting more information usually means spending more resources to get it.

Sure, a computer can calculate possible situations given incomplete information, but matching what we call the metagame and the intuition of pro players will take a really long time of self-learning and data collecting on the computer's part.


Starcraft is what we call a POMDP (partially observable Markov decision process). There are algorithms for solving these types of problems, for example recurrent neural networks, but no one has tried applying them to something as complex as a full game of Starcraft.
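The partial-observability point can be made concrete without any neural network at all: keep a belief (a probability distribution) over the opponent's hidden strategy and update it by Bayes' rule after each noisy scouting observation. The strategies, observation names, and likelihood numbers below are all invented for illustration.

```python
# Toy belief tracking for a partially observable opponent. The bot never
# sees the opponent's strategy directly; it keeps a distribution over
# hypotheses and updates it after each noisy scout.

# P(observation | hidden strategy): a made-up likelihood table
LIKELIHOOD = {
    "early_gas":    {"tech_rush": 0.8, "economy": 0.3, "all_in": 0.4},
    "many_workers": {"tech_rush": 0.2, "economy": 0.9, "all_in": 0.1},
}

def update_belief(belief, observation):
    """One Bayes update: posterior proportional to likelihood times prior."""
    posterior = {s: LIKELIHOOD[observation][s] * p for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# no information yet: uniform prior over the three hypotheses
prior = {"tech_rush": 1 / 3, "economy": 1 / 3, "all_in": 1 / 3}
after_scout = update_belief(prior, "many_workers")
```

A recurrent network in a POMDP setting effectively learns a compressed version of this kind of belief state from data, instead of using a hand-written table.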
[url]http://i.imgur.com/lw2yN.jpg[/url]
DonDomingo
Profile Joined October 2015
504 Posts
Last Edited: 2016-03-10 19:11:23
March 10 2016 19:11 GMT
#81
It would make much more sense for DeepMind to have a go at DotA; in a game like StarCraft, where mechanics mean so much, an AI will of course be able to crush humans. It's just a question of time.
Green_25
Profile Joined June 2013
Great Britain696 Posts
March 10 2016 19:13 GMT
#82
So, to make it fair the AI would have to be a robot controlling the same set of key-bindings as the human rather than just a computer program.

Oh wait, Innovation.
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
Last Edited: 2016-03-10 19:21:55
March 10 2016 19:18 GMT
#83
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See, the problem I have with this is that the AI will have such a big advantage through 'mechanics' alone.
It's much more interesting in Go because there is no difference in execution; tactics and strategy are all that matter there.

Even though there are a lot more possible "board states" in SC2, I am not sure that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, I obviously have very little idea about it, and if Google says Starcraft would be the next step, maybe it's harder than I think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactics/strategy, which would be weird though).

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human; without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would even want to limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2, and I think it'll be hard to give the AI imperfect minimap awareness or imperfect mouse accuracy without creating too complicated a model, but things like actual keypresses per second and cursor speed will be easy to limit.

Because mechanics are such a big part of Starcraft. By far the biggest. So how do we really make sure the AI didn't win through mechanics? It's impossible (imo) to tune it to exactly the sweet spot. Attention is probably an even bigger deal than APM itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that, then the result is questionable at best, as far as I can tell.

edit: and even then you get a device that is superior to human flesh, so I dunno..
AI vs AI would be interesting to watch, though; I would imagine tactics and strategy would be a way bigger deal there, because the mechanical part could be made exactly even.
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
CxWiLL
Profile Joined May 2013
China830 Posts
Last Edited: 2016-03-10 19:28:50
March 10 2016 19:26 GMT
#84
Imho, I don't know if we humans can actually stand a chance at this.
After watching the Go games, AlphaGo's play style feels like something next-level to me. In the two games played, the bot fell behind pretty badly in the early-mid game, but it just won by out-calculating Lee Sedol in small skirmishes. By the end, the bot had won.
It feels like playing someone with perfect blink stalker micro. No matter how bad his position is, as soon as his blink is ready, you start to trade badly here and there. Soon you find yourself in an awkward position where you cannot walk out of your base, and you cannot expand either.

If the DeepMind team goes full try-hard mode, some micro bot can out-micro human players pretty hard, which is nothing challenging for them.
Personally, I would love to see a bot that plays like a human, fetching information from the game through the output image instead of from the computer's memory; this might make the game fair.
HellHound
Profile Joined September 2014
Bulgaria5962 Posts
March 10 2016 19:40 GMT
#85
On March 11 2016 01:18 Charoisaur wrote:
BTW a bot that plays starcraft perfectly already exists.
It's called INnoVation.

So we can beat DeepMind with nydus play.
Good plan.
Classic GosoO |sOs| Everyone has to give in, let Life win | Zest Is The Best | Roach Cultist | I recognize the might and wisdom of my Otherworldly overlord | Air vs Air 200/200 SC2 is best SC2 | PRIME has been robbed | Fuck prime go ST | ROACH ROACH ROACH
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 10 2016 19:46 GMT
#86
On March 11 2016 03:09 disciple wrote:
This match would have a number of interesting implications, chief among which are BO decisions. If the AI is strictly superior at microing units, there's no reason not to assume that it will try to take advantage of this and go for 1-base all-ins most of the time in order to force micro-intensive early games. It would be cool if the AI had some doubt about its opponent's skill and actually needed to confirm its superiority in micro in order to feel confident about winning and going for all-ins. Humans already do that, as we all know from Bisu being as annoying as possible with his scouting probe. Now imagine an AI controlling this; it will never die by mistake.



I think the AI should go for the late game instead. It has not only perfect micro but also perfect mechanics (maybe not intuitive and predictive macro, but still), perfect multitasking, and a perfect minimap.
More stuff to do means more advantages for the AI.

Yes, more time given to the player means the player will have more options and tricks to pull off a win, but perfect micro can shut down quite a lot of stuff.
64K RAM SYSTEM 38911 BASIC BYTES FREE
BjoernK
Profile Joined April 2012
194 Posts
March 10 2016 19:55 GMT
#87
I feel the AI should input its commands via robot hands and a keyboard. Maybe the APM should be limited to a sensible upper bound (say 500 or so).
chiasmus
Profile Blog Joined January 2012
United States134 Posts
March 10 2016 19:56 GMT
#88
Like many people here, I think it's weird to compare an AI that can bypass the physical mechanics of the game to a chess or go computer.

What I love about Starcraft, and what makes it my favorite esport, is that it's a *physical sport* in addition to a strategy game. If you take away the need to physically manipulate the mouse and keyboard, it isn't really the same game. That's why it's different from chess, or go, or poker, or hearthstone.

The AI-vs-AI competitions are still kinda cool though.
bITt.mAN
Profile Blog Joined March 2009
Switzerland3693 Posts
March 10 2016 20:07 GMT
#89
Lol.

1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).

2. There's been TONS of theorycrafting on RTS AIs and their limitations. link Two big differences between turn-based games and RTS are real-time computational constraints (which matter far less in turn-based AI) and, as Flash rightly states, finite information.
BW4LYF . . . . . . PM me, I LOVE PMs. . . . . . Long live "NaDa's Body" . . . . . . Fantasy | Bisu/Best | Jaedong . . . . .
Grizvok
Profile Joined August 2014
United States711 Posts
March 10 2016 20:32 GMT
#90
On March 11 2016 05:07 bITt.mAN wrote:
Lol.

1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).

2. There's been TONS of theorycrafting on RTS AI and their limitations. link Two big differences between turn-based games and RTS, are real-time computational optimizations (which figure far-less in turn-based AI), and, as Flash rightly states, finite information.


Their limitations NOW, you mean. A sophisticated AI built to play SC2 (when it is ready) will destroy any player easily. Regardless, you don't factor in the crazy levels of micro you can pull off with infinite APM. Dropping three areas at once while still macroing perfectly WHILE stutter-step microing each drop is something a human will never be able to do, yet it is feasible that a computer could do those things.
Chaggi
Profile Joined August 2010
Korea (South)1936 Posts
March 10 2016 20:52 GMT
#91
On March 11 2016 05:32 Grizvok wrote:
On March 11 2016 05:07 bITt.mAN wrote:
Lol.

1. They should do it with BWAPI because SC2 is lame like that (it doesn't have an API to interface code<->game).

2. There's been TONS of theorycrafting on RTS AI and their limitations. link Two big differences between turn-based games and RTS, are real-time computational optimizations (which figure far-less in turn-based AI), and, as Flash rightly states, finite information.


Their limitations NOW you mean. A sophisticated AI built to play SC2 (when it is ready) will destroy any player easily. Regardless you don't factor in the crazy levels of micro you can pull off with infinite APM. Dropping three areas at once while still macro'ing perfectly WHILE stutter step micro'ing each drop is something a human will never be able to do yet it is feasible that a computer could potentially do those things.


I feel like you can solve that by restricting the AI to what's physically possible for a human, like the computer can't be looking at 3 screens at once
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 10 2016 20:53 GMT
#92
On March 11 2016 04:18 The_Red_Viper wrote:
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone.
It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.

Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.

Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that then the result is questionable at best as far as i can tell

edit: and even then you will get a device which is superior to human flesh, so i dunno..
AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even

I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2, but at the top level they aren't usually what decides games. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer, then it wouldn't be its mechanics that made it win.
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
March 10 2016 20:59 GMT
#93
On March 11 2016 05:53 ZAiNs wrote:
On March 11 2016 04:18 The_Red_Viper wrote:
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone.
It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.

Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.

Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that then the result is questionable at best as far as i can tell

edit: and even then you will get a device which is superior to human flesh, so i dunno..
AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even

I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.


I mean that the AI would have the same mechanical restrictions as the typical human. We can only interact with the game through the hardware: mouse, keyboard and monitor.
The AI probably wouldn't do that; it could be everywhere at once (you as a human cannot, because the monitor simply doesn't make it possible, just as the mouse doesn't make it possible to control different groups at once, etc.).
If the human had another device (controlling the game directly with the brain or something similar), this maybe wouldn't be a limiting factor anymore.

But yeah, if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though I am not so sure about that either; StarCraft might have more possible "board states", but I would imagine most of them are completely irrelevant and the actual depth of the game isn't anywhere near Go, for example).
It being a game with limited information is the only interesting aspect of all of this that I can see, tbh.
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 10 2016 21:39 GMT
#94
On March 11 2016 05:59 The_Red_Viper wrote:
On March 11 2016 05:53 ZAiNs wrote:
On March 11 2016 04:18 The_Red_Viper wrote:
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone.
It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.

Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.

Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that then the result is questionable at best as far as i can tell

edit: and even then you will get a device which is superior to human flesh, so i dunno..
AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even

I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.


I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor.
The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc)
If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.

But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example)
It being a game with limited information is the only interesting aspect about all of this i can see tbh

The number of game states in StarCraft is several orders of magnitude higher than in Go. Even if you somehow got rid of the irrelevant ones, like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several orders of magnitude more game states in StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for an AI to figure out.
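A back-of-the-envelope calculation (my own rough numbers, not a published figure) supports this: even a drastically simplified StarCraft state count, tracking only the tile positions of 50 units on a 128x128 map, already exceeds the usual ~10^170 upper bound quoted for Go board states.

```python
import math

# Back-of-the-envelope comparison (rough assumptions, not published figures).
# Go: each of the 361 points is empty, black, or white -> 3**361 states
# (an overcount that ignores legality).
go_exp = 361 * math.log10(3)        # exponent of 10, ~172

# StarCraft, drastically simplified: track only the tile positions of
# 50 units on a 128x128 map; ignore unit types, HP, energy, resources,
# fog of war, and everything else.
tiles = 128 * 128
sc_exp = 50 * math.log10(tiles)     # exponent of 10, ~211

print(f"Go  ~ 10^{go_exp:.0f} board states")
print(f"SC  ~ 10^{sc_exp:.0f} states from unit positions alone")
```

Even this toy count already passes Go, and every ignored attribute (unit type, HP, energy, fog) multiplies it further.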
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
March 10 2016 21:47 GMT
#95
On March 11 2016 06:39 ZAiNs wrote:
On March 11 2016 05:59 The_Red_Viper wrote:
On March 11 2016 05:53 ZAiNs wrote:
On March 11 2016 04:18 The_Red_Viper wrote:
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone.
It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.

Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.

Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that then the result is questionable at best as far as i can tell

edit: and even then you will get a device which is superior to human flesh, so i dunno..
AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even

I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.


I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor.
The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc)
If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.

But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example)
It being a game with limited information is the only interesting aspect about all of this i can see tbh

The number of game states in StarCraft is several magnitudes higher than Go, even if you somehow got rid of the irrelevant ones like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several magnitudes more game states for StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for AI to figure out.


Just to be clear: if you place building X at place Y or at place Z, those are two different "board states", right?
Even if placing your first supply depot in the enemy base probably isn't all that smart?

I get that it isn't "intuitive" for the AI like it is for a human being, but there surely are tons and tons of these things in SC2.
Even something like moving my army (or even a single marine) a few tiles to the left probably won't be the biggest deal, but it surely counts as a different "board state"?
If we want to play 100% perfectly these things have to be considered, but overall it probably doesn't matter at all, I would imagine.
I don't think the same is true for Go? (I have no idea about Go though.)
My statement was probably just simply this: a high-level Go player surely possesses more tactical/strategic understanding than a StarCraft professional; you don't have to be highly intelligent to play StarCraft at a high level, and the same probably isn't true for Go/chess. I think? (I can see why this isn't all that relevant to the main topic though ^^)
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
Slayer91
Profile Joined February 2006
Ireland23335 Posts
March 10 2016 21:56 GMT
#96
The number of game states doesn't really matter anymore, since we aren't using brute-force calculation and there are clear ways to evaluate strength of play (economic advantage, supply advantage)
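The kind of evaluation Slayer91 alludes to could look something like this sketch. The fields and weights are purely illustrative assumptions, not taken from any real bot:

```python
from dataclasses import dataclass

# Sketch of a heuristic evaluation: score a position by economic and
# supply advantage rather than enumerating game states.
@dataclass
class PlayerState:
    workers: int
    supply_used: int
    army_value: int   # total minerals + gas invested in army units

def evaluate(me: PlayerState, opp: PlayerState) -> float:
    """Positive score = advantage for `me`."""
    return (1.0 * (me.workers - opp.workers)
            + 0.5 * (me.supply_used - opp.supply_used)
            + 0.002 * (me.army_value - opp.army_value))
```

A learned system would tune (or replace) these weights itself; the point is only that "who is ahead" can be scored cheaply without brute force.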
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
Last Edited: 2016-03-10 22:03:35
March 10 2016 22:00 GMT
#97
On March 11 2016 06:47 The_Red_Viper wrote:
On March 11 2016 06:39 ZAiNs wrote:
On March 11 2016 05:59 The_Red_Viper wrote:
On March 11 2016 05:53 ZAiNs wrote:
On March 11 2016 04:18 The_Red_Viper wrote:
On March 11 2016 04:01 ZAiNs wrote:
On March 11 2016 03:44 The_Red_Viper wrote:
See the problem i have with this is that the Ai will have such a big advantage through 'mechanics' alone.
It's much more interesting in GO because there is no difference in execution, tactics and strategy is all that matters here.

Even though there are a lot more possible "board states" in sc2, i am not sure if that really matters in the end if you theoretically have a player with unlimited APM and attention.


But hey, i have obviously very little idea about it and when google says starcraft would be the next step, maybe it's harder than i think (or they really want to somewhat limit the AI in the mechanics department so it comes down to tactis/strategy, which would be weird though)

How would it be weird to limit the mechanics? The goal is to be 'smarter' than a human, without limiting mechanics it wouldn't really prove anything or be an accomplishment. I imagine they would want to even limit the mechanics so that they're slightly below the absolute best players mechanically. Attention is a resource in SC2 and I think it'll be hard to give the AI imperfect mini-map awareness or imperfect mouse-accuracy without creating too complicated of a model, but things like actual keypresses a second and cursor speed will be easy to limit.

Because mechanics are such a big part about starcraft. By far the biggest. So how do we really make sure that the Ai didn't win through mechanics? It's impossible (imo) to build it exactly at the sweet spot. Attention is probably even a bigger deal than apm itself.
The only real way to make sure "it is fair" is to make the AI use the same hardware, mouse, keyboard and monitor.
If you don't do that then the result is questionable at best as far as i can tell

edit: and even then you will get a device which is superior to human flesh, so i dunno..
AI vs AI would be interesting to watch though, i would imagine tactis and strategy would be a way bigger deal there because the mechanical part could be made exactly even

I don't get what you mean by making the AI use a mouse, keyboard and monitor. The AI would still be able to move them with precision and speed far beyond a human. Speed and precision are big parts of SC2 but at the top-level they aren't what makes players usually win. If an AI that is restricted to the mechanics of an average progamer beats a top-level progamer then wouldn't be its mechanics that made it win.


I mean that the AI would have the same restrictions mechanically as the tpyical human. We only can interact with the game with the help of the hardware, mouse, keyboard and monitor.
The AI probably wouldn't do that, it could be everywhere at once (you as human cannot because the monitor simply doesn't make it possible, just as the mouse doen't make it possible to control different groups at once, etc)
If the human had another device (control the game directly with the brain or something similar) this maybe wouldn't be a limiting factor anymore.

But yeah if you can somehow make it so that the AI doesn't have better mechanics/multitasking/attention than the average pro player, then maybe this would be interesting (even though i am not so sure about that either, even though starcraft might have more possible "board states", i would imagine that most of them are completely irrelevant and that the actual depth of the game isn't anywhere near GO for example)
It being a game with limited information is the only interesting aspect about all of this i can see tbh

The number of game states in StarCraft is several magnitudes higher than Go, even if you somehow got rid of the irrelevant ones like obviously stupid openings (which really is something the AI would have to work out for itself), there would still be several magnitudes more game states for StarCraft. Regardless of what you think about the strategic depth of the game, the sheer number of game states makes things far more complicated for AI to figure out.


Just to be clear, let's say you place building X at place Y or Z, that are two different "board states" right?
Even if it means that placing your first supply depot in the enemy base probably isn't all that smart?

I get that it isn't "intuitive" for the AI like for a human being, but there surely are tons and tons of these things in sc2.
Even something like: I move my army (or even single marine) a few tiles on the left, it probably won't be the biggest deal but it surely is considered a different "board state" ?
If we want to play 100% perfectly these things have to be considered, but overall it probably doesn't matter at all i would imagine.
I don't think the same is true for GO? (i have no idea about GO though)
My statement was probably just simply this: A high lvl GO players surely possesses more tactical/strategical understanding than a starcraft professional, you don't have to be highly intelligent to play starcraft at a high lvl, the same probably isn't true for GO/chess. i think? (i can see why this isn't all that relevant to the main topic though ^^)

Well, your first depot position is a bad example, because it's actually very important (and even if it wasn't, the AI would probably still figure out the best place for it). I get what you're saying though: if you place your 4th Gateway one space to the left, it's a trivially different game state, and equivalent states exist in Go too, given the board's symmetry. But even if you remove stuff like that and simplify the model as much as possible, you're still going to have a ridiculous number of game states. StarCraft BW and SC2 both even have some random factors (more so in BW); even though they are minor, they also increase the complexity. How much 'human' strategy is needed is up for debate, but for an AI with mechanical limits, conquering StarCraft will be far, far more difficult than Go.
Vlad_Slymor
Profile Joined December 2015
France26 Posts
March 10 2016 22:08 GMT
#98
Honestly, I'm pretty sure it would still obliterate any player even with a strong APM cap.
That's the whole point of machine learning: cap it at 100 APM, and it will still find the single most optimal use for each of those actions. Add zero reaction time and perfect decision-making, and I can't even imagine how Flash is supposed to win.

Actually, an interesting challenge would probably be to find the minimum APM it needs to win...
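Finding that minimum cap could, in principle, be framed as a search over APM values, assuming win rate only grows with the cap. A hypothetical sketch, where `win_rate` stands in for "play many games against a pro at this cap" and is faked with a simple curve here:

```python
# Hypothetical sketch: binary-search the lowest APM cap at which the
# capped bot still wins a majority of games. `win_rate` is a stand-in
# for a real evaluation; this fake monotone curve is for illustration.
def win_rate(apm_cap: int) -> float:
    return min(1.0, apm_cap / 300)   # fake: stronger with more APM

def minimum_winning_apm(lo: int = 10, hi: int = 600) -> int:
    while lo < hi:
        mid = (lo + hi) // 2
        if win_rate(mid) > 0.5:      # still winning: try a lower cap
            hi = mid
        else:                        # losing: the cap must be higher
            lo = mid + 1
    return lo
```

This assumes win rate is monotone in the cap; if real data turned out otherwise, you would need a full sweep instead of a bisection.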
disciple
Profile Blog Joined January 2008
9070 Posts
March 10 2016 22:11 GMT
#99
Considering the careers sAviOr and Stork had, I think an APM somewhere between 80 and 120 will be sufficient
Administrator"I'm a big deal." - ixmike88
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 10 2016 22:46 GMT
#100
inb4 timena vs DeepMind in S league
Bagration
Profile Blog Joined October 2011
United States18282 Posts
Last Edited: 2016-03-10 22:54:19
March 10 2016 22:53 GMT
#101
On March 11 2016 07:11 disciple wrote:
Considering the careers savior and stork had, I think some APM between 80 and 120 will be sufficient


Could you imagine a player with savior's strategy and game-sense, Boxer's creativity, and unlimited APM?
Team Slayers, Axiom-Acer and Vile forever
Slayer91
Profile Joined February 2006
Ireland23335 Posts
March 10 2016 22:55 GMT
#102
yeah, flash
OkStyX
Profile Blog Joined October 2011
Canada1199 Posts
March 10 2016 23:21 GMT
#103
On March 11 2016 07:55 Slayer91 wrote:
yeah, flash

good response, yeah flash was the pinnacle of BW for a human haha.
Team Overklocked Gaming! That man is the noblest creature may be inferred from the fact that no other creature has contested this claim. - G.C. Lichtenberg
andrewlt
Profile Joined August 2009
United States7702 Posts
March 10 2016 23:34 GMT
#104
On March 11 2016 07:11 disciple wrote:
Considering the careers savior and stork had, I think some APM between 80 and 120 will be sufficient


Lots of pro APM isn't really effective APM. Hyuk once had all of 4 zerglings to defend when Flash caught him unaware with a rush. He was hitting 800 apm doing god knows what.
Shinokuki
Profile Joined July 2013
United States859 Posts
March 10 2016 23:37 GMT
#105
Why is this in the SC2 section if AlphaGo is playing BW, and Flash mostly played BW and won lots of championships..
Life is just life
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 10 2016 23:37 GMT
#106
On March 11 2016 08:37 Shinokuki wrote:
Why is this in sc2 section if alpha go is playing bw and flash mostly played bw and won lots of championships..

Because everybody knows OP so telling him he did something wrong is bad in TL's eyes
Deleted User 3420
Profile Blog Joined May 2003
24492 Posts
March 10 2016 23:49 GMT
#107
Like other people said, APM would have to be limited; otherwise the AI would win on APM alone. It could macro perfectly while nonstop microing 100 different units in 100 different spots all at once.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 00:00 GMT
#108
On March 11 2016 08:37 Shinokuki wrote:
Why is this in sc2 section if alpha go is playing bw and flash mostly played bw and won lots of championships..

Because it's very relevant to SC2 as well, and more people will see it in the SC2 section.
BeStFAN
Profile Blog Joined April 2015
483 Posts
March 11 2016 00:24 GMT
#109
thankfully for team humans, Terran is probably best equipped to fight an AI that would seek to abuse unit control and early builds, and Flash's strengths are prediction and forcing games to go longer

ideally Flash would be playing against a Protoss AI
❤ BeSt... ༼ つ ◕_◕༽つ #YEAROFKOMA #YEAROFKOMA #YEAROFKOMA ༼ つ ◕_◕༽つ
bITt.mAN
Profile Blog Joined March 2009
Switzerland3693 Posts
March 11 2016 00:24 GMT
#110
Lol you're right, ffs Wax ...


Of course "playing StarCraft optimally" would be really cool - I would love it if the game could be 'solved'. Similar to watching some speed-runs that play full-tilt, ultra-risky, and after the 500th try they finally get that perfectly lucky run (such as the Deus Ex 1 or Jedi Knight: Jedi Outcast runs on SDA).

The problem is, DeepMind playing Go and DeepMind playing Starcraft is not a valid comparison. 'Solving' Go doesn't guarantee you have 'solved' Starcraft, because Starcraft presents bigger, different types of challenges. Here are the two big ones (I'm no comp-sci or AI expert, btw, these are the obvious ones).


1. Limited information (which is the point Flash is making). In Go, both players can see the entire game state; not in SC. You have to react pre-emptively, you have to get observers just in case. Working with limited information is hard (and don't get me started on mind games, series strategy, or going on tilt). For example: there are two places your opponent can expand to, and you can only afford to scan one of them. You can use the process of elimination, but how the hell will the computer teach itself to do that? It's a new category of idea. It's not just 'push the buttons slightly faster and more precisely'; it needs to THINK, it needs to teach itself this new 'process of elimination' mechanic. Sure, you can hard-code it to play assuming the opponent is using a rational build order, but that predictability is straightforward to subvert.

2. Computational throughput. The whole appeal of RTS over turn-based games is the real-time trade-off: "Do I commit to my current decision?" OR "Do I hold out for a better decision?" Computing more takes more resources (time); if you wait too long to act, they'll kill you. I don't enjoy chess for this reason. In SC, if I send my units to the far side of the map and then change my mind, I can undo the 'badness' of the situation if I react soon enough. In chess, you can't take back a move once the opponent has seen it. You commit, that's it.
Now how the hell do you program an AI to adjust its computational depth on the fly, where sometimes it thinks a lot and other times it knows to just act (Flash, for example, can sim-city when he needs to, but other times just throws down depots messily so he doesn't get mentally slowed down)? The way humans manage this balance is to practice so much that they delegate 'thinking' to 'instinct'; they don't think about the right move, they act according to how they feel in the moment. And it works, because 'how they feel' is trained to instinctively make the right decisions. They don't compute, they act on impulse. THAT is HARD for an AI!



Of course an AI could multi-task and micro better than humans, but the real challenge is dealing with limited information (scouting, assuming, and adjusting, as opposed to sticking to your cookie-cutter 'optimal strategy') and making good decisions fast enough (rather than searching DEEP, which takes time). And needless to say, Go's mechanics are vastly simpler than StarCraft's. Economy, defense, attack, tech switching, positioning, harassment... it's a whole 'nother level of difficulty to program for!
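The "sometimes think deep, sometimes just act" balance described above is what the AI literature calls an anytime algorithm: always hold a usable answer, and keep refining it only while the time budget lasts. A toy sketch (the "search" is a stand-in; only the budgeting logic matters):

```python
import time

# Toy anytime decision loop: guarantee an immediate answer, then deepen
# the search only while time remains before the deadline.
def search_at_depth(depth: int) -> int:
    time.sleep(0.001 * depth)    # deeper search = more time (toy model)
    return depth                 # toy result: the depth reached

def anytime_decide(budget_s: float) -> int:
    deadline = time.monotonic() + budget_s
    best = search_at_depth(1)    # always have *some* answer ready
    depth = 2
    while time.monotonic() < deadline:
        best = search_at_depth(depth)
        depth += 1
    return best
```

With a generous budget it keeps deepening; with no budget it still returns the shallow answer, which is exactly the "just throw down the depot" behavior.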
BW4LYF . . . . . . PM me, I LOVE PMs. . . . . . Long live "NaDa's Body" . . . . . . Fantasy | Bisu/Best | Jaedong . . . . .
ilikeredheads
Profile Joined August 2011
Canada1995 Posts
March 11 2016 00:25 GMT
#111
Of course Flash can win, he's a terminator as well. Machine vs Machine
BeStFAN
Profile Blog Joined April 2015
483 Posts
March 11 2016 00:29 GMT
#112
On March 11 2016 09:25 ilikeredheads wrote:
Of course Flash can win, he's a terminator as well. Machine vs Machine


the first terminator had issues when it faced the next iterations of terminators after the first film

❤ BeSt... ༼ つ ◕_◕༽つ #YEAROFKOMA #YEAROFKOMA #YEAROFKOMA ༼ つ ◕_◕༽つ
OkStyX
Profile Blog Joined October 2011
Canada1199 Posts
March 11 2016 00:45 GMT
#113
If God falls to the AI, Skynet will be born.
Team Overklocked Gaming! That man is the noblest creature may be inferred from the fact that no other creature has contested this claim. - G.C. Lichtenberg
OkStyX
Profile Blog Joined October 2011
Canada1199 Posts
March 11 2016 00:46 GMT
#114
On March 11 2016 08:49 travis wrote:
like other people said, apm would have to be limited. otherwise, the ai would win on apm alone. It could macro perfectly while nonstop microing 100 different units in 100 different spots all at once.

wouldn't that be so exciting to watch though?
Team Overklocked Gaming! That man is the noblest creature may be inferred from the fact that no other creature has contested this claim. - G.C. Lichtenberg
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 00:49 GMT
#115
On March 11 2016 09:46 Shakattak wrote:
On March 11 2016 08:49 travis wrote:
like other people said, apm would have to be limited. otherwise, the ai would win on apm alone. It could macro perfectly while nonstop microing 100 different units in 100 different spots all at once.

wouldnt that be so exciting to watch though?

AI vs AI TvT on BW would be a nightmare though
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 11 2016 01:27 GMT
#116
On March 11 2016 02:36 Cuce wrote:
On March 11 2016 02:11 eScaper-tsunami wrote:
On March 11 2016 00:42 BisuDagger wrote:
(Z)hero can operate at about 450-500 apm on a consistent basis. To make this fair, the actions per minute should be clamped to <=600


The sad thing is it doesn't take 600 apm to have perfect micro. In the end it boils down to accuracy and efficiency.



btw does broodwar have a hard limit for commands accepted per frame?
like 12 per frame or something.

I wouldn't be surprised if there was, at least a limit to buffering




There is a buffer, yes. When you exceed it, StarCraft: Brood War won't process any further commands, so you cannot simply spam APM all the time.


In general, for the people who think the bot can win based on its high APM alone: if that were true, the Berkeley Overmind would have defeated Flash already. As a matter of fact, micro-management is currently the biggest issue in the top StarCraft AI bots. This has to do with the fact that micro-management is in the complexity class EXPTIME. So the main issue is deciding where to attack/move based on the information you have. High APM isn't going to help you if you don't know what to do with it.
If you cannot win with 100 apm, win with 100 cpm.
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 11 2016 01:35 GMT
#117
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason AIs are starting to beat Go players is a somewhat recent innovation in AI: deep learning. Over the last 10 years or so, there have been several advances in machine learning that made a gigantic leap in many fields where computers had always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the networks are trained with an optimization technique called backpropagation).
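For the curious, the core of backpropagation is just "nudge each weight against the error gradient". A minimal, self-contained sketch: one sigmoid unit learning the AND function on toy data, nothing like a real character-recognition network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Gradient of the squared error, chained through the sigmoid:
        grad = (out - target) * out * (1.0 - out)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# After training, the unit classifies AND correctly:
# sigmoid(w[0] + w[1] + b) > 0.5, and every other input falls below 0.5.
```

A real net stacks many such units in layers and pushes the gradient back through all of them, but the update rule is the same idea.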

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).
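"Throwing some probabilities around" here amounts to a Bayesian belief update over the opponent's hidden plan. A toy sketch with made-up numbers (the plan names and likelihoods are invented for illustration):

```python
def update_belief(prior, likelihood):
    """Posterior is proportional to prior times likelihood, normalized."""
    unnorm = {plan: prior[plan] * likelihood[plan] for plan in prior}
    total = sum(unnorm.values())
    return {plan: p / total for plan, p in unnorm.items()}

# No information yet: both hidden plans equally likely.
prior = {"hidden_expand": 0.5, "all_in": 0.5}

# Scout sees few units at the opponent's front. Made-up likelihoods of
# seeing that under each plan:
likelihood = {"hidden_expand": 0.8, "all_in": 0.2}

posterior = update_belief(prior, likelihood)
# One scouting pass shifts the belief heavily toward "hidden_expand".
```

Poker bots do essentially this over hole-card ranges; a StarCraft bot would do it over build orders.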
What qxc said.
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-11 02:10:31
March 11 2016 01:40 GMT
#118
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.

But the point that the breakthrough is learning is very interesting. SC2 may not be too different from Go. While it's not solved, chess can be played very well with brute-force processing power.
But it's interesting that chess engines have a big game database: especially in the early game, the computer checks known positions instead of trying to calculate everything from scratch. This way it can go deeper into positions that make sense and not bother looking at silly moves.

edited
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
vOdToasT
Profile Blog Joined September 2010
Sweden2870 Posts
Last Edited: 2016-03-11 02:05:04
March 11 2016 01:54 GMT
#119
It's only a matter of time until artificial intelligence can defeat the greatest of us at all games.
After that, they will only lack the ability to artistically express and describe things that are valuable to humanity, because they don't know what it's like to be human.

Eventually, they may be able to understand that, and create art, too.

That is why I support cybernetic enhancement for humanity. If we do not find a way to give our brains the ability to do what computers can do, then we will be doomed to exist as an inferior, weaker species while artificial intelligence takes care of everything for us.
If it's stupid but it works, then it's not stupid* (*Or: You are stupid for losing to it, and gotta git gud)
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-11 02:01:30
March 11 2016 02:00 GMT
#120
On March 11 2016 10:54 vOdToasT wrote:
It's only a matter of time until artificial intelligence can defeat the greatest of us at all games.
After that, they will only lack the ability to artistically express and describe things that are valuable to humanity, because they don't know what it's like to be human.

Eventually, they may be able to understand that, and create art, too.

They could describe art, and they could create good art. The day they can enjoy it, I quit.
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 11 2016 02:04 GMT
#121
I hope this happens.
@Drayxs | Drayxs.221 | Drayxs#1802
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 11 2016 02:06 GMT
#122
This technology is amazing but quite frightening.
chipmonklord17
Profile Joined February 2011
United States11944 Posts
Last Edited: 2016-03-11 02:09:14
March 11 2016 02:08 GMT
#123
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 11 2016 02:08 GMT
#124
On March 11 2016 10:40 Superbanana wrote:
Imba Ai goes 3 rax reaper every game no matter what and wins every game

Don't say "solved". Chess is not solved, Go is not solved.


You're right about that. I should've said "they beat Kasparov without a flying penis"

Checkers is solved though.
What qxc said.
Jonoman92
Profile Blog Joined September 2006
United States9103 Posts
March 11 2016 02:10 GMT
#125
I don't think an AI will be able to beat a current level top BW player within 50 years. Though it'd be cool to see... and terrifying.
Hypertension
Profile Joined April 2011
United States802 Posts
March 11 2016 02:55 GMT
#126
I think Deepmind wins this no contest with a few months of training. Nearly perfect micro and macro will make up for a lot of tactical errors and build order mistakes, especially in Brood War. After the AI builds a medic and marine it gets tough; once a dropship comes out, gg
Buy boots first. Boots good item.
b0lt
Profile Joined March 2009
United States790 Posts
March 11 2016 03:50 GMT
#127
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports


And it'd be completely pointless?
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 04:00 GMT
#128
On March 11 2016 11:08 chipmonklord17 wrote:
Hey Google, instead of making an AI to beat a starcraft player, sponsor a starcraft team. It would cost less and probably be better received.

EDIT: Not saying this is poorly received, but imagine the hype if it was announced Google was getting into esports

But that's the cool thing about Google... They're not doing things to polish their image, but to innovate. They're pushing the boundaries.

Sponsoring a team wouldn't really do that, hm? Sponsoring a team is just for PR.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 04:01 GMT
#129
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by a simple list of positions describing which square had a stone placed on it each turn. StarCraft's game state is nowhere near that compact, and it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training, it's going to have to learn mostly by playing against itself, which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, and I could be wrong, but it seems to be a lot harder than Go.
evilfatsh1t
Profile Joined October 2010
Australia8632 Posts
March 11 2016 05:45 GMT
#130
just imagine an ai that is following flash's timing builds advancing towards you. it would siege the exact number of tanks at the exact range needed to destroy your army, whilst advancing with the remaining unsieged units as you back off. kind of like a tidal wave slowly advancing towards you, but so beautifully smooth that you'd piss your pants trying to look for an opening.
gives me chills just thinking about that possibility.
that said though, i don't know enough about how deepmind is programmed to comment on its ability, but i do know that go is at its roots a game that could in theory be solved by maths. the only advantage pros had over ai in past years was that no ai could calculate enough of the possible moves until recently. im not sure if this is how deepmind works now, but if the ai is able to calculate every variable in a game that follows mathematical rules, then a human shouldn't be able to win.
starcraft however doesn't follow these rules, so i don't see ai being able to defeat the decision making of a pro for a long time
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 05:47 GMT
#131
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.
BronzeKnee
Profile Joined March 2011
United States5217 Posts
Last Edited: 2016-03-11 05:50:52
March 11 2016 05:49 GMT
#132
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).


The thing about Sc2 though is that it is different.

In Poker, or Go or Chess, when you move, you move. That's it. And a computer can process that. SC2 is different.

If I load up a drop and sit it outside your base, I don't have to drop. But I might. But the dropship might actually be empty. What do you do? What does the AI do? I might show extreme aggression, but be taking a hidden expansion. I could also show an expansion, but then cancel it or not make it and attack.

Unless the computer wins with perfect micro and macro, I think it would struggle against non-traditional builds, timing attacks and mind games.
Wrath
Profile Blog Joined July 2014
3174 Posts
March 11 2016 05:57 GMT
#133
1. It is for BW.

2. The APM most likely will be restricted to around 200. An AI's APM is equal to its EPM; it does not waste clicks like progamers who spam boxing or clicking to inflate their APM. So for guys like EffOrt who can go to around 450-500 APM, what is their actual EPM? Does it go beyond 200? That is what we need to consider for the AI.
CursOr
Profile Blog Joined January 2009
United States6335 Posts
March 11 2016 05:58 GMT
#134
All whilst Blizzard has absolutely no interest in making their AI even remotely strategic or interesting in any way. Once again, thank god for community interest.

I would love to see an AI that dropped in different places, tried to deceive opponents, did real different build orders, and played map specific strategies, just as a person would.
CJ forever (-_-(-_-(-_-(-_-)-_-)-_-)-_-)
ETisME
Profile Blog Joined April 2011
12351 Posts
March 11 2016 06:02 GMT
#135
Actually it makes me wonder what two DeepMinds would do if they were to play against each other.
We may even see a whole new meta develop
其疾如风,其徐如林,侵掠如火,不动如山,难知如阴,动如雷震。
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 11 2016 06:09 GMT
#136
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves, and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it weren't, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful, because they show so little information about the game state at any point in time; I think a replay is needed so it can observe the entire game state at every point in time.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 11 2016 07:24 GMT
#137
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show such little information about the game state at any point in time, I think a replay is needed so it can observe the entire game state at every point in time.

It would be nice if wherever Koreans play BW, the client automatically saved the replay, scrambled the names, and sent it off to Google. Or imagine people at Google becoming frustrated because, for once, they do not have big data sets available for everything.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lpunatic
Profile Joined October 2011
235 Posts
Last Edited: 2016-03-11 07:59:01
March 11 2016 07:53 GMT
#138
On March 11 2016 15:09 ZAiNs wrote:
On March 11 2016 14:47 beg wrote:
@ZAiNs: Aren't there many BW replays? Also, DeepMind is capable of learning from reading the graphics, so they could try using VoDs too.

AlphaGo was fed 30 million moves and apparently the average number of moves per game is 200, meaning they gave it around 150,000 high-level games. Getting that number of BW games is impossible, and even if it were, I'm quite sure you'd need drastically more replays to get training results on par with AlphaGo's initial training set. I don't think VODs would even be useful because they show such little information about the game state at any point in time, I think a replay is needed so it can observe the entire game state at every point in time.


AlphaGo got off the ground with a big bank of games, but recently it's been improving purely through self-play.

I think if the DeepMind team put their effort into BW, they'll be able to achieve superhuman performance in a few years' time.

There are some ways that the problem is harder than Go - partial information, real time and a much more complex raw game state. On the other hand, there are some clear advantages an AI will have over people (APM, multitasking) which are not present in Go. It seems to me that if you can get an AI that makes decisions like a half decent human player, it will be able to press its advantages well beyond human competition.
lpunatic
Profile Joined October 2011
235 Posts
March 11 2016 08:17 GMT
#139
On March 11 2016 13:01 ZAiNs wrote:
On March 11 2016 10:35 rockslave wrote:
Everyone is missing the point (including Flash).

Go is already a game with an impossibly big search tree for brute force. Even chess is. The classical approach of heuristics coupled with brute force solved chess, but it was never even Platinum in Go.

The only reason for AIs starting to beat Go players is a somewhat recent innovation in AI: deep learning. From 10 years ago or so, there were several advancements to machine learning that made a gigantic leap in many fields for which computers always sucked. For instance: character recognition used to be a PitA, but nowadays you can write Python code that gets it right 99% of the time in a few minutes (the breakthrough was a particular optimization technique called backpropagation).

Even if you cap micro a lot, StarCraft isn't too much different from a combination of Go and a bunch of pattern recognition. That is precisely what machine learning solves. It's not easy though, there is a lot of clever training and parametrization to be done... But if they put it in their roadmap (with enough money), it will happen.

Oh, and imperfect information is not a problem at all. Even with a more standard (backtracking / brute force) approach, you only need to throw some probabilities around.

It's rather easy to write programs that play Poker well, for instance (discount the poker face though).

Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by simple list of positions describing which square had a stone placed on it each turn, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.


On the other hand, evaluating a stone in Go is a very hard problem: it may depend on the position of every other stone on the board. For StarCraft, the value of a base or a zealot is pretty simple to evaluate in comparison, and while zealots in a good position are better than zealots in a bad position, the positional relationships aren't anywhere near as complex as in Go.

Point being, you maybe can get away with a simplified game state representation.
Gluon
Profile Joined April 2011
Netherlands386 Posts
March 11 2016 08:25 GMT
#140
On March 11 2016 15:02 ETisME wrote:
Actually it makes me wonder what would two deepmind do if they were to play against each other.
We may even see a whole new meta developing


Exactly this. With the way the AI learns, the most interesting development will be that it is not constrained to any conventional build orders. It could semi-randomly develop completely new builds for specific match-ups on specific maps. I'm really looking forward to that.

Other than that, Deepmind should eventually win with stellar macro and micro, just by going 3 rax every game
Administrator
Haukinger
Profile Joined June 2012
Germany131 Posts
Last Edited: 2016-03-11 08:36:08
March 11 2016 08:28 GMT
#141
You can have that today with human players if you remove the mechanical stress, leaving more room for actual thinking.

That's the core problem with why StarCraft is boring to play and boring to watch for most people: mechanics play an overwhelming part in winning. You can get to GM just by cannon rushing or 4-gating mechanically well, and I'm sure a bot would win GSL just by worker rushing. That means players have to know their maps completely and choose more or less static build orders, because there's no time in the game to think.
sertas
Profile Joined April 2012
Sweden881 Posts
March 11 2016 08:47 GMT
#142
you can't get GM by cannon rushing or 4-gating, wtf, not in this expansion at least
heqat
Profile Joined October 2011
Switzerland96 Posts
March 11 2016 09:18 GMT
#143
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or the 3D scene). For instance, it should not be able to just read the positions of the (visible) units. So for a true test, the AI should also move the camera and try to figure out what it sees on screen, with a chance to miss some information (which happens all the time to humans in SC2). If it can simply access the game state like the current SC2 AI does, it is not a true test from my point of view.



NiHiLuSsc2
Profile Blog Joined November 2012
United States50 Posts
March 11 2016 10:28 GMT
#144
if anyone can do it, it's God himself
PBJT
sakurazawanamo
Profile Joined March 2016
Korea (South)1 Post
March 11 2016 10:40 GMT
#145
I wonder how an AI will react to fakes and misdirection in builds
DwD
Profile Joined January 2010
Sweden8621 Posts
March 11 2016 10:42 GMT
#146
After seeing some of those micro bots with like 50,000 APM (or whatever) in the SC2 map editor, I'm pretty sure Flash would get smoked pretty hard.
~ T-ARA ~ DREAMCATCHER ~ EVERGLOW ~ OH MY GIRL ~ DIA ~ BOL4 ~ CHUNGHA ~
coolprogrammingstuff
Profile Joined December 2015
906 Posts
March 11 2016 11:17 GMT
#147
why are people talking about insane micro? Give it some unique quirks, perhaps, but "hurrr insane micro ai" is fucking stupid. It completely defeats the point if you give it perfect mechanics where it macros exactly on point and micros 10 stacks of 11 mutas at once. Pointless and stupid. I'm cringing reading the comments discussing its micro mechanics and how unstoppable it would be.

Make it play like a human. Don't restrict the APM; that's not how these algorithms operate, since their EAPM would be close to 100%. Restrict the EAPM instead, to a human level. Make it an actual contest of natural ability: see if it can micro better through logic (splitting, positioning, general human-tier control) rather than by maneuvering ridiculously. Make it execute build orders, rather than a 2 hatch muta all-in every game with impossible micro. Making it play in a way that's human-esque is what makes it interesting; otherwise no human can stop even a perfect 4pool.

Besides that, I think if it went up against Flash soon, it'd be close, with Flash pulling ahead. But hypothetically, if the bot spent two years being fed Brood War, I think even a peak Flash would have no chance. And it'd be fascinating to watch how it plays.
Dromar
Profile Blog Joined June 2007
United States2145 Posts
March 11 2016 11:26 GMT
#148
On March 11 2016 18:18 heqat wrote:
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or 3D scene). For instance it should not be able to just access the position of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen with a chance to miss some informations (which happend all the time to human in SC2). If it can simply access game state like the current SC2 AI, this is not a true test from my point of view.






Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?
heqat
Profile Joined October 2011
Switzerland96 Posts
March 11 2016 11:49 GMT
#149
On March 11 2016 20:26 Dromar wrote:
On March 11 2016 18:18 heqat wrote:
The question is at what level the AI can access the game. Normally in AI research, the software cannot access the internal state of the game (or 3D scene). For instance it should not be able to just access the position of the (visible) units. So for a true test, the AI should also move the camera, trying to figure out what it sees on screen with a chance to miss some informations (which happend all the time to human in SC2). If it can simply access game state like the current SC2 AI, this is not a true test from my point of view.






Well the game played will be Brood War, but even if it were SC2, the AI could control everything without moving the screen. It could simply hotkey every unit as it is produced, remember its location, and from that hotkey select and give commands to each individual unit. Isn't there also a "Select Army" button?


Sorry, yes, it would be BW. Regarding your point, what I mean is that for a perfect test, the AI should use the same user interface as a human. It should make decisions from a flat 2D picture and control the game using hotkeys, scrolling, etc. (you don't need a physical robot, just wire the data to the AI software). In regular game AI (such as the SC2 AI), the software has access to the complete internal game state and can make decisions at every step by simply checking unit positions, states, etc., with some specific rules to avoid cheating (like preventing the AI from accessing non-visible units).

Now I guess it would become much more difficult for the AI if it has to play through the exact same user interface as a human (which makes sense for a true SC human/machine match, unlike Go/Chess, where the user interface does not change the result). It would require some very advanced real-time visual recognition algorithms, for instance.

ETisME
Profile Blog Joined April 2011
12351 Posts
Last Edited: 2016-03-11 12:03:15
March 11 2016 11:52 GMT
#150
after reading some interviews, I think the deepmind team just used starcraft as a point of reference because it is a famous strategy game, without being aware that mechanics play a huge part of the game.

Anyway I really don't think it is going to pose much of a challenge for the AI.
I am not an expert, but surely it can just scout every once in a while, deduce the most probable and threatening strategy/timing coming, and then win through perfect attention to everything, perfect micro, perfect reactionary decisions, etc.

Each harass/engagement removes more and more uncertainty for the AI.
其疾如风,其徐如林,侵掠如火,不动如山,难知如阴,动如雷震。
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
Last Edited: 2016-03-11 13:00:42
March 11 2016 12:57 GMT
#151
On March 11 2016 13:01 ZAiNs wrote:
Deep learning needs a dataset for the AI to be trained though. For AlphaGo they trained two separate networks (one designed to predict the next move, and the other designed to predict the final winner) on 30 million discrete moves from games played by human experts. After that it trained itself by actually playing Go against itself a ridiculous number of times.

A Go game can be perfectly modelled by simple list of positions describing which square had a stone placed on it each turn, it's going to be very hard to get enough useful data (replays) to significantly help with the training. And without the initial training it's going to have to learn mostly by playing against itself which will be difficult because of the ridiculous number of game states. At least that's my understanding of things, I could be wrong, but it seems to be a lot harder than Go.


That is a fair point. But I think you can break the game into several mini-games, with a little algorithm to guess who has the advantage, based on material, positioning, etc. (just as you said they did for Go).

While Go can be perfectly modelled, the number of possible states is intractable. Just as you need heuristics to prune the search tree in board games, you can also "cheat" in SC by having a sort of hash function on states. That's what I meant by parametrization earlier: a lot of the work involved in building neural nets is choosing the inputs.
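A minimal sketch of that "hash function on states" idea: collapse a detailed game state into a coarse, hashable feature tuple, so that many distinct-but-similar states share one table entry. The field names and bucket sizes below are invented for illustration:

```python
# Sketch of state abstraction: map a raw game state to a small feature
# tuple so similar states collapse into the same bucket.
# Field names and bucketing choices are purely illustrative.

def abstract_state(state):
    """Map a raw state dict to a small, hashable feature tuple."""
    return (
        state["supply"] // 10,               # bucket supply in steps of 10
        state["bases"],                      # expansion count
        min(state["army_value"] // 500, 8),  # bucketed, capped army value
        state["upgrades_done"],
    )

# Two slightly different states land in the same bucket:
s1 = {"supply": 74, "bases": 2, "army_value": 1250, "upgrades_done": 1}
s2 = {"supply": 78, "bases": 2, "army_value": 1400, "upgrades_done": 1}
assert abstract_state(s1) == abstract_state(s2)

value_table = {}  # e.g. estimated win probability per abstract state
value_table[abstract_state(s1)] = 0.55
```

Choosing which features go into the tuple is exactly the "choosing the inputs" work mentioned above; too coarse and the agent can't distinguish situations that matter, too fine and the table explodes again.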

By the way: I don't really know anything about what I'm saying. I just played with machine learning, never studied it seriously.

Edit: if anyone is interested, here's a great free book about it: http://neuralnetworksanddeeplearning.com . You gotta love mathematics, though.
What qxc said.
Vasoline73
Profile Blog Joined February 2008
United States7799 Posts
Last Edited: 2016-03-11 13:27:17
March 11 2016 13:14 GMT
#152
People are severely underestimating the difficulty of achieving an effective AI for BW. As someone pointed out, it's not going to have access to the game state beyond seeing an 800x600 2D image in real time. It may see dots on the minimap, but it's not going to know what they are or how to properly react without moving its "screen" there. Obviously it will have speed but...

...stuff like: how does the AI react to a map (building placement, etc.) it's never played on before? What if there's no immediate natural and it typically fast expands? When it sends its scout out onto the map, goes down the ramp and sees no natural... does it start looking for one? Scout for the enemy first? Does it change its build order to a one-base play when it may simply not have scouted an expansion spot yet? The clock is ticking and supply is going up. How does it play on Monty Hall or some crazy shit for the first time?

Etc. etc. That stuff will make an "all-around" BW AI that beats top humans the way chess engines do, or as AlphaGo is very likely to continue doing, very difficult.

Now if they make the AI just a one-base BBS or 4-pool + drones killing machine on standard maps it recognizes, then I see success being plausible quickly... probably even now. But I don't think Google is trying to win that way; guessing they have loftier ideas for their AI and what they want it to symbolize/accomplish.

All that said, it's more than possible and it would be cool to see it happen someday sooner than expected.
reminisce12
Profile Joined March 2012
Australia318 Posts
March 11 2016 13:35 GMT
#153
perfect macro and micro aint gonna matter when flash siege tanks rain fire down on ya
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 11 2016 13:47 GMT
#154
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 Deepmind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render at. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong ( think Korean multiple drops ) multitasking, but it is a new area that needs to be explored - the connections between the networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet ; the games tried so far are joystick or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges ( the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc ).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn the game from introduces units pretty much one at a time. It would objectively be much, much harder to start learning from full games on ladder, without an instruction manual - which is what the Deepmind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.
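To make the missing-score problem in point 1 concrete: here is the standard tabular Q-learning backup applied to a toy three-step "game" in which every intermediate reward is 0 and only the terminal win pays +1. After one pass over the episode, only the state adjacent to the win has learned anything; the signal has to crawl backwards over many episodes. All states, actions, and numbers are illustrative:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """Standard Q-learning backup; Q is a dict keyed by (state, action).
    Q(s,a) <- Q(s,a) + alpha * ( r + gamma * max_a' Q(s',a') - Q(s,a) )"""
    q_sa = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = q_sa + alpha * (r + gamma * best_next - q_sa)

# Sparse-reward episode: every intermediate r is 0, only the last is +1.
Q = {}
trajectory = [(0, "b", 0.0, 1), (1, "b", 0.0, 2), (2, "b", 1.0, 3)]
for s, a, r, s_next in trajectory:
    q_update(Q, s, a, r, s_next, actions=("a", "b"))

# After one pass, only the state next to the win has a nonzero value:
print(Q[(2, "b")])  # 0.1
print(Q[(0, "b")])  # 0.0
```

With a dense score (nonzero `r` on every step, as in Atari), every single update carries information; with win/loss only, early-game states stay uninformative for a long time.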

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple of types, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2d games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pacman. It is hence entirely possible that a Starcraft Deepmind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
BeStFAN
Profile Blog Joined April 2015
483 Posts
March 12 2016 00:06 GMT
#155
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft*, unlike in the arcade Atari 2600 games that have been mostly arcade games, with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now in the Atari 2600 Deepmind simulations rely on one or two actions by frame, which imposes that your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong ( think Korean multiple drops ) multitasking, but it is a new area that needs to be explored - the connections between networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled yet by RL ; they are joystick or keyboard-based, ergo with binary 'push or don't push' states, but no mouse game has been tackled by a policy network as far as I know. This brings its own sets of challenges ( the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, position the cursor close to a Nexus or a pylon, etc ).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why in the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which what the Deepmind approach is.

7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would be an incredible achievement already to have a Starcraft deep reinforcement learning AI that can teach itself to play a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2d games such as Atari, 'mechanical' games like Pong or Breakout get to much higher skill levels than games with planning required such as Pacman. It is hence entirely possible that Starcraft Deepmind would play mechanically correctly, but overall pretty poorly, as one can only speculate. If you add up all the objection points above, you can get a feel for why there is quite a long way to go.

Happy to provide reference articles list if required.


could anyone answer this: what is the significance of the AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?
❤ BeSt... ༼ つ ◕_◕༽つ #YEAROFKOMA #YEAROFKOMA #YEAROFKOMA ༼ つ ◕_◕༽つ
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 12 2016 12:55 GMT
#156
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.


I don't think your first hypothesis is necessary, though: the AI could read the data in the replay files and judge plays accordingly (in the training phase only).

Also, there is a natural language for describing the moves: the one people use to describe AIs in BW (stuff like GTAI).
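If the training phase really can consume replays, the supervised setup would look roughly like AlphaGo's policy-network pretraining: turn each replay into (observation, action) pairs and fit a classifier on them. A sketch - the replay structure and all field names below are hypothetical stand-ins, not any real replay parser:

```python
# Sketch: turning replays into supervised (state, action) training pairs,
# analogous to how AlphaGo's policy network was pretrained on human games.
# The frame/field layout here is a hypothetical stand-in for a real parser.

def replay_to_pairs(replay):
    """Yield (observation, action) pairs, one per recorded player command."""
    pairs = []
    for frame in replay:                      # replay = iterable of frames
        obs = frame["observation"]            # whatever features the parser exposes
        for action in frame["player_actions"]:
            pairs.append((obs, action))
    return pairs

# A toy two-frame "replay":
replay = [
    {"observation": {"minerals": 50}, "player_actions": ["build_pylon"]},
    {"observation": {"minerals": 120}, "player_actions": ["build_gateway", "scout"]},
]
dataset = replay_to_pairs(replay)
print(len(dataset))  # 3
```

The resulting `dataset` is exactly what a next-move-prediction network would train on; the hard part, as discussed above, is that BW replays record commands rather than a complete board state.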
What qxc said.
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-12 13:08:32
March 12 2016 13:08 GMT
#157
Hard? 10 years? Are you kidding?

Just put INnoVation in a box and call it a day
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
Makro
Profile Joined March 2011
France16890 Posts
March 12 2016 13:29 GMT
#158
On March 12 2016 22:08 Superbanana wrote:
Hard? 10 years? Are you kidding?

Just put INnoVation in a box and call it a day

haha
Matthew 5:10 "Blessed are those who are persecuted because of shitposting, for theirs is the kingdom of heaven".
TL+ Member
OtherWorld
Profile Blog Joined October 2013
France17333 Posts
March 12 2016 13:30 GMT
#159
On March 12 2016 22:08 Superbanana wrote:
Hard? 10 years? Are you kidding?

Just put INnoVation in a box and call it a day

Didn't know choking was an integral part of being an AI
Used Sigs - New Sigs - Cheap Sigs - Buy the Best Cheap Sig near You at www.cheapsigforsale.com
thezanursic
Profile Blog Joined July 2011
5478 Posts
March 12 2016 14:11 GMT
#160
Is this a joke article, or is this legit?
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
thezanursic
Profile Blog Joined July 2011
5478 Posts
March 12 2016 14:13 GMT
#161
On March 10 2016 23:36 Pandemona wrote:
Yea, i think AI would struggle in an RTS game. Yet i am still open to be surprised. Imagine God losing a bw series to an AI !!!

I think a lot of programming would be required to make it work, but it is definitely possible.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 14:18 GMT
#162
On March 12 2016 09:06 BeStFAN wrote:
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 Deepmind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render at. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong ( think Korean multiple drops ) multitasking, but it is a new area that needs to be explored - the connections between the networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet ; the games tried so far are joystick or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges ( the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc ).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn the game from introduces units pretty much one at a time. It would objectively be much, much harder to start learning from full games on ladder, without an instruction manual - which is what the Deepmind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple of types, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2d games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pacman. It is hence entirely possible that a Starcraft Deepmind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.


could anyone answer this: what is the significance of the AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The advancements from AlphaGo are mainly relevant to point 6. Combinatorial explosion is something that you have to deal with in Go as well.
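For the curious: the mechanism AlphaGo uses against combinatorial explosion is a policy-prior-guided tree search, where a selection rule (PUCT) biases exploration towards moves the policy network rates as promising, so low-prior branches are rarely expanded at all. A minimal sketch of the selection step, with invented priors and values:

```python
import math

# PUCT selection, the rule AlphaGo-style search uses to tame branching:
# exploit the estimated value q, explore in proportion to the policy prior.
# All priors, values, and visit counts below are made up for illustration.

def puct_score(q, prior, parent_visits, visits, c=1.0):
    """Higher is better; untried moves rely on their prior to get explored."""
    return q + c * prior * math.sqrt(parent_visits) / (1 + visits)

# Three candidate moves: the prior focuses search on the first two, so the
# third (prior 0.05) is rarely explored even though it is nearly untried.
moves = {
    "a": {"q": 0.5, "prior": 0.60, "visits": 10},
    "b": {"q": 0.4, "prior": 0.35, "visits": 5},
    "c": {"q": 0.0, "prior": 0.05, "visits": 0},
}
parent = sum(m["visits"] for m in moves.values())
best = max(moves, key=lambda k: puct_score(
    moves[k]["q"], moves[k]["prior"], parent, moves[k]["visits"]))
print(best)  # "a" - the high-prior, high-value move gets the next visit
```

This is why a good prior matters: without it, a branching factor like Go's (or worse, StarCraft's) forces the search to spread its visits uselessly thin.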
If you cannot win with 100 apm, win with 100 cpm.
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 14:42 GMT
#163
https://xkcd.com/1002/

This is probably relevant.
shid0x
Profile Joined July 2012
Korea (South)5014 Posts
Last Edited: 2016-03-12 15:20:04
March 12 2016 15:16 GMT
#164
Just saying that because he wants to hype the event.
I highly doubt anyone could be cocky enough to even think about beating an AI made by Google, unless you take some brain-enhancement supplement or have some kind of brain chip. (By the way, in case you were wondering, we are already able to read other beings' thoughts with brain implants.)

Google is the biggest and most successful transhumanist firm; their AI would potentially even be able to "read" Flash's mind.

He's gonna get his ass handed to him in a not-so-pretty fashion.

As someone who follows transhumanism very closely, I can't help laughing at how much of an idiot he is (but that's because he probably never really looked into Google's projects, or he would shit his pants)
RIP MKP
75
Profile Joined December 2012
Germany4057 Posts
March 12 2016 15:58 GMT
#165
no way when there is no apm cap.

another question: can AIs beat top level poker players?
yo twitch, as long as I can watch 480p lagfree I'm happy
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 15:59 GMT
#166
On March 13 2016 00:58 75 wrote:
no way when there is no apm cap.

another question: can AIs beat top level poker players?


Is this asking: 'can an AI do an estimated bluff'?
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 16:19:07
March 12 2016 16:11 GMT
#167
Flash thinks he would win? Well, so did Lee Sedol, who even went as far as to say he would win 4-1 or 5-0, and who now trails 0-3, seemingly unable to win a single game.

If Google actually proceeds with a serious project to make an AI that can beat Flash, he won't have a chance. The only possibility is if they lower its effective APM to realistic high-level human standards. Then maybe there's a way to win. Although in hindsight I suppose that's exactly what they would do if they were to challenge him, since everyone knows it's pointless if the AI can play with thousands of APM spent on useful actions. They would want to test the intelligence, not the brute force. It would also be important to make it unable to do more than one thing at the same time, since humans can't do that.
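Such a cap is easy to specify precisely: if the agent may issue at most one action every k rendered frames, its APM follows directly from the frame rate. A quick sketch - the numbers are illustrative, not from any announced project:

```python
# APM implied by an agent that emits `actions_per_decision` actions every
# `frameskip` rendered frames at a given frame rate. Numbers illustrative.

def implied_apm(fps, frameskip, actions_per_decision=1):
    decisions_per_second = fps / frameskip
    return decisions_per_second * actions_per_decision * 60

print(implied_apm(fps=60, frameskip=30))  # 120.0 - one action per half-second
print(implied_apm(fps=24, frameskip=12))  # 120.0 - same cap at a lower frame rate
```

Enforcing the cap is then just a matter of choosing `frameskip` so the result matches whatever human-realistic APM figure the organisers agree on.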
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:36 GMT
#168
On March 12 2016 21:55 rockslave wrote:
1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.


I don't think your first hypothesis is true, the AI would be able to read the data in the replay files and judge plays accordingly (only in the training phase).

Also, there is a natural language to describe the moves: the one people use to describe AIs in BW (stuff like GTAI).


This is the approach taken so far by the Deepmind team when they came up with their general algorithm to play 2D Atari games. In particular, the same algorithm was used to play 49 different games simply from the pixels on the screen and the score as inputs. This precludes looking at any game-specific files. Learning was done purely from the agent's own play.

Source : www.nature.com

' We tested this agent on the challenging domain of classic Atari 2600 games12. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks. '
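The "pixels and score only" interface the quote describes is concrete: in the Nature setup, each observation is a small stack of preprocessed screen frames (grayscale, 84x84, the 4 most recent frames). A sketch of that preprocessing with a crude nearest-neighbour resize in place of the paper's exact downsampling; the raw frame here is random stand-in data:

```python
import numpy as np

# Sketch of Nature-DQN-style input preprocessing: RGB screen -> grayscale,
# downsampled to 84x84, with the 4 most recent frames stacked as the
# agent's observation. The resize here is a crude nearest-neighbour stand-in.

def preprocess(frame_rgb):
    """RGB screen -> grayscale 84x84 array."""
    gray = frame_rgb.mean(axis=2)                 # luminance approximation
    h, w = gray.shape
    ys = np.linspace(0, h - 1, 84).astype(int)    # naive nearest-neighbour resize
    xs = np.linspace(0, w - 1, 84).astype(int)
    return gray[np.ix_(ys, xs)]

raw = np.random.randint(0, 256, size=(210, 160, 3))  # Atari screen dimensions
stack = np.stack([preprocess(raw)] * 4)              # 4 most recent frames
print(stack.shape)  # (4, 84, 84)
```

That tiny tensor, plus the score-derived reward, is everything the agent sees - which is why a multi-screen, score-free game like Broodwar is such a different proposition.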
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:41 GMT
#169
On March 12 2016 09:06 BeStFAN wrote:
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft* - unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 Deepmind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render at. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong ( think Korean multiple drops ) multitasking, but it is a new area that needs to be explored - the connections between the networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled by RL yet ; the games tried so far are joystick or keyboard-based, ergo with binary 'push or don't push' states, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges ( the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc ).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn the game from introduces units pretty much one at a time. It would objectively be much, much harder to start learning from full games on ladder, without an instruction manual - which is what the Deepmind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple of types, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2d games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pacman. It is hence entirely possible that a Starcraft Deepmind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide a list of reference articles if required.


could anyone answer this: what is the significance of the AI's ability to master the game of Go in relation to its ability to play BW at a high enough level?

in other words, before and after the developments required to beat Sedol, what tools has AI gained in relation to its ability to play SC?


The Lee Sedol match is showcasing, in a Go context, an AI technique of learning to play a game through self-play and the data of the board ( or screen pixels ) only. This has already been applied to quasi-8-bit games on the Atari 2600 ; see the relevant Nature article : www.nature.com

Much more research is required to generalize that algorithm enough to make it play Broodwar efficiently ( Jeff Dean from Google is already singling it out as a next goal ). My guess would be 3 to 10 years. My post earlier was about the specific sticking points that will need to be improved in the current algorithm before we get to that level. I believe we ultimately will.


"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
evilfatsh1t
Profile Joined October 2010
Australia8632 Posts
March 13 2016 04:22 GMT
#170
is anyone really debating whether ai will be able to do something better than a human? i dont think anyone is naive enough to think humans will be able to defeat ai in something in the future. what flash, boxer are probably saying is if alphago could play starcraft NOW, the humans would win. of course if you gave google as much time as they wanted, the ai would win. its literally only a matter of time given the speed at which technology is advancing
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 07:33 GMT
#171
On March 13 2016 13:22 evilfatsh1t wrote:
is anyone really debating whether ai will be able to do something better than a human? i dont think anyone is naive enough to think humans will be able to defeat ai in something in the future. what flash, boxer are probably saying is if alphago could play starcraft NOW, the humans would win. of course if you gave google as much time as they wanted, the ai would win. its literally only a matter of time given the speed at which technology is advancing

I think people are discussing how hard it'll be. Don't think anyone is seriously arguing that it is impossible if you give skilled people unlimited time.

People also discuss exactly what restriction to set on the computer, if any.

And some discuss if these announcements are just publicity stunts, riding on the alphaGo wave.
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
Last Edited: 2016-03-13 10:06:22
March 13 2016 09:56 GMT
#172
I have never seen the official Fish bot say anything before, I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 13 2016 10:25 GMT
#173
On March 13 2016 18:56 WinterViewbot420 wrote:
I have never seen the official Fish bot say anything before, I didn't even know it could talk.

This is something about DeepMind, I don't know what some of these words mean!! HELP

[image loading]

edit: Clan members are telling me this is the first time the bot has talked in like ten years? Wtf lol

It's gained consciousness!!! :o :o
RUN FOR THE HILLS!
Hryul
Profile Blog Joined March 2011
Austria2609 Posts
Last Edited: 2016-03-13 11:02:44
March 13 2016 11:02 GMT
#174
On March 11 2016 22:47 MyLovelyLurker wrote:
I've been watching Broodwar for 15+ years, and programming reinforcement learning engines for a few. Here are a few thoughts on why the specific DeepMind approach is going to be very hard for SC, although it might well happen in around 10 years time :

1. We are assuming the AI teaches itself to play only from a realtime view of the pixels on the screen, and knows nothing about any score at all - *there is no score in Starcraft*, unlike the Atari 2600 titles tackled so far, which are mostly arcade games with a clear numerical objective ( the score ) to be maximized by the playing agent. The act of playing thereby becomes a calculus problem ( to first order, set the gradients of the score function to zero ). Not impossible, but harder in Starcraft.

2. Starcraft II is an imperfect information game, as opposed to chess or go where the board contains the whole information available to both players. Whilst it is possible to do reinforcement learning in that setting, it is a relatively new field and adds to the difficulty - articles are being published now on the subject.

3. The 60 or 120 APM barrier will not be broken easily. Right now the Atari 2600 DeepMind simulations rely on one or two actions per frame, which means your APM is limited by the FPS you render. Even with two policy networks - one for the keyboard and one for the mouse - you are headbutting against 120 APM pretty much. It is not impossible to think about operating several policy networks in parallel in order to enable strong ( think Korean multiple drops ) multitasking, but it is a new area that needs to be explored - the connections between networks and their interaction would need to be thought through carefully. Some cutting-edge research with asynchronous networks goes in a similar direction.

4. Point-and-click games have not been tackled yet by RL ; the games played so far are joystick- or keyboard-based, ergo with binary 'push or don't push' inputs, and no mouse game has been tackled by a policy network as far as I know. This brings its own set of challenges ( the AI will have to figure out by itself, for instance, how to move the mouse in optimal ways, which includes making straight lines, positioning the cursor close to a Nexus or a pylon, etc ).

5. Starcraft is also 'multi-screen' - it requires frequently changing views with your F keys ( move to different bases and engagement battles ) in order to correctly represent the full state of the game. So far, Atari 2600 games have been mono-screen only. Again, it is not impossible to imagine this will be overcome in the future, it is just harder to do right now.

6. Combinatorial explosion in the number of unit compositions is also hard to tackle - every time you add a potential unit to the mix, the possibilities for army composition multiply, which is why the campaign mode you learn to play from introduces units pretty much one at a time. It would objectively be much, much harder to start playing full games from laddering and without an instruction manual, which is what the DeepMind approach amounts to.

7. The meta in SC rotates on a regular basis - it is 'non stationary', which adds to the list of problems encountered by a machine that would learn by playing on ladder, as some of the strats and playstyles learned earlier could well be obsolete - and hard-countered - by the time they are assimilated. This happens with human players too ; they have to make a conscious effort to get out of a slump, learn more new information, and forget about the old. Some work on policy distillation or optimal brain damage in neural networks goes, very tentatively, in that direction. Again, this is hard.

For all those reasons, it would already be an incredible achievement to have a Starcraft deep reinforcement learning AI that can teach itself to beat a very easy computer AI in a setting with only workers, and maybe a unit list restricted to just a couple, like zealots and dragoons.

If you look at the performance of reinforcement learning in 2d games such as Atari, 'mechanical' games like Pong or Breakout reach much higher skill levels than games that require planning, such as Pacman. It is hence entirely possible that a Starcraft DeepMind would play mechanically correctly but overall pretty poorly - one can only speculate. If you add up all the objection points above, you get a feel for why there is quite a long way to go.

Happy to provide reference articles list if required.
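Point 1 of the quoted post - learning purely from a reward signal, with no other knowledge of the game - can be illustrated with a toy example. Everything below is an invented stand-in (a 5-state corridor with a single rewarded end, solved by tabular Q-learning), far simpler than anything DeepMind actually uses, but it shows the same reward-maximization loop:

```python
import random

# Toy stand-in for the missing "score": a 5-state corridor where the agent
# is rewarded only on reaching the far end. This is tabular Q-learning on an
# invented environment, purely for illustration.
N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)                      # step right / step left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                   # 500 training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # reward only at the far end
        # standard Q-learning update toward r + gamma * max_a' Q(s', a')
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]: always step toward the rewarded state
```

With no reward signal at all (the Starcraft situation the post describes), every Q-value stays at zero and no policy ever emerges, which is exactly why a surrogate objective has to be engineered first.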

I also think the learning algorithm might need some thought. So far the computer has played itself and learned from that. But certain tactics are more effective against someone with a delayed reaction time.
For example: a human player might not be able to beat an AI-microed rush/all-in, but the AI might be able to hold it itself, thus discarding this line of play.
Countdown to victory: 1 200!
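A quick back-of-envelope check on the APM/frame-rate coupling raised in point 3 of the long post quoted above (the frame-skip figures below are my own assumptions, not DeepMind's published numbers): if an agent may act at most once every k rendered frames at f FPS, its ceiling is APM = 60 * f / k.

```python
def apm_ceiling(fps: float, frames_per_action: int) -> float:
    """Upper bound on actions per minute for an agent that may act
    once every `frames_per_action` rendered frames."""
    return 60.0 * fps / frames_per_action

# Acting once per 4-frame skip at 60 FPS (a commonly cited Atari-style
# setup, assumed here) gives a far higher ceiling than 120 APM:
print(apm_ceiling(60, 4))   # 900.0 actions per minute
print(apm_ceiling(24, 12))  # 120.0 - the "120 APM barrier" implies a much
                            # slower decision rate than the Atari setup
```

So the frame rate itself is not what caps the agent near human APM; any such cap would have to be imposed deliberately.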
evilfatsh1t
Profile Joined October 2010
Australia8632 Posts
March 13 2016 11:42 GMT
#175
it says the ai didn't lose; alphago lost
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 12:28 GMT
#176
On March 13 2016 01:11 DuckloadBlackra wrote:
Flash thinks he would win? Well so did Lee Sedol who even went as far as to say he would win 4-1 or 5-0 and now trails 0-3, seemingly unable to win a single game.


Nice sophism
WriterMaru
boxerfred
Profile Blog Joined December 2012
Germany8360 Posts
March 13 2016 13:03 GMT
#177
The AI is able to simultaneously micro at 2, 3, 4, n places on the map. No way a human will stop that.
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:13:05
March 13 2016 15:08 GMT
#178
It's funny to me that people think the human could win. Even with capped APM the AI would use its APM in the most efficient way (no spamming); it could probably win with something like 90-100 APM easily.
It could probably win with any type of game as well: a worker rush, 3 marines-1 medic-1 dropship, or late game, where microing a big army the AI would crush a human with almost no losses while keeping perfect macro at the same time (going back to its base for a split second at the perfect time, every time), plus perfect mini-map awareness and reaction time, able to tell which units it sees from their speed on the mini-map and determine the best response without delay. It would also spend its minerals/gas in the most efficient way.
All this with perfectly timed and positioned scouting, while extrapolating the opponent's build from their unit composition and timings.
IMHO the AI would utterly crush any human, even if it told the human ahead of time what it was going to do.

"Now I will do a mid-game 2-or-3-base attack."
"This time I will attempt a maxed-out army build while keeping you pinned in your base with continuous harass. GLHF"

I would like to see the AI learn to BM, that would probably be the only real challenge
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
TelecoM
Profile Blog Joined January 2010
United States10668 Posts
Last Edited: 2016-03-13 16:18:01
March 13 2016 16:16 GMT
#179
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol
AKA: TelecoM[WHITE] Protoss fighting
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 13 2016 23:55 GMT
#180
On March 14 2016 01:16 GGzerG wrote:
God speaks again.

Bots on fish can be made to speak by the Blizz / master of the channel, just sayin lol

We only have one master and he was not online.
Richasliodo
Profile Joined January 2016
18 Posts
March 14 2016 01:39 GMT
#181
Can you imagine the pros streaming themselves playing the AI all the time? It'd be so exciting to watch them get completely outplayed like they're versing some noobs
Swoopae
Profile Joined January 2015
Australia339 Posts
March 14 2016 14:37 GMT
#182
I'd love to see an AI vs pro series, but in SC1/BW the mechanical demands would heavily hinder a human against an AI

Even though I still believe SC1 was the superior game technically, SC2 might be a better candidate due to the (imo anyway) slightly lower APM requirement. That would give the human less of a technical disadvantage from limited APM, and make it more of a challenge for the AI programmers to get the strategy side close enough to beat the best humans (obviously the AI's macro would be better, and its micro perfect, but spellcasting and strategy would be an issue for any AI)

I also think the master strategist/metagame/micro types, say Maru or sOs, would be a better pick for the human side than technical grinders like herO or Innovation, who make fewer mistakes and have better technical play but less strategic genius imo

I'd love to see Maru and sOs and a zerg like maybe Solar play mirrored matchups against the AI of all 3 races in a best of x series, would be a fantastic PPV-type event if it happened

Flash and Jaedong in sc1 would be great too, but I feel like the humans would struggle more due to the 'lol enjoy only controlling one small group of units at a time' micro, plus the extra time needed to manage your economy in sc1
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 14 2016 17:03 GMT
#183
The best candidate would be Polt, Stephano and idk for protoss.
WriterMaru
Grettin
Profile Joined April 2010
42381 Posts
March 14 2016 21:03 GMT
#184
On March 15 2016 02:03 Poopi wrote:
The best candidate would be Polt, Stephano and idk for protoss.


In Broodwar? Okay.
"If I had force-fields in Brood War, I'd never lose." -Bisu
castleeMg
Profile Blog Joined January 2013
Canada759 Posts
Last Edited: 2016-03-14 23:01:39
March 14 2016 23:00 GMT
#185
i dont think there is any way an ai could defeat a human in broodwar or sc2. there are too many factors
AKA: castle[eMg]@USEast/ iCCup
Mier19891
Profile Joined May 2015
United States75 Posts
March 14 2016 23:58 GMT
#186
They have already commented on how it would have to be set up "realistically": an APM cap, fog of war, attention to areas, a limit on hotkeys, etc. So no absurd medivac-marine vs stalker micro like those AI bots while microing perfectly back home. It'll certainly be a challenge for the developers... P.S. Didn't they say SC1 (either Brood War or vanilla), and NOT SC2?
Moose1
Profile Joined January 2016
3 Posts
March 15 2016 00:43 GMT
#187
but what race would the ai choose?
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 15 2016 00:50 GMT
#188
On March 15 2016 06:03 Grettin wrote:
On March 15 2016 02:03 Poopi wrote:
The best candidate would be Polt, Stephano and idk for protoss.


In Broodwar? Okay.

I was responding to the guy above
WriterMaru
snakeeyez
Profile Joined May 2011
United States1231 Posts
Last Edited: 2016-03-15 03:44:27
March 15 2016 03:25 GMT
#189
A real team of pro researchers like at Google could make a bot that plays a level of Brood War no one has really seen before. A bot with perfect micro has a pretty large advantage over any human being.
Starcraft is a pretty complicated game; even the best current bots have lots of flaws and patterns. It would be mighty tough to beat Flash or Jaedong at Brood War, no doubt about it, especially in long sets where they could exploit weaknesses.
I think the AI will struggle with decision-making and with not falling into predictable patterns. Humans will exploit its patterns, like expand timings and such. It might pick illogical or inefficient builds that humans have weeded out over decades of play.
The bot would need a rock-solid early game against all kinds of rushes, and perfect responses to avoid falling behind. It would need to respond to all the possible builds in the right way, even ones you can't prepare for. It would be a tough challenge for AI, no doubt about it.
The fact that their AI beat a pro at Go, though, makes me think they could do it. Go was a very tough game, and it's an amazing feat to beat a pro player at it. I think they can build a pro-level bot capable of beating any human if they try.
SlayerS_BunkiE
Profile Blog Joined May 2009
Canada1707 Posts
March 15 2016 10:17 GMT
#190
On March 15 2016 09:43 Moose1 wrote:
but what race would the ai choose?

I think this is a very good topic to discuss. Which race would benefit most if you had "unlimited apm"?
iloveby.SlayerS_BunkiE[Shield]
WinterViewbot420
Profile Blog Joined December 2015
345 Posts
March 15 2016 10:21 GMT
#191
On March 15 2016 19:17 SlayerS_BunkiE wrote:
On March 15 2016 09:43 Moose1 wrote:
but what race would the ai choose?

I think this is a very good topic to discuss. Which race would benefit most if you had "unlimited apm"?

I'd say Protoss because Reavers are fucking scary
snakeeyez
Profile Joined May 2011
United States1231 Posts
Last Edited: 2016-03-15 22:52:53
March 15 2016 22:52 GMT
#192
On March 15 2016 19:17 SlayerS_BunkiE wrote:
On March 15 2016 09:43 Moose1 wrote:
but what race would the ai choose?

I think this is a very good topic to discuss. Which race would benefit most if you had "unlimited apm"?


Dragoons are amazing units with perfect micro. There are existing bots like Overmind that played Zerg, and it's pretty much certain that stacked mutalisks are crazy strong. Properly microed carriers might be OP too; some of the best current bots actually just go carriers.
Can their AI beat Flash with all 3 races? It would be even more impressive if it could learn to win with all 3 races and learn all the matchups
MrMischelito
Profile Joined February 2014
347 Posts
March 17 2016 08:43 GMT
#193
I wouldn't be surprised if computers were better at video games. after all it's all just about numbers... 10110...
I'll continue following this discussion if/when computers start beating humans at basketball
Slayer91
Profile Joined February 2006
Ireland23335 Posts
March 17 2016 11:51 GMT
#194
On March 17 2016 17:43 MrMischelito wrote:
I wouldn't be surprised if computers were better at video games. after all it's all just about numbers... 10110...
I'll continue following this discussion if/when computers start beating humans at basketball


most ghetto post of this thread
snakeeyez
Profile Joined May 2011
United States1231 Posts
March 18 2016 00:26 GMT
#195
On March 17 2016 20:51 Slayer91 wrote:
On March 17 2016 17:43 MrMischelito wrote:
I wouldn't be surprised if computers were better at video games. after all it's all just about numbers... 10110...
I'll continue following this discussion if/when computers start beating humans at basketball


most ghetto post of this thread


I think some things go over the head of some people.
Improvement
Profile Joined March 2003
203 Posts
March 18 2016 01:44 GMT
#196
On March 11 2016 04:06 Loccstana wrote:
This would be of interest to people interested in AI for Starcraft:

https://webdocs.cs.ualberta.ca/~cdavid/pdf/starcraft_survey.pdf

A conservative lower bound on the state space of Brood War is 10^1685. This is many orders of magnitude above the state space of Go, which is 10^170. What's more, the branching factor is 10^50 to 10^200, compared to <360 for Go.
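To put the quoted figures side by side (the exponents are taken at face value from the linked survey, not computed here), the gap is easiest to read in log space, since the raw values overflow ordinary floats:

```python
import math

# Complexity figures quoted above (lower-bound estimates from the survey).
go_states_exp = 170     # Go state space ~ 10^170
bw_states_exp = 1685    # Brood War state-space lower bound ~ 10^1685

gap = bw_states_exp - go_states_exp
print(f"Brood War's state-space bound exceeds Go's by a factor of ~10^{gap}")

# Branching factor: Go's < 360 legal moves is roughly 10^2.6 per turn,
# against an estimated 10^50 to 10^200 for Brood War.
go_branch_exp = math.log10(360)
print(f"Go branching ~10^{go_branch_exp:.1f} vs Brood War >= 10^50 per decision")
```

So even the most conservative Brood War estimates sit over a thousand orders of magnitude beyond Go on both axes.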

Holy shit, thanks for that info. You seem to be knowledgeable in that field. I've wondered about the complexity of that game my entire life, tbh. This blows my mind.
Hmm
The contents of this webpage are copyright © 2025 TLnet. All Rights Reserved.