BoxeR: "AlphaGo won't beat humans in StarCraft"

568 Comments
Waxangel
Profile Blog Joined September 2002
United States33281 Posts
Last Edited: 2016-03-12 21:12:51
March 12 2016 17:32 GMT
#1
Via: Yonhap News

As Google/DeepMind's artificial intelligence AlphaGo continues to roll against top Baduk pro Lee Se-dol, StarCraft has emerged as a potential future target for the AI.

Consensus greatest-of-all-time Brood War pro Flash expressed cautious confidence in a short interview on Thursday. Another Terran legend, Boxer, mirrored his sentiments in an interview with Yonhap News. Here are Boxer's quotes from the article:

  • "I don't know how smart [AlphaGo] is, but even if it can win in Baduk(Go), it can't beat humans in StarCraft."

  • "It would be a mistake to think artificial intelligence could beat humans in StarCraft. StarCraft is a game where situational strategy is far more important than in Baduk, so it's an area where AI cannot catch up."

  • "There are many variables in StarCraft, such as scouting, obviously, as well as maps, racial balance, micro, mind-games, etc."

  • "Even if countless data is inputted and studied by the AI so it has some degree of instinct, it won't reach pro level."

  • "If such an offer comes in the future, I'll gladly accept."

  • "Even if it has studied all of the many strategies I've used, I'll go at it with an unstoppable strategy I've prepared."

  • "Competing in a few StarCraft tournaments nowadays, I felt that this is where my roots are. It's exciting just thinking about facing a machine as mankind's representative."



Administrator | Hey HP can you redo everything youve ever done because i have a small complaint?
Axieoqu
Profile Joined October 2005
Finland204 Posts
March 12 2016 17:38 GMT
#2
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.
greenelve
Profile Joined April 2011
Germany1392 Posts
Last Edited: 2016-03-12 17:46:07
March 12 2016 17:45 GMT
#3
I'm not sure about this. The AI has the disadvantages of the points stated, sure, why not, but an AI can also have thousands of APM and perfect muta micro, harassing like nothing a human has ever faced before.

But it should be much harder for an AI to "understand" StarCraft at a pro level than chess or Go, because of their static nature, whereas SC has many variables and unknown factors to work with.
z0r.de for your daily madness /// Who knows what evil lurks in the heart of men? The Shadow knows!
Lexender
Profile Joined September 2013
Mexico2625 Posts
March 12 2016 17:47 GMT
#4
On March 13 2016 02:45 greenelve wrote:
I'm not sure about this. The AI has the disadvantages of the points stated, sure, why not, but an AI can also have thousands of APM and perfect muta micro, harassing like nothing a human has ever faced before.

But it should be much harder for an AI to "understand" StarCraft at a pro level than chess or Go, because of their static nature, whereas SC has many variables and unknown factors to work with.


They would have to implement APM constraints in the AI of course, otherwise the whole experiment would be useless.
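For illustration of what such a constraint could look like mechanically, here is a toy token-bucket limiter that refills at the allowed actions-per-minute rate. This is only a sketch; the class name and all numbers are invented, and nothing like this has been specified by DeepMind.

```python
# Toy sketch of an APM constraint (invented, not any real bot's mechanism):
# a token bucket that refills at the allowed actions-per-minute rate and
# holds at most one token, so the agent cannot burst above the cap.
class ApmLimiter:
    def __init__(self, apm: float):
        self.rate = apm / 60.0   # tokens (allowed actions) per second
        self.tokens = 0.0
        self.last_time = 0.0

    def try_act(self, now: float) -> bool:
        """Return True if the agent may act at time `now` (in seconds)."""
        self.tokens = min(1.0, self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = ApmLimiter(apm=300)    # 300 APM = at most 5 actions per second
print(limiter.try_act(0.2))      # True: one token accrued after 0.2 s
print(limiter.try_act(0.25))     # False: only 0.05 s to refill since last act
```

Capping the bucket at one token means the limit binds continuously rather than letting the bot save up actions for a burst of superhuman micro.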
parazice
Profile Joined March 2011
Thailand5517 Posts
March 12 2016 17:48 GMT
#5
Automaton 2000 Micro + Alpha Ai
R.I.P
Musicus
Profile Joined August 2011
Germany23576 Posts
March 12 2016 17:50 GMT
#6
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.
Maru and Serral are probably top 5.
Heyoka
Profile Blog Joined March 2008
Katowice25012 Posts
March 12 2016 17:52 GMT
#7
I mean your duty as a SC player is to thump your chest and say "no machine will beat me!" right? It's not like anyone is going to have a reasonable discussion about this in interviews.
@RealHeyoka | ESL / DreamHack StarCraft Lead
ejozl
Profile Joined October 2010
Denmark3341 Posts
March 12 2016 17:53 GMT
#8
I think it's pretty cool we get these statements, might make it more possible for the event to happen, since they seem so confident.
SC2 Archon needs "Terrible, terrible damage" as one of it's quotes.
Grettin
Profile Joined April 2010
42381 Posts
Last Edited: 2016-03-12 17:57:21
March 12 2016 17:53 GMT
#9
On March 13 2016 02:48 parazice wrote:
Automaton 2000 Micro + Alpha Ai
R.I.P


While I think it's relevant to bring this up, I'm not so sure (without looking into it) that it would work as well in Broodwar as it did in Star2.

Someone with more knowledge can clarify.

Also some good points being brought up in this thread.

http://www.teamliquid.net/forum/games/505525-go-alphago-google-vs-lee-sedol-world-champ?page=5
"If I had force-fields in Brood War, I'd never lose." -Bisu
brickrd
Profile Blog Joined March 2014
United States4894 Posts
Last Edited: 2016-03-12 17:56:51
March 12 2016 17:56 GMT
#10
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!
TL+ Member
RewardedFool
Profile Joined July 2015
17 Posts
March 12 2016 17:57 GMT
#11
Boxer is definitely Fan Hui in this scenario, not Lee Sedol.
Garrl
Profile Blog Joined February 2010
Scotland1972 Posts
Last Edited: 2016-03-12 18:09:23
March 12 2016 18:01 GMT
#12
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind's project is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.

GrandSmurf
Profile Joined July 2003
Netherlands462 Posts
March 12 2016 18:06 GMT
#13
It's possible to get wrecked by superior micro.
One time that happened and I just stopped everything, selected the offending SCV, hit Cancel, moved it over to my Barracks, made a Marine, had the Marine shoot it to death, then left the game.
SoleSteeler
Profile Joined April 2003
Canada5414 Posts
March 12 2016 18:07 GMT
#14
A well programmed AI would have an easy time with perfect micro and macro. Decision making would be tougher of course...
Clonester
Profile Joined August 2014
Germany2808 Posts
March 12 2016 18:08 GMT
#15
The entire Go community said the same about AlphaGo, and Lee Sedol also said he would win easily against it. It became a train wreck... for the Go community.

The same will happen with an AlphaStarcraft, for Boxer, Flash, Bisu and the whole community.
Bomber, Attacker, DD, SOMEBODY, NiKo, Nex, Spidii
Nakajin
Profile Blog Joined September 2014
Canada8988 Posts
March 12 2016 18:16 GMT
#16
On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.



Well, it could also be far, far longer than you might think. In the '60s it was pretty clear to a lot of people, including scientists, that 50 years later there would be daily space travel and lunar colonies. Yet here we are in 2016 and we are still far from doing that. The time it takes to solve a science problem is hard to predict.

Writer | http://i.imgur.com/9p6ufcB.jpg
lordsaul
Profile Joined December 2010
13 Posts
Last Edited: 2016-03-12 18:25:11
March 12 2016 18:17 GMT
#17
I think people massively underestimate what perfect mechanics do to the game. It depends on the rules/limitations placed on the AI, but imagine:

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping them for the next shot
* Marines that always maintain their range advantage over roaches
* Tanks that always target the banelings first
* Marines that always split perfectly vs banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack)
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves. Computers won't even need "good" strategy to beat humans, just a large number of difficult-to-handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range).

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real-time pressure. Also, I realise I'm talking about SC2 here, not BW, but a lot would apply. I think it's a real shame that we can't have AI competitions in SC2.
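The "maintain range advantage" item in that list can be boiled down to a tiny decision rule. Here is a 1-D toy sketch; the unit names, range values, and the function itself are illustrative, not actual game numbers or anyone's real bot:

```python
# Toy 1-D kiting rule (illustrative numbers, not real game values):
# a longer-ranged unit attacks when its weapon is ready, backs off while
# reloading if it is inside the enemy's range, and otherwise holds.
MARINE_RANGE = 5.0   # assumed attacker range
ROACH_RANGE = 4.0    # assumed enemy range (shorter, so kiting is possible)

def kite_action(distance: float, weapon_ready: bool) -> str:
    """Pick one kiting step given the current distance to the target."""
    if weapon_ready and distance <= MARINE_RANGE:
        return "attack"      # target in range and weapon off cooldown: fire
    if distance <= ROACH_RANGE:
        return "retreat"     # reloading inside enemy range: open the gap
    return "hold"            # reloading but safe: stand still, save moves

print(kite_action(4.5, True))    # attack
print(kite_action(3.5, False))   # retreat
print(kite_action(4.5, False))   # hold
```

A bot evaluating a rule like this every frame, for every unit, is exactly the "beastly mechanics without strategy" scenario the post describes.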
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-12 18:28:10
March 12 2016 18:18 GMT
#18
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


maybe there will come a day when instead of watching WCS between 2 humans we'll see a WCS between 2 AI teams that build AIs to play the game

On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind's project is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.

https://www.youtube.com/watch?v=ePv0Fs9cGgU



Atari 2600 Space Invaders where bullets disappear and reappear in mid-air because the system can only handle a few independently moving objects at any one time?

It's a pretty bad example; the arcade version would be a much truer test.
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
brickrd
Profile Blog Joined March 2014
United States4894 Posts
Last Edited: 2016-03-12 18:19:38
March 12 2016 18:18 GMT
#19
On March 13 2016 03:16 Nakajin wrote:
On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.



Well, it could also be far, far longer than you might think. In the '60s it was pretty clear to a lot of people, including scientists, that 50 years later there would be daily space travel and lunar colonies. Yet here we are in 2016 and we are still far from doing that. The time it takes to solve a science problem is hard to predict.


lunar colonies have nothing to do with adaptive AI winning at complex competitive games... the fact that an AI beat the go champion earlier than expected has everything to do with it...

just because we don't know for certain that doesn't mean that we shouldn't use relevant reference points to make predictions...
TL+ Member
Brutaxilos
Profile Blog Joined July 2010
United States2623 Posts
March 12 2016 18:19 GMT
#20
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI, provided a team of scientists is given enough time to develop one. It's inevitable.
Jangbi favorite player. Forever~ CJ herO the King of IEM. BOMBERRRRRRRR. Sexy Boy Rogue. soO #1! Oliveira China Represent!
duke91
Profile Joined April 2014
Germany1458 Posts
March 12 2016 18:19 GMT
#21
Why is this posted in the SC2 forum instead of Brood War, when it is obviously about Brood War? It talks about Boxer, who mainly played BW, and Flash, who is back at BW, discussing a BW AI playing BW. At least put the same post in the BW forum as well. I don't see the relevance of this topic to the SC2 forum.
( ͡° ͜ʖ ͡°)STYLE START SBENU( ͡° ͜ʖ ͡°)
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
March 12 2016 18:23 GMT
#22
On March 13 2016 02:52 Heyoka wrote:
I mean your duty as a SC player is to thump your chest and say "no machine will beat me!" right? It's not like anyone is going to have a reasonable discussion about this in interviews.


Should have asked Innovation. As a robot with a human face I feel he'd have a more objective view of the matter.
Moderator
Charoisaur
Profile Joined August 2014
Germany15900 Posts
March 12 2016 18:24 GMT
#23
On March 13 2016 03:17 lordsaul wrote:
I think people massively underestimate what perfect mechanics do to the game. It depends on the rules/limitations placed on the AI, but imagine:

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping them for the next shot
* Marines that always maintain their range advantage over roaches
* Tanks that always target the banelings first
* Marines that always split perfectly vs banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack)
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves. Computers won't even need "good" strategy to beat humans, just a large number of difficult-to-handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range).

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real-time pressure.

People don't underestimate that; they know the AI would have to be limited for it to be a fair challenge.
The point is to show that bots are more intelligent than humans, not that they have better mechanics.
Many of the coolest moments in sc2 happen due to worker harassment
Scarlett`
Profile Joined April 2011
Canada2381 Posts
March 12 2016 18:25 GMT
#24
On March 13 2016 03:17 lordsaul wrote:
I think people massively underestimate what perfect mechanics do to the game. It depends on the rules/limitations placed on the AI, but imagine:

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping them for the next shot
* Marines that always maintain their range advantage over roaches
* Tanks that always target the banelings first
* Marines that always split perfectly vs banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack)
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves. Computers won't even need "good" strategy to beat humans, just a large number of difficult-to-handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range).

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real-time pressure.

wrong game fam
Progamer | 一条咸鱼 (a salted fish)
Nakajin
Profile Blog Joined September 2014
Canada8988 Posts
March 12 2016 18:27 GMT
#25
On March 13 2016 03:18 brickrd wrote:
On March 13 2016 03:16 Nakajin wrote:
On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.



Well, it could also be far, far longer than you might think. In the '60s it was pretty clear to a lot of people, including scientists, that 50 years later there would be daily space travel and lunar colonies. Yet here we are in 2016 and we are still far from doing that. The time it takes to solve a science problem is hard to predict.


lunar colonies have nothing to do with adaptive AI winning at complex competitive games... the fact that an AI beat the go champion earlier than expected has everything to do with it...

just because we don't know for certain that doesn't mean that we shouldn't use relevant reference points to make predictions...



Just an example that science doesn't always progress as fast as we might predict. Maybe in a year DeepMind will beat Flash; the point is, I don't think everyone here is an AI programming specialist, so saying it will happen in six months or in 15 years by comparing it to Go is not really fair. (Of course, if you have inside information on the process of AI development and the challenges that playing StarCraft poses, then it's different.)
Writer | http://i.imgur.com/9p6ufcB.jpg
Musicus
Profile Joined August 2011
Germany23576 Posts
Last Edited: 2016-03-12 18:40:32
March 12 2016 18:27 GMT
#26
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


I find the science extremely interesting and love following DeepMind vs Lee Se-dol. I'm not talking about the people discussing the possibilities here.

I just find the interviews with StarCraft pros pretty boring. They won't say "I think I will lose," none of them has been challenged yet, and we don't know if they ever will be.

On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind's project is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.


I think the number "5-10 years" came from people on the Deepmind team. Can't give you a source though, read it somewhere :/.
Maru and Serral are probably top 5.
AsAr
Profile Joined September 2012
Germany52 Posts
March 12 2016 18:29 GMT
#27
It's just about the build order, when to expand, and scouting the human's early game. If that gets implemented well and the AI is in a reasonable state after 10 or so minutes, I believe every pro will eventually get crushed by constant AI harassment.
Empirimancer
Profile Joined July 2011
Canada1024 Posts
March 12 2016 18:30 GMT
#28
So... Boxer admits that he knows nothing about AI or AlphaGo, but still says it will never exceed human capabilities? When literally a few months ago, some Go players were saying the exact same thing? Talk about overconfidence.

IMO the only interesting question here is how long it will take for AlphaStarcraft to beat pro-gamers consistently starting from now. I would say probably less than 2 years, definitely less than 3 years. Perhaps less than 1 year.

Salteador Neo
Profile Blog Joined August 2009
Andorra5591 Posts
March 12 2016 18:31 GMT
#29
Real question is how many times in a row would Boxer try to bunker rush?
Revolutionist fan
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-12 18:48:27
March 12 2016 18:46 GMT
#30
On March 13 2016 03:30 Empirimancer wrote:
So... Boxer admits that he knows nothing about AI or AlphaGo, but still says it will never exceed human capabilities? When literally a few months ago, some Go players were saying the exact same thing? Talk about overconfidence.

IMO the only interesting question here is how long it will take for AlphaStarcraft to beat pro-gamers consistently starting from now. I would say probably less than 2 years, definitely less than 3 years. Perhaps less than 1 year.



What I really want to see is AI-versus-AI competitions, like those that already exist today, but with giant prize pools.

In FirePro Wrestling there is an entire hardcore wagering community built around AI vs. AI matches.
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
AdrianHealeyy
Profile Joined June 2015
114 Posts
March 12 2016 18:47 GMT
#31
I think we need to differentiate two things here.

It's probably not that hard to come up with an AI that can have perfect micro. The trick is: can we design an AI with 'human' micro that can still consistently beat humans, based on insight, analysis, response, etc.?

That would be the ultimate challenge. I still think they can do it, but it'll take longer.
nohole
Profile Joined June 2012
United States56 Posts
March 12 2016 18:48 GMT
#32
I'm sure the GO players felt the same thing lol
Oshuy
Profile Joined September 2011
Netherlands529 Posts
Last Edited: 2016-03-12 18:53:34
March 12 2016 18:53 GMT
#33
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


The "maybe not in 10 years" sounds hopeful. DeepMind was created in 2010; AlphaGo is 18 months old (as in: the project started 18 months ago).

There is a hurdle in designing what to feed to the neural networks and how to represent the output in a game of StarCraft: the spaces of both the current state and the potential actions are huge. But once those representations are designed, the learning process will either fail or succeed within a few months.

The fact that information is incomplete is almost irrelevant for a neural network's input. Those are the types of problems we designed networks for in the first place. Real time and information retention may make things more difficult, but it could get there fast.
Coooot
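As a rough illustration of the representation hurdle described above, here is one invented way to flatten a partial observation into a fixed-size feature vector that a network could consume. Every field name, scale, and size in this sketch is made up:

```python
# Toy observation encoder (all field names and scales are invented):
# flatten a partial game observation into a fixed-length feature vector,
# padding unscouted enemy slots so incomplete information still yields
# a constant input size for a network.
def encode_observation(obs: dict, max_enemy_units: int = 4) -> list:
    features = [
        obs["minerals"] / 1000.0,        # scale resources toward [0, 1]
        obs["supply_used"] / 200.0,
        obs["supply_cap"] / 200.0,
    ]
    # Enemy units appear only when scouted; truncate/pad to a fixed count.
    enemies = obs.get("visible_enemies", [])[:max_enemy_units]
    for x, y in enemies:
        features += [x / 128.0, y / 128.0]   # normalized map coordinates
    features += [0.0, 0.0] * (max_enemy_units - len(enemies))
    return features

obs = {"minerals": 450, "supply_used": 38, "supply_cap": 54,
       "visible_enemies": [(32, 96)]}
vec = encode_observation(obs)
print(len(vec))   # 3 scalars + 2 coords * 4 slots = 11
```

The real design problem is of course far larger (full maps, hundreds of units, an enormous action space), but the padding trick shows why incomplete information alone does not break a fixed-input network.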
[PkF] Wire
Profile Joined March 2013
France24192 Posts
March 12 2016 18:55 GMT
#34
On March 13 2016 03:47 AdrianHealeyy wrote:
I think we need to differentiate two things here.

It's probably not that hard to come up with an AI that can have perfect micro. The trick is: can we design an AI with 'human' micro that can still consistently beat humans, based on insight, analysis, response, etc.?

That would be the ultimate challenge. I still think they can do it, but it'll take longer.

The problem is how you define human micro (and even human multitasking). A simple limit on APM wouldn't even be enough, I think, since the computer doesn't spam and, more importantly, sees all screens at once.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
Last Edited: 2016-03-12 18:58:37
March 12 2016 18:57 GMT
#35
On March 13 2016 03:53 Oshuy wrote:
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


The "maybe not in 10 years" sounds hopeful. DeepMind was created in 2010; AlphaGo is 18 months old (as in: the project started 18 months ago).

There is a hurdle in designing what to feed to the neural networks and how to represent the output in a game of StarCraft: the spaces of both the current state and the potential actions are huge. But once those representations are designed, the learning process will either fail or succeed within a few months.

The fact that information is incomplete is almost irrelevant for a neural network's input. Those are the types of problems we designed networks for in the first place. Real time and information retention may make things more difficult, but it could get there fast.


It's actually not irrelevant in reinforcement learning, as you need to compute a conditional expectation of the state of play with respect to the information you have, and updating that expectation changes the algorithms quite a lot. This is being tackled almost as we speak; here is a two-week-old article on the subject, from one of the fathers of AlphaGo, with an application to poker: arxiv.org

"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
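The conditional-expectation point can be made concrete with a toy example: a Bayes update over a hidden enemy build, followed by an expected payoff under the resulting belief. All builds, probabilities, and payoffs here are invented for illustration; this is not the method from the linked article:

```python
# Toy belief update under hidden information (all numbers invented):
# the agent cannot see the opponent's build, so it values actions by
# their expectation over a belief it updates from scouting observations.

# Prior over the opponent's hidden build, and how likely each build is
# to produce the observation "early gas seen" when scouted.
prior = {"rush": 0.5, "economic": 0.5}
likelihood = {"rush": 0.8, "economic": 0.2}   # P(early gas | build)

def posterior(observation_seen: bool) -> dict:
    """Bayes update of the belief over the hidden builds."""
    unnorm = {b: prior[b] * (likelihood[b] if observation_seen
                             else 1 - likelihood[b])
              for b in prior}
    z = sum(unnorm.values())
    return {b: p / z for b, p in unnorm.items()}

# Payoff of the candidate action "defend" under each hidden build.
value_of_defending = {"rush": 1.0, "economic": -0.5}

belief = posterior(observation_seen=True)
expected_value = sum(belief[b] * value_of_defending[b] for b in belief)
print(round(belief["rush"], 2))     # 0.8
print(round(expected_value, 2))     # 0.8*1.0 + 0.2*(-0.5) = 0.7
```

The point of the post is that maintaining and updating this expectation inside the learning loop, rather than observing the true state directly, is what changes the algorithms.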
Aron Times
Profile Blog Joined March 2011
United States312 Posts
March 12 2016 19:01 GMT
#36
I would be impressed by an AI that can win in DotA, where decision-making matters far more than mechanics. In Starcraft, it would be no contest. We've seen all the macro and micro hacks and bots over the years, and they're unstoppable by most players. Bear in mind that these hacks weren't made by Google, who has pretty much unlimited money to spend on development. Imagine a brilliant hacker who doesn't have to worry about paying his/her bills, free to devote everything into AI development.

That's what we're up against, and I doubt the Dominion will win.

Hey, Dustin Browder, I just had an idea for Starcraft 3: Bio vs. Mech. Terran, Zerg, and Protoss vs. an unstoppable race of intelligent machines.
"The drums! The drums! The drums! The neverending drumbeat! Open me, you human fool! Open the light and summon me and receive my majesty!"
OtherWorld
Profile Blog Joined October 2013
France17333 Posts
March 12 2016 19:08 GMT
#37
I dunno. As literally everyone pointed out, a competent AI would destroy any human if allowed unlimited APM. With restrictions, humans could win, but only until ~2030 methinks.
Used Sigs - New Sigs - Cheap Sigs - Buy the Best Cheap Sig near You at www.cheapsigforsale.com
Shield
Profile Blog Joined August 2009
Bulgaria4824 Posts
Last Edited: 2016-03-12 19:39:26
March 12 2016 19:36 GMT
#38
Well, it depends if AI can win with mostly micro in BW. If yes, then Flash and Boxer are wrong. Terran AI can definitely have an easy time against zerg in SC2. I guess if they do AI, it will be terran because zerg isn't that good (?) in comparison. So, I'd expect TvT if AI comes up and Boxer or Flash have to play. Just because terran can benefit a lot from high APM and micro.
Waxangel
Profile Blog Joined September 2002
United States33281 Posts
March 12 2016 19:40 GMT
#39
On March 13 2016 03:19 duke91 wrote:
Why is this again posted in SC2 forum instead of Broodwar where it is obvious it is about Broodwar, talking about Boxer who majorly played BW, as well as Flash who is back at BW talking about BW AI implemented in BW. At least put the same post in the BW forum as well. I don't see any relevance of this topic being in the SC2 forum


because I care about visibility more than adherence to TL's outdated forum structure
Administrator | Hey HP can you redo everything youve ever done because i have a small complaint?
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 12 2016 19:45 GMT
#40
On March 13 2016 04:01 Eternal Dalek wrote:
I would be impressed by an AI that can win in DotA, where decision-making matters far more than mechanics. In Starcraft, it would be no contest. We've seen all the macro and micro hacks and bots over the years, and they're unstoppable by most players. Bear in mind that these hacks weren't made by Google, who has pretty much unlimited money to spend on development. Imagine a brilliant hacker who doesn't have to worry about paying his/her bills, free to devote everything into AI development.

That's what we're up against, and I doubt the Dominion will win.

Hey, Dustin Browder, i just had an idea for Starcraft 3: Bio vs. Mech. Terran, Zerg, and Protoss vs. an unstoppable race of intelligent machines.

There are some disadvantages to DotA though: there are still patch changes, so that you can't easily train a bot on one specific patch; and it's not so easy for an AI to mass practice games vs itself since you might be stuck with Steam or with 45 mins per game and those sort of things.

And there's the question of whether you should have an AI control all five heroes at once, since that might be seen as cheating since it'll have perfect coordination.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2016-03-12 19:53:57
March 12 2016 19:49 GMT
#41
On March 13 2016 04:08 OtherWorld wrote:
I dunno. As literally everyone pointed out, a competent AI would destroy any human if allowed unlimited APM. With restrictions, humans could win, but only until ~2030 methinks.

What should the restrictions be?

I thought the following ones would be simple enough:
- no hotkeys for buildings allowed (to stop flawless macro)
- (simulated) mouse and keyboard to control the game, with some restrictions to dpi and ap(m/s)
- short reaction time added before visual input can be processed

This lets the AI have superhuman control, but not inhuman control, and I think it would do a lot for the legitimacy of the challenge.

An additional handicap might be to simulate errors with mouse control, but I have a feeling that one might be harmful.
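Those restrictions could be prototyped as a wrapper between the game and the agent. This is only an illustrative sketch; the `Agent` interface, the frame rate, and the specific numbers are assumptions, not any real API:

```python
import collections

class HandicappedAgent:
    """Wraps an agent with a reaction-time delay on observations
    and a rate limit on actions (a crude APM cap)."""

    def __init__(self, agent, apm_cap=400, reaction_frames=6, fps=24):
        self.agent = agent
        self.min_gap = int(fps * 60 / apm_cap)   # min frames between actions
        self.delay = reaction_frames             # frames before input is "seen"
        self.obs_queue = collections.deque()
        self.last_action_frame = -10**9

    def step(self, frame, observation):
        self.obs_queue.append((frame, observation))
        # Only release observations older than the reaction delay.
        visible = None
        while self.obs_queue and frame - self.obs_queue[0][0] >= self.delay:
            visible = self.obs_queue.popleft()[1]
        if visible is None:
            return None  # nothing has cleared the reaction delay yet
        if frame - self.last_action_frame < self.min_gap:
            return None  # APM budget exhausted for now
        action = self.agent.act(visible)
        if action is not None:
            self.last_action_frame = frame
        return action

# Demo with a stub agent that always wants to act.
class _AlwaysAct:
    def act(self, obs):
        return "move"

handicapped = HandicappedAgent(_AlwaysAct(), apm_cap=400,
                               reaction_frames=6, fps=24)
acted_frames = [f for f in range(30)
                if handicapped.step(f, "screen") is not None]
```

With these numbers the wrapper sees nothing for the first six frames, then acts at most once every three frames, however fast the underlying agent is.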
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 12 2016 20:05 GMT
#42
I still think that even allowing cheating (perfect apm/micro speed wise) it should be easy to mess with the AI, sending workers to hang in vision of the AI and stuff like that.

By the way if they can't do anything before 2030 that's a pity xD, they got so much money to throw around on rather useless things.
Writer | Maru
FeyFey
Profile Joined September 2010
Germany10114 Posts
March 12 2016 20:10 GMT
#43
I wonder if there will be a person helping the AI the moment the player finds a glitch and uses it, just like with Deep Blue. Still, the way this AI works actually sounds interesting.
And as long as they don't program this AI with anti-siege-tank micro and stuff like that, there should be no need for limitations.
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 20:11 GMT
#44
On March 13 2016 03:01 Garrl wrote:
On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.


people thought Go AI beating professional players was a long way off. AFAIK deepmind's project is a generalized solution that takes only pixel data as an input.

could be far, far closer than you might think.

https://www.youtube.com/watch?v=ePv0Fs9cGgU


https://www.youtube.com/watch?v=Q70ulPJW3Gk


Yea, before the match began Lee Sedol said he thought he would beat this thing 4-1 or 5-0, but that maybe in a few more years it could surpass him. Now he is fighting his hardest just to take a single game and probably won't even be able to do that.
Faefae
Profile Joined June 2014
2202 Posts
March 12 2016 20:12 GMT
#45
Isn't it absolutely obvious that an AI would win against any human at StarCraft? How delusional can Boxer be? :/
ForGG. 29/11/2014
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 20:14 GMT
#46
On March 13 2016 04:49 Grumbels wrote:
On March 13 2016 04:08 OtherWorld wrote:
I dunno. As literally everyone pointed out, a competent AI would destroy any human if allowed unlimited APM. With restrictions, humans could win, but only until ~2030 methinks.

What should the restrictions be?

I thought the following ones would be simple enough:
- no hotkeys for buildings allowed (to stop flawless macro)
- (simulated) mouse and keyboard to control the game, with some restrictions to dpi and ap(m/s)
- short reaction time added before visual input can be processed

This lets the AI have superhuman control, but not inhuman control, and I think it would do a lot for the legitimacy of the challenge.

An additional handicap might be to simulate errors with mouse control, but I have a feeling that one might be harmful.


I completely agree with this, but forget about the simulating errors part for now imo.
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 20:18 GMT
#47
On March 13 2016 05:05 Poopi wrote:
I still think that even allowing cheating (perfect apm/micro speed wise) it should be easy to mess with the AI, sending workers to hang in vision of the AI and stuff like that.

By the way if they can't do anything before 2030 that's a pity xD, they got so much money to throw around on rather useless things.


No way, at the time when Google considers their AI ready for this challenge those kinds of obvious exploits will definitely not be working.
Skynx
Profile Blog Joined January 2013
Turkey7150 Posts
March 12 2016 20:19 GMT
#48
Has would go undefeated in a tournament vs ai
"When seagulls follow the troller, it is because they think sardines will be thrown into the sea. Thank you very much" - King Cantona | STX 4 eva
Oshuy
Profile Joined September 2011
Netherlands529 Posts
March 12 2016 20:33 GMT
#49
On March 13 2016 03:57 MyLovelyLurker wrote:
On March 13 2016 03:53 Oshuy wrote:
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


The "maybe not in 10 years" sounds hopeful. DeepMind was created in 2010. AlphaGo is 18 months old (as in: the project started 18 months ago).

There is a hurdle in designing what to feed to the neural networks and how to represent the output in a game of StarCraft: the spaces of both current state and potential actions are huge; but once those representations are designed, the learning process will either fail or succeed in a few months.

The fact that information is incomplete is almost irrelevant in the case of a neural network feed. Those are the types of problems we designed networks for in the first place. Real time and information retention may make things more difficult, but it could get there fast.


It's actually not irrelevant in reinforcement learning, as you need to compute a conditional expectation of the state of play with respect to the information you have, and the update of said expectation changes the algorithms by quite a lot. This is being tackled almost as we speak; here is a two-week-old article on the subject, from one of the fathers of AlphaGo, with an application to poker: arxiv.org


Building the dataset for supervised learning from replay databases, containing both the incomplete information (one player's view) and the complete information (spectator view), should provide a first estimate of potential convergence for a given game representation.

Self-play reinforcement would be great; agreed, I have no idea how to construct an evaluation function (and I'm quite sure it cannot be done on individual actions, which are mostly meaningless in themselves). Unsure if it would be necessary at this point (why would supervised learning all the way, with a spectator AI, be impossible?).

The interesting part of self-play is that the AI would arrive at the match with its own metagame, which the human player faces for the first time during the match, while the human metagame will have been the basic dataset the AI learned from initially.
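The replay-to-dataset idea sketched above might look something like this; the replay fields and the miniature "policy" are invented for illustration:

```python
from collections import Counter, defaultdict

def build_dataset(replays):
    """Turn replays into supervised pairs: the player's (incomplete) view
    as input, the pro's action as the label, with the spectator (complete)
    view kept as an optional auxiliary target."""
    examples = []
    for replay in replays:
        for frame in replay["frames"]:
            examples.append({
                "x": frame["player_view"],            # incomplete information
                "y_action": frame["action"],          # what the pro did
                "y_hidden": frame["spectator_view"],  # full-information target
            })
    return examples

def fit_baseline_policy(examples):
    """Crude first estimate: for each observed view, predict the action
    pros took most often in that situation."""
    counts = defaultdict(Counter)
    for ex in examples:
        counts[ex["x"]][ex["y_action"]] += 1
    return {x: c.most_common(1)[0][0] for x, c in counts.items()}

# Invented miniature "replay database" for illustration only.
replays = [{"frames": [
    {"player_view": "gas_steal", "action": "expand",
     "spectator_view": "2gate"},
    {"player_view": "no_scout_info", "action": "wall_off",
     "spectator_view": "6pool"},
]}]
policy = fit_baseline_policy(build_dataset(replays))
```

A real system would replace the lookup table with a network over a learned state representation, but the dataset shape (partial view in, pro action out, full state as auxiliary supervision) is the same.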
Coooot
Clbull
Profile Blog Joined February 2011
United Kingdom1439 Posts
March 12 2016 20:34 GMT
#50
Bots have already surpassed humans in StarCraft. If you ever saw any of the AI competitions held at the University of California, you'd see bots with superior APM that are able to pull off absurd strategies.
B-royal
Profile Joined May 2015
Belgium1330 Posts
Last Edited: 2016-03-12 20:38:18
March 12 2016 20:35 GMT
#51
How can you even compare a game like Go, where both players have complete knowledge of the game state, with a game like StarCraft that depends entirely on fog of war?

And it's incredibly obvious that they won't let the AI access the game state directly; that's called "cheating". It has to interpret the game through a single screen, and use a cursor and keyboard to select and direct units/buildings. It'll only have 10 control groups, just like humans do. Anything else is just cheating and wouldn't be a testament to the capabilities of deep learning.

edit: then again, if it is lightning fast, these restrictions are most likely useless. What is the limit on command input in Brood War? :D
new BW-player (~E rank fish) twitch.tv/crispydrone || What plays 500 games a season but can't get better? => http://imgur.com/a/pLzf9 <= ||
Alarak89
Profile Joined January 2016
United States882 Posts
Last Edited: 2016-03-12 20:43:49
March 12 2016 20:41 GMT
#52
sOs vs AlphaGo for SC2? Sounds like an interesting series. Five Bo7s in five days, maybe? Let us see if AI is really "intelligent"
sOs is THE ONLY player I pay attention to
BlysK
Profile Joined March 2011
Singapore48 Posts
March 12 2016 20:42 GMT
#53
I think the scary part of a perfect AI would be perfect micro. Other than that (strategic depth, etc.), I think progamers have the edge with instincts and reaction
Take It Easy :)
Lexender
Profile Joined September 2013
Mexico2625 Posts
March 12 2016 21:00 GMT
#54
Starcraft 2 has ruined the perception of RTS games so much that in a thread about the evolution of AI and its capabilities, all anybody is thinking about is perfect mechanics, bot-level micro and infinite APM. SMH.
BeStFAN
Profile Blog Joined April 2015
483 Posts
March 12 2016 21:01 GMT
#55
BoxeR is John Connor leader of the human resistance
❤ BeSt... ༼ つ ◕_◕༽つ #YEAROFKOMA #YEAROFKOMA #YEAROFKOMA ༼ つ ◕_◕༽つ
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 12 2016 21:01 GMT
#56
On March 13 2016 03:55 [PkF] Wire wrote:
On March 13 2016 03:47 AdrianHealeyy wrote:
I think we need to differentiate two things here.

It's probably not that hard to come up with an AI that can have perfect micro. The trick is: can we design an AI with 'human' micro that can still consistently beat humans, based on insight, analysis, response, etc.?

That would be the ultimate challenge. I still think they can do it, but it'll take longer.

The problem is how you define human micro (and even human multitasking). A simple limit on APM wouldn't even be enough, I think, since the computer doesn't spam and, more importantly, sees all screens at once.

That's why they won't even try: human performance isn't constant, so you can't cap it or make the AI randomly mismicro. It's almost comical that they spoke about it before even realizing this obvious paradox.
Writer | Maru
Edpayasugo
Profile Joined April 2013
United Kingdom2213 Posts
March 12 2016 21:04 GMT
#57
This would be sick
FlaSh MMA INnoVation FanTaSy MKP TY Ryung | soO Dark Rogue | HuK PartinG Stork State
NonY
Profile Blog Joined June 2007
8748 Posts
Last Edited: 2016-03-12 21:10:23
March 12 2016 21:07 GMT
#58
Unlike turn-based board games, where inputting moves is trivial and thus the method can be ignored, playing SC is intrinsically tied to keyboard and mouse control. If the AlphaGo team wants to tackle SC, then they have a significant robotics challenge in front of them that I'm not really sure is going to be worth their time as AI researchers. It's always a bad idea to bet against technology when technology is allowed unlimited time to develop, but SC presents some very significant increases in difficulty just for the AI, robotics aside. It's far more complex because in addition to a "mirror match" you've got to be able to beat two completely different sets of "game pieces" and there isn't just one simple game board. And after all that, games can hinge on luck like in a poker game. The human can pick randomly, like glance at his mineral count and do one extreme if it ends in an even number and do another extreme if it ends in an odd number, and there might simply be no solution for both possibilities. Avoiding all such situations seems unlikely. Because of this, it could possibly be a top player if it avoids predictability, but it seems just as likely as a poker AI to consistently win tournaments. Nonetheless I'm excited to see how it progresses. I wonder if the Korean BW players have a renewed sense of purpose seeing as how an AI might be entering one of their tournaments someday.
"Fucking up is part of it. If you can't fail, you have to always win. And I don't think you can always win." Elliott Smith ---------- Yet no sudden rage darkened his face, and his eyes were calm as they studied her. Then he smiled. 'Witness.'
CycoDude
Profile Joined November 2010
United States326 Posts
March 12 2016 21:14 GMT
#59
i think the question is whether or not to cap the apm. of course a bot has the advantage with unlimited apm. i think capping it would make for more interesting results. it then becomes a game of how well the programmers can design an ai that can strategize and predict the opponents moves.
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 21:16:04
March 12 2016 21:15 GMT
#60
On March 13 2016 05:42 BlysK wrote:
I think the scary part of a perfect AI would be perfect micro. Other than that (strategic depth, etc.), I think progamers have the edge with instincts and reaction


If there's anything it isn't better than humans at then it isn't perfect AI. But who says Google's will be perfect? It won't be.
iFU.pauline
Profile Joined September 2009
France1529 Posts
Last Edited: 2016-03-12 22:04:30
March 12 2016 21:20 GMT
#61
Brood War is real-time strategy, not turn-based, and this is not a chessboard... maps aren't small squares with a highly limited number of possibilities...

The AI would simply fail at the scouting part... which is basically the most important. Fog of war implies you need to think, not calculate. It would not take long until a human recognizes patterns and gets his way...

The macro aspect isn't really relevant; for a long time now progamers have been able to manage an entire game without ever having more than 500 minerals, even on 5 bases, assuming they aren't maxed out... While I agree the AI can have the edge on that, it is in no way a game changer...

On top of that, there is no ultimate strategy in Brood War that allows you to counter anything. If you choose strategy A it will protect you from strategies B and C but not from D. There is no escape; if you scout or choose wrong, you are dead no matter what. Statistics and probability won't help, because at the time you made your choice you did not have a chance to scout. And this is the definition of real-time strategy: if the AI can't scout, what is it gonna do? Guess? No, it won't guess anything; it will simply act based on the information it has. Humans will quickly catch up, and what's next? EXPLOIT.

I can only imagine proud scientists bringing the so-called ultimate AI zerg vs Flash, just to die in 3 minutes to a bunker rush...
Imagine the length of an algorithm that needs to anticipate "hold lurker". Good luck...

I believe there are still plenty of arguments; I can't predict the future and everything is possible, but present day? Come on...
No coward soul is mine, No trembler in the world's storm-troubled sphere, I see Heaven's glories shine, And Faith shines equal arming me from Fear
Petrosidius
Profile Joined March 2016
United States10 Posts
Last Edited: 2016-03-12 21:20:53
March 12 2016 21:20 GMT
#62
A computer will never beat a human in chess, Go, starcraft.
Alright Boxer we'll see
danl9rm
Profile Blog Joined July 2009
United States3111 Posts
March 12 2016 21:22 GMT
#63
On March 13 2016 04:45 Grumbels wrote:
On March 13 2016 04:01 Eternal Dalek wrote:
I would be impressed by an AI that can win in DotA, where decision-making matters far more than mechanics. In Starcraft, it would be no contest. We've seen all the macro and micro hacks and bots over the years, and they're unstoppable by most players. Bear in mind that these hacks weren't made by Google, who has pretty much unlimited money to spend on development. Imagine a brilliant hacker who doesn't have to worry about paying his/her bills, free to devote everything into AI development.

That's what we're up against, and I doubt the Dominion will win.

Hey, Dustin Browder, i just had an idea for Starcraft 3: Bio vs. Mech. Terran, Zerg, and Protoss vs. an unstoppable race of intelligent machines.

There are some disadvantages to DotA though: there are still patch changes, so that you can't easily train a bot on one specific patch; and it's not so easy for an AI to mass practice games vs itself since you might be stuck with Steam or with 45 mins per game and those sort of things.

And there's the question of whether you should have an AI control all five heroes at once, since that might be seen as cheating since it'll have perfect coordination.


It would be fair for the same AI to control all 5 heroes; however, it would need to be 5 separate instances of the same program. Otherwise, it wouldn't be fair at all.

I would be extremely impressed if an AI team could even beat 5 random 4k players thrown together. It's not going to happen for a really long time. Like... we'll all be dead.
"Science has so well established that the preborn baby in the womb is a living human being that most pro-choice activists have conceded the point. ..since the abortion proponents have lost the science argument, they are now advocating an existential one."
UberNuB
Profile Joined December 2010
United States365 Posts
March 12 2016 21:25 GMT
#64
On March 13 2016 06:07 NonY wrote:
Unlike turn-based board games, where inputting moves is trivial and thus the method can be ignored, playing SC is intrinsically tied to keyboard and mouse control. If the AlphaGo team wants to tackle SC, then they have a significant robotics challenge in front of them that I'm not really sure is going to be worth their time as AI researchers. It's always a bad idea to bet against technology when technology is allowed unlimited time to develop, but SC presents some very significant increases in difficulty just for the AI, robotics aside. It's far more complex because in addition to a "mirror match" you've got to be able to beat two completely different sets of "game pieces" and there isn't just one simple game board. And after all that, games can hinge on luck like in a poker game. The human can pick randomly, like glance at his mineral count and do one extreme if it ends in an even number and do another extreme if it ends in an odd number, and there might simply be no solution for both possibilities. Avoiding all such situations seems unlikely. Because of this, it could possibly be a top player if it avoids predictability, but it seems just as likely as a poker AI to consistently win tournaments. Nonetheless I'm excited to see how it progresses. I wonder if the Korean BW players have a renewed sense of purpose seeing as how an AI might be entering one of their tournaments someday.


While I agree there *could* be a degree of robotics involved with this, it's much more likely they would just write (or find) drivers to emulate a virtual keyboard/mouse.

I'm not sure I understand why professional players think they could beat an AI (even today). Just rotating through a few timing attacks that deny enough scouting to force the player to coin-flip a defense would make the AI pretty hard to consistently beat, especially given its perfect micro potential.
the absence of evidence, is not the evidence of absence.
summerloud
Profile Joined March 2010
Austria1201 Posts
March 12 2016 21:41 GMT
#65
i think boxer has no idea what hes talking about. it would prolly be possible to program an AI that would beat everyone just using blink stalkers
ondik
Profile Blog Joined November 2008
Czech Republic2908 Posts
March 12 2016 21:50 GMT
#66
I don't think it was needed to humiliate Boxer by creating an article about this nonsense and spotlighting it.
Bisu. The one and only. // Save the cheerreaver, save the world (of SC2)
blade55555
Profile Blog Joined March 2009
United States17423 Posts
Last Edited: 2016-03-12 21:54:57
March 12 2016 21:54 GMT
#67
On March 13 2016 06:41 summerloud wrote:
i think boxer has no idea what hes talking about. it would prolly be possible to program an AI that would beat everyone just using blink stalkers


Brood War doesn't have blink stalkers. Even then, the AI has to get there in either an even position or ahead; you can have perfect blink stalker micro, but it won't matter if your opponent just has a lot more units/economy.
When I think of something else, something will go here
Shuffleblade
Profile Joined February 2012
Sweden1903 Posts
March 12 2016 21:55 GMT
#68
This question has an easy answer: if they cap EAPM, it can't beat humans, taking for granted that the AI needs to use a cursor and the same kind of tools as a human. If they don't cap EAPM, the AI will beat humans easily.
Maru, Bomber, TY, Dear, Classic, DeParture and Rogue!
loppy2345
Profile Joined August 2015
39 Posts
March 12 2016 21:56 GMT
#69
The issue is that there are so many ways to abuse an AI; the number of variables means that if the AI has one single weakness, then humans can exploit it. In Go and chess, there are far fewer options. Even trying to design an AI that can survive a perfect cannon rush would probably take years.

As long as AI matches are made available for players to analyse, human players will beat the AI (which is fair, given that the AI developer has access to pro games). Maybe in 50 years' time a true AI which can learn as it plays can win, but right now the AI still very much depends on human programming, hence there will be tons of weaknesses.
FFW_Rude
Profile Blog Joined November 2010
France10201 Posts
Last Edited: 2016-03-12 22:01:58
March 12 2016 22:00 GMT
#70
So... DeepAiAlpha stuff... This is a dragoon And this a ramp.. Wait no.... not.... no... AlphaBot... you neeed to.. not this way
#1 KT Rolster fanboy. KT BEST KT ! Hail to KT playoffs Zergs ! Unofficial french translator for SlayerS_`Boxer` biography "Crazy as me".
usopsama
Profile Joined April 2008
6502 Posts
Last Edited: 2016-03-12 22:05:01
March 12 2016 22:03 GMT
#71
nbaker
Profile Joined July 2009
United States1341 Posts
March 12 2016 22:09 GMT
#72
Why does everyone assume computers would have perfect micro? I think it was LetaBot in the other thread who said that micro was one of the hardest parts of building an AI. Just because a computer might be able to input commands without execution errors, the actual decision element of microing units isn't at all trivial, nor would it generalize well, I think.

I'm curious how backpropagation works in DeepMind's system, if anyone knows. For example, in a game like Go, is it possible to evaluate a move and do reinforcement/error correction without having to wait for the final result of the game?
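On the second question: in temporal-difference learning you don't have to wait for the final result; V(state) is updated toward a bootstrapped target built from the next position's current estimate. (Whether DeepMind uses exactly this is another matter; AlphaGo's value network was reportedly trained on final self-play outcomes.) A generic TD(0) sketch:

```python
def td0_update(values, state, next_state, reward, alpha=0.1, gamma=1.0):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state).
    The next position's current estimate stands in for the final result."""
    v = values.get(state, 0.0)
    target = reward + gamma * values.get(next_state, 0.0)
    values[state] = v + alpha * (target - v)
    return values[state]

# Toy example: if the successor position is currently valued at 1.0, the
# current position's estimate moves a step toward it, mid-game, with no
# need to know who eventually won.
values = {"next_pos": 1.0}
td0_update(values, "current_pos", "next_pos", reward=0.0)
```

The error signal here is local (the one-step TD error), which is exactly what lets learning proceed before the game ends.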
Dangermousecatdog
Profile Joined December 2010
United Kingdom7084 Posts
March 12 2016 22:16 GMT
#73
Would an AI be able to control Dragoons?
loppy2345
Profile Joined August 2015
39 Posts
March 12 2016 22:21 GMT
#74
On March 13 2016 06:41 summerloud wrote:
i think boxer has no idea what hes talking about. it would prolly be possible to program an AI that would beat everyone just using blink stalkers


Only if an AI can survive a cannon rush/bunker rush/nydus! An AI would be much worse at scouting, and you could easily dupe it by hiding stuff or faking stuff. It will take years to develop an AI that can consistently hold off the most basic rushes.
TwiggyWan
Profile Blog Joined December 2013
France328 Posts
Last Edited: 2016-03-12 22:40:34
March 12 2016 22:39 GMT
#75
On March 13 2016 07:16 Dangermousecatdog wrote:
Would an AI be able to control Dragoons?


This alone will take some years to be perfected D:
No bad days
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 12 2016 22:44 GMT
#76
On March 13 2016 07:09 nbaker wrote:
Why does everyone assume computers would have perfect micro? I think it was LetaBot in the other thread who said that micro was one of the hardest parts of building an AI. Just because a computer might be able to input commands without execution errors, the actual decision element of microing units isn't at all trivial, nor would it generalize well, I think.

I'm curious how backpropagation works in DeepMind's system, if anyone knows. For example, in a game like Go, is it possible to evaluate a move and do reinforcement/error correction without having to wait for the final result of the game?



I don't think most people know what perfect micro means. In order to have perfect micro you need to calculate the Nash equilibrium of each fight (at every single frame). With only 50 ms per frame, there is no way computers today can calculate the perfect micro.

There is one aspect of micro-management where bots are better than human players. Since the default mining behaviour of worker units is not optimal, it is possible to calculate a better path for the workers to take. Executing this in a pixel-perfect manner will shave several milliseconds of travel time off each trip. Example video below:
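The per-frame Nash claim can at least be pictured on a toy scale. A 2x2 zero-sum engagement has a closed-form mixed strategy; the point above is that a real fight is not 2x2, since every unit's options multiply the matrix each frame. The payoffs here are invented:

```python
def row_mixed_strategy(a, b, c, d):
    """Closed-form mixed Nash strategy for the row player of a 2x2
    zero-sum game with payoff matrix [[a, b], [c, d]] (row's payoffs),
    assuming no pure-strategy saddle point."""
    denom = a - b - c + d
    p = (d - c) / denom                  # P(row plays its first option)
    value = (a * d - b * c) / denom      # game value for the row player
    return p, value

# Matching-pennies-style engagement (e.g. "engage" vs "feint" against
# "hold" vs "flank"): the equilibrium is to randomize 50/50.
p, value = row_mixed_strategy(1, -1, -1, 1)
```

Solving this 2x2 takes microseconds; the difficulty is that the real per-frame matrix over all units' joint actions is astronomically larger, which is why a 50 ms budget rules out exact equilibrium play.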


If you cannot win with 100 apm, win with 100 cpm.
Shuffleblade
Profile Joined February 2012
Sweden1903 Posts
March 12 2016 22:44 GMT
#77
On March 13 2016 07:21 loppy2345 wrote:
On March 13 2016 06:41 summerloud wrote:
i think boxer has no idea what hes talking about. it would prolly be possible to program an AI that would beat everyone just using blink stalkers


Only if an AI can survive a cannon rush/bunker rush/nydus! An AI would be much worst at scouting, and you could easily dupe it by hiding stuff or faking stuff. It will take years to develop an AI that can consistently hold off the most basic rushes.

An AI could go the route of a safe build; holding an all-in is hard because you are greedy, and you are greedy to get an economic macro edge. An AI doesn't need an economic advantage if it has perfect macro and micro.

Also, you are all underestimating what an AI can do: as soon as something is in vision, an AI could calculate exactly how to micro to get away or evade. For example, you say an AI would be worse at scouting; I think that's pretty funny, because imagine trying to get rid of a scouting probe/drone/SCV that micros perfectly in your base. It could survive for ages, especially if it's a probe: it could survive against an MSC and a zealot and only get taken out when you make a stalker or blow an overcharge. It can micro against slow lings and a queen forever as well. The reason pros don't do this is that it takes too much energy and time for too little gain; an AI would get all those small edges that pros don't have time to prioritize.
Maru, Bomber, TY, Dear, Classic, DeParture and Rogue!
sc2chronic
Profile Joined May 2012
United States777 Posts
March 12 2016 22:45 GMT
#78
The day they create the StarCraft version of AlphaGo, surely they won't go calling Boxer to challenge it, would they?

Sounds like something Jessica suggested YoHwan should say to create some publicity.
terrible, terrible, damage
FueledUpAndReadyToGo
Profile Blog Joined March 2013
Netherlands30548 Posts
Last Edited: 2016-03-12 22:52:57
March 12 2016 22:47 GMT
#79
This is so much different than a board game though. Board games are paused states, while StarCraft is time-dependent, and humans are limited by having to use our hands and eyes to play the game.

I mean, it might beat Boxer if it's allowed crazy APM and can read data straight from the game, so it can do inhuman micro and multitasking. But is that really fair if it has access to more data input and a way faster and more precise output than the real-life player?

Honestly, it would only be a fair win if it had to use visual recognition software on an actual monitor for input data, and some sort of mechanically automated mouse and keyboard to click on stuff and scroll the screen.
Neosteel Enthusiast
Pursuit_
Profile Blog Joined June 2012
United States1330 Posts
March 12 2016 22:51 GMT
#80
I think it's a ways off before an AI would be able to beat a top player in a BoX. Strategy in real time is a lot different from turn-based strategy; it needs to make optimal decisions in a split second with limited information. Mechanically the AI will have a pretty big advantage even if its APM is capped at, say, 400-500, because it will have the most efficient APM of any player and can act across the whole map, not just on a single screen, within microseconds if necessary, which is an unfair advantage over 'real' players. Obviously AI will eventually get there though, and by the time DeepMind is ready to make such a challenge it will probably have surpassed humans.
In Somnis Veritas
arbiter_md
Profile Joined February 2008
Moldova1219 Posts
Last Edited: 2016-03-12 22:59:56
March 12 2016 22:56 GMT
#81
There are a few points that people here seem to misunderstand:
1. Starcraft IS turn-based. The difference from Go is that a turn in Starcraft has to be made in a fraction of a second, while in Go the AI has to implement some kind of time management.
2. Starcraft, compared to Go, requires much more programming work to be done. But once that's done, it will be much easier for the AI to master, because it's much easier to say who is ahead at a given point in the game, compared to Go.
3. I can't wait for the moment I watch two AI bots fighting in Starcraft with Korean commentators screaming. Imagine what a beautiful game that would be, with insane multitasking!
The copyright of this post belongs solely to me. Nobody else, not teamliquid, not greetech and not even blizzard have any share of this copyright. You can copy, distribute, use in commercial purposes the content of this post or parts of it freely.
fabiano
Profile Blog Joined August 2009
Brazil4644 Posts
Last Edited: 2016-03-12 23:05:43
March 12 2016 23:04 GMT
#82
Everything humans can do, computers will be able to do better... if not now, certainly later.

Except being stupid, I think we are pretty damn invincible at being stupid.
"When the geyser died, a probe came out" - SirJolt
Liquid`Snute
Profile Blog Joined July 2010
Norway839 Posts
March 12 2016 23:09 GMT
#83
Naive. Of course AIs will be able to beat humans, even with APM/micro limitations (no mineral hax etc). It will take a lot of work to get the AI to such a stage, but a computer's game-sense and execution will be absolute next level, far beyond that of any human. Perfect memory, perfect theory. Obviously the awkward 'mindless machine' quirks will be dealt with in the development of the AI. If the computer is fast enough to process well in "broodwar real-time" with several strategic layers working together (like AlphaGo), humans won't stand a chance. It will take crazy strong computers to do this, but progress is always there. Would be very cool to watch and I hope they undertake the project
Team Liquid
iFU.pauline
Profile Joined September 2009
France1529 Posts
Last Edited: 2016-03-12 23:13:48
March 12 2016 23:10 GMT
#84
On March 13 2016 07:56 arbiter_md wrote:
There are few points that people here seem to misunderstand:
1. Starcraft IS turn based.


I don't mean to offend you, but this is not what turn based means. Turn based means that you have to wait for your opponent to play before you can do anything.

This is not the case with brood war as you might have noticed...
No coward soul is mine, No trembler in the world's storm-troubled sphere, I see Heaven's glories shine, And Faith shines equal arming me from Fear
OkStyX
Profile Blog Joined October 2011
Canada1199 Posts
March 12 2016 23:15 GMT
#85
This would be so cool to watch.
Team Overklocked Gaming! That man is the noblest creature may be inferred from the fact that no other creature has contested this claim. - G.C. Lichtenberg
ProBell
Profile Joined May 2012
Thailand145 Posts
March 12 2016 23:17 GMT
#86
I'm 100% sure a perfect AI could beat any progamer at least 90% of the time. Right now most SC2 players probably can't even beat the Insane AI, while all pros and hardcore SC2 players find it pretty easy. But if you're going to argue that a human can beat a computer, think of this: every SC2 unit has something of a "counter" unit. You make 5+ marines? The AI makes 1-2 banelings. Not to mention you can easily program it to never get out-micro'd or out-economied. Make a 3rd CC in your base? The Zerg AI will send a drone for a 4th hatch AND build a good enough defense to cover a potential attack on the 4th, its main army, or drops in the main. SC2 really is about perfect micro/macro. You can say humans have "better" game sense or go into the game with a perfect plan, but EVERY SC2 unit or build can be countered. Remember: against someone with PERFECT micro and an even-sized army, your chances of winning are pretty much next to none.
Shuffleblade
Profile Joined February 2012
Sweden1903 Posts
March 12 2016 23:19 GMT
#87
On March 13 2016 08:10 iFU.pauline wrote:
On March 13 2016 07:56 arbiter_md wrote:
There are few points that people here seem to misunderstand:
1. Starcraft IS turn based.


I don't mean to offend you, but this is not what turn based means. Turn based means that you have to wait for your opponent to play before you can do anything.

This is not the case with brood war as you might have noticed...

You are right of course, starcraft is an RTS, which is the total opposite of turn based.

BUT with a sufficiently powerful computer (which probably doesn't exist yet) it basically becomes turn based.

Imagine if we slowed the game down to run at 5% of its current speed (gamespeed): it would still be an RTS, but it would be so slow that it would be more of a turn-based game than an RTS.
The difference between an RTS and a turn-based game is that in an RTS the most important thing is how you prioritize your time. In Starcraft a computer could do basically everything at once, thus removing the RTS factor and turning it into a turn-based game.
Maru, Bomber, TY, Dear, Classic, DeParture and Rogue!
Green_25
Profile Joined June 2013
Great Britain696 Posts
Last Edited: 2016-03-12 23:33:30
March 12 2016 23:23 GMT
#88
I think an AI capable of beating a starcraft pro is a lot further off than people here think, but it will be possible eventually. The question is how long it takes.

I think it's also important to note that real-time strategy always includes an element of chaos/random chance, so unlike chess/go, where a perfect AI should never lose, I don't think it's the same in RTS.
writer22816
Profile Blog Joined September 2008
United States5775 Posts
Last Edited: 2016-03-12 23:35:42
March 12 2016 23:35 GMT
#89
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.
8/4/12 never forget, never forgive.
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-12 23:36:59
March 12 2016 23:36 GMT
#90
On March 13 2016 08:23 Green_25 wrote:
I think an AI capable of beating a starcraft pro is a lot further off than people here think, but it will be possible eventually. The question is how long it takes.


This!
In a perfect world an AI will always beat any human contestant in any discipline, without question, but we're not quite there yet.
I have no doubt in my mind that AI is the future of human society and eventually we'll all become human-AI hybrids... oh what a glorious day that will be.
Oh holy AI, cure our flawed, imperfect existence!
"You cannot teach a man anything, you can only help him find it within himself."
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 23:41 GMT
#91
On March 13 2016 08:09 Liquid`Snute wrote:
Naive. Of course AIs will be able to beat humans, even with APM/micro limitations (no mineral hax etc). It will take a lot of work to get the AI to such a stage, but a computer's game-sense and execution will be absolute next level, far beyond that of any human. Perfect memory, perfect theory. Obviously the awkward 'mindless machine' quirks will be dealt with in the development of the AI. If the computer is fast enough to process well in "broodwar real-time" with several strategic layers working together (like AlphaGo), humans won't stand a chance. It will take crazy strong computers to do this, but progress is always there. Would be very cool to watch and I hope they undertake the project


This post is perfectly spot on. My thoughts as well.
reve_etrange
Profile Joined September 2015
1 Post
March 12 2016 23:42 GMT
#92
On March 13 2016 03:08 Clonester wrote:
The complete Go community said the same about AlphaGo, and Lee Sedol also said he would win easily against AlphaGo. It became a train wreck... for the Go community.

The same will happen with AlphaStarcraft for Boxer, Flash, Bisu and the complete community.


This sentiment is wrong. Widespread availability of past-master programs will be to Go (and Starcraft) what it has been for chess. When everyone has a super-human practice partner, the human game gets much deeper.
arbiter_md
Profile Joined February 2008
Moldova1219 Posts
Last Edited: 2016-03-12 23:51:05
March 12 2016 23:48 GMT
#93
AI is by definition imperfect! It can lose at any game it plays, and it will never become perfect. The easiest proof of its imperfection is to put it to play against itself.

It works with probabilities and tries to emulate human behavior. The difference from a human is that it can do this tirelessly. It's like a human who lived for a thousand years while staying biologically 25 the whole time, doing one thing all his life.
The copyright of this post belongs solely to me. Nobody else, not teamliquid, not greetech and not even blizzard have any share of this copyright. You can copy, distribute, use in commercial purposes the content of this post or parts of it freely.
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-12 23:50:52
March 12 2016 23:49 GMT
#94
On March 13 2016 08:17 ProBell wrote:
I'm 100% sure a perfect AI could beat any progamer at least 90% of the time. Right now most SC2 players probably can't even beat the Insane AI, while all pros and hardcore SC2 players find it pretty easy. But if you're going to argue that a human can beat a computer, think of this: every SC2 unit has something of a "counter" unit. You make 5+ marines? The AI makes 1-2 banelings. Not to mention you can easily program it to never get out-micro'd or out-economied. Make a 3rd CC in your base? The Zerg AI will send a drone for a 4th hatch AND build a good enough defense to cover a potential attack on the 4th, its main army, or drops in the main. SC2 really is about perfect micro/macro. You can say humans have "better" game sense or go into the game with a perfect plan, but EVERY SC2 unit or build can be countered. Remember: against someone with PERFECT micro and an even-sized army, your chances of winning are pretty much next to none.


An AI doesn't even have to be close to perfect to beat any progamer 100% of the time. Google admitted AlphaGo is nowhere near being a perfect AI, but I suspect it will beat anyone close to 100% of the time if not 100%. I'll be amazed if Sedol takes a game. Perfect is such a colossal standard that even the best of the best pale in comparison.
Destructicon
Profile Blog Joined September 2011
4713 Posts
Last Edited: 2016-03-12 23:52:45
March 12 2016 23:50 GMT
#95
I think way too many of you are going into this thinking of perfect mechanics and mid-to-late-game engagements where the AI just destroys humans. That's a pretty narrow view of how things would unfold. The AI would need to learn intricacies such as scouting and interpreting the information it sees, because if the AI gets bunker rushed three times in a row, all its perfect micro and mechanics will be useless.

Yes, it's true BW and SC2 are very heavily mechanics dependent, and a human would probably get destroyed if he tried to fight an AI toe to toe in any late-game situation. In the early to mid game, however, humans can probably juggle a lot of tasks efficiently enough that the AI's advantage would be negligible, and then game sense would kick in. How does the AI learn the subtle differences between an economic 1/1/1 and an offensive one? Or between the different variations of Gateway all-ins (with or without blink)?

Yeah, probably in 5-10 years the programmers will crack it. But I think they'll have one hell of a fight ahead of them when tackling BW and SC2; the information acquisition, interpretation and decision modules will probably take tons of time to fine-tune and refine.
WriterNever give up, never surrender! https://www.youtube.com/user/DestructiconSC
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 12 2016 23:58 GMT
#96
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.
coverpunch
Profile Joined December 2011
United States2093 Posts
Last Edited: 2016-03-13 00:14:02
March 13 2016 00:03 GMT
#97
I wouldn't be as confident as BoxeR, but I think humans do have more of an advantage in StarCraft, it being a game of imperfect information, real-time strategy, and frankly a far less mature game than chess or go. With these qualities, I think it is much easier to bug out a computer AI, or to present it with a totally novel strategy where it cannot rely on its computational superiority (Deep Blue) or a large database of past games (AlphaGo) to find a solution quickly enough.

That's not to say it will always be the case, but such a challenge will require a next level of machine learning and AI, and it will probably take a few years to get adequate processing power and break the problem into manageable chunks.

Edit: I am assuming the computer would be forced into some restrictions similar to a human, such as having to be external to the computer running the game and having to process only what it can see on a monitor. Its apm would thus be constrained to what the SC client can allow it to do, so it can't, say, spam millions of clicks per second over the entire map.
Scarlett`
Profile Joined April 2011
Canada2381 Posts
March 13 2016 00:12 GMT
#98
On March 13 2016 08:17 ProBell wrote:
I'm 100% sure a perfect AI could beat any progamer at least 90% of the time. Right now most SC2 players probably can't even beat the Insane AI, while all pros and hardcore SC2 players find it pretty easy. But if you're going to argue that a human can beat a computer, think of this: every SC2 unit has something of a "counter" unit. You make 5+ marines? The AI makes 1-2 banelings. Not to mention you can easily program it to never get out-micro'd or out-economied. Make a 3rd CC in your base? The Zerg AI will send a drone for a 4th hatch AND build a good enough defense to cover a potential attack on the 4th, its main army, or drops in the main. SC2 really is about perfect micro/macro. You can say humans have "better" game sense or go into the game with a perfect plan, but EVERY SC2 unit or build can be countered. Remember: against someone with PERFECT micro and an even-sized army, your chances of winning are pretty much next to none.

you're:
not talking about the same game
assuming the AI has full map vision
assuming the game is about one fight where whoever has the better army wins
Progamer一条咸鱼
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 00:12 GMT
#99
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You can prevent scouting from happening, you can cancel buildings or fly your CC, you can kill your own units... or many other random things. The incomplete information is a really tricky issue, as is the notion of "perfect" micro.
The fun thing is that even if you make two AIs play against each other with unlimited APM and complete information, maybe (speaking of SC2) you can't solve the game, or one race is indeed overpowered, or whatever; so in the "real" game you can't ensure a win either.

So being too confident on either side is probably a bad idea.
WriterMaru
iFU.pauline
Profile Joined September 2009
France1529 Posts
March 13 2016 00:15 GMT
#100
Most people are saying that an AI would beat a human at Starcraft based on the assumption that in the future it will. That's stupid... How do you debate with that type of argument? :/ The thread is about AlphaGo as it is today, and it is nowhere near that. And even if it manages it in 50 years, when no one is left playing Starcraft, what's the point...

Anyway, the thing that would definitely settle this debate is knowing whether all problems can be solved by calculation...

If yes, then eventually an AI would be capable, just by itself, of running a campaign and becoming president. Or eradicating, say, violence in the world.

Now talk about taking it to the next level...
No coward soul is mine, No trembler in the world's storm-troubled sphere, I see Heaven's glories shine, And Faith shines equal arming me from Fear
necrosexy
Profile Joined March 2011
451 Posts
Last Edited: 2016-03-13 00:23:41
March 13 2016 00:17 GMT
#101
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
The computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does it interpret this? Does the scan mean a drop at the scan location, or is it a bluff?
CTIA
Profile Joined November 2012
France117 Posts
March 13 2016 00:22 GMT
#102
"Even if it has studied all of the many strategies I've used, I'll go at it with an unstoppable strategy I've prepared."

Am I the only one who read this in Goku's voice?
Maru N1 MKP NesTea Mvp IdrA Ryung
coverpunch
Profile Joined December 2011
United States2093 Posts
March 13 2016 00:28 GMT
#103
On March 13 2016 09:15 iFU.pauline wrote:
Most people are saying that an AI would beat a human at Starcraft based on the assumption that in the future it will. That's stupid... How do you debate with that type of argument? :/ The thread is about AlphaGo as it is today, and it is nowhere near that. And even if it manages it in 50 years, when no one is left playing Starcraft, what's the point...

Anyway, the thing that would definitely settle this debate is knowing whether all problems can be solved by calculation...

If yes, then eventually an AI would be capable, just by itself, of running a campaign and becoming president. Or eradicating, say, violence in the world.

Now talk about taking it to the next level...

Then it is obvious that humans can beat the AI because a human with any skill at all can easily defeat the native AI in SC.

What makes the question intriguing is how to break down the problem to create an AI that is clearly better than any human (by beating the world champion).

Someone has to ask the ultimate balance questions: which race does the AI master first? And if you play the perfect AI from each race against each other, would one be clearly better than the others?
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 00:34 GMT
#104
On March 13 2016 09:28 coverpunch wrote:
On March 13 2016 09:15 iFU.pauline wrote:
Most people are saying that an AI would beat a human at Starcraft based on the assumption that in the future it will. That's stupid... How do you debate with that type of argument? :/ The thread is about AlphaGo as it is today, and it is nowhere near that. And even if it manages it in 50 years, when no one is left playing Starcraft, what's the point...

Anyway, the thing that would definitely settle this debate is knowing whether all problems can be solved by calculation...

If yes, then eventually an AI would be capable, just by itself, of running a campaign and becoming president. Or eradicating, say, violence in the world.

Now talk about taking it to the next level...

Then it is obvious that humans can beat the AI because a human with any skill at all can easily defeat the native AI in SC.

What makes the question intriguing is how to break down the problem to create an AI that is clearly better than any human (by beating the world champion).

Someone has to ask the ultimate balance questions: which race does the AI master first? And if you play the perfect AI from each race against each other, would one be clearly better than the others?

There is no transitivity in e-sports, so beating the world champion doesn't necessarily mean you can beat everyone tho xd.
But in the mind of the majority it means exactly that, which is what they care about.
WriterMaru
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-13 00:56:22
March 13 2016 00:35 GMT
#105
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is that when an AI scouts, it instantly knows the tech, the unit types and counts (including the current worker count), and what strategies are possible. A human, meanwhile, 1. is not able to identify the type and quantity of units that quickly,
and 2. most likely does not have a database of every strategy up to this point, their timings, and the correct countermeasures.
An AI has no center of attention like we humans do; it does not need so-called awareness like us humans. It sees what is and what is not in an instant and does not question itself or its decisions.
"You cannot teach a man anything, you can only help him find it within himself."
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 13 2016 00:35 GMT
#106
On March 13 2016 09:12 Poopi wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You can prevent scouting from happening, you can cancel buildings or fly CC, you can kill your own units... or many other random things. The incomplete information is a really tricky issue, as well as the notion of "perfect" micro.
The fun thing is that even if you make two AI play against each other with unlimited APM and complete information, maybe (speaking of sc2) you can't solve the game or one race is indeed overpowered or whatever, thus in the "real" game you can't ensure win either.

So being too confident in either side is probably a bad idea.


I actually can't speak for BW since I don't know a lot about it; I was thinking in SC2 terms and forgetting the context is BW. In SC2 there's no way you would be able to deny the AI enough information to at least stay on equal footing, if it was good enough at gathering it. It would take into account the possibility of canceled buildings or a flying CC (of course this delves into very complicated territory, but that isn't the point), and killing your own units is very rarely, if ever, a useful idea. I agree this is a very tricky issue, but a doable one.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 00:36 GMT
#107
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is, the correct countermeasures vary depending on your own mechanical skills as well as your opponent's.
WriterMaru
iFU.pauline
Profile Joined September 2009
France1529 Posts
March 13 2016 00:41 GMT
#108
I think in the end it all comes down to this:

Limit the human mind to games based on calculation tasks, and eventually the machine will win.

Add other variables in a game where feeling is deeply involved in winning, and the machine will be easily outclassed.

Present day, I don't think an AI can match a human in a game like Brood War, because of the big feeling variable, and this is exactly what Boxer meant by giving the "scouting" example.
No coward soul is mine, No trembler in the world's storm-troubled sphere, I see Heaven's glories shine, And Faith shines equal arming me from Fear
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-13 00:42:06
March 13 2016 00:41 GMT
#109
On March 13 2016 09:36 Poopi wrote:
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is the correct counter measures vary depending of your own mechanical skills as well as your opponent's

Which is another plus for the AI: it won't need a keyboard or fingers to execute commands. It will be faster than any human with a keyboard.
"You cannot teach a man anything, you can only help him find it within himself."
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 00:42 GMT
#110
On March 13 2016 09:41 thePunGun wrote:
Show nested quote +
On March 13 2016 09:36 Poopi wrote:
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is the correct counter measures vary depending of your own mechanical skills as well as your opponent's

which is another plus for the AI, it won't need a keyboard or fingers to execute commands. It will be faster than any human with a keyboard.

Then it doesn't count :o
WriterMaru
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 13 2016 00:43 GMT
#111
On March 13 2016 09:17 necrosexy wrote:
Show nested quote +
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?


Hence why I said "sufficiently advanced." This isn't a matter of how it will do it, but of whether it can deal with the possibility of deception. I can't tell you exactly how AlphaGo deals with all the problems it faces in the game of Go, but it does. Not being able to explain how doesn't mean it can't be done, and I'm convinced it will be possible.
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-13 00:47:53
March 13 2016 00:45 GMT
#112
On March 13 2016 09:42 Poopi wrote:
Show nested quote +
On March 13 2016 09:41 thePunGun wrote:
On March 13 2016 09:36 Poopi wrote:
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is the correct counter measures vary depending of your own mechanical skills as well as your opponent's

which is another plus for the AI, it won't need a keyboard or fingers to execute commands. It will be faster than any human with a keyboard.

Then it doesn't count :o

When you play vs the SC AI, do you think it's using a keyboard? xD
"You cannot teach a man anything, you can only help him find it within himself."
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 00:53 GMT
#113
On March 13 2016 09:45 thePunGun wrote:
Show nested quote +
On March 13 2016 09:42 Poopi wrote:
On March 13 2016 09:41 thePunGun wrote:
On March 13 2016 09:36 Poopi wrote:
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is the correct counter measures vary depending of your own mechanical skills as well as your opponent's

which is another plus for the AI, it won't need a keyboard or fingers to execute commands. It will be faster than any human with a keyboard.

Then it doesn't count :o

When you play vs the SC AI, do you think it's using a keyboard? xD

The SC ingame AI is irrelevant, since it artificially raises the difficulty for the player by cheating in various ways.
What Google wants is to show muscle and beat humans with superior decision making; if it became easy to win by cheating, there would be no interest for them.
WriterMaru
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 13 2016 01:03 GMT
#114
The SC ingame "AI" isn't even really an AI, it's a bot.
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-13 01:04:52
March 13 2016 01:03 GMT
#115
On March 13 2016 09:53 Poopi wrote:
Show nested quote +
On March 13 2016 09:45 thePunGun wrote:
On March 13 2016 09:42 Poopi wrote:
On March 13 2016 09:41 thePunGun wrote:
On March 13 2016 09:36 Poopi wrote:
On March 13 2016 09:35 thePunGun wrote:
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.

Problem is the correct counter measures vary depending of your own mechanical skills as well as your opponent's

which is another plus for the AI, it won't need a keyboard or fingers to execute commands. It will be faster than any human with a keyboard.

Then it doesn't count :o

When you play vs the SC AI, do you think it's using a keyboard? xD

SC ingame AI is irrelevant since they artificially rise the difficulty for the player by cheating with different ways.
What Google wants is showing mustles and beat humans with superior decision making, if it becomes easy to win by cheating there is no interest for them.

Well, an AI using a robotic arm will probably still be faster (some day). However, current robotics is not on the same level as a human arm, and it's even worse when it comes to hands...
"You cannot teach a man anything, you can only help him find it within himself."
Golgotha
Profile Blog Joined January 2011
Korea (South)8418 Posts
March 13 2016 01:08 GMT
#116
Well, it depends: how good is this AI? I've seen some of those BW AIs that are supposedly good, but in reality they suck even with their awesome mechanics.

Baduk is so simple compared to SC. I don't really see why there is an argument that this AI can handle something like SC, where BOs, scouting, and meta/counters come into play.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 13 2016 01:12 GMT
#117
On March 13 2016 05:33 Oshuy wrote:
Show nested quote +
On March 13 2016 03:57 MyLovelyLurker wrote:
On March 13 2016 03:53 Oshuy wrote:
On March 13 2016 02:56 brickrd wrote:
it's not a question of "if," it's a question of when. maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel in complex tasks. they said the same thing about chess, same thing about go, same thing about lots of computerized tasks. it's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen"

On March 13 2016 02:50 Musicus wrote:
All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited.

sorry for finding science interesting!


The "maybe not in 10 years" sounds hopeful. DeepMind was created in 2010. AlphaGo is 18 months old (as in: the project started 18 months ago).

There is a hurdle in designing what to feed the neural networks and how to represent the output in a game of StarCraft: the spaces of both the current state and the potential actions are huge; but once those representations are designed, the learning process will either fail or succeed within a few months.

The fact that information is incomplete is almost irrelevant for a neural network feed. Those are the kind of problems we designed networks for in the first place. Real time and information retention may make things more difficult, but it could get there fast.


It's actually not irrelevant in reinforcement learning, as you need to compute a conditional expectation of the state of play with respect to the information you have - and the update of said expectation will change algorithms by quite a lot. This is being tackled almost as we speak, here is a two weeks old article on the subject - from one of the fathers of AlphaGo - with an application to poker : arxiv.org


Building the dataset for supervised learning from replay databases consisting of both the incomplete information (one player's view) and the complete information (spectator view) should provide a first estimate of potential convergence for a given game representation.

Self-play reinforcement would be great; agreed, I have no idea how to construct an evaluation function (and am quite sure it cannot be done on individual actions, which are mostly meaningless in themselves). I'm unsure whether it would even be necessary at this point (why would supervised all the way, with a spectator AI, be impossible?).

The interesting part of self-play is that the AI would come to the match with its own metagame, which the human player faces for the first time during the match, while the human metagame will have been the basic dataset the AI learned from initially.


I agree. Self-play reinforcement is what Google DeepMind is aiming for, but it might be easier to start with a hybrid approach using replays first. In my opinion they will probably even have to settle for a 'Starcraft for dummies' subset first, with only workers and a couple of units, mirroring the 'one unit at a time' learning curve you get from campaign play.
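The imperfect-information point raised earlier in this exchange, that the agent has to maintain a conditional expectation over what it cannot see, can be sketched as a simple Bayesian belief update over a handful of candidate strategies. All strategy names and probabilities below are invented for illustration:

```python
# Hypothetical sketch: Bayesian belief tracking over a small set of opponent
# strategies, updated from one noisy scouting observation. All strategy names
# and probabilities are invented for illustration.

# Prior belief over the opponent's build (assumed).
prior = {"drop": 0.3, "bio_push": 0.5, "mech": 0.2}

# P(scouting a reactored starport | strategy) -- made-up likelihoods.
likelihood = {"drop": 0.8, "bio_push": 0.3, "mech": 0.1}

def update_belief(belief, obs_likelihood):
    """One Bayes step: posterior is proportional to prior times likelihood."""
    unnorm = {s: belief[s] * obs_likelihood[s] for s in belief}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

posterior = update_belief(prior, likelihood)
# After the scout, "drop" becomes the most likely read, but the belief stays
# a distribution -- the AI hedges instead of committing to a single guess.
```

The point is that a bluff doesn't break this scheme; it just shifts probability mass rather than forcing a wrong binary read.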
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 13 2016 01:13 GMT
#118
On March 13 2016 10:08 Golgotha wrote:
well it depends. how good is this Ai? ive seen some of those BW AIs that are supposedly good but in reality they suck even with their awesome mechanics.

Baduk is so simple compared to SC. dont really see why there is an argument that this AI can handle something like SC where BOs, scouting, and meta/counters come into play.


Nobody ever said this AI can handle any of that yet. They haven't even begun work on anything for this game yet, but they expressed interest in doing so in the future.
Whitewing
Profile Joined October 2010
United States7483 Posts
March 13 2016 01:23 GMT
#119
A computer is very capable of executing well, but it sucks horribly at metagaming and situational reading. It has no star sense, for example, no gut instinct.

For this reason, there's never been an AI that can compete with human players at bridge: AIs have terrible table sense.

Computers do wonderfully in games of perfect information, but they actually are not very good at all at games of imperfect information.
Strategy"You know I fucking hate the way you play, right?" ~SC2John
VArsovski_SC
Profile Joined April 2015
14 Posts
March 13 2016 01:37 GMT
#120
Surely one has to understand there's pride in this, especially from someone like Boxer

A few bits of info of how AI could work:

#1 - Self-learning AI = the "robotic" approach, where you tell the AI what the winning and losing conditions are and leave it to figure the rest out by itself over thousands and millions of iterations of self-learning

#2 - Database AI = Blizzard has millions of games in its database, as well as perfect information from replays. This approach is probably the best for fast results, but it has an obvious flaw: it will just copy human play

#3 - Stick to one race/build and develop the perfect micro to complement it, for example mass Marine/CC into Tankyvacs

#4 - I know people will hate me for this, but it's the perfect approach to make an unbeatable AI = the statistically safest opener with perfect micro, only for mirror matchups
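Approach #1 can be sketched as a toy learner that sees nothing but a win/loss signal and still converges on the better choice. The opener names and "true" win rates below are invented, and hidden from the agent itself:

```python
import random

# Toy sketch of the self-learning approach: the agent sees only win/lose
# outcomes and improves its opening choice over many iterations.
# The "true" win rates are invented and hidden from the agent.
random.seed(0)
TRUE_WINRATE = {"cc_first": 0.7, "proxy_rax": 0.3}

wins = {o: 0 for o in TRUE_WINRATE}
plays = {o: 0 for o in TRUE_WINRATE}

def choose(eps=0.1):
    """Epsilon-greedy: usually exploit the best estimate, sometimes explore."""
    if random.random() < eps or not any(plays.values()):
        return random.choice(list(TRUE_WINRATE))
    return max(plays, key=lambda o: wins[o] / plays[o] if plays[o] else 0.0)

for _ in range(5000):
    opener = choose()
    won = random.random() < TRUE_WINRATE[opener]  # simulated game result
    plays[opener] += 1
    wins[opener] += won

# After enough simulated games the agent settles on the stronger opener,
# purely from the win/loss signal -- no build-order knowledge required.
```

A real StarCraft agent would of course face a vastly larger decision space, but the loop structure (act, observe outcome, update estimates) is the same.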
Dianchie
Profile Joined February 2016
Canada10 Posts
March 13 2016 01:39 GMT
#121
I think it will be interesting to see how a human performs against the AI. The big thing is that humans can create situational strategies that defy logic and still win. Many games of StarCraft have been won that way. Obviously if the AI has perfect micro/macro that will make it a lot easier to win. I don't see it being programmed to work at 2000+ APM like some have said in this thread. They want the AI to feel like playing against a realistic human opponent.
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-13 01:44:07
March 13 2016 01:43 GMT
#122
There is such an incredible amount of value in perfect micro and macro that I can't really see such a high-level AI having an issue cracking StarCraft over time. Yes, the limited info would hinder it tremendously, but we give it back so much more in mechanics. I kind of feel people would just really like SC to be that uncrackable game. One of the reasons I love BW so much is the incredible multitasking required, something which an AI will just totally crush at. Here is, by comparison, a totally shitty bot:

Administrator
ETisME
Profile Blog Joined April 2011
12348 Posts
March 13 2016 01:44 GMT
#123
This is all PR rubbish.
Google is choosing the game with the most exposure, and big community members of course have to jump in to make big claims
其疾如风,其徐如林,侵掠如火,不动如山,难知如阴,动如雷震。
Pwere
Profile Joined April 2010
Canada1556 Posts
Last Edited: 2016-03-13 01:53:50
March 13 2016 01:45 GMT
#124
Discussing a Starcraft AI is barely better than discussing a CS:GO AI. You first need to limit the huge mechanical advantage before you can discuss what is and isn't possible.

From a purely theoretical point of view, SC is not an interesting challenge for an AI programmer/designer, so I doubt the resources will be dedicated to this for a while, and by then it won't be all that impressive.

If they plan to adapt a general AI to play Starcraft, then that is a different challenge, but the outcome still comes down to where they draw the mechanical/perceptual line.
Cluster__
Profile Joined September 2013
United States328 Posts
March 13 2016 01:55 GMT
#125
AI will definitely win...
Liquid`Snute, AcerScarlett, ROOTCatZ, MC, Maru, Soulkey, Losira
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-13 01:58:20
March 13 2016 01:57 GMT
#126
On March 13 2016 10:45 Pwere wrote:
Discussing a Starcraft AI is barely better than discussing a CS:GO AI. You first need to limit the huge mechanical advantage before you can discuss what is and isn't possible.

From a purely theoretical point of view, SC is not an interesting challenge for an AI programmer/designer, so I doubt the resources will be dedicated to this for a while, and by then it won't be all that impressive.

If they plan to adapt a general AI to play Starcraft, then that is a different challenge, but the outcome still comes down to where they draw the mechanical/perceptual line.

Totally agree. It doesn't make any sense to beat top pros with limited strategy and perfect execution and then brag about it from an AI perspective. A waste of time in terms of prestige.
Administrator
necrosexy
Profile Joined March 2011
451 Posts
March 13 2016 02:02 GMT
#127
On March 13 2016 09:35 thePunGun wrote:
Show nested quote +
On March 13 2016 09:17 necrosexy wrote:
On March 13 2016 08:58 DuckloadBlackra wrote:
On March 13 2016 08:35 writer22816 wrote:
Since SC isn't a perfect information game, it stands to reason that a good human player should always have the ability to at least take occasional games off of an AI. Nevertheless, even though I love Boxer and Flash, they're kidding themselves if they think that there will never be an AI that can reliably take games off of them. Most people in the gaming community think AIs are a joke because bots in video games are always easy to beat. If a company like Google or IBM threw significant resources into making a video game AI, these people would very quickly be eating their words lol. There is nothing in either Starcraft game that remotely approaches the intractability of Go, and mechanics-wise a good AI would be able to completely shit on any human player.


A sufficiently advanced AI would be able to do all the scouting it needs to gain enough information to win every time. It would remember everything perfectly and calculate the implications of what it learns with extreme precision. The biggest challenge is programming the decisions it will need to make based off this information.

You're overlooking human deceptiveness.
Computer scouts, e.g., a reactored factory and a starport. So it's a drop (or is it?), but when and where will it arrive?
If the AI detects an enemy scan, how does he interpret this? Does the scan mean dropping at the scan location or is it a bluff?

What you don't see is, when an AI scouts it knows instantly the tech, which type of units, their amount(including the current worker count) and what kind of strategies are possible. Whereas a human is 1. not able to identify the type and quantity of units
and 2. most likely will not have a database of every strategy ever up to this point, the timings of those and the correct counter measures.
An AI has no center of attention like we humans do, it does not need so called awareness like us humans. It sees what is and what is not in an instant and does not question itself or its decisions.

I know the AI will determine all the possible build orders from a scout. But scouting ends, so the AI's read on what the human is doing can be true or false (human deceptiveness).
You seem to be implying the AI will always have scouting information at every point in the game.
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 13 2016 02:02 GMT
#128
On March 13 2016 10:57 Liquid`Nazgul wrote:
Show nested quote +
On March 13 2016 10:45 Pwere wrote:
Discussing a Starcraft AI is barely better than discussing a CS:GO AI. You first need to limit the huge mechanical advantage before you can discuss what is and isn't possible.

From a purely theoretical point of view, SC is not an interesting challenge for an AI programmer/designer, so I doubt the resources will be dedicated to this for a while, and by then it won't be all that impressive.

If they plan to adapt a general AI to play Starcraft, then that is a different challenge, but the outcome still comes down to where they draw the mechanical/perceptual line.

Totally agree. It doesn't make any sense to beat top pros with limited strategy and perfect execution and then brag about it from an AI perspective. A waste of time in terms of prestige.


Exactly.
Kyir
Profile Joined June 2011
United States1047 Posts
March 13 2016 02:48 GMT
#129
Since AlphaGo was specifically designed for Go, saying it won't win at Starcraft is probably a safe assumption.
sc2chronic
Profile Joined May 2012
United States777 Posts
March 13 2016 03:00 GMT
#130
On March 13 2016 09:12 Scarlett` wrote:
Show nested quote +
On March 13 2016 08:17 ProBell wrote:
I'm 100% sure a perfect AI can beat any progamer, at least 90% of the time. Right now most sc2 players probably can't even beat Insane AI, while all pros and hardcore sc2 players find them pretty easy. But if you're going to disagree that a human can beat a computer, think of this: every sc2 unit has somewhat of a "counter" unit. So you make 5+ marines? AI makes 1-2 banelings, not to mention, you can easily program them to never get out-micro'ed, out-economied, make 3rd cc in base? Zerg AI will send a drone for a 4th hatch AND make a good enough defense to counter your potential attack on the 4th, main army, OR drops in the main. SC2 really is about perfect micro/macro, you can say but humans have 'better' game sense or preparations going into the game with a perfect plan, but EVERY sc2 unit or build can be countered. Remember, think of going vs someone who has a PERFECT micro, even-sized army, chances of winning is pretty much next to none.

you're :
not talking about the same game
assuming the ai has full map vision
assuming the game is about 1 fight and who has a better army wins


terrible, terrible, damage
Chocolate
Profile Blog Joined December 2010
United States2350 Posts
March 13 2016 03:00 GMT
#131
On March 13 2016 11:48 Kyir wrote:
Since AlphaGo was specifically designed for Go, saying it won't win at Starcraft is probably a safe assumption.

They mean the algorithm/procedure behind the AI, not the AI specifically trained for Go
stevorino
Profile Joined April 2011
957 Posts
Last Edited: 2016-03-13 03:17:01
March 13 2016 03:16 GMT
#132
All predictions of who would win the duel aside, I am already MASSIVELY hyped!
[_] Terran [_] Zerg [_] Protoss [X] Random ------- Fantasy - hyvaa - sOs
Legionnaire
Profile Joined January 2003
Australia4514 Posts
Last Edited: 2016-03-13 03:27:22
March 13 2016 03:24 GMT
#133
The AI would win fairly easily if time was spent making it.

- Perfect mining at every base, which humans don't do because we can't.
- Mass micro of scout units, constantly running around the edge of the base etc.? Half the time humans don't even look, as they are microing other things.
- Perfect resource spending. So many games I finish a battle and I have 1k+ banked.
- Perfect micro, at both the early game and the late-game mass-army stage? Seriously.
- Worried about mine drops (SC2)? Perfect runaways. Constant mining at mineral patches that are outside the range of the mine. Perfect timing of suiciding a probe to let the others mine for 30 seconds. There are just so many instances where the AI would dominate at a level far in excess of human endeavour.

AI's true advantage is just any unit with range, which is where the heaviest micro element comes into play during battles. But even so, on a simple level, just utilizing a perfect number of stalker attacks on a single target without wasting unneeded firepower.
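That overkill point can be written down as a tiny greedy assignment: give each target only as many shots as it takes to kill it this volley, then move on. The 13-damage / 45-HP numbers are illustrative stand-ins for something like stalker shots versus marines, not exact game values:

```python
import math

# Sketch of overkill-free focus fire: assign each target only as many shots
# as needed to kill it this volley, then move to the next target.
# Damage and HP numbers are illustrative, not exact game values.

def assign_focus_fire(num_attackers, dmg, target_hps):
    """Return shots assigned per target, in order, with no wasted shots."""
    orders = []
    left = num_attackers
    for hp in target_hps:
        shots = min(math.ceil(hp / dmg), left)  # just enough to kill
        orders.append(shots)
        left -= shots
        if left == 0:
            break
    return orders

# 8 attackers at 13 damage vs three 45 HP targets: 4 shots kill one target,
# so the volley cleanly kills two targets instead of dumping all 8 shots
# into the first one.
print(assign_focus_fire(8, 13, [45, 45, 45]))  # -> [4, 4]
```

Humans approximate this by feel; a bot can do it exactly, every volley, which is where the compounding advantage comes from.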

Every single battle from the start of the game would add a 20%+ benefit to the AI, which would then keep snowballing out of control.

Perhaps the perfect strategy Boxer was talking about would be his rax first bunker rush.

I'd pick Terran or Toss as an AI. The link Nazgul posted shows why. Range micro ftw. Rines or blink stalkers.
My hope is one day stupid people will feel the same pain when they talk, as the pain the rest of us feel when we hear them. Twitter: @Legionnaire_au
The_Masked_Shrimp
Profile Joined February 2012
425 Posts
March 13 2016 03:33 GMT
#134
You guys realize that AlphaGo started out learning with a few turns' advantage.

They don't want the machine to win with prestige; that's beside the point for now. They will first try to make an AI that can beat a pro, even if that means it knows what units you build and can see through the fog of war. Then they will iterate on it and make it more humanlike, one step at a time.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
Last Edited: 2016-03-13 03:39:37
March 13 2016 03:39 GMT
#135
The APM talk is justified, yet it goes away quickly - there is nothing preventing an APM cap at something human/Korean (300, 400ish), or even making the AI's only peek into the game engine the human opponent's APM, and using its moving average as a cap on the algorithm's own.

An AI victory in that context would arguably be much more meaningful.
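A cap like that can be sketched as a sliding-window throttle. Everything here (the class, the window size, the helper names) is an invented illustration of the idea, not any real tournament rule:

```python
from collections import deque

# Hypothetical sketch of a moving-average APM cap: track both players'
# actions in a sliding window and allow an AI action only while the AI's
# windowed APM stays below the human's. Class and window size are invented.

class ApmThrottle:
    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.ai_actions = deque()     # timestamps of AI actions
        self.human_actions = deque()  # timestamps of human actions

    def _apm(self, actions, now):
        # Drop actions that fell out of the window, then scale to per-minute.
        while actions and now - actions[0] > self.window_s:
            actions.popleft()
        return len(actions) * (60.0 / self.window_s)

    def record_human(self, t):
        self.human_actions.append(t)

    def try_act(self, t):
        """Permit an AI action at time t only if it would not push the AI
        above the human's current moving-average APM."""
        if self._apm(self.ai_actions, t) >= self._apm(self.human_actions, t):
            return False
        self.ai_actions.append(t)
        return True
```

Under this scheme the AI's mechanical ceiling tracks the human's in real time, so any victory would have to come from decision making rather than raw speed.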
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
EngrishTeacher
Profile Blog Joined March 2012
Canada1109 Posts
March 13 2016 03:41 GMT
#136
I've followed SC since the early OGN BW days, yet I've never understood why the SC AIs (both BW and SC2) are still so terrible vs. a real human opponent.

Reading a few pages of replies has cemented my view that perfect micro/macro will beat "strategy" and "overall awareness" every single time. Exactly what are the main difficulties AI encounters nowadays? Imagine this:

Let's suppose someone creates an AI that opens CC-first or rax-CC every game, then macros and micros perfectly, of course. No deep strategic awareness or even variation in BO is needed; the AI just needs to CONSTANTLY poke and pressure with bio, especially once medivacs are out on the field. In its very essence, SC is a game of economy, and as long as the AI stays within a one-base deficit or less depending on the matchup, I just don't see how the human pro can keep trading efficiently vs. perfect mechanics. Surely the army value lost will eventually favor the AI in the extreme?


Could someone link me to some past AI vs. pro games? I'd be really interested in seeing how the AIs currently lose.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 13 2016 03:56 GMT
#137
On March 13 2016 12:41 EngrishTeacher wrote:
I've followed SC since the early OGN BW days, yet I've never understood why the SC AIs (both BW and SC2) are still so terrible vs. a real human opponent.

Reading a few pages of replies, my view that perfect micro/macro will beat "strategy" and "overall awareness" every single time has been cemented. Exactly what are the main difficulties encountered by AI nowadays? Imagine this:

Let's suppose someone creates an AI that opens CC first or rax-CC every game, then macros and micros perfectly of course. No deep strategic awareness or even variations in BO are needed, the AI just need to CONSTANTLY poke and pressure with bio, especially after medivacs are out on the field. In its very essence, SC is a game of economy, and as long as the AI maintains a 1 base deficit or less depending on the matchup, I just don't see how the human pro can keep trading efficiently vs. perfect mechanics. Surely, eventually the army value lost will favor the AI in the extreme?


Could someone link me to some past AI vs. Pro games? I'd be really interested in seeing how the AIs are currently losing.


This strategy goes away if the AI learns by playing against itself.


It's that process which is interesting, much more so than creating an ever-winning, 6-pool + 1000 APM bot.
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
fishjie
Profile Blog Joined September 2010
United States1519 Posts
Last Edited: 2016-03-13 04:02:22
March 13 2016 04:01 GMT
#138
A better test is if an AI can beat a human if its APM were capped at some limit.

I'm going to say yes. AI has been improving by leaps and bounds. In the machine learning world, the switch from statistical methods to neural nets (which are vaguely modelled on our limited understanding of the human brain) is happening. Oftentimes, due to a lack of training data in problem fields such as natural language processing, statistical methods are used to generate the training data. Either way, the results speak for themselves: image search in Google Photos (without any tagging) is disturbingly good, self-driving cars, Watson trashing Jennings, AlphaGo destroying the world's best player. So on and so forth.

Be afraid. Mass unemployment is just one of the dangers of this runaway AI.
Legionnaire
Profile Joined January 2003
Australia4514 Posts
March 13 2016 04:07 GMT
#139
On March 13 2016 12:41 EngrishTeacher wrote:
I've followed SC since the early OGN BW days, yet I've never understood why the SC AIs (both BW and SC2) are still so terrible vs. a real human opponent.


What humans like is something that plays:
- a fraction above their current ability (so you get a challenge and feel like you've done something worthwhile when you win);
- humanlike (it can be unpredictable, yet it still reacts to what you are doing), so that it gives you a challenge;
- yet also beatable, else what's the point of playing? (Think TA: Escalation, where the hard comp has infinite resources, and if you don't kill it with a commander rush you die after 5 mins.)

Yet from a company's perspective, this is so damn hard and expensive to do. It's far easier to make a 'hard AI' by simply cheating and giving it map awareness or money or something. (Think Civilization 4: they add difficulty by just making the AI build faster, research faster, and start with more units.) This takes a day of effort instead of weeks/months of design/build time by a team of programmers.

Even if a company goes to all that effort to make a good AI, someone can still find a flaw in it which means you will always win (SCV rush and attack the enemy CC, then run away and have all the SCVs follow you, etc.).

It is a fine line for companies to walk. But I do agree, I wish they had something better for good players to play against.

Considering the expense of making real AI, combined with the fact that most players are bad and would prefer a 20-min BGH no-rush game so they can build up and move out at their own pace, you can see why game dev goes the way it does. The top 5% will never really be happy with any AI that can be built.

Besides, that's what multiplayer is for
My hope is one day stupid people will feel the same pain when they talk, as the pain the rest of us feel when we hear them. Twitter: @Legionnaire_au
EvilTeletubby
Profile Blog Joined January 2004
Baltimore, USA22251 Posts
March 13 2016 04:23 GMT
#140
Holy shit it's Legionnaire.
Moderator | http://carbonleaf.yuku.com/topic/408/t/So-I-proposed-at-a-Carbon-Leaf-concert.html ***** RIP Geoff
Ryncol
Profile Joined July 2011
United States980 Posts
Last Edited: 2016-03-13 04:35:36
March 13 2016 04:33 GMT
#141
I'm not very well-educated on the subject of AI or Go, but assuming an APM cap is in place to make the game mechanically possible, it seems like a pretty tall order to make an AI that won't drop a game vs. a human player. Like, take Has vs. Jaedong with the seven-pylon wall-off. I'm skeptical that even a God AI would react perfectly and win that, because it's so off-the-walls fucking insane.

It seems like it would definitely be possible for an AI to beat the best players most of the time, but StarCraft seems so much more volatile than Go or Chess, especially with the possible denial/hiding of information, or straight-up tricking the AI (researching cloak, then cancelling it; cancelling tech after it's scouted; etc.). I think the best/sneakiest/craftiest players would be able to outsmart it once in a while. I doubt the best players would NEVER win, you know?
SK.Testie
Profile Blog Joined January 2007
Canada11084 Posts
March 13 2016 04:34 GMT
#142
Anyone remember that 200 wraith vs 200 muta video?
That was pretty serious stuff.
Social Justice is a fools errand. May all the adherents at its church be thwarted. Of all the religions I have come across, it is by far the most detestable.
BronzeKnee
Profile Joined March 2011
United States5217 Posts
Last Edited: 2016-03-13 05:03:28
March 13 2016 04:57 GMT
#143
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too. I think many of you are underestimating the power of the brain.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting, the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploited.
writer22816
Profile Blog Joined September 2008
United States5775 Posts
March 13 2016 05:03 GMT
#144
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting and the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploit.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.
8/4/12 never forget, never forgive.
BronzeKnee
Profile Joined March 2011
United States5217 Posts
Last Edited: 2016-03-13 05:12:39
March 13 2016 05:06 GMT
#145
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting and the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploit.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game on another computer, using a keyboard, mouse and monitor, because that is how StarCraft is played. I didn't create StarCraft, so don't blame me that that is how the game is played.

Having those limitations is what makes StarCraft difficult. If the AI can be blink-microing Stalkers while simultaneously warping in units at a pylon off-screen (outside its field of vision), then that is cheating.

The AI in Chess or Go can do nothing a human cannot; the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited also.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields; if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too - subtle sounds would be all I'd need to know to send a worker to a mineral line.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 13 2016 06:02 GMT
#146
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting and the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploit.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.

Basically, the AI would have to interact with the game in the same way humans do. It would have to give the game inputs via a virtual keyboard and mouse (with some limitations on speed and accuracy to make this fair), which would also mean it'd have to reason about things like what it uses its hotkeys for. It'd also have to observe the game through the same viewport a player sees and have access to the minimap, as opposed to just having the whole game state available to it constantly, as the in-game bots do.
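A rough illustration of that kind of "fair" interface - purely a sketch, with made-up class names and an assumed ~50 ms minimum delay between inputs standing in for human speed limits:

```python
from dataclasses import dataclass

# Hypothetical sketch of the interface described above: the bot only sees
# the current viewport plus minimap pixels, and can only issue keyboard/
# mouse events, rate-limited to a human-plausible speed. All names and
# the 50 ms figure are assumptions for illustration.

@dataclass
class Observation:
    viewport_pixels: bytes   # what is on screen right now
    minimap_pixels: bytes    # low-detail overview
    # Deliberately NOT included: full game state, off-screen unit positions.

@dataclass
class MouseClick:
    x: int
    y: int
    button: str = "left"

@dataclass
class KeyPress:
    key: str                 # e.g. a hotkey like "4" or "a"

class FairController:
    MIN_DELAY = 0.05         # assumed floor of ~50 ms between inputs

    def __init__(self):
        self.queue = []
        self.last_time = -float("inf")

    def submit(self, action, now):
        """Accept an input event only if enough time has passed since
        the previous one; otherwise reject it as inhumanly fast."""
        if now - self.last_time < self.MIN_DELAY:
            return False
        self.last_time = now
        self.queue.append(action)
        return True
```

The point of the sketch is that the bot's "hands" become a bottleneck it must plan around, just like a human's, rather than a free channel into the engine.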
Jibba
Profile Blog Joined October 2007
United States22883 Posts
Last Edited: 2016-03-13 06:48:59
March 13 2016 06:44 GMT
#147
On March 13 2016 10:57 Liquid`Nazgul wrote:
On March 13 2016 10:45 Pwere wrote:
Discussing a Starcraft AI is barely better than discussing a CS:GO AI. You first need to limit the huge mechanical advantage before you can discuss what is and isn't possible.

From a purely theoretical point of view, SC is not an interesting challenge for an AI programmer/designer, so I doubt the resources will be dedicated to this for a while, and by then it won't be all that impressive.

If they plan to adapt a general AI to play Starcraft, then that is a different challenge, but the outcome still comes down to where they draw the mechanical/perceptual line.

Totally agree. It doesn't make any sense to beat top pros with limited strategy and perfect execution and then brag about it from an AI perspective. A waste of time in terms of prestige.

It would be kind of awesome to see an AI do a probe rush in BW and win with perfect micro. At least it'd be awesome the first time, and it wouldn't be awesome because of the AI but because it's such an infuriating, unsolvable way to lose.
Moderator | Now I'm distant, dark in this anthrobeat
todespolka
Profile Joined November 2012
221 Posts
Last Edited: 2016-03-13 06:59:33
March 13 2016 06:54 GMT
#148
Does SC/BW not have an APM limit? Also, will the AI process information the same way humans do (from the picture)? And will it control the units with a mouse? If not, that's already a huge advantage.

No human is able to select multiple individual units at once, for example, and if the AI has an advantage, it's not fair.
todespolka
Profile Joined November 2012
221 Posts
March 13 2016 07:02 GMT
#149
On March 13 2016 14:06 BronzeKnee wrote:
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting and the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploit.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game using another computer using a keyboard, mouse and monitor because that is how Starcraft is played. I'm didn't create Starcraft so don't blame that that is how the game is played.

Having those limitations is what makes Starcraft difficult. If the AI can at once be blink microing Stalkers while warping in units at pylon off screen (off field of vision) then that is cheating.

The AI in Chess or Go can do nothing a human cannot, the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited also.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields, if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too, just subtle sounds would be all I would need to know to send a worker to a mineral line.



Exactly!
Liquid`Jinro
Profile Blog Joined September 2002
Sweden33719 Posts
Last Edited: 2016-03-13 07:14:35
March 13 2016 07:07 GMT
#150
On March 13 2016 13:33 Ryncol wrote:
I'm not very well-educated on the subject of AI or Go, but assuming an APM cap is in place to make the game mechanically possible, it seems like a pretty tall order to make an AI that won't drop a game vs a human player. Like, take Has vs Jaedong with the seven pylon wall off. I'm skeptical of even a God AI would reacting perfectly and wining that, because it's so off the walls fucking insane.

It definitely seems like it would definitely be possible for an AI to beat the best players most of the time, but Starcraft seems so much more volatile than Go or Chess, especially with the possible denial/hiding of information, or straight up tricking the AI (researching cloak, then cancelling it, cancelling tech after it's scouted, etc). I think that the best/sneakiest/craftiest players would be able to outsmart it once in a while. I doubt that the best players would NEVER win, you know?

I don't think you would necessarily have the same goal posts for a game like Starcraft as you would a game of Go.

I.e. the best heads-up limit poker bots have been (as of a few years ago) better than the best humans, but you don't expect them to win every hand. Variance differs between games: a low-level pro will never beat a high-ranked pro at Go or chess, seldom in StarCraft (especially if it's StarCraft 2, I think), but quite often in poker.

Doesn't mean they won't make an AI that 5-0s the best human, of course - just that it would require a higher skill gap to ensure such an outcome.
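The variance point can be made concrete with a one-liner: if games are independent and the bot wins each with probability p, a 5-0 sweep happens with probability p^5, so even a 90% per-game favorite fails to sweep about 41% of the time.

```python
def sweep_probability(p, games=5):
    """Chance of winning every game in a series of independent games,
    each won with probability p."""
    return p ** games

for p in (0.6, 0.9, 0.99):
    print(f"per-game win rate {p:.2f} -> 5-0 chance {sweep_probability(p):.3f}")
```

To *guarantee* a sweep in the statistical sense, the per-game edge has to be enormous - which is Jinro's "higher skill gap" in numbers.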
Moderatortell the guy that interplanatar interaction is pivotal to terrans variety of optionitudals in the pre-midgame preperatories as well as the protosstinal deterriggation of elite zergling strikes - Stimey n | Formerly FrozenArbiter
Yferi
Profile Joined April 2010
United States90 Posts
March 13 2016 07:24 GMT
#151
On March 13 2016 12:41 EngrishTeacher wrote:
Could someone link me to some past AI vs. Pro games? I'd be really interested in seeing how the AIs are currently losing.


https://www.youtube.com/playlist?list=PLokMj1YGn8mgnXUgtFk-WmY2dy73Bdhz0

Human >>> AI as of now. There are some other games on YouTube, but those are vs. players who aren't as good.
iFU.pauline
Profile Joined September 2009
France1529 Posts
Last Edited: 2016-03-13 08:24:12
March 13 2016 07:59 GMT
#152
AI can't do multiple actions at the same time, because there is only one cursor, one keyboard and one screen; if it does, then it's cheating. It also can't make a unit attack faster... so no matter what, it will have to spend the same amount of time microing as a human does. I don't get what's so difficult to understand about that...

What's the point of having 5000 APM - telling a unit to attack 5000 times before the next attack cycle? At that speed a unit wouldn't even respond anyway, because all you need is one click until the next cycle. And what could an AI do in that little space of time? It won't produce units faster either... Time resolution is nowhere near fine-grained enough for a computer to make a significant difference compared to a progamer... That doesn't make any sense. StarCraft has its mechanical limits as well, and the AI has to be bound by them too... Pretty sure we will send a human to Mars before an android beats a human at Brood War...
No coward soul is mine, No trembler in the world's storm-troubled sphere, I see Heaven's glories shine, And Faith shines equal arming me from Fear
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2016-03-13 08:39:57
March 13 2016 08:27 GMT
#153
On March 13 2016 08:50 Destructicon wrote:
I think way too many of you are just going into this thinking of perfect mechanics and mid to late game engagements where the AI just destroys humans. I think that's a pretty narrow view of how things would unfold. The AI would need to learn such intricacies as scouting and interpreting the information it sees, because if the AI gets bunker rushed 3 times in a row all its perfect micro and mechanics will be useless.

Yes its true BW and SC2 are very heavily mechanics dependent and a human would probably get destroyed if he'd try to fight an AI toe to toe in any late game situation. However the early to mid game humans can probably juggle a lot of tasks efficiently enough to the point the advantage of an AI would be negligible and then the game sense would kick in. How does the AI learn the subtle differences between a economic 1/1/1 or a offensive one? Or the difference between the different variations of Gateway all-ins (with or without blink).

Yeah probably in 5-10 years the programmers will crack it. But I think they'll have one hell of a fight ahead of them when tackling BW and SC2, the information acquisition, interpretation and decisions modules will probably take tons of time to fine tune and refine.

It would build an opening database, though: if the information it scouts matches something it has seen before, it will know the correct response. And with regards to trickery, I do not think you can easily fool a robust AI with something like that. You would just have to add more openings to the database, like "masked mech into bio all-in" or so. I think the opening database will always be small enough to manage, even if you add these variations per opening and even if you allow for masked openings.
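A toy version of such an opening database might look like this - the opening names, scouted features, and responses are all invented for illustration, and real matching would of course be far richer:

```python
# Each opening maps a set of scoutable observations to a prepared response.
# Scouted info is matched by overlap minus contradictions, so partial or
# "masked" scouting still picks the closest known opening.

OPENINGS = {
    "economic 1/1/1":  ({"rax", "factory", "starport", "expansion"},
                        "take a third"),
    "offensive 1/1/1": ({"rax", "factory", "starport", "many_scvs_pulled"},
                        "build defense"),
    "gateway all-in":  ({"4_gates", "no_expansion"},
                        "wall and hold"),
}

def best_response(scouted):
    """Return (opening, response) for the stored opening that best
    matches the scouted set of observations."""
    def score(known):
        # Reward matched features, penalize scouted features the
        # candidate opening doesn't explain.
        return len(known & scouted) - len(scouted - known)
    name = max(OPENINGS, key=lambda n: score(OPENINGS[n][0]))
    return name, OPENINGS[name][1]
```

Adding a new trick someone pulls off is then just one more row in the table, which is Grumbels' point about the database staying manageable.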
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
TheDougler
Profile Joined April 2010
Canada8302 Posts
March 13 2016 08:37 GMT
#154
On March 13 2016 03:08 Clonester wrote:
The same has the complete Go community said about AlphaGo and also Lee Sedol said, he will win so easy against AlphaGo. It became a train wreck... for the Go community.

Same will happen with AlphaStarcraft for Boxer, Flash, Bisu and the complete Community.


Looks like the majority here don't actually think the StarCraft pros have a chance. It's mostly the pros themselves that are confident. The rest of us just want to see the games.
I root for Euro Zergs, NA Protoss* and Korean Terrans. (Any North American who has beat a Korean Pro as Protoss counts as NA Toss)
Charoisaur
Profile Joined August 2014
Germany15900 Posts
March 13 2016 08:45 GMT
#155
On March 13 2016 17:37 TheDougler wrote:
On March 13 2016 03:08 Clonester wrote:
The same has the complete Go community said about AlphaGo and also Lee Sedol said, he will win so easy against AlphaGo. It became a train wreck... for the Go community.

Same will happen with AlphaStarcraft for Boxer, Flash, Bisu and the complete Community.


Looks the majority here don't think the Starcraft pros have a chance actually. It's mostly the pros themselves that are confident. The rest of us just want to see the games.

If it happened in the near future, I'm confident the pros would win. However, it's inevitable that one day the AI will be able to defeat humans. The question is just how long it'll take.
Many of the coolest moments in sc2 happen due to worker harassment
rabidch
Profile Joined January 2010
United States20289 Posts
March 13 2016 08:46 GMT
#156
It will take a while for Google to come up with this, though (a few years or more), mostly because of hashing out plans with Blizzard (assuming they'll cooperate), designing how they'll train and make decisions, and then getting enough computing power and time to train. Even with Google's cloud, I think it will take a massive amount of computing to train the thing, assuming they choose neural networks like they did with AlphaGo.
LiquidDota Staff | Only a true king can play the King.
hexhaven
Profile Joined July 2014
Finland926 Posts
March 13 2016 08:49 GMT
#157
There's an excellent (and very relevant) piece about AI and AI's future here:
http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Writer | I shoot events. | http://www.jussi.co/esports
Spazzer
Profile Blog Joined February 2011
Canada139 Posts
March 13 2016 08:51 GMT
#158
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issue. Here is an example of what it could accomplish, and then some.
www.SpazCraft.com
rednusa
Profile Joined October 2012
651 Posts
March 13 2016 09:23 GMT
#159
Has Google formally expressed any interest in a SC2 AI project?
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 13 2016 09:35 GMT
#160
Emperor has spoken.

who are we to question his decree
64K RAM SYSTEM 38911 BASIC BYTES FREE
Grettin
Profile Joined April 2010
42381 Posts
March 13 2016 09:36 GMT
#161
On March 13 2016 18:23 rednusa wrote:
Has Google formally expressed any interest in a SC2 AI project?


Haven't seen anything but this.

"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at the Structure Data event in San Francisco.

Source
"If I had force-fields in Brood War, I'd never lose." -Bisu
Kerence
Profile Joined May 2011
Sweden1817 Posts
March 13 2016 09:36 GMT
#162
On March 13 2016 18:23 rednusa wrote:
Has Google formally expressed any interest in a SC2 AI project?

Apparently they have, there's already a thread about this.
http://www.teamliquid.net/forum/starcraft-2/505728-google-vs-sc2
Although reading the linked article it's not clear if they are talking about SC:BW or SC2.
http://uk.businessinsider.com/google-deepmind-could-play-starcraft-2016-3?r=US&IR=T
I am here in the shadows.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2016-03-13 09:40:05
March 13 2016 09:39 GMT
#163
On March 13 2016 17:46 rabidch wrote:
it will take a while for google to come up with this though (a few years or more), mostly because of hashing out plans with blizzard (assuming theyll cooperate), designing how theyll train and do decisions, and then getting enough computing power and time to train. even with google's cloud i think it will take a massive amount of computing to train the thing, assuming they choose neutral networks like they did with alphago

Why would they need permission from Blizzard, though? Their other AIs work with just visual input; it's not like they would need access to the game state.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 13 2016 09:56 GMT
#164
Without access to the game state, and with the same control inputs given to human players, I think it will take quite a bit of time to optimize even how the AI controls the game.
64K RAM SYSTEM 38911 BASIC BYTES FREE
weikor
Profile Blog Joined March 2011
Austria580 Posts
March 13 2016 09:58 GMT
#165
On March 13 2016 18:39 Grumbels wrote:
On March 13 2016 17:46 rabidch wrote:
it will take a while for google to come up with this though (a few years or more), mostly because of hashing out plans with blizzard (assuming theyll cooperate), designing how theyll train and do decisions, and then getting enough computing power and time to train. even with google's cloud i think it will take a massive amount of computing to train the thing, assuming they choose neutral networks like they did with alphago

Why should they need permission from Blizzard though? Their other AIs would work with just visual input, it is not like they would need access to the game state.


It's technically botting, and I think Blizzard could sue a large company for developing illegal tools - that's why they'd want permission.

I'm pretty sure an advanced AI would crush any human player in StarCraft.

Just look at those micro bots. Add perfect macro to that, and they would need one build order to win 100% of games.

The advantage AI will always have in any game is 100% memory, being able to calculate multiple outcomes, and perfect multitasking.
CrayonPopChoa
Profile Blog Joined November 2011
Canada761 Posts
March 13 2016 10:08 GMT
#166
What would be mankind's representatives for each matchup? Going off of peak Elo in BW, it would be:

Flash vs. TZP
Jaedong vs. ZTP
Bisu vs. ZP
Jangbi/Stork vs. T
BW4LIFE
CrayonPopChoa
Profile Blog Joined November 2011
Canada761 Posts
March 13 2016 10:13 GMT
#167
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issues. Here is an example of what it could accomplish and some. https://www.youtube.com/watch?v=IKVFZ28ybQs




But isn't that cheating? Like, we aren't supposed to know what the siege tanks are targeting - why would the AI know which ling is being targeted? It's not like a seeker missile, where an alert shows up. It's different from when a unit is taking damage and you then micro it away; a tank shot comes without you really knowing where it's going to hit. Maybe the AI could guess, but if you're controlling the tanks you can click on something else, no?
BW4LIFE
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-13 10:21:48
March 13 2016 10:20 GMT
#168
On March 13 2016 19:13 CrayonPopChoa wrote:
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issues. Here is an example of what it could accomplish and some. https://www.youtube.com/watch?v=IKVFZ28ybQs




but isnt that cheating, like we arent supposed to know what the siege tanks are targeting?? why would the AI know which ling is being targeted. its not like a seeker missle where it shows up alerting you. its different then like when a unit is taking damage u then micro it away. Tank shot comes without you really knowing where its gonna hit, maybe the AI could guess, but if your controlling ur tanks u can click on something else no?

In theory you do know which one they're going to be attacking. As a player, you will know which unit will be auto-targeted first; it is just part of the behavior of units. It isn't random. I don't think this bot knows which one they are targeting by scanning game activity files. This bot knows which one they are attacking because, yes, you can determine that based on tank behavior. Professional players play around this (or should) all the time. This video is an even better example (than the marine-splitting one) of why an AI would totally crush any StarCraft pro given some time to learn.

I've seen some suggestions on limiting the APM and number of clicks to be similar to mouse/keyboard input. I don't think that really matters in the end. I doubt the zergling vs. tank video involves a high amount of APM. Even if it did, you can get pretty much 90% of the efficiency by just picking out the individual zerglings that are going to be targeted while the rest of your army is on a-move. It's pretty much just clicking accuracy, which you can do with relatively low APM.
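The trick described here can be sketched as follows; note that the "nearest zergling" targeting rule below is a stand-in simplification for illustration, not the actual engine's target-selection logic:

```python
# Sketch of deterministic-targeting micro: if the engine's auto-target
# choice can be predicted, a bot only needs a handful of clicks to pull
# back the zerglings about to be shot while the rest stay on attack-move.
# Units are modelled as (x, y) tuples; everything here is illustrative.

def predicted_target(tank, zerglings):
    """Predict which zergling the tank will auto-target.
    Simplified rule: the nearest one wins."""
    return min(zerglings,
               key=lambda z: (z[0] - tank[0]) ** 2 + (z[1] - tank[1]) ** 2)

def micro_orders(tanks, zerglings):
    """Retreat every predicted target; attack-move everything else."""
    targeted = {predicted_target(t, zerglings) for t in tanks}
    return {z: ("retreat" if z in targeted else "attack-move")
            for z in zerglings}
```

The order count scales with the number of tanks, not the size of the army - which is why the clicking-accuracy point above matters more than raw APM.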
Administrator
KT_Elwood
Profile Joined July 2015
Germany858 Posts
March 13 2016 10:25 GMT
#169
I think you cannot have a fair AI vs. human match without applying the mechanical constraints of a physical player to the AI.

So I guess an AI without any restraint could easily win against any human in SC(1/2). 40,000 APM? Why not. Perfect macro, perfect micro, and millions of games to find the best possible reaction.
The first constraint must be APM and data input. The AI should only see the screen (pixels) and interact through mouse/keyboard input.
"Recognition" alone will be a serious task.

The easiest way: make a robot play the game.
And I don't see a robot handling SC(2).

"First he eats our dogs, and then he taxes the penguins... Donald Trump truly is the Donald Trump of our generation. " -DPB
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-13 10:44:06
March 13 2016 10:29 GMT
#170
On March 13 2016 19:25 KT_Elwood wrote:
I think you cannot have a fair AI vs. human match without applying the mechanical constraints of a physical player to the AI.

So I guess an AI without any restraint could easily win against any human in SC(1/2). 40,000 APM? Why not. Perfect macro, perfect micro, and millions of games to find the best possible reaction.
The first constraint must be APM and data input. The AI should only see the screen (pixels) and interact through mouse/keyboard input.
"Recognition" alone will be a serious task.

The easiest way: make a robot play the game.
And I don't see a robot handling SC(2).


The problem lies in:
  • Accuracy
  • Ability to read and predict
  • No wasted clicks
  • Always 'having an eye on the screen' aka response time

I'm pretty sure you can make an absolutely crushing AI with relatively low APM (200 or so should be enough). There's no need to assume this AI will need 50k APM for the above. More speed will of course let it do even more, but the gains are pretty marginal, since speed is one of the least important factors once you account for the points listed above. Limiting the APM isn't going to change much.
Administrator
juvenal
Profile Joined July 2013
2448 Posts
Last Edited: 2016-03-13 10:50:56
March 13 2016 10:43 GMT
#171
On March 13 2016 15:44 Jibba wrote:
On March 13 2016 10:57 Liquid`Nazgul wrote:
On March 13 2016 10:45 Pwere wrote:
Discussing a Starcraft AI is barely better than discussing a CS:GO AI. You first need to limit the huge mechanical advantage before you can discuss what is and isn't possible.

From a purely theoretical point of view, SC is not an interesting challenge for an AI programmer/designer, so I doubt the resources will be dedicated to this for a while, and by then it won't be all that impressive.

If they plan to adapt a general AI to play Starcraft, then that is a different challenge, but the outcome still comes down to where they draw the mechanical/perceptual line.

Totally agree. It doesn't make any sense to beat top pros with limited strategy and perfect execution and then brag about it from an AI perspective. A waste of time in terms of prestige.

It would be kind of awesome to see an AI do a probe rush in BW and win with perfect micro. At least it'd be awesome the first time, and it wouldn't be awesome because of the AI but because it's such an infuriating, unsolvable way to lose.

"Unsolvable"? You mean no human being can rush out a probe-tight wall?
Michael Probu
KT_Elwood
Profile Joined July 2015
Germany858 Posts
March 13 2016 10:49 GMT
#172
Agreed on the usefulness of an APM cap.
I see the problem in the difference between access to the game state (100% of the possible data in a handy format) and screen/pixel data only.
An AI with access to the game state can have flawless multitasking, which will win you most SC2 games.
If you never miss a beat, really never, then you can afford to send units out to scout everything at any time. Building Depots to 99% around the map and always cancelling at the last second would give you so much information. Or have 3 Reapers/lings around that give you vision of any angle of attack (with perfect micro, those can always escape!). Players might not do that because they need those minerals for their build order, but a computer could just scrape those minerals together from perfect worker timings, perfect mineral mining, and so on.
I think of it as a possible 10-player Archon mode with mind synchronisation.
A 10-player Archon team would always have 100% of the possible information and, most of all, "awareness", not only minimap vision.

"First he eats our dogs, and then he taxes the penguins... Donald Trump truly is the Donald Trump of our generation. " -DPB
Dumbledore
Profile Joined April 2011
Sweden725 Posts
March 13 2016 11:06 GMT
#173
On March 13 2016 18:58 weikor wrote:
On March 13 2016 18:39 Grumbels wrote:
On March 13 2016 17:46 rabidch wrote:
It will take a while for Google to come up with this, though (a few years or more), mostly because of hashing out plans with Blizzard (assuming they'll cooperate), designing how they'll train and make decisions, and then getting enough computing power and time to train. Even with Google's cloud, I think it will take a massive amount of computing to train the thing, assuming they choose neural networks like they did with AlphaGo.

Why should they need permission from Blizzard though? Their other AIs would work with just visual input, it is not like they would need access to the game state.


It's technically botting, and I think Blizzard could sue a large company for developing illegal tools - that's why they'd need permission.

I'm pretty sure an advanced AI would crush any human player in StarCraft.

Just look at those micro bots. Add perfect macro to that and they would need one build order to win 100% of games.

The advantage AI will always have in any game is 100% memory, being able to calculate multiple outcomes, and perfect multitasking.



You really think botting in games is illegal? lol
Have a nice day ;)
404AlphaSquad
Profile Joined October 2011
839 Posts
March 13 2016 11:15 GMT
#174
I am on team Human
aka Kalevi
Charoisaur
Profile Joined August 2014
Germany15900 Posts
March 13 2016 11:27 GMT
#175
On March 13 2016 19:20 Liquid`Nazgul wrote:
On March 13 2016 19:13 CrayonPopChoa wrote:
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issue. Here is an example of what it could accomplish, and then some. https://www.youtube.com/watch?v=IKVFZ28ybQs

But isn't that cheating? We aren't supposed to know which unit the siege tanks are targeting. Why would the AI know which ling is being targeted? It's not like a seeker missile, where an alert shows up. It's different from microing a unit away once it starts taking damage: a tank shot comes without you really knowing where it's going to hit. Maybe the AI could guess, but if you're controlling your tanks you can click on something else, no?

In theory you do know which one they're going to attack. As a player you know which unit will be auto-targeted first; it's just part of the behavior of units, not random. I don't think this bot knows which one is being targeted by scanning game activity files; it knows because you can determine that from tank behavior. Professional players play around this (or should) all the time. This video is an even better example (than the marine-splitting one) of why an AI would totally crush any StarCraft pro, given some time to learn.

I've seen some suggestions on limiting APM and the number of clicks to something similar to mouse/keyboard input. I don't think that really matters in the end. I doubt the zergling vs. tank video involves very high APM. Even if it did, you can get about 90% of the efficiency just by picking out the individual zerglings that are about to be targeted while the rest of your army is on a-move. It's pretty much just clicking accuracy, which you can achieve with relatively low APM.

Really? Constantly box-clicking the zerglings surrounding the targeted zergling and individually pulling them away, and doing that multiple times at once? I'm pretty sure the APM is insane in that scenario.
Many of the coolest moments in sc2 happen due to worker harassment
moonlawn
Profile Joined May 2010
Latvia6 Posts
March 13 2016 11:33 GMT
#176
Now say that to AUTOMATON 2000.



Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
March 13 2016 11:34 GMT
#177
On March 13 2016 20:27 Charoisaur wrote:
On March 13 2016 19:20 Liquid`Nazgul wrote:
On March 13 2016 19:13 CrayonPopChoa wrote:
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issue. Here is an example of what it could accomplish, and then some. https://www.youtube.com/watch?v=IKVFZ28ybQs

But isn't that cheating? We aren't supposed to know which unit the siege tanks are targeting. Why would the AI know which ling is being targeted? It's not like a seeker missile, where an alert shows up. It's different from microing a unit away once it starts taking damage: a tank shot comes without you really knowing where it's going to hit. Maybe the AI could guess, but if you're controlling your tanks you can click on something else, no?

In theory you do know which one they're going to attack. As a player you know which unit will be auto-targeted first; it's just part of the behavior of units, not random. I don't think this bot knows which one is being targeted by scanning game activity files; it knows because you can determine that from tank behavior. Professional players play around this (or should) all the time. This video is an even better example (than the marine-splitting one) of why an AI would totally crush any StarCraft pro, given some time to learn.

I've seen some suggestions on limiting APM and the number of clicks to something similar to mouse/keyboard input. I don't think that really matters in the end. I doubt the zergling vs. tank video involves very high APM. Even if it did, you can get about 90% of the efficiency just by picking out the individual zerglings that are about to be targeted while the rest of your army is on a-move. It's pretty much just clicking accuracy, which you can achieve with relatively low APM.

Really? Constantly box-clicking the zerglings surrounding the targeted zergling and individually pulling them away, and doing that multiple times at once? I'm pretty sure the APM is insane in that scenario.

I mean, sure, it isn't a walk in the park, but for a computer with perfect accuracy the APM doesn't need to be abnormally high to get a high reward. You can keep improving it with practically unlimited APM, though. Essentially, pulling zerglings away and selecting single zerglings with perfect accuracy should be doable while keeping the numbers low.
Administrator
outscar
Profile Joined September 2014
2832 Posts
Last Edited: 2016-03-13 11:44:20
March 13 2016 11:42 GMT
#178
The Emperor has spoken, so Google needs to look into some other things. He's gonna bunker rush three times and the score is gonna be 3-0. If AlphaGo takes risks, then we're gonna get a new pimpest play where BoxeR humiliates the machine. Hell, it's about damn time.
sunbeams are never made like me...
Fighter
Profile Joined August 2010
Korea (South)1531 Posts
Last Edited: 2016-03-13 11:52:57
March 13 2016 11:48 GMT
#179
Has anyone posted any links to some of those micro bots? I remember one where lings were micro'd perfectly against tanks, so only one died to each tank shot. It made lings look ungodly powerful.

edit: OH. Yep, it's on this page :p
For Aiur???
Charoisaur
Profile Joined August 2014
Germany15900 Posts
March 13 2016 11:51 GMT
#180
On March 13 2016 20:48 Fighter wrote:
Has anyone posted any links to some of those micro bots? I remember one where lings were micro'd perfectly against tanks, so only one died to each tank shot. It made lings look ungodly powerful.

No, there aren't any links...
Many of the coolest moments in sc2 happen due to worker harassment
loppy2345
Profile Joined August 2015
39 Posts
March 13 2016 12:11 GMT
#181
The real question is whether an AI controlling itself would be able to beat a micro-bot controlled by a human (where the human comes up with the game strategies). If a micro-bot can be used by an AI, it should be available to humans too - a micro-bot is a tool for controlling units, similar to a keyboard and mouse.
Big J
Profile Joined March 2011
Austria16289 Posts
Last Edited: 2016-03-13 12:21:50
March 13 2016 12:21 GMT
#182
I have no clue how you would ever beat an AI with highly developed micro capabilities. Sure, actually making this work in a normal game is much harder than programming the Automaton, but I believe such an AI would be absolutely unbeatable.
There are tools in SC1 and SC2 that players can only dream of abusing mechanically, and information that players know about but are simply not capable of processing fast enough, which would be easily handled by an advanced AI.

Maybe SC1 offers fewer of them - I don't know as many specifics about that game's engine - but in SC2, certain rushes with bot control would probably be unbeatable, or only beatable by mirroring them, in certain matchups.
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-13 12:38:36
March 13 2016 12:29 GMT
#183
The real challenge is not to win.
It's about being able to outplay the human strategically - not with a good micro system, perfect macro, multitasking, and attention. Sure, the AI could be good at those things, and that's an achievement in itself.
But it will prove no point if it looks like a dumb bot winning with "speed".
The limited-APM idea might be a good way to go. That way the AI must make good decisions and distribute its attention, instead of doing everything at the same time like a super Archon.

I don't know how much thought they have put into this project so far. But winning at an RTS using no strategy, just exploiting the real-time part, won't display how awesome the AI is.

The AI should be limited by hotkeys and control groups, screen vision (not tracking information outside its view, except for what the minimap provides), clicking on things... at the very least.

edited
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
March 13 2016 12:33 GMT
#184
Are Flash and Boxer stupid? What arrogance, to claim your game can't be played better by an AI than by humans. What do they know about AI? They didn't even go to school.

Just a while ago, when as a chess player you talked about AI with a go player, they were glad to point out that in go, amateur humans wreck the best computers. Go was this elegant game that computers couldn't fathom and wouldn't for a long, long time.

Also, AlphaGo plays go and only go. And the DeepMind project doesn't have StarCraft as a target yet. I can beat AlphaGo at tic-tac-toe, which is trivially solvable.

Also, RTS games can be set up in a modular fashion. You can define problems, like build orders and micro, and solve them independently of each other. This makes it much easier.
Also, RTS games are convergent, not divergent. Even in chess the endgame was solved: you could just use a table and the outcome was forced.
In go you get more possibilities as the game goes on, not fewer, as in chess or RTS.
In the strategic sense, every ending has a certain theme.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 13 2016 12:36 GMT
#185
On March 13 2016 21:33 trulojucreathrma.com wrote:
Are Flash and Boxer stupid? What arrogance, to claim your game can't be played better by an AI than by humans. What do they know about AI? They didn't even go to school.

Just a while ago, when as a chess player you talked about AI with a go player, they were glad to point out that in go, amateur humans wreck the best computers. Go was this elegant game that computers couldn't fathom and wouldn't for a long, long time.

Also, AlphaGo plays go and only go. And the DeepMind project doesn't have StarCraft as a target yet. I can beat AlphaGo at tic-tac-toe, which is trivially solvable.

Also, RTS games can be set up in a modular fashion. You can define problems, like build orders and micro, and solve them independently of each other. This makes it much easier.
Also, RTS games are convergent, not divergent. Even in chess the endgame was solved: you could just use a table and the outcome was forced.
In go you get more possibilities as the game goes on, not fewer, as in chess or RTS.
In the strategic sense, every ending has a certain theme.

They are not stupid; it's for PR. Plus, since you can't have a fair match anyway (because humans are limited by their mechanical abilities) until the real AI issues are solved, a match most likely will never happen.
WriterMaru
Deleted User 26513
Profile Joined February 2007
2376 Posts
Last Edited: 2016-03-13 12:40:23
March 13 2016 12:39 GMT
#186
On March 13 2016 21:33 trulojucreathrma.com wrote:
Are Flash and Boxer stupid? What arrogance, to claim your game can't be played better by an AI than by humans. What do they know about AI? They didn't even go to school.

Just a while ago, when as a chess player you talked about AI with a go player, they were glad to point out that in go, amateur humans wreck the best computers. Go was this elegant game that computers couldn't fathom and wouldn't for a long, long time.

Also, AlphaGo plays go and only go. And the DeepMind project doesn't have StarCraft as a target yet. I can beat AlphaGo at tic-tac-toe, which is trivially solvable.

Also, RTS games can be set up in a modular fashion. You can define problems, like build orders and micro, and solve them independently of each other. This makes it much easier.
Also, RTS games are convergent, not divergent. Even in chess the endgame was solved: you could just use a table and the outcome was forced.
In go you get more possibilities as the game goes on, not fewer, as in chess or RTS.
In the strategic sense, every ending has a certain theme.

I think they are saying this not because they think it's true, but because "the community" expects them to say it.

In reality, a human doesn't stand a chance against a well-made AI.
Superbanana
Profile Joined May 2014
2369 Posts
March 13 2016 12:41 GMT
#187
Right!?
The fans don't want to hear something like "meh... it's going to be a super AI, no way to win, ggwp" from Boxer.
I don't think he believes the AI will never win; it's a statement that he will do his best when the time comes, and also a "come at me, bro".
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
Big J
Profile Joined March 2011
Austria16289 Posts
March 13 2016 12:49 GMT
#188
On March 13 2016 21:29 Superbanana wrote:
The real challenge is not to win.
It's about being able to outplay the human strategically - not with a good micro system, perfect macro, multitasking, and attention. Sure, the AI could be good at those things, and that's an achievement in itself.
But it will prove no point if it looks like a dumb bot winning with "speed".
The limited-APM idea might be a good way to go. That way the AI must make good decisions and distribute its attention, instead of doing everything at the same time like a super Archon.

I don't know how much thought they have put into this project so far. But winning at an RTS using no strategy, just exploiting the real-time part, won't display how awesome the AI is.

The AI should be limited by hotkeys and control groups, screen vision (not tracking information outside its view, except for what the minimap provides), clicking on things... at the very least.

edited


Yeah, I think limited APM and similar restrictions - like limited click precision and actual cursor movement across the screen - should be imposed, or it is going to be a very uninteresting stomp.

The big problem with this, however, is that there are no such restrictions for humans, and thus any player will always seek to execute as well as he or she can. I think you would have to make the AI behave at similar speed/precision to a human, which in itself makes the challenge a bit nonsensical.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
March 13 2016 12:58 GMT
#189
If an AI wins at an RTS against a human, it obviously has to steer the game into the realms where AI outperforms humans and take its advantage there.

Not even an AI can outperform humans at being human. You can never make a computer that is more human than a human is.
redviper
Profile Joined May 2010
Pakistan2333 Posts
March 13 2016 13:09 GMT
#190
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


In a real matchup we would have an AI that moves a mouse and presses keys. It's not hard to build and it would be very fast, but not infinitely faster than humans. It would also need a way to analyse the data from the screen, so it would need a capture mechanism.

Bots like Automaton are different, since they inject commands without physical constraints and (I guess) interpret game data as an already-processed structure.

Also, teaching DeepMind to play SC2 (or Brood War, which is even harder!) will be very difficult - a real learning challenge. Unlike in Go and chess, the areas where it must excel are broader (moving the mouse, tapping keys), the breadth of options is greater (strategic, tactical, and reactive choices), and there is an inherent lack of full knowledge. RTS would be interesting for DeepMind-like systems because they need to work with partial, incomplete, and stale knowledge. They may also have to deal with feints and misdirections (like a Nexus cancel).

Boxer may be right for the present. But this kind of learning would be useful for AI, and the ceiling for AI is much higher than for humans.
redviper
Profile Joined May 2010
Pakistan2333 Posts
March 13 2016 13:13 GMT
#191
On March 13 2016 21:58 trulojucreathrma.com wrote:
If an AI wins at an RTS against a human, it obviously has to steer the game into the realms where AI outperforms humans and take its advantage there.

Not even an AI can outperform humans at being human. You can never make a computer that is more human than a human is.


There is nothing really special about humans. We could potentially (in 50 years or so) simulate a whole human brain: the nerve connections, the chemical impulses, and the external inputs. About 5 or 6 years ago at SC (the Supercomputing conference, not StarCraft), IBM showed a first simulation of a rat brain. In 50 years you could simulate a whole person.

But an AI doesn't have to simulate every facet of a human to be "more human than human" (whatever that means). It can learn to have reactions that are indistinguishable from a human's (hence the Turing test). As I said previously, there is no ceiling on how smart an AI can be; there is for people (the size of the brain and the energy/heat requirements).
shadymmj
Profile Joined June 2010
1906 Posts
March 13 2016 13:21 GMT
#192
I agree that with a 400 APM limit it would be next to impossible for bots to beat humans in the near future.
There is no such thing is "e-sports". There is Brood War, and then there is crap for nerds.
DinosaurPoop
Profile Blog Joined April 2013
687 Posts
March 13 2016 13:41 GMT
#193
On March 13 2016 05:34 Clbull wrote:
Bots have already surpassed humans in StarCraft. If you ever saw any of the AI competitions held at the University of Alberta, you'd see bots with superior APM that are able to pull off absurd strategies.


https://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/report2015.shtml#mvm
When cats speak, mice listen.
NonY
Profile Blog Joined June 2007
8748 Posts
March 13 2016 14:11 GMT
#194
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who has played competitive SC to understand.

Even if this is a challenge of pure strategy for the AI, it must do it with these hardware limitations to get exposed to all of the elements of strategy of an SC player. A huge part of the strategy of SC is having to adjust it for the level of execution that you can expect to have. Not only that, but figuring out which actions are currently the highest priority is a huge challenge as well. If all actions can be done so quickly as to almost be simultaneous, this whole aspect of strategy can be ignored.
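Figuring out which actions are currently the highest priority, under a limited execution budget, can be modeled as repeatedly taking the top of a priority queue. A hypothetical toy sketch (the priorities and action names are invented for illustration):

```python
import heapq

def schedule(actions, budget):
    """Pick the `budget` highest-priority actions from (priority, name) pairs,
    most urgent first - a toy model of allocating a limited APM budget.
    Higher priority value means more urgent."""
    return [name for _, name in heapq.nlargest(budget, actions)]
```

Under an effectively unlimited budget the ranking step becomes irrelevant, which is the point being made: near-simultaneous execution lets an AI skip this whole layer of strategy.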
"Fucking up is part of it. If you can't fail, you have to always win. And I don't think you can always win." Elliott Smith ---------- Yet no sudden rage darkened his face, and his eyes were calm as they studied her. Then he smiled. 'Witness.'
nepeta
Profile Blog Joined May 2008
1872 Posts
March 13 2016 14:14 GMT
#195
On March 13 2016 22:41 DinosaurPoop wrote:
On March 13 2016 05:34 Clbull wrote:
Bots have already surpassed humans in StarCraft. If you ever saw any of the AI competitions held at the University of Alberta, you'd see bots with superior APM that are able to pull off absurd strategies.


https://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/report2015.shtml#mvm


To be fair, those bots are developed and trained to beat other bots; the most successful ones abuse general or specific flaws of bots instead of emulating a human game. DeepMind does the opposite: it is trained against humans.
Broodwar AI :) http://sscaitournament.com http://www.starcraftai.com/wiki/Main_Page
nepeta
Profile Blog Joined May 2008
1872 Posts
March 13 2016 14:16 GMT
#196
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who is played competitive SC to understand.

Even if this is a challenge of pure strategy for the AI, it must do it with these hardware limitations to get exposed to all of the elements of strategy of an SC player. A huge part of the strategy of SC is having to adjust it for the level of execution that you can expect to have. Not only that, but figuring out which actions are currently the highest priority is a huge challenge as well. If all actions can be done so quickly as to almost be simultaneous, this whole aspect of strategy can be ignored.


Some of the best AIs for Brood War can do 45k APM easily and still be beaten by a D player. The whole question of strategy, i.e. decision making, has very little to do with that. A D player with 1000 APM would still be a bad player compared to Jaedong at 200.
Broodwar AI :) http://sscaitournament.com http://www.starcraftai.com/wiki/Main_Page
thezanursic
Profile Blog Joined July 2011
5478 Posts
March 13 2016 14:23 GMT
#197
"Even if countless data is inputted and studied by the AI so it has some degree of instinct, it won't reach pro level."

Well, that's not true - eventually the AI would become good enough.
http://i45.tinypic.com/9j2cdc.jpg Let it be so!
NonY
Profile Blog Joined June 2007
8748 Posts
March 13 2016 14:35 GMT
#198
On March 13 2016 23:16 nepeta wrote:
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who is played competitive SC to understand.

Even if this is a challenge of pure strategy for the AI, it must do it with these hardware limitations to get exposed to all of the elements of strategy of an SC player. A huge part of the strategy of SC is having to adjust it for the level of execution that you can expect to have. Not only that, but figuring out which actions are currently the highest priority is a huge challenge as well. If all actions can be done so quickly as to almost be simultaneous, this whole aspect of strategy can be ignored.


Some of the best AIs for broodwar can do 45k apm easy, and still be beaten by a d player. The whole question of strategy, ie decision making, has got very little to do with that. A d player with 1000 apm would still be a bad player compared to jd on 200.

I happen to have some experience playing BW, so I'm familiar with this truth. I'm not sure what it has to do with my post, though.
"Fucking up is part of it. If you can't fail, you have to always win. And I don't think you can always win." Elliott Smith ---------- Yet no sudden rage darkened his face, and his eyes were calm as they studied her. Then he smiled. 'Witness.'
Korakys
Profile Blog Joined November 2014
New Zealand272 Posts
March 13 2016 14:44 GMT
#199
Let's list some ground rules:
*An external robot interfaces mechanically with a separate computer that is running the game (it can bring its own mouse and keyboard, though).
*The robot has built-in signal delays if required (I think peak human reaction speed is about 80 ms).
*The robot is APM-limited to a comparable human level (estimate 300 APM over a whole game).
*Best of 9 against the best human player from a recent tournament, where the robot can't be specialised for that match-up.

I still think the AI would win, but it could take years of training to do so. One thing to keep in mind, though, is that the pace of AI development is not linear - it's going to sneak up on us.
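An APM cap like the one in these ground rules could be enforced with a standard token-bucket rate limiter between the AI and the game. A hypothetical sketch (the class and parameter names are invented; the 300 APM figure is taken from the rules above):

```python
class ApmLimiter:
    """Token bucket: refills at `apm` actions per minute and holds at most
    `burst` saved-up actions, so short spikes are allowed but the long-run
    average stays at the cap."""
    def __init__(self, apm=300, burst=10):
        self.rate = apm / 60.0        # tokens per second
        self.burst = burst
        self.tokens = float(burst)    # start with a full bucket
        self.last = 0.0

    def try_act(self, now):
        """Return True if an action is allowed at time `now` (in seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A small burst size permits human-like action spikes during fights while keeping the average at the configured cap.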
Swing away sOs, swing away.
nepeta
Profile Blog Joined May 2008
1872 Posts
March 13 2016 14:46 GMT
#200
On March 13 2016 23:35 NonY wrote:
On March 13 2016 23:16 nepeta wrote:
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who is played competitive SC to understand.

Even if this is a challenge of pure strategy for the AI, it must do it with these hardware limitations to get exposed to all of the elements of strategy of an SC player. A huge part of the strategy of SC is having to adjust it for the level of execution that you can expect to have. Not only that, but figuring out which actions are currently the highest priority is a huge challenge as well. If all actions can be done so quickly as to almost be simultaneous, this whole aspect of strategy can be ignored.


Some of the best AIs for Brood War can do 45k APM easy, and still be beaten by a D player. The whole question of strategy, i.e. decision making, has very little to do with that. A D player with 1000 APM would still be a bad player compared to JD on 200.

I happen to have some experience playing BW so I'm familiar with this truth. I'm not sure what it has to do with my post though.


I know you have played a little Brood War, don't worry on that part. I was trying to say that limiting APM or not has very little bearing on the cognitive capabilities of AIs at this time. The human mind is doing so much parallel work in the thinking department that it is sometimes limited by the fingers/APM. AIs are basically at the level of... nematodes, and try to make up for it by using a few crude heuristics and huge amounts of APM.
Broodwar AI :) http://sscaitournament.com http://www.starcraftai.com/wiki/Main_Page
Magerius
Profile Joined December 2014
Canada1 Post
March 13 2016 15:18 GMT
#201
There will always be AIs that are superior to humans, but this is software built for only one purpose! It won't know how to cook an egg.
quebec
waiting2Bbanned
Profile Joined November 2015
United States154 Posts
Last Edited: 2016-03-13 15:35:45
March 13 2016 15:34 GMT
#202
AIs are basically at the level of... nematodes, and try to make up for it by using a few crude heuristics and huge amounts of APM.


I think this might be true for the rudimentary BW bots created to win with a single strat against other bots; that approach wouldn't even work for a chess bot, let alone an artificial neural network with virtually limitless computing power and data resources, capable of learning - one which crushed a Go champion.

Even with its APM limited to something like 100, it would still roflstomp a human player.
"If you are going to break the law, do it with two thousand people.. and Mozart." - Howard Zinn
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 13 2016 15:45 GMT
#203
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or if it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight changes in timing due to scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they are more muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants. They want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real time vs. turn-based.
64K RAM SYSTEM 38911 BASIC BYTES FREE
xQuesian
Profile Joined January 2016
15 Posts
March 13 2016 15:45 GMT
#204
If AI gets better than humans, will people start to demand AI vs AI tournaments, since Korean sc won't be the best anymore?
redviper
Profile Joined May 2010
Pakistan2333 Posts
March 13 2016 15:58 GMT
#205
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or if it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight changes in timing due to scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they are more muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants. They want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games, the AI would have a much better understanding of the game's strategy than any human being could.
TelecoM
Profile Blog Joined January 2010
United States10667 Posts
March 13 2016 16:13 GMT
#206
Well this is interesting lol
AKA: TelecoM[WHITE] Protoss fighting
LRM)TechnicS
Profile Joined May 2008
Bulgaria1565 Posts
March 13 2016 16:16 GMT
#207
NonY's posts are pretty spot on imo, but only if the main purpose of DeepMind playing StarCraft is to beat/crush top human players at all costs.

There are a lot of IFs here.

IMO if their purpose is as posed above, in theory if:

1. Google invests enough resources into the project to fulfill their purpose

2. there are no limits on DeepMind's ability to practice and play (for example, operating on multiple screens, no APM cap, or anything that resembles mouse-keyboard-monitor management in the physical world) and to gather and analyze information from replays,

then I think it is reasonable to expect DeepMind, by operating simultaneously on multiple screens (on preset maps), at least but not limited to:

1. to perfectly execute its absurdly optimal and/or exploitative set of build orders for various matchups vs. different actual players/opponents,

2. to have resource-mining efficiency that will far exceed what any human being can physically achieve nowadays,

3. to have micro that will demolish a human player while doing all of the above at the same time,

4. to know all the possible strategies its opponent could be doing just by clicking on a still-constructing building of the opponent and checking exactly what percentage it is from finishing, how many units he has and when, etc.,

5. to develop a whole new set of build orders and strategies based on its unlimited capability to micro, macro, and know what its opponent is doing.

Bottom line: if DeepMind has no limits, it is not unreasonable to expect DeepMind to stomp all human players. It will feel like you are playing a UMS game titled "IMPO CPU good come play + obs" where everyone is having a laugh at how top players of every race struggle to stay alive mid/late game against DeepMind's ling/queen/scourge/defiler/overlord-drops-only war combo.

The result will be meaningless, as DeepMind will destroy opponents that are bound to the physical world. Flash plays StarCraft with his mind, but through tools like a mouse, mousepad, keyboard, and monitor - not with his mind alone.

Limiting APM alone would not do the trick IMO.

Google DeepMind's biggest challenge will be creating something that satisfyingly resembles actual/physical mouse-keyboard-monitor management, at least in order to know what kinds of actions are physically possible. This will be needed in order to adequately claim victory in the event of DeepMind destroying Flash/Jaedong in a Bo X series (x>9).
Enjoy the game
Scarlett`
Profile Joined April 2011
Canada2381 Posts
March 13 2016 16:21 GMT
#208
On March 13 2016 19:20 Liquid`Nazgul wrote:
On March 13 2016 19:13 CrayonPopChoa wrote:
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issue. Here is an example of what it could accomplish, and then some. https://www.youtube.com/watch?v=IKVFZ28ybQs




but isn't that cheating? like we aren't supposed to know what the siege tanks are targeting?? why would the AI know which ling is being targeted? it's not like a seeker missile where it shows up, alerting you. it's different from when a unit is taking damage and you then micro it away. A tank shot comes without you really knowing where it's gonna hit; maybe the AI could guess, but if you're controlling your tanks you can click on something else, no?

In theory you do know which one they're going to be attacking. As a player you will know which unit will be auto-targeted first; it is just part of the behavior of units. It isn't random. I don't think this bot knows which one they are targeting by scanning game activity files. This bot knows which one they are attacking because, yes, you can determine that based on tank behavior. Professional players play around this (or should) all the time. This video is an even better example (than the marine-splitting one) of why AI would totally crush any StarCraft pro given some time to learn.

I've seen some suggestions on limiting the APM and number of clicks to be similar to mouse/keyboard input. I don't think that really matters in the end. I doubt the zergling vs. tank video has a high amount of APM involved. Even if it did, you can get pretty much 90% of the efficiency by just picking out the individual zerglings that are going to be targeted while the rest of your army is on a-move. It's pretty much just clicking accuracy, which you can achieve with relatively low APM.


its 4 actions per ling in the AOE per tank shot (select; move; (wait) select; attack-move); say there's even only 5 tanks; with natural clumping of lings this by itself would require 6000+ APM (assuming there's ~15 lings in the AOE)
& dodging with the individually targeted ling also wouldn't work if the attacking units are somewhat clumped up (which is better vs certain armies and worse vs others, so deciding how to pre-split is a huge issue in itself) unless the ling is at the edge of the army other than the front ~ one that is indeed very unlikely to be targeted even if the opponent is not controlling their tanks at all

it also ignores the fact that the other player can micro the tank -> tell it which unit to target; so the AI would need to have the tank and every possible unit it could be targeting (based on the turret angle) on screen at the time it decides to micro, while also knowing whether the tank will have vision of the target long enough to fire

because of the delay in the tank shot that makes this micro even theoretically possible, if the terran is controlled by an AI itself it could even re-target the tank after the zerg micros, which is much simpler > in the previous example that's 2 actions compared to ~60 for a shot

all these micro bots built in sc2 are built upon information given directly from the game engine, making them trivial to code, rather than having to go by the visual and audio output the players are limited to
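The arithmetic in the post above can be checked directly. The figures (4 actions per dodged ling, ~15 lings in splash range, 5 tanks, roughly 3 seconds between tank volleys) are the post's own assumptions, not measured game data:

```python
# Back-of-the-envelope check of the "6000+ APM" claim for perfect
# ling-dodging vs. siege tanks. All figures are the post's assumptions.
ACTIONS_PER_LING = 4    # select, move, (wait) select, attack-move
LINGS_IN_AOE = 15       # lings inside one tank's splash radius
TANKS = 5
VOLLEY_PERIOD_S = 3.0   # rough time between tank volleys

actions_per_volley = ACTIONS_PER_LING * LINGS_IN_AOE * TANKS
apm_required = actions_per_volley * 60.0 / VOLLEY_PERIOD_S
print(actions_per_volley, apm_required)
```

With those numbers, each volley demands 300 actions in about 3 seconds, i.e. a sustained 6000 APM, which matches the post's estimate.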
Progamer一条咸鱼
Big J
Profile Joined March 2011
Austria16289 Posts
Last Edited: 2016-03-13 16:48:36
March 13 2016 16:46 GMT
#209
On March 14 2016 01:21 Scarlett` wrote:
On March 13 2016 19:20 Liquid`Nazgul wrote:
On March 13 2016 19:13 CrayonPopChoa wrote:
On March 13 2016 17:51 Spazzer wrote:
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.


I think this point is the biggest issue. Here is an example of what it could accomplish, and then some. https://www.youtube.com/watch?v=IKVFZ28ybQs




but isn't that cheating? like we aren't supposed to know what the siege tanks are targeting?? why would the AI know which ling is being targeted? it's not like a seeker missile where it shows up, alerting you. it's different from when a unit is taking damage and you then micro it away. A tank shot comes without you really knowing where it's gonna hit; maybe the AI could guess, but if you're controlling your tanks you can click on something else, no?

In theory you do know which one they're going to be attacking. As a player you will know which unit will be auto-targeted first; it is just part of the behavior of units. It isn't random. I don't think this bot knows which one they are targeting by scanning game activity files. This bot knows which one they are attacking because, yes, you can determine that based on tank behavior. Professional players play around this (or should) all the time. This video is an even better example (than the marine-splitting one) of why AI would totally crush any StarCraft pro given some time to learn.

I've seen some suggestions on limiting the APM and number of clicks to be similar to mouse/keyboard input. I don't think that really matters in the end. I doubt the zergling vs. tank video has a high amount of APM involved. Even if it did, you can get pretty much 90% of the efficiency by just picking out the individual zerglings that are going to be targeted while the rest of your army is on a-move. It's pretty much just clicking accuracy, which you can achieve with relatively low APM.


its 4 actions per ling in the AOE per tank shot (select; move; (wait) select; attack-move); say there's even only 5 tanks; with natural clumping of lings this by itself would require 6000+ APM (assuming there's ~15 lings in the AOE)
& dodging with the individually targeted ling also wouldn't work if the attacking units are somewhat clumped up (which is better vs certain armies and worse vs others, so deciding how to pre-split is a huge issue in itself) unless the ling is at the edge of the army other than the front ~ one that is indeed very unlikely to be targeted even if the opponent is not controlling their tanks at all

it also ignores the fact that the other player can micro the tank -> tell it which unit to target; so the AI would need to have the tank and every possible unit it could be targeting (based on the turret angle) on screen at the time it decides to micro, while also knowing whether the tank will have vision of the target long enough to fire

because of the delay in the tank shot that makes this micro even theoretically possible, if the terran is controlled by an AI itself it could even re-target the tank after the zerg micros, which is much simpler > in the previous example that's 2 actions compared to ~60 for a shot

all these micro bots built in sc2 are built upon information given directly from the game engine, making them trivial to code, rather than having to go by the visual and audio output the players are limited to



You're absolutely right imo; limiting APM greatly alters the challenge of creating the bot. There is a lot of (micro-)management possible with unlimited APM and precision.
I think the basic problem of limiting APM is the question "to how little?", and even more "but why? The point is not to create a human, so why pretend it's a human?".

I think the point of such an AI should be that it plays the game by the normal rules, but with maximum efficiency. If that means it never box-selects, has precise knowledge of unit health from health bars, and gives all orders via the minimap until it needs to identify what enemy dot was actually spotted, then that is still fair game. Humans use the same game mechanics, just not at that efficiency. Which, in my opinion, is the whole point of trying to beat a human with a bot.
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-13 17:28:53
March 13 2016 17:14 GMT
#210
On March 14 2016 01:21 Scarlett` wrote:
its 4 actions per ling in the AOE per tank shot (select; move; (wait) select; attack-move); say there's even only 5 tanks; with natural clumping of lings this by itself would require 6000+ APM (assuming there's ~15 lings in the AOE)
& dodging with the individually targeted ling also wouldn't work if the attacking units are somewhat clumped up (which is better vs certain armies and worse vs others, so deciding how to pre-split is a huge issue in itself) unless the ling is at the edge of the army other than the front ~ one that is indeed very unlikely to be targeted even if the opponent is not controlling their tanks at all

This is accounting for perfection. My view is that AI can be incredibly effective with a lot less than perfection. Pulling out individual units to counter the auto-targeting of AoE units is something that can be done without thousands of APM. It won't be perfect, but it will be many times superior to human micro even at low APM. Just imagine your own APM, but now with perfect selection of individual units, a perfect ability to predict where automated shots are going to land, and instant reactions and decisions. The difference between that and what you can do now is enormous.

it also ignores the fact that the other player can micro the tank -> tell it which unit to target; so the AI would need to have the tank and every possible unit it could be targeting (based on the turret angle) on screen at the time it decides to micro, while also knowing whether the tank will have vision of the target long enough to fire

because of the delay in the tank shot that makes this micro even theoretically possible, if the terran is controlled by an AI itself it could even re-target the tank after the zerg micros, which is much simpler > in the previous example that's 2 actions compared to ~60 for a shot

all these micro bots built in sc2 are built upon information given directly from the game engine, making them trivial to code, rather than having to go by the visual and audio output the players are limited to

Right, but I think when you introduce an AI into the fold you have to assume its reactions to audio/visual input are going to be pretty much instant. What I'm saying is that if audio/visual processing is instant, the behavior of the tanks can be 'read'. It is all behavior that is standardized and predictable. You can introduce human behavior to battle it, of course - that is what will happen when an AI plays a professional - but I don't think it would be close. Just my perspective.
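A toy sketch of the "readable behavior" idea: if target acquisition is deterministic, a bot can predict the victim before the shot lands. Real BW/SC2 acquisition rules are more involved (priority classes, current-target stickiness), so the nearest-in-range rule below is a simplifying assumption, and the function name and range value are invented:

```python
import math

def predict_auto_target(tank_pos, units, max_range=13.0):
    """Predict which unit a tank will auto-target, under the simplifying
    assumption that it fires at the closest unit in range. This is a
    sketch of 'readable, deterministic behavior', not the engine's
    actual acquisition logic."""
    in_range = [u for u in units
                if math.dist(tank_pos, u["pos"]) <= max_range]
    if not in_range:
        return None  # nothing to shoot at
    return min(in_range, key=lambda u: math.dist(tank_pos, u["pos"]))
```

Given such a predictor, the dodge itself is only a couple of actions: pull the predicted victim out of the splash radius and leave the rest of the army alone.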

On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who has played competitive SC to understand.

Certainly by tournament rules an AI wouldn't be allowed to participate, and neither is it actually allowed to enter Go tournaments. For there to be a fruitful discussion about an AI playing, there should be an understanding that it wouldn't be playing through headphones and pressing buttons on a keyboard. If we are going to enforce those rules on an AI, we may as well not bother.
Administrator
Tuczniak
Profile Joined September 2010
1561 Posts
Last Edited: 2016-03-13 17:31:05
March 13 2016 17:30 GMT
#211
On March 14 2016 00:45 xQuesian wrote:
If AI gets better than humans, will people start to demand AI vs AI tournaments, since Korean sc won't be the best anymore?
The interesting thing is that with a good enough simulation of AI vs. AI, the game could be balanced at the release date (given good design). Patches would then only serve to lead the player base in the right direction. Although this would require much better software than just an AI beating humans.
Erugua
Profile Joined November 2015
13 Posts
Last Edited: 2016-03-13 17:41:30
March 13 2016 17:33 GMT
#212
I don't even see how a human could beat a decent AI, since it can have 6000 APM and can manage 5 groups of armies in different locations in a way a human couldn't, all while macroing perfectly. Yeah, it's probably very hard to make an AI that does all that well, but if one exists one day, it'll have a 100% win chance vs. humans, no doubt.

For me the real question is "can a machine be powerful enough to realise that goal?", and the answer is obviously yes for SC:BW, and maybe not yet for SC2.
Oshuy
Profile Joined September 2011
Netherlands529 Posts
March 13 2016 17:55 GMT
#213
On March 14 2016 02:14 Liquid`Nazgul wrote:
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who has played competitive SC to understand.

Certainly by tournament rules an AI wouldn't be allowed to participate, and neither is it allowed to actually enter Go tournaments. For there to be a fruitful discussion about an AI playing there should be an understanding it wouldn't be playing through headphones and pressing buttons on a keyboard. If we are going to enforce those rules onto an AI we may as well not bother.


It would be a fun project. From a robotics point of view, keyboard/mouse handling is not that difficult (trickier if you add the constraint that the robot must only use 2 arms with 5 fingers each). Identifying elements from the screen and sound needs a bit of work, but unless you add random difficulties (lighting issues on the screen, random noise in the room, ...) it is manageable. Knowing the fastest way to execute a list of actions and their ordering is basic, but it would be an interesting way of setting mechanical limitations on the AI.

The main impact on AI optimization is defining how feedback from those limitations is sent to the action selection... DeepMind might just go the "let it learn" route.

Not sure they would go for it, but for the looks it could be fun.
Coooot
nimdil
Profile Blog Joined January 2011
Poland3748 Posts
March 13 2016 18:37 GMT
#214
I don't think any progamer is suited to say anything about the potential strength of a well-tuned AI in SC. They just have no idea what they're talking about.
beheamoth
Profile Joined December 2015
44 Posts
Last Edited: 2016-03-13 18:57:48
March 13 2016 18:53 GMT
#215
Hmm, I'm not sure the computer would get beat, because you really have to think on its level. With the info it sees and its response times, it may be pulling out of scanning range (if that still exists) and moving between volleys, literally keeping everything alive; there's no way it would ever trade ineffectively.

If the AI micromanages all of its scouting and just sees everything, it can mathematically work out exactly when builds could have been put down and finished, and work out unit counts... it could respond perfectly. E.g., he has y marines; well, 8 really well-controlled lings beat that, so it prepares for that. But he could be going tanks, given that its scout saw tech and 2 gas at x time; 1 tank will be out at x, but the computer has already started the tech path to what it considers the perfect counter.

No, I think the AI would just crush everything regardless of strategy. Strategy is pretty much unimportant if the AI is good enough to hedge its bets against everything. This is too difficult to get across in text, but imagine you played the perfect game: let's say you were set up to macro and the other guy was all-inning, and you react perfectly on a sub-optimal strat... the computer would make the very best of any situation with its precision micro and timing.
thePunGun
Profile Blog Joined January 2016
598 Posts
Last Edited: 2016-03-13 18:55:51
March 13 2016 18:54 GMT
#216
The biggest advantages of an AI are:
1. 'Absolute' awareness, meaning it gets all the information (exact numbers and tech) from every scout in less than a second.
2. Having all the counter information in a huge database and being able to adjust accordingly within seconds.
3. Perfect execution by making every action count (no APM spam, exact commands), which will probably result in superior micro even with APM limited to around 200.

To counter that you would probably have to come up with the weirdest shit ever! Which is why I think Boxer might actually be able to pull that off, because he is "the emperor" and the very essence of making weird shit work somehow.
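Advantage 2 above amounts to a lookup from scouted information to a prepared response. A trivial sketch, where the opening names and responses are invented placeholders (a real system would learn these rather than hard-code them):

```python
# Hypothetical counter book mapping (opponent race, scouted opening)
# to a prepared response. All keys and values are made-up labels.
COUNTER_BOOK = {
    ("terran", "2rax"): "defensive_simcity_plus_units",
    ("terran", "1base_tank"): "expand_and_harass",
    ("protoss", "dt_rush"): "rush_detection",
}

def choose_response(opponent_race, scouted_opening):
    # Fall back to a safe default when the opening is unrecognized -
    # which is exactly where "the weirdest shit ever" would land.
    return COUNTER_BOOK.get((opponent_race, scouted_opening),
                            "standard_macro")
```

The weakness the post points at is visible in the fallback branch: anything sufficiently weird falls outside the database and only gets the generic response.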
"You cannot teach a man anything, you can only help him find it within himself."
necrosexy
Profile Joined March 2011
451 Posts
March 13 2016 19:15 GMT
#217
On March 14 2016 00:58 redviper wrote:
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or if it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight changes in timing due to scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they are more muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants. They want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games, the AI would have a much better understanding of the game's strategy than any human being could.

This is like saying that if an AI analyzes a million coin flips, it has a better idea of which side the next coin will land on.
The AI cannot reliably determine what course the human will take from limited scouting input. Moreover, the AI can be duped by the human player with bad info.
Pwere
Profile Joined April 2010
Canada1556 Posts
Last Edited: 2016-03-13 19:25:21
March 13 2016 19:23 GMT
#218
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who is played competitive SC to understand.

Even if this is a challenge of pure strategy for the AI, it must do it with these hardware limitations to get exposed to all of the elements of strategy of an SC player. A huge part of the strategy of SC is having to adjust it for the level of execution that you can expect to have. Not only that, but figuring out which actions are currently the highest priority is a huge challenge as well. If all actions can be done so quickly as to almost be simultaneous, this whole aspect of strategy can be ignored.
It goes further than that. How do you limit the mouse accuracy of the AI? How do you limit its perception so it doesn't analyze every pixel of every frame on the minimap?

What is, imo, truly a challenge is to develop a live coach that could tell you what to do; with decent mechanics, its superior strategy and build adaptations would let a good Masters player beat a top Code S contender. You could use one of those programs that tracks where your eyes are looking to limit its vision.

That's kind of what they're doing in Go... the AI tells a human what to do to beat another human. That's the only way to make it fair, and it would be a huge accomplishment from a theoretical point of view. I also feel it would be a blast for the player to sometimes not understand what is going on but still win, and it would eventually be a sweet training tool, just like AIs became in chess (and now Go, apparently).
Big J
Profile Joined March 2011
Austria16289 Posts
Last Edited: 2016-03-13 20:25:00
March 13 2016 19:47 GMT
#219
On March 14 2016 04:15 necrosexy wrote:
On March 14 2016 00:58 redviper wrote:
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or if it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight changes in timing due to scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they are more muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants. They want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games, the AI would have a much better understanding of the game's strategy than any human being could.

This is like saying that if an AI analyzes a million coin flips, it has a better idea of which side the next coin will land on.
The AI cannot reliably determine what course the human will take from limited scouting input. Moreover, the AI can be duped by the human player with bad info.

I disagree. One proper scout, plus a very precise estimate of how many resources a player could have mined deduced from previous scouting, can tell you nearly everything at any point in the game. And an AI may be capable of forcing scouts through sheer micromanagement. Not to mention that such an AI might be aware that it will perform more efficiently in battles even when outnumbered to some degree, and therefore it might choose to sacrifice some power for information, since it may realize that getting duped is the only way to lose to a mechanically inferior player. There's a lot possible.
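The income bookkeeping described above can be sketched as follows. The ~40 minerals per worker per minute figure is a rough community ballpark rather than an exact engine constant, and the function assumes the worker count seen at each scout holds until the next one:

```python
MINERALS_PER_WORKER_PER_MIN = 40.0  # rough ballpark, not an engine constant

def estimate_minerals_mined(scout_observations):
    """Estimate total minerals mined from periodic scouts.

    scout_observations: list of (game_time_minutes, workers_seen),
    sorted by time. Assumes the worker count holds between scouts,
    so this is only a rough bound on the opponent's income."""
    total = 0.0
    for (t0, w0), (t1, _) in zip(scout_observations,
                                 scout_observations[1:]):
        total += w0 * (t1 - t0) * MINERALS_PER_WORKER_PER_MIN
    return total
```

Comparing such an estimate against the spending actually seen on the map is what lets the AI infer hidden tech or expansions from what is *missing*.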
Befree
Profile Joined April 2010
695 Posts
Last Edited: 2016-03-13 20:11:54
March 13 2016 20:11 GMT
#220
So clearly, a computer programmed to do a brute-force activity is entirely unimpressive when it beats a human - for example, a car being faster than a human, or a calculator doing extremely large calculations that a human can't do.

The more abstract and less mechanical the task, the more impressive it would be for a computer, obviously.

I guess I just wonder where StarCraft falls on this spectrum. The real-time aspect seems like it would give a huge advantage to the AI, which has very little to do with its "intelligence" and more to do with the brute power of computers.

I know BW AI tournaments put APM caps in place to eliminate at least part of this issue, but the brute power of its awareness alone seems to diminish the meaningfulness of its competitive ability. It's a huge advantage that doesn't come through any sort of creativity or intelligence, just brute strength.
necrosexy
Profile Joined March 2011
451 Posts
March 13 2016 20:45 GMT
#221
On March 14 2016 04:47 Big J wrote:
On March 14 2016 04:15 necrosexy wrote:
On March 14 2016 00:58 redviper wrote:
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or whether it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight shifts in timing based on scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they're closer to muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants; they want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real-time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games the AI would have a much better understanding of the game's strategy than any human being could.

This is like saying that if an AI analyzes a million coin flips, it has a better idea of which side the coin will land on next.
The AI cannot reliably determine what course the human will take from the limited scouting input. Moreover, the AI can be duped by the human player with bad info.

I disagree. One proper scout, plus a very precise estimate of how many resources a player could have mined deduced from previous scouting, can tell you nearly everything at any point in the game. And an AI may be capable of simply forcing scouts through sheer micromanagement. Not to mention that such an AI might be aware that it will perform more efficiently in battles, even when outnumbered to some degree, and therefore it might choose to sacrifice some power for information, since it may realize that getting duped is the only way to lose to a mechanically inferior player. There's a lot possible.

But it doesn't tell the AI what the human player is spending those resources on.
If the human diligently denies scouting, the AI must ultimately guess.

pr1de
Profile Joined January 2016
38 Posts
March 13 2016 20:53 GMT
#222
I would take a bet against Boxer. Fine, he was a pro, but I don't think he realises what a PC can do nowadays.
Cuce
Profile Joined March 2011
Turkey1127 Posts
March 13 2016 21:28 GMT
#223
you never bet against boxer.
64K RAM SYSTEM 38911 BASIC BYTES FREE
rabidch
Profile Joined January 2010
United States20289 Posts
March 13 2016 22:08 GMT
#224
On March 13 2016 18:39 Grumbels wrote:
On March 13 2016 17:46 rabidch wrote:
it will take a while for google to come up with this though (a few years or more), mostly because of hashing out plans with blizzard (assuming they'll cooperate), designing how they'll train and make decisions, and then getting enough computing power and time to train. even with google's cloud i think it will take a massive amount of computing to train the thing, assuming they choose neural networks like they did with alphago

Why should they need permission from Blizzard though? Their other AIs would work with just visual input, it is not like they would need access to the game state.

having cooperation with blizzard would speed up development time for google. the amount of computing power needed to run a go game is far, far less than for starcraft, and google would certainly be interested in cutting it down
LiquidDota StaffOnly a true king can play the King.
Squat
Profile Joined September 2013
Sweden7978 Posts
Last Edited: 2016-03-13 22:39:56
March 13 2016 22:37 GMT
#225
On March 14 2016 05:11 Befree wrote:
So clearly a computer built for a pure brute-force activity is entirely unimpressive when it beats a human: a car being faster than a human, for example, or a calculator doing extremely large calculations that a human can't.

The more abstract and less mechanical the task, the more impressive it would be for a computer, obviously.

I guess I just wonder where StarCraft falls on this spectrum. The real-time aspect seems to me like it would give the AI a huge advantage, one which has very little to do with its "intelligence" and more to do with the brute power of computers.

I know BW AI tournaments impose APM caps to eliminate at least part of this issue, but the sheer breadth of the AI's awareness alone seems to diminish the meaningfulness of its competitive ability. It's a huge advantage that doesn't come through any sort of creativity or intelligence, just brute strength.

To me, that kind of collision is the entire point of exercises of this sort, which is why I am against imposing restrictions or handicapping the AI any more than absolutely necessary to ensure it doesn't straight up violate the rules. As I see it, the question here is whether human creativity, adaptability and capacity for abstract thinking is enough to overcome the raw computational power and mechanical precision of a computer. If the answer is no, then the answer is no and the human loses. Taking away the AI's main edge seems to render the whole project pointless. In the end, the ability of an AI to analyse and process massive amounts of data in a very short period of time is what gives it a fighting chance, and that must on some level include things like perfect unit control and mechanics.

It's a bit like if I insisted a cheetah were only allowed to use two legs in a race, because otherwise it's going to be much faster than me. While true, it also defeats the basic premise of the competition, which is to discover whether a human can compensate for a natural disadvantage through training and strategy. In this example the answer will clearly be no, and that's fine. It may be the same way in SC2 (I suspect that it is), and that's fine too.

Personally, I am more interested to see what happens when an AI plays a fast-paced, decision-making-centric game like Dota 2. I have no idea if an AI that could beat a top-5 team could even be written, but if it could, it would be far more impressive than beating pros in StarCraft.

"Digital. They have digital. What is digital?" - Donald J Trump
JeffKim
Profile Blog Joined November 2013
Korea (South)36 Posts
March 13 2016 22:53 GMT
#226
On March 14 2016 03:37 nimdil wrote:
I don't think any progamer is suited to say anything about the potential strength of a well-tuned AI in SC. They just have no idea what they're talking about.
In the same breath, a lot of people ITT who think experience trumps knowledge and facts about AI have no idea what they're talking about.

It seems many Wikipedia'd "AI" and "DeepMind" and are now experts in the field.
Richasliodo
Profile Joined January 2016
18 Posts
March 14 2016 01:40 GMT
#227
Seems like boxer is offering a challenge so that he gets the first call up =p
DJThwomp
Profile Joined August 2013
Australia13 Posts
March 14 2016 01:49 GMT
#228
pretty sure AI has already won gsl, aka Innovation.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
March 14 2016 01:59 GMT
#229
On March 14 2016 02:14 Liquid`Nazgul wrote:
On March 14 2016 01:21 Scarlett` wrote:
its 4 actions per ling in the AOE per tank shot (select; move; (wait) select; attackmove); say theres even only 5 tanks and with natural clumping of lings this by itself would require 6000+ APM (assuming theres ~15 lings in the aoe)
& dodging with the individually targeted ling also wouldnt work if the units attacking are somewhat clumped up (which is better vs certain armies and worse vs others so deciding how to pre-split is a huge issue in itself) unless the ling is at the edge of the army other than the front ~ one that is indeed very unlikely to be targeted even if the opponent is not controlling their tanks at all

This is accounting for perfection. My view is that AI can be incredibly effective with a lot less than perfection. Pulling out individual units to counter the autotargetting of aoe units is something that can be done without thousands of apm. It won't be perfect, but it will be many times superior to human micro even at low apms. Just imagine your own apm but now with perfect selection onto individual units, perfect ability to predict where automated shots are going to go, and instant reactions and decisions. The difference between that and what you can do now is enormous.
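Scarlett's 6000+ APM figure is easy to sanity-check with quick arithmetic. A sketch, where the ~3-second siege cooldown is an assumed round number (the exact value depends on game version); the 4 actions per ling and ~15 lings per shot come from the quote:

```python
# Sanity check of the "6000+ APM" estimate: actions needed per tank volley,
# times volleys per minute across all tanks. The 3 s cooldown is an assumed
# round number; actions per ling and ling count are taken from the quote.

def required_apm(tanks, lings_in_aoe, actions_per_ling=4, cooldown_s=3.0):
    actions_per_shot = lings_in_aoe * actions_per_ling  # ~60 per volley here
    shots_per_minute = tanks * (60.0 / cooldown_s)      # volleys to answer
    return actions_per_shot * shots_per_minute

apm = required_apm(tanks=5, lings_in_aoe=15)  # 60 actions * 100 volleys/min
```

Under those assumptions the estimate lands exactly at 6000 APM, which is why the argument then shifts to how much of that perfection an AI actually needs.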

it also ignores the fact that the other player can micro the tank -> tell it which unit to target; so the AI would need to have the tank and every possible unit it can be targeting (based on the turret angle) on screen at the time it decides to micro while also knowing whether the tank will have vision of the target long enough to fire

because of the delay in the tank shot that makes this micro even theoretically possible, if the terran is controlled by an AI itself it could even re-target the tank after the zerg micros which is much simpler > if i take the previous example thats 2 actions compared to ~60 for a shot

all these micro bots built in sc2 are built upon information given directly from the game engine, making them trivial to code, rather than having to go by the visual and audio output the players are limited to

Right, but I think when you introduce an AI into the fold you have to assume its reactions to audio/visual are going to be pretty much instant. What I'm saying though is if audio/visual is instant, the behavior of the tanks can be 'read'. It is all behavior that is standardized and predictable. You can introduce human behavior to battle it of course, that is what will happen when an AI plays a professional, but I don't think it would be close. Just my perspective.

On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who has played competitive SC to understand.

Certainly by tournament rules an AI wouldn't be allowed to participate, and neither is it allowed to actually enter Go tournaments. For there to be a fruitful discussion about an AI playing there should be an understanding it wouldn't be playing through headphones and pressing buttons on a keyboard. If we are going to enforce those rules onto an AI we may as well not bother.

While it's true that it's impossible to subject an AI to the limitations of a human, there is a lot that can be done. Cap keys per second to lower than a human's to account for extra efficiency, cap how quickly it can process information when changing screens (so it can't scan an area, look at it, and go back to where it was before in a couple of frames), and cap mouse accuracy so that if it wants to click on something a certain distance away from its current cursor position it has to do it at a lower mouse speed; give it human-level reaction time with some predictive capabilities. They could even give the AI 'eyes' so it has to think about when to observe the minimap, and doing so would restrict what kind of micro it could be doing for that brief instant.

The main issue is that I think doing stuff like this just increases the complexity of the model by a lot, which would make things a lot harder and unappealing to AI researchers because they may feel they are working on stuff that is too domain-specific to StarCraft. If they're going to let the AI do whatever crazy micro it wants, then the goal would be for the AI to teach itself that superhuman micro is the optimal strategy, rather than teaching itself how to actually outsmart a human in StarCraft, in which case it'd be boring and pointless to make it fight Boxer or Flash. I do think with enough effort there could be a 'fair' AI, but it might be too much effort.
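Two of the handicaps proposed above, an actions-per-second cap and a fixed reaction delay, can be sketched as a thin wrapper around a bot's action loop. The class and parameter names are made up for illustration; this is not from any real bot framework:

```python
# Minimal sketch of two handicaps: a hard cap on actions per second, and a
# fixed reaction delay before the bot may act on new information. All names
# and numbers are illustrative.

from collections import deque

class HandicappedBot:
    def __init__(self, max_aps=10.0, reaction_delay=0.2):
        self.min_gap = 1.0 / max_aps       # seconds between allowed actions
        self.reaction_delay = reaction_delay
        self.last_action_time = float("-inf")
        self.pending = deque()             # (usable_at_time, observation)

    def observe(self, now, observation):
        # New information only becomes usable after the reaction delay.
        self.pending.append((now + self.reaction_delay, observation))

    def try_act(self, now):
        # Refuse to act faster than the cap, or on information the bot
        # hasn't "had time" to perceive yet.
        if now - self.last_action_time < self.min_gap:
            return None
        if not self.pending or self.pending[0][0] > now:
            return None
        _, obs = self.pending.popleft()
        self.last_action_time = now
        return f"react to {obs}"
```

Mouse-accuracy and screen-change caps would slot in the same way: as filters between what the AI decides and what it is allowed to execute.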
redviper
Profile Joined May 2010
Pakistan2333 Posts
March 14 2016 02:06 GMT
#230
On March 14 2016 04:15 necrosexy wrote:
On March 14 2016 00:58 redviper wrote:
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or whether it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight shifts in timing based on scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they're closer to muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants; they want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real-time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games the AI would have a much better understanding of the game's strategy than any human being could.

This is like saying that if an AI analyzes a million coin flips, it has a better idea of which side the coin will land on next.
The AI cannot reliably determine what course the human will take from the limited scouting input. Moreover, the AI can be duped by the human player with bad info.


That's not how things would work. The AI would have a graph of possible paths that can be taken, just as a person does. Having seen what the current state is, it will be able to predict the path the opponent is taking, just the way a person can. There is nothing special about how pros play; it's experience, insight, and mechanics. DeepMind-like AIs could probably surpass them on almost all of these factors given enough computation. It could probably determine better than a human what the opponent is doing.

And don't assume that the AI can only react; it can have its own strategy.
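The "graph of possible paths" idea can be sketched concretely: a hand-made build-order graph whose branches get pruned against what the scout actually saw. The openings and their requirements below are invented for illustration, not real build orders or timings:

```python
# Toy build-order graph: each node lists the buildings that must exist for
# the opponent to be on that path. Scouting prunes the graph; whatever
# survives is the set of paths the opponent could still be on.

BUILD_GRAPH = {
    "gateway_first": {"gateway"},
    "4gate":         {"gateway", "cybercore"},
    "robo_expand":   {"gateway", "robo"},
    "forge_first":   {"forge"},
    "cannon_rush":   {"forge", "cannon"},
    "forge_expand":  {"forge", "nexus2"},
}

def consistent_paths(scouted_buildings):
    """Paths whose requirements are a subset of what was actually seen."""
    return sorted(
        name for name, required in BUILD_GRAPH.items()
        if required <= scouted_buildings
    )

# Seeing a gateway and a cybercore keeps both the opening and the 4gate
# follow-up alive -- the AI still has to guess between surviving branches,
# which is exactly necrosexy's counterpoint.
alive = consistent_paths({"gateway", "cybercore"})
```

The pruning step is what scouting buys; the guessing between the surviving branches is where denied scouting hurts, which is the disagreement running through this exchange.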
thePunGun
Profile Blog Joined January 2016
598 Posts
March 14 2016 02:07 GMT
#231
On March 14 2016 05:53 pr1de wrote:
I would take a bet against Boxer. Fine, he was a pro, but I don't think he realises what a PC can do nowadays.

Blasphemy!!! As soon as I find my torch and pitchfork, I will come for you!
But then again, I'm actually kind of hungry... I'll probably have lunch first... but as soon as I've finished my lunch... meh, never mind...
"You cannot teach a man anything, you can only help him find it within himself."
NonY
Profile Blog Joined June 2007
8748 Posts
Last Edited: 2016-03-14 02:18:29
March 14 2016 02:16 GMT
#232
On March 14 2016 02:14 Liquid`Nazgul wrote:
On March 13 2016 23:11 NonY wrote:
I don't get everyone who is assuming that the AI is allowed to cheat. The hardware rules are pretty clear for SC. Anything that is supposed to require one button press in the game must take one button press in the real world. Any "macro" that turns a combo of presses into one press is banned. And certainly any system that completely bypasses having to press anything is not going to be allowed. It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack. These are basic things for anyone who has played competitive SC to understand.

Certainly by tournament rules an AI wouldn't be allowed to participate, and neither is it allowed to actually enter Go tournaments. For there to be a fruitful discussion about an AI playing there should be an understanding it wouldn't be playing through headphones and pressing buttons on a keyboard. If we are going to enforce those rules onto an AI we may as well not bother.

History does not set a precedent one way or the other because using a robot to execute the moves in Chess and Go would be purely symbolic. For SC it actually matters that the moves are performed physically. I'm not sure what significance we can take from it defeating the best human player if it is allowed to cheat. If their goal is to simply do AI research, then they can do it without the framework of games and without challenging human players. For us to put their accomplishment in terms we can understand, which I assume is a reason why they play these games and challenge their champions, then acknowledging the reality of the game is a necessity. They don't have to choose SC, but if they do and they want their results to continue to be meaningful, then they have to accept what SC is.

There are some very advanced robots. I think a very capable machine could be built and it would be all the more impressive and clear to people what they're accomplishing. Take a moment to imagine the SC-playing robot and how cool that would be. If they choose not to do so, then I guess personally all I can say is that it's no longer interesting to me, the same way I'm not interested in trying to judge how good of a player a hacker is, or who the best micro tournament player is, or whatever.
"Fucking up is part of it. If you can't fail, you have to always win. And I don't think you can always win." Elliott Smith ---------- Yet no sudden rage darkened his face, and his eyes were calm as they studied her. Then he smiled. 'Witness.'
shadymmj
Profile Joined June 2010
1906 Posts
March 14 2016 02:25 GMT
#233
its not so impressive if an AI can outmuscle a human
its more impressive if it can out-think a human

so you need to have some restrictions in place to make it a thinking competition
There is no such thing is "e-sports". There is Brood War, and then there is crap for nerds.
necrosexy
Profile Joined March 2011
451 Posts
Last Edited: 2016-03-14 02:29:51
March 14 2016 02:29 GMT
#234
On March 14 2016 11:06 redviper wrote:
On March 14 2016 04:15 necrosexy wrote:
On March 14 2016 00:58 redviper wrote:
On March 14 2016 00:45 Cuce wrote:
On March 13 2016 23:11 NonY wrote:
It also cannot gather any info except by looking at a monitor and listening to headphones. Any other method is clearly a hack.



I wonder how an AI would react, or whether it will react in time, to DTs, since there won't be an alert. Or how it will scan the screen against nukes. We as players "feel" something wrong and react to these things. Well, that's just a simplified way of saying "players have been doing this stuff for so long that neuroplasticity forms a neural reflex akin to muscle memory".

The same goes for subtle changes in macro to fit slight shifts in timing based on scouting information, or deviations from the meta throughout the game. These aren't really conscious decisions; they're closer to muscle memory.

That's the crux of the issue, I think. Yeah, sure, it can pull worker rushes every game to win. But winning is not the result such a development team wants; they want to develop AI, not get gimmicky wins.

I wonder how many variables an AI would have to go through to figure out what comes to a pro player intuitively.
Not sure if we can pull that off in real time right now.

That's another thing: real-time vs. turn-based.

You are truly underestimating an advanced AI. AIs built on DNNs can certainly make different decisions based on slightly different scouting information. After a million games the AI would have a much better understanding of the game's strategy than any human being could.

This is like saying that if an AI analyzes a million coin flips, it has a better idea of which side the coin will land on next.
The AI cannot reliably determine what course the human will take from the limited scouting input. Moreover, the AI can be duped by the human player with bad info.


That's not how things would work. The AI would have a graph of possible paths that can be taken, just as a person does. Having seen what the current state is, it will be able to predict the path the opponent is taking, just the way a person can. There is nothing special about how pros play; it's experience, insight, and mechanics. DeepMind-like AIs could probably surpass them on almost all of these factors given enough computation. It could probably determine better than a human what the opponent is doing.

And don't assume that the AI can only react; it can have its own strategy.

Making a prediction and making the right prediction are different things. Again, the AI must guess.
EngrishTeacher
Profile Blog Joined March 2012
Canada1109 Posts
March 14 2016 04:06 GMT
#235
On March 13 2016 14:06 BronzeKnee wrote:
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time, I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no-contest; we'd have near-perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse, and the human is restrained by those factors, will the human lose.

Humans are far too innovative. If you simply deny scouting, the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploited.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game on another computer using a keyboard, mouse, and monitor, because that is how StarCraft is played. I didn't create StarCraft, so don't blame me that that is how the game is played.

Having those limitations is what makes StarCraft difficult. If the AI can at once be blink-microing Stalkers while warping in units at a pylon off screen (outside its field of vision), then that is cheating.

The AI in Chess or Go can do nothing a human cannot; the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited also.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields; if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too: subtle sounds would be all I would need to send a worker to a mineral line.


I know this is a few pages further back in the thread, but I really would like to address the artificial physical limitations people are trying to impose.

I'm pretty certain that no matter HOW MUCH WE ATTEMPT TO RESTRICT the AI in terms of physical playing limitations, it's still going to VASTLY outperform a human in terms of raw mechanics. SC is just much too mechanics-heavy of a game to properly design and implement an AI that "wins by out-thinking instead of out-muscling."

As a specific example, sure, you can limit the AI to only perform actions within view, so that it cannot micro perfectly while also macro'ing "off screen". However, this immediately becomes a moot point, because good multitasking is highly prized among SC players, and since true multitasking is impossible, it's no more than screen flipping using location and army hotkeys and trying to maximize the benefits of one's limited attention and actions. Considering inherent human limitations such as reaction time and muscle activation time, human multitasking is FAR from perfect. So the AI can be made to "multitask" as well, except we'll give it active snapshots and predictive algorithms for what is and what would be happening on screen, so every millisecond the AI is flipping back and forth between the 2 important micro/macro locations, performing actions as perfectly as your silly "limitations" allow.

Of course in the end people were getting ridiculous with ideas of extreme limitations, going as far as saying computers should be limited to playing with a keyboard and a mouse, thus almost completely removing any mechanical advantages the AI would have by nature, because "that's how SC is meant to be played." If such bizarre limitations were to be put in place so the "AI plays just like a human", then I guess AlphaGO is a cheater by definition, and its wins are meaningless as well right? Considering the human brain can only access limited parts of its limited memory (by comparison) at any given time, AlphaGO should have been made to only access 2KB of its 56MB (just random numbers here for illustration) database while playing right?

What makes these matches interesting is seeing how the imbalances (memory size vs. overall awareness, etc.) between AI and humans play out. If you attempt to artificially handicap the AI to exactly resemble a human, then what's the point of competing in the first place?

TL;DR: SC is just too mechanics heavy to allow for meaningful AI development. Coupled with the fact that it's a game of imperfect information with a significant RNG aspect to it, it's just not a good candidate game for AI exploration.
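The screen-flipping point above lends itself to a quick model: even if a bot is forced to pay a human-style cost every time it switches between its two hotspots, the lost throughput is bounded and small. All numbers below are illustrative assumptions, not measurements:

```python
# Back-of-envelope model of enforced "screen flipping": the bot alternates
# between two locations, acting for dwell_s seconds at aps_while_focused
# actions/second, then paying switch_cost_s of enforced dead time.

def effective_actions(duration_s, dwell_s, switch_cost_s, aps_while_focused):
    cycle = dwell_s + switch_cost_s
    full_cycles = int(duration_s // cycle)
    actions = full_cycles * dwell_s * aps_while_focused
    leftover = duration_s - full_cycles * cycle
    actions += min(leftover, dwell_s) * aps_while_focused  # partial cycle
    return actions

# Even a harsh 1 s switch cost only halves throughput at 1 s dwells; the
# bot still "multitasks" far more evenly than a human ever could.
```

This is the crux of the post: whatever switch cost you legislate, the bot spends its focused time perfectly, so the handicap changes the numbers without changing the conclusion.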
Big J
Profile Joined March 2011
Austria16289 Posts
March 14 2016 04:15 GMT
#236
On March 14 2016 13:06 EngrishTeacher wrote:
On March 13 2016 14:06 BronzeKnee wrote:
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time, I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no-contest; we'd have near-perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse, and the human is restrained by those factors, will the human lose.

Humans are far too innovative. If you simply deny scouting, the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploited.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game on another computer using a keyboard, mouse, and monitor, because that is how StarCraft is played. I didn't create StarCraft, so don't blame me that that is how the game is played.

Having those limitations is what makes StarCraft difficult. If the AI can at once be blink-microing Stalkers while warping in units at a pylon off screen (outside its field of vision), then that is cheating.

The AI in Chess or Go can do nothing a human cannot; the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited also.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields; if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too: subtle sounds would be all I would need to send a worker to a mineral line.


I know this is a few pages further back in the thread, but I really would like to address the artificial physical limitations people are trying to impose.

I'm pretty certain that no matter HOW MUCH WE ATTEMPT TO RESTRICT the AI in terms of physical playing limitations, it's still going to VASTLY outperform a human in terms of raw mechanics. SC is just much too mechanics-heavy of a game to properly design and implement an AI that "wins by out-thinking instead of out-muscling."

As a specific example, sure, you can limit the AI to only perform actions within view, so that it cannot micro perfectly while also macro'ing "off screen". However, this immediately becomes a moot point, because good multitasking is highly prized among SC players, and since true multitasking is impossible, it's no more than screen flipping using location and army hotkeys and trying to maximize the benefits of one's limited attention and actions. Considering inherent human limitations such as reaction time and muscle activation time, human multitasking is FAR from perfect. So the AI can be made to "multitask" as well, except we'll give it active snapshots and predictive algorithms for what is and what would be happening on screen, so every millisecond the AI is flipping back and forth between the 2 important micro/macro locations, performing actions as perfectly as your silly "limitations" allow.

Of course in the end people were getting ridiculous with ideas of extreme limitations, going as far as saying computers should be limited to playing with a keyboard and a mouse, thus almost completely removing any mechanical advantages the AI would have by nature, because "that's how SC is meant to be played." If such bizarre limitations were to be put in place so the "AI plays just like a human", then I guess AlphaGO is a cheater by definition, and its wins are meaningless as well right? Considering the human brain can only access limited parts of its limited memory (by comparison) at any given time, AlphaGO should have been made to only access 2KB of its 56MB (just random numbers here for illustration) database while playing right?

What makes these matches interesting is seeing how the imbalances (memory size vs. overall awareness, etc.) between AI and humans play out. If you attempt to artificially handicap the AI to exactly resemble a human, then what's the point of competing in the first place?

TL;DR: SC is just too mechanics heavy to allow for meaningful AI development. Coupled with the fact that it's a game of imperfect information with a significant RNG aspect to it, it's just not a good candidate game for AI exploration.


I couldn't agree more.

For my part, I would find it beautiful if an AI could show us that the game is played completely wrong and is possibly broken beyond repair. It would show the possibilities for the game, the gaming industry and e-sports, and should be embraced rather than feared. It has no impact whatsoever on the human achievements in this game so far.
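The "snapshot multitasking" idea described a few posts up can be sketched concretely. This is a toy illustration only, not any real bot API: the bot may only act on the location it is currently "viewing", but it flips between two hotspots every tick and keeps a cached snapshot of each. All names here (`FlippingBot`, `Snapshot`, the `observe`/`act` callbacks) are made up for the sketch.

```python
# Toy sketch of "snapshot multitasking": the bot only acts within its
# current view, but flips between hotspots every tick, refreshing a
# cached snapshot of each location before acting there.

from dataclasses import dataclass

@dataclass
class Snapshot:
    location: str
    game_time_ms: int   # when this view was last refreshed
    state: dict         # whatever the bot observed there

class FlippingBot:
    def __init__(self, hotspots):
        self.hotspots = hotspots      # e.g. ["army", "main_base"]
        self.snapshots = {}           # location -> latest Snapshot
        self.current = 0

    def tick(self, game_time_ms, observe, act):
        """Flip to the next hotspot, refresh its snapshot, act there."""
        loc = self.hotspots[self.current]
        self.current = (self.current + 1) % len(self.hotspots)
        self.snapshots[loc] = Snapshot(loc, game_time_ms, observe(loc))
        return act(loc, self.snapshots[loc])
```

Even under a strict "act only where you look" rule, a bot flipping like this every tick would still multitask far faster than a human can flip screens.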
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
March 14 2016 04:53 GMT
#237
On March 14 2016 13:15 Big J wrote:
On March 14 2016 13:06 EngrishTeacher wrote:
On March 13 2016 14:06 BronzeKnee wrote:
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI, given enough time for a team of scientists to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time, I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest; we'd have near-perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting, the AI will either guess what you are doing or go for some standard safe play, and either of those could be exploited.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game on another computer using a keyboard, mouse and monitor, because that is how Starcraft is played. I didn't create Starcraft, so don't blame me that that is how the game is played.

Having those limitations is what makes Starcraft difficult. If the AI can be blink-microing Stalkers while simultaneously warping in units at a pylon off screen (outside its field of vision), then that is cheating.

The AI in Chess or Go can do nothing a human cannot; the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited as well.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields; if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too; subtle sounds would be all I'd need to know to send a worker to a mineral line.


I know this is a few pages further back in the thread, but I really would like to address the artificial physical limitations people are trying to impose.

I'm pretty certain that no matter HOW MUCH WE ATTEMPT TO RESTRICT the AI in terms of physical playing limitations, it's still going to VASTLY outperform a human in terms of raw mechanics. SC is just much too mechanics-heavy a game to properly design and implement an AI that "wins by out-thinking instead of out-muscling."

[...]

The point of the calls for limiting the "mechanics" of the AI is to show that it actually won because of tactics/strategy.
That's pretty much the point of these AI programs in the first place.
If you simply run it with no limitations, it could win on mechanics alone; that would be almost trivial.
You could also argue that StarCraft (or any game played in real time) might simply be a bad choice for this kind of thing.
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
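The "limiting mechanics" idea in the post above is concrete enough to sketch: one simple way an APM cap could be enforced on a bot is a sliding one-minute window that rejects actions over budget. This is a generic illustration, not how any actual StarCraft AI framework works.

```python
# Generic sketch of an APM cap: allow at most max_apm actions in any
# trailing 60-second window, rejecting anything over budget.

class ApmLimiter:
    def __init__(self, max_apm):
        self.max_apm = max_apm
        self.issued = []  # timestamps (seconds) of allowed actions

    def allow(self, t):
        """Return True if an action at time t stays within max_apm."""
        window_start = t - 60.0
        # Drop actions that have aged out of the one-minute window.
        self.issued = [ts for ts in self.issued if ts > window_start]
        if len(self.issued) < self.max_apm:
            self.issued.append(t)
            return True
        return False
```

Even with such a cap, a bot would still spend its budget perfectly (never a wasted click), which is part of why a raw APM limit alone doesn't make the comparison "fair".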
EngrishTeacher
Profile Blog Joined March 2012
Canada1109 Posts
March 14 2016 05:00 GMT
#238
On March 14 2016 13:53 The_Red_Viper wrote:
[...]


That's... almost exactly what I said; I even had a TL;DR at the bottom, but I guess it was still too long to read.

Still TL;DR: it would be nearly impossible to implement the "right" amount of limitations, and thus SC, or any other RTS game, is probably a terrible candidate for AI development.
Lazare1969
Profile Joined September 2014
United States318 Posts
March 14 2016 05:16 GMT
#239
It can be done if you have money to throw away.

5 programmers * $80k * 1 year = $400,000
6 trillion
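For what it's worth, the headline arithmetic above checks out (the "6 trillion" line is presumably a joke):

```python
# Sanity check of the cost estimate quoted above.
programmers = 5
salary_per_year = 80_000  # USD per programmer
years = 1
total = programmers * salary_per_year * years
assert total == 400_000  # matches the $400,000 figure in the post
```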
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
Last Edited: 2016-03-14 05:21:45
March 14 2016 05:16 GMT
#240
On March 14 2016 14:00 EngrishTeacher wrote:
[...]



I fear I read Big J's reply the wrong way. I got confused about why he would agree with you and at the same time say the AI would show us how the game is meant to be played (strategically), when it would probably just win on pure mechanics (against humans).
AI vs AI would be more interesting, I guess.

So yeah, my bad for making posts this early in the morning -.-

TL;DR: I partially agree with you: real-time games (and SC2) are probably a bad place for AI development, or, to be more precise, a bad area for showing that an AI beats humans through superior 'understanding' of the game (which is obviously only one step toward a truly 'intelligent' AI).
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
Big J
Profile Joined March 2011
Austria16289 Posts
March 14 2016 05:29 GMT
#241
On March 14 2016 14:16 The_Red_Viper wrote:
[...]


Closer to how it is meant to be played from a game-theoretical point of view, since there is no rule in the game restricting you mechanically. All those restrictions are outside the ruleset of the game, and therefore uninteresting if you want to solve the game (which game theory tells us you can always do, i.e. there is always a set of pure or mixed strategies from which you should never deviate).
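The game-theory claim above can be made precise: Nash's theorem guarantees that every finite game has an equilibrium in pure or mixed strategies, though sometimes only a mixed one exists. A toy check on matching pennies (unrelated to any StarCraft tooling) illustrates both halves:

```python
# Matching pennies: row player's payoffs; column player gets the
# negative (zero-sum). No pure-strategy equilibrium exists, but the
# 50/50 mix for both players is a (mixed) Nash equilibrium.

import itertools

M = [[1, -1],
     [-1, 1]]

def pure_equilibria(M):
    """Cells where neither player can gain by deviating unilaterally."""
    eqs = []
    for r, c in itertools.product(range(2), range(2)):
        row_best = all(M[r][c] >= M[r2][c] for r2 in range(2))
        col_best = all(-M[r][c] >= -M[r][c2] for c2 in range(2))
        if row_best and col_best:
            eqs.append((r, c))
    return eqs

# Against the 50/50 column mix, both row actions yield expected payoff 0,
# so the row player cannot gain by deviating (and symmetrically for the
# column player): (1/2, 1/2) vs (1/2, 1/2) is an equilibrium.
expected = [0.5 * M[r][0] + 0.5 * M[r][1] for r in range(2)]
```

So "solvable in principle" holds for StarCraft too as a finite game, even though actually computing such an equilibrium is utterly intractable at that scale.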
The_Red_Viper
Profile Blog Joined August 2013
19533 Posts
Last Edited: 2016-03-14 05:34:53
March 14 2016 05:32 GMT
#242
On March 14 2016 14:29 Big J wrote:
Show nested quote +
On March 14 2016 14:16 The_Red_Viper wrote:
On March 14 2016 14:00 EngrishTeacher wrote:
On March 14 2016 13:53 The_Red_Viper wrote:
On March 14 2016 13:15 Big J wrote:
On March 14 2016 13:06 EngrishTeacher wrote:
On March 13 2016 14:06 BronzeKnee wrote:
On March 13 2016 14:03 writer22816 wrote:
On March 13 2016 13:57 BronzeKnee wrote:
On March 13 2016 03:19 Brutaxilos wrote:
As a programmer, I'm actually quite skeptical that Boxer would be able to beat an intelligent AI given that a team of scientists are given enough time to develop one. It's inevitable.


If the constraints are the same, as in the computer must use a keyboard and mouse and can only view one part of the screen at a time I don't think an AI could ever win against a top player, at least in my lifetime. And if we were able to control the game using the human brain, it would be another no contest, we'd have near perfect micro and macro too.

Only if the computer is unrestrained by a keyboard and mouse and the human is restrained by those factors will the human lose.

Humans are far too innovative. If you simply deny scouting and the AI will either guess what you are doing or go for some standard safe play, and either of those things could be exploit.


What the hell does it even mean for a computer to use a keyboard and a mouse? A computer doesn't have two hands and 10 fingers.


The computer with the AI should be playing the game using another computer using a keyboard, mouse and monitor because that is how Starcraft is played. I'm didn't create Starcraft so don't blame that that is how the game is played.

Having those limitations is what makes Starcraft difficult. If the AI can at once be blink microing Stalkers while warping in units at pylon off screen (off field of vision) then that is cheating.

The AI in Chess or Go can do nothing a human cannot, the AI is literally outthinking the players. So even if the APM is limited, the field of vision must be limited also.

The AI has no chance given equal constraints. If we are talking about no keyboard, mouse or monitor for the computer, then it should be the same for humans. I can imagine perfect forcefields, if the game responded to my mind, I'd never miss a forcefield. And my macro would be on point too, just subtle sounds would be all I would need to know to send a worker to a mineral line.


I know this is a few pages further back in the thread, but I really would like to address the artificial physical limitations people are trying to impose.

I'm pretty certain that no matter HOW MUCH WE ATTEMPT TO RESTRICT the AI in terms of physical playing limitations, it's still going to VASTLY outperform a human in terms of raw mechanics. SC is just much too mechanics-heavy of a game to properly design and implement an AI that "wins by out-thinking instead of out-muscling."

As a specific example, sure you can limit the AI to only perform actions within view, so that it cannot micro perfectly while also macro'ing "off screen". However this immediately becomes a moot point, because good multitasking is highly prized among SC players, and since true multitasking is impossible, it's no more than screen flipping using location and army hotkeys and trying to maximize the benefits of one's limited attention and actions. Considering inherent human limitations such as reaction time and muscle activation time, human multitasking is FAR from perfect. So the AI can be made to "multitask" as well, except we'll give it active snapshots and predicative algorithms as to what is and what would be happening on screen, so every millisecond the AI is flipping back and forth between the 2 important micro/macro locations, and performing actions as perfectly as your silly "limitations" allow.

Of course in the end people were getting ridiculous with ideas of extreme limitations, going as far as saying computers should be limited to playing with a keyboard and a mouse, thus almost completely removing any mechanical advantages the AI would have by nature, because "that's how SC is meant to be played." If such bizarre limitations were to be put in place so the "AI plays just like a human", then I guess AlphaGO is a cheater by definition, and its wins are meaningless as well right? Considering the human brain can only access limited parts of its limited memory (by comparison) at any given time, AlphaGO should have been made to only access 2KB of its 56MB (just random numbers here for illustration) database while playing right?

What makes these matches interesting is seeing how the imbalances (memory size vs. overall awareness, etc.) between AI and humans play out. If you attempt to artificially handicap the AI to exactly resemble a human, then what's the point of competing in the first place?

TL;DR: SC is just too mechanics heavy to allow for meaningful AI development. Coupled with the fact that it's a game of imperfect information with a significant RNG aspect to it, it's just not a good candidate game for AI exploration.


I couldn't agree more.

For my part, I would find it beautiful if an AI could show us that the game is played completely wrong and is possibly broken beyond repair. It shows the possibilities for the game, the gaming industry and e-sports, and should be embraced rather than feared. It has no impact whatsoever on the human achievements in this game so far.

The point of the calls for limiting the "mechanics" of the AI is to show that it actually won because of tactics/strategy.
That's pretty much the point of these AI programs in the first place.
If you simply run it with no limitations, it could win purely on mechanics alone; it would be almost trivial.
You could also argue that starcraft (or any game which is played in real time) might be simply a bad choice for this kind of thing.


That's... almost exactly what I said. I even had a TL;DR at the bottom, but I guess it was still too long to read.

Still TL;DR: it would be nearly impossible to implement the "right" amount of limitations, and thus SC, like any other RTS game, is probably a terrible candidate for AI development.



I read Big J's reply the wrong way, I fear. I got confused as to why he would agree with you and at the same time say the AI would show us how the game is meant to be played (strategically) when it would probably just win by pure mechanics (against humans).
AI vs AI would be more interesting, I guess.

So yeah, my bad for making posts this early in the morning -.-

TL;DR: I partially agree with you. Real-time games (and SC2) are probably a bad place for AI development, or to be more precise, a bad area to show that the AI beats humans because of a superior 'understanding' of the game (which is obviously only one step in the direction of a truly 'intelligent' AI).


It's closer to how it is meant to be played from a game-theoretical point of view, since there is no rule in the game restricting you mechanically. All those restrictions are outside the ruleset of the game and therefore uninteresting if you want to solve the game (which you can always do, as game theory tells us; i.e. there is always a set of pure or mixed strategies that you should never deviate from).

Yeah, I got that after reading everything again, which is why I said AI vs AI might be the way to go then.
It's just that games are made for humans, which is why "solving" real-time games is imo different in a sense.
For board games it made no difference. (edit: even though you could say that even for board games there is the difference in pure calculation power between humans and an AI, which is comparable, I guess)
IU | Sohyang || There is no God and we are his prophets | For if ‘Thou mayest’—it is also true that ‘Thou mayest not.” | Ignorance is the parent of fear |
fireforce7
Profile Joined June 2010
United States334 Posts
March 14 2016 07:08 GMT
#243
What they say is true. However, the only way I could see it being false is if the AI is like the cheater AI and doesn't have to fight through the fog of war. Mind games make up so much of what SC2 is.

It'd be interesting to see how efficient the AI is versus a player though... it could perfectly micro each individual unit (potentially)
I'm terranfying
Garmer
Profile Joined October 2010
1286 Posts
March 14 2016 08:50 GMT
#244
In a very distant future we may witness only bot vs. bot at the highest level of play; this is somehow off-putting...
papaz
Profile Joined December 2009
Sweden4149 Posts
Last Edited: 2016-03-14 09:27:40
March 14 2016 09:26 GMT
#245
Rofl @humans and their pride.

Why do so many people think humans have some kind of magic that can't be replicated by computers?

Is it because people are religious and think we have some kind of "soul" or something along those lines that makes us unique?

We are just advanced biological machines as a result of evolution.

We will of course sooner or later develop AI and computers that outshine us in every single way. It's just a matter of time. What argument is there that this won't happen?

It's just a matter of science and enough time.

Science fighting!
Liquid`Bunny
Profile Joined May 2011
Denmark145 Posts
March 14 2016 10:59 GMT
#246
Well, of course the AI will be able to beat human StarCraft players, regardless of whether it is APM-capped or not, as long as they put enough effort into making it. However, it would be boring if players didn't take on the challenge of beating it. I myself would love to experience playing against it; we might learn something!

Also, it's kind of funny how everyone is viewing the AI winning as humans "losing". I think it would be a great achievement for humanity to make an AI that can learn such a complex task.
Team Liquid
Liquid`Nazgul
Profile Blog Joined September 2002
22427 Posts
Last Edited: 2016-03-14 11:04:14
March 14 2016 11:03 GMT
#247
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~
Administrator
BeyondCtrL
Profile Joined March 2010
Sweden642 Posts
Last Edited: 2016-03-14 11:10:57
March 14 2016 11:08 GMT
#248
Definitely a lot of good points here about the nature of SC and physical limitations (Nony sums it up pretty much).

Yet I think people are arguing somewhat out of perspective. Even if Google's AI can beat a human pro by next year, it won't mean that the age of AI is nigh. It will merely be a showcase that AI development is headed the right way when it comes to tackling the problems of innovation/creativity (and even then, that aspect is heavily micromanaged by human intelligence).

There are far more problems in making AIs competitive with humans than just the ability to innovate. The human brain runs on something like 20 watts, and if you take into consideration all the tasks it's always doing (regardless of what the conscious focus is), then a computer backed by 100+ dedicated human intelligences, consuming the energy of a server farm with almost all its computing power focused on a single purpose, suddenly looks very pathetic in comparison to what the human brain is accomplishing.
Kaiwa
Profile Blog Joined August 2010
Netherlands2209 Posts
March 14 2016 11:27 GMT
#249
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~


Definitely important. If it weren't for the law I would've taken over the world a long time ago.
시크릿 / 씨스타 / 에이핑크 / 윤하 / 가비앤제이
beg
Profile Blog Joined May 2010
991 Posts
March 14 2016 11:44 GMT
#250
On March 14 2016 20:08 BeyondCtrL wrote:
Definitely a lot of good points here about the nature of SC and physical limitations (Nony sums it up pretty much).

Yet I think people are arguing out of perspective, somewhat. Even if Google's AI can beat a human pro by next year it won't mean that the age of AI is nigh. It will merely be a showcase that AI development is headed the right way when it comes to tackling the problems of innovation/creativity (even then that aspect is heavily micro managed by human intelligence).

There are way many more problems in making AIs competitive with humans than simply just the ability to innovate. The human brain runs on something like 20 Watts and if you take into consideration all the tasks that it's always doing (regardless of what the conscious focus is); well, then having a computer with 100+ dedicated support human intelligences while also using the energy cost of a server farm (with almost all its computing power focused on a single purpose) is all of a sudden very pathetic in comparison to what the human brain is accomplishing.

Keep in mind, how long has "deep learning" been around? Only about 10 years? Computing power is still developing fast.

Give it another 100 years... or maybe 500... who knows...
We're seeing the beginning of something beautiful right now.
Liquid`Bunny
Profile Joined May 2011
Denmark145 Posts
March 14 2016 11:51 GMT
#251
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~

When we create a program that can make a better program on its own, that's when the trouble starts.
Team Liquid
BeyondCtrL
Profile Joined March 2010
Sweden642 Posts
March 14 2016 11:59 GMT
#252
On March 14 2016 20:44 beg wrote:
Keep in mind, "deep learning" is around for how long? 10 years only? Computer power is still developing fast.

Give it another 100 years... or maybe 500... who knows...
We're seeing the beginning of something beautiful right now.


Not sure that you understood my point.
BeyondCtrL
Profile Joined March 2010
Sweden642 Posts
March 14 2016 12:01 GMT
#253
On March 14 2016 20:51 Liquid`Bunny wrote:
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~

When we create a program that can make a better program on it's own, that's when the trouble starts.


AlphaGo does that already.
Dakota_Fanning *
Profile Joined January 2008
Hungary2347 Posts
Last Edited: 2016-03-14 12:13:06
March 14 2016 12:11 GMT
#254
A well-written AI will/would be able to beat any human player with half the human's APM (that "half" ratio is not set in stone).

As many wrote before, the AI would use its allowed APM to perform just the actions that are needed: no spam, no misclicks, no selecting a unit to check its health or energy. An AI doesn't need to scroll/click on a hatchery to see when the next larva inject is required, etc. An AI would never miss a drop "flying in" on the minimap; an AI would react to a drop in the same millisecond it becomes visible (i.e., the moment it comes into the range of any unit).

As an extreme example, the AI could mimic the exact game engine (but of course using only the "visible/public" part of the scene) to calculate any battle even before it happens (taking different actions/micro into account as variations to calculate different outcomes).

It's just a matter of how many hours and how much energy is put into it, and what computational power is at the AI's disposal; it's absolutely not a question whether it could beat pro players.
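The "mimic the game engine" idea above is essentially forward simulation: before committing to an action, run each candidate through a model of the fight and keep the best outcome. A toy sketch with made-up unit stats (these are not real StarCraft numbers, and a real bot would simulate the actual engine state rather than this crude Lanchester-style model):

```python
def simulate_fight(a_count, a_dps, a_hp, b_count, b_dps, b_hp, ticks=1000):
    """Crude forward simulation: each side's total DPS, scaled by how many
    units survive, whittles down the other side's pooled hit points."""
    a_pool, b_pool = a_count * a_hp, b_count * b_hp
    for _ in range(ticks):
        if a_pool <= 0 or b_pool <= 0:
            break
        a_alive = a_pool / a_hp        # fractional surviving-unit counts
        b_alive = b_pool / b_hp
        a_pool -= b_alive * b_dps
        b_pool -= a_alive * a_dps
    return a_pool, b_pool

def pick_action(actions):
    """Choose the candidate whose simulated outcome leaves us most material."""
    return max(actions, key=lambda kv: simulate_fight(*kv[1])[0])[0]

# Hypothetical choice: fight 10v8 head-on, or pull two units back first (8v8).
best = pick_action([
    ("engage", (10, 6, 45, 8, 6, 45)),
    ("retreat-two", (8, 6, 45, 8, 6, 45)),
])
print(best)  # → engage: concentrating the larger army wins with units to spare
```

The design point is that the simulator only needs publicly visible state; hidden information would have to be estimated, which is exactly where the fog-of-war arguments elsewhere in the thread come in.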
https://repmastered.icza.net
Skynx
Profile Blog Joined January 2013
Turkey7150 Posts
March 14 2016 12:23 GMT
#255
I cannot see how all these micro/APM cap discussions are relevant if an AI cannot survive 6 minutes. Assuming it is not reading any game input data and is driven by the same scouting and fog of war as humans, the AI would still employ tactics limited to its scouting input (that is, say with a rax scout seeing a Protoss player on 2 gas and a single pylon, the machine can't check every corner of the map). It can take every possible precaution in the book against cheese, but every now and then you face a player like Has (or dare I name him, sOs) who cannot be predicted and can only be beaten by the instinctive decision making Boxer is talking about. An AI won't instinctively think to send 2 drones to the enemy base to proxy hatch in case a cannon rush is not scouted and dealt with in time.
All this brings another factor into play: whether a human can predict what the machine will do in a certain case (it sends the perfect number of drones to deal with a cannon rush, but it is still unknown whether the cannon will finish or not, which can be exploited).
A machine is, no matter what, always limited by its input data (from humans). Unless the same instinctive pattern of human consciousness can be implemented in a distant future, they can never be like you, Neo sOs.
"When seagulls follow the troller, it is because they think sardines will be thrown into the sea. Thank you very much" - King Cantona | STX 4 eva
beg
Profile Blog Joined May 2010
991 Posts
March 14 2016 12:29 GMT
#256
On March 14 2016 20:59 BeyondCtrL wrote:
On March 14 2016 20:44 beg wrote:
Keep in mind, "deep learning" is around for how long? 10 years only? Computer power is still developing fast.

Give it another 100 years... or maybe 500... who knows...
We're seeing the beginning of something beautiful right now.


Not sure that you understood my point.

Not sure you had any point at all.
BeyondCtrL
Profile Joined March 2010
Sweden642 Posts
March 14 2016 12:40 GMT
#257
On March 14 2016 21:11 Dakota_Fanning wrote:
A well written AI will/would be able to beat any human player with half the human's APM (that "half" ratio is not written in stone).

As many wrote before, the AI would use its allowed APM to do just the actions that are needed, no spam, no misclick, no selection to check unit's health or energy. An AI doesn't need to scroll/click on a hatchery to see when next larva inject is required etc. An AI would never miss any drop "flying in" on the minimap, an AI would react to a drop in the same millisecond it becomes visible (e.g. that comes into the range of any unit).

As an extreme example, the AI could mimic the exact game engine (but of course using only the "visible/public" part of the scene) to calculate any battle, even before it happens (different actions / micro take into account as variations to calculate different outcomes).

It's just a matter of how many hours and energy is put into it, and what computational power is at the AI's disposal, it's absolutely not a question whether it could beat pro players.


Would an AI then be able to beat a human that could control the game directly by thought? Of course the answer is yes. Even if, for example, both the AI and the human had the exact same calculations per second, the AI would ALWAYS react faster because its signals simply travel MUCH faster than a biological brain's (the difference in reaction time between the AI and the human is the extra time the AI has to make calculations before the human is even aware of the event).

Going back, however, to the question at the beginning of this reply: I believe that if AlphaGo were to beat a human player in the near future (with the human playing with kb/m), it would be doing so without any of the physical restraints that the human player is bound to. If the human were, in that same scenario, able to control the game simply by thought, then AlphaGo would most likely be beaten every time.

Ultimately, the truth is that a non-biological form of sentience/intelligence is going to be not only orders of magnitude more creative, but also so incomprehensibly faster that, to it, human intelligence would seem closer to a sloth's than to its own.
Squat
Profile Joined September 2013
Sweden7978 Posts
March 14 2016 14:02 GMT
#258
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~

Lack of opposable thumbs should keep everyone relatively safe.
"Digital. They have digital. What is digital?" - Donald J Trump
BisuDagger
Profile Blog Joined October 2009
Bisutopia19219 Posts
March 14 2016 14:21 GMT
#259
Until the AI plays a person, it won't learn the micro tricks that players possess: stacked lurkers, glitching through mineral lines, an observer on top of a missile turret. Hell, even allying your opponent so their spider mines don't trigger could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats a barracks over a hatchery, the hatchery is unselectable. If a barracks is on top of a cluster of tanks sitting on high ground, the tanks cannot be targeted by direct mouse clicks. Does the computer have to abide by the same rules in that sense?
ModeratorFormer Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
LRM)TechnicS
Profile Joined May 2008
Bulgaria1565 Posts
Last Edited: 2016-03-14 14:51:47
March 14 2016 14:47 GMT
#260
On March 14 2016 23:21 BisuDagger wrote:
Until the AI plays a person, it won't learn the micro tricks that players contain: Stacked lurkers, glitching through mineral lines, observer on top of missile turret. Hell, even ally your opponent so their spider mines don't work could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats barracks over a hatchery then the hatchery is un-selectable. If a barracks is on top of a cluster of tanks that sit on the high ground, then the tanks cannot be target by direct mouse clicks. Does the computer have to abide by the same rules in that sense?


The more likely outcome of a fully developed AI aimed at beating top human players will be rather the reverse, IMO: it will have far superior micro, and it is probable that we will learn micro tricks from it, though we won't be able to use them, as they will be physically impossible for humans most of the time.

Check out this sweetie pie AI with its basic Muta vs Archon micro from 2009 here

Not sure if the AI will need humans for this

Enjoy the game
Tuczniak
Profile Joined September 2010
1561 Posts
March 14 2016 14:58 GMT
#261
I think the most interesting set of limitations will be the one that produces the best strategies that could theoretically be used by humans. But of course I would like to see AI vs AI without limitations, even if it's irrelevant to the game we play.
suid
Profile Joined May 2015
11 Posts
March 14 2016 15:04 GMT
#262
On March 14 2016 18:26 papaz wrote:
Rofl @humans and their pride.

Why does some many people think humans have some kind of magic that can't be replicated by computers?

Is it because people are religious and think we have some kind of "soul" or something along those lines that makes us unique?

We are just advanced biological machines as a result of evolution.

We will of course sooner or later develop AI and computers that outshines us in every single way. It's just a matter of time. What argument is there that this won't happen?

It's just a matter of science and enough time.

Science fighting!


I agree that, in general, humans overestimate their position in the animal kingdom, but there are non-religious, professional philosophers and cognitive scientists who believe that human consciousness is an emergent phenomenon arising from impossibly complicated circuits at the neurobiological level, which they believe cannot be completely comprehended, let alone reproduced, by theoretical algorithms. Dennett says this consciousness is an illusion, which I feel is probably an accurate description of what most people mean by it. Any species with an endocrine system, a sufficiently complex endbrain, and a few other structures/connections probably experiences some feeling of "sentience" or "consciousness." But, again, I think those words probably don't actually mean anything scientifically. It's very easy to discuss the components of consciousness without even realizing it's consciousness being discussed.

Sure, humans are "just advanced biological machines as a result of evolution." How long did that process take again? 3.5 billion years? Your view of the complexity of the human species seems very reductive. The theoretical/computational neuroscientists don't even have a falsifiable theory of the brain yet; it's a gigantic pile of anatomical and physiological details. There are still many, many experiments to be done, and that alone can take extremely long. And, fwiw, the "Turing test" is an idiotic metric of machine intelligence.

One area where machines will have a very difficult time "outshining" human expertise or biology, regardless of species, is energy conservation (metabolism); at least for a very, very, very long time. Evolution optimized that process very well. That's an opinion I just formed while writing this; maybe someone else has actually studied it.

My view of the SC2 AI is that this could already happen today very easily. I don't know why anyone cares about Boxer's opinion.
Asbury
Profile Joined August 2015
2 Posts
March 14 2016 15:14 GMT
#263
So I'm wondering: what exactly is the upper limit of APM with respect to how the game actually runs on our computers? Is it equal to the game's framerate, like 60 FPS?
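A back-of-the-envelope answer, under the assumption that the engine accepts at most one command per rendered frame (the real engine runs its game logic on its own tick rate, so these numbers are purely illustrative):

```python
frames_per_second = 60                     # assumed render/input rate
actions_per_minute = frames_per_second * 60
print(actions_per_minute)  # → 3600, the APM ceiling under this assumption
```

Any per-frame cap translates into an APM ceiling the same way: ticks per second times 60.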
Ciaus237
Profile Joined July 2015
South Africa270 Posts
March 14 2016 15:28 GMT
#264
To those saying that the challenge is pointless because the AI could just brute-force it with mechanics: I think Google would be perfectly happy with that result. The AI will have ITSELF LEARNT that that is effective. For the AI to figure out from replays and self-play the ideal way to micro (a reasonably difficult learning process) would be an incredible result. Most micro bots are coded explicitly with knowledge of things like splash radius and attack range in mind; for the AI to "figure out micro", and the same for macro, would be a great result. That alone (a self-learning algorithm learning mechanical optimisation) would be at least as interesting and useful as the Go wins. If it comes up with *anything* properly novel, then that's just godly really, and people at Google will be exploding with happiness.
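A toy flavor of that "learning micro rather than being told micro" idea, with every number invented: score a single kiting parameter (how close the enemy may get before the bot flees) in a crude one-dimensional duel. The point is only that attack-at-max-range kiting emerges from the score, not from hand-coded knowledge of kiting.

```python
def kite_trial(d_min, enemy_hp=10, bot_hp=20, max_ticks=200):
    """Deterministic toy duel: a ranged bot (range 2, faster when fleeing)
    vs a melee enemy. Policy: attack only when d_min <= dist <= 2."""
    dist = 5
    while enemy_hp > 0 and bot_hp > 0 and max_ticks > 0:
        max_ticks -= 1
        if d_min <= dist <= 2:
            enemy_hp -= 1          # shoot; bot stands still this tick
        elif dist < d_min:
            dist += 2              # flee at speed 2
        else:
            dist -= 1              # close the gap
        dist = max(0, dist - 1)    # enemy always closes 1
        if dist == 0 and enemy_hp > 0:
            bot_hp -= 3            # melee hits when adjacent
    return bot_hp if enemy_hp <= 0 else 0   # fitness: surviving HP on a win

# "Learning" here is just evaluating every candidate threshold:
best = max(range(4), key=kite_trial)
print(best, kite_trial(best))  # → 2 20
```

Thresholds 0 and 1 let the melee unit connect and the bot dies; threshold 3 never fires a shot; only threshold 2 (attack exactly at max range, then back off) wins untouched, which is the classic kiting pattern a self-optimizing bot would have to discover.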
The time that we kill keeps us alive
guN-viCe
Profile Joined March 2010
United States687 Posts
March 14 2016 16:44 GMT
#265
imo,

The AI will lose in its current state, but if the team persists for a year or two they will get to pro-gamer level and beyond.

How does one go about writing a bot for SC, and what language is optimal? Is the bot designed to be like a human and play from the top down? Or is it somehow tied to the SC files in a deep-rooted fashion?
Never give up, never surrender!!! ~~ Extraordinary claims require extraordinary evidence -Sagan
danielias
Profile Joined August 2012
Chile67 Posts
March 14 2016 16:47 GMT
#266
Wow... Speechless. Robots will take control of everything in the future: hospitals, highways, buildings, food, etc. We'll see that within our lifetimes, when we get old. Maybe 50 years from now.
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-14 16:53:53
March 14 2016 16:48 GMT
#267
On March 14 2016 23:58 Tuczniak wrote:
I think the most interesting set of limitations will be the one that produce the best strategies that could be theoretically used by humans. But of course I would like to see AI vs AI without limitations, even if it's irrelevant to the game we play.


Interesting idea, but the game would have to be re-balanced from the ground up. In SC2, as an example, the relationship between marines and banelings is very different in AI vs. AI at unlimited APM.

There are dozens of as-yet-undiscovered micro techniques at 3,000 APM that some day AIs may help humans discover.

We do not know much about StarCraft at 3,000 APM. We do not know what we don't know.
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
Makro
Profile Joined March 2011
France16890 Posts
March 14 2016 17:06 GMT
#268
On March 15 2016 01:47 danielias wrote:
Wow...Spechless. Robots will take control of everything in the future, hospitals, highways, buildings, food, etc. In our lifes era when we get old we´ll see that. Maybe in 50 years from now.

I would say 150 years; 50 years is not enough.
Matthew 5:10 "Blessed are those who are persecuted because of shitposting, for theirs is the kingdom of heaven".
TL+ Member
danielias
Profile Joined August 2012
Chile67 Posts
March 14 2016 17:30 GMT
#269
On March 15 2016 02:06 Makro wrote:
On March 15 2016 01:47 danielias wrote:
Wow...Spechless. Robots will take control of everything in the future, hospitals, highways, buildings, food, etc. In our lifes era when we get old we´ll see that. Maybe in 50 years from now.

i would say 150 years, 50 years not enough


Maybe, but I think we will see some of this in our lifetimes. In 150 years robots will definitely have a bigger role in humanity.

Michio Kaku teaches this stuff. I think this DeepMind is a huge thing; like B.C. and A.D., the same split will happen: before self-learning robots and after self-learning robots. Jobs will be very different in 100 or 150 years.
Tenks
Profile Joined April 2010
United States3104 Posts
March 14 2016 17:51 GMT
#270
Like others have said, if they allow the AI free rein it could just run at 1000 APM and micro 4 marines to kill almost everything. But I'm skeptical that they'd be able to program an AI to beat someone like Flash on pure strategy and counter-strategy.
Wat
beheamoth
Profile Joined December 2015
44 Posts
March 14 2016 17:53 GMT
#271
Just to add to my other points: the AI would wreck humans. Just think about being able to take a few units or even a worker and trigger scan range at the absolute perfect moment so half the army fires at the wrong point, then do this again... again and again, with superb unit control. I can accept the computer won't be thinking strategy on a human level, but I believe it can make more of its units, which is what a lot of SC2 is. Not only will it perfectly spend its money, but given scouting, SC doesn't have that many variables for response; it's how you get there.
TheZov
Profile Joined December 2010
Russian Federation34 Posts
March 14 2016 18:29 GMT
#272
Oh come on guys, what are we talking about here? The AI of tomorrow is not the Deep Blue archetype that (I might point out) DID beat a professional chess player at his own game. Dynamic thinking machines with unlimited APM and an unlimited ability to process, calculate, analyze and mimic every play in the history of the game (both pro and amateur, including every ladder game ever played), on a microsecond basis, selecting situational solutions in real time with perfect execution and unlimited potential for multi-pronged execution... Does that sound like something anyone else can do?
Economy is priority #1, #2, and #3 through 7.
ZAiNs
Profile Joined July 2010
United Kingdom6525 Posts
Last Edited: 2016-03-14 19:10:53
March 14 2016 19:10 GMT
#273
On March 15 2016 03:29 TheZov wrote:
Oh come on guys, what are we talking about here? The AI of tomorrow is not the Deep Blue archetype that (I might point out) DID beat a professional chess player at his own game. Dynamic thinking machines that have unlimited APM and unlimited ability to process, calculate, analyze and mimic every play in the history of the game (both pro and amateur, including every ladder game ever played in the history of the game), on a micro-second basis, selecting situational solutions in real time with perfect execution and an unlimited potential for multi-prong execution... Does that sound like something anyone else can do?

First of all, it's well agreed that if the AI exploits infinite APM it will win easily. What's in question is whether an AI will be able to beat a human any time soon when the AI is subject to physical limitations similar to a human's.

Also, your understanding of AI right now isn't very accurate. The AI won't have 'analyzed every play in the history of the game including every pro, amateur and ladder match'; it couldn't for Go because there were simply too many game states, and StarCraft has several orders of magnitude more game states.

Brute force worked for chess but not for Go, which needed a neural-network AI to 'solve' it. For AlphaGo, they fed the AI around 150,000 pro games (there's no way they will be able to feed a StarCraft AI anywhere close to that, and even if they could, I'm pretty sure StarCraft would require waaaay more games than Go to get to the same place). AlphaGo then played against itself for months using a crazy amount of computing power (which amounts to a completely insane number of games). In StarCraft, every tiny change in any unit's positioning is a different game state (or even the position of the mouse cursor), so it'll constantly be playing game states it has never played before.

You also say the AI can perform this super-complex analysis 'on a micro-second basis', which isn't the case at all. AlphaGo does not make decisions instantaneously; AlphaGo and Lee Sedol, in their ongoing match, are both given 2 hours of thinking time per game, and Go is a game with around 200 discrete moves. If you count every action made in StarCraft, there are probably tens of thousands of discrete moves from both sides. Making the AI able to make decisions in 1/60th of a second is going to be a huge challenge, and there is no way it can perform the same level of analysis in 1/60th of a second as it could with even 1 second.
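The time-budget gap described above can be made concrete with rough arithmetic (numbers taken from the post: ~200 moves over 2 hours of thinking time, versus one decision per 60 FPS frame):

```python
go_thinking_seconds = 2 * 60 * 60    # 2 hours of clock per player
go_moves = 200                       # rough per-game move count cited above
go_budget = go_thinking_seconds / go_moves   # seconds of thinking per Go move

sc_budget = 1 / 60                   # one decision per frame at 60 FPS

print(go_budget)               # 36.0 seconds per Go move
print(go_budget / sc_budget)   # ~2160x more thinking time per decision in Go
```

Even if a StarCraft bot only decided once per second instead of once per frame, the per-decision budget would still be a tiny fraction of what AlphaGo gets.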
bluQ
Profile Blog Joined January 2011
Germany1724 Posts
Last Edited: 2016-03-14 19:36:44
March 14 2016 19:26 GMT
#274
On March 15 2016 00:04 suid wrote:
On March 14 2016 18:26 papaz wrote:
Rofl @humans and their pride.

Why does some many people think humans have some kind of magic that can't be replicated by computers?

Is it because people are religious and think we have some kind of "soul" or something along those lines that makes us unique?

We are just advanced biological machines as a result of evolution.

We will of course sooner or later develop AI and computers that outshines us in every single way. It's just a matter of time. What argument is there that this won't happen?

It's just a matter of science and enough time.

Science fighting!


I agree that, in general, humans overestimate their position in the animal kingdom, but there are non-religious, professional philosophers and cognitive scientists who believe that human consciousness is an emergent phenomena from impossibly complicated circuits at the neurobiological level that they believe cannot be completely comprehended, let alone reproduced, by theoretical algorithms. Dennett says this consciousness is an illusion, which I typically feel is probably an accurate statement of what most people think of. Any species with an endocrine system, a sufficiently complex endbrain, and a few other structures/connections probably experiences some feeling of "sentience" or "consciousness." But, again, I think those words probably don't actually mean anything scientifically. It's very easy to discuss the components of consciousness without actually even realizing it's consciousness being discussed.

Sure, humans are "just advanced biological machines as a result of evolution." How long did that process take again? 3.5 billion years? Your view of the complexity of the human species seems very degraded. The theoretical/computational neuroscientists don't even have a falsifiable theory of the brain yet; it's a gigantic pile of anatomical and physiological details. There are still many, many experiments to be done, and that alone can take extremely long. And, fwiw, the "Turing test" is an idiotic metric of machine intelligence.

One area where machines will have a very difficult time "outshining" human biology, regardless of species, is energy conservation (metabolism), at least for a very, very, very long time. Evolution optimized that process very well. That's an opinion I just formed while writing this; maybe someone else has actually studied it.

My view of the SC2 AI is that this could already happen today very easily. I don't know why anyone cares about Boxer's opinion.

Let me help you with your argument:
They estimate that simulating the whole human brain would require a supercomputer with about 500 petabytes of memory. The current record in one system is 1.5 petabytes (the Sequoia supercomputer), so we need a system over 300 times larger. Such machines are not expected in this decade; the Human Brain Project expects them to be available around 2023.
Source: here
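For what it's worth, the "over 300 times" figure checks out against the quoted numbers; this is just a back-of-envelope sanity check, with both figures taken from the quoted source:

```python
# Figures quoted above: memory needed for whole-brain simulation vs. the
# largest single system at the time (Sequoia).
needed_pb = 500     # petabytes, quoted estimate
current_pb = 1.5    # petabytes, quoted record

factor = needed_pb / current_pb   # ~333, i.e. "over 300 times larger"
```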


People oversimplify the problem. Without a quantum computer, no AI will outshine a human brain.

And one simple fact most are missing anyway: an AI designed ONLY to be superb at one thing just can't be compared to a human who dedicates their brain-power to a diversity of things (social life, sports, school, studies, etc.).

To speak of "superior" AIs is the real illusion here.

Science hwaiting.
www.twitch.tv/bluquh (PoE, Starbow, HS)
L_Master
Profile Blog Joined April 2009
United States8017 Posts
March 14 2016 19:47 GMT
#275
On March 13 2016 03:55 [PkF] Wire wrote:
On March 13 2016 03:47 AdrianHealeyy wrote:
I think we need to differentiate two things here.

It's probably not that hard to come up with an AI that can have perfect micro. The trick is: can we design an AI with 'human' micro that can still consistently beat humans, based on insight, analysis, response, etc.?

That would be the ultimate challenge. I still think they can do it, but it'll take longer.

The problem is how you define human micro (and even human multitasking). A simple limit on APM wouldn't even be enough, I think, since the computer doesn't spam and, more importantly, sees all screens at once.


It would still make it reasonably fair. If you capped the computer at, say, 180-200 APM, that would be pretty on par with the number of useful actions taken by progamers. Yes, the computer could certainly be more optimal in its use of those given actions, but it would absolutely shut down almost all the obscene micro possibilities: individual muta micro, instant perfect marine splits vs lurkers, perfectly pulling back all units in battles at once, etc.

If the computer learns well enough, I think it will still have a significant control advantage over human players, but probably won't be able to execute game breaking micro.
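Mechanically, an APM cap like that is easy to enforce with a token bucket: the bot accrues action credits at the capped rate and can only act when a credit is available. A minimal sketch; the cap and burst values here are illustrative, not from any official ruleset:

```python
class ApmLimiter:
    """Token bucket that caps a bot at a fixed actions-per-minute rate."""

    def __init__(self, apm_cap=180, burst=4):
        self.rate = apm_cap / 60.0    # action credits earned per second
        self.burst = float(burst)     # allow short bursts (e.g. a quick split)
        self.tokens = float(burst)
        self.last = 0.0

    def try_act(self, now):
        """Return True if an action may be issued at time `now` (in seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Over one simulated minute of 60 decision ticks per second, such a bot gets roughly `apm_cap` actions plus the initial burst, no matter how often it asks.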
EffOrt and Soulkey Hwaiting!
Xyik
Profile Blog Joined November 2009
Canada728 Posts
March 14 2016 19:51 GMT
#276
Seems like most people commenting here aren't aware that many attempts have been made to make bots in BW; in fact, there were competitions to do so, and to my knowledge there has never been one to rival a pro-level player (of course, there was no big company like Google working on the bots).

Aside from perfect micro, which imo is not that interesting, I don't think an A.I. can ever consistently beat a top pro. Like Boxer and Flash have said, there are far more variables in SC than in Go.

1. It would be difficult to train/test the A.I. Unlike Go, where an A.I. can probably simulate 100 moves a second to learn optimal openings, an A.I. wouldn't be able to simulate SC games at the same speed. The best it could do is parse out replay-level data (build orders, clicks, etc., and I'm not sure how detailed that data is) and play real games (and obviously the average SC game length is around 10-20 minutes, allowing a max of ~200 games a day).

2. In general, there is less data on SC that the A.I. could learn from, and, more importantly, that the Google developer team working on the A.I. could learn from. Even if the A.I. uses unsupervised learning, the developers would still need to know what type of data to feed it and what type of data is important. They would probably need to recruit a couple of high-level SC players to get a sense of what's important in SC versus what is important in Go. For example, in Go the A.I. knows to optimize for net stones. What would it optimize for in SC?

3. There is a factor of luck. An A.I. that plays by probability can miss the tiniest things. An easy example that comes to mind is scouting: the scouting path taken can make a huge difference, and if some proxy cheese build is found 30 seconds late it could decide the entire game.

4. The number of maps, start positions and race match-ups makes the number of situations it has to learn explode exponentially, unless they decide to only train the A.I. on a select number of maps and match-ups; but then, once again, there is the problem of having even less data to learn from in the first place.

5. SC is a game of taking risks and making trade-offs with limited information. This makes it very difficult to consistently win games, which is why, in the state of the game today, it is extremely difficult for any player to have >70% win rates overall.
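On point 2's question of what the agent would optimize for, the AlphaGo recipe suggests the only signal it strictly needs is the game result; anything finer-grained is optional reward shaping. A hypothetical sketch, with all field names and weights invented for illustration:

```python
def step_reward(game_over, won, prev, curr,
                w_army=1e-4, w_econ=1e-3):
    """Terminal +1/-1 for win/loss, plus small optional shaping terms.

    `prev`/`curr` are dicts with hypothetical fields `army_value` and
    `workers`; the weights are illustrative, not tuned.
    """
    if game_over:
        return 1.0 if won else -1.0
    # Shaping: tiny reward for growing army value and worker count per step.
    return (w_army * (curr["army_value"] - prev["army_value"])
            + w_econ * (curr["workers"] - prev["workers"]))
```

Whether shaping terms like these help or just bias the agent toward human preconceptions is exactly the kind of question those recruited players would have to answer.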
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-14 19:55:45
March 14 2016 19:53 GMT
#277
On March 15 2016 04:51 Xyik wrote:
Seems like most people commenting here aren't aware that many attempts have been made to make bots in BW; in fact, there were competitions to do so, and to my knowledge there has never been one to rival a pro-level player (of course, there was no big company like Google working on the bots).



There have been some people messing around, just for fun, that's correct.

That doesn't mean it is easy, but there has been no serious attempt.


3. There is a factor of luck. An A.I. that plays by probability can miss the tiniest things. An easy example that comes to mind is scouting: the scouting path taken can make a huge difference, and if some proxy cheese build is found 30 seconds late it could decide the entire game.


The AI will on average be just as lucky as the human player.


I agree, though, that a very big skill difference is needed in StarCraft to get a significantly higher win-rate.
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-14 20:53:35
March 14 2016 20:53 GMT
#278
the AI competitions run thus far are pretty serious.

http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/report2013.shtml

i believe the university of alberta has constructed a series of foundation framework classes for coders to create new AI.

waterloo is still better though.
just cause.
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-14 22:08:59
March 14 2016 20:56 GMT
#279
They just do that to motivate unmotivatable students. And yes, they mess around.
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-14 21:26:17
March 14 2016 21:08 GMT
#280
sry guy, churchill is not messing around.

also, when you use a generalized term like "student" you're not saying anything. it could be a 7-year-old or a 26-year-old PhD student defending his or her thesis.

http://webdocs.cs.ualberta.ca/~cdavid/research.shtml

http://webdocs.cs.ualberta.ca/~cdavid/publications.shtml

http://webdocs.cs.ualberta.ca/~cdavid/pdf/combat13.pdf

based on what i'm reading, the first AI to consistently defeat pro players under human APM limits will probably have to employ at least 1 top-level SC player.

"As with many game search applications, state spaces are often too large to search completely, so heuristics must be employed to evaluate non-terminal states. In traditional games such as Checkers or Chess these heuristic functions often depend on expertly crafted formula based on intuitive notions such as game positioning, or material counts."

they'll need some top level players for that.
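In RTS combat the "material counts" idea translates directly; Churchill's combat papers use evaluations in roughly this spirit, weighting each unit's remaining hit points by its damage output. A rough sketch (the sqrt form follows the LTD2-style evaluation described in that literature; exact details may differ):

```python
import math

def eval_combat(my_units, enemy_units):
    """Heuristic value of a non-terminal combat state; positive favors us.

    Each unit contributes sqrt(hp) * damage-per-frame, so nearly-dead units
    count for less and high-DPS units count for more.
    """
    def side(units):
        return sum(math.sqrt(u["hp"]) * u["dpf"] for u in units)

    return side(my_units) - side(enemy_units)
```

The "expertly crafted" part is choosing those weightings; that's where the top-level players come in.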

BronzeKnee
Profile Joined March 2011
United States5217 Posts
Last Edited: 2016-03-14 23:10:41
March 14 2016 22:09 GMT
#281
On March 14 2016 02:33 Erugua wrote:
I don't even see how a human could beat a decent AI, since it can have 6000 apm and can pretty much manage 5 packs of armies in different locations in a way a human couldn't, while macroing perfectly. Yeah, it's probably very hard to make an AI that does that well, but if it exists one day it'll have a 100% win chance vs humans, no doubt.

For me the real question is "can a machine be powerful enough to realise that goal", and the answer is obviously yes on SCBW, and maybe not yet on SC2.


How would it have 6000 apm given the mechanical restraints of a keyboard, monitor and a mouse?

Perhaps a better match would be for the only restraint to be a monitor, allowing the human mind to control the game directly. It would be such an easy win for humans when we don't have to rely on our fat fingers and can just think what needs to happen.

So many of you underestimate the mind. No AI, at least in my lifetime, will ever be able to react as intelligently and as fast as the mind to changing environments. SC2 is not a turn-based game.
Whitewing
Profile Joined October 2010
United States7483 Posts
March 15 2016 00:13 GMT
#282
On March 15 2016 07:09 BronzeKnee wrote:
On March 14 2016 02:33 Erugua wrote:
I don't even see how a human could beat a decent AI, since it can have 6000 apm and can pretty much manage 5 packs of armies in different locations in a way a human couldn't, while macroing perfectly. Yeah, it's probably very hard to make an AI that does that well, but if it exists one day it'll have a 100% win chance vs humans, no doubt.

For me the real question is "can a machine be powerful enough to realise that goal", and the answer is obviously yes on SCBW, and maybe not yet on SC2.


How would it have 6000 apm given the mechanical restraints of a keyboard, monitor and a mouse?

Perhaps a better match would be for the only restraint to be a monitor, allowing the human mind to control the game directly. It would be such an easy win for humans when we don't have to rely on our fat fingers and can just think what needs to happen.

So many of you underestimate the mind. No AI, at least in my lifetime, will ever be able to react as intelligently and as fast as the mind to changing environments. SC2 is not a turn-based game.


You have it wrong. A computer can calculate much faster than any human can. The difference is that computers suck horribly at dealing with imperfect information, while a human can evaluate and respond more accurately and account for a wider array of potential options.

This is why computers don't lose at chess or go but they can't win at bridge.
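One standard trick for imperfect-information games (used in bridge programs, for instance) is determinization: sample many complete states consistent with what you can see, solve each as a perfect-information game, and take a vote. A toy sketch where the sampler and solver are placeholders supplied by the caller:

```python
import random
from collections import Counter

def choose_move(observation, sample_hidden_state, best_move_perfect_info,
                n_samples=100, rng=random):
    """Determinized search for imperfect-information games: sample complete
    states consistent with `observation`, solve each as a perfect-information
    game, and pick the move that wins the vote."""
    votes = Counter()
    for _ in range(n_samples):
        world = sample_hidden_state(observation, rng)   # fill in the unknowns
        votes[best_move_perfect_info(world)] += 1
    return votes.most_common(1)[0][0]
```

It works, but it inherits exactly the weakness described above: the quality of play is bounded by how well you can model the distribution of hidden states.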
"You know I fucking hate the way you play, right?" ~SC2John
BillGates
Profile Blog Joined April 2013
471 Posts
March 15 2016 00:35 GMT
#283
In a game like SC2 it probably can win, because the game is not so much about outthinking and outwitting the opponent as it is about hard-countering with precise numbers and using good micro, which an AI can do easily.

SC1, on the other hand, is a lot more tactical and a lot more skill-based; it requires more thinking, a higher level of strategy, etc...

So it won't be too hard for an AI to master SC2, but SC1 on the other hand is a different beast.

User was warned for this post
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-15 01:05:16
March 15 2016 00:37 GMT
#284
On March 15 2016 09:13 Whitewing wrote:
On March 15 2016 07:09 BronzeKnee wrote:
On March 14 2016 02:33 Erugua wrote:
I don't even see how a human could beat a decent AI, since it can have 6000 apm and can pretty much manage 5 packs of armies in different locations in a way a human couldn't, while macroing perfectly. Yeah, it's probably very hard to make an AI that does that well, but if it exists one day it'll have a 100% win chance vs humans, no doubt.

For me the real question is "can a machine be powerful enough to realise that goal", and the answer is obviously yes on SCBW, and maybe not yet on SC2.


How would it have 6000 apm given the mechanical restraints of a keyboard, monitor and a mouse?

Perhaps a better match would be for the only restraint to be a monitor, allowing the human mind to control the game directly. It would be such an easy win for humans when we don't have to rely on our fat fingers and can just think what needs to happen.

So many of you underestimate the mind. No AI, at least in my lifetime, will ever be able to react as intelligently and as fast as the mind to changing environments. SC2 is not a turn-based game.


You have it wrong. A computer can calculate much faster than any human can. The difference is that computers suck horribly at dealing with imperfect information, while a human can evaluate and respond more accurately and account for a wider array of potential options.

This is why computers don't lose at chess or go but they can't win at bridge.


more precisely, it is not the "CPU" or "computer" that is weak; rather, it is the mathematics of decision-making with partial information.

as this branch of mathematics grows and improves, computers get better at dealing with imperfect information.

http://www.wired.com/2015/05/humans-play-ai-texas-hold-em-now/

and as Churchill stated in my previous post, some heuristics are required in games like Chess and Starcraft: "state spaces are often too large to search completely, so heuristics must be employed to evaluate non-terminal states". the first really great SC AI bot will need some really smart SC players to craft some top-notch heuristic functions.

basically, if a team of AI experts hires Boxer as their only expert player and they end up building a great AI that totally kicks ass, it'll do so using a Boxer playstyle.

we are many years away from building a Starcraft AI-bot that requires no heuristic functions.

IOW: we are many years away from a team of AI specialists building a top-notch AI bot with zero input from expert, world-class SC players.
JeffKim
Profile Blog Joined November 2013
Korea (South)36 Posts
Last Edited: 2016-03-15 01:03:26
March 15 2016 01:02 GMT
#285
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it.
This is as ill-informed as anything I've seen on TL.

It's like saying "of course a human can move a mountain with their hands, they just need to push hard enough" without taking anything else into consideration.
TheZov
Profile Joined December 2010
Russian Federation34 Posts
March 15 2016 02:06 GMT
#286
On March 15 2016 04:10 ZAiNs wrote:
On March 15 2016 03:29 TheZov wrote:
Oh come on guys, what are we talking about here? The AI of tomorrow is not the Deep Blue archetype that (I might point out) DID beat a professional chess player at his own game. Dynamic thinking machines with unlimited APM and an unlimited ability to process, calculate, analyze and mimic every play in the history of the game (both pro and amateur, including every ladder game ever played), on a micro-second basis, selecting situational solutions in real time with perfect execution and unlimited potential for multi-pronged execution... Does that sound like something anyone else can do?

First of all, it's well agreed upon that if the AI exploits infinite APM it will win easily. What's in question is whether an AI will be able to beat a human any time soon when the AI is subject to physical limitations similar to those of a human.

Also, your understanding of AI right now isn't very accurate. The AI won't have 'analyzed every play in the history of the game including every pro, amateur and ladder match'; it couldn't for Go, because there were simply too many game states, and StarCraft has several orders of magnitude more game states.

Brute force worked for chess but not for Go, which needed a neural-network AI to 'solve' it. For AlphaGo, they fed the AI around 150,000 pro games (there's no way they will be able to feed a StarCraft AI anywhere close to that, and even if they could, I'm pretty sure StarCraft would require waaaay more games than Go to get it to the same place). AlphaGo then played against itself for months using a crazy amount of computing power (which amounts to a completely insane number of games). In StarCraft, every tiny change in any unit's positioning is a different game state (or even the position of the mouse cursor), so it'll constantly be playing game states that it's never played before.

You also say the AI can perform this super complex analysis 'on a micro-second basis', which isn't the case at all. AlphaGo does not make decisions instantaneously: AlphaGo and Lee Sedol are both given 2 hours of thinking time per game in their ongoing match, and Go is a game of around 200 discrete moves, while if you count every action made in StarCraft there are probably tens of thousands of discrete moves from both sides. Making the AI able to make decisions in 1/60th of a second is going to be a huge challenge, and there is no way it can perform the same level of analysis in 1/60th of a second as it could with even 1 second.


Oh, I thought we were talking about REAL AI, singularity-type shit. Of course it won't happen soon, but it will happen.
Economy is priority #1, #2, and #3 through 7.
emc
Profile Joined September 2010
United States3088 Posts
Last Edited: 2016-03-15 04:25:09
March 15 2016 02:20 GMT
#287
I will never, ever doubt the will of the technological world to develop an AI that beats the best players in the world. Will it happen tomorrow? A year from now? It doesn't matter, because technology gets exponentially better; eventually an AI will be smarter and more adaptable than a player, and with no emotions.

it could take forever, and I don't know if we'll ever see a true thinking AI in our lifetimes, but never say never, boys. The apocalypse is coming.

http://stream1.gifsoup.com/view7/3949890/sarah-connor-o.gif
Nuclease
Profile Joined August 2011
United States1049 Posts
March 15 2016 03:04 GMT
#288
Every game that has ever been "solved" (in the AI-scientific sense of the word) by a computer has had a player who claimed a computer wouldn't be able to beat humans in competition. The fact is, as much as I respect Flash, he doesn't really know what he's talking about, so I'm not too impressed.
Zealots, not zee-lots. | Never forget, KTViolet, Go)Space. | You will never be as good as By.Flash, and your drops will never be as sick as MMA.
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-15 03:28:34
March 15 2016 03:25 GMT
#289
i am really interested in this topic as well, but i would side with the "humans would win" opinion.

Go is a game with "full information", while sc2 is not, and current AI can't handle that situation well.
if you made it a bo5 on random maps (both players not knowing the map beforehand), i doubt the current state of the art could even beat a mid-level master player.
“Never assume malice when stupidity will suffice.”
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
Last Edited: 2016-03-15 05:25:26
March 15 2016 04:51 GMT
#290
i stumbled across this and it's really damn frickin' cool.

http://www.gdcvault.com/play/1021848/Building-a-Better-Centaur-AI

the Infinite Axis Utility System is really frickin' cool, and the same guy talked about that subsystem in great depth and detail at GDC 2013.

notice that at its deepest foundation the IAUS requires carefully defined heuristic functions; the response curves are all human-created.
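for a sense of scale, the core of a utility system like that is tiny: each candidate action is scored by pushing a few game inputs ("axes") through hand-authored response curves and multiplying the results, so any single near-zero axis vetoes the action. a minimal sketch (the curve shapes, axis values and the "retreat" example are all made up):

```python
def linear(x, slope=1.0, intercept=0.0):
    """A hand-authored response curve, clamped to [0, 1]."""
    return max(0.0, min(1.0, slope * x + intercept))

def score_action(axes):
    """Multiply the response-curve output of every (curve, input) axis;
    a score near zero on any axis vetoes the action, by design."""
    score = 1.0
    for curve, value in axes:
        score *= curve(value)
    return score

# e.g. a "retreat" action considering low health and enemy proximity
retreat = score_action([
    (lambda hp: linear(hp, slope=-1.0, intercept=1.0), 0.2),  # hurt -> 0.8
    (lambda d: linear(d, slope=-0.1, intercept=1.0), 3.0),    # close -> 0.7
])  # 0.8 * 0.7 = 0.56
```

the intelligence lives entirely in those human-authored curves, which is exactly the point about heuristics above.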
papaz
Profile Joined December 2009
Sweden4149 Posts
March 15 2016 07:12 GMT
#291
On March 15 2016 07:09 BronzeKnee wrote:
On March 14 2016 02:33 Erugua wrote:
I don't even see how a human could beat a decent AI, since it can have 6000 apm and can pretty much manage 5 packs of armies in different locations in a way a human couldn't, while macroing perfectly. Yeah, it's probably very hard to make an AI that does that well, but if it exists one day it'll have a 100% win chance vs humans, no doubt.

For me the real question is "can a machine be powerful enough to realise that goal", and the answer is obviously yes on SCBW, and maybe not yet on SC2.


How would it have 6000 apm given the mechanical restraints of a keyboard, monitor and a mouse?

Perhaps a better match would be for the only restraint to be a monitor, allowing the human mind to control the game directly. It would be such an easy win for humans when we don't have to rely on our fat fingers and can just think what needs to happen.

So many of you underestimate the mind. No AI, at least in my lifetime, will ever be able to react as intelligently and as fast as the mind to changing environments. SC2 is not a turn-based game.


Where "at least in my lifetime" is the only part of your post that might be accurate.

Otherwise, you haven't explained why an AI couldn't beat a human, unless there is some magical element in humans that can't be replicated by science.
papaz
Profile Joined December 2009
Sweden4149 Posts
March 15 2016 07:16 GMT
#292
On March 14 2016 23:21 BisuDagger wrote:
Until the AI plays a person, it won't learn the micro tricks that players know: stacked lurkers, glitching through mineral lines, observer on top of a missile turret. Hell, even allying your opponent so their spider mines don't work could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats a barracks over a hatchery, the hatchery is un-selectable. If a barracks is on top of a cluster of tanks sitting on high ground, the tanks cannot be targeted by direct mouse clicks. Does the computer have to abide by the same rules in that sense?


Again, not true.

How have humans learnt the micro tricks? By some divine power, or by testing?

What makes you think an AI can't be self-learning? Even if the technology isn't there yet for self-learning in SC2, there are other games (like Go) where the AI has improved by playing millions and millions of games against itself, finding the best moves.

I find it funny that people want to see themselves (humans) as something completely different from AI, as if we possess some magical or divine power that can't be replicated or exist anywhere else than in our "soul".

Seriously, is it so hard to understand that humans are nothing but advanced biological machines, and that there is nothing stopping us from creating AI that will one day surpass us?
Liquid`Bunny
Profile Joined May 2011
Denmark145 Posts
March 15 2016 07:56 GMT
#293
On March 14 2016 21:01 BeyondCtrL wrote:
On March 14 2016 20:51 Liquid`Bunny wrote:
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also, it's kind of funny how everyone is viewing the AI winning as humans "losing". I think it would be a great achievement for humanity to make an AI that can learn such a complex task.

As long as they create some laws restricting AI from taking over the world~~

When we create a program that can make a better program on its own, that's when the trouble starts.


AlphaGo does that already.

AlphaGo doesn't change the way it's programmed; it will always be programmed in a certain way. What it can change is parameters within functions to achieve a better result.
Team Liquid
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
March 15 2016 07:59 GMT
#294
On March 15 2016 12:25 necaremus wrote:
i am really interested in this topic as well, but i would side with the "humans would win" opinion.

Go is a game with "full information", while sc2 is not, and current AI can't handle that situation well.
if you made it a bo5 on random maps (both players not knowing the map beforehand), i doubt the current state of the art could even beat a mid-level master player.

That's not necessary. This isn't solved by the traditional method (from A you can go to A1 - A56464984651894) but by simulating the learning process of the brain. So the biggest obstacle is transforming all the information into a form the computer understands. Go is much easier to translate (you have only X-Y coordinates; no ramps, unreachable terrain, blank spaces, bases, etc.), but the learning itself works the same way as our brain's.
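Concretely, "transforming the information so the computer understands it" usually means encoding the state as stacked feature planes, which is exactly how AlphaGo saw the Go board. A toy sketch for a grid-based map, where the channel meanings are purely illustrative:

```python
import numpy as np

def encode_state(grid):
    """Encode a small map as one-hot feature planes:
    channel 0 = own units, 1 = enemy units, 2 = unwalkable terrain.
    `grid` holds codes 0 (empty), 1 (own), 2 (enemy), 3 (terrain)."""
    h, w = grid.shape
    planes = np.zeros((3, h, w), dtype=np.float32)
    for c, code in enumerate((1, 2, 3)):
        planes[c] = (grid == code)   # boolean mask cast to 0.0/1.0
    return planes
```

The SC2 version of this needs far more channels (unit types, health, fog of war, etc.), which is where the translation gets hard.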

In SC2 we have multiple high-end replay packs, so the PC can learn from the best (not sure about BW).

If they do the job properly and the PC can play SC without any problems... then the human player will have a really tough enemy, because then it's all about time and the learning process. And the PC can train 24/7.
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
KameZerg
Profile Blog Joined May 2007
Sweden1761 Posts
March 15 2016 08:24 GMT
#295
So what map are they gonna program it for? FS?
asdasdasdasdasd123123123
BisuDagger
Profile Blog Joined October 2009
Bisutopia19219 Posts
March 15 2016 08:27 GMT
#296
On March 15 2016 16:16 papaz wrote:
On March 14 2016 23:21 BisuDagger wrote:
Until the AI plays a person, it won't learn the micro tricks that players know: stacked lurkers, glitching through mineral lines, observer on top of a missile turret. Hell, even allying your opponent so their spider mines don't work could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats a barracks over a hatchery, the hatchery is un-selectable. If a barracks is on top of a cluster of tanks sitting on high ground, the tanks cannot be targeted by direct mouse clicks. Does the computer have to abide by the same rules in that sense?


Again, not true.

How have humans learnt the micro tricks? By some divine power, or by testing?

What makes you think an AI can't be self-learning? Even if the technology isn't there yet for self-learning in SC2, there are other games (like Go) where the AI has improved by playing millions and millions of games against itself, finding the best moves.

I find it funny that people want to see themselves (humans) as something completely different from AI, as if we possess some magical or divine power that can't be replicated or exist anywhere else than in our "soul".

Seriously, is it so hard to understand that humans are nothing but advanced biological machines, and that there is nothing stopping us from creating AI that will one day surpass us?

I have a background in AI and have spent my entire career in simulation, so please don't attempt to lecture or patronize me lol. My post was about game rules. Is the AI given the capability to access the alliance screen so it can ally the opponent for the spider mine trick? My guess is no. It may, however, stumble onto hold lurker by seeing that the H button is an extra contextual button available when a lurker is paired with another unit. On the other hand, does the computer have the same limitations as a user in terms of mouse clicks to select buildings? This is a big unknown to me. It may perform a raycast and stop at the first object hit, or sort through all objects hit by the raycast and, if so, be able to select hatcheries covered by Terran buildings.
Moderator | Former Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
Caihead
Profile Blog Joined July 2011
Canada8550 Posts
March 15 2016 09:07 GMT
#297
On March 15 2016 17:27 BisuDagger wrote:
On March 15 2016 16:16 papaz wrote:
On March 14 2016 23:21 BisuDagger wrote:
Until the AI plays a person, it won't learn the micro tricks that players know: stacked lurkers, glitching through mineral lines, observer on top of a missile turret. Hell, even allying your opponent so their spider mines don't work could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats a barracks over a hatchery, the hatchery is un-selectable. If a barracks is on top of a cluster of tanks sitting on high ground, the tanks cannot be targeted by direct mouse clicks. Does the computer have to abide by the same rules in that sense?


Again, not true.

How have humans learnt the micro tricks? By some divine power, or by testing?

What makes you think an AI can't be self-learning? Even if the technology isn't there yet for self-learning in SC2, there are other games (like Go) where the AI has improved by playing millions and millions of games against itself, finding the best moves.

I find it funny that people want to see themselves (humans) as something completely different from AI, as if we possess some magical or divine power that can't be replicated or exist anywhere else than in our "soul".

Seriously, is it so hard to understand that humans are nothing but advanced biological machines, and that there is nothing stopping us from creating AI that will one day surpass us?

I have a background in AI and have spent my entire career in simulation, so please don't attempt to lecture or patronize me lol. My post was about game rules. Is the AI given the capability to access the alliance screen so it can ally the opponent for the spider mine trick? My guess is no. It may, however, stumble onto hold lurker by seeing that the H button is an extra contextual button available when a lurker is paired with another unit. On the other hand, does the computer have the same limitations as a user in terms of mouse clicks to select buildings? This is a big unknown to me. It may perform a raycast and stop at the first object hit, or sort through all objects hit by the raycast and, if so, be able to select hatcheries covered by Terran buildings.


Why would the AI have to obey the same input rules as a human? To make it "fair"? AlphaGo doesn't have an entire set of mechanical limbs and locomotion just so it can drop game pieces on a board. Humans are taking billions of years of evolution for granted; if anything, the AI is at a great disadvantage, since it's playing games that humans designed for humans to play. You wouldn't make a human play BW by manually inputting, onto a circuit board with a power supply and probes, the electrical signals that represent in-game triggers, would you?

As to whether AIs will eventually beat humans at any specific task, in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources devoted to the task. Everything in the universe functions in accordance with some set of fundamental laws and rules, regardless of whether they are comprehensible to us; if you devote enough energy, time, and ordered structure to a task, it will be completed. The real question is whether doing so achieves some conscious goal, some wealth or good. If we devoted all of the planet's most brilliant scientists and engineers, all of the ex-BW pro players and coaches for strategic input, and all of humanity's manufacturing facilities and raw resources to building a machine just to play StarCraft, powered by all the electricity and fuel sources available to us, then beating one person would be a completely trivial task. Unless you believe there is something fundamentally non-deterministic in a human being that violates all computational mathematics and science.
"If you're not living in the US or are a US Citizen, please do not tell us how to vote or how you want our country to be governed." - Serpest, American Hero
BisuDagger
Profile Blog Joined October 2009
Bisutopia19219 Posts
March 15 2016 09:34 GMT
#298
On March 15 2016 18:07 Caihead wrote:
On March 15 2016 17:27 BisuDagger wrote:
On March 15 2016 16:16 papaz wrote:
On March 14 2016 23:21 BisuDagger wrote:
Until the AI plays a person, it won't learn the micro tricks that players contain: Stacked lurkers, glitching through mineral lines, observer on top of missile turret. Hell, even ally your opponent so their spider mines don't work could be used.

On the other side, what are the selection limitations of the computer? If a Terran floats barracks over a hatchery then the hatchery is un-selectable. If a barracks is on top of a cluster of tanks that sit on the high ground, then the tanks cannot be target by direct mouse clicks. Does the computer have to abide by the same rules in that sense?


Again not true.

How have humans learnt the micro tricks? By some divine power, or by testing?

What makes you think AI can't be self learning. Even if the technology isn't there yet for self learning in SC2 there are other games (like GO) where the AI has improved by playing millions and millions of games with itself finding the best moves.

I find it funny that people want to see themselves (humans) as something completely different from AI. Like we possess some magical or divine power that can't be replicated or exist anywhere else than in our "soul".

Seriously, is it so hard to understand that humans are nothing but advanced biological machines, and that there is nothing stopping us from creating AI that one day surpasses us?

I have a background in AI and spent my entire career in simulation, so please don't attempt to lecture or patronize me lol. My post was about game rules. Is the AI given the capability to access the ally menu so it can do the spider mine trick? My guess is no. It may, however, stumble onto hold lurker by seeing that the H button is an extra contextual button available when a lurker is grouped with another unit. On the other hand, does the computer have the same limitations as a user in terms of mouse clicks to select buildings? This is a big unknown to me. It may perform a raycast and stop at the first object hit, or it may sort through all objects hit by the raycast and, if so, be able to select hatcheries covered by Terran buildings.


Why would the AI have to obey the same input rules as a human? To make it "fair"? AlphaGo doesn't have a set of mechanical limbs and locomotion just so it can drop game pieces on a board. Humans take billions of years of evolution for granted; if anything, the AI is at a great disadvantage, since it's playing games that humans designed for humans to play. You wouldn't make a human play BW by manually inputting, onto a circuit board with a power supply and probes, the electrical signals that represent in-game triggers, would you?

As to whether AIs will eventually beat humans at any specific task: in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources devoted to the task. Everything in the universe functions in accordance with some set of fundamental laws, regardless of whether they are comprehensible to us; devote enough energy, time, and ordered structure to a task and it will be completed. The real question is whether doing that achieves some conscious goal, for some wealth or good. If we devoted all of the planet's most brilliant scientists and engineers, all of the ex-BW pro players and coaches for strategic input, and all of humanity's manufacturing facilities and raw resources to building a machine just to play StarCraft, powered by all the electricity and fuel sources available to us, then it would be a completely trivial task to beat one person. Unless you believe there is something fundamentally non-deterministic in a human being that violates all computational mathematics and science.


This has nothing to do with mechanical input. I am talking purely at the software level; a raycast is a software term. I guess people who don't program shouldn't be replying to me. Anyone who has a full understanding of the SC1 API can feel free to lecture me on its limitations compared to a human player.
ModeratorFormer Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 10:31:07
March 15 2016 10:24 GMT
#299
They can do it in 4 different ways:
1. Computer can read SC2 process memory and send commands directly into the game process.
2. Computer can read SC2 process memory but can only send mouse and keyboard commands.
3. Computer has no access to SC2 process memory. It needs to observe the picture on the monitor to gather information and can only send mouse and keyboard commands.
4. Computer needs to observe the picture on the monitor to gather information and can only physically control mouse and keyboard.
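As a sketch, those four setups amount to four different interface contracts between the AI and the game. Something like this (the names are invented for illustration, not from any real API):

```python
from enum import Enum

class AccessMode(Enum):
    MEMORY_DIRECT = 1   # read game memory, inject commands into the process
    MEMORY_INPUT = 2    # read game memory, act only via mouse/keyboard events
    SCREEN_INPUT = 3    # pixels in, mouse/keyboard events out
    SCREEN_ROBOT = 4    # pixels in, a physical mouse and keyboard out

def needs_computer_vision(mode: AccessMode) -> bool:
    """Cases 3 and 4 bolt a screen-reading problem onto the game-playing one."""
    return mode in (AccessMode.SCREEN_INPUT, AccessMode.SCREEN_ROBOT)

def skips_input_layer(mode: AccessMode) -> bool:
    """Only case 1 bypasses the mouse/keyboard abstraction entirely."""
    return mode is AccessMode.MEMORY_DIRECT
```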


The AI will win in cases №1 and №2, and won't even be made to play under conditions №3 and №4. Here's why:

Starcraft wasn't designed to be played perfectly, because perfect play is impossible for humans; the whole game design and balance only work within the limits of human motor skills and reaction time.
Can you calculate which of your roaches a siege tank is about to shoot by looking at its weapon position? Absolutely not. And even if you could, there is no way you could properly react to that information by microing your roaches to take the minimum damage possible.

We all saw those Automaton 2000 micro videos, which show that 5 siege tanks with medivacs controlled by AI can destroy a 200-supply zerg ground army without taking any damage. And Automaton 2000 was created by one or a few enthusiasts; it is nothing compared to what a team from Google can create.

This game is not turn based, and that will make it harder for the AI. Correct. But it is also so much harder for humans. Imagine if in Go you could place stones as fast as you wanted: the board would be filled by the computer before Lee could touch his first stone. The turn-based format doesn't give any advantage to computers; it is much more favorable to humans, and still the computer won 4-1 in the most difficult turn-based game against the best possible human.

In the first two cases, where the AI can read SC2 process memory (limited by fog of war, of course) to gather information, it will beat any pro gamer by simply outmicroing him in a way the game wasn't designed for. Micro in case №2 will be 1000 times slower than in №1, but it will still be 100000 times faster than any human, which is enough.

You could say that micro alone just isn't enough, and that's why current Starcraft AI can't beat humans. And you are totally right. But current Starcraft AI is nothing compared to what Google has created. It is like comparing a jellyfish to Stephen Hawking at who can do math better. Google's AI will be 1000 times better than any human at both micro and macro.

Because this beast can learn.
And it can do it really fast. It can play 10 million games per day against itself to understand how the game works and which move, tactic, or strategy is better in any given position. It can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound hard to believe, but that is how it works. It doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And it does it better.
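A toy version of that self-play loop, to show the idea: the same policy plays both sides of an invented two-move "game", and every move's estimated win rate is nudged toward the observed outcomes. (The game, the moves, and the numbers are all made up for illustration; this is nothing like AlphaGo's actual training.)

```python
import random

rng = random.Random(42)

def pick(policy):
    """Choose the move currently believed best, with a little exploration."""
    if rng.random() < 0.1:
        return rng.choice(sorted(policy))
    return max(policy, key=policy.get)

def play_one_game(policy):
    """Both sides are the same policy. 'rush' beats 'greed' 90% of the
    time; mirror matchups are a coin flip."""
    moves = [(p, pick(policy)) for p in (0, 1)]
    m0, m1 = moves[0][1], moves[1][1]
    if m0 == m1:
        winner = rng.choice((0, 1))
    else:
        strong = 0 if m0 == "rush" else 1
        winner = strong if rng.random() < 0.9 else 1 - strong
    return moves, winner

def self_play(policy, episodes=4000, lr=0.02):
    """Nudge each move's estimated win rate toward observed outcomes."""
    for _ in range(episodes):
        history, winner = play_one_game(policy)
        for player, move in history:
            target = 1.0 if player == winner else 0.0
            policy[move] += lr * (target - policy[move])
    return policy

policy = self_play({"rush": 0.5, "greed": 0.5})
```

After a few thousand games against itself, the policy's estimates separate the stronger move from the weaker one with no human input at all, which is the point being made here.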

Cases 3 and 4 aren't about this kind of AI. They are about AI, but a totally different kind: image analysis and recognition, which is not what this particular program is meant to do.
Be polite, be professional, but have a plan to kill everybody you meet.
xN.07)MaK
Profile Joined January 2006
Spain1159 Posts
March 15 2016 10:50 GMT
#300
On March 15 2016 19:24 sh1RoKen wrote:
They can do it in 4 different ways:
1. Computer can read SC2 process memory and send commands directly into the game process.
2. Computer can read SC2 process memory but can only send mouse and keyboard commands.
3. Computer has no access to SC2 process memory. It needs to observe the picture on the monitor to gather information and can only send mouse and keyboard commands.
4. Computer needs to observe the picture on the monitor to gather information and can only physically control mouse and keyboard.


The AI will win in cases №1 and №2, and won't even be made to play under conditions №3 and №4. Here's why:

Starcraft wasn't designed to be played perfectly, because perfect play is impossible for humans; the whole game design and balance only work within the limits of human motor skills and reaction time.
Can you calculate which of your roaches a siege tank is about to shoot by looking at its weapon position? Absolutely not. And even if you could, there is no way you could properly react to that information by microing your roaches to take the minimum damage possible.

We all saw those Automaton 2000 micro videos, which show that 5 siege tanks with medivacs controlled by AI can destroy a 200-supply zerg ground army without taking any damage. And Automaton 2000 was created by one or a few enthusiasts; it is nothing compared to what a team from Google can create.

This game is not turn based, and that will make it harder for the AI. Correct. But it is also so much harder for humans. Imagine if in Go you could place stones as fast as you wanted: the board would be filled by the computer before Lee could touch his first stone. The turn-based format doesn't give any advantage to computers; it is much more favorable to humans, and still the computer won 4-1 in the most difficult turn-based game against the best possible human.

In the first two cases, where the AI can read SC2 process memory (limited by fog of war, of course) to gather information, it will beat any pro gamer by simply outmicroing him in a way the game wasn't designed for. Micro in case №2 will be 1000 times slower than in №1, but it will still be 100000 times faster than any human, which is enough.

You could say that micro alone just isn't enough, and that's why current Starcraft AI can't beat humans. And you are totally right. But current Starcraft AI is nothing compared to what Google has created. It is like comparing a jellyfish to Stephen Hawking at who can do math better. Google's AI will be 1000 times better than any human at both micro and macro.

Because this beast can learn.
And it can do it really fast. It can play 10 million games per day against itself to understand how the game works and which move, tactic, or strategy is better in any given position. It can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound hard to believe, but that is how it works. It doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And it does it better.

Cases 3 and 4 aren't about this kind of AI. They are about AI, but a totally different kind: image analysis and recognition, which is not what this particular program is meant to do.


Great post.

In any case, I'm not sure what Google's goals are concerning AI, but as an external observer, cases 3 and 4 are more interesting to me. If everything else is equal (APM, visual recognition, physical constraints, etc.)... can the AI beat the best decision makers just by learning from playing itself?

Btw, 10 million games per day vs himself seems a lot to me :D
El micro es el último recurso que les queda a los que no producen lo suficiente
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 15 2016 12:00 GMT
#301
On March 15 2016 16:56 Liquid`Bunny wrote:
On March 14 2016 21:01 BeyondCtrL wrote:
On March 14 2016 20:51 Liquid`Bunny wrote:
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of it being apm capped or not, as long as they put enough effort into making it. However it would be boring if players didn't take on the challenge of beating it, i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing" I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~

When we create a program that can make a better program on its own, that's when the trouble starts.


AlphaGo does that already.

AlphaGo doesn't change the way it's programmed; it will always be programmed in a certain way. What it can change is the parameters within its functions, to achieve a better result.


Just like humans. You only have about 3 billion variables with which to program a human, and the great majority of those are about synthesizing proteins and throwing little molecules around.

Which reminds me of a story about Claude Shannon, one of the founders of computer science:
Reporter: Can computers think?
Shannon: Can you think?
Reporter: Yes.
Shannon: So, yes.
What qxc said.
JimmyJRaynor
Profile Blog Joined April 2010
Canada16653 Posts
March 15 2016 12:24 GMT
#302
On March 15 2016 19:24 sh1RoKen wrote:
Because this beast can learn.
And it can do it really fast. It can play 10 million games per day against itself to understand how the game works and which move, tactic, or strategy is better in any given position. It can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound hard to believe, but that is how it works. It doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And it does it better.

The stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. It might be good for creating its own heuristic functions.

Do they plan on making this work without heuristics?
Or are you claiming the Google AI will create its own heuristics as it studies replays and plays against itself?
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-15 12:45:37
March 15 2016 12:28 GMT
#303
On March 15 2016 16:59 deacon.frost wrote:
On March 15 2016 12:25 necaremus wrote:
I am really interested in this topic as well, but I would side with the "humans would win" opinion.

Go is a game with "full information", while SC2 is not. The AI can't handle that situation.
If you make it a Bo5 on random maps (neither player knows the map beforehand), I doubt the current state of the AI could beat even a mid-level Masters player.

That's not necessary. This isn't solved by the traditional method (from A you can go to A1 through A56464984651894) but by simulating the learning process of the brain. So the biggest obstacle is transforming all the information in the game into a form the computer understands. Go is much easier to translate (you have only X-Y coordinates: no ramps, no unreachable terrain, no blank spaces, bases, etc.). But the learning itself works the same way as our brain.

In SC2 we have multiple high-end replay packs, so the PC can learn from the best (not sure about BW).

If they do the job properly and the PC can play SC without any problems... then the human will have a really tough opponent. Because then it's all about time and the learning process. And the PC can train 24/7.


My point about "full information": in SC you have fog of war; you do not see your enemy. That is a big uncertainty factor for the AI: do I move out with my army and risk being counterattacked? For a human, these factors of uncertainty are normal. To be honest, it's the only way we interact with the outside world: under big uncertainty. We don't know it any other way, except in special cases like the game of Go.

I could imagine the AI having a big problem: if the AI's scout doesn't find the enemy army (because our human didn't build one, maybe?), it would try to scout the whole map before moving out, because it doesn't want to risk a counterattack. A human would just a-move and win.

On March 15 2016 18:07 Caihead wrote:
As to whether AIs will eventually beat humans at any specific task: in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources devoted to the task.

this.

When I heard about how AlphaGo works, I thought of this: numberphile, knots/DNA
more precisely (2nd video on this topic on numberphile, ~1:30)
Type II Topoisomerase

My thought was "holy shit, we can already 'build' the logical infrastructure of a component of bacteria".
The only thing is: we need about 10^100000 times more energy... (a number chosen arbitrarily, but the factor really is something huge)

So... before we can surpass the human, there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (I'm not saying it's impossible ;p)
“Never assume malice when stupidity will suffice.”
NukeD
Profile Joined October 2010
Croatia1612 Posts
March 15 2016 12:36 GMT
#304
Machines are our friends, not our enemies!
sorry for dem one liners
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
March 15 2016 13:25 GMT
#305
On March 15 2016 21:28 necaremus wrote:
On March 15 2016 16:59 deacon.frost wrote:
On March 15 2016 12:25 necaremus wrote:
I am really interested in this topic as well, but I would side with the "humans would win" opinion.

Go is a game with "full information", while SC2 is not. The AI can't handle that situation.
If you make it a Bo5 on random maps (neither player knows the map beforehand), I doubt the current state of the AI could beat even a mid-level Masters player.

That's not necessary. This isn't solved by the traditional method (from A you can go to A1 through A56464984651894) but by simulating the learning process of the brain. So the biggest obstacle is transforming all the information in the game into a form the computer understands. Go is much easier to translate (you have only X-Y coordinates: no ramps, no unreachable terrain, no blank spaces, bases, etc.). But the learning itself works the same way as our brain.

In SC2 we have multiple high-end replay packs, so the PC can learn from the best (not sure about BW).

If they do the job properly and the PC can play SC without any problems... then the human will have a really tough opponent. Because then it's all about time and the learning process. And the PC can train 24/7.


My point about "full information": in SC you have fog of war; you do not see your enemy. That is a big uncertainty factor for the AI: do I move out with my army and risk being counterattacked? For a human, these factors of uncertainty are normal. To be honest, it's the only way we interact with the outside world: under big uncertainty. We don't know it any other way, except in special cases like the game of Go.

I could imagine the AI having a big problem: if the AI's scout doesn't find the enemy army (because our human didn't build one, maybe?), it would try to scout the whole map before moving out, because it doesn't want to risk a counterattack. A human would just a-move and win.

On March 15 2016 18:07 Caihead wrote:
As to whether AIs will eventually beat humans at any specific task: in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources devoted to the task.

this.

when i heard about how AlphaGo works, i thought of this: numberphile, knots/DNA
more precise (2nd video to this topic on numberphile ~ 1:30)
Type II Topoisomerase

my thought was "holy shit, we can already 'build' the logical infrastructure, of a component of bacteria"
only thing is: we need about 10^100000 more energy... (arbitrary chosen, but something rly huge as factor)

so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)

It doesn't.

It will probably analyze all the top games, then play "some" and analyze those. (We're skipping plenty of technical detail here.) In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not to. And in the end that could be the wrong move, because it doesn't know everything. That's the beauty of training a neural net: if you prepare proper learning scenarios, its decision making is similar to a human being's. The better the learning material, the better the results.

The "AI" doesn't need ALL the information if the learning models can work with that. But to build such an "AI" you need time, money, proper learning materials, and hardware. That's why game developers use AI that cheats and then dumb it down: it's easier to write. (Or maybe they used a dumbed-down net, who cares.)

Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be: will there be a limitation on its controlling mechanism? And the other questions asked by the Dagger of Bisu.

It is exactly the same as the difference between Koreans and foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and those solutions can sometimes be abused (you know what the expected result is).
It seems to me that either we are talking about different things, or you don't know how this works.
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
heqat
Profile Joined October 2011
Switzerland96 Posts
March 15 2016 13:39 GMT
#306
On March 15 2016 21:28 necaremus wrote:
So... before we can surpass the human, there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (I'm not saying it's impossible ;p)


Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 13:53:11
March 15 2016 13:41 GMT
#307
On March 15 2016 21:24 JimmyJRaynor wrote:
On March 15 2016 19:24 sh1RoKen wrote:
Because this beast can learn.
And it can do it really fast. It can play 10 million games per day against itself to understand how the game works and which move, tactic, or strategy is better in any given position. It can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound hard to believe, but that is how it works. It doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And it does it better.

The stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. It might be good for creating its own heuristic functions.

Do they plan on making this work without heuristics?
Or are you claiming the Google AI will create its own heuristics as it studies replays and plays against itself?


How they did it with Go:

1. They showed the AI 30 million Go moves from the internet, each marked "good move" (played by the winner of that game) or "bad move" (played by the loser).
At this point AlphaGo had learned the rules of Go entirely by itself, without any manual algorithms or instructions from a human. It started to recognize and predict the move a human trying to win would make in common positions. It acquired a human "intuition".

2. Then it played against itself over and over again to build a database of moves, evaluating each move's winning percentage by pure calculation. And it trained its intuition even more.

3. Then it combined calculation no human can ever match with an intuition humans don't really understand; it's hard to even compare a human's level to what AlphaGo achieved.

They will probably do the same thing with Starcraft. After step 1 it will play like a really good human. After step 2 it will defeat anyone, with no contest at the "strategy" level. And that's without even mentioning mechanical reaction, speed, and accuracy, which were never considered in Starcraft's balance design.
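The mix of "calculation" and "intuition" in step 3 can be caricatured in a few lines: a learned prior over moves ("intuition") steers where simulated playouts ("calculation") are spent, and the final pick balances the two. This is only a simplified cousin of AlphaGo's real selection rule, with move names and numbers invented for the example:

```python
import math

def choose_move(moves, prior, win_rate, visits, c=1.0):
    """Score = playout win rate ('calculation') plus a prior-weighted
    exploration bonus ('intuition'); pick the highest-scoring move."""
    total_visits = sum(visits.get(m, 0) for m in moves) + 1
    def score(m):
        exploit = win_rate.get(m, 0.0)
        explore = c * prior[m] * math.sqrt(total_visits) / (1 + visits.get(m, 0))
        return exploit + explore
    return max(moves, key=score)

moves = ["shoulder_hit", "edge_crawl"]
prior = {"shoulder_hit": 0.9, "edge_crawl": 0.1}  # "intuition" favors the first
```

Before any playouts, the prior dominates; once thousands of simulated games say otherwise, the statistics win out.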
Be polite, be professional, but have a plan to kill everybody you meet.
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 14:25 GMT
#308
On March 15 2016 22:25 deacon.frost wrote:
It doesn't.

It will probably analyze all the top games, then play "some" and analyze those. (We're skipping plenty of technical detail here.) In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not to. And in the end that could be the wrong move, because it doesn't know everything. That's the beauty of training a neural net: if you prepare proper learning scenarios, its decision making is similar to a human being's. The better the learning material, the better the results.

The "AI" doesn't need ALL the information if the learning models can work with that. But to build such an "AI" you need time, money, proper learning materials, and hardware. That's why game developers use AI that cheats and then dumb it down: it's easier to write. (Or maybe they used a dumbed-down net, who cares.)

Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be: will there be a limitation on its controlling mechanism? And the other questions asked by the Dagger of Bisu.

It is exactly the same as the difference between Koreans and foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and those solutions can sometimes be abused (you know what the expected result is).
It seems to me that either we are talking about different things, or you don't know how this works.


I don't know completely how it works, but I have some information and a lot of uncertainty, which I use to evaluate the situation.

I know that the AI doesn't need all the information, but I wanted to point out that we face a whole new range of problems in Starcraft compared to Go.

Let's suppose the AI has the "perfect" strategy. This would mean the AI always plays the exact same way, so the human wouldn't have this "uncertainty": he knows how the AI is going to play, and he could craft a strategy that isn't perfect but beats the AI's strategy (a doom drop, for example? I don't know).

You could try to bypass this by giving the AI a range of strategies to choose from. But if you hardcode those into the AI, I don't see the point of even trying to build an AI for Starcraft: the hardcoded strategies would be human-created, making it a "machine+human vs human" match.
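The usual escape from that predictability problem is a mixed strategy: instead of one hardcoded "perfect" build, the AI samples from a distribution it has learned, so an opponent cannot pre-commit to a counter. A minimal sketch (the build names and weights here are invented, not learned by anything):

```python
import random

rng = random.Random(7)

def sample_build(weights):
    """Draw a build order according to its learned weight (a mixed strategy)."""
    builds = sorted(weights)  # stable order so seeded draws are reproducible
    return rng.choices(builds, weights=[weights[b] for b in builds], k=1)[0]

learned = {"macro": 0.6, "doom_drop": 0.3, "cheese": 0.1}
```

Whether the distribution is hand-written or learned through self-play is exactly the difference being argued about: only the latter keeps the human out of the loop.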
“Never assume malice when stupidity will suffice.”
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 14:52:17
March 15 2016 14:46 GMT
#309
On March 15 2016 23:25 necaremus wrote:
On March 15 2016 22:25 deacon.frost wrote:
It doesn't.

It will probably analyze all the top games, then play "some" and analyze those. (We're skipping plenty of technical detail here.) In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not to. And in the end that could be the wrong move, because it doesn't know everything. That's the beauty of training a neural net: if you prepare proper learning scenarios, its decision making is similar to a human being's. The better the learning material, the better the results.

The "AI" doesn't need ALL the information if the learning models can work with that. But to build such an "AI" you need time, money, proper learning materials, and hardware. That's why game developers use AI that cheats and then dumb it down: it's easier to write. (Or maybe they used a dumbed-down net, who cares.)

Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be: will there be a limitation on its controlling mechanism? And the other questions asked by the Dagger of Bisu.

It is exactly the same as the difference between Koreans and foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and those solutions can sometimes be abused (you know what the expected result is).
It seems to me that either we are talking about different things, or you don't know how this works.


I don't know completely how it works, but I have some information and a lot of uncertainty, which I use to evaluate the situation.

I know that the AI doesn't need all the information, but I wanted to point out that we face a whole new range of problems in Starcraft compared to Go.

Let's suppose the AI has the "perfect" strategy. This would mean the AI always plays the exact same way, so the human wouldn't have this "uncertainty": he knows how the AI is going to play, and he could craft a strategy that isn't perfect but beats the AI's strategy (a doom drop, for example? I don't know).

You could try to bypass this by giving the AI a range of strategies to choose from. But if you hardcode those into the AI, I don't see the point of even trying to build an AI for Starcraft: the hardcoded strategies would be human-created, making it a "machine+human vs human" match.


If the AI finds a perfect strategy that wins 100% of the time and can't be countered by any action of its opponent, it will execute it over and over again without any chance of losing. Otherwise the strategy can't be called perfect, and the AI wouldn't play it over and over. It knows what predictability is and will vary its build orders.

It is programmed so that it will make every move that increases its chance of winning and avoid any situation that decreases it.

It can blink micro for 16 hours if it knows that will guarantee it a 1 HP advantage over its opponent. But it will never go all-in with a 98% chance of winning if there is any possible way to raise that by another 0.000000001%.
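That "never leave 0.000000001% on the table" behavior is just an argmax over estimated win probabilities. A two-line sketch (the action names and numbers are made up):

```python
def best_action(win_prob):
    """Take whichever action has the highest estimated win probability,
    no matter how tiny the edge over the runner-up."""
    return max(win_prob, key=win_prob.get)

estimates = {
    "all_in_now": 0.98,
    "blink_micro_forever_then_all_in": 0.980000001,
}
```

A human would call the two options identical; the maximizer never does.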
Be polite, be professional, but have a plan to kill everybody you meet.
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 14:51:29
March 15 2016 14:50 GMT
#310
Be polite, be professional, but have a plan to kill everybody you meet.
Pwere
Profile Joined April 2010
Canada1556 Posts
Last Edited: 2016-03-15 14:53:25
March 15 2016 14:52 GMT
#311
Sorry to say this, necaremus, but it seems to me you don't understand how AI works. Some of us have an advanced degree in AI, but since it's not exactly the same branch, we don't feel comfortable making predictions.

What you are describing is absolutely a non-issue for this type of AI. You are thinking of a bot, which is vastly inferior. There is nothing inherently difficult about Starcraft for an AI. The strategic aspect is complex, but you only have to be better than humans, not perfect. And humans waste most of their training regimen on mechanics.

AIs these days are at least on par with humans when dealing with uncertainty. Pure numbers over thousands of games beat intuition. They don't even bother with profiling to exploit people's weaknesses, because of how dominant the analytical approach is when dealing with uncertainty.

All the "problems" pointed out in this thread are mostly annoyances. Not being able to simulate millions of games per day is the bigger struggle, but I feel comfortable saying you can easily run hundreds of games of Broodwar per hour on gaming hardware, so Google would find a way.

I think the main reason Google even considers Starcraft is that it's fun to watch, and millions of people would watch these games. It would be a publicity stunt.
thePunGun
Profile Blog Joined January 2016
598 Posts
March 15 2016 14:57 GMT
#312
On March 15 2016 22:41 sh1RoKen wrote:
On March 15 2016 21:24 JimmyJRaynor wrote:
On March 15 2016 19:24 sh1RoKen wrote:
Because this beast can learn.
And he can do it really fast. He can play 10 million games per day against himself to understand how the game works and which move, tactic, or strategy is best in any possible game position. He can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound really hard to believe, but that is how it works. He doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And he does it better.

the stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. it might be good for creating its own heuristic functions.

do they plan on making this work without heuristics?
or are you claiming the Google AI will create its own as it studies replays and plays itself?


How they did it with Go:

1. They showed the AI 30 million Go moves from the internet, each marked as a "good move" (played by the player who won that game) or a "bad move" (played by the player who lost).
At this point AlphaGo had learned the rules of Go entirely by itself, without any manual algorithms or instructions from a human. It started to recognize and predict which move a human would make when trying to win from common game positions. It acquired a human "intuition".

2. Then it started playing against itself over and over and over again to build a database of moves, each evaluated by pure calculation of its winning percentage. And that trained its intuition even more.

3. Then it combined calculation on a scale no human can achieve with an intuition humans don't really understand. There is no meaningful way to compare a human's level to what AlphaGo achieved.

They will probably do the same thing with StarCraft. After lesson 1 it will start to play like a really good human. After lesson 2 it will defeat anyone, without any chance, on the "strategy" level. And I haven't even mentioned the mechanical reaction, speed, and accuracy, which weren't even considered during StarCraft's balance design.


Well, it's not that simple, considering that SC is not turn-based. There are much more complex calculations involved. I'm sure it will get there eventually, but it won't learn as fast in a real-time strategy game like SC, since it's far more random.

"You cannot teach a man anything, you can only help him find it within himself."
BaronVonOwn
Profile Joined April 2011
299 Posts
March 15 2016 14:58 GMT
#313
I'm sure BoxeR's only saying this because he'd love the publicity, because any serious AI would make a pro SC2 player look like a bronze noob. AIs have perfect mechanics, meaning he will lose every micro/macro battle. Mechanics dominate strategy in SC2, and you can win games on pure micro/macro alone. StarCraft was developed assuming human players with lag and poor reactions, and a lot of the game's elements would be rendered useless against an AI player. For example, think about Raven seeker missiles: those would basically never hit against a properly coded AI. Hell, they never hit against human players either.
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
March 15 2016 15:13 GMT
#314
On March 15 2016 23:57 thePunGun wrote:
On March 15 2016 22:41 sh1RoKen wrote:
On March 15 2016 21:24 JimmyJRaynor wrote:
On March 15 2016 19:24 sh1RoKen wrote:
Because this beast can learn.
And he can do it really fast. He can play 10 million games per day against himself to understand how the game works and which move, tactic, or strategy is best in any possible game position. He can analyse all the pro replays on the internet to study how humans play and what works best against them. It might sound really hard to believe, but that is how it works. He doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human than you'd expect. And he does it better.

the stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. it might be good for creating its own heuristic functions.

do they plan on making this work without heuristics?
or are you claiming the Google AI will create its own as it studies replays and plays itself?


How they did it with Go:

1. They showed the AI 30 million Go moves from the internet, each marked as a "good move" (played by the player who won that game) or a "bad move" (played by the player who lost).
At this point AlphaGo had learned the rules of Go entirely by itself, without any manual algorithms or instructions from a human. It started to recognize and predict which move a human would make when trying to win from common game positions. It acquired a human "intuition".

2. Then it started playing against itself over and over and over again to build a database of moves, each evaluated by pure calculation of its winning percentage. And that trained its intuition even more.

3. Then it combined calculation on a scale no human can achieve with an intuition humans don't really understand. There is no meaningful way to compare a human's level to what AlphaGo achieved.

They will probably do the same thing with StarCraft. After lesson 1 it will start to play like a really good human. After lesson 2 it will defeat anyone, without any chance, on the "strategy" level. And I haven't even mentioned the mechanical reaction, speed, and accuracy, which weren't even considered during StarCraft's balance design.


Well, it's not that simple, considering that SC is not turn-based. There are much more complex calculations involved. I'm sure it will get there eventually, but it won't learn as fast in a real-time strategy game like SC, since it's far more random.



There is nothing random in StarCraft. It might look random to humans, but for a computer it is 100% predictable.

It will definitely take much more time to learn lesson 1, because the design is more complicated than Go's.
But man, some enthusiasts managed to teach a much simpler artificial neural network to complete a Mario level in 34 attempts! And that was child's play compared to what Google is capable of, in both intellectual and hardware resources.
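The two "lessons" described in this thread can be caricatured in a few lines of Python: a toy game where the higher number wins, with an imitation stage seeded from labelled "human" moves and a self-play stage that refines the values. Everything here (names, numbers, the toy game itself) is an illustrative assumption, not anything from AlphaGo's actual implementation:

```python
import random
from collections import defaultdict

random.seed(0)  # for reproducibility of this toy run
MOVES = [0, 1, 2, 3]  # toy game: the higher move wins

def lesson1_imitation(labelled_moves):
    """Lesson 1: average win/loss labels per move -- a crude 'intuition'
    learned from (move, won) pairs taken from human games."""
    totals, counts = defaultdict(float), defaultdict(int)
    for move, won in labelled_moves:
        totals[move] += 1.0 if won else -1.0
        counts[move] += 1
    return {m: totals[m] / max(counts[m], 1) for m in MOVES}

def lesson2_self_play(value, games=1000):
    """Lesson 2: play against itself, nudging each chosen move's value
    up on a win and down on a loss."""
    for _ in range(games):
        a = max(MOVES, key=lambda m: value[m] + random.random())  # noisy greedy
        b = random.choice(MOVES)                                  # "opponent"
        value[a] += 0.1 if a > b else -0.1
    return value

# Seed with human games (a move is labelled a win if it beat a random reply),
# then refine by self-play.
human = [(m, m > random.choice(MOVES))
         for m in (random.choice(MOVES) for _ in range(200))]
values = lesson2_self_play(lesson1_imitation(human))
# The always-winning move ends up valued far above the always-losing one.
assert values[3] > values[0]
```

The design point the toy mirrors: imitation alone only gets you to roughly human-level labels, while self-play keeps sharpening the values past that.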
Be polite, be professional, but have a plan to kill everybody you meet.
ClanRH.TV
Profile Joined July 2010
United States462 Posts
March 15 2016 15:27 GMT
#315
On March 15 2016 22:39 heqat wrote:
On March 15 2016 21:28 necaremus wrote:
so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)


Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com


That study is saying that the world's total storage capacity is the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The title the article uses and its actual conclusions are very misleading.
"Don't take life too seriously because you'll never get out alive."
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 15:29 GMT
#316
On March 15 2016 23:52 Pwere wrote:
Sorry to say this, necaremus, but it seems to me you don't understand how AI works. Some of us have an advanced degree in AI, but since it's not exactly the same branch, we don't feel comfortable making predictions.

What you are describing is absolutely a non-issue for this type of AI. You are thinking of a bot, which is vastly inferior. There is nothing inherently difficult about Starcraft for an AI. The strategic aspect is complex, but you only have to be better than humans, not perfect. And humans waste most of their training regimen on mechanics.

AI these days are at least on par with humans when dealing with uncertainty. Pure numbers over thousands of games beat intuition. They don't even bother with profiling to exploit people's weaknesses because of how dominant the analytical approach is when dealing with uncertainty.

I do agree that we have different pictures of the situation. I also know that my "hardcoded" example points towards a bot, and the AI the DeepMind team created is entirely different.

A weakness of mine may very well be that I'm not afraid to make predictions, although my information is very limited (like in this case). I make these predictions on the one hand to find out where I might be wrong (because of a lack of information), and on the other hand... because it's fun for me.

Some people think this attitude is annoying, but, as far as I know, it's the fastest way of learning and improving oneself. Maybe people find it annoying because they make the mistake of interpreting evaluations as facts or opinions.

But I want to get back to the AI and StarCraft... and try to explain why I may have built a different picture than you have.

I didn't say it, but when I evaluated the AI-vs-human question, I didn't take the game as it is, but a slightly different version of StarCraft in which the AI and the human would be on par in micro-management of units:
both the human and the AI would use the same algorithms for blink micro, concave formation, focus fire (and so on), reducing the game to positional advantage on the map and build-order strategy.

I did this because you would not need the DeepMind AI to win against a human if you take the game as it is now.
The superior micro-control of a simple AI (which even I could program) would win pretty much every game against a real human...
“Never assume malice when stupidity will suffice.”
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 15:49 GMT
#317
On March 16 2016 00:27 ClanRH.TV wrote:
On March 15 2016 22:39 heqat wrote:
On March 15 2016 21:28 necaremus wrote:
so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)


Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com


That study is saying that the world's total storage capacity is the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The title the article uses and its actual conclusions are very misleading.

ty
I actually didn't bother reading it, because I thought "total nonsense" when I saw the title. But your statement suggests that it may very well be worth a read... just a bad title :3
“Never assume malice when stupidity will suffice.”
heqat
Profile Joined October 2011
Switzerland96 Posts
March 15 2016 16:09 GMT
#318
On March 16 2016 00:49 necaremus wrote:
On March 16 2016 00:27 ClanRH.TV wrote:
On March 15 2016 22:39 heqat wrote:
On March 15 2016 21:28 necaremus wrote:
so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)


Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com


That study is saying that the world's total storage capacity is the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The title the article uses and its actual conclusions are very misleading.

ty
I actually didn't bother reading it, because I thought "total nonsense" when I saw the title. But your statement suggests that it may very well be worth a read... just a bad title :3


Sure, but still from the article:

"To put our findings in perspective, the 6.4*1018 instructions per second that human kind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second,"

Of course the brain works very differently from a CPU, so we cannot directly compare them in terms of power.

nimdil
Profile Blog Joined January 2011
Poland3748 Posts
March 15 2016 19:40 GMT
#319
It's funny that people are discussing, at the same time, whether an AI could beat top players in StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game will be fair.

AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tune down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. AI won.
Xyik
Profile Blog Joined November 2009
Canada728 Posts
March 15 2016 19:45 GMT
#320
For those claiming that it will learn from games on the internet: how many replays / pieces of game data are actually available online? I would guess < 100K high-level games (and that's being generous; I'd guess even fewer, as top pros rarely release replays).

Let's say we have 100,000 replays.

Divide that by the 9 possible match-ups (TvX, ZvX, PvX), and now we only have ~11,111 per match-up.
Divide that by, let's say, 10 popular maps from the last 2 years, and now we have ~1,000 replays per map per match-up.
Divide that by the number of build-order openings / start positions, and we have at most ~100 replays to study from for each match-up on a particular map with particular openings.

I don't think that's enough data to properly seed the AI, so most of its learning will have to come from playing itself, which will be quite difficult. Let's say Google uses a cluster of 10,000 machines, each running a copy of SC for the AI to play on; that lets it play maybe 2M games a day against itself. Now do the same division to figure out how many games it can play per map / match-up / starting position / build order each day.

I don't know how much data AlphaGo needed to reach its current level in Go, but clearly training the AI in SC will be a much more difficult task based on acquiring enough data alone.

Then there is the challenge of it actually being able to learn the nuances of the game and interpret the game state. Even if Google found a way to sufficiently train it, I am really not convinced it could win.

I think map-level data is really important as well: will the AI be able to interpret what the map looks like?
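The division in the post above can be sanity-checked in a few lines (the 10-openings figure is an assumption the post only implies, and the 9-match-up count is the post's own, before mirrors are collapsed later in the thread):

```python
# Back-of-envelope replay-bucket arithmetic from the post above.
replays = 100_000
matchups = 9      # the post's count (TvX, ZvX, PvX)
maps = 10         # assumed popular maps
openings = 10     # assumed distinct openings / start positions

per_matchup = replays / matchups
per_map = per_matchup / maps
per_bucket = per_map / openings

print(round(per_matchup), round(per_map), round(per_bucket))
# roughly 11111, 1111, 111 -- on the order of the ~100 replays the post lands on
```

The exact bucket counts don't matter much; the point the arithmetic illustrates is how quickly a six-figure replay corpus thins out once you condition on match-up, map, and opening.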
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 21:05 GMT
#321
On March 16 2016 04:45 Xyik wrote:
[...] divide that by the 9 possible match-ups (TvX, ZvX, PvX), [...]

try that again, plz >_>

unless you are counting TvZ and ZvT as different match-ups, I don't get 9
“Never assume malice when stupidity will suffice.”
Xyik
Profile Blog Joined November 2009
Canada728 Posts
March 15 2016 21:25 GMT
#322
On March 16 2016 06:05 necaremus wrote:
On March 16 2016 04:45 Xyik wrote:
[...] divide that by the 9 possible match-ups (TvX, ZvX, PvX), [...]

try that again, plz >_>

unless you are counting TvZ and ZvT as different match-ups, I don't get 9


I was thinking that it would need to learn to play Z build orders vs T just as it would need to learn to play T builds vs Z, but yes, you are right: you can remove the duplicates by segmentation (e.g. treat a TvZ the same as a ZvT when the map, spawns, and build orders are the same for both races).
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 15 2016 22:17 GMT
#323
Technically there already is an APM limit in Brood War. In one of the games my bot played today, no units were being produced because some units were spamming so many actions that the network buffer got full.

I have also seen several bots that let their units spam so many actions that they keep getting stuck in the attack animation without actually attacking.


As for limiting the APM to something like 300: once bots can defeat pro players without an APM-limit rule, you can slowly start capping it more and more. Right now bots are only competitive with C- or lower players, even with their high APM.
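One hypothetical way such an APM cap could be enforced on a bot's command stream is a sliding-window limiter; here is a sketch (the class, its names, and the 300 figure are illustrative, not any real bot API):

```python
import collections

class ApmLimiter:
    """Sliding-window limiter: allow at most `cap` actions in any rolling
    `window` seconds, silently dropping the excess."""
    def __init__(self, cap=300, window=60.0):
        self.cap = cap
        self.window = window
        self.times = collections.deque()  # timestamps of accepted actions

    def allow(self, now):
        # Discard accepted actions that have aged out of the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.cap:
            self.times.append(now)
            return True
        return False

limiter = ApmLimiter(cap=300)
# A bot spamming 1000 actions within one second only gets 300 through.
allowed = sum(limiter.allow(t / 1000.0) for t in range(1000))
assert allowed == 300
```

A fixed-size window like this still lets a bot burst its whole budget instantly; a real rule set would probably also want a per-second cap.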
If you cannot win with 100 apm, win with 100 cpm.
snakeeyez
Profile Joined May 2011
United States1231 Posts
March 15 2016 22:47 GMT
#324
I think Brood War would be a mighty big test for AI, and I do think that if these researchers pick StarCraft as a proving ground they will far surpass all existing bots and show new tech that might beat the likes of Flash or Jaedong.
That being said, a bot playing Protoss could beat pro human players just with perfect dragoon micro and kiting. It would always trade favorably, and then just win the late game with all those advantages. It's not really fair if Flash has to choose between microing marines and macroing at his base while the bot can do everything at once. It's a huge advantage for the bot.
Making a bot that beats Flash in a best-of-7 without falling into predictable patterns or stupid build orders/decisions would be a very tough AI challenge, though. Even harder if you emulate how a human plays the game so it has to beat him fairly. After beating a Go grandmaster they are running out of games to solve, honestly. I never thought they would win at Go; it's something they've been after since the 1960s.
dankobanana
Profile Joined February 2016
Croatia237 Posts
March 15 2016 22:49 GMT
#325
maybe it's already on bnet as IIIIIIIIIIII
Battle is waged in the name of the many. The brave, who generation after generation choose the mantle of - Dark Templar!
Charoisaur
Profile Joined August 2014
Germany15900 Posts
Last Edited: 2016-03-16 00:43:42
March 16 2016 00:40 GMT
#326
On March 16 2016 04:40 nimdil wrote:
It's funny that people are discussing, at the same time, whether an AI could beat top players in StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game will be fair.

AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tune down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. AI won.

SC2 is meant to be played with mechanical restraints (aka mouse and keyboard), not through graphical input.
If someone plays SC2 without those mechanical restraints, he is basically cheating.
So the only fair way for an AI-vs-human game would be either to have the AI played by a robot (which isn't technically possible) or to simulate a robot (i.e. play through graphical input but with the mechanical restraints that humans/robots have).
Otherwise the AI would be cheating.
Many of the coolest moments in sc2 happen due to worker harassment
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 16 2016 01:47 GMT
#327
On March 16 2016 09:40 Charoisaur wrote:
On March 16 2016 04:40 nimdil wrote:
It's funny that people are discussing, at the same time, whether an AI could beat top players in StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game will be fair.

AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tune down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. AI won.

SC2 is meant to be played with mechanical restraints (aka mouse and keyboard), not through graphical input.
If someone plays SC2 without those mechanical restraints, he is basically cheating.
So the only fair way for an AI-vs-human game would be either to have the AI played by a robot (which isn't technically possible) or to simulate a robot (i.e. play through graphical input but with the mechanical restraints that humans/robots have).
Otherwise the AI would be cheating.



So to clarify, would this be considered cheating or not by you?


If you cannot win with 100 apm, win with 100 cpm.
Hexe
Profile Joined August 2014
United States332 Posts
March 16 2016 04:06 GMT
#328
It would be impossible for an AI to beat someone at SC2, let alone SC:BW. Mind games are a real thing. A Bo7? Absolutely no way. I can see one build, one matchup, one race. But even then, someone can cheese and think of something another player can't.
Baarn
Profile Joined April 2010
United States2702 Posts
Last Edited: 2016-03-16 05:03:34
March 16 2016 05:01 GMT
#329
BoxeR will get smashed by a bot at AlphaGo's level and funding. He'd be out of his league in micro/macro and even scouting. People are trying to argue that "mind games" are going to turn the tide. With a little AI adjustment it will be able to scout twice before really committing to a build. Maybe even more as the AI improves? You have to consider that an AI can simultaneously look around the map while building units, expanding, moving workers, etc. It's asinine to think humans will have any advantage when you will be constantly behind from the first second of the game.
There's no S in KT. :P
aTnClouD
Profile Blog Joined May 2007
Italy2428 Posts
March 16 2016 06:14 GMT
#330
Bots are nowhere near the level needed to solve StarCraft and beat the best players. Don't underestimate the complexity, and the amount of time it takes a computer to learn StarCraft through trial and error compared to a game like chess. There's too much difference between the two, and the human brain can still find valid strategies to a higher degree, and faster, than a bot can at this point in time.
http://i53.photobucket.com/albums/g64/hunter692007/kruemelmonsteryn0.gif
ashara
Profile Joined July 2008
France22 Posts
March 16 2016 07:53 GMT
#331
Quite exciting to see they may tackle StarCraft after Go. I don't think AIs are anywhere close to beating top SC players yet, but if they put in the money and effort I expect it to happen at some point. Although maybe not with a near-100% winrate, because build-order losses may still happen.

It would be quite interesting to see if they can make an AI that just knows how to learn, so that you could tell it: "learn to play StarCraft 1v1 at the top level" or "write an HTML5 breakout game and publish it on this platform" and it would actually do it.
papaz
Profile Joined December 2009
Sweden4149 Posts
Last Edited: 2016-03-16 08:05:13
March 16 2016 08:04 GMT
#332
On March 16 2016 15:14 aTnClouD wrote:
Bots are nowhere near the level needed to solve StarCraft and beat the best players. Don't underestimate the complexity, and the amount of time it takes a computer to learn StarCraft through trial and error compared to a game like chess. There's too much difference between the two, and the human brain can still find valid strategies to a higher degree, and faster, than a bot can at this point in time.


I think in Go the AI learned by playing literally millions of games against itself.

It would do the same thing in StarCraft in order to improve, just as humans do. The only difference? It can play millions of games simultaneously and learn faster than any human.

It could also learn the game by "watching" VODs of pro players.

There is no possible way a human can come close to computers. Just think about it for a second: just 40 years ago there were no home PCs.

Today we have computers everywhere, in just 40 years!

Just think of how fast the development goes. Can you even imagine the world 40 years from today?

If Google decides to dedicate time to a StarCraft AI, make no mistake: it will crush any of today's pros. I love how the pro Go players were so confident in themselves, and both the European and World champions got crushed.

Google and AI fighting!

aTnClouD
Profile Blog Joined May 2007
Italy2428 Posts
March 16 2016 09:29 GMT
#333
On March 16 2016 17:04 papaz wrote:
On March 16 2016 15:14 aTnClouD wrote:
Bots are nowhere near the level needed to solve StarCraft and beat the best players. Don't underestimate the complexity, and the amount of time it takes a computer to learn StarCraft through trial and error compared to a game like chess. There's too much difference between the two, and the human brain can still find valid strategies to a higher degree, and faster, than a bot can at this point in time.


I think in Go the AI learned by playing literally millions of games against itself.

It would do the same thing in StarCraft in order to improve, just as humans do. The only difference? It can play millions of games simultaneously and learn faster than any human.

It could also learn the game by "watching" VODs of pro players.

There is no possible way a human can come close to computers. Just think about it for a second: just 40 years ago there were no home PCs.

Today we have computers everywhere, in just 40 years!

Just think of how fast the development goes. Can you even imagine the world 40 years from today?

If Google decides to dedicate time to a StarCraft AI, make no mistake: it will crush any of today's pros. I love how the pro Go players were so confident in themselves, and both the European and World champions got crushed.

Google and AI fighting!


As I previously stated, bots are not capable of competing with humans at this point in time. There will be a point in the future when they can solve any kind of complex game, but that day is not close. It comes down to how complex the game is: backgammon has been solved, but poker is nowhere close to being solved. StarCraft is extraordinarily complex, so the same applies to it. As computers get better and faster, and better self-learning programs are developed, sure, AI will find and execute strategies that can't be overcome by humans.
http://i53.photobucket.com/albums/g64/hunter692007/kruemelmonsteryn0.gif
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
March 16 2016 11:11 GMT
#334
On March 16 2016 18:29 aTnClouD wrote:
On March 16 2016 17:04 papaz wrote:
On March 16 2016 15:14 aTnClouD wrote:
Bots are nowhere near the level needed to solve StarCraft and beat the best players. Don't underestimate the complexity, and the amount of time it takes a computer to learn StarCraft through trial and error compared to a game like chess. There's too much difference between the two, and the human brain can still find valid strategies to a higher degree, and faster, than a bot can at this point in time.


I think in Go the AI learned by playing literally millions of games against itself.

It would do the same thing in StarCraft in order to improve, just as humans do. The only difference? It can play millions of games simultaneously and learn faster than any human.

It could also learn the game by "watching" VODs of pro players.

There is no possible way a human can come close to computers. Just think about it for a second: just 40 years ago there were no home PCs.

Today we have computers everywhere, in just 40 years!

Just think of how fast the development goes. Can you even imagine the world 40 years from today?

If Google decides to dedicate time to a StarCraft AI, make no mistake: it will crush any of today's pros. I love how the pro Go players were so confident in themselves, and both the European and World champions got crushed.

Google and AI fighting!


As I previously stated, bots are not capable of competing with humans at this point in time. There will be a point in the future when they can solve any kind of complex game, but that day is not close. It comes down to how complex the game is: backgammon has been solved, but poker is nowhere close to being solved. StarCraft is extraordinarily complex, so the same applies to it. As computers get better and faster, and better self-learning programs are developed, sure, AI will find and execute strategies that can't be overcome by humans.


Could you please read about AlphaGo or artificial neural networks just a little bit before arguing about a topic you clearly know nothing about? That "point in the future when they will solve any kind of complex game" has been reached just now. Go has more combinations than there are atoms in the whole universe. And the AI didn't calculate all of them and solve the whole game; it learned to play the game like humans do, but much better. That AI has nothing in common with any other AI you might find in any other game.

And BTW:
1. He can kill 100 mutalisks with 1 phoenix without taking any damage.
2. He can kill 200 marines without stim with 1 stalker without taking any damage.
3. He can kill 33 ultralisks with 1 immortal + warp prism without taking any damage.
4. He can do 1, 2 and 3 at the same time.

StarCraft units weren't designed to be microed by a computer.
Be polite, be professional, but have a plan to kill everybody you meet.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 16 2016 12:59 GMT
#335
You realize AlphaGo isn't an ordinary neural network, right?

The success of AlphaGo, and the way neural networks work, doesn't mean it's an insta-gg, far from it.

The fact that SC depends so much on mechanics is a problem because you have to balance for it: not because we want a "fair" match, but because the point is to have an AI that does well strategically in such a complex game, while a lot of its complexity lies in the interface we have to use.
WriterMaru
Flonomenalz
Profile Joined May 2011
Nigeria3519 Posts
March 16 2016 13:23 GMT
#336
Without limitations on APM or something... I don't see how any pro beats the AI. SC units commanded by a computer are totally unfair. Perfectly microed units? Lol.

I think the only way to beat it would be to come at it with a strategy it has never seen before? But even then...
I love crazymoving
nimdil
Profile Blog Joined January 2011
Poland3748 Posts
March 16 2016 14:32 GMT
#337
On March 16 2016 09:40 Charoisaur wrote:
On March 16 2016 04:40 nimdil wrote:
It's funny that people are discussing, at the same time, whether an AI could beat top players in StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game will be fair.

AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tune down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. AI won.

SC2 is meant to be played with mechanical restraints (aka mouse and keyboard), not through graphical input.
If someone plays SC2 without those mechanical restraints, he is basically cheating.
So the only fair way for an AI-vs-human game would be either to have the AI played by a robot (which isn't technically possible) or to simulate a robot (i.e. play through graphical input but with the mechanical restraints that humans/robots have).
Otherwise the AI would be cheating.

To be fair, it's just the most efficient way for humans to interact. But if we could prepare a mouse, keyboard, and mechanical hands (jesus, that's stupid already) that would still be able to spam thousands of actions per minute (obviously it would be a custom-built mouse and keyboard), would that be "fair"? Because I don't think it'd be too hard compared to how complicated building an AI like AlphaGo is. What you would be doing is enforcing a translation of electronic impulses into mechanical moves, which in turn get translated into other electronic impulses.
I call it stupid.
Crying
Profile Joined February 2011
Bulgaria778 Posts
March 16 2016 15:13 GMT
#338
As a programmer, a player, and so on, I have something to say.

AlphaGo uses machine learning specifically to learn to play the game from enormous amounts of data.

Imagine feeding AlphaGo a good amount of data (replays), from which it can study and produce absolutely beautiful results.

The difference between chess go and starcraft is that chess and go is the following :

Chess uses heuristics to provide the best move.
Go uses heavy machine learning and neural networks to study games and thus come with best move/strategy
Starcraft would have to use machine learning neural networks AND heuristics probably to achieve better results, also it needs huge quantity of data.

The thing is, chess at the moment is pretty much completely dominated by computers. Magnus Carlsen, for instance, has an Elo of ~2850; a computer easily pulls 3000+, even reaching 3300 with different settings.
The best outcome Magnus can hope for against a computer is a draw. Chess grandmasters say they use engines to evaluate positions and find their mistakes in matches, but most of them never play against engines because it's pointless: while you see 20 moves ahead, the engine has already calculated more than 10 million positions. And thanks to its heuristic-guided search, it will always minimize your outcome with the minimax algorithm, clever alpha-beta pruning, quiescence search, and so on.
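The search machinery named here (minimax with alpha-beta pruning) fits in a few lines. Below is a minimal sketch over a toy game tree of hand-picked leaf scores, purely illustrative; a real engine adds move generation, evaluation functions, and quiescence search on top of this skeleton.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree.

    Leaves are numbers (static evaluations); internal nodes are lists
    of child nodes. A sketch of the idea, not a chess engine.
    """
    if depth == 0 or not isinstance(node, list):
        return node  # static evaluation at a leaf
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this branch
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff
        return value

# Toy tree, 3 plies deep; its minimax value works out to 5.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
best = alphabeta(tree, 3, float("-inf"), float("inf"), True)
```

The pruning never changes the answer; it only skips branches that a rational opponent would never allow, which is what lets engines search millions of positions in the time a human considers a handful.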

StarCraft, however, is not a perfect-information game, so that kind of search is only possible to a limited extent. Still, the advances in machine learning and neural networks over the past few years will probably some day bring an abomination to life. At the moment, people don't really know how to make AI safe (for humans).

It blows my mind that in the 1960s we didn't even know how to find the shortest path from A to B, and nowadays we have these things.
Determination~ Hard Work Surpass NATURAL GENIUS!
heqat
Profile Joined October 2011
Switzerland96 Posts
March 16 2016 16:13 GMT
#339
Next level AI could use a synthetic brain such as the Blue Brain:

https://en.wikipedia.org/wiki/Blue_Brain_Project
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 16 2016 17:21 GMT
#340
It blows my mind how in 1960's we didn't even know how to find the shortest path from A-B and nowadays we have these things.


https://en.wikipedia.org/wiki/Dijkstra's_algorithm =P
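Dijkstra published that shortest-path algorithm in 1959, and it fits in a dozen lines. A minimal sketch with a binary heap, run on a made-up three-node graph:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm on a weighted digraph.

    graph: dict mapping node -> list of (neighbor, weight) pairs,
    with non-negative weights. Returns shortest distances from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter route
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example: A -> B -> C beats the slower direct A -> C edge.
g = {"A": [("B", 1), ("C", 5)], "B": [("C", 2)], "C": []}
distances = dijkstra(g, "A")
```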
What qxc said.
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 16 2016 17:29 GMT
#341
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).
What qxc said.
BazookaBenji1
Profile Joined February 2016
15 Posts
March 16 2016 17:59 GMT
#342
i gotta agree with lordsaul. We've all played the computer on the hardest setting (Insane) and felt it was about Gold league at best. As programmers continue to add features to the AI that increase its scope, depth, and understanding of the game, it will keep getting harder; things like insane macro mixed with good harass and micro will raise the bar each time, especially if the programmer is an accomplished SC player who watches the pro-vs-AI games and keeps making little adjustments accordingly. It's only gonna get harder.
BazookaBenji1
Profile Joined February 2016
15 Posts
Last Edited: 2016-03-16 18:01:36
March 16 2016 18:01 GMT
#343
rockslave, you are insane if you think StarCraft isn't more complex than any game in existence, other than the ones the bankers and corporations are playing with global economies and governments. SC2 has infinitely more tech paths than chess or Go. Did you forget to take your medications or something?
Monochromatic
Profile Blog Joined March 2012
United States997 Posts
March 16 2016 18:19 GMT
#344
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


I think you are extremely underestimating the difference between a real-time and a turn-based game.

For example, a chess game can be written down very compactly:

1. e4 e5
2. Nf3 d6
3. d4 Bg4
4. de5 Bf3
5. Qf3 de5
6. Bc4 Nf6
7. Qb3 Qe7
8. Nc3 c6
9. Bg5 b5
10. Nb5 cb5
11. Bb5 Nbd7
12. O-O-O Rd8
13. Rd7 Rd7
14. Rd1 Qe6
15. Bd7 Nd7
16. Qb8 Nb8
17. Rd8#


This represents an entire game of chess, and you can read through it easily. There are only so many moves to simulate.

Compare this to StarCraft: if each player has 100 APM, there are 200 inputs to evaluate every minute. A 20-minute game has 4,000 moves, so by this crude count it is over 100 times more complex than chess.

Progamers with higher APM increase this number by a significant margin, and the differing lengths of games mean there could be over 15,000 moves in a single game.

Add in the fact that there are three races and multiple maps, and the amount a program would need to learn is immensely greater than for chess. Not to mention that randomness factors into StarCraft, with spawning positions and build orders.

I'm not sure how machine learning works, but I'm willing to bet the time it takes to analyze a game grows rapidly with the number of possible moves. In that sense, StarCraft is even harder than Go.

The final nail in the AI's coffin is that it has to analyze in real time. It can only look so far ahead before the future arrives, so it has to analyze extremely quickly. The amount of processing power required would be massive.
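That real-time constraint is usually handled with "anytime" search: deepen the lookahead one step at a time and keep the best answer found so far, stopping when the clock runs out. A minimal sketch; `evaluate`, the actions, and the toy scores are all hypothetical stand-ins:

```python
import time

def best_action_anytime(evaluate, actions, budget_s=0.05):
    """Pick an action under a wall-clock budget.

    Deepens the lookahead one ply at a time; each completed depth
    refines the previous answer, so interrupting at the deadline
    still yields the best answer found so far.
    """
    deadline = time.monotonic() + budget_s
    best = actions[0]
    depth = 1
    while time.monotonic() < deadline:
        best = max(actions, key=lambda a: evaluate(a, depth))
        depth += 1
        if depth > 20:
            break  # depth cap for the sketch
    return best, depth - 1

# Toy evaluator: action 2 only looks best once the search is deep enough,
# mimicking how shallow lookahead can mislead.
def toy_eval(action, depth):
    shallow = {1: 7, 2: 1, 3: 5}
    true_scores = {1: 3, 2: 10, 3: 5}
    return true_scores[action] if depth >= 2 else shallow[action]

choice, reached = best_action_anytime(toy_eval, [1, 2, 3], budget_s=0.05)
```

The point is that the search never blocks past its budget: it returns whatever the deepest completed pass recommended, trading answer quality for responsiveness.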
MC: "Guys I need your support! iam poor make me nerd baller" __________________________________________RIP Violet
Veldril
Profile Joined August 2010
Thailand1817 Posts
Last Edited: 2016-03-16 19:56:32
March 16 2016 19:25 GMT
#345
On March 17 2016 03:19 Monochromatic wrote:
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


I think you are extremely underestimating the difference between a real time and turn based game.

For example a chess game can be represented very quickly:

1. e4 e5
2. Nf3 d6
3. d4 Bg4
4. de5 Bf3
5. Qf3 de5
6. Bc4 Nf6
7. Qb3 Qe7
8. Nc3 c6
9. Bg5 b5
10. Nb5 cb5
11. Bb5 Nbd7
12. O-O-O Rd8
13. Rd7 Rd7
14. Rd1 Qe6
15. Bd7 Nd7
16. Qb8 Nb8
17. Rd8#


This represents an entire game of chess, and you can read through it easily. There are only so many moves to simulate.

Compare this to starcraft: If each player has 100 APM, that means that there is 200 inputs to evaluate every minute. A 20 minute game has 4000 moves, so it is over 100 times more complex than chess.

Progamers with more APM also increase this number by a significant margin, and the differing lengths of games mean that there could be over 15000 possible moves for each game.

Add in the fact that there is three races and multiple maps, the amount that a program would need to learn is immensely more than chess. Not to mention randomness factors into starcraft, with spawning positions and build orders.

I'm not sure how machine learning works, but I'm willing to bet the time it takes to analyze a game increases exponentially with the possible number of moves. In that sense, starcraft is even harder than go.

The final nail in the Ai's coffin is that it has to analyze in real time. It can only look so far ahead before the future arrives, so it has to analyze extremely quickly. The amount of processing power required would be massive.


You can't really use chess to describe an AlphaGo-type AI, however. You can't even compare chess with Go, as Go is millions of times more complex than chess. Comparing chess with Go is like comparing the ease of landing on the Moon with landing on a planet in another solar system.

I feel like many people misunderstand what an AlphaGo-type AI is and how it works. AlphaGo is not an AI that is hard-coded to respond to a human's moves or to memorize patterns. If that were all it could do, it would not be able to defeat Lee Sedol in even a single game of Go, which has more possible games than there are atoms in the universe. Considering that the opening move of Go has 361 positions, the first five moves alone allow 5,962,870,725,840 possibilities. There is no way an AI can search all of that in a reasonable amount of time, yet AlphaGo produced (at least) two moves, each in under five minutes, that will go down in Go history as "God's Hand" moves that left even 9-dan professionals in awe. There was one move it decided to play even though it estimated the chance of a human playing it at less than 1 in 10,000; it decided the human pros would be wrong.

The AI is not coded to just copy human moves; it is coded to "learn the game" instead. This means it learns to play the way a human would: by experimenting and learning from its mistakes. It learns heuristics to simplify its "thought process" and then reinforces its decision-making by practicing moves (in StarCraft terms: build orders, army movement, positioning, etc.) millions of times, reducing the time it needs to find a good move. It is built to learn how to play, and with every game it improves its decision-making. After the match, even DeepMind's people couldn't say why AlphaGo made some moves, because its choices emerge from what it has learned rather than from rules anyone wrote down.

So if DeepMind decides to build an AI to learn StarCraft, it would not hard-code it to follow a build order or blindly build things. It would learn how to scout, how to count buildings and workers and predict the opponent's build order. It would learn to recognize scouting patterns and, from the timing at which the scout arrives at the base, decide whether the opponent might be proxying. Then it would play out the possible scenarios ahead (a space far less complicated than Go's). And it would do this by practicing against itself millions of times per day to find out which responses lead to a win, learning from every single game. That's the scary part of this type of AI.
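The "practice against itself millions of times" idea can be sketched at toy scale. The following is a hypothetical illustration, nothing like AlphaGo's actual networks: tabular negamax-style Q-learning via self-play on a one-pile Nim game (take 1 or 2 sticks; whoever takes the last stick wins). One table serves both sides because the learning target negates the opponent's best reply.

```python
import random

def train_selfplay(episodes=20000, max_pile=7, lr=0.5, eps=0.2, seed=0):
    """Self-play learning on toy Nim. Q[(pile, move)] is the value of a
    move for the player about to act; the target negates the opponent's
    best reply (negamax), so the same table is used by both players."""
    rng = random.Random(seed)
    Q = {}
    def moves(p):
        return [m for m in (1, 2) if m <= p]
    for _ in range(episodes):
        pile = rng.randint(1, max_pile)
        while pile > 0:
            legal = moves(pile)
            if rng.random() < eps:
                m = rng.choice(legal)  # explore a random move
            else:
                m = max(legal, key=lambda a: Q.get((pile, a), 0.0))
            nxt = pile - m
            if nxt == 0:
                target = 1.0  # took the last stick: a win
            else:
                target = -max(Q.get((nxt, a), 0.0) for a in moves(nxt))
            old = Q.get((pile, m), 0.0)
            Q[(pile, m)] = old + lr * (target - old)
            pile = nxt  # the opponent now moves, using the same table
    return Q

def best_move(Q, pile):
    return max((m for m in (1, 2) if m <= pile),
               key=lambda a: Q.get((pile, a), 0.0))

Q = train_selfplay()
```

With no rules coded in beyond "taking the last stick wins", the table converges on the known optimal policy (always leave the opponent a multiple of 3), purely from playing itself.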
Without love, we can't see anything. Without love, the truth can't be seen. - Umineko no Naku Koro Ni
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-16 19:55:56
March 16 2016 19:53 GMT
#346
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


What?

StarCraft is way bigger than these games: just count the possible "squares" (tiles) on the smallest StarCraft map and compare that with a Go or chess board.

"Tech paths"? You know that fighting with 4 marines vs 5 marines is a totally different situation than fighting with 5 marines vs 5 marines?

Now factor in that each marine needs a tile to stand on, and each marine has to move to this tile without blocking the path of another marine.

Just a simple 1-rax vs 1-rax situation is way more complex than the whole game of chess.
/edit: 1 rax vs 1 rax would be around the same complexity as Go, just with a bigger "board"
“Never assume malice when stupidity will suffice.”
Veldril
Profile Joined August 2010
Thailand1817 Posts
March 16 2016 20:03 GMT
#347
On March 17 2016 04:53 necaremus wrote:
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


what?

starcraft is way bigger compared to these games: just count the possible "squares"(tiles) on the smallest starcraft map and compare that with a go or chess board.

"tech paths"? you know that fighting with 4 marines vs 5 marines is a total different situation, as fighting with 5 marines vs 5 marines?

now input, that each marine needs a tile to stand on. and each marine has to move to this tile while not blocking the path of another marine.

just a simple 1 rax vs 1 rax situation is way more complex than the whole game of chess.
/edit: 1 rax vs 1 rax would be around the same complexity as Go, just that you have a bigger "board"


1 rax vs 1 rax is far from the complexity of Go, because the position of each unit is not as important and can be simplified into areas the units can occupy that give similar results. A marine one pixel away would not mean much in the local fight, let alone in the bigger picture of the whole game. In contrast, moving a single stone in Go from one point to another can make the difference between a win and a loss.

And an AlphaGo-type AI is taught to think with heuristics similar to a human's thought process, so it will be able to simplify its thinking and use shortcuts that make its decision process fast enough.

Without love, we can't see anything. Without love, the truth can't be seen. - Umineko no Naku Koro Ni
loppy2345
Profile Joined August 2015
39 Posts
March 16 2016 20:26 GMT
#348
On March 17 2016 04:25 Veldril wrote:
On March 17 2016 03:19 Monochromatic wrote:
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


I think you are extremely underestimating the difference between a real time and turn based game.

For example a chess game can be represented very quickly:

1. e4 e5
2. Nf3 d6
3. d4 Bg4
4. de5 Bf3
5. Qf3 de5
6. Bc4 Nf6
7. Qb3 Qe7
8. Nc3 c6
9. Bg5 b5
10. Nb5 cb5
11. Bb5 Nbd7
12. O-O-O Rd8
13. Rd7 Rd7
14. Rd1 Qe6
15. Bd7 Nd7
16. Qb8 Nb8
17. Rd8#


This represents an entire game of chess, and you can read through it easily. There are only so many moves to simulate.

Compare this to starcraft: If each player has 100 APM, that means that there is 200 inputs to evaluate every minute. A 20 minute game has 4000 moves, so it is over 100 times more complex than chess.

Progamers with more APM also increase this number by a significant margin, and the differing lengths of games mean that there could be over 15000 possible moves for each game.

Add in the fact that there is three races and multiple maps, the amount that a program would need to learn is immensely more than chess. Not to mention randomness factors into starcraft, with spawning positions and build orders.

I'm not sure how machine learning works, but I'm willing to bet the time it takes to analyze a game increases exponentially with the possible number of moves. In that sense, starcraft is even harder than go.

The final nail in the Ai's coffin is that it has to analyze in real time. It can only look so far ahead before the future arrives, so it has to analyze extremely quickly. The amount of processing power required would be massive.


You can't really use chess to descripbe Alphago-type of AI, however. You can't even compare chess with go as go is million time more complex than chess. To compare chess with go is like you compare the easiness of landing on the moon with landing on other solar system's planet.

I feel like many people misunderstand what is Alphago-type AI is or how it works. Alphago is not AI that is hard coded to response to human's move or memorize pattern. If that's is its ability then it would not be able to defeat Lee Sedol in even a single game of go which has possible moves more than number of atoms in the universe. Considering the opening move of go has 361 position, the first five turns of go would come up with the total possible moves of 5,962,870,725,840. There's no way an AI can do that in a reasonable amount of time, yet Alphago has pretty much put in (at least) two moves that would go down in go's history as "God's Hand" that even 9-professional dan pros are in awe in less than 5 minutes. There is one move that it decides to play even it recognized that the chance of human playing this move is less than 1/10,000 but it decides that the human pros would be wrong.

The AI is not coded to just copy human move but it is coded to "learn the game" instead. This means that it learn to play the game like human would, by experimenting and learn from its mistake. It would learn to use heuristic to simplify their "thought process" and then reinforcing their decision making process by continuing practices the moves (or if in starcraft terms: build order, or army movement, positioning, etc.) millions of times to reduce the time they need to make a good move. It is build to learn how to play and with every game it will improve its decision making process. After the game, even Deepmind's people don't know why Alphago made some moves because

So if Deepmind decides to code their AI-system to learn Starcraft, it would not be putting in hard coding to to make it just respond to build order or blindly builds something. It will learn how to scout, how to count buildings and workers and predict what the build order would be. It will learn to recognize scouting patterns and timing that the scout arrive at the base to decide whether it is possible that the opponent is proxying or not. Then it will play out the possible scenarios (which is less complicated than go by a lot) ahead. And it will do by practicing with itself millions of times per day to find out what would the possible responses that would lead to a win be and learn from every single game. That's the scary part of this type of AI.


The thing is, StarCraft is based in real time and has to be played as such. If an AI plays against itself, it will have to learn by itself what the win condition is. Assuming it eventually manages to finish games in 10 minutes on average, it can play 144 games each day, so 52,560 games in one year. Let's just say, arbitrarily, that it has to play 1 million games before it reaches pro level; that would take about 19 years. Realistically it would probably take billions or trillions of games, which would take thousands or even millions of years.

Therefore this trial-and-error approach just won't work in StarCraft, unlike Go or chess, where the AI can play games against itself in fractions of a second.
beg
Profile Blog Joined May 2010
991 Posts
March 16 2016 20:33 GMT
#349
Yeah, I was wondering if there is a way to speed up SC:BW games so DeepMind could have faster games...
Maybe they have a reasonable approach with many games running in parallel. I don't know if that would work. It'd probably be a novelty, eh?


Wish they gave some insight. This is so exciting.
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-16 22:08:14
March 16 2016 21:46 GMT
#350
On March 17 2016 05:03 Veldril wrote:
On March 17 2016 04:53 necaremus wrote:
On March 17 2016 02:29 rockslave wrote:
Please stop saying StarCraft is more complex than Chess or Go.

If you only look at strategy, StarCraft is absurdly simple. The number of possible tech paths and "general strategies" you can do is veeeeery small compared to these board games.

The complexity for the AI would be more on input/output, and that is assuming it won't run through an API like other AIs in BW (GTAI or the like).


what?

starcraft is way bigger compared to these games: just count the possible "squares"(tiles) on the smallest starcraft map and compare that with a go or chess board.

"tech paths"? you know that fighting with 4 marines vs 5 marines is a total different situation, as fighting with 5 marines vs 5 marines?

now input, that each marine needs a tile to stand on. and each marine has to move to this tile while not blocking the path of another marine.

just a simple 1 rax vs 1 rax situation is way more complex than the whole game of chess.
/edit: 1 rax vs 1 rax would be around the same complexity as Go, just that you have a bigger "board"


1 rax vs 1 rax is far from the complexity of Go because the position of each unit is not as important and can be simplify to possible area that units can be that would give similar results. A marine on a pixel away would not means a lot in the local fight, let alone the bigger picture of the whole game. In contrast, a single move of a stone from one point to another in go can make a difference between a win and a loss of a game.

And Alphago-type of AI is taught to think and use heuristic similar to human's thought process. So it will be able to simplify and use shortcut to their thinking that would make their decision process fast enough.



The position of each unit is not as important? If you have the same number of marines (for example 5 vs 5), but one "player" has his marines positioned so that all 5 can focus-fire a target at the same instant, while the other has 3 marines in front and 2 marines behind, then only 3 of his marines can fire at the start of the fight and the other 2 join the "second round". Who do you think wins this fight?

Clearly the one who uses all 5 marines the instant the fight starts.
(I would guess he is left with 2-3 marines, while the other one has none left.)

And this is only one tile of difference in position. Imagine you used a marine to scout: he would never be able to join the fight, making it essentially a "4v5" although both players have the same number of marines.

/edit: and I didn't even consider the layout of the map, concretely: line of sight. Put 5 marines on top of a ramp and try to break through with 5 marines... good luck.
“Never assume malice when stupidity will suffice.”
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-16 21:52:05
March 16 2016 21:51 GMT
#351
On March 17 2016 05:26 loppy2345 wrote:
The thing is Starcraft is based in real time, and has to be played as such. If an AI tries to play itself, it will have to learn by itself what the win condition is. Assuming it eventually manages to play games in 10 minutes on average, it will be able to play 144 games each day, so 52,560 games in one year. Let's just say arbitarily that it has to play 1 million games before it reaches pro level, it will take 18 years. Realistically, it will take probably billions or trillions of games, which would be millions of years.

Therefore this trial and error approach just won't work in starcraft, unlike go or chess where it can play games against itself in fractions of seconds.

This is not entirely true; to be honest, it is far from reality. Have you ever heard of parallel processing? The AI could play multiple games at the same time (just like they did with Go), and they could easily adjust the game speed to something faster if they wished.

/edit: they only need a smart way to merge the data they get out of the games :edit/

The question is: do they have the resources to do so? For Go they used something like the energy of a mid-sized city over a few months.

If they want to compete in StarCraft, they would have to scale this up.
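The run-many-games-and-merge idea can be sketched in a few lines. This is a hypothetical illustration: `play_one_game` is a made-up stand-in for an actual self-play game, and threads stand in for the processes or machines a real training system would spread games across.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def play_one_game(seed):
    """Hypothetical stand-in for a full self-play game: returns a
    made-up 'experience' record a learner could later train on."""
    rng = random.Random(seed)
    return {
        "seed": seed,
        "winner": rng.randint(0, 1),
        "moves": rng.randint(50, 300),
    }

def run_parallel_selfplay(n_games, n_workers=8):
    """Run many independent games concurrently, then merge the results
    into one summary. Merging is trivial here because the games are
    independent; a real system would feed the records into training."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(play_one_game, range(n_games)))
    return {
        "games": len(results),
        "wins_p0": sum(1 for r in results if r["winner"] == 0),
        "total_moves": sum(r["moves"] for r in results),
    }

stats = run_parallel_selfplay(100)
```

Because each game is independent, throughput scales with workers; the only serial step is the merge at the end.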
“Never assume malice when stupidity will suffice.”
Quesadilla
Profile Blog Joined October 2007
United States1814 Posts
March 16 2016 23:25 GMT
#352
Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win.
Make a lot of friends. Wear good clothes. Drink good beer. Love a nice girl.
StarscreamG1
Profile Joined February 2011
Portugal1652 Posts
March 16 2016 23:38 GMT
#353
On March 17 2016 08:25 Quesadilla wrote:
Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win.

Agree, but with restrictions the test wouldn't make sense.
bo1b
Profile Blog Joined August 2012
Australia12814 Posts
March 16 2016 23:39 GMT
#354
People who think the AIs that play chess and Go are doing something fundamentally different from each other are hilarious. As is the insecurity all Go players seem to display whenever they feel compelled to tell the world just how many possible combinations there are.

Everyone thinking that StarCraft is unsolvable for an AI has completely missed the point: these AIs aren't memorizing every possible position in chess or Go (completely impossible with current technology), they're simply learning the game. StarCraft is significantly less complex than both Go and chess, and to assume that asinine things like all possible positions of a marine on a map will make a significant impact is laughable. In fact, given how quickly machine learning operates, I'd bet that positioning is probably the very first thing it masters in the grand scheme of strategy.
jinorazi
Profile Joined October 2004
Korea (South)4948 Posts
March 16 2016 23:48 GMT
#355
I'd be impressed, and I hope to see it within my lifetime.

Hell, I'm confident even I can beat whatever AI is available at the moment, but obviously this is talking about the future.

I just can't fathom how an AI would come to understand a game like StarCraft and be able to decide what to do.
age: 84 | location: california | sex: 잘함
loppy2345
Profile Joined August 2015
39 Posts
March 17 2016 00:18 GMT
#356
Also, I think the map choice is going to seriously mess up the AI. If it were a plain map with no obstacles, it would obviously be a lot easier for the AI than a map with lots of cliffs. It would be very easy to design maps that completely screw over the AI, whereas humans would understand the map much more quickly.

I think it's definitely possible to develop an AI that could beat the best pros consistently, but it would probably take a team of 10 world-class programmers 20 years or so, and that's not really worth the money and effort.
Veldril
Profile Joined August 2010
Thailand1817 Posts
Last Edited: 2016-03-17 00:49:26
March 17 2016 00:45 GMT
#357
On March 17 2016 08:39 bo1b wrote:
People who think that ai's that play chess/go are doing it differently to each other are hilarious. As is the insecurity that all go players seem to display whenever they feel compelled to tell the world just how many possible combinations there are.

Everyone thinking that Starcraft is unsolvable for an ai has completely missed the point that these ai's aren't memorising every possible option in chess/go (completely impossible with current technology), they're simply learning the game. Starcraft is significantly less complex then both go and chess, and to assume that asinine things like all possible positions of a marine on a map is going to make a significant impact is laughable. In fact, given how quickly machine learning operates, I'd bet that positioning is probably the very first thing that is mastered by it in the grand scheme of strategy.


Well, the AIs that play chess and AlphaGo are doing things completely differently, though. In chess, AIs can search combinations over the overall board position without AlphaGo's policy-network techniques, but Go is too complex for that. If the same type of AI used in chess worked for Go, the top Go players would have been defeated by AI a long time ago (no human has beaten a top AI in chess since around 2006-2007). It is not really insecurity if it is a fact backed up by concrete evidence.

On March 17 2016 06:46 necaremus wrote:

the position of each unit is not as important? if you have the same number of marines (for example 5 vs 5), but one "player" has his marines positioned, so that all 5 can focus fire a target at the same time, in the same instant, while the other one has 3 marines in front an 2 marines behind -> only 3 marines can fire on initiation of the fight, the other 2 will join the "2nd round" of the fight. who do you think wins this fight?

clearly the one who uses all 5 marines in the instant the fight starts.
(i would guess he would be left with 2-3 marines, while the other one has none left)

and this is only 1 tile difference in position: imagine you used a marine to scout! he would never be able to join the fight, making it essential a "4v5" although both players would have the same amount of possible marines.

/edit: and i didn't even consider the layout of the map, concrete: line of sight. put 5 marines on top of a ramp and try to break through with 5 marines... good luck.


As long as a one-pixel or one-tile difference does not lead to a different result, those differences do not matter, and positions can be heuristically grouped into clusters instead. When pro players play, they don't think about placing each unit on a specific tile; they think about positioning units in a general area, as long as the cluster is where the units should be. An AlphaGo-type AI is also taught to think this way, so it will learn to emulate how human pros think, but it will be made to think faster.

Besides, losing a group of marines does not matter much in the bigger picture. If sacrificing a group of units leads to a better game position (strengthening the overall board position in Go, or opening a counterattack path to a base in StarCraft), the AI will be willing to sacrifice units. It will also learn how to react, and what it can do to maximize its chance to win, when units are caught out of position.

On March 17 2016 05:26 loppy2345 wrote:
The thing is Starcraft is based in real time, and has to be played as such. If an AI tries to play itself, it will have to learn by itself what the win condition is. Assuming it eventually manages to play games in 10 minutes on average, it will be able to play 144 games each day, so 52,560 games in one year. Let's just say arbitarily that it has to play 1 million games before it reaches pro level, it will take 18 years. Realistically, it will take probably billions or trillions of games, which would be millions of years.

Therefore this trial and error approach just won't work in starcraft, unlike go or chess where it can play games against itself in fractions of seconds.


That's true, but I would say that if the AI can use parallel processing to learn, it could play far more than 144 games a day.

On March 17 2016 09:18 loppy2345 wrote:
Also I think the map choice is going to seriously mess up the AI, if it was a plain map with no obstacles, etc..., then it would obviously be a lot easier for the AI than a map with lots of cliffs. It would be very easy to design maps that would completely screw over the AI, whereas obviously humans would be able to understand the map a lot quicker.

I think it's definitely possible to develop an AI that could beat the best pro's consistently, but would probably take a team of 10 world class programmers 20 years or so to do it, and that's not really worth the money and effort.


You could say that about any game, though. What was the point of making an AI that beats people at chess and Go? What was the point of spending billions building AlphaGo? The point is research into how the human learning process works, and then people do it, like what happened with AlphaGo. If beating pro StarCraft players with an AI would teach us how human decision-making or learning works under asymmetric information, or would improve AI decision-making, then people will do it.
Without love, we can't see anything. Without love, the truth can't be seen. - Umineko no Naku Koro Ni
Kerm
Profile Joined April 2010
France467 Posts
March 17 2016 11:13 GMT
#358
Interesting blog on gamasutra on this subject.

http://gamasutra.com/blogs/BenWeber/20160314/267956/DeepMind_Challenges_for_StarCraft.php
What i know is that I know nothing - [http://twitter.com/UncleKerm]
unholyflare
Profile Joined August 2014
42 Posts
March 17 2016 11:27 GMT
#359
On March 13 2016 03:24 Charoisaur wrote:
On March 13 2016 03:17 lordsaul wrote:
I think people massively underestimate what perfect mechanics does to the game. It depends on the rules/limitations placed on the AI, but imagine:

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping it for the next shot
* Marines that always maintain their range advantage on roaches
* Tanks that always target the banelings first
* Marines that always perfect split v banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack)
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves. Computers won't even need "good" strategy to beat humans, just a large number of difficult-to-handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range).

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real time pressure

people don't underestimate that. they know the AI would have to be limited for it to be a fair challenge.
the point is to show that bots are more intelligent than humans, not that they have better mechanics.


This was never the point? Certainly, as far as chess engines go, they are superior simply because they can brute-force calculate in a way that humans can't. Humans have to "teach" engines strategy by assigning values to various strategic aspects.

The brute-force calculation power of machines in chess/Go is, I would say, roughly equivalent to mechanics in SC2. It's part of the deal.
Elentos
Profile Blog Joined February 2015
55509 Posts
March 17 2016 11:33 GMT
#360
On March 17 2016 08:38 StarscreamG1 wrote:
On March 17 2016 08:25 Quesadilla wrote:
Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win.

Agree, but with restrictions the test wouldn't make sense.

Why not? I'm pretty sure it's easier to create an AI that can beat the best SC players with humanly impossible mechanics than it is to make an AI that can beat them strategically with human-like mechanics. But the 2nd one is way more interesting.
Every 60 seconds in Africa, a minute passes.
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
March 17 2016 11:39 GMT
#361
On March 17 2016 20:27 unholyflare wrote:
On March 13 2016 03:24 Charoisaur wrote:
On March 13 2016 03:17 lordsaul wrote:
I think people massively underestimate what perfect mechanics does to the game It depends on the rules/limitations placed on the AI, but imagine

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping it for the next shot
* Marines that always maintain their range advantage on roaches
* Tanks that always target the banelings first
* Marines that always perfect split v banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves . Computers won't even need "good" strategy to beat humans, just a large number of difficult to handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range.

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real time pressure

people don't underestimate that. they know the AI would have to be limited for it to be a fair challenge.
the point is to show that bots are more intelligent then humans not that they have better mechanics.


This was never the point? Certainly as far as chess engines go, they are superior simply because they can brute force calculate in the way that humans can't. Humans have to "teach" engines strategy by assigning values to various strategic aspects.

The brute force calculation power of machines in chess/go I would say is roughly the equivalent to mechanics in SC2. It's part of the deal.

For the love of me, read anything about the thing. It hasn't brute-forced the game. Brute-forcing Go is technically near impossible; that's why the net lost one game (IIRC the score is now 3-1).
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
Crying
Profile Joined February 2011
Bulgaria778 Posts
March 17 2016 12:12 GMT
#362
On March 17 2016 02:21 rockslave wrote:
It blows my mind how in 1960's we didn't even know how to find the shortest path from A-B and nowadays we have these things.


https://en.wikipedia.org/wiki/Dijkstra's_algorithm =P


Yeah, it was published in 1959. Still, it's truly amazing how far we've come: from literally not knowing how to calculate flows or shortest distances to being able to beat a human at Go.
Determination~ Hard Work Surpass NATURAL GENIUS!
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
March 17 2016 12:50 GMT
#363
FYI

Google self-driving cars have driven 1,011,338 in autonomous mode on live streets. It has been in an accident for 12 times, and only once was it at fault. Consider that the car not only has to decide how to drive: it has to scan the space around it, identify objects (cars, humans, animals, garbage, road markings, road signs, potholes, traffic lights) and predict their behavior in real time. Starcraft isn't more complex than real life. And the car AI isn't even 1% as smart as AlphaGo.
Be polite, be professional, but have a plan to kill everybody you meet.
Ljas
Profile Joined July 2012
Finland725 Posts
March 17 2016 13:02 GMT
#364
On March 17 2016 20:33 Elentos wrote:
On March 17 2016 08:38 StarscreamG1 wrote:
On March 17 2016 08:25 Quesadilla wrote:
Seems like most people are already echoing my thoughts too. Unless the AI is limited with the APM/mechanics, no way a person could win.

Agree, but with restrictions the test wouldn't make sense.

Why not? I'm pretty sure it's easier to create an AI that can beat the best SC players with humanly impossible mechanics than it is to make an AI that can beat them strategically with human-like mechanics. But the 2nd one is way more interesting.

I have a feeling the losing humans would get salty and claim the result isn't legitimate because of the machine's inhuman APM. Until, of course, it gets lowered to a level where they can overpower it mechanically themselves. Do you think they'll ever find an APM cap all parties can agree with?
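For what it's worth, enforcing an APM cap is mechanically simple; the hard part really is agreeing on the numbers. Here is a purely illustrative sketch (a standard token-bucket rate limiter, not anything the thread or DeepMind specifies) of how a referee program could meter an agent's actions:

```python
# Illustrative token-bucket APM limiter (hypothetical sketch).
# Actions drain tokens; tokens refill at the agreed APM rate,
# with a small burst allowance for splits and drops.

class ApmLimiter:
    def __init__(self, apm_cap: float, burst: int = 10):
        self.rate = apm_cap / 60.0      # tokens refilled per second
        self.capacity = float(burst)    # max stored (burstable) actions
        self.tokens = float(burst)
        self.last = 0.0                 # time of the previous check

    def try_action(self, now: float) -> bool:
        """Return True if an action is allowed at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

cap = ApmLimiter(apm_cap=180, burst=5)
print(sum(cap.try_action(0.0) for _ in range(20)))  # only the 5-action burst passes at t=0
```

With a 180 APM cap the bucket refills at 3 tokens per second, so sustained spam is rejected while short bursts still get through; the argument would then just move to what `apm_cap` and `burst` should be.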
unholyflare
Profile Joined August 2014
42 Posts
March 17 2016 13:40 GMT
#365
On March 17 2016 20:39 deacon.frost wrote:
On March 17 2016 20:27 unholyflare wrote:
On March 13 2016 03:24 Charoisaur wrote:
On March 13 2016 03:17 lordsaul wrote:
I think people massively underestimate what perfect mechanics does to the game It depends on the rules/limitations placed on the AI, but imagine

* Every Medivac always picking up units about to be hit by a stalker and immediately dropping it for the next shot
* Marines that always maintain their range advantage on roaches
* Tanks that always target the banelings first
* Marines that always perfect split v banelings (you can find that online already)
* Weak units that always rotate out of the front line
* Medivacs healing the most important target in range, rather than the closest
* Perfect charges vs tank lines (single units charging ahead of the main attack
* ...to name a very few basic micro tricks

And while all this happens, perfect macro? Humans overestimate themselves . Computers won't even need "good" strategy to beat humans, just a large number of difficult to handle micro tricks and beastly macro. The "AI" that will need to be added is just to stop the computer glitching out against weird tricks (e.g. somehow tricking the AI into permanent retreat based on units trying to find perfect range.

Edit: Humans are actually at an advantage in Chess and Go, because they are put under far less real time pressure

people don't underestimate that. they know the AI would have to be limited for it to be a fair challenge.
the point is to show that bots are more intelligent then humans not that they have better mechanics.


This was never the point? Certainly as far as chess engines go, they are superior simply because they can brute force calculate in the way that humans can't. Humans have to "teach" engines strategy by assigning values to various strategic aspects.

The brute force calculation power of machines in chess/go I would say is roughly the equivalent to mechanics in SC2. It's part of the deal.

For the love of me, READ ABOUT THAT THING ANYTHING. It haven't brute forced the game. Bruteforcing GO is technically near impossible, taht's why the net lost one game(IIRC the score is now 3-1)


i'm perfectly aware of the limitations of brute-forcing the game. Back when chess engines relied only on brute force, they couldn't beat top humans either, because humans could outplay the engine strategically.

Nonetheless, *tactically*, in Go and Chess, engines/AI are perfect or near-perfect. And it's the tactics that are like SC2 mechanics.
todespolka
Profile Joined November 2012
221 Posts
March 17 2016 13:56 GMT
#366
On March 13 2016 08:09 Liquid`Snute wrote:
Naive. Of course AIs will be able to beat humans, even with APM/micro limitations (no mineral hax etc). It will take a lot of work to get the AI to such a stage, but a computer's game-sense and execution will be absolute next level, far beyond that of any human. Perfect memory, perfect theory. Obviously the awkward 'mindless machine' quirks will be dealt with in the development of the AI. If the computer is fast enough to process well in "broodwar real-time" with several strategic layers working together (like AlphaGo), humans won't stand a chance. It will take crazy strong computers to do this, but progress is always there. Would be very cool to watch and I hope they undertake the project


I can assure you the human brain is very good at fast, situational, incomplete-information exercises. The only way to create an AI with the same intelligence would be to create a human brain, and we are far from that. How many neurons can we simulate today? That of a bee, or two bees?

Don't forget, the AI needs to use the same interface as humans (with the exception of image recognition), but you need to simulate that it can only see one screen and the minimap. Otherwise the whole experiment misses the point; it would be like playing against a player who is connected to the computer and just has to imagine things.
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-17 15:41:07
March 17 2016 14:33 GMT
#367
On March 17 2016 22:56 todespolka wrote:
I can assure you human brain is very good at fast situational incomplete exercises.The only way to create an ai with same intelligence would be to create a human brain and we are far away from that. How many neurons can we simulate today? That of a bee or two bees?

More than we need for a computer to drive a car on real-life streets. How many bee brains combined can do that?

On March 17 2016 22:56 todespolka wrote:
Dont forget, the ai needs to use the same interface as humans with the difference of image recognition but you need to simulate that it can only see one screen and the minimap.

Why the hell would you translate computer output into a human interface, only to translate it back from the human interface into computer input? Then we should encrypt it 5 times to make it even harder for the AI!
You could even limit its "vision" to 1 square centimeter. It would just move it a million times per second, "scanning" every possible "pixel" on the map for every rendered frame. That incredible mechanical advantage will most likely lead to a flawless victory for the computer. Because, as I've said about three times before:
On March 16 2016 20:11 sh1RoKen wrote:
Starcraft units wasn't designed to be microed by computer.


On March 17 2016 22:56 todespolka wrote:
Otherwise the whole experiment misses the point, it would be like playing against a player who is connected to the computer and just has to imagine things.

That is the whole fucking point of the experiment: to create a program that can imagine things by itself. A program that can learn and become better than humans at things you can't completely calculate through. That has already been done; they just want to test it in different environments. Which you would know if you had read anything about AlphaGo before posting arguments based only on complete ignorance of the subject.
Be polite, be professional, but have a plan to kill everybody you meet.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 17 2016 15:41 GMT
#368
It's hard not to ad hominem on such people :o ^
WriterMaru
jinorazi
Profile Joined October 2004
Korea (South)4948 Posts
Last Edited: 2016-03-17 19:03:37
March 17 2016 18:59 GMT
#369
On March 17 2016 21:50 sh1RoKen wrote:
FYI

Google self-driving cars has driven 1,011,338 in autonomous mode on life streets. It has been in an accident for 12 times and only once by it's fault. Considering that car has not to only decide how to drive, it has to scan the space around it, determine the objects (cars, humans, animals, garbage, marking, road signs, pits, traffic lights) and predict object's behavior in real-time. Starcraft isn't more complex that the real life. And the car AI isn't even 1% as smart as the AlplaGo.


i dont know how complex that self-driving car is, but an AI being able to play starcraft would be like doing everything you can to disrupt the self-driving car: crash a paint truck that covers the whole road, a flood, a burning fuel depot, a tornado miles ahead, have it determine if it is safe to drive on ice, threat assessment of weirdly behaving cars and people, etc.

an AI being able to understand starcraft and act accordingly is beyond my imagination, so i would love to see this happen. making an AI to beat a professional bw player sounds like impossible technology at the moment. it's mind-boggling if google is able to pull it off within a few years. perhaps im greatly underestimating current AI technology, but i'd estimate at least 5 years, and probably closer to double digits, to accomplish this.

a simple approach of "if player does X, AI must respond with Y" will not work. it is truly being able to assess threats, determine whether information is fake or not, intentional or not, and after all that decide what to do.
age: 84 | location: california | sex: 잘함
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-17 21:33:11
March 17 2016 21:17 GMT
#370
On March 17 2016 21:50 sh1RoKen wrote:
FYI

Google self-driving cars has driven 1,011,338 in autonomous mode on life streets. It has been in an accident for 12 times and only once by it's fault. Considering that car has not to only decide how to drive, it has to scan the space around it, determine the objects (cars, humans, animals, garbage, marking, road signs, pits, traffic lights) and predict object's behavior in real-time. Starcraft isn't more complex that the real life. And the car AI isn't even 1% as smart as the AlplaGo.


the user above me already said a bit about this, but i want to add something none the less:

"it has been in an accident for 12 times" - and we are assuming, that even those people causing these accident, didn't want to make them themselfs.

a game is a total different situation: everyone wants to hit you. the car failed 12 times, even thou noone tried to hit it. that is a really bad number, not a good one.

/edit: oh and i just saw "the car has driven [number] in autonomous [...]" [number] what? seconds? hours? miles? km? lightyears? O_o
“Never assume malice when stupidity will suffice.”
Xyik
Profile Blog Joined November 2009
Canada728 Posts
March 17 2016 21:40 GMT
#371
On March 17 2016 21:50 sh1RoKen wrote:
FYI

Google self-driving cars has driven 1,011,338 in autonomous mode on life streets. It has been in an accident for 12 times and only once by it's fault. Considering that car has not to only decide how to drive, it has to scan the space around it, determine the objects (cars, humans, animals, garbage, marking, road signs, pits, traffic lights) and predict object's behavior in real-time. Starcraft isn't more complex that the real life. And the car AI isn't even 1% as smart as the AlplaGo.


not sure being able to drive a car has any correlation with being able to play Starcraft at a high level. Also, they've been working on that since 2009 and it's still not ready; in fact, other companies are beating them to the chase (Honda, Tesla, rumors about Uber as well).

mierin
Profile Joined August 2010
United States4943 Posts
Last Edited: 2016-03-17 22:11:01
March 17 2016 21:52 GMT
#372
An interesting thought I've had recently: is Starcraft truly real-time? It doesn't seem right to assume that your brain thinks completely continuously, since space itself can't truly be considered continuous. Wouldn't discretizing the game with a really small deltaT be roughly computationally equivalent to a game of Go? Maybe not equivalent; it may even be that at each point in time something like a Go-level game has to be evaluated. But at our rate of technological progress, I imagine we'll be seeing a successful Starcraft AI in our lifetimes.

EDIT: I think a reasonable deltaT could even be chosen at this point. It would likely be (the time beyond which the average human brain cannot register additional information delivered at a given framerate) + (the time an "average" mouse move physically takes). I'm assuming the time for a thought to be transmitted to the hand is negligible.
JD, Stork, Calm, Hyuk Fighting!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 17 2016 22:26 GMT
#373
On March 18 2016 06:52 mierin wrote:
An interesting thought I've had recently is...is Starcraft truly in real time? It doesn't seem right to assume that your brain thinks completely continuously since space itself can't truly be considered continuous. Wouldn't vectorizing the game with a really small deltaT be kind of computationally equivalent to the Go game? Maybe not equivalent, it may even be like at each point in time a sort of Go level game has to be evaluated, but at our rate of technology I imagine we'll be seeing a successful Starcraft AI in our lifetimes.

EDIT: I think a reasonable deltaT can even be chosen at this point. It'd likely be (the time at which the average human brain cannot register additional information delivered at a certain framerate) + (the time the "average" mouse move physically takes). I'm assuming the time for a thought to be transmitted to the hand is negligible.



Technically, StarCraft is based on frames. At each frame, the actions of all players are handled and the game logic is run, so you could theoretically analyze it this way. The main problem is the very large search space, much larger than that of Go.
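That frame-based structure can be sketched in a few lines. This is a generic toy lockstep loop (an illustration, not Blizzard's actual engine): each frame, all players' queued actions are applied, then the simulation advances one discrete step.

```python
# Toy lockstep loop: a generic illustration of frame-based RTS timing,
# not Blizzard's actual engine.

def run_game(players, game_state, max_frames=100_000):
    for frame in range(max_frames):
        for player in players:                 # 1. handle every player's actions
            for action in player.actions_for(frame):
                game_state.apply(action)
        game_state.tick()                      # 2. run the game logic one step
        if game_state.finished():
            return frame                       # frame on which the game ended
    return max_frames

# Minimal stand-ins so the loop actually runs: a "base" with 100 hp
# and one attacker dealing 1 damage per frame.
class Base:
    def __init__(self): self.hp = 100
    def apply(self, damage): self.hp -= damage
    def tick(self): pass
    def finished(self): return self.hp <= 0

class Attacker:
    def actions_for(self, frame): return [1]   # one attack action per frame

print(run_game([Attacker()], Base()))  # game ends on frame 99
```

Seen this way, "real time" is still a discrete game with a forced move every frame; the trouble is the per-frame action space (every unit times every order times every map coordinate), which dwarfs Go's roughly 250 legal moves per position.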
If you cannot win with 100 apm, win with 100 cpm.
snakeeyez
Profile Joined May 2011
United States1231 Posts
March 18 2016 00:24 GMT
#374
Starcraft is a very hard game and probably a good choice for them, but they will have more technical difficulties with a video game versus board games. With the resources and knowledge they have, I'm sure their bot will be on a level and scale far beyond any Starcraft AI so far. It's going to be a tall order, though, especially if you consider all 3 races, the huge variety of maps, and the slight luck involved with scout timings and such. Not at all easy; to be honest it's probably one of the hardest games left.
They are running out of hard games; Go was one of the hardest that exists. People had been trying to win at it for something like 30 years, and I am pretty amazed they were able to do it. Their AI is truly learning in almost every sense of the word; it's just confined to one specific domain still.
Veldril
Profile Joined August 2010
Thailand1817 Posts
Last Edited: 2016-03-18 00:39:16
March 18 2016 00:38 GMT
#375
I have a feeling that most people here either underestimate the complexity of Go (because it's not well known in the West), overestimate the complexity of Starcraft (because they don't understand heuristics or bias), or don't understand how the new AI technology works (because they haven't read the Nature paper yet).

Out of curiosity, how many people here have read or skimmed the Nature paper that describes how AlphaGo works?
Without love, we can't see anything. Without love, the truth can't be seen. - Umineko no Naku Koro Ni
necrosexy
Profile Joined March 2011
451 Posts
March 18 2016 05:16 GMT
#376
On March 18 2016 09:38 Veldril wrote:
I have a feeling that most people are either underestimate the complexity of Go (due to it's not being well-known in the west), overestimate the complexity of Starcraft (due to not understanding heuristic or bias), or not understand how the new AI technology work (due to have not read the Nature's paper yet).

Out of curiosity, how many people here have read or skim through the Nature's paper that describe how Alphago works?

http://www.teamliquid.net/forum/viewpost.php?post_id=25502046

Read page 2 section A of the pdf
Mendelfist
Profile Joined September 2010
Sweden356 Posts
March 18 2016 06:07 GMT
#377
On March 18 2016 14:16 necrosexy wrote:
On March 18 2016 09:38 Veldril wrote:
I have a feeling that most people are either underestimate the complexity of Go (due to it's not being well-known in the west), overestimate the complexity of Starcraft (due to not understanding heuristic or bias), or not understand how the new AI technology work (due to have not read the Nature's paper yet).

Out of curiosity, how many people here have read or skim through the Nature's paper that describe how Alphago works?

http://www.teamliquid.net/forum/viewpost.php?post_id=25502046

Read page 2 section A of the pdf

You are confusing state space with complexity. What's the state space for throwing a basketball in real life? That task would be utterly impossible for an AI, right?
Gwavajuice
Profile Joined June 2014
France1810 Posts
March 18 2016 11:55 GMT
#378
Hey guys, why are you even arguing? You totally misread Boxer (and Flash). What they simply said was :

- "hey there is plenty of money in these show matches, please pick me!"
Dear INno and all the former STX boys.
necrosexy
Profile Joined March 2011
451 Posts
March 18 2016 13:14 GMT
#379
On March 18 2016 15:07 Mendelfist wrote:
On March 18 2016 14:16 necrosexy wrote:
On March 18 2016 09:38 Veldril wrote:
I have a feeling that most people are either underestimate the complexity of Go (due to it's not being well-known in the west), overestimate the complexity of Starcraft (due to not understanding heuristic or bias), or not understand how the new AI technology work (due to have not read the Nature's paper yet).

Out of curiosity, how many people here have read or skim through the Nature's paper that describe how Alphago works?

http://www.teamliquid.net/forum/viewpost.php?post_id=25502046

Read page 2 section A of the pdf

You are confusing state space with complexity. What's the state space for throwing a basket ball in real life? That would be utterly impossible to do for an AI, right?

Didn't realize there was an AI that can beat NBA players!
FFW_Rude
Profile Blog Joined November 2010
France10201 Posts
March 18 2016 13:32 GMT
#380
Why is this thread still going? I'm sure it was stated on every page that no AI will make a dragoon go up a ramp. Nor a goliath (ultralisks can apply)
#1 KT Rolster fanboy. KT BEST KT ! Hail to KT playoffs Zergs ! Unofficial french translator for SlayerS_`Boxer` biography "Crazy as me".
Mendelfist
Profile Joined September 2010
Sweden356 Posts
March 18 2016 13:49 GMT
#381
On March 18 2016 22:14 necrosexy wrote:
On March 18 2016 15:07 Mendelfist wrote:
On March 18 2016 14:16 necrosexy wrote:
On March 18 2016 09:38 Veldril wrote:
I have a feeling that most people are either underestimate the complexity of Go (due to it's not being well-known in the west), overestimate the complexity of Starcraft (due to not understanding heuristic or bias), or not understand how the new AI technology work (due to have not read the Nature's paper yet).

Out of curiosity, how many people here have read or skim through the Nature's paper that describe how Alphago works?

http://www.teamliquid.net/forum/viewpost.php?post_id=25502046

Read page 2 section A of the pdf

You are confusing state space with complexity. What's the state space for throwing a basket ball in real life? That would be utterly impossible to do for an AI, right?

Didn't realize there was an AI that can beat NBA players!

I didn't mention beating NBA players. I said "throwing a basketball". How large do you think the state space for throwing a basketball is? How many discrete situations can occur, and what relevance do you think that has for how hard it is to do? I'm trying to tell you that the state space size for Starcraft is a red herring.
necrosexy
Profile Joined March 2011
451 Posts
March 19 2016 01:31 GMT
#382
On March 18 2016 22:49 Mendelfist wrote:
On March 18 2016 22:14 necrosexy wrote:
On March 18 2016 15:07 Mendelfist wrote:
On March 18 2016 14:16 necrosexy wrote:
On March 18 2016 09:38 Veldril wrote:
I have a feeling that most people are either underestimate the complexity of Go (due to it's not being well-known in the west), overestimate the complexity of Starcraft (due to not understanding heuristic or bias), or not understand how the new AI technology work (due to have not read the Nature's paper yet).

Out of curiosity, how many people here have read or skim through the Nature's paper that describe how Alphago works?

http://www.teamliquid.net/forum/viewpost.php?post_id=25502046

Read page 2 section A of the pdf

You are confusing state space with complexity. What's the state space for throwing a basket ball in real life? That would be utterly impossible to do for an AI, right?

Didn't realize there was an AI that can beat NBA players!

I didn't mention beating NBA players. I said "throwing a basket ball". How large do you think the state space is for throwing a basket ball? How many discrete situations can occur, and what relevance do you think that has for how hard it is to do? I'm trying to tell you that the state space size for Starcraft is a red herring.

I was joking, because your analogy is terrible (e.g., the goal is static, and there is complete map information).

State space is a rough measure of complexity. Of course it's not comprehensive (notice it's merely the first thing discussed in the report I linked), but the disparity between SC and chess/Go is absurd, even if you take only a fraction of it. And bear in mind the estimates excluded other factors that would have made it even worse!

Mendelfist
Profile Joined September 2010
Sweden356 Posts
March 19 2016 07:56 GMT
#383
On March 19 2016 10:31 necrosexy wrote:
State space is a rough measure of complexity.

No it isn't. Trying to put a number on Starcraft's state space size is ridiculous, as it is for ANY real-world problem. It doesn't tell you anything about how hard the problem is, because for all practical purposes the number is always infinite. Starcraft is more similar to real-world problems than Go is, which I'm sure is why DeepMind thinks it's an interesting problem. For continuous problems you will have to find some measure other than state space size.
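To make the contrast concrete: for a discrete board you can at least write down a naive configuration count. The bounds below are the standard loose ones (every point or square treated independently, legality ignored), so they overcount, but the scale is right.

```python
import math

# Naive upper bounds: every point/square independently in one of k states.
go_bound = 3 ** (19 * 19)    # each intersection: empty, black, or white
chess_bound = 13 ** 64       # each square: empty or one of 12 piece kinds (loose)

print(f"Go:    about 10^{int(math.log10(go_bound))}")    # about 10^172
print(f"Chess: about 10^{int(math.log10(chess_bound))}") # about 10^71
```

A thrown basketball, by contrast, lives in a continuous space of positions and velocities: any "state count" depends entirely on how finely you discretize, which is exactly the sense in which the raw number is a red herring for Starcraft too.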
mechengineer123
Profile Joined March 2013
Ukraine711 Posts
March 19 2016 23:55 GMT
#384
They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance.
StarStruck
Profile Blog Joined April 2010
25339 Posts
March 20 2016 00:56 GMT
#385
On March 20 2016 08:55 mechengineer123 wrote:
They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance.


I know some players who had 900+ APM playing BW and they suck. Lots of people have tried to make really hard AIs for BW, and they were still beatable. I'd like to see one advanced enough to even come close at BW.
Poopi
Profile Blog Joined November 2010
France12770 Posts
March 20 2016 03:45 GMT
#386
On March 20 2016 08:55 mechengineer123 wrote:
They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance.

Humans would keep destroying AI without a single doubt.

Very convincing right?
WriterMaru
stapla05
Profile Joined July 2011
Australia67 Posts
Last Edited: 2016-03-20 19:07:20
March 20 2016 07:36 GMT
#387
The AI has advantages and disadvantages versus a human. For one, its micro, build orders and macro will be perfect, so it won't make mistakes; in turn it will be very effective at whatever build it runs, and the majority of players would start losing. Maybe not the top-level players, but the majority of players will lose to it. Since this is a very expensive project, an AI coded at that level could pull off tactics that even humans can't perform, and once it starts to adopt those tactics there is no hope. You could have put the research into the human brain, which might have benefited us more; whether it's worth having an AI win at a game while we have large issues around the world, I don't know. What happened with Deep Blue will most probably happen again: it will find the solution and make zero errors, whereas all humans make errors, because that's what humans are like. That said, it will be coded by humans, so it depends on who codes it. Still, I'm on the fence with this one; it could go either way, as some of the top players are very intelligent people and I'm sure they have something up their sleeves.
http://www.rts-sanctuary.com/Dawn-Of-War/showuser=96956
Piste
Profile Blog Joined July 2006
6174 Posts
March 21 2016 03:52 GMT
#388
On March 20 2016 16:36 stapla05 wrote:
The AI has advantages and disadvantages versus a human. For one, its micro, build orders and macro will be perfect, so it won't make mistakes; in turn it will be very effective at whatever build it runs, and the majority of players would start losing. Maybe not the top-level players, but the majority of players will lose to it. Since this is a very expensive project, an AI coded at that level could pull off tactics that even humans can't perform, and once it starts to adopt those tactics there is no hope. You could have put the research into the human brain, which might have benefited us more; whether it's worth having an AI win at a game while we have large issues around the world, I don't know. What happened with Deep Blue will most probably happen again: it will find the solution and make zero errors, whereas all humans make errors, because that's what humans are like. That said, it will be coded by humans, so it depends on who codes it. Still, I'm on the fence with this one; it could go either way, as some of the top players are very intelligent people and I'm sure they have something up their sleeves.

I think you're missing the point of AI development. The point is making an AI capable of learning and adapting, and making decisions based on that. After a certain point it can start learning about more complex things than simple computer games. They're not trying to make an AI that concentrates on a single game.
Serendib
Profile Joined May 2011
67 Posts
Last Edited: 2016-03-21 06:30:57
March 21 2016 06:29 GMT
#389
Hey Everyone. I'm Dave Churchill and I organize and run the AIIDE Starcraft AI Competition, and I also wrote UAlbertaBot. I've noticed a lot of misinformation in this thread, so rather than reply to everything individually I decided to take the time to write a detailed history of Starcraft AI Competitions for those who are interested, you can find it here:

http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/history.shtml

In answer to Boxer's claim: I think it is foolish to say that AI will *never* beat humans at Starcraft, however I feel that this is still quite a few years away. Maybe 5-10 years (unless DeepMind is able to do something miraculous akin to AlphaGo, but that seems unlikely). I also believe that the first to beat expert humans will probably end up heavily abusing micromanagement to do so, so then we will probably enter a philosophical debate about what is 'fair' when it comes to dexterity based games.

Also, most people seem to be confused as to the objective that most of us in the RTS AI field have. Most of us are not really trying to make the best Starcraft bots possible, but instead to come up with new AI algorithms for solving hard problems, and then use Starcraft as a test-bed for those algorithms. We could have much stronger bots if we spent countless hours hard-coding strategies and rules, but that isn't very interesting from a true artificial intelligence point of view.

Thanks for all the discussion, it's great to see so many people interested in the topic!
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 21 2016 15:51 GMT
#390
Massive amounts of misinformation in this thread indeed, especially when it comes to deep reinforcement learning.

To elaborate on the current state of RTS AI, this recent article is well worth your time :
'RTS AI : Problems and techniques'
richoux.fr

"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
Hotshot
Profile Joined November 2004
Canada184 Posts
March 21 2016 19:59 GMT
#391
Obviously an AI could be coded to easily be better than any human. The question is how hard it is to create that AI.

I feel creating this is more along the lines of 'lots of work', compared to an AI for something like chess or Go, which requires a lot of thought and knowledge.

For example, in SC there are builds that you can use on certain maps against certain races. Coding an AI to use those builds would be rather easy. Then you could code in functions that know how to adapt strategy based on certain situations (for example a one-base all-in, or if someone cannon rushes you at X minutes at X spot on X map, there is a way to handle it as efficiently as possible). Then you code in functions knowing when to engage, when to run, what positions to fortify, etc. Then you code functions that understand the map. Then you can code micro functions; I'd imagine probe/drone micro would be so effective that human players would need to send 2 workers to deny 1 from permanently harassing, or blink micro would be perfect (even with lowish APM). Then you code functions that abuse the fact humans can't multitask as much (hitting many different spots at once). You can then even write functions covering how certain people play, expecting certain strategies, knowing what they struggle with (aka: the AI would never forget). Then you can even write a function that can parse tens of thousands of games and better understand opponents and strategies... etc.

So overall, I feel an SC2 AI would just take a lot of man-hours. Unlike a game like chess or Go, where each move exponentially increases the possibilities (so the AI needs to be smart enough to trim out all the obvious bad moves), in a game like SC2 horribly bad moves are much more obvious.

If there was a simple way to use something like C++ to code an AI (hooking into the game and getting the data in a nice clean interface), I am sure more people (like myself) would mess around and build Diamond/Masters-level AIs.
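The layered, rule-based design described above can be sketched in a few lines. This is only a toy illustration of the idea, not a real bot: every map name, build item, and reaction below is invented, and a real implementation would issue commands through an API such as BWAPI rather than return strings.

```python
# Hedged sketch of a rule-based bot layer: hard-coded openings per
# map/matchup plus canned reactions to scouted threats. All names
# here are hypothetical, for illustration only.

OPENINGS = {
    ("Fighting Spirit", "PvZ"): ["pylon", "forge", "nexus", "cannon"],
    ("Fighting Spirit", "PvT"): ["pylon", "gateway", "assimilator", "cybernetics_core"],
}
DEFAULT_OPENING = ["pylon", "gateway", "gateway", "zealot"]

REACTIONS = {
    "cannon_rush_spotted": "pull_probes_and_kill_pylon",
    "one_base_all_in_spotted": "cancel_expansion_and_defend",
}

def choose_opening(map_name, matchup):
    """Pick the scripted build for this map/matchup, else a safe default."""
    return OPENINGS.get((map_name, matchup), DEFAULT_OPENING)

def react(scouting_events):
    """Map scouted threats to canned responses; first match wins."""
    for event in scouting_events:
        if event in REACTIONS:
            return REACTIONS[event]
    return "follow_build"
```

The appeal of this style is exactly what the post says: each layer is 'lots of work' but conceptually simple, and the adaptation logic is just a growing table of situations and responses.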
KT_Elwood
Profile Joined July 2015
Germany858 Posts
March 21 2016 20:32 GMT
#392
BoxeR: "AlphaGo won't beat humans in StarCraft"

[image loading]

of course not BoxeR, it can only play Go.
"First he eats our dogs, and then he taxes the penguins... Donald Trump truly is the Donald Trump of our generation. " -DPB
MadMod
Profile Joined May 2011
Norway4 Posts
March 21 2016 22:10 GMT
#393
On March 21 2016 15:29 Serendib wrote:
…
http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/history.shtml
…

Thanks for the article, I was hoping someone creating bots would post in this thread.

After reading the article and the paper posted just afterwards I have some questions:
  • I am still wondering how the theory-based bots fare versus the more heuristic-based ones; are there any bots that use theory-based approaches on the strategic level?
  • There was some mention of bots learning from replays; do you know if this was successful?

I could track this down myself, but then I would have to wade through a lot of hard to interpret papers. So I am hoping for an answer here. Thanks again for the article.
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 22 2016 00:58 GMT
#394
On March 22 2016 07:10 MadMod wrote:
On March 21 2016 15:29 Serendib wrote:
…
http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/history.shtml
…

Thanks for the article, I was hoping someone creating bots would post in this thread.

After reading the article and the paper posted just afterwards I have some questions:
  • I am still wondering how the theory-based bots fare versus the more heuristic-based ones; are there any bots that use theory-based approaches on the strategic level?
  • There was some mention of bots learning from replays; do you know if this was successful?

I could track this down myself, but then I would have to wade through a lot of hard to interpret papers. So I am hoping for an answer here. Thanks again for the article.



If by theory based you mean complex algorithms:
My bot uses pathfinding algorithms like A* for things like Wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs bot a heuristic based approach will still get you further at the moment.

There were some papers about learning from replay, but no top bot that I know of used replay analysis.
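For the curious, the A* mentioned above looks roughly like this on a tile grid (the kind of search usable for wall-building or worker pathing). This is a generic textbook sketch, not LetaBot's implementation.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[y][x] == 1 means blocked.
    start/goal are (x, y) tuples. Returns a list of cells, or None."""
    def h(p):
        # Manhattan distance: admissible for 4-connected movement
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries: (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = [cell]
            while cell in came_from:     # walk parents back to start
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        if g > best_g.get(cell, float("inf")):
            continue                     # stale heap entry, skip
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    came_from[nxt] = cell
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                          # goal unreachable
```

In a real bot the grid would come from the map's walkability data, and the cost function would be weighted by threat (which is where influence maps plug in).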
If you cannot win with 100 apm, win with 100 cpm.
Hotshot
Profile Joined November 2004
Canada184 Posts
March 22 2016 03:15 GMT
#395
On March 22 2016 09:58 LetaBot wrote:
…



If by theory based you mean complex algorithms:
My bot uses pathfinding algorithms like A* for things like Wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs bot a heuristic based approach will still get you further at the moment.

There were some papers about learning from replay, but no top bot that I know of used replay analysis.


How are you doing this? All 100% using the editor or using third party tools?
vult
Profile Blog Joined February 2012
United States9400 Posts
Last Edited: 2016-03-22 12:21:55
March 22 2016 12:21 GMT
#396
AlphaGo discussed in The Daily Show with Trevor Noah segment last night (March 21st) --
http://www.cc.com/full-episodes/crzxbs/the-daily-show-with-trevor-noah-march-21--2016---shaka-senghor-season-21-ep-21081 -- second segment.

Starcraft also mentioned.
I used to play random, but for you I play very specifically.
MarlieChurphy
Profile Blog Joined January 2013
United States2063 Posts
Last Edited: 2016-03-22 13:02:05
March 22 2016 12:58 GMT
#397
On March 22 2016 21:21 vult wrote:
AlphaGo discussed in The Daily Show with Trevor Noah segment last night (March 21st) --
http://www.cc.com/full-episodes/crzxbs/the-daily-show-with-trevor-noah-march-21--2016---shaka-senghor-season-21-ep-21081 -- second segment.

Starcraft also mentioned.



Segment starts after first commercial about 10 min in. Not really a mention, just use SC as part of his joke saying he cant even beat computers in it or FIFA
RIP SPOR 11/24/11 NEVAR FORGET
MadMod
Profile Joined May 2011
Norway4 Posts
March 22 2016 20:36 GMT
#398
On March 22 2016 09:58 LetaBot wrote:
…



If by theory based you mean complex algorithms:
My bot uses pathfinding algorithms like A* for things like Wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs bot a heuristic based approach will still get you further at the moment.

There were some papers about learning from replay, but no top bot that I know of used replay analysis.


That is very interesting. To create a good search space for the MCTS seems extremely hard. It would be awesome to see a very adaptable bot though.

I get the feeling from your answer that the current more adaptable bots play better against humans compared to the less adaptable ones, even though they are not the best in bot vs bot; is this true?
mierin
Profile Joined August 2010
United States4943 Posts
March 22 2016 20:50 GMT
#399
On March 19 2016 16:56 Mendelfist wrote:
On March 19 2016 10:31 necrosexy wrote:
State space is a rough measure of complexity.

For continuous problems you will have to find another number than state space.


I kind of wonder if there is such a thing as a continuous problem...at least for playing games based on thought.
JD, Stork, Calm, Hyuk Fighting!
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 22 2016 20:55 GMT
#400
On March 22 2016 12:15 Hotshot wrote:
…


How are you doing this? All 100% using the editor or using third party tools?



This is for Brood War. I use the Brood War Application Programming Interface (BWAPI).





On March 23 2016 05:36 MadMod wrote:
…


That is very interesting. To create a good search space for the MCTS seems extremely hard. It would be awesome to see a very adaptable bot though.

I get the feeling from your answer that the current more adaptable bots play better against humans compared to the less adaptable ones, even though they are not the best in bot vs bot; is this true?


Yeah, you need to reduce the search space to get good results with MCTS.

For now, the bots that are capable of executing one strategy particularly well have a better chance of defeating a human player. But in a Bo5 the more adaptable one stands a better chance.
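To make the MCTS idea concrete, here is a minimal, self-contained UCT sketch on a toy game (Nim: take 1 or 2 stones per turn, taking the last stone wins) standing in for a reduced strategic search space. This is an assumption-laden illustration, not LetaBot's code.

```python
import math
import random

# Toy domain standing in for a pruned strategic decision space:
# Nim with n stones, take 1 or 2 per turn, last stone taken wins.

def moves(n):
    return [m for m in (1, 2) if m <= n]

def playout(n):
    """Random playout from n stones. True if the player who just
    moved into this state ends up winning."""
    just_moved_wins = True        # n == 0: the mover took the last stone
    while n > 0:
        n -= random.choice(moves(n))
        just_moved_wins = not just_moved_wins
    return just_moved_wins

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children = []
        self.wins = 0             # wins for the player who moved into this node
        self.visits = 0
        self.untried = moves(stones)

def uct_search(stones, iters=3000, c=1.4):
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCB1 while fully expanded
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one untried child
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation + 4. Backpropagation (flip perspective per level)
        win_for_mover = playout(node.stones)
        while node is not None:
            node.visits += 1
            node.wins += win_for_mover
            win_for_mover = not win_for_mover
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

The whole difficulty in an RTS, as the posts above note, is that "moves" and "states" are nowhere near this clean; the abstraction that maps game states into a search space this small is where the real work lives.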

If you cannot win with 100 apm, win with 100 cpm.
Hotshot
Profile Joined November 2004
Canada184 Posts
March 24 2016 23:19 GMT
#401
That is interesting. I watched a game from 2015, a bot vs. a Russian pro gamer... It looked interesting, but I felt I could definitely code something better/stronger if I invested enough time. I saw so many AI quirks that bugged me.

I am tempted to take a peek at coding something myself.
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
March 25 2016 00:41 GMT
#402
On March 25 2016 08:19 Hotshot wrote:
That is interesting. I watched a game from 2015, a bot vs. a Russian pro gamer... It looked interesting, but I felt I could definitely code something better/stronger if I invested enough time. I saw so many AI quirks that bugged me.

I am tempted to take a peek at coding something myself.


http://www.teamliquid.net/blogs/485544-intro-to-scbw-ai-development
If you cannot win with 100 apm, win with 100 cpm.
Musicus
Profile Joined August 2011
Germany23576 Posts
March 27 2016 08:30 GMT
#403
So it seems like this will happen sooner or later for sure now. Google already contacted Blizzard, as confirmed by Tim Morten.

https://www.reddit.com/r/starcraft/comments/4c4vqr/from_wcs_shanghai_tim_morten_confirms_that_the/

Might actually be sc2 instead of BW though.
Maru and Serral are probably top 5.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 12:49 GMT
#404
Chess, check.
Go, check.
Starcraft, incoming
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 12:53:07
May 26 2017 12:52 GMT
#405
"Google is reportedly considering using a robot arm for its AI in order to even the odds with a human who will have to use a keyboard and mouse during the match."

Wow, if that ain't confidence I dunno what is.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Nebuchad
Profile Blog Joined December 2012
Switzerland12070 Posts
May 26 2017 13:03 GMT
#406
On May 26 2017 21:52 mishimaBeef wrote:
"Google is reportedly considering using a robot arm for its AI in order to even the odds with a human who will have to use a keyboard and mouse during the match."

Wow, if that ain't confidence I dunno what is.


I mean if you have infinite APM and as such absolutely perfect splits can't you MarineKing your way into every TvZ with minimal intelligence?
"It is capitalism that is incentivizing me to lazily explain this to you while at work because I am not rewarded for generating additional value."
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 13:05 GMT
#407
Oh, forgot to add these interesting quotes from recent articles:

"Players have praised the technology’s ability to make unorthodox moves and challenge assumptions core to a game that draws on thousands of years of tradition."

"This time, Mr. Hassabis said, a new approach allowed AlphaGo to learn more by playing games against itself."
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 14:15 GMT
#408
So, just to put forth another interesting thought.

If the AI can play games against itself in Starcraft, it can probably do so at blazing speed as well (16x replay speed? that still might be a snail's pace given modern microprocessor speeds). Of course this would just be for the 'learning phase'; when it faces humans it can be placed back on its limitation handicap (the robotics interface, APM cap, etc.)
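As a toy illustration of the self-play idea, here is a minimal sketch (not DeepMind's method) where two copies of the same learner improve purely by playing each other far faster than real time. Rock-paper-scissors stands in for the game, and regret matching stands in for whatever learning rule the real system would use; the average strategies converge toward the equilibrium.

```python
import random

PAYOFF = [[0, -1, 1],    # rock     vs (rock, paper, scissors)
          [1, 0, -1],    # paper
          [-1, 1, 0]]    # scissors

class RegretMatcher:
    def __init__(self):
        self.regret = [0.0, 0.0, 0.0]
        self.strategy_sum = [0.0, 0.0, 0.0]

    def strategy(self):
        pos = [max(r, 0.0) for r in self.regret]
        total = sum(pos)
        return [p / total for p in pos] if total > 0 else [1 / 3] * 3

    def act(self):
        s = self.strategy()
        for i in range(3):
            self.strategy_sum[i] += s[i]   # running average -> equilibrium
        return random.choices(range(3), weights=s)[0]

    def observe(self, my_move, opp_move):
        got = PAYOFF[my_move][opp_move]
        for a in range(3):                 # regret of not having played a
            self.regret[a] += PAYOFF[a][opp_move] - got

    def average_strategy(self):
        total = sum(self.strategy_sum)
        return [s / total for s in self.strategy_sum]

def self_play(rounds=20000):
    """Two copies of the same learner train against each other."""
    a, b = RegretMatcher(), RegretMatcher()
    for _ in range(rounds):
        ma, mb = a.act(), b.act()
        a.observe(ma, mb)
        b.observe(mb, ma)
    return a.average_strategy()
```

Twenty thousand rounds of this run in well under a second, which is the point of the post above: a self-playing learner gets experience at a rate no human sparring schedule can match.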
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
VioleTAK
Profile Joined July 2006
4315 Posts
May 26 2017 14:19 GMT
#409
If we compare Starcraft to Go, let's think about the Fuseki and Joseki at the beginning of the game, the middle game, and then the endgame Yose.

At the beginning of the game I believe AlphaGo could at some point be so good with micro, like was suggested as an example in this thread: Muta harass, on levels that humans never faced before.

In Go, AlphaGo kind of "becomes stronger" as the game progresses, which is basically saying that humans become more limited, while it can come up with unthinkable moves. But in BW I believe the longer the game lasts, the more AlphaGo would get outshined by humans, for various reasons, some of which Boxer mentioned.

Question is if humans could even reach an extended game against that perfect micro.

Another interesting point is that AlphaGo started to get so good at Go when it played itself, millions and millions of times. Can it really do that with Starcraft? Not to mention the various maps.

Go is infinitely more complex and deep than any game, including Starcraft, of course; there's no comparison there. But it is an entirely different story to reach a point where it teaches itself BW at a pro+ level. I think they have quite a challenge ahead if they really intend to continue, and I hope they do.

Starcraft feels a lot more like "real" war than Go, of course, and I also think that if AlphaGo became better than top pros then... many countries would gain interest in developing A.I. for military use. It sounds ridiculous that Starcraft/AlphaGo could initiate such a thing, but it's not that farfetched.

Anyway, Boxer is pure <3 :-)
Every fan of Starcraft is a fan of Lim Yo Hwan by association
todespolka
Profile Joined November 2012
221 Posts
Last Edited: 2017-05-26 14:43:28
May 26 2017 14:20 GMT
#410
On March 13 2016 02:38 Axieoqu wrote:
I would assume Starcraft would be even easier for the AI because mechanics are so important. Just consider how well the simple blink/micro bots work.



It has to issue commands and receive information the same way as a human. The APM is probably also capped, because you want to know if an AI is able to do as well as a human with a limited amount of APM. It has one advantage: it doesn't get tired.

An automaton can already beat a human in a micro battle; that is not the goal of AI research (search for "sc2 automaton micro battle").


It is easy to determine what a good move is in Go and in chess. But StarCraft has no single perfect move; you have many good moves.
Another difficulty is that StarCraft is a fast game. The human brain is made for fast things. An example: the human brain is able to recognize all objects in a picture at once (roughly). This is possible because neurons can all work at once. Scientists say that the brain recognizes a face in only 100 steps. Can you imagine a piece of code that does that?

Our brain is nothing other than a very complex computer, and one day we will be able to copy and improve it. But if that day comes we will also enhance our own brains, and maybe link them with the supercomputer. Who knows!
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 14:24 GMT
#411
On May 26 2017 23:20 todespolka wrote:
…
It has to issue commands and receive information the same way as a human. The apm is probably also capped, because you want to know if an ai is able to do as well as a human with a limited amount of apm.

An ai can already beat a human in a micro battle if it can work directly with the game engine and if it has unlimited apm (look for sc2 automaton micro battle).


If AI ends up succeeding, we can certainly anticipate all sorts of allegations and complaints about the AI cheating along those lines.
Writersator arepo tenet opera rotas
Erik.TheRed
Profile Blog Joined May 2010
United States1655 Posts
Last Edited: 2017-05-26 14:36:17
May 26 2017 14:35 GMT
#412
On May 26 2017 23:20 todespolka wrote:
…
It has to issue commands and receive information the same way as a human. The apm is probably also capped, because you want to know if an ai is able to do as well as a human with a limited amount of apm.

An ai can already beat a human in a micro battle if it can work directly with the game engine and if it has unlimited apm (look for sc2 automaton micro battle).


Yup, capping APM/ imposing some physical limitations will also force the AI to prioritize its 'attention' during a match. I would argue that the game of Starcraft (or any RTS) is contingent on that limitation. It will be fascinating to see how an attention-limited AI will adjust to the dynamics of a game where a big part of high-level play is trying to distract the other player.
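One hedged sketch of what that forced prioritization could look like: convert the APM cap into a per-frame command budget and issue only the highest-priority candidate commands each frame, dropping the rest. The frame rate, numbers, and action names below are invented for illustration.

```python
import heapq

FRAMES_PER_MIN = 24 * 60   # ~24 game frames per second, BW-style timing

def actions_per_frame(apm_cap):
    """Turn an APM cap into a (fractional) per-frame command budget."""
    return apm_cap / FRAMES_PER_MIN

def schedule(candidates, budget):
    """candidates: (priority, action) pairs, higher = more urgent.
    Issue at most `budget` commands this frame; everything else is
    dropped, which is exactly the 'attention' trade-off being forced."""
    return [action for _, action in heapq.nlargest(int(budget), candidates)]
```

For example, with a budget of 2 and candidates [(9, 'split_marines'), (5, 'inject_larva'), (2, 'move_camera'), (7, 'micro_drop')], only the split and the drop micro go out this frame; the macro cycle waits, just as it would for a distracted human.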
"See you space cowboy"
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 14:45 GMT
#413
What would an "attention-limited AI" be? A computer hooked up to several sensors will always accurately report their readings, unlike humans, who have to focus and can't consciously parallel-process too many things (subconsciously there may be some processing going on).

Also, note that whatever parameters you decide on to make it "fair", this AI will never get fatigued or make mental mistakes. Unless of course you want to include code for that sort of thing.

IMO, it's not a matter of *if*, but *when*.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 14:46 GMT
#414
On May 26 2017 23:35 Erik.TheRed wrote:
…


Yup, capping APM/ imposing some physical limitations will also force the AI to prioritize its 'attention' during a match. I would argue that the game of Starcraft (or any RTS) is contingent on that limitation. It will be fascinating to see how an attention-limited AI will adjust to the dynamics of a game where a big part of high-level play is trying to distract the other player.


That's an interesting thought. I can definitely see how the training methods of AI would help it choose some wickedly smart dropship vectors, or really clever and unexpected corsair/reaver micro, things like that.

Where I see it having humongous problems is with higher level reasoning. So like today, I was watching Soulkey's stream and he was playing a ZvT where intuitively I thought to myself that there should definitely be Science Vessels out, but there were none, so there must be drops coming. I knew that, because the only thing that would delay the vessels is if dropships were being built instead of vessels. And just as I thought that, two scourge popped for Soulkey and he put them on patrol on the exposed path to his main. He was thinking the same thing. How would an AI make that determination? I'm not saying it can't, but it's a very high level inference+accompanying action.

And god help it, I will admit defeat to the machines if it can figure out something like Nal_Ra's arbiter hallucination win on the first go (without detection obviously).
Writersator arepo tenet opera rotas
todespolka
Profile Joined November 2012
221 Posts
May 26 2017 14:47 GMT
#415
On May 26 2017 23:35 Erik.TheRed wrote:
…


Yup, capping APM/ imposing some physical limitations will also force the AI to prioritize its 'attention' during a match. I would argue that the game of Starcraft (or any RTS) is contingent on that limitation. It will be fascinating to see how an attention-limited AI will adjust to the dynamics of a game where a big part of high-level play is trying to distract the other player.


This is the interesting part. In addition, it also has to know where to position units best and when, how to scout, understand the map, know how to take a risk, and many other things.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 14:58:31
May 26 2017 14:55 GMT
#416
If the AI is capable of playing, let's say, 1000 games against itself per day, I think you really need to think about the implications of this.

Not only is it playing 1000 games, it is playing them at its best ability. Progamers these days may play 40 games a day, but first, they are not systematically (with machine precision) developing their skill set in relation to their strategy and mechanics, and second, their opponents aren't playing at absolute top-tier performance for all 40 games. The machine, playing against itself, will always be testing itself against the top-performing strategy and mechanics, executed with machine precision.

These things considered, the rate of growth of the AI is unmatchable by humans.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 15:09 GMT
#417
On May 26 2017 23:55 mishimaBeef wrote:
If the AI is capable of playing, let's say, 1000 games against itself per day, I think you really need to think about the implications of this.

Not only is it playing 1000 games, it is playing them at their best ability. Progamers these days may play 40 games a day but, first they are not systematically (with machine precision) developing their skill set in relation to their strategy and mechanics, and second their opponents aren't playing at absolute top tier performance for all 40 games. The machine if playing against itself will always be testing itself against the top performing strategy and mechanics, executed at machine precision.

These things considered, the rate of growth of the AI is insurmountable by humans.


I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make from a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data; that's an empirical question. But given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. And that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.
Writersator arepo tenet opera rotas
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 15:14:54
May 26 2017 15:13 GMT
#418
The reason I brought up mechanics is that you can be making false inferences about strategic elements of the game simply because your strategy or tactics happened to work against a player that wasn't executing at top mechanical level.

In the AI's case, all the learning it does with regard to strategy is correct, not muddled by an opponent who executed poorly and made you think your strategy was sound in some way.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 15:29 GMT
#419
On May 27 2017 00:13 mishimaBeef wrote:
The reason I brought up mechanics is that you can be making false inferences about strategic elements of the game simply because your strategy or tactics happened to work against a player that wasn't executing at top mechanical level.

In the AI's case, all the learning it does with regard to strategy is correct, not muddled by an opponent who executed poorly and made you think your strategy was sound in some way.


I don't think there's any question that the AI will learn the ways people play, and quickly. You'd have to show it only 100 games of ZvT, if not fewer, to figure out that it should have mutalisks by the 7-minute mark and that they should be doing stuff. But that's not at all what's impressive.

What's impressive is ee han timing. What's impressive is knowing when you had an advantage and where to press that advantage. I have a very hard time believing that Jaedong knew Stork was weak when he went for the muta timing attack because he'd seen 1000 games like it. He'd probably never seen a game like that one before. But he knew Stork was weak because he'd done some quick mental calculations and some inferences based on what he'd seen from his opponent. I'm not an expert, but I do have experience in some machine learning techniques, and that's not at all how they learn or "think."

From what I understand to be the case, getting a machine to do something like that is extremely difficult and not easily solved just by throwing data at it. Machine learning people have tried throwing mountains of data at a problem before, and that technique has failed in the past. Just saying it's going to see oh-so-much-data-and-be-oh-so-smart-you-guys isn't really an accurate representation of the challenge or solution.
Writersator arepo tenet opera rotas
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 15:31 GMT
#420
Yeah, I'm not claiming it's easy or anything, just inevitable.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
cutha
Profile Joined April 2017
2 Posts
Last Edited: 2017-05-26 15:45:18
May 26 2017 15:40 GMT
#421
I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make for a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data, that's an empirical question, but given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. But that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be approaching this from the wrong perspective, as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural by X minutes, I need to sacrifice an overlord to scout. If I see Y of certain units, I need to adopt plan B," etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning proactive decisions like build-order "strategy" and mechanics.

To elaborate: for the double medivac drop in TvZ, the Zerg AI can precisely track the exact number of marines and any other units/SCVs and build an optimized defense based on map size, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other factors that even top human players cannot keep track of.
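The scouting rule described above can be sketched as a toy function; the thresholds (150 seconds, 8 marines) and the action names are invented placeholders, not real balance numbers:

```python
def zerg_reaction(saw_natural, game_seconds, enemy_marines_seen):
    """Map limited scouting information to a high-level reaction."""
    if not saw_natural and game_seconds >= 150:    # "X min" placeholder
        return "sacrifice overlord to scout"
    if enemy_marines_seen >= 8:                    # "Y amounts" placeholder
        return "adopt plan B: defensive lings"
    return "keep droning"
```

In this framing, "learning" the strategy just means learning the thresholds and branch structure from game samples rather than hand-coding them.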
Heartland
Profile Blog Joined May 2012
Sweden24580 Posts
May 26 2017 15:42 GMT
#422
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 15:50 GMT
#423
On May 27 2017 00:42 Heartland wrote:
I came here for jokes about Innovation and found none. What has happened to all the quality shitposting in this place?!


We're in mourning

World's best Go player flummoxed by Google’s ‘godlike’ AlphaGo AI
https://www.theguardian.com/technology/2017/may/23/alphago-google-ai-beats-ke-jie-china-go

After his defeat, a visibly flummoxed Ke – who last year declared he would never lose to an AI opponent – said AlphaGo had become too strong for humans, despite the razor-thin half-point winning margin.

“I feel like his game is more and more like the ‘Go god’. Really, it is brilliant,” he said.

Ke vowed never again to subject himself to the “horrible experience”.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Arrian
Profile Blog Joined February 2008
United States889 Posts
May 26 2017 15:59 GMT
#424
On May 27 2017 00:40 cutha wrote:
I think you may be making a mistake here. If you cap AI mechanical performance to something reasonably high (350, say), then humans and AI are both approaching if not basically at the asymptotes for win% gain on the mechanical front. In other words, improving your AI's mechanics by a lot over these 1000 games per day isn't going to give you much of a gain in your AI's ability to win games. Most games among pros are not won on the basis of mechanics alone. Most of it is based on information, the inferences made from that information, and proper response. Mechanics is easy. How you approach any given situation given the information you have is hard.

The point that a lot of people keep bringing up in terms of the AI's shortcomings is the strategic and situational variability. Again, 1000 games is nice, but you need to be able to form good generalizations over those games in order for them to apply in a given circumstance. If you're playing 1000 games a day for 2 years of development, I can't see how you're not overfitting. Top pros aren't approaching the game from the standpoint of a massive chunk of data. They have already extracted the meaningful generalizations about most situations. 1000 games a day isn't going to do much but give the AI improvements in the marginal areas of win% gain. I say this because "strategy" and mechanics aren't so much where the game is won.

The bulk of the game is scouting and reacting. It's about knowing the right inferences to make for a relatively small amount of information. The right way to approach teaching an AI how to do that may or may not take the form of a massive chunk of data, that's an empirical question, but given the methods that will probably be used to train these AIs, tuning them to make the right inferences for an enormous space of possibilities is a huge challenge. But that's where games are won. Some are won with mechanics, sure, and some are won with strokes of brilliant strategy, but in reality, most games are won by making accurate inferences from little information and then knowing the right response and executing it.

That's basically the opposite of what AI is good at. AI is good at making accurate inferences from an enormous quantity of information, especially when there's no information asymmetry. It's a much tougher task than you're making it out to be.


I agree with most of what you said about "strategy" and mechanics and how scouting/reacting is most crucial to winning games. However, I think you may be approaching this from the wrong perspective, as a human. Scouting and reacting are not human-exclusive abilities. They are still within the boundaries of learnable information during training. For example, as Zerg, the AI can generalize the strategy as: "if I didn't see a natural by X minutes, I need to sacrifice an overlord to scout. If I see Y of certain units, I need to adopt plan B," etc. If the game samples for training are carefully chosen to cover a wide range of excellent scouting/reactive actions, then in theory the AI has no problem learning from them. It's no different from, say, learning proactive decisions like build-order "strategy" and mechanics.

To elaborate: for the double medivac drop in TvZ, the Zerg AI can precisely track the exact number of marines and any other units/SCVs and build an optimized defense based on map size, and is thus able to maximize drone count before making defensive lings at the last moment. And it has a lot of wiggle room to decide on the best number of lings depending on the map and other factors that even top human players cannot keep track of.


I don't think we really disagree here at a fundamental level. I agree that the AI can learn a lot of the things that are needed. At a general level, I was disagreeing with two ideas that I've seen presented. First, that an AI learning Starcraft is a "lots of data" question, which is the answer to a lot of learning problems but for various reasons I contest that in this case. Second, that it's in the margins of mechanics or strategic insight that the AI will win games. It's going to have to win games just like everybody else: making inferences from limited information. I think we probably agree on both of these points.

I think where we probably disagree is that I think the training method isn't probably going to be best done by a careful sample. I just really really don't think that Starcraft is the kind of problem that can be solved in the way that games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess if not Go, the whole board is known completely to both players. The AI doesn't have to make inferences about what the actual state of affairs is, because the actual state of affairs is known. When it has to start making those judgments, even if they are high reliability judgments like I didn't see natural at X minutes then do Y, you're opening up a brand new world of complexity.
Writersator arepo tenet opera rotas
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 16:05 GMT
#425
Neural networks are already known to be strong classifiers of X or not X (e.g., spam or not spam). Thus, they already make inferences from limited information.
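A minimal sketch of such an X-or-not-X classifier: a bare single-neuron perceptron trained on made-up binary word features (real spam filters, and real neural nets, are far richer than this):

```python
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Learn weights for a single threshold neuron on 0/1 labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                      # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# features: [contains "free", contains "winner", contains "meeting"]
X = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
y = [1, 1, 0, 1, 0, 1]                          # 1 = spam, 0 = not spam
w, b = train_perceptron(X, y)
```

The point being made: even this crude model commits to a spam/not-spam decision from a handful of observed features, i.e., it infers from limited information.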
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
loginn
Profile Blog Joined January 2011
France815 Posts
Last Edited: 2017-05-26 17:07:21
May 26 2017 17:05 GMT
#426
While it's true that AIs have a harder time in partially observable environments, I don't think it'll take more than a decade for AIs to beat humans at SC2. And that's a conservative timeline in my opinion. Just two years ago, Go AIs weren't predicted to beat humans for another 30 years.

But if I were to build a NN to determine whether a mail is spam, I would feed it the whole email instead of a few binary values on whether a word is present or not. The latter sounds more like a naive Bayes approach.
Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
Charoisaur
Profile Joined August 2014
Germany15900 Posts
May 26 2017 17:16 GMT
#427
I heard Google's new AI "AlphaSC2" is ready and will be tested tomorrow in the GSL.
Many of the coolest moments in sc2 happen due to worker harassment
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 17:38 GMT
#428
I don't know why it's put up as some mystical bonjwa inference mastery, predicting the possibilities of your opponent's build order and strategy. I don't think it's all that complicated a decision tree.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Blardy
Profile Joined January 2011
United States290 Posts
May 26 2017 17:38 GMT
#429
If AI is allowed unlimited or 1000+ APM at all times then no human will beat it within a year. If they were given a cap of 400 then I don't see an AI beating a human for a long time.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 17:48:12
May 26 2017 17:44 GMT
#430
Nah, the AI will adapt. It might even use its extra computational power to, in 1 ms, assess which of 10-100 potential actions is likely to have the most effect on its chances of winning. Sort of a real-time Most Effective Actions calculator.

This would be interesting as it could be tuned to always maintain its APM lower than its opponent.
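A toy sketch of that idea: score candidate actions by estimated win-probability gain and act only when an APM budget allows. The action names and gain values are invented for illustration:

```python
def pick_action(candidates, actions_this_minute, apm_cap):
    """Choose the highest-impact action, or pass if over the APM budget.

    candidates: list of (action_name, estimated_win_prob_gain) pairs.
    """
    if actions_this_minute >= apm_cap:
        return None                     # stay under the cap: do nothing
    return max(candidates, key=lambda c: c[1])[0]

candidates = [("micro lings", 0.004), ("inject larva", 0.010), ("move camera", 0.0)]
```

Keeping the cap one below the opponent's measured APM would implement the "always lower APM than the opponent" tuning mentioned above.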
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
cutha
Profile Joined April 2017
2 Posts
May 26 2017 17:45 GMT
#431
I think where we probably disagree is that I think the training method isn't probably going to be best done by a careful sample. I just really really don't think that Starcraft is the kind of problem that can be solved in the way that games like Go or Chess are. Those you can train with thousands if not millions of games and get great results. But at least in Chess if not Go, the whole board is known completely to both players. The AI doesn't have to make inferences about what the actual state of affairs is, because the actual state of affairs is known. When it has to start making those judgments, even if they are high reliability judgments like I didn't see natural at X minutes then do Y, you're opening up a brand new world of complexity.


I did misinterpret you in the previous post. But I think what I said still stands: all winning strategies, regardless of form, be it reactive defense, aggressive all-ins, or pure superior mechanics, are very reasonably trainable knowledge. What you are basically saying is that it is impossible to make a "perfect" judgement due to the fog of war, so there always has to be some educated guessing and gambling involved. And this is different from chess/Go, since all the pieces are always visible on the board.

However, even knowing exactly the current "state" of the game, AlphaGo plays by its trained neural network, which is based on human experience plus its own reinforcement learning. There is no way to play perfectly from the current state of the game, because there is an unimaginably large number of variations for future moves. In this regard, the unknown factor due to that huge number of variations is similar to the unknown factor in StarCraft 2 due to the fog of war. If you compare the strategic complexity available to a Go player in a given board state with the number of popular choices any top SC2 player would consider in a given in-game situation, SC2 seems like complete child's play to me. Think of it from another perspective: a top SC2 player needs to decide his reactive actions based on scouting information within seconds, but a top Go player may often need minutes on any turn.

The hard part of SC2 for AI is achieving balanced performance across a multitude of different aspects: mechanics, micro under restricted APM, reactive actions, etc. But for the strategic part, if AlphaGo can conquer Go, SC2 is a no-brainer in my opinion.
niteReloaded
Profile Blog Joined February 2007
Croatia5281 Posts
May 26 2017 18:52 GMT
#432
this is laughable.

it would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.

fishjie
Profile Blog Joined September 2010
United States1519 Posts
May 26 2017 18:56 GMT
#433
Depends - would the AI be able to have unlimited APM? Or would there be a cap to APM. If there is an APM cap, then strategy would be more important, and it would have a tougher time.

One of the key ideas that made AlphaGo work is that they looked at the probability either side would win from a given position on the board if the rest of the game were played out with random moves. They then ran Monte Carlo simulations to play those out, and used that to evaluate how good a position was. That assumption won't work in a game like StarCraft.
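The random-playout idea is easy to show on a toy game. In this sketch, a race-to-21 dice game stands in for Go (a real Go playout won't fit here); positions are valued by finishing the game many times with random moves and averaging the results:

```python
import random

def playout(my_score, opp_score, my_turn, target=21):
    """Play the rest of the game with random moves; 1.0 if 'my' side wins."""
    while my_score < target and opp_score < target:
        roll = random.randint(1, 6)
        if my_turn:
            my_score += roll
        else:
            opp_score += roll
        my_turn = not my_turn
    return 1.0 if my_score >= target else 0.0

def mc_value(my_score, opp_score, my_turn, n=20000):
    """Monte Carlo estimate of the win probability from this position."""
    return sum(playout(my_score, opp_score, my_turn) for _ in range(n)) / n

random.seed(0)
v_ahead = mc_value(18, 5, True)     # well ahead and on the move
v_behind = mc_value(5, 18, False)   # well behind, opponent to move
```

The objection in the post is exactly that this works here because random play still finishes the game sensibly, which is not true of random clicking in an RTS.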

https://www.tastehit.com/blog/google-deepmind-alphago-how-it-works/
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 19:28 GMT
#434
So, in the article

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level of intuition to the game-play."

The monte carlo method that you mention is the tree searching, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for StarCraft; otherwise even the notion of a 'move' isn't well defined.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
May 26 2017 20:04 GMT
#435
On May 27 2017 03:52 niteReloaded wrote:
this is laughable.

it would probably be pretty easy to make an AI that dominates humans.

-> If there is no APM limit, then I guess we all agree. For example, just pick Zerg and go muta.

-> With an APM limit, still go for attention-intensive strategies. Let's not forget that even though the computer can only use a limited amount of APM, it can still 'think' a LOT about every single click. From the point of view of mechanics, it could be better than Flash playing the game on the slowest speed setting.



That defeats the entire exercise of making the AI. It's supposed to try to outsmart its opponent, so the APM will be limited.
Moderator
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-26 20:17:47
May 26 2017 20:07 GMT
#436
Can't wait to see what race the AI favors. This might even change depending on what APM setting it's on. Well, and the map come to think of it.

Apparently in Go, it gives a slight edge to the white stones (playing 2nd).

Unlike in the first round, AlphaGo played the black stones, which means it played first, something it views as a small handicap. "It thinks there is a just a slight advantage to the player taking the white stones,” AlphaGo’s lead researcher, David Silver, said just before the game. And as match commentator Andrew Jackson pointed out, Ke Jie is known for playing well with white.


Oh, it also defeated a team of 5 Champions today
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 20:28 GMT
#437
Let's not forget that even tho the computer can only use a limited amout of APM, it can still 'think' a LOT about every single click.


Ya, considering 400 APM gives it an average of 150 milliseconds per click and modern processors run at around 4 GHz, that's 600 million raw CPU cycles per click, and "AlphaGo ran on 48 CPUs and 8 GPUs and the distributed version of AlphaGo ran on 1202 CPUs and 176 GPUs."
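The per-click budget works out as follows (60,000 ms per minute divided by the APM, times the clock rate):

```python
apm = 400
cpu_hz = 4e9                            # ~4 GHz clock
ms_per_click = 60_000 / apm             # milliseconds between actions
cycles_per_click = cpu_hz * ms_per_click / 1000
```

So even a single modern core gets on the order of hundreds of millions of cycles between capped actions, before counting extra CPUs and GPUs.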
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
KungKras
Profile Joined August 2008
Sweden484 Posts
May 26 2017 20:38 GMT
#438
4 pool vs the computer. All that counts is micro. No macro can save it
"When life gives me lemons, I go look for oranges"
fishjie
Profile Blog Joined September 2010
United States1519 Posts
Last Edited: 2017-05-26 21:34:46
May 26 2017 21:32 GMT
#439
On May 27 2017 04:28 mishimaBeef wrote:
So, in the article

"AlphaGo relies on two different components: A tree search procedure, and convolutional networks that guide the tree search procedure. The convolutional networks are conceptually somewhat similar to the evaluation function in Deep Blue, except that they are learned and not designed. The tree search procedure can be regarded as a brute-force approach, whereas the convolutional networks provide a level of intuition to the game-play."

The monte carlo method that you mention is the tree searching, but, as above, there seems to be more to AlphaGo.

Of course, they will have to build new models for StarCraft; otherwise even the notion of a 'move' isn't well defined.


Also in the article:
value of a state = value network output + simulation result

I'd be interested to see how much they weighted the Monte Carlo part vs the value network (the convolutional neural net). It sounds like trying either one solo did worse than the combination, so both are needed. But I don't think the Monte Carlo part would work in StarCraft, because you can't just play random moves in an RTS. Furthermore, in a turn-based game you can only make one move per turn, so you can easily simulate resulting positions from a current position. In an RTS you can move multiple units, each with different abilities, and the combinatorial explosion would be disastrous.

Still, if I understand the article correctly, the neural net was used to evaluate positions and classify them as "good" or "bad". It was trained by playing games against itself. The input to the neural net would presumably be the positions of the pieces. Currently neural networks take a long time to train, and the more hidden layers you add, the slower it gets. In a game like StarCraft, far more inputs would be needed to represent a given position than in Go, and getting the NN to converge would take much longer.
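For reference, the weighting asked about is published: AlphaGo's leaf evaluation mixes the value network output with the rollout result as V(s) = (1 − λ)·v(s) + λ·z, with λ = 0.5 in the Nature paper. The input values in this sketch are invented:

```python
def leaf_value(value_net, rollout_outcome, lam=0.5):
    """Blend value-network output with the rollout result (AlphaGo's λ mix)."""
    return (1 - lam) * value_net + lam * rollout_outcome

v = leaf_value(0.62, 1.0)   # network says 0.62, random playout was a win
```

With λ = 0.5, neither component dominates, matching the observation that each alone did worse than the combination.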
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 26 2017 21:46 GMT
#440
Yeah if you consider move = click, then it explodes. But usually you think in terms of high level "moves" (tech to vessel, pump marine medic, deflect muta) and use clicks to implement the higher level strategic "moves".
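In code, that hierarchy might look like a table mapping each high-level "move" to the clicks that implement it; all the entries here are invented placeholders, not a real action set:

```python
# High-level "moves" each expand into many low-level commands (clicks).
MACRO_MOVES = {
    "tech to vessel": ["build starport", "add control tower", "build science facility"],
    "pump marine medic": ["select barracks", "train marine", "train medic"],
    "deflect muta": ["select turret area", "pull marines home", "target mutas"],
}

def expand(move):
    """Return the low-level command list for a high-level move."""
    return MACRO_MOVES.get(move, [])
```

Searching over a few dozen macro moves instead of raw clicks is one standard way to tame the explosion.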
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Ernaine
Profile Joined May 2017
60 Posts
May 26 2017 22:38 GMT
#441
Having the Google bot control the mouse physically with a mechanical hand is what I proposed when this idea of tackling RTS first came out. It would put a hard limit on what bots can do. Having no APM ceiling and being able to control 100+ individual units has proven its worth and effectiveness in SC AI.

But in the end, two things are true.

Whatever humans can do, AI can do. Because humans are just another type of AI.

Any problem can be reduced to a 'data problem', given enough computational power. All the possible game states in any RTS are finite. In principle, you can write code that just exhausts the phase space of all game states. In the end, all Go and chess AI are about severely limiting the number of game states that need to be sampled/checked/evaluated.

Will Google be able to do something impressive? I am not sure. The idea of the bot playing against itself to improve, that may have worked great for Go. But I can see how in RTS that would just push the AI into a way of thinking/playing that would be unique to AI, and easily bypassed and defeated by a human. But maybe with this thought, I fall for the same trap as many have before me, when discussing AI playing chess and go.

Without APM limits, I could see a good team writing a program that beats Flash in SC within a year. For SC2, I don't know enough about it. I know it is easier than SC:BW, but how much easier? And if it is all about mind games/reads/obscure timings, how do you win a mind game against an AI?
Achamian
Profile Joined May 2017
82 Posts
May 26 2017 22:43 GMT
#442
Have AlphaGo use a mouse cursor and internal buttons like a keyboard, and play that way. Otherwise it's completely unfair. If it's not using the same tools as a human, there is no point. It's like having a thousand cursors and keys.
KeksX
Profile Blog Joined November 2010
Germany3634 Posts
May 26 2017 22:52 GMT
#443
On May 27 2017 07:43 Achamian wrote:
Have AlphaGo use a mouse cursor and internal buttons like a keyboard, and play that way. Otherwise it's completely unfair. If it's not using the same tools as a human, there is no point. It's like having a thousand cursors and keys.


That's practically what they are doing. It can still reach unthinkable APM levels if not restricted, though. (And I think they're restricting it.)
fishjie
Profile Blog Joined September 2010
United States1519 Posts
May 27 2017 00:25 GMT
#444
On May 27 2017 06:46 mishimaBeef wrote:
Yeah if you consider move = click, then it explodes. But usually you think in terms of high level "moves" (tech to vessel, pump marine medic, deflect muta) and use clicks to implement the higher level strategic "moves".


I was talking only about the representation of the current position, as an input to a neural net, which would then classify it as good or bad. This was used (in combination with monte carlo simulation to also evaluate positions) to then determine what the best next move to make was.

All machine learning algorithms have to deal with the curse of dimensionality, where you run into issues the more features you have in the vector representing your training example. In Go, the input is relatively compact: it's just the positions of all the pieces on the board. In an RTS, you would have the positions of all the units, their hit points, their upgrades, the positions of all the buildings, their hit points, your worker count, minerals, vespene, and so on. Worse, there would be fog of war, whereas in Go you have all the information on the board readily available. In an RTS, the strength of a position is not independent of what the opponent is doing. So your input would have to take into account all the information you've scouted, and how long ago that information was scouted.
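A sketch of what one fog-of-war-aware input feature might look like: each map cell carries not just the last-seen enemy count but how stale that intel is. The decay scheme and field layout here are invented for illustration:

```python
def encode_cell(own_units, enemy_units_last_seen, seconds_since_seen):
    """One map cell -> feature triple. Stale intel decays toward 'unknown'."""
    staleness = min(seconds_since_seen / 60.0, 1.0)   # 0 = fresh, 1 = stale
    return [own_units, enemy_units_last_seen * (1.0 - staleness), staleness]

fresh = encode_cell(0, 4, 0)     # just scouted 4 enemy units here
stale = encode_cell(0, 4, 120)   # same intel, two minutes old
```

Multiply a triple like this by thousands of cells and you get a sense of how much wider the RTS input vector is than a Go board.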

I'm not saying it's not solvable, but I don't think the current AlphaGo could do it. It will be exciting to see how it gets solved, but it's a harder problem than Go.

Disclaimer: I'll add *IMO, since I am not an expert by any means
polgas
Profile Blog Joined April 2010
Canada1752 Posts
May 27 2017 01:02 GMT
#445
Let's have the bot use all the APM that it can use. Handicapping it defeats the purpose of the challenge of a bot beating a human player. It's like reducing the bot's ability so that humans can stand a chance, which is the same as admitting that the bot already won.
Leee Jaee Doong
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 01:33 GMT
#446
But nobody doubts whether an AI can more flawlessly blink stalkers in a 50 stalker army than humans can. Or whether they can more flawlessly split marines vs banelings or whether they can more flawlessly spread lings vs splash.. Whether they can out-strategize top players though? That's way less of a given, because it's so much more of a fluid game of interactions than the case is for go or chess, where it's more of a mathematical mapping out of possible scenarios.

I mean, I know that Go has too many moves for the AI to calculate all the possibilities, but like, there's an immeasurably larger number of possible BW positions.
Moderator
Justinian
Profile Joined August 2012
United Kingdom158 Posts
May 27 2017 01:46 GMT
#447
On May 27 2017 10:02 polgas wrote:
Let's have the bot use all the APM that it can use. Handicapping it defeats the purpose of the challenge of a bot beating a human player. It's like reducing the bot's ability so that humans can stand a chance, which is the same as admitting that the bot already won.

This is a game where mechanics matter, so it's only fair to bring the AI down to humans' physical level. Otherwise the whole thing is pointless and the bot has basically already won. Look at these videos from 6 years ago (probably already posted in this thread, but still):

https://www.youtube.com/watch?v=0EYH-csTttw
https://www.youtube.com/watch?v=DXUOWXidcY0
KrOjah
Profile Joined March 2017
United Kingdom68 Posts
May 27 2017 02:06 GMT
#448
I would like to see a longer series (Bo7 minimum) between the most sophisticated bot and a top player. I think in macro games bots may be able to just power through with immaculate macro, but I feel like high-level players could pull off upsets by finding weak spots with cheese builds. I'm just not sure there will be enough prolonged interest in making bots so sophisticated that they can address every funky cheese or early pressure build.
polgas
Profile Blog Joined April 2010
Canada1752 Posts
May 27 2017 02:23 GMT
#449
Just as I would not want to limit the bot's APM, I also would not limit the human to just a standard build. Let the human player bring all the cheese builds he can think of. Fake out the bot or any other tricks. This is my idea of a true test of this challenge.
Leee Jaee Doong
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 27 2017 02:25 GMT
#450
Yeah, 19x19 go has 2.08168199382×10^170 legal positions. If you cut a map into a 20x20 grid, even with all possible unit combinations moving throughout it, are you even going to reach that many positions? I mean, it stands to reason some sub-system would either evolve or be designed to handle micro situations at the small scale.
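To put rough numbers on that question, here's a back-of-the-envelope sketch in Python. The 20x20 grid and the ten states per cell are made-up assumptions purely for illustration; even under this crude abstraction, the grid count already dwarfs Go's:

```python
import math

# Known constant: legal positions on a 19x19 Go board (~2.08e170).
go_positions = 2.08168199382e170

# Hypothetical BW abstraction: a 20x20 grid where each cell is empty or
# holds one of 9 distinguishable unit/building states (pure assumption).
cells = 20 * 20
states_per_cell = 10
bw_log10 = cells * math.log10(states_per_cell)  # log10 of 10^400

print(f"Go:      ~10^{math.log10(go_positions):.0f} positions")
print(f"BW grid: ~10^{bw_log10:.0f} positions")
```

Of course, the vast majority of those grid states are unreachable or irrelevant in a real game, so this is an upper bound on an abstraction, not a claim about BW's effective complexity.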
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 02:49 GMT
#451
On May 27 2017 11:23 polgas wrote:
Just as I would not want to limit the bot's APM, I also would not limit the human to just a standard build. Let the human player bring all the cheese builds he can think of. Fake out the bot or any other tricks. This is my idea of a true test of this challenge.


How do you beat the 100% perfectly executed blink stalker rush?

That humans are not going to be limited to a standard build is a given. Once again, there's no question that an AI can execute better. With zero limits on AI execution, it's just a matter of designing a safe build order that lets the computer get a big army and then micro completely flawlessly.
Moderator
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
May 27 2017 03:27 GMT
#452
On May 27 2017 11:23 polgas wrote:
Just as I would not want to limit the bot's APM, I also would not limit the human to just a standard build. Let the human player bring all the cheese builds he can think of. Fake out the bot or any other tricks. This is my idea of a true test of this challenge.


Not really. It's like challenging Superman to a test of speed or strength. We all know Superman will win. The question here is whether you can beat him at a game of chess or rock-paper-scissors.
Moderator
polgas
Profile Blog Joined April 2010
Canada1752 Posts
Last Edited: 2017-05-27 04:40:34
May 27 2017 04:39 GMT
#453
If humans can't beat the AI's perfect micro, with any strategy, then my conclusion is AI beats humans in Starcraft. If you want to feed limiting parameters to the AI, then you're just giving humans a crutch for this challenge.
Leee Jaee Doong
CecilSunkure
Profile Blog Joined May 2010
United States2829 Posts
May 27 2017 09:21 GMT
#454
Hmm very interesting. I have a little experience with implementing low-level machine learning algorithms. The thing is right now I don't imagine hardware is capable of gathering and crunching enough data to generate a net that could play Brood War. It's absolutely possible that a net could be constructed given good enough hardware and good enough input data, so it's likely a matter of time until this happens.

But the thing is, who cares? I mean, we care right now, but who would really care in the future? People like the human element; people like humans competing. Take speedrunning, for example. All the novelty, the genuine interest, all the hype comes from real-life flesh-and-blood energy. We all love to see a hero succeed, and with speedrunning we all love seeing the runner make a great accomplishment. It's in our nature. The robots have their time and place, but they can never supersede genuine human competition, or replace it, or really even compete with it. As a collective we humans like each other, and that's not going to change.

Just my thoughts.
Poopi
Profile Blog Joined November 2010
France12770 Posts
Last Edited: 2017-05-27 10:08:49
May 27 2017 10:04 GMT
#455
On May 27 2017 13:39 polgas wrote:
If humans can't beat the AI's perfect micro, with any strategy, then my conclusion is AI beats humans in Starcraft. If you want to feed limiting parameters to the AI, then you're just giving humans a crutch for this challenge.

? I'm not sure if you are trolling or genuinely not understanding that DeepMind wants to tackle AI problems, and in this case you need to cap the mechanical part to human level for the strategy to even matter.
Because, you know, Blizzard could program the game so that whenever you play against their AI, the AI wins at the start of the game. See how pointless that is?

edit: @CecilSunkure: the point of this is not for AI to compete with us on a regular basis, but to successfully handle super-complex problems such as playing Starcraft at a high level. They hope that if we can make AIs that do that, we can use them in real-life domains such as medicine, economics, or whatever.
WriterMaru
b0mBerMan
Profile Joined April 2012
Japan271 Posts
May 27 2017 11:12 GMT
#456
WAR BOXER!!!
lol, tbh though, it would be hilariously one-sided in the AI's favor. For example, on close and semi-close 2-player maps, with perfect micro, the AI can just rush with 2 marines + 10 SCVs.
Haukinger
Profile Joined June 2012
Germany131 Posts
May 27 2017 11:19 GMT
#457
On May 27 2017 13:39 polgas wrote:
If humans can't beat the AI's perfect micro, with any strategy, then my conclusion is AI beats humans in Starcraft. If you want to feed limiting parameters to the AI, then you're just giving humans a crutch for this challenge.


Exactly. As long as 15000 APM micro is in the game, the AI is allowed to use it. I'd rather add a cooldown to everything, limiting the AI without affecting the human much.
Nebuchad
Profile Blog Joined December 2012
Switzerland12070 Posts
May 27 2017 11:29 GMT
#458
It's also interesting to think about playing this as a human, by the way. I don't think you can play terran, because everything that is based on multitasking is greatly weakened (I suppose the instant a medivac comes into line of sight the bot can see it and react, so you can never really overwhelm it with your multitasking). You need to create a set of strategies that put you in a winning position immediately after they're revealed to the opponent.
"It is capitalism that is incentivizing me to lazily explain this to you while at work because I am not rewarded for generating additional value."
b0mBerMan
Profile Joined April 2012
Japan271 Posts
May 27 2017 11:32 GMT
#459
On May 27 2017 10:33 Liquid`Drone wrote:
But nobody doubts whether an AI can more flawlessly blink stalkers in a 50 stalker army than humans can. Or whether they can more flawlessly split marines vs banelings or whether they can more flawlessly spread lings vs splash.. Whether they can out-strategize top players though? That's way less of a given, because it's so much more of a fluid game of interactions than the case is for go or chess, where it's more of a mathematical mapping out of possible scenarios.

I mean, I know that go has too many moves for the AI to calculate all the possible moves, but like, there's an insurmountably larger amount of possible bw positions.

I'm really surprised by these kinds of comments. Look, I'm not a programmer or IT guy, but I have done enough math and programming courses at uni to know that in essence it will merely be a series of IF>THENs. It doesn't matter if it takes 100 or 1,000,000 routines and subroutines. People who say this miss the fact that the human brain/consciousness/decision-making process is nothing more than an elaborate, almost infinite number of IF>THENs based on experience and risk-taking. An AI could do that way faster and with way more calculations. People are trying to romanticize consciousness as if it were a magical entity.

As a more concrete example, consider this (let us use BW, since Boxer is a BW player):

1. AI (zerg) vs. Boxer (Matchpoint)
2. AI has multiple BOs in database for reference (let us use 3 for example: 12CC, 10Rax, proxy Rax cheese)
3. AI sends scout at normal scout timing
4.1 AI drone in Boxer base - IF worker/building count = 12CC THEN anti-12CC BO
4.2 AI drone in Boxer base - IF worker/building count = 10RAX THEN anti-10RAX BO
4.3 AI drone in Boxer base - IF worker/building count = proxy Rax cheese THEN anti-prc BO. scout ideal proxy rax area.

This is an immense oversimplification, but the point is: if it is even remotely possible for humans to imagine and do, the AI can do it with far better efficiency and accuracy.
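That IF>THEN list could be sketched as a toy rule table like this (the feature names and thresholds are invented for illustration, not taken from any real bot):

```python
# Toy version of the IF>THEN scouting logic sketched above. The feature
# names and thresholds are hypothetical, not from a real bot.

def classify_opening(scout):
    """scout: dict of what the scouting drone saw in the terran's base."""
    if scout.get("barracks_missing", False):
        # No rax at home at a timing where one should exist -> likely proxy.
        return "anti-proxy-rax"
    if scout.get("command_centers", 1) >= 2 and scout.get("marines", 0) == 0:
        return "anti-12CC"
    if scout.get("barracks", 0) >= 1 and scout.get("scv", 0) <= 11:
        return "anti-10rax"
    return "default-safe"  # unrecognized -> fall back to a safe build

print(classify_opening({"command_centers": 2, "marines": 0}))  # anti-12CC
```

Whether hand-written rules like these scale to full games is exactly what the rest of the thread is arguing about; a learned agent would replace this table with something far less legible.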
Charoisaur
Profile Joined August 2014
Germany15900 Posts
May 27 2017 11:45 GMT
#460
On May 27 2017 20:32 b0mBerMan wrote:
Show nested quote +
On May 27 2017 10:33 Liquid`Drone wrote:
But nobody doubts whether an AI can more flawlessly blink stalkers in a 50 stalker army than humans can. Or whether they can more flawlessly split marines vs banelings or whether they can more flawlessly spread lings vs splash.. Whether they can out-strategize top players though? That's way less of a given, because it's so much more of a fluid game of interactions than the case is for go or chess, where it's more of a mathematical mapping out of possible scenarios.

I mean, I know that go has too many moves for the AI to calculate all the possible moves, but like, there's an insurmountably larger amount of possible bw positions.

Im really surprised by these kinds of comments. Look, im not a programmer or IT guy, but I have done enough math and programming course in uni to know that in essence it will merely be a series of IF>THENs. It doesnt matter if it takes 100 or 1,000,000 routines and subroutines. People who say this miss the fact that human brain/consciousness/decision making process is nothing more than an elaborate almost infinite number of IF>THENs based on experience and risk taking. AI could do that way faster and with way more calculations. People are trying to romantices consciousness as if it were a magical entity.

As a more rigid example,consider this (let us use bw since Boxer is a bw player):

1. AI (zerg) vs. Boxer (Matchpoint)
2. AI has multiple BOs in database for reference (let us use 3 for example: 12CC, 10Rax, proxy Rax cheese)
3. AI sends scout at normal scout timing
4.1 AI drone in Boxer base - IF worker/building count = 12CC THEN anti-12CC BO
4.2 AI drone in Boxer base - IF worker/building count = 10RAX THEN anti-10RAX BO
4.3 AI drone in Boxer base - IF worker/building count = proxy Rax cheese THEN anti-prc BO. scout ideal proxy rax area.

This is an immense oversimplification, but the point is, if it even remotely possible for humans to imagine and do, the AI can do so with way better efficiency and accuracy.

Maybe it can choose adequate build orders based on scouting information, but the strategic thinking you have to do in a reactive macro game is much more complicated.
Recognizing when it's the best time to attack, where to attack, when it's better to go for harass, finding the best positions for a fight - those are all important decisions that aren't easy for an AI to learn.
Many of the coolest moments in SC2 happen due to worker harassment.
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 12:11 GMT
#461
I'm not talking about build orders, and I'm not saying AIs won't eventually be able to out-strategize players. I'm saying that they're already capable of winning without mechanical limitations, and that winning with those limitations in place is much further away. I also think SC2 is gonna be much easier than BW, because SC2 units move in a much more streamlined way and you don't have the ridiculously OP stuff like dark swarm + lurker that I think an AI would have a harder time with. I think in SC2, AIs should be able to win even with mechanical limits in the foreseeable future, but there's stuff in brood war that I think is seriously hard as fuck to program - much harder than anything we've seen from any turn-based game. I guess maybe what the AI does is turn brood war into a turn-based game with 20 turns happening every second, though.

Anyway, say for example you have an engagement, 3 templar 12 zealot 9 goon vs 30 hydra. Sure, you can make the ai spread perfectly and dance perfectly and constantly target the weaker units and perfectly dodge the storms and then he'd win. But say there's a protoss AI against the zerg AI and the zerg AI is dancing perfectly and spreading perfectly; how does the protoss AI decide what the perfect time to throw down the storm is? Will they even use storm, seeing how perfect storm-dodge makes it much worse? Will it cover a perfect 3-storm area at the same time so that dodging becomes semi-impossible? Will the zerg AI adjust to this? In a Zvt late game battle, how does the AI calculate whether to plague 4 science vessels vs throw down a dark swarm saving 2 lurkers? Once again, if the AI has perfect mechanics, so it can consume with 4 different defilers in 4 different map screens while harassing with mutas, sending scourge vs vessels, perfectly macroing and lurker+ling+defiler dropping empty spaces, then it'll be invincible through mechanical ability, but if there are mechanical limitations and the AI needs to calculate what operations it should skip, it becomes incredibly, incredibly complex. For me, as a human player, I approach a battle differently depending on the composition that I have and that my opponent has. If I have a muta+hydra army against a protoss goon+templar army, then the calculation of whether I want to suicide my mutas into the templar is much more complex than 'I have 10 mutas, that 1-shots templars, so I can 1 shot the templars then attack with hydras', there are hundreds of small calculations like 'okay, now the templar deviated 2 cm to the left side of the goon army, I have a chance to snipe it NOW'.

Marine vs lurker+ling with human apm is the same thing. With infinite apm, then it becomes ridiculously easy for the terran. But if he's stuck with say, 6 actions per second and has to calculate which marine the lurker spine is more likely to target to move that one away from the rest of the group, if he has to calculate whether to focus fire on the flanking lings in the back or the lurkers in the front, if he has to calculate whether to build marines at home or whether to micro the battle, if the AI actually has to make all the decisions that humans have to make because we limit its ability to simply do 20 times more than a human can do, then I think we're looking at something that is ridiculously complex. Or reverse it, look at lurker+ling vs marine+medic. As a zerg, that's the kind of scenario where I try to distract the terran by attacking some other area (just to make him look there) before engaging, because if the terran doesn't pre-stim and pre-position, then the lurkers running in and burrowing next to the marines kills them all. How will the AI deal with that, if there's a limitation to how many places it can be at the same time? Will it evaluate that 'this attack is a distraction' (if you play pvz against mondragon, you actually learn that you should ignore the first attack, because the first attack is always a distraction) and focus on the second one?

There are literally hundreds of small scenarios like this where I think an AI is gonna have an incredibly hard time if the number of operations it can execute is limited to match that of the human player it faces off against. Of course, it can be programmed, sometime in the future, by a programmer with progamer knowledge. But if you look at all the possible positionings of all the possible unit combinations on all the possible maps against all the different possible unit combinations, then we're looking at a go-squared-squared type of number of possible options.
Moderator
Hemling
Profile Joined March 2010
Sweden93 Posts
May 27 2017 12:16 GMT
#462
There are also built-in limits for humans in games like brood war - the 12-unit selection limit, for example. These would not apply to a limitless-APM bot, thus giving it an unfair advantage, right? I mean, if the AI is to beat a human fairly, it should also be subject to human reaction times, time for moving fingers, and all the other things limiting a human from doing exactly what they think when they think it.

I'm curious whether people with more knowledge about modern AI can answer this: if a bot can extract data from 10,000 replays in a day, is that equivalent to a human playing 10,000 games per day?

http://eu.battle.net/sc2/en/profile/246845/1/Hemligt/
ETisME
Profile Blog Joined April 2011
12348 Posts
May 27 2017 12:28 GMT
#463
So is it really confirmed? Because there's no news at all.

I don't really see how anyone can beat the AI. Yes, there are lots of possibilities because of the real-time factor and the fog of war, but most possibilities are inefficient and/or unimportant.

I don't know the extent of the AI's capability, but it can get its timings right down to the second for any important decision.
其疾如风,其徐如林,侵掠如火,不动如山,难知如阴,动如雷震。
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-27 13:11:16
May 27 2017 12:40 GMT
#464
Interesting read and video:
https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/

And here are some quotes from people on reddit:

Deepmind will limit its APM to 200 from what I have heard

They said they will limit both input and output APM.

What do you mean by "input APM"?

They said something like only refreshing game state 15 times a second (to simulate reaction time) instead of 60 times per second.


Oh, and apparently they reworked the AI since the Lee Sedol match last year. The new version was the one that kept playing against itself to learn from itself.
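Those two caps - 200 output APM and a 15 Hz observation refresh - could be sketched like this (the numbers come from the quotes above; the throttling mechanism itself is my assumption, not DeepMind's actual implementation):

```python
# Sketch of the two caps quoted above: the agent only *sees* a fresh
# frame every 1/15 s, and may only *act* 200 times per minute.

class ThrottledAgent:
    def __init__(self, apm_cap=200, obs_hz=15):
        self.min_action_gap = 60.0 / apm_cap  # seconds between actions
        self.obs_gap = 1.0 / obs_hz           # seconds between observations
        self.last_action_t = float("-inf")
        self.last_obs_t = float("-inf")
        self.cached_obs = None

    def observe(self, t, fresh_obs):
        # Refresh the cached game state at most obs_hz times per second;
        # in between, the agent keeps acting on the stale frame.
        if t - self.last_obs_t >= self.obs_gap:
            self.cached_obs = fresh_obs
            self.last_obs_t = t
        return self.cached_obs

    def try_act(self, t):
        # Permit an action only when the APM budget allows it.
        if t - self.last_action_t >= self.min_action_gap:
            self.last_action_t = t
            return True
        return False

agent = ThrottledAgent()
ticks = [f / 60.0 for f in range(60)]  # one second of 60 fps frames
print(sum(agent.try_act(t) for t in ticks))  # 4: one action per 0.3 s
```

The stale-frame cache is what "input APM" would mean in practice: the bot can't react to a medivac the instant it enters vision, only on its next refresh.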

Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
loginn
Profile Blog Joined January 2011
France815 Posts
May 27 2017 12:53 GMT
#465
The interesting thing about DeepMind is that they take a very hands-off approach to AI, where they try to get it to learn on its own instead of using expert data. That's why their research is so interesting.
Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
XenoX101
Profile Joined February 2011
Australia729 Posts
Last Edited: 2017-05-27 13:07:52
May 27 2017 12:55 GMT
#466
One problem with an AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than for a human. It could technically make more precise calculations about engagements than any player.

However, it may also be possible for the AI to play games in its "head", if it plays enough games to understand the game mechanics well enough. Then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically, if its mental model is accurate enough, it could bypass playing games altogether and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that, unlike Go, there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not-so-valuable ones (e.g. how to deal with mass reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more, I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect, or would be willing to gamble on losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass muta after the terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kinds of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than of the AI, since there are always going to be games that lead to inevitable build order losses. So the real test isn't whether the AI can always beat human players, or even whether it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-05-27 13:11:47
May 27 2017 13:10 GMT
#467
It's funny many think we will be outsmarting the AI with gimmicks and unorthodox play when in both Chess and Go it is the AI that showed us the best of both of these.

Google has since managed to take Ke by surprise: “There was a cut that quite shocked me, because it was a move that would never happen in a human-to-human Go match,” he said.



lol:

Earlier this year, Google secretly let the improved AlphaGo play unofficially on an online Go platform. The AI won 50 out of 51 games, and its only loss was owed to an internet connection timeout.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
LetaBot
Profile Blog Joined June 2014
Netherlands557 Posts
May 27 2017 15:16 GMT
#468
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code. (I.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than an AI. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically if its mental model is accurate enough it could bypass playing games altogether, and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak startegy) that would only waste processing time.

EDIT: Thinking about this more I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kind of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



DeepMind works closely with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can narrow down the possible build orders and unit compositions purely from the game time (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.
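The "only so many minerals in a certain amount of time" pruning could look roughly like this (the income rate, worker growth, and build costs below are all rough assumptions for illustration, not exact BW numbers):

```python
# Prune build-order hypotheses with an upper bound on minerals gathered,
# in the spirit of the point above. The ~50 minerals/min/worker income,
# the linear worker growth, and the build costs are rough assumptions.

INCOME_PER_WORKER_PER_MIN = 50.0

def max_minerals_by(minute, start_workers=4, workers_per_min=3):
    """Crude upper bound: workers grow linearly and mine non-stop."""
    return sum(
        (start_workers + workers_per_min * m) * INCOME_PER_WORKER_PER_MIN
        for m in range(int(minute))
    )

# Hypothetical openings with their cumulative mineral cost by minute 3:
builds = {"12CC": 700, "10rax": 450, "2-base carriers": 1800}
bound = max_minerals_by(3)
feasible = [name for name, cost in builds.items() if cost <= bound]
print(bound, feasible)  # anything costlier than the bound is ruled out
```

A real bot would use exact income curves, but the principle is the same: at any game time, whole branches of the opponent's build tree are provably impossible and can be discarded.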
If you cannot win with 100 apm, win with 100 cpm.
XenoX101
Profile Joined February 2011
Australia729 Posts
May 27 2017 16:04 GMT
#469
On May 28 2017 00:16 LetaBot wrote:
Show nested quote +
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code. (I.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than an AI. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically if its mental model is accurate enough it could bypass playing games altogether, and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak startegy) that would only waste processing time.

EDIT: Thinking about this more I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kind of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Wouldn't "speeding up the game" be considered cheating? It's something normal players don't have access to, so I would think they wouldn't allow the AI to do it. My thinking is the AI should only have access to the visual pixel information, the same as a real person, as this would put it on equal footing with a human.

As for sub-optimal play, this is an opportunity-cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. The AI would ideally always pick the build that is viable against the most probable builds, which are almost always the "best" builds for the other player to choose. The issue is that there will always be the risk of the human player choosing a "not so good" build, one outside the subset that the AI does well against. The AI is technically making the right choice; it is just that the right choice still has a build-order-loss probability. A simpler way to say this is that the AI will always lose some games, since there is no build without a BO-loss probability.
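The opportunity-cost point can be made concrete with a tiny payoff table (all the win probabilities and build names are invented):

```python
# Toy build-order rock-paper-scissors. Keys: (AI build, human build);
# values: AI win probability. All numbers are invented for illustration.
p_win = {
    ("safe", "standard"): 0.90, ("safe", "cheese"): 0.70,
    ("greedy", "standard"): 0.95, ("greedy", "cheese"): 0.20,
}

# Suppose the human mixes in a "not so good" cheese 20% of the time:
human_mix = {"standard": 0.8, "cheese": 0.2}

def expected_winrate(ai_build):
    return sum(p * p_win[(ai_build, h)] for h, p in human_mix.items())

# Even the AI's best choice wins less than 100% of the time:
for b in ("safe", "greedy"):
    print(b, round(expected_winrate(b), 2))
```

Here "safe" is the rational pick, yet it still loses a slice of games to cheese - which is exactly the "no build without a BO-loss probability" argument in miniature.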
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
May 27 2017 17:13 GMT
#470
It's interesting that in the turn based games it plays these godlike moves but in a real time game we have efforts to make it seem more human-like. That's good though, then it can actually teach *us* something about the game.
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Sholip
Profile Blog Joined March 2014
Hungary422 Posts
Last Edited: 2017-05-27 17:47:55
May 27 2017 17:46 GMT
#471
On May 28 2017 01:04 XenoX101 wrote:
Show nested quote +
On May 28 2017 00:16 LetaBot wrote:
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code. (I.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than an AI. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically if its mental model is accurate enough it could bypass playing games altogether, and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak startegy) that would only waste processing time.

EDIT: Thinking about this more I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kind of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Wouldn't "speeding up the game" be considered cheating? It's something normal players would not have access to, so I would think they wouldn't allow the AI to do it. My thinking is the AI can only access the visual pixel information, the same as a real person, as this would put it on equal footing with a human.

As for sub-optimal play, this is an opportunity cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. The AI would ideally always pick the build that is viable against the most probable builds, which are almost always the "best" builds for the other player to choose. The issue is that there will always be the risk of the human player choosing a "not so good" build that lies outside of the subset the AI will do well against. The AI is technically making the right choice; it is just that the right choice still carries a build order loss probability. A simpler way to say this is that the AI will always lose some games, since there is no build without a BO loss probability.


Well, the AI is trained playing against itself, but I assume it's also tested against various build orders which the AI itself wouldn't necessarily deploy. This should include a bunch of inefficient or unorthodox builds as well, in my opinion. Also, if the AI doesn't perform well enough against suboptimal play, then it may start deploying these very strategies against itself (upon seeing that they are effective) – which would lead to it playing against them quite often, and learning to respond appropriately.
Also, a proper neural network should, while learning, in my opinion, make the generalizations that you or I make when playing the game. So even if you present it with something unexpected which it hasn't played against a lot – which is unlikely in itself – it probably won't "break down" and start doing completely stupid stuff. It will probably do what it learned is best when it doesn't know what it's up against: scout, try to identify enemy tech and react accordingly, while creating workers and units, and probably playing a bit safer. Ultimately, suboptimal play is suboptimal, so in most cases it can be countered just by playing safe.
"A hero is no braver than an ordinary man, but he is brave five minutes longer. Also, Zest is best." – Ralph Waldo Emerson
loginn
Profile Blog Joined January 2011
France815 Posts
Last Edited: 2017-05-27 17:59:21
May 27 2017 17:55 GMT
#472
On May 28 2017 00:16 LetaBot wrote:
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human's. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to understand the game mechanics well enough. Then, even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically, if its mental model is accurate enough, it could bypass playing games altogether and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that, unlike Go, there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of less valuable simulations (e.g. how to deal with mass Reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more, I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect, or would be willing to gamble on losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone Viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kinds of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than of the AI, since there are always going to be games that lead to inevitable build order losses. So the real test isn't whether the AI can always beat human players, or even whether it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



DeepMind works closely with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can infer the possible build orders and unit compositions purely from the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Blizzard already confirmed that the API will allow AIs to play the game as slowly or as fast as they want, and obviously, unless someone is watching the game, no rendering is necessary, which removes a major part of the workload for every tick of the game. So now the only limit is compute power, which we know Google has heaps of.

Btw, the API's expected functionalities have been documented here for anyone who cares to take a look:
Specs

Update 1

Update 2

From the specs, one of the most interesting parts is this: the ability to load a replay and examine the state of the game as it plays.

I'm counting on AIs to point out mistakes in my play. Actually, I'm actively working on that kind of system.


Stephano, Taking skill to the bank since IPL3. Also Lucifron and FBH
sertas
Profile Joined April 2012
Sweden881 Posts
May 27 2017 18:51 GMT
#473
Well, we don't know. Maybe Alpha can figure out a build that beats all the cheese/all-ins and also does well enough that it can win macro games too.
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
Last Edited: 2017-05-27 19:23:40
May 27 2017 19:23 GMT
#474
On May 27 2017 21:55 XenoX101 wrote:But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute force, just like StarCraft?

FYI, Go has 2.08168199382×10^170, i.e.
208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935
legal positions.
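To put that number in perspective, here is a quick back-of-envelope calculation; the 10^9 positions per second rate is an arbitrary, generous assumption:

```python
# Back-of-envelope: why brute-forcing Go's state space is hopeless.
LEGAL_POSITIONS = 10**170          # ~2.08 x 10^170, rounded down
RATE = 10**9                       # assumed positions evaluated per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

years = LEGAL_POSITIONS // (RATE * SECONDS_PER_YEAR)
print(f"~10^{len(str(years)) - 1} years")   # on the order of 10^153 years
```

Even at a billion positions a second, exhaustive enumeration takes on the order of 10^153 years, which is why nobody proposes brute force for either game.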
The harder it becomes, the more you should focus on the basics.
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
May 27 2017 19:36 GMT
#475
On May 28 2017 03:51 sertas wrote:
Well, we don't know. Maybe Alpha can figure out a build that beats all the cheese/all-ins and also does well enough that it can win macro games too.

In the grand scheme of things, this will actually be the least impressive part if they manage to create a world-class StarCraft bot.
The harder it becomes, the more you should focus on the basics.
Poopi
Profile Blog Joined November 2010
France12770 Posts
May 27 2017 19:52 GMT
#476
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote:But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute force, just like StarCraft?

FYI, Go has 2.08168199382×10^170, i.e.
208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935
legal positions.

It's kinda amazing that even on this forum that is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.
WriterMaru
MockHamill
Profile Joined March 2010
Sweden1798 Posts
May 27 2017 20:24 GMT
#477
In a few years the best Starcraft player in the world will be an AI.

Some years later the smartest person on the planet will not be a person.

This is the end for humanity.
Poopi
Profile Blog Joined November 2010
France12770 Posts
May 27 2017 20:38 GMT
#478
On May 28 2017 05:24 MockHamill wrote:
In a few years the best Starcraft player in the world will be an AI.

Some years later the smartest person on the planet will not be a person.

This is the end for humanity.

No?
Every time there is a bit of improvement in AI, people oversell it, but it never fails to disappoint, and then nobody wants to hear anything about it for the next few years.
Don't be fooled by the hype
WriterMaru
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-27 22:51:17
May 27 2017 22:39 GMT
#479
On May 28 2017 04:52 Poopi wrote:
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote:But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute force, just like StarCraft?

FYI, Go has 2.08168199382×10^170, i.e.
208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935
legal positions.

It's kinda amazing that even on this forum that is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.



This comment is puzzling. Chess engines were beating the best chess players while computers were still very bad at Go, exactly because of the difference in the number of game states/combinations.

A chess engine doesn't naively start to sample possible moves. But it does find the move it makes by calculating a lot of moves, including stupid moves that humans would instinctively reject.

The way you seem to refer to a 'brute force' algorithm suggests that an algorithm that is not brute-forcing a solution is not significantly affected by the size of the game state space. I was not trained as a computer scientist, so to me a brute-force algorithm is an algorithm that naively explores part of the possible solutions and relies on huge computational power to get to a meaningful result. That is not what is technically known as a brute-force algorithm, but your comment suggests that, like me, you want to use a wider definition.

This opposed to an algorithm that uses a neural network that has already been trained to quickly come to some solution, using 'instinct'/pattern recognition, rather than partial naive sampling of a huge area of the solution space.

The fact is that a chess engine uses many tricks to limit the number of moves it considers, and it still relies on sheer computational power. The fact is that a Monte Carlo method uses tricks so it quickly converges on the correct solution, given that the phase space is small enough that the available computational power can reach a solution on acceptable time scales. That to me still makes them brute-force methods. But like I said, I realize I am not agreeing with the accepted definition.
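The best-known of those move-limiting tricks is alpha-beta pruning, which skips branches that provably cannot change the chosen move. A minimal sketch over a hand-made toy game tree (all scores invented for illustration, not from a real chess position):

```python
# Minimal alpha-beta search over a hand-made game tree (illustrative only).
# A leaf is an int score; an internal node is a list of child subtrees.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # cutoff: opponent avoids this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:                  # cutoff: we avoid this line
            break
    return value

tree = [[3, 5], [6, 9], [1, 2]]            # depth-2 tree, 6 leaves
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 6
```

In the last subtree, the leaf scoring 2 is never examined: once the first leaf returns 1, the maximizer already knows this branch cannot beat 6. That is exactly a "trick to limit the number of moves considered" while still returning the full minimax value.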

But you seem oblivious to this entire debate, while accusing everyone else of being oblivious. Puzzling indeed.



As for the number of game states in an RTS, I wonder if making tiers and coarse-graining them would work, and if that is used, how many significantly distinguishable game states there really are. Because to me, in RTS, many games will follow a similar general pattern. On the macro level, once the opening has stabilized, there are only so many different game states. You can be ahead in economy, equal, or behind. Same for tech and army size. The fine details about unit positions often will not matter. You can spread the units those players have across the map in many different ways, but those are pointless game states. After a siege expand in TvP, both the Terran and the Protoss will have a certain number of units, Protoss will have map control, and Terran will have a limited number of spots where the tanks can be, either sieged or unsieged. Given a slight deviation, either player can probably move their units into the optimal position without any penalty. It is as if a TvP siege-expand game will almost always move through the same game states.
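The coarse-graining idea above can be sketched in a few lines; the tier names, the resource numbers, and the 10% tolerance are all invented for illustration:

```python
# Hypothetical coarse-graining of an RTS game state into a few tiers.
# Many raw states (exact mineral counts, unit positions) collapse into the
# same (economy, tech, army) bucket of ahead/equal/behind.
def tier(mine, theirs, tolerance=0.1):
    """Bucket a comparison into ahead/equal/behind with a relative tolerance."""
    if mine > theirs * (1 + tolerance):
        return "ahead"
    if mine < theirs * (1 - tolerance):
        return "behind"
    return "equal"

def coarse_state(my, opp):
    # my/opp are dicts like {"economy": 3200, "tech": 2, "army": 1400}
    return tuple(tier(my[k], opp[k]) for k in ("economy", "tech", "army"))

a = coarse_state({"economy": 3200, "tech": 2, "army": 1400},
                 {"economy": 3000, "tech": 2, "army": 1800})
print(a)  # ('equal', 'equal', 'behind')
```

With three tiers on three axes there are only 27 coarse states, however many raw states map into each one; that is the whole appeal of the approach.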

Poopi
Profile Blog Joined November 2010
France12770 Posts
May 27 2017 22:50 GMT
#480
On May 28 2017 07:39 Ernaine wrote:
On May 28 2017 04:52 Poopi wrote:
On May 28 2017 04:23 sabas123 wrote:
On May 27 2017 21:55 XenoX101 wrote:But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.

You do know Go is also too complex for any computer to brute force, just like StarCraft?

FYI, Go has 2.08168199382×10^170, i.e.
208 168 199 381 979 984 699 478 633 344 862 770 286 522 453 884 530 548 425 639 456 820 927 419 612 738 015 378 525 648 451 698 519 643 907 259 916 015 628 128 546 089 888 314 427 129 715 319 317 557 736 620 397 247 064 840 935
legal positions.

It's kinda amazing that even on this forum that is supposed to be of quality, so many people are clueless about such basic things.
Even in chess they don't use pure brute force.

...

I didn't get what you didn't get from my post.
One user said: "But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them.", which seems really weird because Go actually has a shitton of positions, yet was still handled. So it seems like this user thinks we "solve" games using magic, or by just evaluating every possible move, which is a rather impressive view to hold in 2017.
WriterMaru
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-27 22:59:09
May 27 2017 22:56 GMT
#481
But the points that both people in your quote nests make are fair. The fact that Go has more states made it so it both needed more tricks and more computational power to beat the top players.

If an RTS indeed has more states than Go, the same problem will persist.


The other point is that there is now AI that can beat top humans even though we cannot naively exhaust all game states of Go (i.e., brute force in the traditional sense of the word).

Go was 'handled' despite the number of positions, not regardless of them. It is still true that the difficulty of solving a game grows with the number of game states. Why? Because our way of solving them still relies on sampling a small fraction of the possible game states. To solve a game, we need smart tricks to make that fraction really, really small, and we need computational power, because if the number of game states is really, really big, even completely sampling that small fraction will take a lot of time.
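The "sampling a small fraction" point can be illustrated with naive Monte Carlo on a game small enough to check: estimating a statistic of tic-tac-toe from a few thousand random playouts out of its 255,168 possible games. This is only a toy sketch of unguided sampling; real systems like AlphaGo guide the sampling with a learned policy.

```python
import random

# Naive Monte Carlo sampling: estimate how often X wins tic-tac-toe when
# BOTH sides play uniformly at random, by sampling a fraction of all games.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def random_game(rng):
    board = [None] * 9
    for player in "XOXOXOXOX":                       # X moves first
        cell = rng.choice([i for i in range(9) if board[i] is None])
        board[cell] = player
        if any(board[a] == board[b] == board[c] == player for a, b, c in LINES):
            return player
    return "draw"

rng = random.Random(0)                               # fixed seed for repeatability
n = 20000
x_wins = sum(random_game(rng) == "X" for _ in range(n))
print(round(x_wins / n, 2))  # close to the true value of ~0.585
```

Twenty thousand samples already pin the answer down to a couple of decimal places, while visiting well under a tenth of the game tree; the catch is that this unguided version scales exactly as badly with state-space size as the post describes.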

Your comment is no longer puzzling to me. It is simply wrong.
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 22:59 GMT
#482
I think expecting knowledge of how the world's most advanced AIs operate to be common knowledge, even on this forum, is kinda silly.
Moderator
Poopi
Profile Blog Joined November 2010
France12770 Posts
Last Edited: 2017-05-27 23:07:13
May 27 2017 23:05 GMT
#483
On May 28 2017 07:56 Ernaine wrote:
But the points that both people in your quote nests make are fair. The fact that Go has more states made it so it both needed more tricks and more computational power to beat the top players.

If RTS indeed have more states than Go, the same problem will persist.


The other point is that there is now AI that can beat computers even though we cannot naively exhaust all game states of Go (ie, brute force in the traditional sense of the word).

Go was 'handled' despite the number of positions, not regardless of them. It is still true that the difficulty of solving a game grows with the number of game states.

Your comment is no longer puzzling to me. It is simply wrong.

?
Of course it'll be harder if there are more states, except in a few particular cases, but that's obvious so it doesn't need to be said.
However it's hard to accurately measure the number of states of StarCraft, because it's harder to "model" what a game of StarCraft is.

But imho the most difficult thing about all this will be convincing people that the AI won the games fairly.
Since mechanics are such a vital part of StarCraft, there will always be ways for defeated players to contest the loss.
Once the egos of the top players are in danger, they won't accept the games as fair, because you can argue forever about it.
WriterMaru
Ernaine
Profile Joined May 2017
60 Posts
May 27 2017 23:16 GMT
#484
On May 28 2017 01:04 XenoX101 wrote:
On May 28 2017 00:16 LetaBot wrote:
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human's. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to understand the game mechanics well enough. Then, even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically, if its mental model is accurate enough, it could bypass playing games altogether and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that, unlike Go, there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of less valuable simulations (e.g. how to deal with mass Reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more, I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect, or would be willing to gamble on losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone Viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kinds of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than of the AI, since there are always going to be games that lead to inevitable build order losses. So the real test isn't whether the AI can always beat human players, or even whether it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



DeepMind works closely with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can infer the possible build orders and unit compositions purely from the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Wouldn't "speeding up the game" be considered cheating? It's something normal players would not have access to, so I would think they wouldn't allow the AI to do it. My thinking is the AI can only access the visual pixel information, the same as a real person, as this would put it on equal footing with a human.

As for sub-optimal play, this is an opportunity cost issue. You can only choose a build that is viable against some subset of builds, meaning you are guaranteed to be vulnerable to the complement of that subset. The AI would ideally always pick the build that is viable against the most probable builds, which are almost always the "best" builds for the other player to choose. The issue is that there will always be the risk of the human player choosing a "not so good" build that lies outside of the subset the AI will do well against. The AI is technically making the right choice; it is just that the right choice still carries a build order loss probability. A simpler way to say this is that the AI will always lose some games, since there is no build without a BO loss probability.


This comment makes no sense. An AI is not a human. The point is to somehow create an AI that can beat humans in a fair game. You do not create an AI the way you create a human. Maybe humans are cheating at RTS, because we humans have things AIs so far do not have.

Once the AI is created, I think there is something to be said about having the AI do completely the same tasks that humans do; looking at the screen and physically moving the mouse. But as any AI vs Human match is artificial anyway, you can arbitrarily select whatever way of having them compete.


As for picking BOs, there are different ways to go about it. You can code the AI to play a build that is soft-countered by all, hard-countered by none, and allows the AI to do its AI thing and get an advantage later. This assumes the AI can actually outplay the human in a long game.

You can provide the AI with statistics showing how likely a human is to play each build. This means there is no possibility of the AI not considering the opening build of the human. I also don't get how you claim a build can be 'the right choice' but can still 'always lose'.
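Picking against such statistics amounts to an expected-winrate calculation. A sketch follows; every build name, prior, and winrate below is invented for illustration:

```python
# Hypothetical build selection: maximize expected winrate against an assumed
# distribution of opponent openings. All numbers are made up.
prior = {"standard": 0.7, "all-in": 0.2, "cheese": 0.1}

# winrates[my_build][opp_build] = estimated probability that my_build wins
winrates = {
    "greedy": {"standard": 0.60, "all-in": 0.20, "cheese": 0.30},
    "safe":   {"standard": 0.45, "all-in": 0.55, "cheese": 0.60},
}

def expected_winrate(build):
    return sum(p * winrates[build][opp] for opp, p in prior.items())

best = max(winrates, key=expected_winrate)
print(best, round(expected_winrate(best), 3))  # greedy 0.49
```

Note that the best pick here still wins under half its games against the all-in: being "the right choice" in expectation and having a nonzero loss probability against some openings are entirely compatible, which is the distinction the quoted claim blurs.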

There is also no reason why information about the BO the human picks cannot leak out of the player, through the way he plays, and inform the AI: how the player moves his scouting worker, how keen the player is on preventing scouting by the AI, building placement, etc. AI can see details and patterns simply invisible to humans, and neural net/TensorFlow algorithms can be very good at that.

In SC currently, we have AIs that use really odd and stupid BOs. But they are hard-coded/optimized for the select number of game states that result in those games, and that can be an OK scenario for them. So I do not see this supposed problem that AIs might have with BOs, irrational decisions humans make, or limited information.
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-27 23:27:06
May 27 2017 23:20 GMT
#485
On May 28 2017 08:05 Poopi wrote:
But imho the most difficult thing about all this will be convincing people that the AI won the games fairly.
Since mechanics are such a vital part of StarCraft, there will always be ways for defeated players to contest the loss.
Once the egos of the top players are in danger, they won't accept the games as fair, because you can argue forever about it.


Why? Just look at chess. In chess, we knew computers were going to one day beat all humans. A very small number of chess players, and a slightly bigger number of ordinary people, thought that would be a problem for chess. And there are still people claiming that chess engines 'cheat', because they have endgame tables or opening databases. Or worse, because computers evaluate moves completely differently from how we think we evaluate a move (the truth is, we have little idea about how we do it).

But that all turned out to be completely irrelevant.

So we know that for any game, there will be a very small window in time where a computer vs human match will be interesting. Before and after that time, the human will either win or lose easily. And for human vs human competition, being before or after that small window is largely irrelevant.

You really think the ego of Usain Bolt is bruised by the fact that a car can run faster than he can?
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 23:24 GMT
#486
Quite a few people think AIs semi-solving chess kinda ruined the game. I definitely feel that way about backgammon, and I think following chess tournaments where the analysis ends up being 'so the computer says this is not the best move', 'the computer says this move would be better', 'the computer says this was indeed the ideal move', 'this is a flawless game so far, the moves have been identical to computer suggestions' is really boring. I expect AIs to become better than humans at everything humans do during my lifetime, but I also think that's largely a negative thing.
Moderator
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-27 23:35:43
May 27 2017 23:34 GMT
#487
You refer here to a style of analysis where the analysis consists of simply stating what moves the computer would make. That indeed, is a boring and highly non-instructive analysis.

Chess engines certainly changed the way top players have to approach the game; using engines to help analyze positions is now a big part of it, including using computers to find novelties.

But all this preparation and memorization would exist even without computers, as it is a natural element of the way chess works. And we do the same thing in RTS. Any game where the starting point is identical every time will have this aspect to it.
It is the importance of opening repertoire and game preparation that removes some of the charm of chess, not chess engines.

Computers certainly remove some of the 'magic' or 'mystery' or 'murkiness' of the game, reducing it to what it actually is. But that is just how insight in general works.
I have not seen statistics showing that fewer people play chess now than would have without computers, or that chess is less popular because of chess engines.
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 27 2017 23:40 GMT
#488
I'd guess the ease of playing pickup chess online has made more people play it, personally. And I'm not saying that my opinion is that of others or that others should adopt my opinion, but to me, I just don't find that much enjoyment in trying to master something a computer is better than me at. Which paints a bleak picture of the future, as that's going to be literally every activity. ;p Figuring out the magic and mystery is where the primary enjoyment lies. To me.
Moderator
Die4Ever
Profile Joined August 2010
United States17651 Posts
May 27 2017 23:44 GMT
#489
On May 28 2017 08:24 Liquid`Drone wrote:
Quite a few people think AIs semi-solving chess kinda ruined the game. I definitely feel that way about backgammon, and I think following chess tournaments where the analysis ends up being 'so the computer says this is not the best move', 'the computer says this move would be better', 'the computer says this was indeed the ideal move', 'this is a flawless game so far, the moves have been identical to computer suggestions' is really boring. I expect AIs to become better than humans at everything humans do during my lifetime, but I also think that's largely a negative thing.

do people really say that in chess tournaments? what a horrible way to ruin the fun
"Expert" mods4ever.com
Ernaine
Profile Joined May 2017
60 Posts
May 27 2017 23:49 GMT
#490
The point is, the magic and mystery wasn't there in the first place.

And computers or no computers, chess is the greatest waste of human intelligence either way. I guess what computers are doing is forcing people to really think about why they do and care for certain things. Because without them, they don't need to know, or they think they know, but they don't.
hypercube
Profile Joined April 2010
Hungary2735 Posts
May 27 2017 23:49 GMT
#491
On May 28 2017 08:24 Liquid`Drone wrote:
Quite a few people think AIs semi-solving chess kinda ruined the game.


I don't know why people would say that. Top chess tournaments are way more exciting than they were 10-15 years ago. There's a lot more fighting spirit, and instead of playing the same lines over and over again, people are playing a wider variety of openings in order to dodge each other's computer-assisted preparation.

I definitely feel that way about backgammon, and I think following chess tournaments where the analysis ends up being 'so the computer says this is not the best move' 'the computer says this move would be better' 'the computer says this was indeed the ideal move' 'this is a flawless game so far, the moves have been identical to computer suggestions' is really boring.


I agree, but the trend has been for commentators not to use chess engines anymore. Actually, even following the thought process of a top grandmaster would be a challenge for most of us. Better to explain ideas at the level of your audience, even if the result is that you don't reach a conclusion in many positions.
"Sending people in rockets to other planets is a waste of money better spent on sending rockets into people on this planet."
Poopi
Profile Blog Joined November 2010
France12770 Posts
Last Edited: 2017-05-28 00:13:05
May 27 2017 23:59 GMT
#492
On May 28 2017 08:20 Ernaine wrote:
On May 28 2017 08:05 Poopi wrote:
But imho the most difficult thing about all this will be convincing people that the AI won the games fairly.
Since mechanics are such a vital part of StarCraft, there will always be ways for defeated players to contest the loss.
Once the egos of the top players are in danger, they won't accept the games as fair, because you can argue forever about it.


Why? Just look at chess. In chess, we knew computers were going to one day beat all humans. A very small number of chess players, and a slightly bigger number of ordinary people, thought that would be a problem for chess. And there are still people claiming that chess engines 'cheat', because they have endgame tables or opening databases. Or worse, because computers evaluate moves completely differently from how we think we evaluate a move (the truth is, we have little idea about how we do it).

But that all turned out to be completely irrelevant.

So we know that for any game, there will be a very small window in time where a computer vs human match will be interesting. Before and after that time, the human will either win or lose easily. And for human vs human competition, being before or after that small window is largely irrelevant.

You really think the ego of Usain Bolt is bruised by the fact that a car can run faster than he can?

?
In chess, mechanics don't matter, so it's only a matter of which move to make, not how well you do it.
In starcraft, you can win games solely with your mechanics. You can play "stupidly" in "autopilot" mode but crush inferior players if they can't handle your superior multitasking.
You can win games with good micro, etc...
The game is not only about strategy but about forcing players to make mistakes by giving them less time to think, using your physical abilities.
There is a reason foreign terrans can't do anything whereas Koreans can win in their scene: terran is too hard mechanically compared to the other two races. So there are very concrete examples of why handling the mechanics issue is important for the integrity of the competition between AI and humans in SC.
Plus we humans perform these mechanical tasks with various levels of performance.
Strict training allows top players to reduce the variance of their performance, but there is still some. So you can't just allow the AI to micro as well as the best human ever achieved (unfair for the human), nor can you only allow it to micro like the best players micro on average (unfair for the AI).
So players will agree on something they thought was fair, but then they'll want a rematch with other conditions because maybe it wasn't that fair.

edit: and the game being "solved" or not is a huge deal. I know I won't ever bother to learn how to play chess, precisely because it's solved. And now I know I don't have to waste time learning Go either. Some people think like me, some don't care, but I'm pretty sure I'm not the only one, so this fairness thing is very important. For example, when there was a lot of hype around people training themselves to succeed at the 2048 game, I didn't even bother to learn how to do it, because I could just launch a bot that does it for me (which I did), so better not to waste my time with that game.
WriterMaru
Charoisaur
Profile Joined August 2014
Germany15900 Posts
May 28 2017 00:32 GMT
#493
You're right and I also think that an AI beating a human wouldn't prove that the AI was strategically superior.
Even with an APM cap people would say something like "the AI has perfect accuracy, never forgets supply depots etc"

However it would still be an incredible achievement to beat SC players with an AI, even if they have a slight mechanical advantage.
Many of the coolest moments in sc2 happen due to worker harassment
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-28 00:50:38
May 28 2017 00:48 GMT
#494
On May 28 2017 08:59 Poopi wrote:?
In chess, mechanics don't matter, so it's only a matter of which move to make, not how well you do it.
In starcraft, you can win games solely with your mechanics. You can play "stupidly" in "autopilot" mode but crush inferior players if they can't handle your superior multitasking.


I don't know what this means. There is no universal agreement on what 'mechanics' means in chess. The term is used every once in a while, but it refers to different things, usually tactics or endgame concepts.

I also don't know what exactly you mean by 'playing stupidly'. And I am sure top players could beat you on autopilot in chess.


You can win games with good micro, etc...


In chess you can win games with good strategy, good positional play, good tactics, good endgame play, etc.


The game is not only about strategy but about forcing players to make mistakes


Which is true in all chess.


by giving them less time to think,


Which in chess is always true in some time-control formats, and true at certain points in any time control.


using your physical abilities.


I guess this refers to the ability to execute micro/multitasking/macro. I strongly dispute that this is a physical ability. You don't need 'fast muscles' or 'strong fingers' to micro as a top player. It is about connections in your brain.


There is a reason foreign terrans can't do anything whereas Koreans can win in their scene: terran is too hard mechanically compared to the other two races. So there are very concrete examples of why handling the mechanics issue is important for the integrity of the competition between AI and humans in SC.
Plus we humans perform these mechanical tasks with various levels of performance.
Strict training allows top players to reduce the variance of their performance, but there is still some. So you can't just allow the AI to micro as well as the best human ever achieved (unfair for the human), nor can you only allow it to micro like the best players micro on average (unfair for the AI).


I don't get this. Computer scientists are in the business of making powerful computers and smart algorithms. Every now and then they decide to show off their abilities by creating an AI that can play, and hopefully beat, human players. There is no 'fairness'. If the challenge the CS people have set is to create an AI with the decision-making ability of a SC player, the issue of mechanics and micro is irrelevant. The computer can have APM far beyond what is humanly possible. A car runs faster than a human can possibly run. That is just how things are.
If they want to create a robot arm that moves around the mouse, to show they can meet the mechanical challenge of the robotics involved there, they do that.

In the end humans are biological machines. Our performance has variance, always, because the biochemistry has a certain level of stochasticity to it, being made of biochemical reactions. So with the same input, the output is not always the same. That is where 'human mistakes' come from. And that is something code doesn't have.


In chess, moving the piece is a trivial task for both the human and the robot; it is not part of the challenge. Controlling the mouse and keyboard in SC isn't part of the challenge either, for the human or for the robot/AI.


So players will agree on something they thought was fair, but then they'll want a rematch with other conditions because maybe it wasn't that fair.


But aren't the initial game conditions agreed on?


edit: and the game being "solved" or not is a huge deal. I know I won't ever bother to learn how to play chess precisely because it's solved. And now I know I don't have to waste time learning Go either. Some people think like me, some don't care, but I'm pretty sure I'm not the only one so this fairness thing is very important. For example, when there was a lot of hype with people training themselves to succeed in the 2048 game, I didn't even bother to try to learn how to achieve it, because I could just launch a bot that does it for me, which I did, so better not waste my time with that game.


Maybe you need to think about why you play a game, or do anything you do. You decide you like to do certain things. But then when an AI shows up and does it better than you, you decide to stop doing it. Then why did you do it in the first place? That seems odd to me.

Whatever you are doing, there is going to be a person, and probably an AI, that can do it better.
Poopi
Profile Blog Joined November 2010
France12770 Posts
Last Edited: 2017-05-28 01:21:31
May 28 2017 01:06 GMT
#495
I'm talking about Starcraft in my examples most of the time so why do you talk about chess instead?
Since you answer in a weird manner on every detail, and you have a very low post count, I can now be sure that you are trolling, so I won't bother answering anymore :/.
Edit: however the last portion is interesting so I'll answer that.
Because you can have fun with your opponent, tease each other and basically affect his feelings with your play.
I never play against AI not because they are bad, but because it's not fun. It's pointless to manner mule an AI, it's empty.
And I will keep playing Starcraft exactly because imho there is no way for the competition to be totally fair for both parties so I will always be able to argue that it doesn't prove much.
In my opinion, anyway, RTS games are not hard per se strategically, because people collectively make the metagame, so every top player eventually has the same knowledge, experience, and reactions, and it comes down to mechanics and not some magical creativity.
That is why I prefer players that have very good micro, because that's the most difficult thing. Everyone plays so fucking much, but the most ephemeral thing players have is their mechanics: your hands get old, whereas your mind can still make good decisions when you are old if you are experienced.
WriterMaru
Dav1oN
Profile Joined January 2012
Ukraine3164 Posts
May 28 2017 01:22 GMT
#496
The point is: if they make the AlphaGo AI strong enough to compete with humans under, let's say, a 250 APM limit, then it would be interesting, because perfect mechanics alone is not a game-winning situation. Perfect supply/worker production would itself eat some percentage of that 250 APM budget.
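An APM cap like the 250 suggested here could be enforced with a simple sliding-window limiter. A minimal sketch in Python; the class and method names are made up for illustration and are not part of any real SC2 API:

```python
import collections

class ApmLimiter:
    """Allow at most `max_apm` actions in any trailing one-minute window.
    Hypothetical sketch; the cap and window are illustrative."""

    WINDOW_MS = 60_000  # one minute, in milliseconds

    def __init__(self, max_apm=250):
        self.max_apm = max_apm
        self.stamps = collections.deque()  # timestamps of allowed actions

    def try_act(self, now_ms):
        # Expire timestamps that fell out of the trailing window.
        while self.stamps and now_ms - self.stamps[0] >= self.WINDOW_MS:
            self.stamps.popleft()
        if len(self.stamps) < self.max_apm:
            self.stamps.append(now_ms)
            return True   # action goes through
        return False      # over the cap: the action is dropped

# A bot that tries to act every 100 ms (600 APM attempted) gets throttled:
limiter = ApmLimiter(max_apm=250)
allowed = sum(limiter.try_act(t * 100) for t in range(1000))  # 100 seconds of spam
```

The first minute admits the full 250-action budget up front, after which the bot is held to roughly 250 actions per minute, so `allowed` comes out to 500 over these 100 seconds. This also shows why a bare cap doesn't settle the fairness debate: the AI can still spend its budget with perfect timing.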
In memory of Geoff "iNcontroL" Robinson 11.09.1985 - 21.07.2019 A tribute to incredible man, embodiment of joy, esports titan, starcraft community pillar all in one. You will always be remembered!
Ernaine
Profile Joined May 2017
60 Posts
May 28 2017 01:31 GMT
#497
Your post starts with 'In chess, [...]', but I am not allowed to talk about chess?

Most things you say here seem contradictory or just not very well thought out. OK, so now you say you prefer not to play vs AI. Fine. But earlier you and others suggested that it is pointless to play a game when there is an AI that is better than the best human.

You only registered in 2010, so maybe your way of posting is understandable. Especially after you admitted that you play to humiliate other people, and that knowing they feel worse about themselves because of you is where you find your satisfaction.

You cannot tell if I am trolling? Maybe that is what happens when a scientist talks to an illiterate college kid? They don't know if they are being hit with knowledge bombs, or being trolled?
Liquid`Drone
Profile Joined September 2002
Norway28614 Posts
May 28 2017 01:43 GMT
#498
On May 28 2017 08:44 Die4Ever wrote:
Show nested quote +
On May 28 2017 08:24 Liquid`Drone wrote:
Quite a few people think AIs semi-solving chess kinda ruined the game. I definitely feel that way about backgammon, and I think following chess tournaments where the analysis ends up being 'so the computer says this is not the best move', 'the computer says this move would be better', 'the computer says this was indeed the ideal move', 'this is a flawless game so far, the moves have been identical to the computer suggestions' is really boring. I expect AIs to become better than humans at everything humans do during my lifetime, but I also think that's largely a negative thing.

do people really say that in chess tournaments? what a horrible way to ruin the fun


I'm not really a connoisseur of chess tourneys, but with Magnus Carlsen becoming the pride of Norway, quite a few of his games have been broadcast with commentary. And I mean, it's not the only thing they say, but the way they evaluate which player is ahead is through computer calculation, and they always state what move the computer thinks is best before either player makes their move (unless the game is moving really fast). I'm sure there are other chess broadcasters who avoid this style, because indeed, I don't find it all that fun.

Moderator
Ernaine
Profile Joined May 2017
60 Posts
Last Edited: 2017-05-28 01:50:20
May 28 2017 01:50 GMT
#499
That's beside the point. At one point, people used to say 'Kasparov thinks it is a good/bad move' during some live game analysis, which is what was done just after he retired.
Poopi
Profile Blog Joined November 2010
France12770 Posts
May 28 2017 02:08 GMT
#500
On May 28 2017 10:31 Ernaine wrote:
Your post starts with 'In chess, [...]', but I am not allowed to talk about chess?

Most things you say here seem contradictory or just not very well thought out. OK, so now you say you prefer not to play vs AI. Fine. But earlier you and others suggested that it is pointless to play a game when there is an AI that is better than the best human.

You only registered in 2010, so maybe your way of posting is understandable. Especially after you admitted that you play to humiliate other people, and that knowing they feel worse about themselves because of you is where you find your satisfaction.

You cannot tell if I am trolling? Maybe that is what happens when a scientist talks to an illiterate college kid? They don't know if they are being hit with knowledge bombs, or being trolled?

What?
I said firstly to have fun with my opponent. Because that's exactly what it is. If you win you get to boast, while if you lose your opponent does. The thing is that online you can't have fun with the guy, so everyone seems like a horrible person, but when players meet IRL they can relax and have fun about their loss, because they can connect with their opponent, which isn't possible online.
Everyone on TL knows what people mean when saying "mechanics", at least most people have a rough idea.
The fact that you seemingly don't know this and rather speak from an outside perspective is really weird on such a forum.
The fact that you cherry-pick what suits you best also increases the probability of trolling.
WriterMaru
Ernaine
Profile Joined May 2017
60 Posts
May 28 2017 02:20 GMT
#501
On May 28 2017 11:08 Poopi wrote:
Show nested quote +
On May 28 2017 10:31 Ernaine wrote:
Your post starts with 'In chess, [...]', but I am not allowed to talk about chess?

Most things you say here seem contradictory or just not very well thought out. OK, so now you say you prefer not to play vs AI. Fine. But earlier you and others suggested that it is pointless to play a game when there is an AI that is better than the best human.

You only registered in 2010, so maybe your way of posting is understandable. Especially after you admitted that you play to humiliate other people, and that knowing they feel worse about themselves because of you is where you find your satisfaction.

You cannot tell if I am trolling? Maybe that is what happens when a scientist talks to an illiterate college kid? They don't know if they are being hit with knowledge bombs, or being trolled?

What?
I said firstly to have fun with my opponent. Because that's exactly what it is. If you win you get to boast, while if you lose your opponent does.


Again, let me point out what an immature, narrow-minded sense of 'fun' this is.


Everyone on TL knows what people mean when saying "mechanics", at least most people have a rough idea.
The fact that you seemingly don't know this and rather speak from an outside perspective is really weird on such a forum.


This all came from this claim of yours:

On May 28 2017 08:59 Poopi wrote:
In chess, mechanics don't matter, ...


Which I dispute. Mechanics are important in chess, in the sense in which I like to apply the word. But the issue is, it is poorly defined. And since you seem to know very little about AI, chess, CS, algorithms, or anything else, I don't see a point in debating it with you.

The fact that you cherry-pick what suits you best also increases the probability of trolling.



Sadly, you pick your own words.
Poopi
Profile Blog Joined November 2010
France12770 Posts
May 28 2017 02:28 GMT
#502
The fact that you attack my knowledge, plus the previous attack (scientist vs. illiterate), confirms that you are trolling, since you are obviously trying to trigger me into being angry, which is what trolls do.

Sadly you tried to bait the worst candidate for this :x.
Hopefully you'll have more luck in your next troll attempts!
WriterMaru
t'iELhizHedt
Profile Blog Joined May 2017
2 Posts
May 28 2017 04:44 GMT
#503
There is no way humans beat AI. Is he talking about some form of handicap for the AI?
XenoX101
Profile Joined February 2011
Australia729 Posts
May 28 2017 06:44 GMT
#504
On May 28 2017 02:55 loginn wrote:
Show nested quote +
On May 28 2017 00:16 LetaBot wrote:
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human's. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically if its mental model is accurate enough it could bypass playing games altogether, and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kind of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Blizzard already confirmed that the API will allow AIs to play the game as slowly or as fast as they want, and obviously, unless someone is watching the game, no rendering is necessary, so a major part of the per-tick workload is removed. So now the only limit is computing power, which we know Google has heaps of.

Btw the API's expected functionalities have been documented here for anyone caring to take a look :
Specs

Update 1

Update 2

From the specs one of the most interesting parts is this : The ability to load a replay and examine the state of the game as it plays.

I'm counting on AIs to point out mistakes in my play. Actually, I'm actively working on that kind of system




Then I think we can classify this as "Assisted AI", since it is allowed special privileges in the game (fast playback) that human players do not have. This makes it a bit of an easier problem, since it won't need to interpret monitor pixels or rely on ladder match-making to gather info. However, this has the downside of making the AI dependent on the source code of SC2, and not transferable to other games, unless those other games also release their APIs to the AI developers.

I guess you can't really blame them since developing AI is hard enough, forcing it to learn with the same limited information that a human has (that is purely visual info) may be outside of reach. But eventually this should be the goal, because if this can be solved, then the AI will be able to learn games that don't readily disclose their source code, or even real-world scenarios that don't have any source code. Evidently this would be a much more powerful, as well as a much more fair AI, since it doesn't need any help from anyone (Blizzard or otherwise) to become good at the game.
Sholip
Profile Blog Joined March 2014
Hungary422 Posts
May 28 2017 14:02 GMT
#505
On May 28 2017 15:44 XenoX101 wrote:
Show nested quote +
On May 28 2017 02:55 loginn wrote:
On May 28 2017 00:16 LetaBot wrote:
On May 27 2017 21:55 XenoX101 wrote:
One problem with AI learning a game like SC2 is that it can't speed up the learning process without access to the game code (i.e. it cannot play 1,000,000,000 games against itself to learn optimal strategies). So it can only play games at the same pace as humans to gather information. The good part is that it can play 24/7 where a normal player can only play 12/7, and the amount of information it can gather per match is much higher than a human's. It could technically make more precise calculations about engagements than any player.

However it may also be possible for the AI to play games in its "head", if it plays enough games to be able to understand the game mechanics well enough. So then even if it can't physically play a lot of games, it can simulate them at a much faster rate than a human can play them. Technically if its mental model is accurate enough it could bypass playing games altogether, and rely solely on its understanding of the game to run "game experiments" in its head. But the flipside is that unlike Go there are too many possible outcomes for each game, such that you would need an incredibly powerful supercomputer to run through even a small fraction of them. So the AI would have to be very well optimized to only look at "worthwhile" simulations, and ignore the billions of not so valuable simulations (e.g. how to deal with mass reapers or a similarly weak strategy) that would only waste processing time.

EDIT: Thinking about this more I see one way that humans can still win, and that is through "sub-optimal" play that the AI would not expect or would be willing to gamble losing to because most players wouldn't do it. This would be something like a DT rush against someone going Robo tech with observers, or a transition into mass Muta after Terran has gone viking. If the AI doesn't scout it, it would not expect this kind of play. On average it will still likely win most games because of the balance of probabilities, but it would not win 100% due to these kind of oddball games where it would have to predict stupidity. Though this is more reflective of the game itself than the AI, where there are always going to be games that lead to inevitable build order losses. So the real test isn't whether AI can always beat human players, or even if it can beat the best players, but whether it can do so with a higher winrate than any existing player, i.e. greater than Flash's 70% in all matchups.



Deepmind works closely together with Blizzard, so they will probably have some way to speed up the game. Sub-optimal play won't work either, since even in BW there are bots that can consider the possible build orders and unit compositions based purely on the time of the game (there are only so many minerals you can gather in a certain amount of time).

The main issue of bots right now is actually micromanagement. Even with unlimited apm you still have to make tactical decisions, which bots aren't good at yet.


Blizzard already confirmed that the API will allow AIs to play the game as slowly or as fast as they want, and obviously, unless someone is watching the game, no rendering is necessary, so a major part of the per-tick workload is removed. So now the only limit is computing power, which we know Google has heaps of.

Btw the API's expected functionalities have been documented here for anyone caring to take a look :
Specs

Update 1

Update 2

From the specs one of the most interesting parts is this : The ability to load a replay and examine the state of the game as it plays.

I'm counting on AIs to point out mistakes in my play. Actually, I'm actively working on that kind of system




Then I think we can classify this as "Assisted AI", since it is allowed special privileges in the game (fast playback) that human players do not have. This makes it a bit of an easier problem, since it won't need to interpret monitor pixels or rely on ladder match-making to gather info. However, this has the downside of making the AI dependent on the source code of SC2, and not transferable to other games, unless those other games also release their APIs to the AI developers.

I guess you can't really blame them since developing AI is hard enough, forcing it to learn with the same limited information that a human has (that is purely visual info) may be outside of reach. But eventually this should be the goal, because if this can be solved, then the AI will be able to learn games that don't readily disclose their source code, or even real-world scenarios that don't have any source code. Evidently this would be a much more powerful, as well as a much more fair AI, since it doesn't need any help from anyone (Blizzard or otherwise) to become good at the game.


Yeah, I see this kind of assistance as pretty much necessary. Strictly speaking, even with the API's help, the AI itself, making the strategic decisions, is still realized fairly in my opinion, because it doesn't have access to more information than humans do; it just gathers said information in a different way. Creating an AI that can interpret all the information purely (audio)visually as humans do is not strictly a "strategic" task, so to speak, and I imagine it would be significantly harder to realize than the already impressive goal set for AlphaGo right now (not sure if it would be possible at all currently). It would be closer to a fully functional human AI than to a StarCraft bot, in my opinion.
"A hero is no braver than an ordinary man, but he is brave five minutes longer. Also, Zest is best." – Ralph Waldo Emerson
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
December 06 2017 13:59 GMT
#506
Dang! The newest breed of DeepMind's AI, AlphaZero, defeated the top chess engine Stockfish after training for only 4 hours!

https://www.theverge.com/2017/12/6/16741106/deepmind-ai-chess-alphazero-shogi-go
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
LoneYoShi
Profile Blog Joined June 2014
France1348 Posts
December 06 2017 14:31 GMT
#507
This was pretty much the "logical" next step after they created AlphaGo Zero. By going "full reinforcement learning" and nothing else, the rules of each game are just new parameters passed to the program and are not hardcoded into it anymore. So making the same program play different games was pretty logical.

It's still damn impressive though. Especially since it (allegedly) runs on lighter hardware than all previous AlphaGo iterations!

However, I don't believe they could just "plug" that same program into SC2. All the previous games are kind of similar in structure (perfect information, turn-based, etc.), and SC2 is pretty far from that.

Still, by going with a full reinforcement learning approach, an AlphaSC2 would evolve its own meta and not be affected by ours at all. Watching that would be super interesting!
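The "rules as parameters" idea can be illustrated with a toy: a generic solver that knows nothing about any particular game and receives the rules as a plain object. This sketch uses exhaustive game-tree search on single-heap Nim rather than the neural-network self-play AlphaZero actually uses, and all names are made up for illustration:

```python
from functools import lru_cache

class NimRules:
    """Single-heap Nim: take 1..max_take objects; taking the last one wins.
    Just an example ruleset to feed the generic solver below."""
    def __init__(self, max_take=3):
        self.max_take = max_take

    def moves(self, state):
        return range(1, min(self.max_take, state) + 1)

    def apply(self, state, move):
        return state - move

    def is_terminal(self, state):
        return state == 0  # nothing left: the player to move has already lost

def solve(rules):
    """Generic negamax over any ruleset exposing moves/apply/is_terminal."""
    @lru_cache(maxsize=None)
    def value(state):
        if rules.is_terminal(state):
            return -1  # the player to move has lost
        # Our best move flips the sign of the opponent's resulting value.
        return max(-value(rules.apply(state, m)) for m in rules.moves(state))
    return value

value = solve(NimRules(max_take=3))
# Heaps that are a multiple of 4 are losses for the player to move:
results = {n: value(n) for n in range(1, 9)}
```

Swapping in a different ruleset object changes the game without touching `solve`, which is the (much simplified) spirit of what the post above describes.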
AbouSV
Profile Joined October 2014
Germany1278 Posts
December 06 2017 17:37 GMT
#508
I thought one of the basic ideas was to train on actual matches, not against itself?
sjh
Profile Joined April 2010
Canada136 Posts
December 06 2017 18:17 GMT
#509
On December 07 2017 02:37 AbouSV wrote:
I thought one of the basic ideas was to train on actual matches, not against itself?


Nah it needs to play against itself to play enough games that it can learn effectively. It learned chess in 4 hours, but played 44 million games. If it was restricted to real-time play it would never get good.
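The point is easy to check with back-of-the-envelope arithmetic; the 30-minutes-per-game figure below is my own assumption for a real-time chess game, not from the paper:

```python
games = 44_000_000   # self-play games reported for AlphaZero's 4-hour chess run
train_hours = 4

# Training throughput across the whole system:
games_per_second = games / (train_hours * 3600)

# If each game instead took an assumed real-time 30 minutes, played back to back:
minutes_per_game = 30
realtime_years = games * minutes_per_game / 60 / 24 / 365
```

That works out to roughly 3,000 games per second during training, versus on the order of 2,500 years of continuous real-time play, which is why accelerated self-play is essential.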
Ceterum ceseo Protatem esse delendam
CrypticCoins
Profile Joined December 2017
8 Posts
Last Edited: 2017-12-06 18:22:54
December 06 2017 18:19 GMT
#510
On December 07 2017 02:37 AbouSV wrote:
I thought one of the basic ideas was to train on actual matches, not against itself?


My understanding is that they made a new AlphaGo that can learn solely by playing against itself (no external data), and it eventually does massively better than one that was trained on actual matches (that is, fed previous matches by humans, then allowed to learn through its own games).

I think I heard one commentator say that this new AlphaGo was able to trounce the version that beat Sedol. That's pretty amazing to me, considering how crazy good the normal AlphaGo was.
Mendelfist
Profile Joined September 2010
Sweden356 Posts
December 06 2017 18:28 GMT
#511
On December 06 2017 23:31 LoneYoShi wrote: It's still damn impressive though. Especially since it (allegedly) runs on lighter hardware than all previous AlphaGo iterations!

I want to point out that the hardware used for AlphaZero and Stockfish was very different. According to the Stockfish development forum, AlphaZero used 4 TPUs of 45 teraflops each, which vastly outmatched the hardware used for Stockfish. If given the same processing power, it's not at all clear that AlphaZero would have won.
GoloSC2
Profile Joined August 2014
710 Posts
December 06 2017 18:54 GMT
#512
On December 07 2017 03:28 Mendelfist wrote:
Show nested quote +
On December 06 2017 23:31 LoneYoShi wrote: It's still damn impressive though. Especially since it (allegedly) runs on lighter hardware than all previous AlphaGo iterations!

I want to point out that the hardware used for AlphaZero and Stockfish was very different. According to the Stockfish development forum, AlphaZero used 4 TPUs of 45 teraflops each, which vastly outmatched the hardware used for Stockfish. If given the same processing power, it's not at all clear that AlphaZero would have won.


Hmm, in the forum (I assume I found the same one as you) they also mention that the paper states AlphaZero evaluates about 80k positions per turn while Stockfish evaluates 70 million. So I don't think it has much to do with the hardware. I'm not sure if the TPUs are used for learning in which case that would just take longer, but still.
"Code S > IEM > Super Tournament > Homestory Cup > Blizzcon/WESG > GSL vs The World > Invitational tournaments in China with Koreans > WCS events" - Rodya
Mendelfist
Profile Joined September 2010
Sweden356 Posts
December 06 2017 19:25 GMT
#513
On December 07 2017 03:54 GoloSC2 wrote:
Hmm, in the forum (I assume I found the same one as you) they also mention that the paper states AlphaZero evaluates about 80k positions per turn while Stockfish evaluates 70 million. So I don't think it has much to do with the hardware. I'm not sure if the TPUs are used for learning in which case that would just take longer, but still.

The number of evaluated positions per turn or second is not relevant when comparing different engines. AlphaZero gets its strength from a very good but processing-heavy evaluation function, hence its slow speed.
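The figures cited from the paper already make this concrete: Stockfish compensates for a cheap evaluation with raw search volume, while AlphaZero does the opposite. A quick check of the ratio:

```python
alphazero_positions = 80_000       # positions evaluated per move, as cited above
stockfish_positions = 70_000_000

search_ratio = stockfish_positions / alphazero_positions
# Stockfish examines about 875x more positions per move, so AlphaZero's
# strength must come from a far more accurate (and costly) evaluation.
```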
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
December 06 2017 19:26 GMT
#514
On December 07 2017 03:54 GoloSC2 wrote:
Show nested quote +
On December 07 2017 03:28 Mendelfist wrote:
On December 06 2017 23:31 LoneYoShi wrote: It's still damn impressive though. Especially since it (allegedly) runs on lighter hardware than all previous AlphaGo iterations!

I want to point out that the hardware used for AlphaZero and Stockfish was very different. According to the Stockfish development forum, AlphaZero used 4 TPUs of 45 teraflops each, which vastly outmatched the hardware used for Stockfish. If given the same processing power, it's not at all clear that AlphaZero would have won.


Hmm, in the forum (I assume I found the same one as you) they also mention that the paper states AlphaZero evaluates about 80k positions per turn while Stockfish evaluates 70 million. So I don't think it has much to do with the hardware. I'm not sure whether the TPUs are only used for training, in which case training would just take longer, but still.


Yeah, but they gave Stockfish 64 cores and only a 1 GB hash, which is pretty sub-optimal no matter how you cut it.
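For reference, hash size and thread count are ordinary UCI options, so a fairer configuration would have been a one-line change. A hedged sketch of the relevant UCI commands (the values here are illustrative, not the actual match settings):

```
uci
setoption name Threads value 64
setoption name Hash value 32768
isready
go movetime 60000
```

`Hash` is the transposition-table size in MB; with 64 cores, 1024 MB fills up very quickly at 70M nodes per second.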
saalih905
Profile Joined June 2017
8 Posts
December 06 2017 19:45 GMT
#515
An AI being successful at BW would probably be the same as passing the Turing Test. There are aspects of BW that I don't think can be learned by a machine. Remember, AI can never be intuitive, so it can never want or need. Baduk is turn-based, so the AI makes its move accordingly; when will an AI want or need to attack its opponent in BW? Never.
Mendelfist
Profile Joined September 2010
Sweden356 Posts
December 06 2017 19:52 GMT
#516
In any case, I'm not trying to downplay Deepmind's success here. The mind-boggling thing is that AlphaZero got its world-class strength through four hours of self-play with no human input, not that it beat Stockfish, which is questionable.
GothGirlGames
Profile Joined September 2017
167 Posts
December 06 2017 20:21 GMT
#517
An AI needs a fixed set of rules to be really strong; most if not all board games keep the same ruleset forever, hence AI can get extremely good at them.
StarCraft 2 has patches that change a lot of things, or even add or remove something from the game.

The most logical approach, as I see it, would be to set the AI to pick Random and then teach it to execute the hardest-to-stop cheeses with every race. Then maybe it would put up top-GM performances, because it might be too late to scout, and if the human throws down a wall of turrets/bunkers the AI would be made to either cancel the rush and go economy or find another path, such as transporting its forces behind the turrets or taking map control.

Anyhow, there are many theories about this subject. My main point is that StarCraft is a game with rule changes and different maps; chess and Go etc. are played with a fixed set of rules on a fixed map/board.
DSh1
Profile Joined April 2017
292 Posts
December 06 2017 20:41 GMT
#518
On December 07 2017 04:45 saalih905 wrote:
An AI being successful at BW would probably be the same as passing the Turing Test. There are aspects of BW that I don't think can be learned by a machine. Remember, AI can never be intuitive, so it can never want or need. Baduk is turn-based, so the AI makes its move accordingly; when will an AI want or need to attack its opponent in BW? Never.


But there are also some aspects that the AI has the advantage in. E.g. it has the potential to use its apm more efficiently than any human opponent.
GoloSC2
Profile Joined August 2014
710 Posts
December 07 2017 10:05 GMT
#519
On December 07 2017 05:21 GothGirlGames wrote:
An AI needs a fixed set of rules to be really strong; most if not all board games keep the same ruleset forever, hence AI can get extremely good at them.
StarCraft 2 has patches that change a lot of things, or even add or remove something from the game.

The most logical approach, as I see it, would be to set the AI to pick Random and then teach it to execute the hardest-to-stop cheeses with every race. Then maybe it would put up top-GM performances, because it might be too late to scout, and if the human throws down a wall of turrets/bunkers the AI would be made to either cancel the rush and go economy or find another path, such as transporting its forces behind the turrets or taking map control.

Anyhow, there are many theories about this subject. My main point is that StarCraft is a game with rule changes and different maps; chess and Go etc. are played with a fixed set of rules on a fixed map/board.


While you're right that the rules of SC2 change with patches, I don't think that's that big a problem, as you'd just have to run the learning process again. I mean, humans have to adapt to the new patch as well.
Also, "teaching" the AI to cheese is probably the worst approach; at least AlphaGo's strength improved with fewer restrictions from the programming side. AlphaGo changed the Go meta, if you will; its whole point is not to do well at what humans came up with but to find strategies/tactics on its own.
I don't necessarily want to disagree with you, because, as you say, Go/chess and SC2 (or SC:R for that matter) are very different, but after reading a bit into it I find it hard to stay skeptical of the AI's potential.

"Code S > IEM > Super Tournament > Homestory Cup > Blizzcon/WESG > GSL vs The World > Invitational tournaments in China with Koreans > WCS events" - Rodya
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 07 2017 10:27 GMT
#520
It is just board games: if you have a general-ish algorithm to win at Go, then it makes sense you could use the same algorithm to win at chess. And if you have a really good algorithm for Go, then improving it slightly (for a game like Go, where it has no competition other than a previous version of itself, and where the better player always wins) will create these results of a seemingly unbeatable engine.

But they clearly can't just trivially adapt this algorithm to SC2, or else they would have done it by now. AlphaGo Zero only needed hours or days of training.

I would guess they don't have to start from scratch, but it might be a while before they know how to use these techniques for SC2.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Drake
Profile Joined October 2010
Germany6146 Posts
December 07 2017 12:53 GMT
#521
the problem is you have to allow the AI to only be as good as you let it be,
because you have to cap its speed of actions.

so how much do you give the AI? 300? 400? 500 APM?
with no restriction an AI has 1,000,000 APM and can do everything almost instantly, one thing after another...

even in the best case you can only cap it at something like Flash's max speed, so normal players would be playing against a Flash-speed AI...

no, this game isn't really suitable for this AI, because with no restrictions even a normal AI would crush humans here
Nb.Drake / CoL_Drake / Original Joined TL.net Tuesday, 15th of March 2005
Kuraku
Profile Joined December 2017
1 Post
December 13 2017 02:54 GMT
#522
I don't even think an AI needs high APM. Humans have hundreds of APM because they make many ineffective actions: selecting units multiple times, giving the same order multiple times.

I think a good AI would probably be good enough with 60-120 APM, because all of its actions would be effective actions (120 APM means 2 effective actions per second), assuming the AI takes actions in order of priority.
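The arithmetic behind that estimate, as a small sketch (the 10-second engagement length is an illustrative assumption, not something from the thread):

```python
def apm_to_actions_per_sec(apm: float) -> float:
    """Convert actions per minute to actions per second."""
    return apm / 60.0

# Action budget at various caps over an illustrative 10-second engagement.
for apm in (60, 120, 180, 300):
    aps = apm_to_actions_per_sec(apm)
    print(f"{apm} APM = {aps:.1f} actions/s -> {aps * 10:.0f} actions per 10 s fight")
```

At a 120 APM cap the agent gets 20 actions for a whole 10-second fight, which is the budget the later posts argue over.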
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
Last Edited: 2017-12-13 02:59:52
December 13 2017 02:59 GMT
#523
On December 13 2017 11:54 Kuraku wrote:
I don't even think an AI needs high APM. Humans have hundreds of APM because they make many ineffective actions: selecting units multiple times, giving the same order multiple times.

I think a good AI would probably be good enough with 60-120 APM, because all of its actions would be effective actions (120 APM means 2 effective actions per second), assuming the AI takes actions in order of priority.


There's no misclicking either, and the APM isn't constrained by human limitations (such as doing consecutive actions on the same area of the map). Even with a small APM an AI would have some advantages of its own.

But yeah with no APM cap things are ridiculously easy for the AI.
FrkFrJss
Profile Joined April 2015
Canada1205 Posts
December 13 2017 03:13 GMT
#524
On December 13 2017 11:59 ZigguratOfUr wrote:
On December 13 2017 11:54 Kuraku wrote:
I don't even think an AI needs high APM. Humans have hundreds of APM because they make many ineffective actions: selecting units multiple times, giving the same order multiple times.

I think a good AI would probably be good enough with 60-120 APM, because all of its actions would be effective actions (120 APM means 2 effective actions per second), assuming the AI takes actions in order of priority.


There's no misclicking either, and the APM isn't constrained by human limitations (such as doing consecutive actions on the same area of the map). Even with a small APM an AI would have some advantages of its own.

But yeah with no APM cap things are ridiculously easy for the AI.


I agree that a good AI will have the highest effective APM, but the thing is, at 2 actions per second, you can't defend a double drop and manage a push at the front. Assuming that an action is changing the camera position, you would have used up your two moves/second by switching the camera and dealing with a single drop let alone dealing with two other threats.

Keep in mind that while the AI is doing this, they also have to macro. So while that double drop and push at the front is going on, they slip on their macro and forget to make units.

Or how does an AI stutter-step or use blink micro with 120 APM while macroing? I'm not denying the effectiveness of a computer that knows the optimal move at any given moment, but in times of stress, people's APM spikes into the 400+ range, and that includes effective actions like macro as well as sometimes less effective actions in micro.
"Keep Moving Forward" - Walt Disney
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
Last Edited: 2017-12-13 03:21:59
December 13 2017 03:21 GMT
#525
On December 13 2017 12:13 FrkFrJss wrote:
On December 13 2017 11:59 ZigguratOfUr wrote:
On December 13 2017 11:54 Kuraku wrote:
I don't even think an AI needs high APM. Humans have hundreds of APM because they make many ineffective actions: selecting units multiple times, giving the same order multiple times.

I think a good AI would probably be good enough with 60-120 APM, because all of its actions would be effective actions (120 APM means 2 effective actions per second), assuming the AI takes actions in order of priority.


There's no misclicking either, and the APM isn't constrained by human limitations (such as doing consecutive actions on the same area of the map). Even with a small APM an AI would have some advantages of its own.

But yeah with no APM cap things are ridiculously easy for the AI.


I agree that a good AI will have the highest effective APM, but the thing is, at 2 actions per second, you can't defend a double drop and manage a push at the front. Assuming that an action is changing the camera position, you would have used up your two moves/second by switching the camera and dealing with a single drop let alone dealing with two other threats.

Keep in mind that while the AI is doing this, they also have to macro. So while that double drop and push at the front is going on, they slip on their macro and forget to make units.

Or how does an AI stutter-step or use blink micro with 120 APM while macroing? I'm not denying the effectiveness of a computer that knows the optimal move at any given moment, but in times of stress, people's APM spikes into the 400+ range, and that includes effective actions like macro as well as sometimes less effective actions in micro.


Yeah, but conversely at 300 apm an AI can probably make widow mines completely useless. No matter what sufficiently large number you choose the AI will be able to do inhuman stuff, so limiting it to lower than a pro human's EAPM is probably fairer, though in truth there is no setup that will make everyone happy. Maybe 120 is too low though.
leublix
Profile Joined May 2017
493 Posts
December 13 2017 03:26 GMT
#526
On December 13 2017 11:59 ZigguratOfUr wrote:
There's no misclicking either, and the APM isn't constrained by human limitations (such as doing consecutive actions on the same area of the map). Even with a small APM an AI would have some advantages of its own.

That's probably why an apm cap is too simple. You need some kind of limiter for consecutive actions/mouse speed.
FrkFrJss
Profile Joined April 2015
Canada1205 Posts
December 13 2017 04:11 GMT
#527
On December 13 2017 12:21 ZigguratOfUr wrote:
On December 13 2017 12:13 FrkFrJss wrote:
On December 13 2017 11:59 ZigguratOfUr wrote:
On December 13 2017 11:54 Kuraku wrote:
I don't even think an AI needs high APM. Humans have hundreds of APM because they make many ineffective actions: selecting units multiple times, giving the same order multiple times.

I think a good AI would probably be good enough with 60-120 APM, because all of its actions would be effective actions (120 APM means 2 effective actions per second), assuming the AI takes actions in order of priority.


There's no misclicking either, and the APM isn't constrained by human limitations (such as doing consecutive actions on the same area of the map). Even with a small APM an AI would have some advantages of its own.

But yeah with no APM cap things are ridiculously easy for the AI.


I agree that a good AI will have the highest effective APM, but the thing is, at 2 actions per second, you can't defend a double drop and manage a push at the front. Assuming that an action is changing the camera position, you would have used up your two moves/second by switching the camera and dealing with a single drop let alone dealing with two other threats.

Keep in mind that while the AI is doing this, they also have to macro. So while that double drop and push at the front is going on, they slip on their macro and forget to make units.

Or how does an AI stutter-step or use blink micro with 120 APM while macroing? I'm not denying the effectiveness of a computer that knows the optimal move at any given moment, but in times of stress, people's APM spikes into the 400+ range, and that includes effective actions like macro as well as sometimes less effective actions in micro.


Yeah, but conversely at 300 apm an AI can probably make widow mines completely useless. No matter what sufficiently large number you choose the AI will be able to do inhuman stuff, so limiting it to lower than a pro human's EAPM is probably fairer, though in truth there is no setup that will make everyone happy. Maybe 120 is too low though.


At 300 basically EPM, they can do a lot of things better than humans. I think 300 as an upper limit is probably too strong in that case. Having a limit at all, however, will make it so that in the most action-intensive moments, the AI will be at an intrinsic disadvantage because it cannot go higher, and even if a human has ineffective apm, I'm guessing that there are moments where their EPM has been higher than 300.
"Keep Moving Forward" - Walt Disney
pvsnp
Profile Joined January 2017
7676 Posts
December 13 2017 05:24 GMT
#528
APM limits are kind of a trivial point right now since they haven't even gotten the AI to perform the proper actions. It could have infinite apm right now and it wouldn't make (much) of a difference.

120 is probably just the working limit they've set for now, I'm sure they can adjust it if necessary, after the AI is sufficiently trained so as to have an idea of what to actually do.
Denominator of the Universe
TL+ Member
Jett.Jack.Alvir
Profile Blog Joined August 2011
Canada2250 Posts
December 13 2017 05:41 GMT
#529
Not sure if anyone linked this video (I can't sift through 27 pages to find out) but I think it would add relevant information to the discussion



So it seems even if the AI had unlimited APM, the challenge is getting it to use it efficiently.
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
December 13 2017 06:00 GMT
#530
Well, yeah, getting it to work with neural networks and reinforcement learning so that an AI learns StarCraft autonomously is immensely difficult.

Nevertheless, if they really want to claim that the AI can beat humans on a more or less equal footing, they can't do it off the back of inhuman micro.
pvsnp
Profile Joined January 2017
7676 Posts
Last Edited: 2017-12-13 06:51:24
December 13 2017 06:45 GMT
#531
Linked the github repo, for anyone technically literate (@ZigguratOfUr).

https://github.com/deepmind/pysc2
Denominator of the Universe
TL+ Member
DSK
Profile Blog Joined February 2015
England1110 Posts
December 13 2017 08:01 GMT
#532
Perhaps a good idea would be to have APM/EAPM classes or ratings, like 50, 60, 70, 80 and so on. The problem is whether to match the class to a player of the same ability or merely ramp up the AI's class after every loss.

Either way, it's difficult to come up with a fair playing field for AI and player alike.
**@ YT: SC2POVs at https://www.youtube.com/c/SC2POVsTV | https://liquipedia.net/starcraft2/SC2POVs @**
Archiatrus
Profile Joined June 2014
Germany64 Posts
December 13 2017 08:32 GMT
#533
I think limiting the APM is overrated when, even with unlimited APM, the bots (for me they are bots until I see "smartness") are not able to beat even medium-skilled humans. I played a little with the API, and the bot had 148401 APM. And it is true, on an open field with lings and a few banelings, not one baneling even comes close... but then there is a choke or a ramp etc. and boom, all the marines are gone. And good human players are fast to adapt to something like this. Another example: slow lings against a Reaper. Just kiting backwards is the quickest way into a wall/corner, and there even 150k APM doesn't help you (actually, too-fast move commands cancel the cliff jump, so they even harm you). Situational awareness is what's needed for an AI, and that is not related to APM. So I would make two milestones out of it. But I know Deepmind is limiting themselves to 300 APM(?).

Also keep in mind the bots don't have the same universal access to data as you have in the editor. For example, there is no "projectile unit" you can simply blink backwards from if it is near. You also don't have access to the target unit or weapon cooldown of enemy units. So many of the fancy micro-bot videos are not easily transferable.
Poopi
Profile Blog Joined November 2010
France12770 Posts
December 13 2017 10:50 GMT
#534
On December 13 2017 15:45 pvsnp wrote:
Linked the github repo, for anyone technically literate (@ZigguratOfUr).

https://github.com/deepmind/pysc2

Fuck yeah it's in Python!
They look far from succeeding tho
WriterMaru
Excludos
Profile Blog Joined April 2010
Norway8021 Posts
Last Edited: 2017-12-13 11:07:12
December 13 2017 11:06 GMT
#535
An AI just beat some of the world's best Dota players after only 2 weeks of training and you guys think it will take 5-10 years or more to develop one which can do the same in starcraft? Come on. Sure it's more strategically challenging, but it can also multitask perfectly. I was honestly surprised to find it hadn't already surpassed humans.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
December 13 2017 11:15 GMT
#536
seems the point about weapon cool down has been raised on the sc2 api forums

WEAPON COOLDOWN NOT SET FOR ENEMY UNITS
https://us.battle.net/forums/en/sc2/topic/20759386520

Official Blizzard post:
The C++ documentation looks wrong. It currently isn't being output for enemy units.

We've gotten feedback from a bunch of people that exposing this would be useful. We omitted it since it is in the grey area of what information a human player would be able to see. However you can roughly infer it from unit's animation.

If you think this is important, can you add it as an issue to the api GitHub page?
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 13 2017 11:22 GMT
#537
I don't know where it was said that Deepmind was limiting its APM. Does anyone have a link? Anyway, I had some new thoughts on this.

1. It might be hard to gauge the strength of a StarCraft AI. AlphaGo could test its strength by playing versus bots of reasonable strength, but these don't exist for StarCraft. The standard SC2 AI is just very easy to exploit, and it cannot learn from its mistakes or adapt when it sees that something is not working. That is very different from Go or chess engines, which are vastly more sophisticated and are capable of reasonable decisions in any sort of situation. So given that it might be difficult to estimate its strength, having unlimited APM provides a safety margin in case they ever publicly challenge a human player.

On the other hand, there is an acceptable way for an AI to test itself: playing on ladder. AlphaGo, at one point, was allowed to play online for a week, going 60-0 against top pros. If there was an opponent with odd decisions and inhuman levels of APM, a hypothetical AlphaSC could never enjoy anonymity, no matter the outcome of the games. With limited APM it would enjoy more security.

2. There is no real precedent for limiting APM. Existing AIs don't do this, and while chess engines might limit the hardware available, they do not limit their calculation ability, which has historically been their main strength. If there is any question of whether machines can beat humans in a game of StarCraft, then neutering the engine so that its main advantage dissipates, purely to prove a point about its superiority, is at least dubious.

3. Deepmind has access to vastly powerful hardware, likely some of the best in the world. They need to configure the learning process to use this hardware efficiently. So far they have tackled very slow-paced arcade games and board games with arbitrary time restrictions. StarCraft is a fast-paced real-time game, which requires a lot of computing power to simulate. Furthermore, it involves a lot of low-level decisions which are fairly obvious and which consist of a sequence of steps to execute. If you try to learn chess it might make sense to play a lot of chess games very quickly, at a rate of, say, one per second. Your decisions might suffer, because you have only milliseconds per move to think, but that is okay because you compensate with volume of games. However, for SC2 it might be true that you can learn more quickly with fewer decisions and lower APM. But unlike chess and Go, the gameplay vastly differs depending on APM because there is a real-time component. A good move is still a good move in chess, even with different time controls. But if I only get to make 1 action per second, then I have to be very careful about my prioritization in SC2.

4. Deepmind probably doesn't care as much if there are some whispers about how any showmatch is unfair. They had an unfair showmatch for chess, and people didn't care. If an AI can quickly crush any human player using some obscure marine rush, that will still provide headlines, even if from an AI perspective it is not as impressive as strategically outthinking humans. Deepmind probably keeps PR separate from its internal assessment of the quality of its AI. It's not like they will be done with SC2 the moment they beat a human player, because SC2 is so rich and complex.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
graNite
Profile Blog Joined December 2010
Germany4434 Posts
Last Edited: 2017-12-13 11:25:44
December 13 2017 11:25 GMT
#538
On December 13 2017 17:32 Archiatrus wrote:
Also keep in mind the bots don't have the same universal access to data as you have in the editor. For example, there is no "projectile unit" you can simply blink backwards from if it is near. You also don't have access to the target unit or weapon cooldown of enemy units. So many of the fancy micro-bot videos are not easily transferable.


That is true, but they show what is possible in theory.
You could do what the Zerg is doing here:

https://www.youtube.com/watch?v=IKVFZ28ybQs

If you are just fast enough at reading which tank is shooting at which ling, you can split all the others, given enough APM.
"Oink oink, bitches" - Tasteless on Pigbaby winning a map against Flash
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 13 2017 11:29 GMT
#539
On December 13 2017 20:06 Excludos wrote:
An AI just beat some of the world's best Dota players after only 2 weeks of training and you guys think it will take 5-10 years or more to develop one which can do the same in starcraft? Come on. Sure it's more strategically challenging, but it can also multitask perfectly. I was honestly surprised to find it hadn't already surpassed humans.

The DotA result did not seem that significant to me. The AI was eventually beaten by pro players, and as far as I know it was just 1v1 mid-only, which is not a serious category. Rote execution of last hit and deny mechanics with one single hero and a limited set of items is obviously something an AI would excel at, but this does not prove a serious ability to pick teams and evaluate item and strategy choices, nor does it prove that the AI(s) can coordinate effectively as a team, nor does it prove that the AI has some level of resilience vs exploitative and off-beat strategies designed to target its weaknesses.

Chess engines were unbeatable tactically long before they ever posed a serious threat to human players in a match.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2017-12-13 11:44:16
December 13 2017 11:34 GMT
#540
Here they reference 180 apm

https://deepmind.com/documents/110/sc2le.pdf
StarCraft II: A New Challenge for Reinforcement Learning

Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with
player skill, with professional players often spiking above 500 APM. In all our RL experiments, we
act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate
players.
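The paper's "every 8 game frames ≈ 180 APM" figure roughly checks out if you assume SC2's "faster" game speed of about 22.4 simulation steps per second (the step rate is my assumption; the quote doesn't state it):

```python
STEPS_PER_SEC = 22.4      # SC2 simulation rate at "faster" speed (assumed)
FRAMES_PER_ACTION = 8     # the paper acts every 8 game frames

actions_per_sec = STEPS_PER_SEC / FRAMES_PER_ACTION   # 2.8 actions/s
apm = actions_per_sec * 60                            # 168, i.e. "about 180 APM"
print(f"~{apm:.0f} APM")
```

So the agent takes one action roughly every 0.36 seconds of game time, a steady rate rather than the bursty APM of a human player.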


Also, about the chess showmatch: it was reported that Stockfish running for 8+ hours on some people's computers (laptops, I suppose) was unable to find some of the moves AlphaZero played. I think I recall the presenter saying that once you showed Stockfish the move, however, it liked it.

Whoa! Link to the dota AI being beaten ?
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-13 11:57:41
December 13 2017 11:50 GMT
#541
On December 13 2017 20:34 mishimaBeef wrote:
Here they reference 180 apm

https://deepmind.com/documents/110/sc2le.pdf
StarCraft II: A New Challenge for Reinforcement Learning

Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with
player skill, with professional players often spiking above 500 APM. In all our RL experiments, we
act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate
players.


Also, about the chess showmatch: it was reported that Stockfish running for 8+ hours on some people's computers (laptops, I suppose) was unable to find some of the moves AlphaZero played. I think I recall the presenter saying that once you showed Stockfish the move, however, it liked it.

Whoa! Link to the dota AI being beaten ?

https://www.theflyingcourier.com/2017/9/11/16285390/elon-musk-open-ai-esports-bot-dota-2-defeated-beaten

This is all I know about it; I actually only heard about this a couple of days ago and gleaned some information from that article. My assumption was that the Dota AI was 1v1 mid using a Shadow Fiend and preset item choices. As far as I know, while the AI was self-learning, it was also given some specific subgoals, such as that last hits and denies are good, and so on. So it is not a completely general approach.

And yeah, I was just reading the paper as well looking for the APM limit. I guess I should read the information available before speculating. In any case, 180(E)APM seems reasonable because it is slightly above human capabilities while still keeping to some sort of limit so that learning can happen efficiently.

To be honest, at 180 APM you can probably still easily devise unbeatable marine rushes if your execution is good enough.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Pulimuli1
Profile Joined August 2017
33 Posts
December 13 2017 13:05 GMT
#542
On December 13 2017 20:34 mishimaBeef wrote:
Here they reference 180 apm

https://deepmind.com/documents/110/sc2le.pdf
StarCraft II: A New Challenge for Reinforcement Learning

Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with
player skill, with professional players often spiking above 500 APM. In all our RL experiments, we
act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate
players.


Also, about the chess showmatch: it was reported that Stockfish running for 8+ hours on some people's computers (laptops, I suppose) was unable to find some of the moves AlphaZero played. I think I recall the presenter saying that once you showed Stockfish the move, however, it liked it.

Whoa! Link to the dota AI being beaten ?


Yeah, it didn't even consider some of the moves AlphaZero made. After A0 made the move, Stockfish agreed that it was a good one.
Excludos
Profile Blog Joined April 2010
Norway8021 Posts
December 13 2017 13:47 GMT
#543
On December 13 2017 20:29 Grumbels wrote:
On December 13 2017 20:06 Excludos wrote:
An AI just beat some of the world's best Dota players after only 2 weeks of training and you guys think it will take 5-10 years or more to develop one which can do the same in starcraft? Come on. Sure it's more strategically challenging, but it can also multitask perfectly. I was honestly surprised to find it hadn't already surpassed humans.

The DotA result did not seem that significant to me. The AI was eventually beaten by pro players, and as far as I know it was just 1v1 mid-only, which is not a serious category. Rote execution of last hit and deny mechanics with one single hero and a limited set of items is obviously something an AI would excel at, but this does not prove a serious ability to pick teams and evaluate item and strategy choices, nor does it prove that the AI(s) can coordinate effectively as a team, nor does it prove that the AI has some level of resilience vs exploitative and off-beat strategies designed to target its weaknesses.

Chess engines were unbeatable tactically long before they ever posed a serious threat to human players in a match.


Yes, it was beaten, and then it grew stronger. That's what training an AI means.

But the significant part isn't just that it beat some players. The significant part is that it trained against itself and got good enough to beat top players within 2 weeks. During this time, all on its own, it learned to last-hit both enemy and friendly units, it learned what items to buy and when, it learned to use the courier to ferry both bottles and items, and it learned creep blocking, glyphs, dodging projectiles, and a whole lot more.

A lot of these things are easily transferable to a game like StarCraft. A good AI should easily be able to learn optimal build orders, multitasking armies with drops, perfect unit spread, map control, expansion, upgrades, etc. The one thing that might give it trouble is being pitted against another race instead of a mirror: one side might develop a strategy that the other simply isn't able to break, and the former will simply stop improving because it keeps winning with a subpar strategy. But in a mirror, like I said, I'm surprised we haven't seen someone implement an AI already.
Archiatrus
Profile Joined June 2014
Germany64 Posts
December 13 2017 15:00 GMT
#544
On December 13 2017 20:25 graNite wrote:
On December 13 2017 17:32 Archiatrus wrote:
Also keep in mind, the bots don't have the same universal access to the data as you have in the editor. For example there is no "projectile unit" where you simply blink backwards if it is near. You also don't have access to the target unit or the weapon cooldown of enemy units. So many of the fancy micro bot videos are not easily transferable.


That is true, but they show what is possible in theory.
You could do what the Zerg is doing here:

https://www.youtube.com/watch?v=IKVFZ28ybQs

If you are just fast enough at reading which tank is shooting at which ling and then splitting all others if you have enough APM.


I really hope they don't expose this in the API, because that really feels like cheating. All the data you read is exact. As a human you just assume which unit is targeted before the first shot. Building a heuristic to estimate the target is OK. But actually just knowing...
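
A heuristic of the kind I mean could be as simple as "nearest ling inside siege range". Toy sketch, nothing to do with any real bot API; the range constant is approximate and my assumption:

```python
import math

# Sieged tank range in game units -- approximate, my assumption.
TANK_RANGE = 13.0

def estimate_tank_target(tank, lings):
    """Guess which ling a sieged tank will shoot next: the closest one
    inside its range. Unlike reading the exact target out of the engine,
    this is only an estimate and can be wrong."""
    in_range = [l for l in lings if math.dist(tank, l) <= TANK_RANGE]
    if not in_range:
        return None
    return min(in_range, key=lambda l: math.dist(tank, l))

# Usage: positions as (x, y) tuples.
tank = (0.0, 0.0)
lings = [(6.0, 0.0), (20.0, 0.0), (3.0, 4.0)]
target = estimate_tank_target(tank, lings)  # (3.0, 4.0): distance 5, closest in range
```

The point is that a bot built on guesses like this can still be beaten by the same mind games that beat a human, whereas one that reads the exact target out of the engine cannot.
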

I know about the APM limit from the Discord channel for the SC2 AI Ladder. There someone from DeepMind said that it is at the moment 180, enforced as a constant number of actions per second.
LoneYoShi
Profile Blog Joined June 2014
France1348 Posts
December 13 2017 15:13 GMT
#545
On December 14 2017 00:00 Archiatrus wrote:
On December 13 2017 20:25 graNite wrote:
On December 13 2017 17:32 Archiatrus wrote:
Also keep in mind, the bots don't have the same universal access to the data as you have in the editor. For example there is no "projectile unit" where you simply blink backwards if it is near. You also don't have access to the target unit or the weapon cooldown of enemy units. So many of the fancy micro bot videos are not easily transferable.


That is true, but they show what is possible in theory.
You could do what the Zerg is doing here:

https://www.youtube.com/watch?v=IKVFZ28ybQs

If you are just fast enough at reading which tank is shooting at which ling and then splitting all others if you have enough APM.


I really hope they don't expose this in the API, because that really feels like cheating. All the data you read is exact. As a human you just assume which unit is targeted before the first shot. Building a heuristic to estimate the target is OK. But actually just knowing...

I know about the APM limit from the Discord channel for the SC2 AI Ladder. There someone from DeepMind said that it is at the moment 180, enforced as a constant number of actions per second.

You have to remember that the API (which has been developed by Blizzard with the help of DeepMind) and DeepMind's development of an AI are two different things. Having this information exposed in the API does not mean AlphaSC is going to use it.

If I remember correctly, DeepMind always mentioned that the only information their AI was going to use was the "raw input", meaning the pixels from the screen, just like a human player would. That's how they worked on their "little" Atari project, and that's how they told people they were approaching SC2 as well.

sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
December 13 2017 15:33 GMT
#546
On December 13 2017 20:50 Grumbels wrote:
On December 13 2017 20:34 mishimaBeef wrote:
Here they reference 180 apm

https://deepmind.com/documents/110/sc2le.pdf
StarCraft II: A New Challenge for Reinforcement Learning

Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with
player skill, with professional players often spiking above 500 APM. In all our RL experiments, we
act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate
players.


Also, about the chess showmatch: it was reported that Stockfish running for 8+ hours on some people's computers (laptops, I suppose) was unable to find some of the moves AlphaZero played. I think I recall the presenter saying that once you showed Stockfish the move, however, it liked it.

Whoa! Link to the dota AI being beaten ?

https://www.theflyingcourier.com/2017/9/11/16285390/elon-musk-open-ai-esports-bot-dota-2-defeated-beaten

This is all I know about it, I actually only heard about this a couple days ago and gleaned some information from that article. My assumption was that the Dota AI was 1v1 mid using a Shadowfiend and preset item choices. As far as I know, while the AI was self-learning, it was also given some specific subgoals such as that last hits and denies are good and so on. So it is not a completely general approach.

And yeah, I was just reading the paper as well looking for the APM limit. I guess I should read the information available before speculating. In any case, 180(E)APM seems reasonable because it is slightly above human capabilities while still keeping to some sort of limit so that learning can happen efficiently.
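
For reference, the arithmetic behind the paper's number is trivial (the exact frame rate is my assumption; SC2's "faster" speed is commonly cited as ~22.4 game frames per second):

```python
# One action every 8 game frames; the APM figure depends on which
# frame rate you assume for the game loop.
FRAMES_PER_ACTION = 8

def apm(frames_per_second):
    # actions/second * 60 = actions/minute
    return frames_per_second / FRAMES_PER_ACTION * 60

print(apm(24.0))   # 180.0 -> matches the paper's "about 180 APM"
print(apm(22.4))   # ~168  -> SC2 "faster" speed, same ballpark
```
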

To be honest, at 180 APM you can probably still easily devise unbeatable marine rushes if your execution is good enough.

To give some frame of reference, most pro players in BW had ~180 EAPM; Flash and Jaedong had around 200-220 EAPM at their peaks.
The harder it becomes, the more you should focus on the basics.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-13 16:18:28
December 13 2017 16:12 GMT
#547
On December 14 2017 00:33 sabas123 wrote:
On December 13 2017 20:50 Grumbels wrote:
On December 13 2017 20:34 mishimaBeef wrote:
Here they reference 180 apm

https://deepmind.com/documents/110/sc2le.pdf
StarCraft II: A New Challenge for Reinforcement Learning

Humans typically make between 30 and 300 actions per minute (APM), roughly increasing with
player skill, with professional players often spiking above 500 APM. In all our RL experiments, we
act every 8 game frames, equivalent to about 180 APM, which is a reasonable choice for intermediate
players.


Also, about the chess showmatch: it was reported that Stockfish running for 8+ hours on some people's computers (laptops, I suppose) was unable to find some of the moves AlphaZero played. I think I recall the presenter saying that once you showed Stockfish the move, however, it liked it.

Whoa! Link to the dota AI being beaten ?

https://www.theflyingcourier.com/2017/9/11/16285390/elon-musk-open-ai-esports-bot-dota-2-defeated-beaten

This is all I know about it, I actually only heard about this a couple days ago and gleaned some information from that article. My assumption was that the Dota AI was 1v1 mid using a Shadowfiend and preset item choices. As far as I know, while the AI was self-learning, it was also given some specific subgoals such as that last hits and denies are good and so on. So it is not a completely general approach.

And yeah, I was just reading the paper as well looking for the APM limit. I guess I should read the information available before speculating. In any case, 180(E)APM seems reasonable because it is slightly above human capabilities while still keeping to some sort of limit so that learning can happen efficiently.

To be honest, at 180APM you can probably still easily devise unbeatable marine rushes if your execution is good enough.

To give some frame of reference, most pro players in BW had ~180eapm, Flash and Jaedong had around 200-220 eapm at their peaks.

Honestly, eapm as a metric does not really work for comparing an AI and a human. Flash is not carefully considering every action to find the correct one, so although a click by Flash might be distinguishable from pure spam, it is probably not as purely effective as that of an AI, which can e.g. develop an internal sense of timing and only perform some macro action the moment it is necessary. And I am sure that even with 60 APM you can do unnaturally good marine micro if you are precise.

Personally I would estimate that about 100 APM for an AI is equivalent to 200 effective APM for a pro. If DeepMind uses 180 APM, that puts it around the maximum of potential human ability, or maybe a little beyond.

Actually, I wonder if a selection counts as an action; I don't think it does for DeepMind.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Archiatrus
Profile Joined June 2014
Germany64 Posts
December 13 2017 16:30 GMT
#548
On December 14 2017 00:13 LoneYoShi wrote:
On December 14 2017 00:00 Archiatrus wrote:
On December 13 2017 20:25 graNite wrote:
On December 13 2017 17:32 Archiatrus wrote:
Also keep in mind, the bots don't have the same universal access to the data as you have in the editor. For example there is no "projectile unit" where you simply blink backwards if it is near. You also don't have access to the target unit or the weapon cooldown of enemy units. So many of the fancy micro bot videos are not easily transferable.


That is true, but they show what is possible in theory.
You could do what the Zerg is doing here:

https://www.youtube.com/watch?v=IKVFZ28ybQs

If you are just fast enough at reading which tank is shooting at which ling and then splitting all others if you have enough APM.


I really hope they don't expose this in the API, because that really feels like cheating. All the data you read is exact. As a human you just assume which unit is targeted before the first shot. Building a heuristic to estimate the target is OK. But actually just knowing...

I know about the APM limit from the Discord channel for the SC2 AI Ladder. There someone from DeepMind said that it is at the moment 180, enforced as a constant number of actions per second.

You have to remember that the API (which has been developed by Blizzard with the help of DeepMind) and DeepMind's development of an AI are two different things. Having this information exposed in the API does not mean AlphaSC is going to use it.

If I remember correctly, DeepMind always mentioned that the only information their AI was going to use was the "raw input", meaning the pixels from the screen, just like a human player would. That's how they worked on their "little" Atari project, and that's how they told people they were approaching SC2 as well.



But DeepMind saying they don't use it does not mean others won't. I WILL use it in my bot once it is exposed. That does not mean I like it.
sc-darkness
Profile Joined August 2017
856 Posts
December 13 2017 20:54 GMT
#549
Can't AI win vs people just with high APM? Each unit could be micro managed individually just like the usual bots.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-13 21:47:56
December 13 2017 21:38 GMT
#550
On December 14 2017 05:54 sc-darkness wrote:
Can't AI win vs people just with high APM? Each unit could be micro managed individually just like the usual bots.

In the paper it says that DeepMind's agent can execute one command every 8 frames, which translates to about 180 APM. As far as I understand, that is an arbitrary choice they made to improve the efficiency of the learning process, and is not inherent to the environment. That is to say, they could give the AI much higher APM, but they don't think it improves the learning process. And while they might limit themselves to 180 APM even in real games, other AIs might not be so generous.

The built-in AI actually "cheats" by having access to the SC2 API, which allows for the superhuman actions that you can see in the automaton videos. DeepMind afaik seeks to somewhat mimic the human interface by having to interpret visual information and select units. However, the visual information is preprocessed to be more understandable.
StarCraft II also has a raw API, which is similar to the Broodwar API (BWAPI [1]). In this case,
the observations are a list of all visible units on the map along with the properties (unit type, owner,
coordinates, health, etc.), but without any visual component. Fog-of-war still exists, but there is no
camera, so you can see all visible units simultaneously. This is a simpler and more precise representation,
but it does not correspond to how humans perceive the game. For the purposes of comparing
against humans this is considered “cheating” since it offers significant additional information.
Using the raw API, actions control units or groups of units individually by a unit identifier. There is
no need to select individuals or groups of units before issuing actions. This allows much more precise
actions than the human interface allows, and thus yields the possibility of super-human behaviour
via this API.
Although we have not used the raw API for machine learning research, it is included as part of
the release in order to support other use cases (e.g. scripted agents and visualisation) which the
community may find useful.




This is what the AI sees: it has to select units and give them commands. I don't know if it has to use selection squares and such, though.
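
To make the contrast concrete, here's a toy sketch of the two observation/action styles. These are hypothetical structures I made up, not the actual pysc2 interfaces:

```python
from dataclasses import dataclass

@dataclass
class RawUnit:
    """Raw-API style observation: every visible unit arrives with exact
    properties and a persistent tag, commandable without selection."""
    tag: int
    unit_type: str
    owner: int
    x: float
    y: float
    health: float

def raw_api_command(units, tag, order):
    # No selection step: address any unit directly by its identifier.
    unit = next(u for u in units if u.tag == tag)
    return f"{order} -> {unit.unit_type}#{unit.tag}"

def human_like_command(screen_pixels, selection_box, order):
    # Human-style interface: the agent only sees coordinates on screen,
    # must drag a selection box first, then order the "selected" units.
    x0, y0, x1, y1 = selection_box
    selected = [(x, y) for (x, y) in screen_pixels
                if x0 <= x <= x1 and y0 <= y <= y1]
    return f"{order} -> {len(selected)} selected unit(s)"

units = [RawUnit(101, "Marine", 1, 10.0, 12.0, 45.0)]
raw = raw_api_command(units, 101, "attack")
human = human_like_command([(10, 12)], (0, 0, 20, 20), "attack")
```

The raw style needs no selection and never misclicks, which is exactly why the paper calls it "cheating" for human comparisons.
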
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-14 08:20:49
December 13 2017 22:03 GMT
#551
Sorry for the multiple posts, but I found a recent article on Brood War AI, which is also an active area of research. In fact, I wonder why DeepMind would even be interested in SC2, given that BW has a more active AI scene. For instance, Facebook targeted BW rather than SC2.

http://www.ucl.ac.uk/news/news-articles/0417/050417-starcraft-ai

Here is a video for an AI learning marine micro and such.



There was also a recent tournament where Facebook came second (e: sorry, sixth).

https://www.wired.com/story/facebook-quietly-enters-starcraft-war-for-ai-bots-and-loses/

And on an apocalyptic note, in case you're wondering why google and facebook are interested in AI research, and why they don't actually care about the games themselves:
A Microsoft research paper on machine learning this year said that improving predictions of when a user will click on an ad by just 0.1 percent would yield hundreds of millions of dollars in new revenue.
That's the end goal, enslaving humans to advertisements using machine learning.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-13 22:17:10
December 13 2017 22:16 GMT
#552
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results of the recent BW AI competition. Note that an AI created by a hobbyist that just rushes you every game won the competition, and four of the top six finishers had really short average game times. Also, afaik, in 10 years of AI research there hasn't been a single bot that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this, I would go to sleep and wake up in two years before checking whether there was any significant progress. SC2 is just vastly more complex than chess or Go, and it's not even clear whether a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
lestye
Profile Blog Joined August 2010
United States4149 Posts
December 13 2017 22:53 GMT
#553
On December 14 2017 05:54 sc-darkness wrote:
Can't AI win vs people just with high APM? Each unit could be micro managed individually just like the usual bots.

The goal isn't to win through straight-up mechanics, but through decision-making. Hence why, as far as I know, DeepMind's APM is throttled to near-human levels.
"You guys are just edgelords. Embrace your inner weeb desu" -Zergneedsfood
Cloak
Profile Joined October 2009
United States816 Posts
December 14 2017 00:34 GMT
#554
I think there are 3 issues for DeepMind in Starcraft as opposed to static games like chess/Go.

Too many degrees of freedom for movement. DeepMind learns by putting in random movements. Dumb moves in chess and Go are numerous, but dumb moves in SC2 somewhat dwarf that. The learning period will need to be more than 4 hours, more like thousands of hours.

Too many degrees of freedom with unit and resource interaction. Not just 2 bishops, but N bishops, and N pawns, and when and where should they be created? Will the computer be able to manipulate the resource balance like it can through position sense and material count for the board games?

Imperfect information and prediction. Predicting your opponent is somewhat easy in Go and chess because you can force/expect near-optimal responses from your opponent. SC2 has near-optimal responses too, theoretically, but they'll be less obvious due to positional and compositional/upgrade quirks and the sheer complexity from the aforementioned points. DeepMind will need sophisticated, dynamic rule sets in real time.

I think it's possible, but I don't think the tech is there yet.

The more you know, the less you understand.
Cuce
Profile Joined March 2011
Turkey1127 Posts
December 14 2017 08:06 GMT
#555
On December 14 2017 07:53 lestye wrote:
On December 14 2017 05:54 sc-darkness wrote:
Can't AI win vs people just with high APM? Each unit could be micro managed individually just like the usual bots.

The goal isn't to win through straight-up mechanics, but through decision-making. Hence why, as far as I know, DeepMind's APM is throttled to near-human levels.


I would have preferred it to go under human levels, since it will probably be close to perfectly efficient with it anyway.
64K RAM SYSTEM 38911 BASIC BYTES FREE
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 14 2017 08:33 GMT
#556
On December 13 2017 22:47 Excludos wrote:
On December 13 2017 20:29 Grumbels wrote:
On December 13 2017 20:06 Excludos wrote:
An AI just beat some of the world's best Dota players after only 2 weeks of training and you guys think it will take 5-10 years or more to develop one which can do the same in starcraft? Come on. Sure it's more strategically challenging, but it can also multitask perfectly. I was honestly surprised to find it hadn't already surpassed humans.

The DotA result did not seem that significant to me. The AI was eventually beaten by pro players, and as far as I know it was just 1v1 mid-only, which is not a serious category. Rote execution of last hit and deny mechanics with one single hero and a limited set of items is obviously something an AI would excel at, but this does not prove a serious ability to pick teams and evaluate item and strategy choices, nor does it prove that the AI(s) can coordinate effectively as a team, nor does it prove that the AI has some level of resilience vs exploitative and off-beat strategies designed to target its weaknesses.

Chess engines were unbeatable tactically long before they ever posed a serious threat to human players in a match.


Yes, it was beaten, and then it grew stronger. That's what training an AI means.

But the significant part isn't that it just beat some players. The significant part is that it trained against itself and got good enough to beat top players within 2 weeks. During this time, all on its own, it learned to last-hit both enemy and friendly units, it learned what items to buy and when, it learned to make use of the donkey for ferrying both bottles and items, and it learned to block friendly creeps, use glyphs, dodge projectiles and a whole lot more.

A lot of these things are easily transferable to a game like StarCraft. A good AI should easily be able to learn optimal build orders, multitasking armies with drops, perfect unit spread, map control, expansion, upgrades, etc. The one thing it might have trouble with is when it's pitted against another race instead of a mirror: one of the sides might develop a strategy that the other simply isn't able to break, and the former will then stop developing because it keeps winning with this subpar strategy. But in a mirror, like I said, I'm surprised we haven't seen someone implement an AI already.

I don't necessarily agree. Sometimes a learning process converges to a local maximum and cannot move beyond it. The AI was only tested on the most strategically shallow version of the game, where execution is paramount. It was beaten both by players outplaying it straight up and by players exploiting its lack of understanding with off-beat strategies. I'll believe that an AI can continue improving up to the point where you can no longer beat it straight up, even if you are a professional player. But it is not at all obvious that an AI can learn to defeat targeted anti-AI strategies.

So while it learned to hit glyphs and avoid projectiles, there is no direct reason to believe it could understand map control in a 5v5 setting or learn which builds are good. Other than faith in the inevitability of AI progress, but that is so indiscriminate that it only tells you AI will be able to defeat humans "eventually". And companies can easily abandon AI research for DotA.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
GoloSC2
Profile Joined August 2014
710 Posts
December 14 2017 14:50 GMT
#557
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results for the recent BW AI competition. Note that the AI created by a hobbyist that just rushes you every game won the competition, and four out of the first six winners had really short average game times. Also, afaik, in 10 years of AI research, there hasn't been a single one that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


a few months before alphago was initially released there was an article in which members of the a.i.-go community stated they believe a go program that could beat professional players was at least a decade away. the reasoning sounded quite like what you are saying, basically that go was far more complex than chess and that was shown by the fact that the best go programs at that time were playing at a low intermediate level.

note that i'm not trying to say you're necessarily wrong, the games are very different, i just want to point out that i've read something similar before and therefore doubt we can make very reasonable guesses as outsiders not involved in the development.
"Code S > IEM > Super Tournament > Homestory Cup > Blizzcon/WESG > GSL vs The World > Invitational tournaments in China with Koreans > WCS events" - Rodya
Excludos
Profile Blog Joined April 2010
Norway8021 Posts
December 14 2017 22:21 GMT
#558
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results for the recent BW AI competition. Note that the AI created by a hobbyist that just rushes you every game won the competition, and four out of the first six winners had really short average game times. Also, afaik, in 10 years of AI research, there hasn't been a single one that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


a few months before alphago was initially released there was an article in which members of the a.i.-go community stated they believe a go program that could beat professional players was at least a decade away. the reasoning sounded quite like what you are saying, basically that go was far more complex than chess and that was shown by the fact that the best go programs at that time were playing at a low intermediate level.

note that i'm not trying to say you're necessarily wrong, the games are very different, i just want to point out that i've read something similar before and therefore doubt we can make very reasonable guesses as outsiders not involved in the development.


You're not wrong; people underestimate things like this consistently. Again, have people already forgotten about the OpenAI bot beating top Dota players in 1v1 literally months ago? After only training for 2 weeks? Yes, you can argue that SC is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.
FrkFrJss
Profile Joined April 2015
Canada1205 Posts
Last Edited: 2017-12-14 23:23:04
December 14 2017 23:15 GMT
#559
On December 15 2017 07:21 Excludos wrote:
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results for the recent BW AI competition. Note that the AI created by a hobbyist that just rushes you every game won the competition, and four out of the first six winners had really short average game times. Also, afaik, in 10 years of AI research, there hasn't been a single one that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player in the next five years. So honestly, if you're only casually interested in this I would go to sleep and wake up in two years before checking if there was any significant progress. SC2 is just vastly more complex than Chess or Go, and it's not even clear if a single AI based on a general learning algorithm is capable of learning it to the point of posing any sort of challenge to a pro player.


a few months before alphago was initially released there was an article in which members of the a.i.-go community stated they believe a go program that could beat professional players was at least a decade away. the reasoning sounded quite like what you are saying, basically that go was far more complex than chess and that was shown by the fact that the best go programs at that time were playing at a low intermediate level.

note that i'm not trying to say you're necessarily wrong, the games are very different, i just want to point out that i've read something similar before and therefore doubt we can make very reasonable guesses as outsiders not involved in the development.


You're not wrong; people underestimate things like this consistently. Again, have people already forgotten about the OpenAI bot beating top Dota players in 1v1 literally months ago? After only training for 2 weeks? Yes, you can argue that SC is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.



It is true that AI development will probably move faster than people anticipate, but the thing is, that Dota match is about as representative of an actual match as co-op is of what happens in a real StarCraft 2 game. There are things that transfer over, like micro and, in StarCraft, macro to a certain extent, but they are wildly different things.

In that demonstration, it was 1 player versus 1 player, playing the same character, in one lane, with basically perfect map vision (at least in the sense that each player generally knows where the other is and what they are doing), with the same creeps duking it out in a battle of sheer mechanics and a bit of tactics.

In an actual Dota game there are 8 more players, playing several different characters that all have different spells and abilities, with bosses and camps to defeat and different objectives, not to mention managing their gold economy. So even if you became the best in the 1v1 simulation, that only means you're going to be really good in one very specific situation.

And personally, I think SC2 is more complex than Dota, so while it may not take as long as some people are thinking, I think it will take longer than your estimate.
"Keep Moving Forward" - Walt Disney
Archiatrus
Profile Joined June 2014
Germany64 Posts
Last Edited: 2017-12-15 08:13:16
December 15 2017 08:12 GMT
#560
Maybe an interesting addition to the "micro tasks are simple for AIs and only the strategy part is hard" discussion: Table 1 of this paper. Of course the paper is now four months old. But I would have thought that, for example, CollectMineralShards would be easy for the Atari-net.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 15 2017 09:58 GMT
#561
I thought this was funny, from the paper:
Convolutional networks for reinforcement learning [..] usually reduce spatial resolution of the input with each layer and ultimately finish with a fully connected layer that discards it completely. This allows for spatial information to be abstracted away before actions are inferred. In StarCraft, though, a major challenge is to infer spatial actions (clicking on the screen and minimap). As these spatial actions act within the same space as inputs, it might be detrimental to discard the spatial structure of the input.


I read somewhere that AlphaZero used the last seven moves as input for its network. This might seem odd, since theoretically in Go you only need to know the board position to come up with a correct move. The reason given was that it serves as an "attention mechanism", i.e. if you know the last couple of moves you get some information about what parts of the board are more significant. This is actually a very human way of approaching the game.
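
The "last few moves as input" trick is easy to picture: you just stack the recent board states as extra input planes. Toy numpy sketch, not AlphaZero's actual feature encoding:

```python
import numpy as np

def build_input(history, k=7):
    """Stack the k most recent board states (2D arrays) into one
    k-plane input tensor; the older positions hint at where on the
    board the action currently is -- the "attention" effect."""
    recent = history[-k:]
    # Pad with empty boards if the game is younger than k moves.
    empty = np.zeros_like(recent[0])
    planes = [empty] * (k - len(recent)) + list(recent)
    return np.stack(planes)  # shape: (k, H, W)

# Usage: three 5x5 "board snapshots" so far.
history = [np.zeros((5, 5)) for _ in range(3)]
stacked = build_input(history)
print(stacked.shape)  # (7, 5, 5)
```
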

In both these examples researchers basically have to guess what information to feed their pet network for it to be able to grow effectively. Since StarCraft is a game where spatial relationships are important, let's assume the network requires input which does not mask this. It's like nurturing an alien organism you know nothing about.
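
For what I mean by keeping spatial structure: a policy head can keep an HxW map of logits and sample the click coordinate directly from it, instead of flattening everything into a dense layer. My own toy illustration, not DeepMind's architecture:

```python
import numpy as np

def spatial_policy(screen_logits):
    """Turn an HxW map of logits into a probability over every screen
    coordinate and sample one: the 'click' action lives in the same
    spatial space as the input, so nothing is flattened away for good."""
    flat = screen_logits.ravel()
    probs = np.exp(flat - flat.max())  # numerically stable softmax
    probs /= probs.sum()
    idx = np.random.choice(flat.size, p=probs)
    y, x = np.unravel_index(idx, screen_logits.shape)
    return x, y

# A toy 4x4 "screen" where one coordinate is overwhelmingly preferred.
logits = np.zeros((4, 4))
logits[2, 3] = 50.0  # the network "wants" to click at x=3, y=2
x, y = spatial_policy(logits)
```
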
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
December 15 2017 10:20 GMT
#562
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs and only the strategy part is hard" discussion: Table 1 of this paper. Of course the paper is now four months old. But I would have thought that, for example, CollectMineralShards would be easy for the Atari-net.


I think the micro tasks have a vastly reduced action space; you basically only have to attack and move. Particularly for the mini-game where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all units selected (though you need to reselect every so often). You probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former, but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?
Archiatrus
Profile Joined June 2014
Germany64 Posts
December 15 2017 10:54 GMT
#563
On December 15 2017 19:20 Grumbels wrote:
On December 15 2017 17:12 Archiatrus wrote:
Maybe an interesting addition to the "micro tasks are simple for AIs, only the strategy part is hard" idea: Table 1 of this paper. Of course the paper is now four months old. But I would have thought that, for example, CollectMineralShards should be easy for the Atari-net.



I think the micro tasks have a vastly reduced action space: you basically only have to attack and move. Particularly in the mini-games where marines face off against roaches or zerglings/banelings, you have so few units that you only need to keep all units selected (though you need to reselect every so often), and you probably don't need to spread your units. I found it curious that DeepMind's tester could not keep up with the grandmaster on the former, but actually performed better on the latter, whereas the AIs did better on the former but worse on the latter. What is the difference? What were the correct strategies, and why couldn't the AI figure them out?


Now that you mention it, it is indeed odd. Here are replays of a GM averaging 849.7 over 25 games. Maybe the GM in the paper slept through a few instances :D
PlayerofDota
Profile Joined May 2017
29 Posts
December 15 2017 19:38 GMT
#564
It will depend on the APM limitations, if they put any on at all. I feel like there should be an APM limit on the AI, because otherwise it would be unfair to humans. Imagine if we had a brain interface and could control units directly with our thoughts.

But we have to think it in our brain, visualize it with our eyes, move the mouse, click, use the keyboard, and have it all register and appear on screen. The AI is wired in directly, so it has an inherent advantage.

Games like chess and Go are very linear, and while there might be a certain 'intuition', it's actually not that deep. It's like Diablo 3 build combinations: wasn't the number Blizzard gave something like 44 million, when in reality there were only about 50 meaningfully different builds, the rest being extremely minor variations of those 50?

So Go does involve some "intuition", but the realistic choices are far fewer than all possible combinations; in practice a given position only has 3-4 sensible moves.

Mastering a real-time strategy game will therefore require a lot more thinking power. The AI has to scout consistently, make adjustments based on that scouting, and weigh them against the strategy it is currently running as a result of earlier scouting.

Then there are decisions like when to sacrifice an army or a base in order to win the larger battle, what units to build at what time, where to position them, when to attack, retreat, harass, and so on.

And again, for it to be a fair competition and not a mechanical auto-win, the bot's APM will have to be limited to the average of pro players. Otherwise, if it can always pick up a reaver at the last millisecond, perfectly spread marines and medics so that lurkers never hit more than two units, and dance with dragoons indefinitely, it can never lose.

The onus should be on its "thinking" power and on whether it can outsmart and out-strategize humans.
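The APM cap discussed above is easy to state precisely: only allow an action if fewer than N actions have been issued in the trailing 60 seconds. Here is a minimal sliding-window sketch of such a limiter; the class and method names are hypothetical, not from any real bot framework.

```python
import time
from collections import deque

class ApmLimiter:
    """Reject agent actions that would exceed a target APM.

    Uses a sliding 60-second window of action timestamps, so bursts are
    allowed as long as the per-minute total stays under the cap.
    """

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.timestamps = deque()

    def try_act(self, now=None):
        """Return True and record the action if under budget, else False."""
        now = time.monotonic() if now is None else now
        # drop actions that have left the 60-second window
        while self.timestamps and now - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_apm:
            self.timestamps.append(now)
            return True   # action allowed
        return False      # over budget: agent must no-op this frame
```

An agent loop would call `try_act()` before issuing each command and emit a no-op when it returns False, which directly forces the "think more, click less" trade-off the post argues for.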
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2017-12-17 11:25:02
December 17 2017 11:24 GMT
#565
On December 15 2017 07:21 Excludos wrote:
On December 14 2017 23:50 GoloSC2 wrote:
On December 14 2017 07:16 Grumbels wrote:
https://www.cs.mun.ca/~dchurchill/starcraftaicomp/2017/

These were the results of the recent BW AI competition. Note that the AI that won was created by a hobbyist and just rushes you every game, and four of the top six finishers had really short average game times. Also, afaik, in 10 years of AI research there hasn't been a single bot that could compete on any level with a pro.

In the other article someone said they did not expect an AI to be able to beat a human player within the next five years. So honestly, if you're only casually interested in this, I would go to sleep and wake up in two years before checking for significant progress. SC2 is just vastly more complex than chess or Go, and it's not even clear that a single AI based on a general learning algorithm is capable of learning it well enough to pose any sort of challenge to a pro player.


a few months before alphago was initially released, there was an article in which members of the computer-go community stated they believed a go program that could beat professional players was at least a decade away. the reasoning sounded a lot like what you are saying: basically that go was far more complex than chess, as shown by the fact that the best go programs at the time played at a low intermediate level.

note that i'm not trying to say you're necessarily wrong, the games are very different. i just want to point out that i've read something similar before, and therefore doubt that we, as outsiders not involved in the development, can make very reliable guesses.


You're not wrong; people consistently underestimate things like this. Again, have people already forgotten about OpenAI beating top Dota players in 1v1 literally months ago, after training for only two weeks? Yes, you can argue that SC is more complex for sure, but it's not "decades away", or even "several years" away. AI research has absolutely skyrocketed these last few years. We are going to see an AI beat top SC players within 2018. Whether it's months or a year away I don't know, but it's right around the corner for sure.

I checked DeepMind's site, though, and they scarcely mention SC2. For instance, the only recent mention of SC2 on Twitter is a short promotion of Blizzard's AI workshop, where they explain the environment.
twitter

And if you look at the papers presented at their recent conference, most certainly have nothing to do with SC2 research, and none of them seem to mention SC2 at all. link

So personally I would not expect any sort of breakthrough in the next year.
HomoDeus
Profile Joined July 2017
Netherlands12 Posts
December 17 2017 13:54 GMT
#566
It's not a matter of "if" but a matter of "when" an AI can beat a human professional.
ProMeTheus112
Profile Joined December 2009
France2027 Posts
Last Edited: 2017-12-17 14:57:55
December 17 2017 14:19 GMT
#567
They will have a lot of trouble with SC2 unless they can make some unbeatable explosive micro timing attack. With Dota 1v1 the game is pretty simple, so the AI can play at the frame-level accuracy where it has the biggest advantage. But in SC2 the number of possibilities makes it very hard for an AI to play a good game: it can't calculate all of it, so it has to run with an error margin that humans can exploit much better, since humans understand the game instead of calculating it. The AI can only calculate; it's not intelligent at all, just a calculator program that can run faster or with more memory. It doesn't understand concepts, it only calculates. You have to implement yourself the concepts you want your CPU to calculate, so if you play around the CPU's calculation method, it gets thrown off by your understanding of what it is doing versus what it doesn't know you are doing in this particular game. Even if someone were able to fully map SC2 mathematically, or the AI did it itself with their methods, there would likely be flaws in that map due to the very high complexity compared to Dota 1v1 (talking millions of times more complicated), and handling that enormous amount of data during gameplay would probably require hardware nobody has built yet. Maybe I'm wrong and SC2 can be reduced to some kind of baneling or adept all-in with perfect, unblockable micro, but I don't believe we're going to see an AI consistently beat the best human players in the more complex RTS games for a long time; there is too much show-off talk from owners of AI patents. They still can't make an AI that handles language properly, and that requires, I think, a lot less data than mapping StarCraft.

In short, the AIs developed so far may give the appearance of being somewhat evolved, but I haven't seen anything actually impressive beyond being very fast or very accurate. The most impressive things I've seen are those robots that can jump obstacles and stabilize themselves on two or four legs, and even they are still pretty shaky about it. Sure, they don't have the hundreds of different muscles animals have, but animals seem a lot smarter than robots.

Because they are. Computers are stupid, completely stupid: only fast and accurate. There is no intelligence there, only calculation; it's not a brain, just circuitry responding to data, code, and instructions, very limited in the range of things it can do compared to a brain. It's only fast at the particular calculation it's told to do. You can make that calculation complex, but it is still limited to that; it cannot apprehend things differently or manipulate concepts, just run the calculation.

My math teacher used to say it in the first computer-science lesson: computers are STUPID, she stressed. She was right.
mishimaBeef
Profile Blog Joined January 2010
Canada2259 Posts
Last Edited: 2018-02-07 20:05:43
February 07 2018 20:00 GMT
#568
Seems they are making progress: https://deepmind.com/blog/impala-scalable-distributed-deeprl-dmlab-30/

In our most recent work, we explore the challenge of training a single agent on many tasks.

Today we are releasing DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space. Training an agent to perform well on many tasks requires massive throughput and making efficient use of every data point. To this end, we have developed a new, highly scalable agent architecture for distributed training called IMPALA (Importance Weighted Actor-Learner Architecture) that uses a new off-policy correction algorithm called V-trace.

...

Thanks to the optimised model of IMPALA, it can process one-to-two orders of magnitude more experience compared to similar agents, making learning in challenging environments possible. We have compared IMPALA with several popular actor-critic methods and have seen significant speed-ups. Additionally, the throughput using IMPALA scales almost linearly with increasing number of actors and learners which shows that both the distributed agent model and the V-trace algorithm can handle very large scale experiments, even on the order of thousands of machines.

When it was tested on the DMLab-30 levels, IMPALA was 10 times more data efficient and achieved double the final score compared to distributed A3C. Moreover, IMPALA showed positive transfer from training in multi-task settings compared to training in single-task setting.
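The V-trace correction mentioned in the quote is a concrete, short computation: it reweights off-policy TD errors with clipped importance ratios and accumulates them backward along a trajectory. Below is a single-trajectory sketch of the target computation from the IMPALA paper's recursion; the function name and the no-batching simplification are mine.

```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap, rhos, gamma=0.99,
                   rho_bar=1.0, c_bar=1.0):
    """Compute V-trace value targets v_s for one trajectory.

    rewards, rhos : length-T arrays; rhos are raw importance ratios pi/mu
    values        : length-T array of state values V(x_t)
    bootstrap     : scalar V(x_T) used to bootstrap past the trajectory end
    """
    T = len(rewards)
    rho_clipped = np.minimum(rho_bar, rhos)   # clipped weights for the TD error
    c_clipped = np.minimum(c_bar, rhos)       # clipped "trace cutting" weights
    values_next = np.append(values[1:], bootstrap)
    deltas = rho_clipped * (rewards + gamma * values_next - values)
    vs = np.zeros(T)
    acc = 0.0
    # backward recursion: v_s = V(x_s) + delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * c_clipped[t] * acc
        vs[t] = values[t] + acc
    return vs
```

A useful sanity check: when the data is on-policy (all ratios equal 1) and the clips are 1, the targets reduce to ordinary n-step returns, which is exactly the behavior the paper claims in that limit.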
Dare to live the life you have dreamed for yourself. Go forward and make your dreams come true. - Ralph Waldo Emerson
lestye
Profile Blog Joined August 2010
United States4149 Posts
February 09 2018 01:57 GMT
#569
I was thinking about this the other day, and (keep in mind I've read next to nothing about how the underlying AI actually works, I'm just guessing) one of the things a perfect AI would do is simulate the income/resources an opponent would ideally have and have already spent.

The AI could then calculate the sum total of the resources a player has spent and do a risk assessment based on those contrasting values. For instance, say the player is on two bases and has probably generated 8k minerals and 2k gas (sorry if those numbers are nonsensical, just throwing them out); the AI sees that a drop it committed to earlier was worth around 1k minerals and 300 gas, and a scan shows that 6k of those minerals are at the player's natural. It could then use that information to logically conclude where the player is most vulnerable, taking into account how many resources might be defending the main.

Also, obviously, if it detects that the player has spent even one mineral more than the projected amount, it immediately knows there's an expansion the AI doesn't know about.
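This bookkeeping idea can be sketched in a few lines. All the numbers and function names here are hypothetical (real mining rates depend on saturation, gas, and mule/chrono effects); the point is just the inequality: if observed spending plus the visible bank exceeds what the known bases could have produced, something unscouted must exist.

```python
def projected_income(seconds, bases, workers_per_base=16,
                     minerals_per_worker_per_min=55):
    """Rough upper bound on minerals a player could have mined so far,
    given only the bases we have scouted. All rates are illustrative."""
    return seconds / 60.0 * bases * workers_per_base * minerals_per_worker_per_min

def implies_hidden_production(projected, observed_spent, visible_bank):
    """True if the opponent's accounted-for minerals exceed what their
    known bases could have generated -- i.e. a hidden expansion (or
    unseen production) must exist somewhere."""
    return observed_spent + visible_bank > projected
```

The interesting engineering problem is the "observed_spent" term: the AI has to price every unit and building it has ever seen and keep a running total, which is exactly the kind of tireless bookkeeping computers are better at than humans.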
"You guys are just edgelords. Embrace your inner weeb desu" -Zergneedsfood