BoxeR: "AlphaGo won't beat humans in StarCraft" - Page 16

Forum Index > SC2 General
568 Comments
rockslave
Profile Blog Joined January 2008
Brazil318 Posts
March 15 2016 12:00 GMT
#301
On March 15 2016 16:56 Liquid`Bunny wrote:
On March 14 2016 21:01 BeyondCtrL wrote:
On March 14 2016 20:51 Liquid`Bunny wrote:
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote:
Well of course the AI will be able to beat human starcraft players, regardless of whether it is APM-capped or not, as long as they put enough effort into making it. However, it would be boring if players didn't take on the challenge of beating it; i myself would love to experience playing against it, we might learn something!

Also it's kind of funny how everyone is viewing the AI winning as humans "losing". I think it would be a great achievement for humanity to make an AI that can learn such a complex task

As long as they create some laws restricting AI from taking over the world~~

When we create a program that can make a better program on its own, that's when the trouble starts.


AlphaGo does that already.

AlphaGo doesn't change the way it's programmed; it will always be programmed in a certain way. What it can change is parameters within functions to achieve a better result.


Just like humans. You only have about 3 billion variables to program a human, and the great majority of those are about synthesizing proteins and throwing little molecules around.

Which reminds me of a story about Claude Shannon, one of the founders of computer science:
Reporter: Can computers think?
Shannon: Can you think?
Reporter: Yes.
Shannon: So, yes.
What qxc said.
JimmyJRaynor
Profile Blog Joined April 2010
Canada16998 Posts
March 15 2016 12:24 GMT
#302
On March 15 2016 19:24 sh1RoKen wrote:
Because this beast can learn.
And he can do it really fast. He can play 10 million games per day vs himself to understand how the game works and which move or tactic or strategy is better in any possible game position. He can analyse all pro replays on the internet to study how humans play and what works best against them. It might sound really hard to believe, but that is how it works. He doesn't just calculate all possible moves and choose the best. Google's AI thinks much more like a human does than you'd think. And he does it better.

the stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. it might be good for creating its own heuristic functions.

do they plan on making this work without heuristics?
or are you claiming the Google AI will create its own as it studies replays and plays itself?
Ray Kassar To David Crane : "you're no more important to Atari than the factory workers assembling the cartridges"
necaremus
Profile Joined December 2013
45 Posts
Last Edited: 2016-03-15 12:45:37
March 15 2016 12:28 GMT
#303
On March 15 2016 16:59 deacon.frost wrote:
On March 15 2016 12:25 necaremus wrote:
i am really interested in this topic as well, but i would side with the "humans would win" opinion

Go is a game with "full information", while sc2 is not. AI can't handle this situation.
if you make it a bo5 with random maps (both players don't know the map beforehand) i doubt the current status of the AI could even beat a mid-class master player.

That's not necessary. This isn't solved by the traditional method (from A you can go to A1 - A56464984651894) but by simulating the learning process of the brain. So the biggest obstacle is transforming all the information in the process so that the computer understands it. Go is much easier for such translation (you have only X-Y; no ramps, unreachable terrain, blank spaces, bases, etc.). But the learning itself works the same as our brain.

In SC2 we have multiple high-end replay packs, so the PC can learn from the best (not sure about BW).

If they do the job properly and the PC can play SC without any problems... then the human player will have a really tough enemy. Because then it's all about time and the learning process. And a PC can train 24/7


my point about "full information": in SC you have fog of war; you do not see your enemy. That is a big uncertainty factor for the AI: do i move out with my army and risk being counter-attacked? For a human these factors of uncertainty are normal: to be honest, it's the only way we interact with the outside world - with big uncertainty. We don't even know it any differently, except in special cases like the game of Go.

I could imagine the AI having a big problem: if the AI scout doesn't find the enemy army (because our human didn't build one, maybe?), it would try to scout the whole map before moving out, because it doesn't want to risk a counter-attack. A human would just a-move and win

On March 15 2016 18:07 Caihead wrote:
As to whether AIs will eventually beat humans at any specific task, in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources required to devote to the task to compete.

this.

when i heard about how AlphaGo works, i thought of this: Numberphile, knots/DNA
more precisely (2nd video on this topic on Numberphile, ~1:30)
Type II Topoisomerase

my thought was "holy shit, we can already 'build' the logical infrastructure of a component of bacteria"
the only thing is: we need about 10^100000 times more energy... (arbitrarily chosen, but something really huge as a factor)

so... before we catch up to and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)
“Never assume malice when stupidity will suffice.”
NukeD
Profile Joined October 2010
Croatia1612 Posts
March 15 2016 12:36 GMT
#304
Machines are our friends, not our enemies!
sorry for dem one liners
deacon.frost
Profile Joined February 2013
Czech Republic12129 Posts
March 15 2016 13:25 GMT
#305
On March 15 2016 21:28 necaremus wrote:

my point about "full information": in SC you have fog of war; you do not see your enemy. That is a big uncertainty factor for the AI: do i move out with my army and risk being counter-attacked? For a human these factors of uncertainty are normal: to be honest, it's the only way we interact with the outside world - with big uncertainty. We don't even know it any differently, except in special cases like the game of Go.

I could imagine the AI having a big problem: if the AI scout doesn't find the enemy army (because our human didn't build one, maybe?), it would try to scout the whole map before moving out, because it doesn't want to risk a counter-attack. A human would just a-move and win

Show nested quote +
On March 15 2016 18:07 Caihead wrote:
As to whether AIs will eventually beat humans at any specific task, in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources required to devote to the task to compete.

this.

when i heard about how AlphaGo works, i thought of this: Numberphile, knots/DNA
more precisely (2nd video on this topic on Numberphile, ~1:30)
Type II Topoisomerase

my thought was "holy shit, we can already 'build' the logical infrastructure of a component of bacteria"
the only thing is: we need about 10^100000 times more energy... (arbitrarily chosen, but something really huge as a factor)

so... before we catch up to and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)

It doesn't.

It will probably analyze all the top games, then it will play "some" and analyze those. (We're skipping plenty of technical detail here.) In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not. And in the end that could still be a wrong move, because it doesn't know everything. That's the beauty of training a neural net. If you prepare proper learning scenarios, it has decision making similar to a human being. The better the learning material, the better the results.

The "AI" doesn't need ALL the information if the learning models work with that. But to build such an "AI" you need time, money, proper learning materials and hardware. That's why game developers use AI that cheats and then dumb it down. It's easier to write. (Or maybe they used a dumbed-down net, who cares.)

Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be - will there be a limitation on its controlling mechanism? And other questions asked by the Dagger of Bisu.

It is exactly the same thing as the difference between Koreans and Foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and these solutions can sometimes be abused (you know what the expected result is).
It seems to me that either we are talking about something else, or you don't know how this works.
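The kind of learning described above - labeled scenarios in, a "move out or stay home" judgment out - can be sketched with the simplest possible learner. This is a toy illustration only, not anything DeepMind built: the two features (army advantage, fraction of the map scouted), the labels, and the perceptron itself are all invented for the example; a real system would use a deep net on raw game state.

```python
# Toy version of learning a decision rule from labeled scenarios.
# Features and labels are invented for illustration.
def train(data, epochs=200):
    """Perceptron: learn weights for (army_advantage, scouted_fraction)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (adv, scout), label in data:
            pred = 1 if w[0] * adv + w[1] * scout + b > 0 else 0
            err = label - pred            # -1, 0, or +1
            w[0] += err * adv             # standard perceptron update
            w[1] += err * scout
            b += err
    return w, b

def move_out(w, b, adv, scout):
    """Learned decision: should we move out with the army?"""
    return w[0] * adv + w[1] * scout + b > 0

# (army advantage, fraction of map scouted) -> did moving out work?
DATA = [((3, 0.9), 1), ((2, 0.8), 1), ((1, 0.2), 0),
        ((-1, 0.9), 0), ((-2, 0.3), 0), ((0, 0.1), 0)]
```

After training on the six labeled scenarios, the learned rule generalizes to unseen situations in the obvious way: a big scouted advantage says "go", being far behind says "stay", which is exactly the "better learning material, better results" point.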
I imagine France should be able to take this unless Lilbow is busy practicing for Starcraft III. | KadaverBB is my fairy ban mother.
heqat
Profile Joined October 2011
Switzerland96 Posts
March 15 2016 13:39 GMT
#306
On March 15 2016 21:28 necaremus wrote:
so... before we catch up to and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)


Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 13:53:11
March 15 2016 13:41 GMT
#307
On March 15 2016 21:24 JimmyJRaynor wrote:
the stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human. it might be good for creating its own heuristic functions.

do they plan on making this work without heuristics?
or are you claiming the Google AI will create its own as it studies replays and plays itself?


How they did it with Go:

1. They showed the AI 30 million Go moves from the internet, each marked as a "good move" (played by the winner of that game) or a "bad move" (played by the loser).
At this point AlphaGo had learned the rules of Go entirely by itself, without any manual algorithms or instructions from a human. It started to recognize and predict the move a human would make when trying to win in common game positions. It acquired a human-like "intuition".

2. Then it played against itself over and over again to build a database of moves, evaluating the winning percentage of every move by pure calculation. And it trained its intuition even more.

3. Then it combined calculation no human can match with an intuition humans don't really understand; it's hard to even compare our level to what AlphaGo achieved.

They will probably do the same thing with StarCraft. After lesson 1 it will start to play like a really good human. After lesson 2 it will defeat anyone, without any chance, on the "strategy" level. And I haven't even mentioned mechanical reaction, speed and accuracy, which weren't even considered during StarCraft's balance design.
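Step 2 above - evaluating every position by playing the game out against itself - can be shown in exact miniature on a toy game. A hedged sketch: the real system approximates position values with neural networks and Monte Carlo tree search, while here, on the tiny game of Nim (take 1 or 2 stones, whoever takes the last stone wins), we can compute exactly the values that self-play evaluation converges to.

```python
# Exact position values for a toy game, as a stand-in for what
# self-play evaluation approximates on a game too big to solve.
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    # A position is winning if some move leads to a losing position for
    # the opponent; with 0 stones left, the player to move has lost.
    return any(not is_winning(stones - take)
               for take in (1, 2) if take <= stones)

def best_move(stones: int):
    """First move that leaves the opponent in a losing position (or None)."""
    for take in (1, 2):
        if take <= stones and not is_winning(stones - take):
            return take
    return None  # every move loses against perfect play
```

In this game every multiple of 3 is lost for the player to move, and the "database of moves" is just the memoized table that `lru_cache` builds up; the point of the neural-net approach is doing the same thing when the table is astronomically too large to enumerate.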
Be polite, be professional, but have a plan to kill everybody you meet.
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 14:25 GMT
#308
On March 15 2016 22:25 deacon.frost wrote:
It doesn't.

It will probably analyze all the top games, then it will play "some" and analyze those. (We're skipping plenty of technical detail here.) In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not. And in the end that could still be a wrong move, because it doesn't know everything. That's the beauty of training a neural net. If you prepare proper learning scenarios, it has decision making similar to a human being. The better the learning material, the better the results.

The "AI" doesn't need ALL the information if the learning models work with that. But to build such an "AI" you need time, money, proper learning materials and hardware. That's why game developers use AI that cheats and then dumb it down. It's easier to write. (Or maybe they used a dumbed-down net, who cares.)

Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be - will there be a limitation on its controlling mechanism? And other questions asked by the Dagger of Bisu.

It is exactly the same thing as the difference between Koreans and Foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and these solutions can sometimes be abused (you know what the expected result is).
It seems to me that either we are talking about something else, or you don't know how this works.


I don't know how it works completely, but i have a few pieces of information and a lot of uncertainty, which i use to evaluate the situation.

I know that the AI doesn't need all the information, but i wanted to point out that starcraft poses a whole new range of problems compared to Go.

let's suppose the AI has the "perfect" strategy: this would mean the AI always plays the exact same way -> the human wouldn't have this "uncertainty", because he knows how the AI is going to play, and he could craft a strategy that isn't perfect but beats the AI's strategy (for example a doom-drop? i don't know)

you could try to bypass this by giving the AI a range of strategies to choose from. but if you hardcode this into the AI, i don't see the point of even trying to build an AI for starcraft. The hardcoded strategy would be human-created, making it a "machine+human vs human" match.
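The worry above is exactly what game theory resolves with mixed strategies: in a game with hidden information, any deterministic "perfect" strategy has a best response, while a randomized strategy can leave nothing to exploit. A small sketch, with rock-paper-scissors standing in for three hypothetical build orders that counter each other in a cycle:

```python
# Any pure strategy is exploitable; a mixed strategy need not be.
PAYOFF = [  # PAYOFF[i][j]: row player's result, +1 win / 0 draw / -1 loss
    [0, -1, 1],   # rock     vs rock, paper, scissors
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def best_response_value(mix):
    """Best payoff an opponent can get against a *known* strategy `mix`."""
    return max(sum(mix[i] * PAYOFF[j][i] for i in range(3))
               for j in range(3))

pure = [1.0, 0.0, 0.0]       # always the same build: fully readable
uniform = [1/3, 1/3, 1/3]    # randomize evenly: nothing to exploit
```

Against the fixed build, an informed opponent wins every game (`best_response_value(pure)` is 1); against the uniform mix, the best any opponent can do on average is break even. So the AI doesn't need a hardcoded menu of strategies - it needs to learn a randomized policy, which is something these systems can represent directly.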
“Never assume malice when stupidity will suffice.”
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 14:52:17
March 15 2016 14:46 GMT
#309
On March 15 2016 23:25 necaremus wrote:

I don't know how it works completely, but i have a few pieces of information and a lot of uncertainty, which i use to evaluate the situation.

I know that the AI doesn't need all the information, but i wanted to point out that starcraft poses a whole new range of problems compared to Go.

let's suppose the AI has the "perfect" strategy: this would mean the AI always plays the exact same way -> the human wouldn't have this "uncertainty", because he knows how the AI is going to play, and he could craft a strategy that isn't perfect but beats the AI's strategy (for example a doom-drop? i don't know)

you could try to bypass this by giving the AI a range of strategies to choose from. but if you hardcode this into the AI, i don't see the point of even trying to build an AI for starcraft. The hardcoded strategy would be human-created, making it a "machine+human vs human" match.


If the AI succeeds at finding a perfect strategy that wins 100% of the time and can't be countered by any action of his opponent, he will execute it over and over again without any chance of losing. Otherwise that strategy can't be called perfect, and the AI wouldn't play it over and over again. He knows what predictability is and will vary his build orders.

He is programmed so that he will try to make every move that increases his chance of winning and will avoid any situation that decreases it.

He can blink-micro for 16 hours if he knows that will give him a guaranteed 1 HP advantage over his opponent. But he will never go all-in with a 98% chance of winning if there is any possible way to increase it by another 0.000000001%.
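The decision rule being described is just an argmax over estimated win probabilities, and it really does act on microscopic edges. A minimal sketch; the candidate actions and every probability here are made up for illustration:

```python
# Pick whichever action has the highest estimated win probability,
# even when the edge over the runner-up is microscopic.
def pick_action(estimates):
    """Return the action whose estimated win probability is highest."""
    return max(estimates, key=estimates.get)

estimates = {
    "all-in now":            0.98,
    "blink-micro for hours": 0.98000000001,  # tiny edge still wins the argmax
    "turtle on two bases":   0.61,
}
```

A float comfortably resolves a difference of 1e-11 near 1.0, so the agent happily grinds out the marginal line over the 98% all-in, which is the behavior described above.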
Be polite, be professional, but have a plan to kill everybody you meet.
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
Last Edited: 2016-03-15 14:51:29
March 15 2016 14:50 GMT
#310
Be polite, be professional, but have a plan to kill everybody you meet.
Pwere
Profile Joined April 2010
Canada1556 Posts
Last Edited: 2016-03-15 14:53:25
March 15 2016 14:52 GMT
#311
Sorry to say this, necaremus, but it seems to me you don't understand how AI works. Some of us have an advanced degree in AI, but since it's not exactly the same branch, we don't feel comfortable making predictions.

What you are describing is absolutely a non-issue for this type of AI. You are thinking of a bot, which is vastly inferior. There is nothing inherently difficult about Starcraft for an AI. The strategic aspect is complex, but you only have to be better than humans, not perfect. And humans waste most of their training regimen on mechanics.

AIs these days are at least on par with humans when dealing with uncertainty. Pure numbers over thousands of games beat intuition. They don't even bother with profiling to exploit people's weaknesses, because of how dominant the analytical approach is when dealing with uncertainty.

All the "problems" pointed out in this thread are mostly annoyances. Not being able to simulate millions of games per day is the bigger struggle, but I feel comfortable saying you can easily run hundreds of games of Brood War per hour on gaming hardware, so Google would find a way.

I think the main reason Google even considers Starcraft is that it's fun to watch, and millions of people would watch these games. It would be a publicity stunt.
thePunGun
Profile Blog Joined January 2016
598 Posts
March 15 2016 14:57 GMT
#312
On March 15 2016 22:41 sh1RoKen wrote:
They will probably do the same thing with StarCraft. After lesson 1 it will start to play like a really good human. After lesson 2 it will defeat anyone, without any chance, on the "strategy" level. And I haven't even mentioned mechanical reaction, speed and accuracy, which weren't even considered during StarCraft's balance design.


Well, it's not that simple, considering that SC is not turn-based. There are much more complex calculations involved. I'm sure it will get there eventually, but it won't learn as fast in a real-time strategy game like SC, since it's way more random.

"You cannot teach a man anything, you can only help him find it within himself."
BaronVonOwn
Profile Joined April 2011
299 Posts
March 15 2016 14:58 GMT
#313
I'm sure BoxeR is only saying this because he'd love the publicity; any serious AI would make a pro SC2 player look like a bronze noob. AIs have perfect mechanics, meaning he will lose every micro/macro battle. Mechanics dominate strategy in SC2, and you can win games on pure micro/macro alone. StarCraft was developed assuming human players with lag and poor reactions. A lot of game elements would be rendered useless against an AI player. For example, think about Raven seeker missiles: those will basically never hit against a properly coded AI. Hell, they never hit against human players either.
sh1RoKen
Profile Joined March 2012
Russian Federation93 Posts
March 15 2016 15:13 GMT
#314
On March 15 2016 23:57 thePunGun wrote:
Well, it's not that simple, considering that SC is not turn-based. There are much more complex calculations involved. I'm sure it will get there eventually, but it won't learn as fast in a real-time strategy game like SC, since it's way more random.



There is nothing random in StarCraft. It might seem random to humans, but for a computer it is 100% predictable.

It will definitely take much more time to learn lesson 1, because StarCraft's design is more complicated than Go's.
But man, some enthusiasts managed to teach another, much simpler artificial neural network to complete a Mario level in 34 attempts! And that was child's play compared to what Google is capable of, in both intellectual and hardware resources.
Be polite, be professional, but have a plan to kill everybody you meet.
ClanRH.TV
Profile Joined July 2010
United States462 Posts
March 15 2016 15:27 GMT
#315
On March 15 2016 22:39 heqat wrote:

Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.

Reference here:

arstechnica.com


That study says the world's total storage capacity is about the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The article's title is very misleading relative to its actual conclusions.
"Don't take life too seriously because you'll never get out alive."
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 15:29 GMT
#316
On March 15 2016 23:52 Pwere wrote:
Sorry to say this, necaremus, but it seems to me you don't understand how AI works. Some of us have an advanced degree in AI, but since it's not exactly the same branch, we don't feel comfortable making predictions.

What you are describing is absolutely a non-issue for this type of AI. You are thinking of a bot, which is vastly inferior. There is nothing inherently difficult about Starcraft for an AI. The strategic aspect is complex, but you only have to be better than humans, not perfect. And humans waste most of their training regimen on mechanics.

AIs these days are at least on par with humans when dealing with uncertainty. Pure numbers over thousands of games beat intuition. They don't even bother with profiling to exploit people's weaknesses, because of how dominant the analytical approach is when dealing with uncertainty.

I do agree that we have different pictures of the situation. I also know that my "hardcoded" example points towards a bot, and the AI the DeepMind team created is entirely different.

A weakness of mine may very well be that I am not afraid to make predictions, even though my range of information is very limited (as in this case). I make these predictions on the one hand to find out where I might be wrong (because of a lack of information), and on the other hand... because it's fun for me.

Some people think this attitude is annoying, but, as far as I know, it's the fastest way of learning and improving oneself. Maybe people find it annoying because they make the mistake of interpreting evaluations as facts or opinions.

But I want to get back to the AI and StarCraft... and I want to try to explain why I may have built a different picture than you did.

I didn't say it explicitly, but when I evaluated the AI-vs-human question, I didn't take the game as it stands, but a slightly different version of StarCraft in which the AI and the human would be on par in unit micro-management:
both the human and the AI would use the same algorithms for blink micro, concave forming, focus fire (and so on), reducing the game to positional advantage on the map and build-order strategy.

I did this because you would not need the DeepMind AI to win against a human if you take the game as it stands now:
the superior micro-control of a simple AI (which even I could program) would win pretty much every game against a real human...
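As a rough sketch of the kind of "simple AI" micro being described, here is one classic focus-fire heuristic: among enemies in attack range, shoot the one closest to dying so it stops dealing damage soonest. The `Unit` class and `pick_focus_target` helper are hypothetical stand-ins; a real bot would drive this through the game's actual API.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    hp: int
    distance: float  # distance from our attacking squad

def pick_focus_target(enemies, attack_range):
    """Focus-fire heuristic: among enemies in range, target the one
    with the lowest hp, removing its damage output fastest."""
    in_range = [e for e in enemies if e.distance <= attack_range]
    if not in_range:
        return None
    return min(in_range, key=lambda e: e.hp)

squad = [Unit(hp=80, distance=3.0), Unit(hp=15, distance=4.0), Unit(hp=40, distance=9.0)]
target = pick_focus_target(squad, attack_range=5.0)
# target is the 15-hp unit: lowest hp among the two enemies within range
```

A machine can re-evaluate this choice every game frame with perfect accuracy, which is exactly the mechanical edge the post is pointing at.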
“Never assume malice when stupidity will suffice.”
necaremus
Profile Joined December 2013
45 Posts
March 15 2016 15:49 GMT
#317
On March 16 2016 00:27 ClanRH.TV wrote:
That study is saying that the world's total storage capacity is the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The article's title is very misleading relative to its actual conclusions.

ty
I actually didn't bother reading it, because I thought "total nonsense" when I saw the title. But your statement suggests it may very well be worth a read... just a bad title :3
“Never assume malice when stupidity will suffice.”
heqat
Profile Joined October 2011
Switzerland96 Posts
March 15 2016 16:09 GMT
#318
On March 16 2016 00:49 necaremus wrote:
ty
I actually didn't bother reading it, because I thought "total nonsense" when I saw the title. But your statement suggests it may very well be worth a read... just a bad title :3


Sure, but still from the article:

"To put our findings in perspective, the 6.4*1018 instructions per second that human kind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second,"

Of course, the brain works very differently from a CPU, so we cannot directly compare them in terms of power.

nimdil
Profile Blog Joined January 2011
Poland3751 Posts
March 15 2016 19:40 GMT
#319
It's funny that people are discussing, at the same time, whether an AI could beat top players in StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game is fair.

AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tone down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. The AI won.
Xyik
Profile Blog Joined November 2009
Canada728 Posts
March 15 2016 19:45 GMT
#320
For those claiming that it will learn from games on the internet: how many replays / pieces of game data are actually available online? I would guess fewer than 100K high-level games (and that's being generous; I would guess even fewer, as top pros rarely release replays).

Let's say we have 100,000 replays.

Divide that by the 9 possible match-ups (TvX, ZvX, PvX), and we only have ~11,111 per match-up.
Divide that by, let's say, 10 popular maps within the last 2 years, and we have ~1,111 replays per map per match-up.
Divide that by the number of build-order openings / start positions, and we have at most ~100 replays to study from in each match-up on a particular map with a particular opening.
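The division chain above can be written out as a quick back-of-envelope calculation (all inputs are the post's guesses, not measured data):

```python
# Replay-scarcity estimate: how far 100k replays stretch once sliced
# by match-up, map, and opening.
replays = 100_000
matchups = 9     # TvX, ZvX, PvX
maps = 10        # popular ladder maps over ~2 years
openings = 10    # rough build-order / spawn-position split

per_matchup = replays // matchups   # 11_111
per_map = per_matchup // maps       # 1_111
per_opening = per_map // openings   # 111

print(per_matchup, per_map, per_opening)  # 11111 1111 111
```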

I don't think that's enough data to properly seed the AI, so most of its learning would have to come from playing itself, which will be quite difficult. Let's say Google uses a cluster of 10,000 machines, each running a copy of SC for the AI to play on; that allows it to play maybe 2M games a day against itself. Now do the same division to figure out how many games it can play per map / match-up / starting position / build order each day.

I don't know how much data AlphaGo needed to reach its current level in Go, but clearly training the AI in SC will be a much more difficult task on data-acquisition grounds alone.

Then there is the challenge of it actually being able to learn the nuances of the game and interpret the game state. Even if Google found a way to sufficiently train it, I am really not convinced it could win.

I think what's really important is map-level data as well. Will the AI be able to interpret what the map looks like?