|
On March 15 2016 16:56 Liquid`Bunny wrote:
On March 14 2016 21:01 BeyondCtrL wrote:
On March 14 2016 20:51 Liquid`Bunny wrote:
On March 14 2016 20:03 Liquid`Nazgul wrote:
On March 14 2016 19:59 Liquid`Bunny wrote: Well of course the AI will be able to beat human StarCraft players, regardless of whether it is APM-capped, as long as enough effort is put into making it. However, it would be boring if players didn't take on the challenge of beating it; I myself would love to experience playing against it, we might learn something! It's also kind of funny how everyone views the AI winning as humans "losing". I think it would be a great achievement for humanity to make an AI that can learn such a complex task. As long as they create some laws restricting AI from taking over the world~~
When we create a program that can make a better program on its own, that's when the trouble starts.
AlphaGo does that already.
AlphaGo doesn't change the way it is programmed; it will always be programmed in a certain way. What it can change is parameters within functions to achieve a better result.
Just like humans. You only have about 3 billion variables to program a human, and the great majority of that is about synthesizing proteins and throwing little molecules around. 
Which reminds me of a story about Claude Shannon, the father of information theory:
Reporter: Can computers think?
Shannon: Can you think?
Reporter: Yes.
Shannon: So, yes.
|
On March 15 2016 19:24 sh1RoKen wrote: Because this beast can learn. And it can do it really fast. It can play 10 million games per day against itself to understand how the game works and which move, tactic, or strategy is better in any possible game position. It can analyse all the pro replays on the internet to study how humans play and what is best to use against them. It might sound really hard to believe, but that is how it works. It doesn't just calculate all possible moves and choose the best one. Google's AI thinks much more like a human does than you think. And it does it better.
The stuff it will learn playing against itself won't teach it the kinds of tactics that can defeat a human; it might be good for creating its own heuristic functions.
Do they plan on making this work without heuristics? Or are you claiming the Google AI will create its own as it studies replays and plays against itself?
|
On March 15 2016 16:59 deacon.frost wrote:
On March 15 2016 12:25 necaremus wrote: I am really interested in this topic as well, but I would side with the "humans would win" opinion. Go is a game with "full information", while SC2 is not. An AI can't handle this situation. If you make it a Bo5 on random maps (neither player knows the map beforehand), I doubt the current state of the AI could even beat a mid-level Masters player.
That's not necessary. This isn't solved by the traditional method (from A you can go to A1 - A56464984651894) but by simulating the learning process of the brain. So the biggest obstacle is to transform all the information in the process so the computer understands it. Go is much easier to translate (you have only X-Y, no ramps, no unreachable terrain, blank spaces, bases, etc.). But the learning itself works the same way as our brain. In SC2 we have multiple high-end replay packs so the PC can learn from the best (not sure about BW). If they do the job properly and the PC can play SC without any problems... then the human will have a really tough enemy. Because then it's all about time and the learning process. And a PC can train 24/7.
My point about "full information": in SC you have fog of war; you do not see your enemy. There is a big uncertainty factor for the AI: do I move out with my army and risk being counter-attacked? For a human these factors of uncertainty are normal; to be honest, uncertainty is the only way we interact with the outside world. We don't even know it any differently, except in special cases like the game of Go.
I could imagine the AI having a big problem: if the AI's scout doesn't find the enemy army (because our human didn't build one, maybe?), it would try to scout the whole map before moving out, because it doesn't want to risk a counter-attack. A human would just a-move and win.
On March 15 2016 18:07 Caihead wrote: As to whether AIs will eventually beat humans at any specific task, in my opinion it's not about whether one system has inherent superiority over the other, but about the amount of energy and resources that must be devoted to the task to compete.
this.
When I heard about how AlphaGo works, I thought of this: Numberphile, knots/DNA. More precisely (2nd video on this topic on Numberphile, ~1:30): Type II Topoisomerase.
My thought was "holy shit, we can already 'build' the logical infrastructure of a component of bacteria". The only thing is: we need about 10^100000 times more energy... (arbitrarily chosen, but something really huge as a factor)
so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)
|
Machines are our friends, not our enemies!
|
On March 15 2016 21:28 necaremus wrote: [...] I could imagine the AI having a big problem: if the AI's scout doesn't find the enemy army [...] it would try to scout the whole map before moving out [...]
It doesn't.
It will probably analyze all the top games, then it will play "some" and analyze those. Here we're skipping plenty of technical detail. In the end it has the information that, given this level of uncertainty, this big an army, and the enemy doing that, it's better to move out than not to. And in the end that could be the wrong move, because it doesn't know everything. That's the beauty of training a neural net: if you prepare proper learning scenarios, its decision making is similar to a human being's. The better the learning material, the better the results.
The "AI" doesn't need ALL the information if the learning models can work with that. But to build such an "AI" you need time, money, proper learning material and hardware. That's why game developers use AI that cheats and then dumb it down: it's easier to write. (Or maybe they used a dumbed-down net, who cares.)
Imagine a savant who can ONLY play SC and nothing else. That's the result of a properly trained net. The question is what the input for the "AI" will be; will there be a limitation on its controlling mechanism? And the other questions asked by the Dagger of Bisu.
It is exactly the same as the difference between Koreans and foreigners. Because Koreans can train selected scenarios for multiple hours in a row, they have more optimal solutions, and these solutions can sometimes be abused (you know what the expected result is). It seems to me that either we are talking about different things, or you don't know how this works.
|
On March 15 2016 21:28 necaremus wrote: so... before we come and surpass the human... there is a loooooong way ahead of us: maybe we need more energy than our sun can provide in its lifetime to simulate a human. (i'm not saying it's impossible ;p)
Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain.
Reference here:
arstechnica.com
|
On March 15 2016 21:24 JimmyJRaynor wrote: [...] do they plan on making this work without heuristics? or are you claiming the Google AI will create its own as it studies replays and plays itself?
How they did it with Go:
1. They showed the AI 30 million Go moves from the internet, marked as "good move" (played by the player who won that game) or "bad move" (played by the player who lost that game). At this point AlphaGo learned the rules of Go by itself, without any manual algorithms or instructions from a human. It started to recognize and predict the move a human would make when trying to win in common game positions. It acquired a human "intuition".
2. Then it started playing against itself over and over again to build a database of moves, each evaluated by pure calculation of its % chance of winning. And it trained its intuition even more.
3. Then it combined calculation no human can ever match with an intuition humans don't really understand; there is no good way to compare a human's level to what AlphaGo achieved.
They will probably do the same thing with StarCraft. After step 1 it will start to play like a really good human. After step 2 it will defeat anyone, with no chance for the human at the "strategy" level. And I haven't even mentioned mechanical reaction, speed, and accuracy, levels of which weren't even considered during StarCraft's balance design.
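A heavily simplified sketch of those two stages, assuming a PyTorch-style setup: AlphaGo's real pipeline also used a separate value network plus Monte Carlo tree search, and every size, name, and the REINFORCE-style update here is an illustrative assumption, not DeepMind's code.

```python
import torch
import torch.nn as nn

# Stage 1: supervised "intuition" -- learn to predict the human move
# from an encoded game position. All sizes are illustrative.
policy = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 64),              # 64 = hypothetical move vocabulary
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
xent = nn.CrossEntropyLoss()

def supervised_step(states, expert_moves):
    """states: (batch, 512) encoded positions; expert_moves: (batch,) move ids."""
    loss = xent(policy(states), expert_moves)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Stage 2: self-play refinement -- push up the log-probability of moves
# from games the current policy won against itself (REINFORCE-style).
def selfplay_step(states, moves, won):
    logp = torch.log_softmax(policy(states), dim=-1)
    chosen = logp.gather(1, moves.unsqueeze(1)).squeeze(1)
    loss = -chosen.mean() if won else chosen.mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```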
|
On March 15 2016 22:25 deacon.frost wrote: [...] It seems to me that either we are talking about different things, or you don't know how this works.
I don't know how it works completely, but I have a bit of information and a lot of uncertainty, which I use to evaluate the situation.
I know that the AI doesn't need all the information, but I wanted to point out that StarCraft brings a whole new range of problems compared to Go.
Let's suppose the AI has the "perfect" strategy: this would mean the AI always plays the exact same way -> the human wouldn't have this "uncertainty", because he knows how the AI is going to play, and he could craft a strategy that isn't perfect but beats the AI's strategy (for example a doom drop? I don't know).
You could try to bypass this by giving the AI a range of strategies to choose from. But if you hardcode this into the AI, I don't see the point of even trying to build an AI for StarCraft: the hardcoded strategies would be human-created, making it a "machine+human vs human" match.
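For what it's worth, the range of strategies doesn't have to be hardcoded: a learned agent can keep itself unpredictable by sampling its build in proportion to learned win-rate estimates. A toy sketch, with invented build names and numbers:

```python
import math
import random

# Hypothetical win-rate estimates the agent has learned for each build.
win_rates = {"doom_drop": 0.48, "macro_3base": 0.55, "proxy_rush": 0.40}

def sample_build(estimates, temperature=10.0):
    """Sample a build with probability increasing in its estimated win
    rate, so the agent mixes strategies instead of always playing one."""
    builds = list(estimates)
    weights = [math.exp(temperature * estimates[b]) for b in builds]
    return random.choices(builds, weights=weights, k=1)[0]

print(sample_build(win_rates))
```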
|
On March 15 2016 23:25 necaremus wrote: [...] the hardcoded strategies would be human-created, making it a "machine+human vs human" match.
If the AI finds a perfect strategy that wins 100% of the time and can't be countered by any action of its opponent, it will execute it over and over again without any chance of losing. Otherwise the strategy can't be called perfect, and the AI wouldn't play it over and over again. It knows what predictability is and will vary its build orders.
It is programmed so that it will try to make every move that increases its chance of winning and avoid any situation that decreases it.
It can blink-micro for 16 hours if it knows that will guarantee it a 1 HP advantage over its opponent. But it will never go all-in with a 98% chance of winning if there is any possible way to increase that by another 0.000000001%.
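That decision rule, sketched: score every candidate action with a learned value estimate and take the argmax (the value net, its sizes, and the action encodings below are all hypothetical):

```python
import torch
import torch.nn as nn

# Hypothetical value net scoring (state, action) pairs; sizes are toys.
value_net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

def pick_action(state, candidate_actions):
    """The greedy rule described above: take whichever action the value
    net says has the highest estimated win probability."""
    probs = [torch.sigmoid(value_net(torch.cat([state, a]))).item()
             for a in candidate_actions]
    return max(range(len(candidate_actions)), key=probs.__getitem__)

state = torch.randn(4)                        # toy 4-float game state
actions = [torch.randn(4) for _ in range(4)]  # four toy action encodings
print(pick_action(state, actions))
```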
|
Sorry to say this, necaremus, but it seems to me you don't understand how AI works. Some of us have an advanced degree in AI, but since it's not exactly the same branch, we don't feel comfortable making predictions.
What you are describing is absolutely a non-issue for this type of AI. You are thinking of a bot, which is vastly inferior. There is nothing inherently difficult about Starcraft for an AI. The strategic aspect is complex, but you only have to be better than humans, not perfect. And humans waste most of their training regimen on mechanics.
AIs these days are at least on par with humans when dealing with uncertainty. Pure numbers over thousands of games beat intuition. They don't even bother with profiling to exploit people's weaknesses, because of how dominant the analytical approach is when dealing with uncertainty.
All the "problems" pointed out in this thread are mostly annoyances. Not being able to simulate millions of games per day is the bigger struggle, but I feel comfortable saying you can easily run hundreds of games of Brood War per hour on gaming hardware, so Google would find a way.
I think the main reason Google even considers Starcraft is that it's fun to watch, and millions of people would watch these games. It would be a publicity stunt.
|
On March 15 2016 22:41 sh1RoKen wrote: [...] After step 1 it will start to play like a really good human. After step 2 it will defeat anyone, with no chance for the human at the "strategy" level.
Well, it's not that simple, considering that SC is not turn-based. There are much more complex calculations involved. I'm sure it will get there eventually, but it won't learn as fast in a real-time strategy game like SC, since it's way more random.
|
I'm sure BoxeR is only saying this because he'd love the publicity, because any serious AI would make a pro SC2 player look like a bronze noob. AIs have perfect mechanics, meaning he would lose every micro/macro battle. Mechanics dominate strategy in SC2, and you can win games on pure micro/macro alone. StarCraft was developed assuming human players, with lag and poor reactions. A lot of the game's elements would be rendered useless against an AI player. For example, think about Raven seeker missiles: those will basically never hit a properly coded AI. Hell, they never hit human players either.
|
On March 15 2016 23:57 thePunGun wrote: [...] it won't learn as fast in a real-time strategy game like SC, since it's way more random.
There is nothing random in StarCraft. It might be random for humans, but for a computer it is 100% predictable.
It will definitely take much more time to get through step 1 because the design is more complicated than Go's. But man, an enthusiast managed to teach another, much simpler artificial neural network to complete a Mario level in 34 attempts! And that was child's play compared to what Google is capable of, in both intellectual and hardware resources.
|
On March 15 2016 22:39 heqat wrote: [...] Regarding this subject, in 2007 the world's total CPU power was approximately equal to one human brain. Reference here: arstechnica.com
That study says the world's total storage capacity is about the same as an adult human's DNA, not that the world's CPU power is approximately equal to one human brain. The title that article uses is very misleading compared to its actual conclusions.
|
On March 15 2016 23:52 Pwere wrote: [...] They don't even bother with profiling to exploit people's weaknesses because of how dominant the analytical approach is when dealing with uncertainty.
I do agree that we have different pictures of the situation. I also know that my "hardcoded" example points toward a bot, and that the AI the DeepMind team created is entirely different.
A weakness of mine may very well be that I am not afraid to make predictions, although my range of information is very limited (as in this case). I make these predictions partly to find out where I might be wrong (because of a lack of information) and partly because it's fun for me.
Some people think this attitude is annoying, but, as far as I know, it's the fastest way of learning and improving oneself. Maybe people find this kind of thing annoying because they make the mistake of interpreting evaluations as facts or opinions.
But I want to get back to the AI and StarCraft, and I want to try to explain why I may have built a different picture than you have.
I didn't say it, but when I evaluated the AI-vs-human question I didn't take the state of the game as it is, but a slightly different version of StarCraft where the AI and the human would be on par in the micro-management of units: both would use the same algorithms for blink micro, forming concaves, focus fire (and so on), reducing the game to positional advantage on the map and build-order strategy.
I did this because you would not need the DeepMind AI to win against a human in the current state of the game: the superior micro-control of a simple scripted bot (which even I could program) would win pretty much every game against a real human...
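That kind of scripted micro really is simple; a toy blink-back rule, with a made-up unit structure rather than any real bot API:

```python
from dataclasses import dataclass

@dataclass
class Stalker:
    hp: int
    shields: int
    blink_ready: bool

def micro_step(units):
    """Toy scripted micro: any stalker whose shields are depleted blinks
    out of danger; everyone else keeps attacking. A real bot runs this
    every game tick, far faster than human reaction time."""
    orders = []
    for u in units:
        if u.shields <= 0 and u.blink_ready:
            orders.append((u, "blink_back"))
        else:
            orders.append((u, "attack_closest"))
    return orders

army = [Stalker(hp=80, shields=0, blink_ready=True),
        Stalker(hp=80, shields=40, blink_ready=True)]
print(micro_step(army))
```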
|
On March 16 2016 00:27 ClanRH.TV wrote: [...] The title that article uses is very misleading compared to its actual conclusions.
Ty, I actually didn't bother reading it, because I thought "total nonsense" when I saw the title. But your statement suggests it may very well be worth a read... just a bad title :3
|
On March 16 2016 00:49 necaremus wrote: [...] But your statement suggests it may very well be worth a read... just a bad title :3
Sure, but still, from the article:
"To put our findings in perspective, the 6.4*10^18 instructions per second that humankind can carry out on its general-purpose computers in 2007 are in the same ballpark area as the maximum number of nerve impulses executed by one human brain per second."
Of course the brain works very differently from a CPU, so we cannot directly compare them in terms of power.
|
It's funny that people are discussing, at the same time, whether an AI could beat top players at StarCraft and how an SC-tuned AlphaGo-like AI (AlphaSC, I guess) should be handicapped so that the game will be fair.
AlphaGo is an AI that bases its actions on graphical input, and that's it. If you feel you need to tone down the AI's ability to execute perfect strategies at superhuman speed even though it would be using standard inputs, then sorry, but it's game over. The AI won.
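Strictly speaking, AlphaGo reads the board as a stack of feature planes rather than a screenshot (DeepMind's Atari agents are the ones that play from raw pixels), but "acting from graphical input" boils down to something like this sketch, where the screen size and action count are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical pixels-to-actions policy: the net sees only a downscaled
# screen capture and outputs a score per discrete action.
screen_policy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
    nn.Linear(256, 10),          # 10 = illustrative number of actions
)

frame = torch.rand(1, 3, 84, 84)     # one 84x84 RGB screen capture
print(screen_policy(frame).shape)    # torch.Size([1, 10])
```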
|
For those claiming that it will learn from games on the internet: how many replays / pieces of game data are available online? I would guess < 100K high-level games (and that's being generous; I would guess even fewer, as top pros rarely release replays).
Let's say we have 100,000 replays.
Divide that by the 9 possible match-ups (TvX, ZvX, PvX), and we have ~11,111 per match-up. Divide that by, let's say, 10 popular maps from the last 2 years, and we have ~1,000 replays per match-up per map. Divide that by the number of build-order openings / start positions, and we have at most 100 replays to study from in each match-up on a particular map with particular openings.
I don't think that's enough data to properly seed the AI, so most of its learning will be from playing itself, which will be quite difficult. Let's say Google uses a cluster of 10,000 machines, each running a copy of SC for the AI to play; that allows it to play maybe 2M games a day against itself. Now do the same division to figure out how many games it can play for each map / match-up / starting position / build order per day (a quick version of this arithmetic is sketched below).
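A back-of-envelope version of that division, using only the post's own guesses (none of these counts are real measurements):

```python
# All numbers below are the post's own rough estimates.
replays  = 100_000
matchups = 9        # TvX, ZvX, PvX
maps     = 10       # popular maps from the last 2 years
openings = 10       # build-order openings / start positions

per_bucket = replays / matchups / maps / openings
print(f"~{per_bucket:.0f} replays per matchup/map/opening")   # ~111

selfplay_per_day = 2_000_000   # guessed cluster throughput
print(f"~{selfplay_per_day / (matchups * maps * openings):.0f} "
      "self-play games per bucket per day")                   # ~2222
```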
I don't know how much data AlphaGo needed to reach its current level in Go, but clearly training the AI at SC will be a much more difficult task based on acquiring enough data alone.
Then there is the challenge of it actually being able to learn the nuances of the game and interpret the game state. Even if Google found a way to sufficiently train it, I am really not convinced it could win.
I also think map-level data is really important: will the AI be able to interpret what the map looks like?
|