For those of you who haven't heard of it yet: Artificial Intelligence conquered Go a while ago. Until then, the game and its gosu human players were considered the toughest nut for artificial intelligence to crack. And humans indeed are kind of good at it: they have a badass 4,000-year-old East Asian tradition, with all their different schools and styles and hundreds of years of paper records of ancient master games. (1)
The Robots are winning!
They are coming for us next. As mentioned here before, Google has set its sights on cracking Starcraft II. Blizzard is on their side already; there is no stopping them now. (2)
The AI will be playing fair: it will only see what's on the screen, and maybe even have an APM limit. But it is coming, and it will be tough. We'd better burrow those Widow Mines behind the Ultralisk line and cloak them with our Mothership. (2)
But there is hope. There is a way for our race to fight back while holding off the attack waves. The best part: it actually reads like something straight out of the Starcraft campaign missions. The chess scene did it years ago, and the Go scene is starting with it now.
We will become hybrids. Or, in Starcraft terms, we will play in Archon Mode with that AI. In chess, it quickly became clear that a Human-AI Archon, or "Centaur" as they call it (3), beats both the best humans and the best AIs at the very top level, and Centaurs have their own league now, pushing both human and AI understanding of chess to new limits. Garry Kasparov invented that play style. That approach to dealing with advanced AI is also known as "Race with the Machines" and applies to all Human-AI strategic interaction in life, not just strategy games (4).
We still have some time before the AI arrives, so let's focus on defense. We need to put some effort into this, like, right now, and I hope you are with me. What can we do to be ready for them? Well, I did go through a scientific education in my day job, so let me set out a draft build order. That's really what this video series is meant to develop: a meta build order, a discussion of how to systematically improve the highest level of play.
Let me give you some examples of the thinking:
My first hypothesis for the human-AI Archon mode meta is that all teams will end up playing Random.
A first primitive model of the game is enough to derive that prediction, with only two assumptions. The 1st Axiom is Perfect Play: once all races are played perfectly, Random has an advantage, because your opponent is under more pressure to scout. Elazer's 4-Ravager rush against Random without a Probe scout, anyone? The only reason not to choose Random would be imbalance. Enter the 2nd Axiom: Perfect Balance. Everything else being equal, and assuming a god-like Blizzard perfectly nerfing the Adept whenever it racks up too many of those shady Protoss wins, Random will rule the day. The experimental test of Oma's Random Hypothesis will take a long time, though.
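To make the two axioms concrete, here is a toy model in Python. The EDGE and SCOUT_COST numbers are made up purely for illustration; the only point is that under Perfect Play and Perfect Balance, the one asymmetry left is information about your race.

```python
EDGE = 0.03        # assumed win-rate bonus for a tailored, race-specific opening
SCOUT_COST = 0.02  # assumed win-rate cost of the extra early scouting

def winrate(opponent_knows_race: bool, opponent_scouts: bool) -> float:
    """Win rate of the player being scouted, under the two axioms."""
    base = 0.5  # Axiom 2: Perfect Balance
    if opponent_knows_race:
        return base - EDGE                  # a tailored opening is used against us
    if opponent_scouts:
        return base - EDGE + SCOUT_COST     # they find out, but paid for the information
    return base                             # generic opening, no edge either way

print("picking one race:       ", winrate(opponent_knows_race=True,  opponent_scouts=False))
print("picking Random, scouted:", winrate(opponent_knows_race=False, opponent_scouts=True))
print("picking Random, blind:  ", winrate(opponent_knows_race=False, opponent_scouts=False))
# With any EDGE > 0, Random never does worse than a single race and usually does better.
```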
Fighting Back Stage 1: Scouting out the AI's Weak Spots
In general terms, what the AIs in Go and chess are really bad at is the Meta. I understand just about enough Go to extract that much from the Go community's analysis of the pro games against AlphaGo. Don't get me wrong: the AI invented a new move that killed the current meta. Move 37 in the second game, where the human took an unusually long think before replying. The move altered something in the meta of Go. But the humans were faster to start adapting the meta, and that is the only way they can at least occasionally win a game. Move 37 is a bit like adding 6 burrowed Banelings in the mineral line of your opponent's 3rd before he takes it. A really small detail in the bigger picture. But it may be enough to break the balance in favor of Zerg when used in an otherwise perfectly executed hydra-ling-bane in ZvP. Once that piece is broken, all the logic about the early-game strategy leading to that point also changes.

And that point is where we humans still prevail: adapting the Meta. If I understand enough Go lingo, I believe that this is exactly how Lee Sedol's only win in the 4-1 series was achieved: he changed something in the early game to a move that had long been considered unplayable, and then somehow made it work, using a variation on the infamous move 37. This weakness makes sense when you look at the technical way the AIs play the game at the moment: they need a very, very large library of games to get going with their reinforcement learning. We are still ahead when it comes to creating a small number of viable new game styles for that library as soon as a new change pops up. That is because we have a good theory of how the game works, while the AI is only experimenting based on reinforcement learning. Or can they develop theories? I don't even know what that would mean… Anyway, for now that is our only chance to keep winning. Or at least to have a continued role in AI-Human Archon games. So what can we practically do to achieve that?
We need to speed up improving the Meta.
Please excuse me being all sciency about it, and let me briefly define a bit more formally what is commonly referred to as the "Meta". The term is generally used to describe the expected reactions of your opponent and the outcomes of strategic confrontations. It can also be called the viable decision tree. For example, you can say: "Battlecruisers are currently not in the meta in TvT", which means you won't see Battlecruisers in something like 95% of Grandmaster-level TvT wins. The meta is composed of a set of build orders for each race, which are all connected by a set of "reasonable" responses. To make this work formally, I need a slightly broader definition of "build order" that includes certain non-building "meta-moves", like purposefully taking an inefficient trade or base-trading.
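To make the "viable decision tree" idea concrete, here is a minimal sketch in Python. Everything in it (the class names, the 5% threshold, the example numbers) is my own illustrative assumption, not an established formalism.

```python
from dataclasses import dataclass, field

@dataclass
class BuildOrder:
    race: str                                       # "T", "Z" or "P"
    name: str                                       # e.g. "hydra-ling-bane"
    meta_moves: list = field(default_factory=list)  # non-building moves, e.g. "base trade"

@dataclass
class MetaNode:
    build: BuildOrder
    # "reasonable" responses: share of high-level games in which each
    # opposing build shows up against this one (made-up numbers)
    responses: dict = field(default_factory=dict)

def in_meta(node: MetaNode, response: str, threshold: float = 0.05) -> bool:
    """A response is 'in the meta' if it appears often enough in the viable tree."""
    return node.responses.get(response, 0.0) >= threshold

hydra_ling_bane = MetaNode(
    BuildOrder("Z", "hydra-ling-bane"),
    responses={"chargelot-archon-immortal": 0.45, "phoenix-adept": 0.30},
)

print(in_meta(hydra_ling_bane, "phoenix-adept"))        # True
print(in_meta(hydra_ling_bane, "mass Battlecruiser"))   # False: not in the current meta
```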
But how can we get more systematic about Meta progress?
That is easier said than done… Aren't we all trying, all the time, to get ahead in the meta? I'd even claim that that is what makes Starcraft so great and lets us stick with the same fucking game for decades...? We are doing it, yes, and we are even quite advanced at it, so we have a language to talk about it, as used in this forum. But while I'm sure there is a lot of talk about exactly the kind of weird move like 6 burrowed Banelings in your opponent's third mineral line before he takes it, we do not keep records or systematize how we understand Starcraft in a unified theory. We don't have a real "Theory of Starcraft" that lists all the relevant factors. Chess and Go both have such theoretical frames; they are used, for example, to create scoring schemes for mid-game positions, and they are really helpful for figuring out crazy new moves. Or, to be more precise: there is a pretty advanced theory out there in the community that we somehow share, so we can talk about these things and know a lot, but we have never formalized that theory. The AIs will learn all of that about the current meta from our previous games and start cornering us thanks to their better execution. So how can we, as a whole human community, improve our meta?
Developing the Science of Starcraft
One way to formalize that talk would be to develop a code that allows us to talk faster. We have an informal code that is really sophisticated already, as when I make a statement like: "Can you believe it? Stats just won after seven minutes in the finals against Gumiho, when he did the Void Ray & base-trade move that Strange had used in that Strange vs. Polt game". Most readers in this forum will have understood right away which move in which game I'm referring to, even if it's already more than a year old. I would argue that what Strange did in Strange vs. Polt was a potentially meta-altering event. There are many other such games, so I'm trying to formalize this here. A more recent one would be "Neeb's Archon Drop", but I'll stick to Strange vs. Polt as an example for now. The more subtle the meta improvements we can find, the better: defending Archon drops with Roaches at perfect execution is something so clearly possible that the AI would probably find it by just experimenting a bit beyond a fully standardized, perfectly executed ZvP game.
We already have machine-readable formats for replays and game stats, and our informal language for theorizing about possible SC2 games is pretty good so far. Saying "Stats just won the GSL against Gumiho by initiating a base trade against Terran with Void Rays at 6 minutes" works pretty well. That game would probably also end up in the Hall of Fame of crazy games, so that would make it "Stats won in (Stats; Gumiho; 6:00; BC17; GF), because he did… what?" Maybe "BaseTrade in PvT with Void Rays at 6 minutes"; codified, that would make it "Strategy in MatchUp in BuildOrderDeviation from Meta". One thing we might want to do in order to improve our understanding is draw up a Meta-Decision Tree, with verbal reasoning about why certain things will or won't work at perfect execution. But that's only one part of doing science:
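As a sketch only: one way the notation proposed above could be written down, with hypothetical field names (the post deliberately leaves the exact scheme open).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NotableGame:
    player: str          # e.g. "Stats"
    opponent: str        # e.g. "Gumiho"
    game_time: str       # in-game time of the meta-altering move, e.g. "6:00"
    event: str           # e.g. "BC17" / "DH16"
    stage: str           # e.g. "GF", "RO32"

@dataclass(frozen=True)
class MetaDeviation:
    strategy: str        # e.g. "BaseTrade"
    matchup: str         # e.g. "PvT"
    deviation: str       # how the build order departs from the current meta

stats_vs_gumiho = (
    NotableGame("Stats", "Gumiho", "6:00", "BC17", "GF"),
    MetaDeviation("BaseTrade", "PvT", "Void Rays at 6 minutes"),
)
print(stats_vs_gumiho)
```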
Deliberately Experimenting
But like all sciences, Starcraft will for now be an experiment-driven science. We have no way of telling theoretically whether (BT; PvT; VoidR; 6:00) opens up a new avenue for P to win a larger percentage of games or not. I doubt it does; please excuse the silly example. Currently, our way to find out is the competitive play of pros, who are our best approximation to perfect execution. But the AI will have perfect execution and will be able to trial-and-error much faster than we can. What can we do?
We need to search the SC meta for more strange but viable games like (Strange; Polt; 15:00; DH16; RO32) in a smart and efficient way. And you know who would probably like to see more games like that? The casters and the viewers. I didn't check the numbers, but Strange vs. Polt was pretty much the most-watched Starcraft game ever. Unfortunately, games like that are rare, and moves like Strange's Void Ray base trade against Terran even rarer. Pro gamers cannot rely on the scientific axiom of "perfect execution" that will become a reality as soon as the AIs hit, so they can't just try something they have never practiced while under high pressure to win a lot of money. We already see a lot more experimentation in the pros' non-prize games, which is what made today's meta so sophisticated. But could there be a way to push our pros to focus less on execution and more on the meta? They will have to, as soon as the robo-hybrids enter the ladder and pure execution skill is the first thing to lose its value. So let me just propose the first Starcraft research policy:
A crowd-sourced award for the coolest game.
As the crowd apparently really, really liked Polt vs. Strange, and the casters and YouTubers actually made more money because of that particular game, we could attach a second award to the biggest tournaments: one that rewards the player who made the most unorthodox move work across all their matches. Your vote is weighted by how much money you put in, and everybody who chips in a minimum amount gets all the replays (a rough sketch of the tallying follows below). That, and the crowd, should be all it takes to get one, or maybe two, very important things in Starcraft II:
1) Better chances against the robots
2) More epic games
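A minimal sketch (my own assumptions, not a worked-out proposal) of how the crowd-funded "coolest game" award could be tallied: vote weight is simply the amount contributed, and any contributor above a minimum chip-in gets the replay pack.

```python
from collections import defaultdict

MIN_CONTRIBUTION = 5.0     # assumed minimum chip-in, in whatever currency

contributions = {          # viewer -> amount contributed
    "viewer_a": 20.0,
    "viewer_b": 5.0,
    "viewer_c": 2.0,
}
votes = {                  # viewer -> the player they vote for
    "viewer_a": "Strange",
    "viewer_b": "Stats",
    "viewer_c": "Strange",
}

weighted = defaultdict(float)
for viewer, player in votes.items():
    weighted[player] += contributions.get(viewer, 0.0)   # money-weighted vote

winner = max(weighted, key=weighted.get)
replay_recipients = [v for v, amt in contributions.items() if amt >= MIN_CONTRIBUTION]

print("award winner:", winner)              # Strange (22.0 vs 5.0)
print("gets replays:", replay_recipients)   # viewer_a, viewer_b
```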
That's it for now from Oma Morkie's Starcraft Science Blog, and there you already have my first policy recommendation.
In SC2, any advanced AI is gonna mop the floor with human players. SC2 is very, very different from chess and Go though, so I am still not sure that the AlphaGo team has much of an advantage from having conquered Go. Chess and Go have the peculiar property that a single move can influence the game unpredictably far down the road, and it also isn't straightforward what the best move is at any given position or which move is "winning". SC2 (or BW), as much as people want to pretend they are "strategic chess", are in fact mostly games of local tactical skill. You can see immediately if a fight helped you or harmed you, you can have tangible short-term goals that are undoubtedly positive for you, etc. This actually makes it less suitable for the "play and learn" AI approach that was so successful with Go, because such a strategy would waste an incredible amount of resources going through a useless space of moves that can be eliminated based on short-term judgement. However, if an AI capable of short-term judgement is constructed and those steps are put together in the manner of a (very simple) overall Go/chess approach, such an AI would be unbeatable.

It's not just about the micro, but about the fact that such an AI should always be able to judge the outcome of a fight and never take losing fights. How do you beat someone who never loses armies? I think an optimal AI game of SC2 looks extremely different from what we are used to seeing from human players; it's basically a clusterfuck of small armies trying to outposition each other for a local advantage or a surround that makes the smaller army unable to retreat. Or the AI will discover some unbeatable timings and that will be all; that's also a possibility.
On July 12 2017 16:34 OmaMorkie wrote: That is because we have a good theory of how the game works, while the AI is only experimenting based on reinforcement learning. Or can they develop theories? I don't even know what that would mean… Anyway, for now that is our only chance to keep winning.
Sorry, not true. AlphaGo learns by self-play, not just by observing human games. This means that it can develop new strategies and meta by itself. The latest iteration of AlphaGo won 60-0 against top pros on a Go server, and 3-0 against the world number one, Ke Jie.
Its play style, especially in the opening, is new, and professionals all over the world are trying to copy it and experiment with these new ideas.
In fact, it is the AI that is teaching us humans, not the other way around.
Depends a bit on whether APM limits are introduced. Without APM limits, I agree with you, it will be a reaper-micro freakshow. But with strict APM limits, things could get a lot more interesting.
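Purely as an assumption about how such a limit could be enforced (nothing official from DeepMind or Blizzard), a strict APM cap could be as simple as a sliding-window counter on the bot's actions:

```python
from collections import deque

class ApmLimiter:
    def __init__(self, max_apm: int = 180):
        self.max_apm = max_apm
        self.window = deque()          # timestamps (seconds) of recent actions

    def try_act(self, now: float) -> bool:
        """Return True if the bot may issue an action at time `now`."""
        while self.window and now - self.window[0] >= 60.0:
            self.window.popleft()      # drop actions older than one minute
        if len(self.window) >= self.max_apm:
            return False               # over the limit: the action is dropped or delayed
        self.window.append(now)
        return True

limiter = ApmLimiter(max_apm=180)
allowed = sum(limiter.try_act(t * 0.1) for t in range(3000))  # 5 minutes of spam clicking
print(allowed)  # at most 180 per rolling minute, so far fewer than 3000
```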
While the micro decisions have less influence, build orders, base setup, etc. are still really complex. Not sure if it can compare to Go (again, under the assumption of perfect execution; think of two pros playing Archon mode at Very Slow game speed).
One way or another, improving on the Meta seems like the only role for humans in competitive Starcraft.
I guess I should also change the title. It's really not about beating the AI (no chance), but about teaming up with it and becoming better than pure AI. What Kasparov did for chess, but for SC.
On July 12 2017 17:05 Mendelfist wrote: Sorry, not true. AlphaGo learns by self-play, not just by observing human games.
I would argue that chess is the better reference for where the game is going long term. Human/AI hybrids still win there, and I believe they will soon start showing up in Go.
I agree that beating an AI without aid is impossible, but I would bet that, at least for a few more years, Human-AI Archons will beat pure AI.
The idea of collectively focusing more on the Meta is meant to improve our play style for a future of AI-Human Archon tournaments with APM limits... which I expect to be absolutely awesome to play and watch.
I was not talking about APM. Even with limited actions, the ability of the AI to make calculated decisions correctly at each point will eventually completely turn the game upside down. I don't think there will be much room for any "thinking" left. How many actual strategic decisions are there during one game? How many situations where one action isn't strictly better than the others? I'd say just a handful.
On July 12 2017 18:19 opisska wrote: I was not talking about APM. Even with limited actions, the ability of the AI to make calculated decisions correctly at each point will eventually completely turn the game upside down. I don't think there will be much room for any "thinking" left. How many actual strategic decisions are there during one game? How many situations where one action isn't strictly better than the others? I'd say just a handful.
Most decisions depend on your ability to micro (at least for Terran), so I don't think there will ever be a fair way to make an AI play against humans. The game doesn't have that much strategic depth afaik, but maybe I am underestimating our ability to make inferences...
On July 12 2017 16:41 OmaMorkie wrote: Even in chess, the Human-AI Centaurs are still winning...
You can call this "winning", but what happens in reality is that the AI does almost all of the work; only when it evaluates two moves equally does human judgment come into play to break the tie. The equivalent situation in SC2 would be a hybrid where the human player makes high-level build order decisions while the AI actually plays the game, including micro, macro and tactics. Basically, it's totally humiliating for humans to participate in as a competitive activity, which is why centaur chess is not very well respected.
And theoretically, one can say that AI + something > AI. That is to say, as long as humans have some marginally useful input it will improve the AI, but this will be true even if that input is essentially negligible.
On July 12 2017 16:34 OmaMorkie wrote: A really small detail in the bigger picture. But it may be enough to break the balance in favor of Zerg when used in an otherwise perfectly executed hydra-ling-bane in ZvP.
What? ZvP is already in favour of Zerg without burrowed Banelings, for god's sake...
On July 12 2017 17:03 opisska wrote: It's not just about the micro, but about the fact that such an AI should always be able to judge the outcome of a fight and never take losing fights. How do you beat someone who never loses armies? I think an optimal AI game of SC2 looks extremely different from what we are used to seeing from human players; it's basically a clusterfuck of small armies trying to outposition each other for a local advantage or a surround that makes the smaller army unable to retreat. Or the AI will discover some unbeatable timings and that will be all; that's also a possibility.
This used to be a contentious topic of discussion in BW vs SC2 threads, the degree to which the game encourages you to engage. The fear was that unlimited selection, improved pathfinding (responsiveness, movement) and clumping of units, in a world where both sides had powerful splash attacks, would prohibitively discourage engagements. One wrong step would mean your entire army would evaporate and you couldn't take that risk, and because of unlimited selection you could easily direct your entire army to avoid engagements. Supposedly, in BW you could more easily withdraw from an engagement without severe losses (as your army is more spaced out), but you were also more likely to get drawn into an engagement and have some sort of skirmish because controlling your army was so hard.
I can't evaluate whether SC2 suffers from this; I guess there are also countervailing forces negating this dynamic, so it's not a clear-cut case. One thing that happens is that players with a reason to will push for engagements, and it's not always possible to avoid them. And pro players are good enough to spread out their army, be careful and not overcommit. So this dynamic might not be that obvious.
What you can say is that the AI should be very confident in its micro (as it will be near perfect), so it should always push for engagements, even if it seems strategically ill-advised to a human. This will immediately break the meta and give the AI a likely insurmountable strategic advantage. On the other hand, in AI vs AI games you might see the exact opposite: both sides will never engage because defensive set-ups are so powerful, etc. It's hard to predict, and it also depends on how they "grow" the AI. If it only plays itself it might severely overestimate its opponent's ability to micro. I think it will be interesting to witness this development, both in relation to AI behavior and within the context of these fundamental aspects of SC2 design. The AI gives a fresh perspective and a potentially higher caliber of play, so it might break a lot of assumptions.
I think it's quite premature to evaluate the potential success of an AlphaSC2, given that SC2 and Go are completely different games with different rules and parameters.
For one, Go is a turn-based game with complete information, while SC2 is a game with incomplete information set in real time.
It is true that if the game were about pure mechanics humans would probably be crushed; however, the aim of the Alpha crew is to make an AI that can out-think a human.
And the out-thinking part is much, much tougher when the information you have is incomplete. Thus the AI needs to learn the specific tells of a certain strategy; it needs to learn how to scout for said tells, when to scout, and how to potentially identify misinformation from elaborate cheeses.
After the scouting is out of the way, the AI also needs to learn how to develop counter-strategies and tactics of its own, how to evaluate its losses in the grand scheme of its strategy, and how to evaluate its position versus that of the human.
After the above are ironed out, the AI needs to learn how to execute its plan.
I'm not saying it's not possible, but I think it'll take the AI team a solid 5 years at the minimum to reach the level of refinement needed to beat humans in SC2.
Heck, it might possibly even take 10.
On July 12 2017 19:28 Destructicon wrote: I think it's quite premature to evaluate the potential success of an AlphaSC2, given that SC2 and Go are completely different games with different rules and parameters.
For one, Go is a turn-based game with complete information, while SC2 is a game with incomplete information set in real time.
It is true that if the game were about pure mechanics humans would probably be crushed; however, the aim of the Alpha crew is to make an AI that can out-think a human.
And the out-thinking part is much, much tougher when the information you have is incomplete. Thus the AI needs to learn the specific tells of a certain strategy; it needs to learn how to scout for said tells, when to scout, and how to potentially identify misinformation from elaborate cheeses.
After the scouting is out of the way, the AI also needs to learn how to develop counter-strategies and tactics of its own, how to evaluate its losses in the grand scheme of its strategy, and how to evaluate its position versus that of the human.
After the above are ironed out, the AI needs to learn how to execute its plan.
I'm not saying it's not possible, but I think it'll take the AI team a solid 5 years at the minimum to reach the level of refinement needed to beat humans in SC2.
Heck, it might possibly even take 10.
While it seems very difficult to create an AI capable of outthinking humans in SC2, I don't think it will take them 10 years to beat humans. If you have perfect micro there must be an infinite number of one-trick build orders that are unstoppable by conventional means. I think if you compare chess and Go with StarCraft, then even if StarCraft is the more complex game, and the game where humans might have the greatest strategic advantage, it is also the game where humans might be weakest tactically.
On July 12 2017 19:28 Destructicon wrote: I think it's quite premature to evaluate the potential success of an AlphaSC2, given that SC2 and Go are completely different games with different rules and parameters.
For one, Go is a turn-based game with complete information, while SC2 is a game with incomplete information set in real time.
It is true that if the game were about pure mechanics humans would probably be crushed; however, the aim of the Alpha crew is to make an AI that can out-think a human.
And the out-thinking part is much, much tougher when the information you have is incomplete. Thus the AI needs to learn the specific tells of a certain strategy; it needs to learn how to scout for said tells, when to scout, and how to potentially identify misinformation from elaborate cheeses.
After the scouting is out of the way, the AI also needs to learn how to develop counter-strategies and tactics of its own, how to evaluate its losses in the grand scheme of its strategy, and how to evaluate its position versus that of the human.
After the above are ironed out, the AI needs to learn how to execute its plan.
I'm not saying it's not possible, but I think it'll take the AI team a solid 5 years at the minimum to reach the level of refinement needed to beat humans in SC2.
Heck, it might possibly even take 10.
AIs have beaten humans in heads-up Texas hold'em, which has hidden information and randomness. And the real-time element probably favours the AI, not humans. At least that was true for chess, where humans resisted far longer in correspondence chess than in over-the-board play.
Yes, I agree - and the AI will even have to learn things like anticipating that an active tech lab on a Starport may be a fake, and be ready for a Raven instead of a cloaked Banshee. I.e. it will need to learn how to play mind games, which is what makes it so exciting.
I would argue that hybrid SC2 would still be a hell of a lot of fun, exactly because of the complexities pointed out by Destructicon, with the AI taking care of micro and build order while the human makes all the cheesy decisions, plays the mind games, adapts the play style to new maps and so on.
On July 12 2017 22:23 StarscreamG1 wrote: No way AlphaGo will be needed for SC2; 100 APM, Marines and Medivacs would be enough.
The Deepmind team announced SC2 AI as their new project quite a while ago.
where did you see this?
I've only ever seen it announced as an "interest" or "something they'd like to conquer in the future" (basically something they spend time on but not an official project like alphago for example)
anyways sc2 is a different beast from turn based games
chess or go have branching factors that could at least be represented with numbers
the branching factors in sc2 are basically infinity
machine learning approaches to games like sc2 are going to need completely new and radical methods, and when they've conquered it, it will mean we are close to generalized machine learning.
don't be surprised if when machine learning based sc2 AIs are released by say, google - that they are actually very limited in comparison to what many people might expect. they may not be able to adapt or strategize quite like you might think if you use alphago's performance to form your expectations
It is going to be a long time before AI conquers the human mind. I don't think I will be alive to see it, and even the youngest among us probably won't.
But many of you assume the AI will have an advantage without actually realizing that it is an unfair advantage.
On July 12 2017 19:34 Grumbels wrote: While it seems very difficult to create an AI capable of outthinking humans in SC2, I don't think it will take them 10 years to beat humans. If you have perfect micro there must be an infinite number of one-trick build orders that are unstoppable by conventional means.
The biggest limitation a human has is having to use a monitor, speakers, keyboard and mouse to control the game. Build an AI that mechanically controls the game with a mouse, monitor, speakers and a keyboard, and therefore suffers the same limitations as a human (like the mouse slipping a bit more on the mouse pad than expected, causing a mis-micro), and the best players will toast the AI for many years to come.
The more interesting dynamic would be to allow humans to see, hear and control the game with their minds and compete with an AI. Then humans could achieve perfect micro and macro and a ridiculously high APM, and I see no possibility for an AI to defeat them with regularity in my lifetime. The mind is too powerful.
Remember SC2 is played with a mouse, keyboard, speakers and a monitor. The AI should have to play with those too. Or the mind shouldn't have to.
On July 12 2017 19:28 Destructicon wrote: I think it's quite premature to evaluate the potential success of an AlphaSC2, given that SC2 and Go are completely different games with different rules and parameters.
For one, Go is a turn-based game with complete information, while SC2 is a game with incomplete information set in real time.
It is true that if the game were about pure mechanics humans would probably be crushed; however, the aim of the Alpha crew is to make an AI that can out-think a human.
And the out-thinking part is much, much tougher when the information you have is incomplete. Thus the AI needs to learn the specific tells of a certain strategy; it needs to learn how to scout for said tells, when to scout, and how to potentially identify misinformation from elaborate cheeses.
After the scouting is out of the way, the AI also needs to learn how to develop counter-strategies and tactics of its own, how to evaluate its losses in the grand scheme of its strategy, and how to evaluate its position versus that of the human.
After the above are ironed out, the AI needs to learn how to execute its plan.
I'm not saying it's not possible, but I think it'll take the AI team a solid 5 years at the minimum to reach the level of refinement needed to beat humans in SC2.
Heck, it might possibly even take 10.
AIs have beaten humans in heads-up Texas hold'em, which has hidden information and randomness. And the real-time element probably favours the AI, not humans. At least that was true for chess, where humans resisted far longer in correspondence chess than in over-the-board play.
SC2 has far more ambiguity, far more possibilities, in its hidden information and randomness than Texas holdem. Far, far more.
AI advantage over humans in SC2 would be mostly tactical - perfect build execution, perfect micro - until the AI built up knowledge of its human opponents and also had the capacity to correctly weigh possibilities based on incomplete information. Which is one of the hardest parts of creating a true AI, giving it that human intuition, the leaps of logic that human brains are capable of making using incomplete information and still reaching an accurate conclusion.
On July 12 2017 19:28 Destructicon wrote: I think it's quite premature to evaluate the potential success of an AlphaSC2, given that SC2 and Go are completely different games with different rules and parameters.
For one, Go is a turn-based game with complete information, while SC2 is a game with incomplete information set in real time.
It is true that if the game were about pure mechanics humans would probably be crushed; however, the aim of the Alpha crew is to make an AI that can out-think a human.
And the out-thinking part is much, much tougher when the information you have is incomplete. Thus the AI needs to learn the specific tells of a certain strategy; it needs to learn how to scout for said tells, when to scout, and how to potentially identify misinformation from elaborate cheeses.
After the scouting is out of the way, the AI also needs to learn how to develop counter-strategies and tactics of its own, how to evaluate its losses in the grand scheme of its strategy, and how to evaluate its position versus that of the human.
After the above are ironed out, the AI needs to learn how to execute its plan.
I'm not saying it's not possible, but I think it'll take the AI team a solid 5 years at the minimum to reach the level of refinement needed to beat humans in SC2.
Heck, it might possibly even take 10.
AIs have beaten humans in heads-up Texas hold'em, which has hidden information and randomness. And the real-time element probably favours the AI, not humans. At least that was true for chess, where humans resisted far longer in correspondence chess than in over-the-board play.
SC2 has far more ambiguity, far more possibilities, in its hidden information and randomness than Texas holdem. Far, far more.
But one of the consequences is that a lot of that complexity isn't explored by humans either. One of the interesting takeaways from the poker match was that humans had trouble adjusting to styles that were very rare among humans, or to situations that came up infrequently. E.g. strange slowplays followed by large overbets (with superhuman balance). People are wondering how the AI will be able to read humans and adjust to mind games. But humans reading the AI might be an even tougher challenge.
Which is one of the hardest parts of creating a true AI, giving it that human intuition, the leaps of logic that human brains are capable of making using incomplete information and still reaching an accurate conclusion.
Intuition is no divine spark; it relies on experience. The further you are from your usual experience, the more you have to rely on conscious thought, which is far slower.
And modern AIs do learn from experience. That's what made AlphaGo so successful: learning first from human play, then from self-play, to the point where it amassed orders of magnitude more experience than pros who have studied the game their whole lives.
If humans still have an advantage, it's in our pace of learning. We can learn from fewer examples. But since DeepMind is only restricted in APM, not in the number of training games it can use, I don't see how this will serve us.
Of course the AI will be better in terms of macro and micro. Mechanical tasks are easy for a machine; humans don't want to compete in that area and don't have to. Once the game reaches the late middle game and the position is equal, the machine will take over easily. The question is whether the machine can outsmart and out-predict the human in strategy, build order and early-game planning. Can a machine make a massive drop while setting a trap on the other side of the map? Can a machine change its game plan and readjust according to new information it receives about the opposition? Can it hide its intentions? Cancel a building? Make a Pylon in a hidden corner? Those and more questions of this type are important. AI did this many years ago with chess. It not only showed perfect tactical vision, traps, sacrifices, incredible defense and deep calculation predicting the future (the easy part), but was also able to spot deep strategic ideas and plans. And I think this will eventually happen here too. The brute force of the CPU is already enough; we don't need faster CPUs. StarCraft is limited in terms of possibilities. Chess and Go have effectively infinite positions and the machine still won, not by seeing everything, because it can't (no CPU is fast enough, even all supercomputers put together). It will do the same with StarCraft, though it might take longer, because it is a completely different type of game with incomplete information; but it is definitely possible. It will be exponential growth. For example: after 3 months of practice the AI will be playing like crap. A year later, still bad. Three years later, still making silly mistakes and playing worse than a C- Brood War player. But two weeks more and it is beyond superhuman level out of nowhere. Just like they expected AlphaGo to beat the best Go player around 2025, and it happened in 2016, a huge surprise. So we just have to wait and it will happen.
For an AI to be as smart as a bug: a lot of years. To be as smart as a dog: many, many years. To reach chimpanzee level: a very long time. But to go from chimpanzee level to something many times smarter than all human brains put together: just a year. The technological singularity is close; that does not mean it will happen, but it is getting closer.
It's like building a puzzle with 100 pieces. It's extremely hard to tell which piece goes with which at the beginning, or even in the middle. So one by one, by testing, you link them together. At the end you have just 5 pieces left. So much work and so much time has passed, and still nothing: it is incomplete. Then boom, the last 5 pieces are easier than the first 80, and going from nothing to a complete picture takes much less time.
On July 13 2017 01:23 BronzeKnee wrote: It is going to be a long time before AI conquers the human mind. I don't think I will be alive to see it, and even the youngest among us probably won't.
But many of you assume the AI will have an advantage without actually realizing that it is an unfair advantage.
On July 12 2017 19:34 Grumbels wrote: While it seems very difficult to create an AI capable of outthinking humans in SC2, I don't think it will take them 10 years to beat humans. If you have perfect micro there must be an infinite number of one-trick build orders that are unstoppable by conventional means.
Remember SC2 is played with a mouse, keyboard, speakers and a monitor. The AI should have to play with those too. Or the mind shouldn't have to.
I have no idea what you're trying to imply, but the AI won't use anything physical; it will have access to a virtual version of those.
And he implies that that is an advantage. The AI can basically control the game "with its mind", while for a human, there are a bunch of hoops to jump through.
I actually hope that we will eventually get to control games "with our minds". UI design is basically just that: trying to make controlling games feel more natural and not require extra thought. For me this is often one of the problems with games, especially strategy games that require a lot of different inputs quickly. I know what needs to be done, but I just can't get that information from my mind to the units on screen quickly enough. Of course, more practice would make this easier and work better, but there is a big hurdle between my mind and the game, and an AI does not have this problem.
On July 12 2017 22:23 StarscreamG1 wrote: No way AlphaGo will be needed for SC2; 100 APM, Marines and Medivacs would be enough.
The Deepmind team announced SC2 AI as their new project quite a while ago.
where did you see this?
I've only ever seen it announced as an "interest" or "something they'd like to conquer in the future" (basically something they spend time on but not an official project like alphago for example)
anyways sc2 is a different beast from turn based games
chess or go have branching factors that could at least be represented with numbers
the branching factors in sc2 are basically infinity
machine learning approaches to games like sc2 are going to need completely new and radical methods, and when they've conquered it, it will mean we are close to generalized machine learning.
don't be surprised if when machine learning based sc2 AIs are released by say, google - that they are actually very limited in comparison to what many people might expect. they may not be able to adapt or strategize quite like you might think if you use alphago's performance to form your expectations
This was announced last Blizzcon. They even had a panel on it and everything.
On August 06 2017 17:39 Modesty00 wrote: Of course the AI will be better in terms of macro and micro. Mechanical tasks are easy for a machine; humans don't want to compete in that area and don't have to. Once the game reaches the late middle game and the position is equal, the machine will take over easily. The question is whether the machine can outsmart and out-predict the human in strategy, build order and early-game planning. Can a machine make a massive drop while setting a trap on the other side of the map? Can a machine change its game plan and readjust according to new information it receives about the opposition? Can it hide its intentions? Cancel a building? Make a Pylon in a hidden corner? Those and more questions of this type are important. AI did this many years ago with chess. It not only showed perfect tactical vision, traps, sacrifices, incredible defense and deep calculation predicting the future (the easy part), but was also able to spot deep strategic ideas and plans. And I think this will eventually happen here too. The brute force of the CPU is already enough; we don't need faster CPUs. StarCraft is limited in terms of possibilities. Chess and Go have effectively infinite positions and the machine still won, not by seeing everything, because it can't (no CPU is fast enough, even all supercomputers put together). It will do the same with StarCraft, though it might take longer, because it is a completely different type of game with incomplete information; but it is definitely possible. It will be exponential growth. For example: after 3 months of practice the AI will be playing like crap. A year later, still bad. Three years later, still making silly mistakes and playing worse than a C- Brood War player. But two weeks more and it is beyond superhuman level out of nowhere. Just like they expected AlphaGo to beat the best Go player around 2025, and it happened in 2016, a huge surprise. So we just have to wait and it will happen.
For an AI to be as smart as a bug: a lot of years. To be as smart as a dog: many, many years. To reach chimpanzee level: a very long time. But to go from chimpanzee level to something many times smarter than all human brains put together: just a year. The technological singularity is close; that does not mean it will happen, but it is getting closer.
It's like building a puzzle with 100 pieces. It's extremely hard to tell which piece goes with which at the beginning, or even in the middle. So one by one, by testing, you link them together. At the end you have just 5 pieces left. So much work and so much time has passed, and still nothing: it is incomplete. Then boom, the last 5 pieces are easier than the first 80, and going from nothing to a complete picture takes much less time.
I think that's a good point. If you look at the encephalization quotient (roughly the brain-to-body mass ratio), then humans outpace the other great apes by only a factor of three. Assuming that the main difference between humans and the others is brain size, which translates to computing power, that means that if you are capable of simulating a chimpanzee, then simulating human intelligence is only a question of linking in another supercomputer.
This is obviously a rather simplified way of looking at it, since brain differences between mammals are qualitative as well as quantitative, but it is useful. If you compare Starcraft players at the top level, then there are primarily quantitative differences. All players are capable of doing everything, but some are better at it. The leap from awful to godlike execution is fairly trivial for an AI; the more difficult question is whether you can bring an AI to care about scouting or micro or build orders etc. at all. Once it is capable of competing with a human on any level, then it's only a few more months of practice or another hardware upgrade before it vastly outpaces any human.
On July 13 2017 01:23 BronzeKnee wrote: It is going to be a long time before AI conquers the human mind. I don't think I will be alive to see it, and even the youngest among us probably won't.
But many of you assume the AI will have an advantage without actually realizing that it is an unfair advantage.
On July 12 2017 19:34 Grumbels wrote: While it seems very difficult to create an AI capable of outthinking humans in SC2, I don't think it will take them 10 years to beat humans. If you have perfect micro there must be an infinite number of one-trick build orders that are unstoppable by conventional means.
Remember SC2 is played with a mouse, keyboard, speakers and a monitor. The AI should have to play with those too. Or the mind shouldn't have to.
I have no idea what you're trying to imply, but the AI won't use anything physical; it will have access to a virtual version of those.
And he implies that that is an advantage. The AI can basically control the game "with its mind", while for a human, there are a bunch of hoops to jump through.
I actually hope that we will eventually get to control games "with our minds". UI design is basically just that: trying to make controlling games feel more natural and not require extra thought. For me this is often one of the problems with games, especially strategy games that require a lot of different inputs quickly. I know what needs to be done, but I just can't get that information from my mind to the units on screen quickly enough. Of course, more practice would make this easier and work better, but there is a big hurdle between my mind and the game, and an AI does not have this problem.
This seems fun for FPS games. Everyone always has perfect headshots because that's how they envisioned it mentally.
Once a good enough AI is made, programming it to use a human interface should not be too hard. It would hotkey around like a mofo, and not to mention it could micro pretty well on the minimap.
It is not a question of if, but rather when. I can imagine the horrors a perfected AI could do with 2 Medivacs and 10 Marines, using pickups, individual stims, splits and targeting to hold off an insane amount of ling/bane controlled by a human. A good starting point would be studying a million replays to learn the metagame. Computers are stupid, so starting with the "most probable scenario" would probably mean worker-rushing, because it would calculate it probable that the opponent would move his own workers randomly around the map, as there are more ways to do that than to mine...
Good scouting should be a very hard thing to program, but I can imagine an AI sending hallucinated Phoenixes and Lings all over the place, and an AI would calculate extremely accurately based on what it can observe: production, mining, expansions etc.
If you want to make this fair, as described by BronzeKnee, you'd need to simulate the human body: mouse movements, fingers and hands on the keyboard... The only way to properly simulate that would be to add some "malfunctioning", like 1% errors. Mouse movements between two clicks are only precise to within a few %, which means that most of the time the computer clicks well, but sometimes it misses, just like actual humans do. Same errors. We saw Inno yesterday build a second Armory and then cancel it, because it was a mistake. If even those pro players with 12 hours of play per day are making those mistakes, it would be "unfair" to remove them completely from the AI's play. Maybe you don't have to go that far with the mistakes made by the computer, but you should still have a small percentage of mistakes.
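As a sketch of that idea, with made-up error rates rather than measured human data:

```python
import random

MISCLICK_RATE = 0.01      # assumed 1% chance of a plainly wrong click
JITTER_PIXELS = 4.0       # assumed standard deviation of normal aiming error

def humanized_click(target_x, target_y):
    """Return the point the simulated 'hand' actually clicks, given the intended target."""
    if random.random() < MISCLICK_RATE:
        # gross error: the mouse slips and lands noticeably off target
        return target_x + random.uniform(-40, 40), target_y + random.uniform(-40, 40)
    # normal case: small Gaussian jitter around the intended point
    return (random.gauss(target_x, JITTER_PIXELS),
            random.gauss(target_y, JITTER_PIXELS))

# Example: 10 attempts to click a unit at (100, 200)
for _ in range(10):
    print(humanized_click(100, 200))
```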
The AI enters your base with a scout.
From what it sees, the number of Drones and the amount of minerals mined from every single patch, it will know exactly which of all the openings ever played are still possible and which are not. It will know whether there is an SCV/Probe out on the map building a proxy or not.
It will see your gas mined, your units and buildings built, and will instantly know the earliest possible time a cloaked Banshee or Dark Templar can arrive at its own base.
It will detect certain openings and know exactly how many Zerglings it has to build to stop a push; it will not build a single Zergling too many.
You don't even need a really smart AI; all you need is enough data and a program able to read this data fast enough.
The more data you have, the less smart the AI needs to be. It doesn't need to understand things, it just needs to copy things that worked out in the past, and since the AI will have perfect micro and macro, this will be more than enough.
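To illustrate the kind of lookup being described (this is not from the post, and the timing constants below are rough placeholders, not verified game data):

```python
# Toy inference: given when the scout saw a Starport being started, compute the
# earliest possible arrival of a cloaked Banshee. All durations are placeholders.
BUILD_TIMES = {            # seconds, placeholder values
    "starport": 36,
    "tech_lab": 18,
    "banshee": 60,
    "cloak_research": 79,
}
TRAVEL_TIME = 30           # placeholder: Banshee flight time across the map

def earliest_cloaked_banshee(observed_starport_start: int) -> int:
    """Earliest cloaked Banshee arrival, given when the Starport was started."""
    tech_done = observed_starport_start + BUILD_TIMES["starport"] + BUILD_TIMES["tech_lab"]
    # Banshee production and cloak research can run in parallel once the tech lab is done
    ready = tech_done + max(BUILD_TIMES["banshee"], BUILD_TIMES["cloak_research"])
    return ready + TRAVEL_TIME

# Scout saw the Starport going down at 2:30 (150 seconds of game time)
print(earliest_cloaked_banshee(150), "seconds")
```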
On August 07 2017 21:30 aerlinss wrote: The AI enters your base with a scout.
From what it sees, the number of Drones and the amount of minerals mined from every single patch, it will know exactly which of all the openings ever played are still possible and which are not. It will know whether there is an SCV/Probe out on the map building a proxy or not.
It will see your gas mined, your units and buildings built, and will instantly know the earliest possible time a cloaked Banshee or Dark Templar can arrive at its own base.
It will detect certain openings and know exactly how many Zerglings it has to build to stop a push; it will not build a single Zergling too many.
You don't even need a really smart AI; all you need is enough data and a program able to read this data fast enough.
The more data you have, the less smart the AI needs to be. It doesn't need to understand things, it just needs to copy things that worked out in the past, and since the AI will have perfect micro and macro, this will be more than enough.
Then that is not AI, and that is very far from what is being developed at DeepMind. And though this might be enough for chess, it was not enough for Go. I guess it wouldn't work with StarCraft either.
On August 07 2017 21:30 aerlinss wrote: The AI enters your base with a scout.
From what it sees, the number of Drones and the amount of minerals mined from every single patch, it will know exactly which of all the openings ever played are still possible and which are not. It will know whether there is an SCV/Probe out on the map building a proxy or not.
It will see your gas mined, your units and buildings built, and will instantly know the earliest possible time a cloaked Banshee or Dark Templar can arrive at its own base.
It will detect certain openings and know exactly how many Zerglings it has to build to stop a push; it will not build a single Zergling too many.
You don't even need a really smart AI; all you need is enough data and a program able to read this data fast enough.
The more data you have, the less smart the AI needs to be. It doesn't need to understand things, it just needs to copy things that worked out in the past, and since the AI will have perfect micro and macro, this will be more than enough.
There are far too many ways to play SC2 to make the AI account for everything, even for a supercomputer. One of the most difficult, and important, things to program would be teaching it not to get lost in useless calculations, like the different ways of moving the starting workers randomly around the map or every possible location of a proxy.
The first step would probably be some kind of timing attack, made to exploit its advantages over humans and to be as safe as possible against cheese. The AI should love stimmed Marines, Medivacs, Blink Stalkers and Mutalisks. I can't imagine ling/bane with perfect superhuman micro, but that sounds scary as well.
To add to this, AlphaGo was already very strong even without calculation. Using only its facility for pattern recognition it already outpaced most players. I can't recall the exact ratings.
I think SC2 is the sort of game where this kind of ponderous deliberation, where you peer into the forest of future possibilities to navigate a winning path, is almost totally useless. The AI only needs to make okay-ish decisions based on general principles to win. After all, progamers after the opening tend to play very reactively too; they base their decisions on what seems instinctively correct. And it is very hard for them to make decisive mistakes, because they can always retreat or build another base and so on. That is why the advice given to aspiring youngsters is not to improve their decision-making, but to develop good fundamentals.
That said, I recall an experiment I did last year when I was playing a lot of WC3, where I tried to actually think about what I was doing and use strategic tools, instead of playing mindlessly as usual. For instance, I would invest in hard-counter units that weren't part of the meta, or I would scout more than usual and try obnoxious base-sniping strats and such. And my win rate went up! Still...
lol, if you ever tried playing lower leagues you would know that build orders of non-pros don't make sense. That's because they make strategic and macro mistakes that can lead even a human to misread the situation. Same here: if you screw up your timings/BO, you essentially feed false data into the AI and it will produce an incorrect analysis.
On July 13 2017 01:23 BronzeKnee wrote: It is going to be a long time before AI conquers the human mind. I don't think I will be alive to see it, and even the youngest among us probably won't.
But many of you assume the AI will have an unfair advantage without actually realizing that it is an unfair advantage.
On July 12 2017 19:34 Grumbels wrote: While it seems very difficult to create an AI capable of outthinking humans in SC2, I don't think it will take them 10 years to beat humans. If you have perfect micro there must be an infinite number of one-trick build orders that are unstoppable by conventional means.
Remember SC2 is played with a mouse, keyboard, speakers and a monitor. The AI should have to play with those too. Or the mind shouldn't have to.
I have no idea what you're trying to imply, but the AI won't use anything physical; it will have access to a virtual version of those.
And he implies that that is an advantage. The AI can basically control the game "with its mind", while for a human, there are a bunch of hoops to jump through.
I actually hope that we will eventually get to control games "with our minds". UI design is basically just that: trying to make controlling games feel more natural and not require extra thought. For me this is often one of the problems with games, especially strategy games that require a lot of different inputs quickly. I know what needs to be done, but I just can't get that information from my mind to the units on screen quickly enough. Of course, more practice would make this easier, but there is a big hurdle between my mind and the game, and an AI does not have this problem.
In the grand scheme of all of this, the advantage gained will still be very negligible. Besides, the AI still has to work with the same UI, and pros are good enough that they don't have to think about an action in order to execute it.
This is so interesting. All this AI evolution really gets me excited and somewhat scared. But as someone already said before, the AI Libratus already beat top heads-up poker players, and by quite a margin. And this is relevant because it's a game with incomplete information, like StarCraft. So adding the real-time factor, micro, APM and all of that, I just don't see how humans could beat this AI once it's ready to play. Regardless, I will be waiting eagerly to watch these matches.
On August 06 2017 17:39 Modesty00 wrote: Of course the AI will be better in terms of macro and micro. Mechanical tasks are easy for a machine; humans don't want to compete in that area, and they don't have to. Once the game reaches the late middle game and the position is equal, the machine will take over easily. The thing is, can the machine outsmart and predict the human in strategy, build order and early game plan? Can a machine make a massive drop while setting a trap on the other side of the map? Can a machine change its game plan and readjust according to new information it receives about the opposition? Can it hide its intentions? Cancel a building? Make a pylon in a secret area? Those and more questions of this type are important. AI did this many years ago with chess. It not only showed perfect tactical vision, traps, sacrifices, incredible defense and deep calculation predicting the future (the easy part), but it was able to spot deep strategic ideas and plans. And I think this will eventually happen here too. The brute force of the CPU is already enough; we don't need a faster CPU. SC is limited in terms of possibilities. Chess and Go have an essentially infinite number of positions and still the machine won, not by seeing everything, because it can't (no CPU is fast enough, even all supercomputers put together). It will do it with StarCraft too, though it might take longer because it's a completely different type of game with incomplete information, but it is definitely possible. It will be exponential growth. For example, after 3 months of practice the AI will play like crap. 1 year later, still bad. 3 years later, still making silly mistakes and playing worse than a C- Brood War player. But 2 weeks more and it is beyond superhuman level out of nowhere. Just like they expected AlphaGo to beat the best Go player around 2025, but it happened in 2016 - a huge surprise. So we just have to wait and it will happen.
For AI to be as smart as a bug - a lot of years; as smart as a dog - many, many years; to reach chimpanzee level - super long. To go from chimpanzee to many times smarter than all human brains put together - just a year. The technological singularity is close; that does not mean it will happen, but it is getting closer.
It's like building a puzzle with 100 pieces. It's extremely hard to make sense of which piece goes with which at the beginning, or even in the middle. So one by one, by testing, you link them together. At the end you have just 5 pieces left. So much work and so much time has passed, and still nothing - it's incomplete. Then BUM, the last 5 pieces are easier than the first 80, and from nothing to the complete picture is a way shorter time.
I think that's a good point. If you look at the encephalization quotient (roughly the brain-to-body mass ratio), then humans outpace the other great apes by only a factor of three. Assuming that the main difference between humans and the others is brain size, which translates to computing power, that means that if you are capable of simulating a chimpanzee, then simulating human intelligence is only a question of linking in another supercomputer.
This is obviously a rather simplified way of looking at it, since brain differences between mammals are qualitative as well as quantitative, but it is useful. If you compare StarCraft players at the top level, the differences are primarily quantitative: all players are capable of doing everything, but some are better at it. The leap from awful to godlike execution is fairly trivial for an AI; the more difficult question is whether you can bring an AI to care about scouting or micro or build orders at all. Once it is capable of competing with a human on any level, it's only a few more months of practice or another hardware upgrade before it vastly outpaces any human.
The idea of controlling games "with our minds" sounds fun for FPS games. Everyone would always have perfect headshots, because that's how they envisioned it mentally.
I envision this boiling down to "whoever's brain can process information the fastest wins", because you would never need to be more accurate or precise than anyone else; you would simply need to process faster than anyone else. I doubt that would be very fun, especially since the less overall brain power someone has, the larger the percentage of it they would need to spend on movement "controls" (walking, crouching, etc.) rather than on processing headshots.
If the "mind game" were built so that you still had to mentally move the gun through the virtual space, keeping accuracy and precision necessary, it could be more fun. But in that case I imagine a game where your mind is placed in a full virtual body and you control the soldier's actions with your mind, which is very different from current controller or keyboard-and-mouse FPS.
To the "that is not AI" objection: the OP is talking about the future, and so am I; I know AI isn't like that right now.
The question is what AI is in the end. Is it a program that you just tell "learn SC2" and then it automatically does it? That would be REAL AI.
Or are we talking about some sort of superbot that combines perfect micro with AI-like decision making built on database requests against mega-databases with preset information about every single element in the game? Because as far as I understand, that is what the AI playing Go is doing.
If you break the game down into tiny little parts and steps, every decision, every action is just a small thinking process considering a small number of factors.
The more you break this process down into tiny little parts, the better it gets. Instead of the normal AI just building a missile turret in TvT, the AI now starts a process comparing thousands of games with a similar opening:
What's the earliest possible time a missile turret was needed? You'll obviously have to break this one down into further requests, like: search all TvT games you have on this map for the earliest hit from a flying unit on a target within range x of the spawning position.
Then you also start to take the scouting information into consideration: oh, no gas first, so remove all games with gas-first builds from the database. Oh, CC first, remove all games that are not CC-first, and so on.
Those are requests a human has to categorize by hand. But the more requests like that are added, the stronger the computer gets. And after that, the more data is available, the stronger it gets.
You can add endless amounts of little improvements; you can also add requests like "do I have to take a risk, or should I play safe because I'm ahead?" into the request for every single action.
Combine that with something like a "perfect marine micro against banelings" AI and it will be unbeatable after a few weeks or months of development, depending on the amount of effort and money someone is willing to put into it.
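A toy version of what one of those requests could look like, assuming someone has already parsed a pile of replays into simple records. Every field name here is made up for illustration; this is not a real replay-parser API.

```python
# Toy "earliest missile turret needed" request over a pre-parsed replay list.
# `replays` is assumed to be a list of dicts with invented field names.

def earliest_air_threat(replays, map_name, matchup="TvT",
                        opponent_opening=None, min_league="Master"):
    """Earliest time (seconds) a flying unit hit a target near the defender's
    main, across stored games that match the scouted situation."""
    leagues = ["Bronze", "Silver", "Gold", "Platinum", "Diamond", "Master", "GM"]
    min_rank = leagues.index(min_league)

    times = []
    for game in replays:
        if game["map"] != map_name or game["matchup"] != matchup:
            continue
        # Weight the data: ignore games below the requested league entirely.
        if leagues.index(game["league"]) < min_rank:
            continue
        # Fold in scouting: drop games whose opening has been ruled out.
        if opponent_opening and game["opening"] != opponent_opening:
            continue
        if game["first_air_hit_time"] is not None:
            times.append(game["first_air_hit_time"])

    return min(times) if times else None

# The turret then gets started just before that time instead of on a fixed,
# hard-coded timing, e.g.:
# deadline = earliest_air_threat(replays, "Odyssey", opponent_opening="gas_first")
```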
lol, if you ever tried playing lower leagues you would know that build orders of non-pros don't make sense. That's because they make strategic and macro mistakes that can lead even a human to misread the situation. Same here: if you screw up your timings/BO, you essentially feed false data into the AI and it will produce an incorrect analysis.
Oh come on. I can't describe EVERY single step, but obviously you would give the data from pros way more value than the data from Bronze league players in your database. If the AI plays against Master league, the database should only use data from high Masters and GMs, for example. That's easy to solve.
How on earth is the AI supposed to be limited in terms of handling "seeing" burrowed units?
The current AI is not allowed to see burrowed units, whereas humans currently can. I disagree with this, but is there an alternative?
There are many factors in the function of SCII that actually depend on the other competitor having imperfect knowledge compared to what he is actually "able" to have.
For example, in a typical scout you may see some building going up, click on a geyser to check how much gas remains, and so on.
But an AI?
Example AI scout goals: click on every mineral patch and both vespene geysers and deduce how much has been mined. Count probes everywhere. If anything is off, determine the farthest a missing probe could have got on the map. Count pylons. Check the HP of a warping-in building to determine when it was started compared to normal (sketched after this list).
etc, etc., etc.
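The "check the HP of a warping-in building" deduction can be made concrete. A minimal sketch, assuming construction HP grows roughly linearly from some starting fraction to full HP; the fraction and build times are placeholders, not verified game data.

```python
# Estimate when a building under construction was started from its current HP.
# Assumes HP grows linearly during construction from `start_fraction` of max HP;
# the default value is a placeholder, not verified game data.

def estimated_start_time(current_hp, max_hp, build_time, now, start_fraction=0.1):
    progress = (current_hp / max_hp - start_fraction) / (1.0 - start_fraction)
    progress = min(max(progress, 0.0), 1.0)   # clamp measurement noise
    return now - progress * build_time

# A building seen at 40% HP tells you roughly how long ago it was placed,
# and therefore whether the opponent's opening is on schedule.
```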
The list could go on in many other areas, but basically this kind of information SHOULD make it strategically impossible to trick the AI. Just imagine the perfect information available to a Zerg AI with enough pneumatized-carapace overseers. And imagine how many buildings would get covered in slime and stop functioning whenever you looked away.
Therefore, ultimately the developers will have to decide how the AI is limited. I don't think APM is a good restriction. Maybe add some sort of "virtual mouse" physics: a virtual mouse that mimics the cursor control of a real mouse and keyboard, giving a delay between the desired item (info, action, etc.) and the result, which is very much the human element of the game that keeps it fun.
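One way to sketch that virtual mouse is to charge the AI a Fitts's-law time cost per click, so far-away or small targets take longer to hit; the constants here are placeholders that would need tuning against real human cursor data.

```python
import math

def click_delay(distance_px, target_width_px, a=0.05, b=0.12):
    """Fitts's law: movement time grows with distance and shrinks with target
    size. Returns an artificial delay in seconds to apply before the click
    registers; a and b are untuned placeholder constants."""
    index_of_difficulty = math.log2(distance_px / target_width_px + 1)
    return a + b * index_of_difficulty

# A click across the screen onto a single zergling then costs noticeably more
# than a click on a nearby hatchery, which is roughly the human element the
# post above is asking for.
```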
Here's an example of where the DeepMind will improve on the current Overmind: playing right now against the Hard AI, its units never avoid reaper grenades. Against the Elite AI, they avoid grenades every single time. Either way, "every time" is just a scripted reaction; you need a mix. That kind of thing is what an AlphaSCII could handle.
On the point that the advantage is negligible: in terms of what the brain is capable of, it's actually not. Being able to coordinate all your muscles to play SC2 at 300+ APM is incredibly complicated. In terms of conscious, active decision making, yes, to an extent we can make SC2 mechanics muscle memory, but having to go through eyes and hands still adds a lot of overhead.
There was one video of rat neurons connected directly to a computer, playing a flight simulator in the hardest scenario, and it flew through without crashing. That was something like 20k neurons, I think. That's an absolutely tiny number, and they could do a task that takes us humans a lot of training. The earlier post about the AI being able to record everything about the game state is what I think will really put the AI above humans: it could essentially have near-perfect reads in most scenarios.
edit: and we do all have perfect headshots in a sense, our eyes can lock on to something really quickly :D
That rat-neuron experiment is amazing; how did I not know about it before? Everyone needs to know about this.
Also, it opens up two very interesting questions: A) what could you do if you actually linked a human brain up to a computer, and B) what kind of amazing computer could we build out of rat neurons?
Both are amazing and cool and incredibly sci-fi. We shouldn't try to bring the computer down to human levels artificially; we should hook the humans up directly, and then play whatever amazing game is still challenging in that state.
On August 08 2017 20:57 aerlinss wrote: Oh come on. I can't describe EVERY single step, but obviously you would give the data from pros way more value than the data from Bronze league players in your database. If the AI plays against Master league, the database should only use data from high Masters and GMs, for example. That's easy to solve.
Exactly, it will learn from pro data. But in the game itself, I would be interested to know how it will handle deviations from the norm caused by human mistakes or fake actions.
Machine learning works by grouping events and making cross-links between them, and the AI uses that in its decision making. When we watch pros play, we sometimes see them make decisions based on mind games and not necessarily on what would be the best strategy in that particular game. I wonder how the AI is going to process that.
What you wrote indicates that you are familiar with the neural-network pop literature but do not understand how the system actually works. I'm not going to explain it to you either, but I will explain one of the first implementations with huge success. Take 1,000,000 passport photos. Make an RGB map at a finite pixelation spatial frequency, and then rank by probability amplitude to select a ~35-state eigenbasis. Then you can proceed to decompose every one of 7 billion people's photographs into a 35-unit "vector" (a few bytes) plus a kilobyte-sized analysis of the main features of the remaining error. With this, you can reproduce the photo from scratch. The deal is, if your algorithm finds by analysis of the error that 35 bases don't cut it, it just adds the next most likely thing to the basis. It won't "adjust" the previous bases based on new (in your case, mis-) information. Instead, this type of procedure is used to "learn".
Oh, I forgot, you don't need passport photos. Just resolve the 3D Fourier transform from two cameras to standardize the positions of eyes, mouth, etc. holographically. You can do it live for facial recognition with two webcams and a machine with ~4 GB of VRAM.
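For anyone curious, here is a minimal sketch of that eigenface-style decomposition using plain PCA via SVD. The basis size of 35 is taken from the post above; everything else (shapes, variable names) is just for illustration.

```python
import numpy as np

def build_eigenbasis(photos, n_components=35):
    """photos: array of shape (n_images, height * width), flattened grayscale.
    Returns the mean image and the top principal components ('eigenfaces')."""
    mean = photos.mean(axis=0)
    centered = photos - mean
    # Rows of vt are the principal directions, ordered by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def encode(photo, mean, basis):
    """Compress one photo into n_components coefficients (the 'vector')."""
    return basis @ (photo - mean)

def decode(coeffs, mean, basis):
    """Reconstruct an approximation of the photo from the coefficients."""
    return mean + basis.T @ coeffs

# Whatever the 35 coefficients cannot capture ends up in the reconstruction
# error; the post's point is that the method grows the basis to chase that
# error rather than re-fitting the components it already has.
```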