|
On March 13 2016 04:08 OtherWorld wrote: I dunno. As literally everyone pointed out, a competent AI would destroy any human if allowed unlimited APM. With restrictions, humans could win, but only until ~2030 methinks. What should the restrictions be?
I thought the following ones would be simple enough:
- no hotkeys for buildings allowed (to stop flawless macro)
- (simulated) mouse and keyboard to control the game, with some restrictions on DPI and AP(M/S)
- a short reaction time added before visual input can be processed
This lets the AI have superhuman control, but not inhuman control, and I think it would do a lot for the legitimacy of the challenge.
An additional handicap might be to simulate errors with mouse control, but I have a feeling that one might be harmful.
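To make that a bit more concrete, here is a rough sketch of what such a handicap layer could look like, assuming a hypothetical bot interface with an act(observation) hook; the 400 APM cap and 200 ms reaction delay are just illustrative numbers, nothing anyone has announced:

```python
import collections
import time

class HandicappedBot:
    """Wraps a bot, delaying what it sees and capping how fast it can act."""
    def __init__(self, bot, max_apm=400, reaction_delay=0.2):
        self.bot = bot                              # must expose act(observation) -> list of actions
        self.max_apm = max_apm                      # rolling cap on actions per minute
        self.reaction_delay = reaction_delay        # seconds before a frame becomes visible
        self.action_times = collections.deque()     # timestamps of recently issued actions
        self.pending_frames = collections.deque()   # (arrival_time, observation) waiting to be "seen"

    def on_frame(self, observation):
        """Called once per game frame; returns the actions allowed through this frame."""
        now = time.monotonic()
        self.pending_frames.append((now, observation))
        allowed = []
        # Release only frames that are at least reaction_delay old.
        while self.pending_frames and now - self.pending_frames[0][0] >= self.reaction_delay:
            _, delayed_obs = self.pending_frames.popleft()
            for action in self.bot.act(delayed_obs):
                if self._apm_budget_left(now):
                    self.action_times.append(now)
                    allowed.append(action)
                # else: drop the action, like a human who can't physically keep up
        return allowed

    def _apm_budget_left(self, now):
        # Forget actions older than one minute, then check the rolling budget.
        while self.action_times and now - self.action_times[0] > 60.0:
            self.action_times.popleft()
        return len(self.action_times) < self.max_apm
```

The point is only that the delay and the rolling APM budget sit between the AI and the game, so whatever it computes internally, its effective control stays in a human-like range.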
|
I still think that even allowing cheating (perfect APM/micro speed-wise), it should be easy to mess with the AI by sending workers to hang around in its vision and stuff like that.
By the way, if they can't do anything before 2030 that's a pity xD; they've got so much money to throw around on rather useless things.
|
I wonder if there will be a person helping the AI the moment the player finds a glitch and exploits it, just like with Deep Blue. Still, the way this AI works actually sounds interesting. And as long as they don't program this AI with anti-siege-tank micro and stuff, there should be no need for limitations.
|
On March 13 2016 03:01 Garrl wrote: On March 13 2016 02:50 Musicus wrote: All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited. People thought Go AI beating professional players was a long way off. AFAIK DeepMind's project is a generalized solution that takes only pixel data as an input. It could be far, far closer than you might think.
Yea, before the match began Lee Sedol said he thought he would beat this thing 4-1 or 5-0, but that maybe in a few more years it could surpass him. Now he is fighting his hardest just to take a single game and probably won't even be able to do that.
|
Isn't it absolutely obvious that an AI would win against any human at StarCraft? How delusional can BoxeR be? :/
|
On March 13 2016 04:49 Grumbels wrote: On March 13 2016 04:08 OtherWorld wrote: I dunno. As literally everyone pointed out, a competent AI would destroy any human if allowed unlimited APM. With restrictions, humans could win, but only until ~2030 methinks. What should the restrictions be? I thought the following ones would be simple enough: - no hotkeys for buildings allowed (to stop flawless macro) - (simulated) mouse and keyboard to control the game, with some restrictions on DPI and AP(M/S) - a short reaction time added before visual input can be processed. This lets the AI have superhuman control, but not inhuman control, and I think it would do a lot for the legitimacy of the challenge. An additional handicap might be to simulate errors with mouse control, but I have a feeling that one might be harmful.
I completely agree with this, but forget about the simulating errors part for now imo.
|
On March 13 2016 05:05 Poopi wrote: I still think that even allowing cheating (perfect APM/micro speed-wise), it should be easy to mess with the AI by sending workers to hang around in its vision and stuff like that.
By the way, if they can't do anything before 2030 that's a pity xD; they've got so much money to throw around on rather useless things.
No way; by the time Google considers their AI ready for this challenge, those kinds of obvious exploits will definitely not work anymore.
|
Has would go undefeated in a tournament vs AI.
|
On March 13 2016 03:57 MyLovelyLurker wrote: On March 13 2016 03:53 Oshuy wrote: On March 13 2016 02:56 brickrd wrote: It's not a question of "if," it's a question of when. Maybe not in 5 years, maybe not in 10 years, but nothing is going to stop AI from getting better and becoming able to excel at complex tasks. They said the same thing about chess, the same thing about Go, the same thing about lots of computerized tasks. It's cute that he thinks it's not possible, but there's no reasonable argument outside of "when will it happen". On March 13 2016 02:50 Musicus wrote: All this talk about something that might or might not happen in 5 to 10 years. When the challenge is out and the date is set, I will get excited. Sorry for finding science interesting! The "maybe not in 10 years" sounds hopeful. DeepMind was created in 2010. AlphaGo is 18 months old (as in: the project started 18 months ago). There is a hurdle in designing what to feed to the neural networks and how to represent the output in a game of StarCraft: the spaces of both current state and potential actions are huge; but once those representations are designed, the learning process will either fail or succeed within a few months. The fact that information is incomplete is almost irrelevant for a neural-network feed; those are the types of problems we designed networks for in the first place. Real time and information retention may make things more difficult, but it could get there fast. It's actually not irrelevant in reinforcement learning, as you need to compute a conditional expectation of the state of play with respect to the information you have - and the update of said expectation will change the algorithms by quite a lot. This is being tackled almost as we speak; here is a two-week-old article on the subject - from one of the fathers of AlphaGo - with an application to poker: arxiv.org
Building a dataset for supervised learning from replay databases containing both the incomplete information (one player's view) and the complete information (spectator view) should provide a first estimate of potential convergence for a given game representation.
Self-play reinforcement would be great; agreed, I have no idea how to construct an evaluation function (and I'm quite sure it can't be done on individual actions, which are mostly meaningless in themselves). I'm unsure it would even be necessary at this point (why would going supervised all the way, with a spectator AI, be impossible?).
The interesting part of self-play is that the AI would arrive at the match with its own metagame, which the human player faces for the first time during the match, while the human metagame will have been the base dataset the AI learned from initially.
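To sketch the supervised idea, here is a toy version of training on (player view, spectator view) pairs; the tensor shapes, the random stand-in data and the small convolutional net are all made up for illustration, since nothing here parses actual replays:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

C_IN, C_OUT, H, W = 8, 8, 64, 64   # feature planes and map resolution (assumed)

# Toy stand-in for (player-view, spectator-view) pairs extracted from replays.
player_view = torch.rand(256, C_IN, H, W)      # fogged observation
spectator_view = torch.rand(256, C_OUT, H, W)  # full-information target
loader = DataLoader(TensorDataset(player_view, spectator_view), batch_size=32, shuffle=True)

# Small convolutional net that predicts the hidden state from the fogged view.
model = nn.Sequential(
    nn.Conv2d(C_IN, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, C_OUT, kernel_size=3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()  # treat each cell/plane as an occupancy probability

for epoch in range(3):
    for fogged, full in loader:
        opt.zero_grad()
        loss = loss_fn(model(fogged), full)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Self-play or reinforcement learning could then start from whatever game representation makes a model like this converge.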
|
Bots have already surpassed humans in StarCraft. If you ever saw any of the AI competitions held at the University of California, you'd see bots with superior APM that are able to pull off absurd strategies.
|
How can you even compare a game like Go, where you both have complete knowledge of the game state, with a game like StarCraft, which completely depends on fog of war?
And it's incredibly obvious that they won't let the AI access the game state directly; that's called "cheating". It has to interpret the game through a single screen, and use a cursor and keyboard to select and direct units/buildings. It'll only have 10 control groups, just like humans do. Anything else is just cheating and wouldn't be a testament to the capabilities of deep learning.
edit: then again, if it is lightning fast, these restrictions are most likely useless. What is the limit on command input in Brood War? :D
|
sOs vs AlphaGo for SC2? Sounds like an interesting series. Five Bo7s in five days, maybe? Let us see if AI is really "intelligent".
|
I think the scary part of a perfect AI would be perfect micro. Other than that, in strategic depth etc., I think progamers have the edge with instincts and reactions.
|
StarCraft 2 has ruined the perception of RTS games so much that in a thread about the evolution of AI and its capabilities, all everybody is thinking about is perfect mechanics, bot-level micro and infinite APM. SMH.
|
BoxeR is John Connor, leader of the human resistance.
|
On March 13 2016 03:55 [PkF] Wire wrote: On March 13 2016 03:47 AdrianHealeyy wrote: I think we need to differentiate two things here.
It's probably not that hard to come up with an AI that can have perfect micro. The trick is: can we design an AI with 'human' micro that can still consistently beat humans, based on insight, analysis, response, etc.?
That would be the ultimate challenge. I still think they can do it, but it'll take longer. The problem is how you define human micro (and even human multitasking). A simple limit on the APM wouldn't even be enough I think, since the computer doesn't spam and - more importantly - sees all screens at once. That's why they won't even try: human performance isn't constant, so you can't just cap it or make the AI randomly mis-micro; it's almost comical that they spoke about it before even realizing this obvious paradox.
|
Unlike turn-based board games, where inputting moves is trivial and thus the method can be ignored, playing SC is intrinsically tied to keyboard and mouse control. If the AlphaGo team wants to tackle SC, then they have a significant robotics challenge in front of them that I'm not really sure is going to be worth their time as AI researchers.

It's always a bad idea to bet against technology when technology is allowed unlimited time to develop, but SC presents some very significant increases in difficulty just for the AI, robotics aside. It's far more complex because in addition to a "mirror match" you've got to be able to beat two completely different sets of "game pieces", and there isn't just one simple game board.

And after all that, games can hinge on luck like in a poker game. The human can pick randomly - say, glance at his mineral count and do one extreme if it ends in an even number and another extreme if it ends in an odd number - and there might simply be no solution for both possibilities. Avoiding all such situations seems unlikely. Because of this, it could possibly be a top player if it avoids predictability, but it seems about as likely as a poker AI to consistently win tournaments.

Nonetheless I'm excited to see how it progresses. I wonder if the Korean BW players have a renewed sense of purpose, seeing as how an AI might be entering one of their tournaments someday.
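The coin-flip point is easy to illustrate with a toy example; the two builds below are placeholder names, since only the unpredictability of the choice matters:

```python
import random

def pick_opening(mineral_count: int) -> str:
    # Placeholder extremes; what matters is that the branch taken is unpredictable.
    return "all-in rush" if mineral_count % 2 == 0 else "greedy expand"

# Simulate a few games: the defender cannot prepare for both branches at once.
for _ in range(5):
    minerals = random.randint(0, 1000)
    print(minerals, "->", pick_opening(minerals))
```

Against a genuinely random choice like this, the AI can at best prepare a response that is good on average, not one that beats both branches.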
|
I think the question is whether or not to cap the APM. Of course a bot has the advantage with unlimited APM. I think capping it would make for more interesting results; it then becomes a game of how well the programmers can design an AI that can strategize and predict the opponent's moves.
|
On March 13 2016 05:42 BlysK wrote: I think the scary part of a perfect AI would be perfect micro. Other than that, in strategic depth etc., I think progamers have the edge with instincts and reactions.
If there's anything it isn't better than humans at, then it isn't a perfect AI. But who says Google's will be perfect? It won't be.
|