|
This should be interesting. I hope if it gets to the point of playing against a person, it has limited APM. Everyone knows a computer can micro better than a person; the question is whether it can outsmart one with a clever strategy or smarter execution.
|
On January 24 2019 02:18 Rodya wrote: Is there something about neural nets that makes this interesting? I mean won't we just see insane tank dropship abuse?

afaik there is no information available on what type of progress they have been making or what the tendencies of the AI will be. I scanned their website last year to see if it had any papers or releases which obviously involved SC2 (I don't know that much about AI though) and couldn't find any. We will have to wait and see, but probably it won't be able to play a game competently. Recall that in the demo they revealed a year ago, their AI couldn't even build units or move them around the map; it would click on the minimap to an empty location and then get stuck trying to get back.
edit: apparently more information has come out; supposedly it can beat the SC2 Insane AI 50% of the time
https://www.pcgamer.com/blizzard-will-show-off-googles-deepmind-ai-in-starcraft-2-later-this-week/
|
On January 24 2019 02:31 zealotstim wrote: This should be interesting. I hope if it gets to the point of playing against a person, it has limited APM. Everyone knows a computer can micro better than a person; the question is whether it can outsmart one with a clever strategy or smarter execution. They released a paper which explains that they have limited the AI to 180 APM. Note this is still superhuman, because an AI can potentially be very efficient in its actions.
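(As an aside, here is what such a cap could look like mechanically: a sliding one-minute window of action timestamps. This is a minimal Python sketch; the ApmLimiter name and structure are invented for illustration, not taken from the paper.)

```python
import time
from collections import deque

class ApmLimiter:
    """Illustrative sketch of a 180 APM cap over a sliding 60 s window."""

    def __init__(self, max_apm=180, window_seconds=60.0):
        self.max_actions = max_apm
        self.window = window_seconds
        self.timestamps = deque()

    def try_act(self, now=None):
        """Return True if the agent may act now, recording the action."""
        now = time.monotonic() if now is None else now
        # Forget actions that have slid out of the one-minute window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False  # over budget; the agent must wait
```

Note that even under a cap like this, an agent can hoard its budget and spend it in a perfectly timed burst during a fight, which is one reason a flat APM number can still be effectively superhuman.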
|
On January 23 2019 02:21 travis wrote: What they have done is isolate "mini games" from within sc2 - tasks like mineral mining or base organization. So it may be the case that they are now performing well on such minigames and want to show that off and talk about how they've done it. Was about to PM you to tell you you might want to check that out, but I see you're way ahead of me!
|
the problem they had in the beginning was that it was hard to set up a proper learning environment. biggest problem was building placement imho. but remember, once they have the environment the AI will learn by playing 1000 or more games at once. it will overcome humans with ease then. even if it learns only very little with every game, the pure mass will do it.
sadly most people think this game would be too hard to learn for an ai, but it will destroy every progamer in the end.
|
Haha, you all think it's all work in progress, but on Thursday at 6:00pm GMT we will see the Terminator arm-cutting scene, but with Serral.
|
Let us see what they've got. Deepmind versus the built-in Cheater AI? Or some entertaining stuff like Artosis versus the AI, with Rotterdam casting?
An announcement that a Deepmind version gets implemented in the SC2 client, so one can train against that AI and then let differently trained AIs play against each other?
Or replay analysis of human 1v1 by the AI? Perhaps not today.
Still, the possibilities are endless ... if they have an AI able to beat a human. What if Blizzard creates a GSL seed for the AI?
|
I think we will be amazed.
|
On January 24 2019 05:18 [F_]aths wrote: Still, the possibilities are endless ... if they have an AI able to beat a human. What if Blizzard creates a GSL seed for the AI? Fire Pro Wrestling had an entire scene devoted to AI versus AI action: AIs handcrafted by humans fighting each other. If Blizzard could create a competitive scene out of various computer science teams building their own AI bots... that'd be incredible.
|
On January 24 2019 01:26 Ronski wrote:

On January 24 2019 00:57 neutralrobot wrote:

On January 23 2019 11:53 KalWarkov wrote:

On January 23 2019 11:16 neutralrobot wrote:

On January 23 2019 10:53 ZigguratOfUr wrote:

On January 23 2019 06:17 Ronski wrote:

On January 23 2019 05:14 ZigguratOfUr wrote: I hope the deepmind team is more open about what they produce. Show-matches are all very well, but giving players the opportunity to out-mindgame the AI afterwards would be interesting. AlphaZero was somewhat disappointing in the sense that no one really has a good sense of exactly how good it is at Shogi or Chess.

Didn't they make it pretty clear that it's the best chess engine there is atm? Beating the strongest engine at chess means that no human player could ever hope to beat it, so at least when it comes to chess I would say it's clear that AlphaZero is the best there is.

I mean, probably? But even when their paper was eventually released, it's still just a bunch of games against an old version of Stockfish in circumstances completely controlled, set up, and chosen to be favourable by the Deepmind team. The newest version of Stockfish can also beat the older version of Stockfish by about the same margin. But arguing about who is the best isn't too meaningful in the first place (it isn't of any importance whether AlphaZero is the best or the second best); the important thing is the machine learning research. And with Deepmind controlling everything about their research, there's no room for other people to investigate things like whether AlphaZero with its current training would also be able to play Chess960, or adapt to starting with a piece handicap, and so on. It would be very disappointing if AlphaStarcraft came out, crushed Serral, Maru and Stats in showmatches, and got shelved never to see the light of day again, leaving people to wonder how AlphaStarcraft would react to (for example) playing on an island map, or how it would defend a cannon rush.

Well, actually... they recently played more games vs Stockfish in better conditions and AlphaZero comprehensively destroyed Stockfish. Also, they released the algorithm, which might not be as open as releasing the code or the trained network, but it did mean that the algorithm was implemented in a more open manner in the Leela Chess Zero project, which is now pretty competitive with Stockfish and playing interesting games against it in the TCEC. (https://www.youtube.com/watch?v=UPkcAS2B60s) This is the generalized AlphaZero algorithm; it can be applied to a variety of games. So if they follow that pattern, maybe with Starcraft they'll shelve their code but release the research, which means it can be replicated. Guess we'll see! Keen to see what they've come up with. You'd think it must be a big leap. Bear in mind that once they had the right algorithm, they could train AlphaZero in a matter of hours and get it to a point where it's the best in the world by a mile. They have an incredible ability to test and implement learning algorithms quickly; part of what gives them such an edge is their TPU hardware. So once there's been a breakthrough, it could go from "how do we do this?" to "HOLY SHIT!" in a very short timeframe.

until alpha zero beats stockfish in TCEC finals, i will never call alpha zero the strongest engine. everything is controlled by google. no tablebase, no opening books - which sf isn't trained for. and still, it isn't live games vs sf11dev. and who knows if they released all games or are just cherry-picking?

Well, I mean, it's always possible that they're presenting some kind of falsehood about the recent 100-game match vs Stockfish where AlphaZero took no losses, but... why? Why would they flatly lie about the results of that match? Honestly, I don't think they even care much about proving themselves in the domain of chess; it was just part of a proof of concept about generalizing the AlphaGo algorithm to be applicable to other games. What do they gain by lying about this? Like, if you want to say that there should be a public tournament with different conditions before it's definitive, I can respect that, but the cherry-picking idea seems pretty far-fetched to me, particularly considering the growth of Leela this year.

On January 23 2019 17:40 Grumbels wrote: AlphaZero becoming the strongest engine in a matter of hours is a bit deceiving, given that it still required fifty million games of practice, computing a new version of the network every 25k games. It was estimated to take months for the Leela project (an open-source imitation of AZ), which is distributed across hundreds of computers. Google just has really powerful hardware.
There are some interesting quirks with Leela. For instance, it's not capable of playing endgames efficiently: it moves around seemingly aimlessly, making moves that don't lose the advantage, but it doesn't "get to the point". If an SC2 AI is built on the same concept, expect it to be unable to finish off games quickly, instead taking an hour to mine out the entire map and build a fleet of random units to move randomly around the map. Another quirk of the project is that the algorithm uses not just the current position as input, but also the history of moves. This gives it some measure of which part of the board to pay "attention" to. It also means that if you give it a random position as input, without history, it can't function. As far as I know, Leela is useless at solving tactical puzzles and in handicap games without training it first.

Leela also typically doesn't understand endgame theory. It doesn't just play endgames weirdly; it also doesn't grasp some almost mathematical ideas, such as identifying a class of endgames that are drawn despite material imbalances (opposite-colored bishops, wrong-colored bishop). It's apparently also no better at fortress positions, where you have a material disadvantage but your position can't be cracked. There are some known positions like these, and it was hoped that neural networks would be better at them and would be capable of reasoning that these are a special class of positions requiring a different approach. But it doesn't really seem like it.
Leela is also probably already better than Stockfish if you have bad hardware and no opening book. You can imagine that if there were a market for SC2 bots, they could have opening books updated for every patch, with a team of people dedicated to keeping track of the meta and adding knowledge of it to the bot. But Deepmind's AI would use self-learning, i.e. only playing itself and developing its own meta. I don't know if that would make it easier or harder to beat as a human. I think the tree-search method for chess is bound to scale better with hardware than a neural network approach, given that chess is theoretically solvable with tree search. But this method would be useless for SC2, unless the AI uses some sort of abstraction of strategy and tries to think ahead. Then again, I don't think you really need to think ahead in SC2 to get decent results: if you just react to your opponent and have perfect, bot-like control, you will win.

Yeah, there are some quirks about Leela's play like the ones you mentioned. It's kinda hilarious watching Leela take forever to mate with king and queen vs king, for example. But in most contexts, when both engines agree that the game is completely decided, they call it. Maybe Fantasy would make a new AI play on for 2+ hours in a totally lost position, but hopefully there would be a gg called before then in most cases. The talk of openings and the translation to SC2 is interesting to think about. AlphaZero seemed to keep going back to a relatively small handful of openings (I seem to remember it kept using the Berlin Defence?) when left to its own devices, as opposed to starting from a book position. But SC2 openings seem like they have to account for a lot more variables. Would a deep RL algorithm for SC2 play differently when optimizing for a series vs a single map? Would it develop opening strategies that are more or less watertight no matter the context? And would it show some of AlphaZero/Leela's brilliance at understanding positional compensation and imbalanced material? I guess we might find out about all this stuff soon.

In the latest match, where Stockfish and AlphaZero played 1000 games, Stockfish was using its opening books and did manage to win a decent number of games with the white pieces. AlphaZero still won the match overall, but Stockfish did take games at a somewhat consistent rate.
Yeah, true. It seems there were 12 matches of 100 games each = 1200 games total. AlphaZero won 290 and lost 24. Of those losses, I remember watching a few games that were started with book openings that seem to have been disadvantageous.
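(A side note on the history-as-input quirk described above: a rough sketch, in Python/NumPy, of what stacking past positions into the network input looks like. The constants are loosely modelled on the published AlphaZero chess encoding but are illustrative, not exact.)

```python
import numpy as np

HISTORY = 8          # AlphaZero-style nets stack the last 8 positions
PLANES_PER_POS = 14  # e.g. 12 piece planes + 2 repetition planes (illustrative)

def encode_input(position_history):
    """Stack the most recent HISTORY positions into one input tensor.

    `position_history` is a list of (8, 8, PLANES_PER_POS) float arrays,
    oldest first. When fewer than HISTORY positions exist, the front is
    zero-padded -- which is why a bare, history-less puzzle position
    looks like nothing the net ever saw in training.
    """
    recent = list(position_history)[-HISTORY:]
    pad = HISTORY - len(recent)
    planes = [np.zeros((8, 8, PLANES_PER_POS), dtype=np.float32)] * pad + recent
    return np.concatenate(planes, axis=-1)  # shape: (8, 8, HISTORY * PLANES_PER_POS)

# Usage: a lone 'puzzle' position with no history is mostly zero padding.
lone_position = [np.random.rand(8, 8, PLANES_PER_POS).astype(np.float32)]
print(encode_input(lone_position).shape)  # (8, 8, 112)
```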
|
I am hunnert percent “working from home” tomorrow.
|
On January 24 2019 12:03 UncleVinny wrote: I am hunnert percent “working from home” tomorrow.
Twitch.tv on personal laptop, conference webinar on work laptop.
|
On January 24 2019 03:22 ScarPe wrote: the problem they had in the beginning was that it was hard to set up a proper learning environment. biggest problem was building placement imho. but remember, once they have the environment the AI will learn by playing 1000 or more games at once. it will overcome humans with ease then. even if it learns only very little with every game, the pure mass will do it.
sadly most people think this game would be too hard to learn for an ai, but it will destroy every progamer in the end.

Yes, but there are a lot of intricacies in high-level StarCraft. Brute force and micro won't win, and AI can't do cleverness or trickery.
Even chess, which has many orders of magnitude fewer possible moves than SC2, isn't really viable to brute-force. I think you're giving too much credit to current neural nets.
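(A quick back-of-the-envelope on why brute force dies fast even in chess: with an average branching factor b, a game tree has roughly b**depth leaves. The b ≈ 35 figure is a commonly quoted rough estimate for chess; SC2's action space is far too large to summarize with a single such number, so none is attempted here.)

```python
# Leaves of a game tree with chess's oft-quoted branching factor of ~35.
BRANCHING = 35
for depth in (2, 4, 6, 8, 10):
    print(f"depth {depth:2d}: ~{BRANCHING ** depth:.2e} positions")
```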
|
On January 24 2019 14:57 Parrek wrote:

On January 24 2019 03:22 ScarPe wrote: the problem they had in the beginning was that it was hard to set up a proper learning environment. biggest problem was building placement imho. but remember, once they have the environment the AI will learn by playing 1000 or more games at once. it will overcome humans with ease then. even if it learns only very little with every game, the pure mass will do it.
sadly most people think this game would be too hard to learn for an ai, but it will destroy every progamer in the end.

Yes, but there are a lot of intricacies in high-level StarCraft. Brute force and micro won't win, and AI can't do cleverness or trickery. Even chess, which has many orders of magnitude fewer possible moves than SC2, isn't really viable to brute-force. I think you're giving too much credit to current neural nets.
Playing thousands of games really quickly and learning isn't the same as winning a game with brute force. AlphaGo didn't just brute-force calculate every possible Go move; it used strategies.
But it learned those strategies by playing numerous games against itself and other versions of itself.
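(For the curious, here is the recipe being described, stripped to a skeleton. This is a toy Python sketch of the general AlphaZero-style self-play loop; every function in it is a stub invented for illustration, not DeepMind's actual code.)

```python
import random

def play_one_game(net):
    """Stub: self-play one game and return (state, search_policy, outcome)
    training samples. A real version would run tree search guided by the
    network at every move; here we return a single placeholder sample."""
    return [("initial_state", {"some_move": 1.0}, random.choice([-1, 0, 1]))]

def train_on(net, samples):
    """Stub: one training update. Really: gradient steps pushing the net's
    policy toward the search policy and its value toward the game outcome."""
    return net

def self_play_training(net, iterations=3, games_per_iter=10):
    # The loop the posts describe: the current net plays itself in bulk,
    # the finished games become training data, and a new net is fit to them.
    for _ in range(iterations):
        games = [play_one_game(net) for _ in range(games_per_iter)]
        samples = [sample for game in games for sample in game]
        random.shuffle(samples)
        net = train_on(net, samples)
    return net

self_play_training(net=None)
```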
|
On January 24 2019 14:57 Parrek wrote:

On January 24 2019 03:22 ScarPe wrote: the problem they had in the beginning was that it was hard to set up a proper learning environment. biggest problem was building placement imho. but remember, once they have the environment the AI will learn by playing 1000 or more games at once. it will overcome humans with ease then. even if it learns only very little with every game, the pure mass will do it.
sadly most people think this game would be too hard to learn for an ai, but it will destroy every progamer in the end.

Yes, but there are a lot of intricacies in high-level StarCraft. Brute force and micro won't win, and AI can't do cleverness or trickery. Even chess, which has many orders of magnitude fewer possible moves than SC2, isn't really viable to brute-force. I think you're giving too much credit to current neural nets.

Neural networks don't brute-force things. The only question is how good the learning was.
|
On January 24 2019 16:50 deacon.frost wrote:

On January 24 2019 14:57 Parrek wrote:

On January 24 2019 03:22 ScarPe wrote: the problem they had in the beginning was that it was hard to set up a proper learning environment. biggest problem was building placement imho. but remember, once they have the environment the AI will learn by playing 1000 or more games at once. it will overcome humans with ease then. even if it learns only very little with every game, the pure mass will do it.
sadly most people think this game would be too hard to learn for an ai, but it will destroy every progamer in the end.

Yes, but there are a lot of intricacies in high-level StarCraft. Brute force and micro won't win, and AI can't do cleverness or trickery. Even chess, which has many orders of magnitude fewer possible moves than SC2, isn't really viable to brute-force. I think you're giving too much credit to current neural nets.

Neural networks don't brute-force things. The only question is how good the learning was.

Mind tricks work in StarCraft because the game is only partially observable. That is a very important difference, and it remains to be seen how well Deepmind handles partially observable games.
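(The partial-observability point in one toy picture: the agent's observation is the true game state masked by whatever it currently has vision of. Python sketch; all shapes and names are invented for the example.)

```python
import numpy as np

def fogged_observation(global_state, vision_mask):
    """Return only the cells the agent can see; everything else reads as 0.
    This information gap is what makes bluffs and mind games possible:
    the opponent must guess at what the fog hides."""
    return np.where(vision_mask, global_state, 0)

state = np.arange(16).reshape(4, 4)      # pretend full game state
vision = np.zeros((4, 4), dtype=bool)
vision[:2, :2] = True                    # the agent sees only one corner
print(fogged_observation(state, vision))
```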
|
So if Deepmind's AI is being fed ladder games, does that mean the ladder games you play are Blizzard's property? Does anyone know if there is an opt-in/opt-out, or a direct clause in the ToU or something?
|
On January 24 2019 02:34 Grumbels wrote:

On January 24 2019 02:18 Rodya wrote: Is there something about neural nets that makes this interesting? I mean won't we just see insane tank dropship abuse?

afaik there is no information available on what type of progress they have been making or what the tendencies of the AI will be. I scanned their website last year to see if it had any papers or releases which obviously involved SC2 (I don't know that much about AI though) and couldn't find any. We will have to wait and see, but probably it won't be able to play a game competently. Recall that in the demo they revealed a year ago, their AI couldn't even build units or move them around the map; it would click on the minimap to an empty location and then get stuck trying to get back. edit: apparently more information has come out; supposedly it can beat the SC2 Insane AI 50% of the time https://www.pcgamer.com/blizzard-will-show-off-googles-deepmind-ai-in-starcraft-2-later-this-week/

From the comment section of that article:
Tetsuo: Wow nice to finally see some results, let's see if any A.I can beat Serral ^^
Prrredictable: Negative. The Koreans deployed eight humanoid AIs to Blizzcon last year who were all met with a Serral victory. ggwp
|