|
On May 19 2015 08:47 Lobotomist wrote: So, generally, do these AIs execute certain builds, or play adaptively like chess AIs, which execute certain moves based on win percentage rather than a "game plan"? If it's builds, are they mostly rushes? I'd imagine it gets pretty complex the longer the game goes.
Some of the AIs, mine included, are capable of playing adaptively to a certain degree, but I don't think any of them calculate the best possible move like a Chess AI would. The range of possible moves in StarCraft is enormous: there's a much larger number of pieces on the board than in Chess, every piece is capable of moving simultaneously, and it's possible to create new pieces. Then you add in the fact that you have at most 55ms per frame to calculate before being disqualified, and the Chess-style solution isn't really workable.
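For illustration, here's a minimal sketch of how a bot might budget its per-frame work against that 55ms limit. This is Python pseudocode of my own, not real bot code; the task-queue structure and the 10ms safety margin are assumptions, not anything from an actual competition bot:

```python
import time

FRAME_BUDGET_MS = 55  # disqualification threshold mentioned above

def on_frame(tasks, budget_ms=FRAME_BUDGET_MS, safety_margin_ms=10):
    """Run as many queued AI tasks as fit inside the frame budget.

    Cheap, mandatory tasks should be queued first; expensive ones
    (e.g. pathing or build-order search) are deferred once the
    deadline approaches.
    """
    deadline = time.monotonic() + (budget_ms - safety_margin_ms) / 1000.0
    done = []
    for task in tasks:
        if time.monotonic() >= deadline:
            break  # defer remaining work to a later frame
        task()
        done.append(task)
    return done
```

The point is just that any per-frame computation has to be interruptible or cheap enough to fit the budget, which rules out deep game-tree search.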
The focus of my AI last year was to be reactive in terms of unit composition. I based it around David Churchill's StarCraftBuildOrderSearch which was taken from his UAlbertaBot AI. This allows you to input a goal set of units, for example 10 Marines and 1 Siege Tank. It then takes into account the current game state such as your resources, worker count, tech level and current production capacity and calculates a viable build order that would result in the creation of the goal set of units. So if I already have a few barracks and a factory and a healthy worker count, and I put in the previous example goal set of units, the outputted build order would probably just be to train marines and a tank and build enough supply depots to accommodate those units. However if, say, I don't have a factory, then the build order will include the construction of a factory before it attempts to train any tanks.
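As a toy illustration of the idea — this is not Churchill's actual search, which also reasons about resources, timing and supply; the tech tree here is a hand-made stub with made-up names:

```python
# Hypothetical prerequisite table; a real bot reads this from game data.
REQUIRES = {
    "Marine": ["Barracks"],
    "SiegeTank": ["Factory"],
    "Factory": ["Barracks"],
    "Barracks": [],
}

def plan_build_order(goal, owned):
    """Return a build order satisfying `goal`, given already-owned structures.

    goal:  dict unit -> count, e.g. {"Marine": 10, "SiegeTank": 1}
    owned: set of structures already built, e.g. {"Barracks"}
    """
    order, built = [], set(owned)

    def ensure(structure):
        # Recursively build missing prerequisites first.
        if structure in built:
            return
        for dep in REQUIRES.get(structure, []):
            ensure(dep)
        order.append(("build", structure))
        built.add(structure)

    for unit, count in goal.items():
        for dep in REQUIRES.get(unit, []):
            ensure(dep)
        order.extend([("train", unit)] * count)
    return order
```

So with a Barracks already up, the Marines need nothing new, but the tank forces a Factory into the plan first — the same behaviour described above, minus all the economy modelling that makes the real search hard.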
To determine the build order goals to feed into the build order search, my AI would collect information about its opponent. It would store the number and type of all enemy units that it had seen, and translate these into percentages. So if it sees 8 Mutas and 12 Lings, the enemy composition would be 40% Muta, 60% Zergling. It then had a table of enemy unit compositions and compositions that were 'counters' to them. It would search through the table for the most similar enemy composition to the current one, retrieve the 'counter' unit composition, and use that as a build order goal to feed into the build order search. The build order that was outputted by the build order search could then be executed by the lower level parts of the AI system that manage building placement and unit training and stuff.
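That matching step can be sketched like this — Python of my own, with a hypothetical two-entry counter table and a simple overlap similarity; the actual AI's similarity measure and table may well differ:

```python
def composition(unit_counts):
    """Normalise raw counts into fractions, e.g. 8 Mutas + 12 Lings
    -> {"Mutalisk": 0.4, "Zergling": 0.6}."""
    total = sum(unit_counts.values())
    return {u: n / total for u, n in unit_counts.items()}

def similarity(a, b):
    """Overlap between two compositions: 1.0 = identical, 0.0 = disjoint."""
    units = set(a) | set(b)
    return sum(min(a.get(u, 0.0), b.get(u, 0.0)) for u in units)

def best_counter(enemy, counter_table):
    """Find the stored composition most similar to `enemy` and return
    its counter composition, to be used as a build-order goal."""
    best = max(counter_table, key=lambda known: similarity(enemy, dict(known)))
    return counter_table[best]
```

The table keys are frozen composition dicts so they can be stored and looked up; whichever known composition overlaps most with what was scouted wins, and its counter becomes the next goal.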
So what I'm saying is that, in a limited way, my AI was capable of scouting its opponent and reacting to its unit composition. If there were no similar enough unit compositions in the table, it was also capable of slightly adapting a previous one to fit a new situation, and if the new composition was successful in the game, it would store it in the table for future use. I found that last part wasn't particularly effective though; maybe I'll get round to improving it this year.
Since I've switched my AI to Zerg this year, I had problems with the build order search module I was using, so I have replaced it with my own. Mine is pretty crude at the moment, but it's sort of working OK.
A major weakness from last year was that reacting in terms of unit composition is only one of the factors a StarCraft player needs to pay attention to. My AI wasn't capable of comparing its economy to its opponent's, so sometimes it would be sitting on one base trying to concoct the perfect composition while its opponent was macroing on 3 bases. It doesn't really matter what type of unit you build if you only have a third of your opponent's economy. This year I'm trying to make my AI capable of adapting in other meaningful ways.
I think some of the AI systems just follow a scripted build order throughout the game, which can be pretty effective because at least the build order can be optimised by the developer. My AI last year would often come out with the most ridiculous build orders (3 robo observer in reaction to a single lurker).
|
I was watching the stream, are those games live? They seemed to be running at 2x speed, is that correct? I read somewhere that your bot can't take more than 55ms per frame. Wouldn't this then be impacted by the higher framerate?
I think some of the AI systems just follow a scripted build order throughout the game, which can be pretty effective because at least the build order can be optimised by the developer. My AI last year would often come out with the most ridiculous build orders (3 robo observer in reaction to a single lurker).
hahaha brilliant. An obvious solution would be to simply hard-code 'no multiple buildings of kind x'. You mentioned the table for future use. If your bot did enough 'training sessions', wouldn't it figure out that this build sucked? It doesn't really teach it conceptually why it sucks, but at least experience/learning seems like a more elegant solution to me. Actually letting a CPU conceptually understand this game is pretty much the million dollar question, I guess.
Has anyone done any research into using convolutional & recurrent neural nets for tactical decisions (or perhaps even strategy)? I'm by no means an expert on this; I just know they're used for all sorts of computer vision tasks and abstracting patterns. So it doesn't seem too far out to apply the idea to analyzing (enemy) unit formations.
|
On May 19 2015 21:26 Xeofreestyler wrote: Has anyone done any research towards using convolutional & recurrent neural nets in regards to tactical decisions (or perhaps even strategy?)
More than 1 million dollars, my friend, a lot more ^^ An "AI" will never know on its own if a build "sucks" ^^. It requires an input in order to provoke a reaction countering the craziness of the AI.
In my opinion, based on an average maths level for a nerd, I think RNNs will be of limited use, as the number of bifurcations in one game can be huge. However, there are so many types of RNN, so... we need an expert :p
|
Damn, very interesting. Seems like a lot of rushes would be difficult to stop, in particular anything that requires a worker pull, or things that require inference (i.e. the opponent is mining lots of gas but no tech buildings / units are seen). While creating a reactive AI that "plays" is certainly a more interesting challenge, I would think an AI that executes difficult-to-defend rushes would be more successful. Thoughts?
|
On May 19 2015 21:26 Xeofreestyler wrote: hahaha brilliant. An obvious solution would be to simply hard-code 'no multiple buildings of kind x'. You mentioned the table for future use. If your bot did enough 'training sessions', wouldn't it figure out that this build sucked? It doesn't really teach it conceptually why it sucks, but at least experience/learning seems like a more elegant solution to me. Actually letting a CPU conceptually understand this game is pretty much the million dollar question, I guess.
My bot's learning abilities were pretty shitty, so no, it wouldn't have. If it had had success executing a build in the past, then it would've tried it again in the future. It's possible that it could win a game even with a ridiculous build (say its opponent bugs out or something) and then think that this build is OK to use again. Using some kind of learning mechanism would be a really nice feature if done well though (mine was not at all effective, so I removed it from the AI for the competitions).
On May 20 2015 08:35 Lobotomist wrote: Damn, very interesting. Seems like a lot of rushes would be difficult to stop, in particular anything that requires a worker pull, or things that require inference (i.e. the opponent is mining lots of gas but no tech buildings / units are seen). While creating a reactive AI that "plays" is certainly a more interesting challenge, I would think an AI that executes difficult-to-defend rushes would be more successful. Thoughts?
I think you're right; in StarCraft it sometimes requires more skill to see a rush coming and stop it than it does to execute it. I think these situations usually require a more sophisticated AI too. It isn't really a 'rush', but UAlbertaBot scouts after its first Pylon, and when it finds the enemy base it starts attacking one of their workers with its scouting Probe. If it gets attacked, it runs away from the threat, running around in circles around the enemy base. If it stops being attacked, it comes back and starts attacking enemy workers again.
This is a really strong opening because it basically weeds out any AIs that don't have a system in place to deal with pulling workers. If your AI simply does nothing until it gets its first fighting unit out, you will probably have lost half of your workers by the time you get rid of the Probe. If you have a simple response like "if I have no fighting units, pull all my workers and defend", then you end up with all your workers chasing a single Probe around in circles. Again you will lose in this situation because you won't be mining. So in order to be able to defend against this single-Probe harassment, you have to program your AI to be able to judge the threat level of the enemy units that are attacking and defend with a proportional force. Another method could be to return workers to mining if they chase too far or for too long. Either way it forces you to put in a bit of effort in development just to defend against 1 Probe.
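That "proportional force" idea can be sketched like this — a Python illustration of my own, where the threat scores and the 2x overmatch factor are made-up numbers, not values from any real bot:

```python
import math

# Illustrative threat scores; a real bot would derive these from unit stats.
THREAT = {"Probe": 1, "Zealot": 4, "Zergling": 2}

def defenders_needed(attackers, worker_threat=1, overmatch=2.0):
    """Number of workers to pull so our force is `overmatch` times the threat.

    Against a lone scouting Probe this commits 2 workers, not the whole
    mineral line, so mining continues with the rest.
    """
    threat = sum(THREAT.get(u, 1) for u in attackers)
    return math.ceil(overmatch * threat / worker_threat)

def assign_defense(workers, attackers):
    """Split workers into (defenders, miners) proportionally to the threat."""
    n = min(defenders_needed(attackers), len(workers))
    return workers[:n], workers[n:]
```

A single Probe pulls two defenders and leaves everyone else mining; a real attack pulls the whole line — which is exactly the behaviour that the naive "pull everything" and "pull nothing" responses both fail to produce.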
There are lots of little annoying things like this you can do in a StarCraft game, I think. It's pretty easy for a human player to work out what's going on and not do something stupid like pull all their workers, but it's maybe not so simple for an AI. There are loads of little things you have to worry about, and it would be impractical to try to hard-code a response to every single possibility.
|
I'm assuming the SSCAIT tournament system can be used to let your bot play thousands of sessions; wouldn't this be sufficient to weed out bad strategies? Of course the opponent needs to be somewhat capable as well, but the other bots are all open-source now, right? So why not simply brute-force the approach like this until it has found optimal solutions?
Again, not an expert at all, just thinking out loud here
|
Thanks! I'd love to take a summer to develop a competitive AI (not that I think I'd be very successful). Maybe I can work a piece of it into a graduate school project someday ha.
|
The problem with algorithms that sort of "learn" is that they need thousands, if not hundreds of thousands, of samples to learn a small step. Well, you might say that's how we learn too. However, that's not true: when we humans see a problem, we make an abstract version of it and run thousands of calculations in our heads without playing or solving the problem for real. Until AIs are capable of this, there is no way we'll see them make big learning steps.
So scripted bots for the win.
|
On May 21 2015 07:19 LastWish wrote: The problem with algorithms that sort of "learn" is that they need thousands, if not hundreds of thousands, of samples to learn a small step. Well, you might say that's how we learn too. However, that's not true: when we humans see a problem, we make an abstract version of it and run thousands of calculations in our heads without playing or solving the problem for real. Until AIs are capable of this, there is no way we'll see them make big learning steps.
So scripted bots for the win.
If you're interested in the topic, you should read the latest article from Google DeepMind on DQN. The AI "learns" games!! Not like chess, where it's just basic computing. Humans don't make thousands of calculations to solve a problem. If we could, there would have been no need for counting frames ^^ Basically, we just use memories to extrapolate alternative solutions.
SKYNET!!!!
|
Could you make a bot that learns in some way, upload it to one of those SSCAIT tournament websites, and keep the memory synced with your version of the AI? Or does it have to be disconnected from the internet?
|
Awesome, wanted to give BWAPI a shot for a long time, that should come in super handy.
|
there was an AI BW competition like 7 years ago
can someone link that?
|
On May 24 2015 04:49 repomaniak wrote: there was an AI BW competition like 7 years ago
can someone link that?
AIIDE and CIG started in 2010, SSCAIT in 2011, but they are not consistently documented. I gathered the tournament websites a couple of months ago, in the links above.
|
On May 22 2015 06:46 Xeofreestyler wrote: Could you make a bot that learns in some way, upload it to one of those scai tournament websites and keep the memory synced with your version of the ai? Or does it have to be disconnected from internet?
Like Cazimirbzh said, it is a bit of a pain to learn complex behaviour if your learner (bot) is very artificial (hasn't got billions of years of evolution coded into it to learn or evolve). That said, you could play m/b/z/illions of games; the AIIDE tournament software is open source, and ideally you'd make it faster so it could be run on a super cluster, and reintegrate the results of that.
The oldest bots on SSCAIT have been there for years and have played only about 4600 matches each. Compared to the number of variables involved, that is not very much. Also, most of these older bots have undergone major revisions; I doubt any current version has 2000 matches under its belt.
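As a sketch of the kind of cross-game memory being discussed — a Python illustration of my own, where the file name, opponent names and build labels are all hypothetical; it just assumes the bot can read and write a local file between matches, which tournament setups typically allow via a per-bot read/write directory:

```python
import json
import os

MEMORY_FILE = "bot_memory.json"  # hypothetical path inside the bot's write dir

def load_memory():
    """Load the win/loss tallies persisted by previous matches, if any."""
    if os.path.exists(MEMORY_FILE):
        with open(MEMORY_FILE) as f:
            return json.load(f)
    return {}

def record_result(opponent, build, won):
    """Update the [wins, games] tally for an (opponent, build) pair."""
    memory = load_memory()
    key = f"{opponent}:{build}"
    wins, games = memory.get(key, [0, 0])
    memory[key] = [wins + int(won), games + 1]
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

def pick_build(opponent, builds):
    """Prefer the build with the best observed win rate against this opponent."""
    memory = load_memory()
    def win_rate(build):
        wins, games = memory.get(f"{opponent}:{build}", [0, 0])
        return wins / games if games else 0.5  # untried builds get a neutral prior
    return max(builds, key=win_rate)
```

Even something this crude starts preferring builds that have actually won, which is about all the simple table-based learning described earlier in the thread does; the hard part, as noted, is making thousands of games of this actually converge on something sensible.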
|
Hey, does anyone who's used this know how to compile an AI from source? I would really prefer not having to get Visual Studio, and it seems like it shouldn't be that hard with only 3 files in the VS project (though there may be a lot of interactions I'm missing from just looking at the files).
|
I think it should work with Visual Studio Express, which afaik is free.
|
On May 25 2015 13:06 IamTheArchitect wrote: Hey, does anyone who's used this know how to compile an AI from source? I would really prefer not having to get Visual Studio, and it seems like it shouldn't be that hard with only 3 files in the VS project (though there may be a lot of interactions I'm missing from just looking at the files).
On May 26 2015 21:24 PoP wrote: I think it should work with Visual Studio Express, which afaik is free.
Really? :)
IamTheArchitect, what's your OS? Also, do you intend to use the software for other things apart from AI compiling? I'd start by suggesting SharpDevelop.
|
Afaik every solution file is cross-compatible.
Writing an AI seems like a really cool thing to do; maybe I will start someday, to dust off my C++ skills.
|
There wouldn't happen to be Golang bindings for this, would there?
|
I'm afraid I've only compiled using VS so I can't really help there.
Afaik there are no Golang bindings either; I tried googling and didn't find anything anyway :p
On May 25 2015 01:38 nepeta wrote: That said, you could play m/b/z/illions of games, the AIIDE tournament software is open source, ideally you'd need to make it faster so it could be played on a super cluster and reintegrate the results of that. The oldest bots on SSCAIT have been there for years and played only about 4600 matches each.
I think if you really wanted to, you could run automated games faster than they do at the SSCAIT tournament. Last year the AIIDE tournament went on for like a week (iirc) and all the bots played 1139 games each: results.
I think the main difficulty is creating an effective learning mechanism that will actually benefit from grinding out thousands of games. I know my feeble attempt from last year wouldn't have.
|