The StarCraft AI Competition introduced last year will once again be part of the Artificial Intelligence and Interactive Digital Entertainment (AIIDE) conference. The call for participation is now live at the University of Alberta, which will be hosting this year's contest. The current site is a placeholder that will soon include details about the rules and tournament structure. Ideally, this year the tournament will be automated and offer a wide variety of options for how it is run.
I'd like to hear from the StarCraft community: what would you like to see from this year's competition in terms of organization, results, and rules?
Last year's competition was a huge success, and we are excited to see what future competitions will show us. UC Berkeley won last year's main event with raging Mutalisks; what else can AI show us?
A round where a 'surprise' element is added to the game on tournament day, in order to test the AIs' adaptability. In mirror matchups, maybe a unit is removed, for example Zerglings or Mutas in ZvZ.
Or maybe a special version of Python where the main has only 4 mineral patches, or a sudden decrease of the CC/Hatch/Nexus price to 75/100/100 (ehh, probably imba in favor of Zerg). These are just some example surprises.
This way we can see whether the AIs are truly intelligent or just have hard-coded BOs.
From my understanding, all the current bots would fail pretty hard if tested like this. But maybe another year is enough time to write something more flexible?
"This way we can see whether the AIs are truly intelligent or just have hard-coded BOs."
BOs are expected, given the complexity of the domain. But I do like some of the ideas you are proposing. Basically, there are two directions in which to increase complexity. The first is to relax the constraint of a fixed race: make the bot play all races, against all races, even random. The second is to relax the constraint of known maps: make the bot play on new maps. But AIs are not yet able to beat humans even under both of these constraints, so why relax them at this point?
StarCraft is an excellent domain for AI research, with all sorts of potential for investigating general intelligence techniques, but first consider humans. Humans have to study maps first, or at least be given a high-level description of map features. Second, even top-level players don't play random, because it's too complicated, even for humans. I agree that an advanced AI would be capable of these feats, but that is far beyond the current capabilities of these systems.
I'm interested in reading about what kind of general intelligence research has come out of the competition. From what I understand, the bar is set pretty low.
An AI should be given all the information that a newbie human is given: data on unit HP, attack, move speed, range, etc., and be tasked with figuring out unit counters itself. It should be able to figure out on the fly the answers to questions like: I'm expecting Mutas, should I build more Turrets or add Barracks for more Marines?
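To make this concrete, here's a rough sketch of the kind of stat-driven reasoning I mean, using the unit data BWAPI already exposes. The scoring formula is just an invented illustration, not anything a competition bot actually uses:

```cpp
// Sketch only: a toy "counter score" computed from raw BWAPI unit stats.
// The scoring formula is an illustrative assumption, not any bot's real method.
#include <BWAPI.h>

// Rough damage-per-frame of 'attacker' against 'defender', from stat data alone.
double dpf(BWAPI::UnitType attacker, BWAPI::UnitType defender)
{
    BWAPI::WeaponType w = defender.isFlyer() ? attacker.airWeapon()
                                             : attacker.groundWeapon();
    if (w == BWAPI::WeaponTypes::None || w.damageCooldown() == 0)
        return 0.0;                        // cannot hit that kind of target at all
    return double(w.damageAmount()) / w.damageCooldown();
}

// Higher score = 'candidate' looks better against 'threat', per resource spent.
double counterScore(BWAPI::UnitType candidate, BWAPI::UnitType threat)
{
    double offense = (threat.maxHitPoints() > 0)
                         ? dpf(candidate, threat) / threat.maxHitPoints()
                         : 0.0;
    double survivability = candidate.maxHitPoints() / (1.0 + dpf(threat, candidate));
    double cost = candidate.mineralPrice() + candidate.gasPrice() + 1;
    return offense * survivability / cost;
}

// e.g. compare counterScore(BWAPI::UnitTypes::Terran_Marine, BWAPI::UnitTypes::Zerg_Mutalisk)
// with counterScore(BWAPI::UnitTypes::Terran_Missile_Turret, BWAPI::UnitTypes::Zerg_Mutalisk)
```

Comparing Marines and Missile Turrets against Mutalisks with something like this is exactly the "Turrets or more Barracks" question above, just answered from raw numbers instead of a hard-coded counter table.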
Year 1 should be enough for figuring out all the mundane stuff like unit micro and pathfinding. Year 2 should be about dropping those constraints for more interesting play.
If the AI is thinking at a high enough level, then things like race or map should not matter to it.
If you set the rules from day 1 to favor flexibility, then you might be able to encourage faster development in that direction. Otherwise teams might be tempted to stuff their bots with mid- to low-level strategies like BOs and unit counters, and end up with effective but uninteresting bots.
my 2 cents.
My view has always been that before we build general intelligence, we must first build specific intelligence. With chess we went too far and built chess-specific machines, but as long as we are using off-the-shelf CPUs we should be fine in our quest for intelligence. We can't build expert general intelligence until we have first built expert intelligence in a specific domain!
It would be nice if you could add new maps that the bots have to analyze themselves, so that we can see how "creative" the AIs are. I know it's hard, but isn't that the point of all AI research?
They laid a lot of the groundwork last year. Now that the AIs are far along in getting things like scouting, micro, and basic build orders down, there's going to be a lot less left to compete over besides the real depth of the game's strategy. I'm interested to see them take it to the next level: varied build orders, complex micro management with multiple unit types, and everything to do with the battle to secure resources.
All I want to see is more full tournaments with lots of games played on proper maps, à la the final stage of last year's competition. For micro maps to be worth a damn they really should have dynamic armies of differing races, but then balancing the map would be a problem and probably just a waste of resources, unless you have them to spare.
What I really want to see is more stuff like that article on Berkeley's Overmind. I find it extremely interesting to read about the whole process of game-theory-based programming, especially in relation to StarCraft.
I made an SC AI for my AI unit last semester. It was a lot of fun, and BWAPI is a great API that was really easy to get the hang of. I might have to work on my AI and submit something to the AI competition this year.
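For anyone curious how little boilerplate it takes, here's a minimal module sketch roughly in the style of the BWAPI tutorials. Exact types and helper calls vary between BWAPI versions, so treat it as illustrative rather than code from my bot:

```cpp
// Minimal sketch of a BWAPI AI module, roughly following the public tutorials.
#include <BWAPI.h>

class ExampleBot : public BWAPI::AIModule
{
public:
    virtual void onStart()
    {
        BWAPI::Broodwar->sendText("ExampleBot online");
    }

    virtual void onFrame()
    {
        if (BWAPI::Broodwar->getMinerals().empty())
            return;

        // The "hello world" of BWAPI bots: send every idle worker to mine
        // from the closest mineral patch.
        for (auto worker : BWAPI::Broodwar->self()->getUnits())
        {
            if (!worker->getType().isWorker() || !worker->isIdle())
                continue;

            auto closest = *BWAPI::Broodwar->getMinerals().begin();
            for (auto mineral : BWAPI::Broodwar->getMinerals())
                if (worker->getDistance(mineral) < worker->getDistance(closest))
                    closest = mineral;

            worker->gather(closest);
        }
    }
};
```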
More micro maps! With the previous bots, I always felt that even though their micro was adequate, their decision making was always kind of suspect. IIRC, you posted a video of yourself playing a bot on the M&M micro map and utterly destroying it because you always fought on the high ground. There's still a lot of room for improvement.
In addition, more interesting unit compositions for micro maps would be nice instead of just plain M&M or Dragoons. Something like Tank/Vulture/Goliath/Dropship. With a composition like this, you can do a lot of different things. Since this is essentially TvT, positioning becomes key. Where do you set up Tanks? Can you lure the enemy within range of those Tanks? How should you place the few Mines you have? Can you effectively use your Dropships to make their Tanks splash their own units?
To date, the majority of the Terran bots I've seen tend to be very sloppy about positioning. Oftentimes they'll siege up over their own Mines, with predictable results. Stuff like that shouldn't happen.
Basically, I want to see more emphasis on positional play. This is the one area I think bots really need to improve in.
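Even a dumb pre-siege sanity check would avoid the "sieging over your own Mines" disaster. Something like the sketch below; the splash radius and height threshold are rough guesses, and exact BWAPI calls differ a bit between versions:

```cpp
// Sketch of a pre-siege sanity check. Radius and height values are assumptions.
#include <BWAPI.h>
using namespace BWAPI;

// Only report "safe" if no friendly Spider Mine sits within rough splash range
// and the tank is not standing on the lowest ground level.
bool safeToSiege(Unit tank)
{
    const int splashRadius = 4 * 32;   // ~4 tiles, a rough guess

    for (Unit u : Broodwar->getUnitsInRadius(tank->getPosition(), splashRadius))
    {
        if (u->getPlayer() == Broodwar->self() &&
            u->getType() == UnitTypes::Terran_Vulture_Spider_Mine)
            return false;              // sieging here would splash our own mine
    }

    // Very crude positional check: refuse to siege on the lowest terrain level.
    return Broodwar->getGroundHeight(tank->getTilePosition()) >= 1;
}
```

Real positional play obviously goes far beyond this, but checks of this kind are cheap and would already remove the most embarrassing mistakes.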
I was arguing with my friends the other day about AI in StarCraft, but I am not really a computer expert, so I'd like to ask you guys:
1) No AI is capable of competing at a professional level in StarCraft.
We seemed to agree on this point, but they claim that it is possible to program an AI capable of out-microing professional gamers rather easily, while I was under the impression that this would be one of the hardest things to program.
You're both mistaken. Programming an AI to "outmicro" is not the hard part; programming it to micro intelligently is. It doesn't have a click speed, obviously, so its APM can be effectively infinite. But how do you get it to make decisions about what to do with its units? Does it try to snipe Tanks/HT/Defilers, or pull back to a better position? How many units ahead should it be to try to break a good concave of ranged units? What about breaking a ramp? When it has melee units against ranged ones, how should it behave? Should it charge in and try to intermingle with the ranged units, or retreat? How many Hydras are worth charging with X number of Zealots? The answers all depend on positioning and how important map control is at that stage of the game. Pro players don't have formulas for working these things out; they just have instincts and thousands of hours of experience.
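To make the decision problem concrete, here's a toy target-picking sketch in BWAPI. The priority numbers are completely made up, and it ignores all the harder questions about retreating, concaves, and positioning, which is exactly the point:

```cpp
// Sketch of the decision problem described above: pick what a unit should shoot
// at. The priority table is an invented illustration, not any bot's real rule.
#include <BWAPI.h>
using namespace BWAPI;

int targetPriority(Unit target)
{
    UnitType t = target->getType();
    if (t == UnitTypes::Protoss_High_Templar ||
        t == UnitTypes::Zerg_Defiler)                 return 100; // spellcasters first
    if (t == UnitTypes::Terran_Siege_Tank_Siege_Mode) return 90;
    if (t.isWorker())                                 return 60;
    if (t.groundWeapon() != WeaponTypes::None ||
        t.airWeapon()    != WeaponTypes::None)        return 50;  // anything that shoots back
    return 10;                                        // buildings, Overlords, etc.
}

Unit pickTarget(Unit shooter)
{
    Unit best = nullptr;
    int bestScore = -1;
    int range = shooter->getType().groundWeapon().maxRange() + 32;

    for (Unit enemy : Broodwar->getUnitsInRadius(shooter->getPosition(), range))
    {
        if (!Broodwar->self()->isEnemy(enemy->getPlayer()))
            continue;
        // Favor high-priority targets that are close to dying.
        int score = targetPriority(enemy) * 1000 - enemy->getHitPoints();
        if (score > bestScore) { bestScore = score; best = enemy; }
    }
    return best;   // "should I even be fighting here?" is the much harder question
}
```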
Why is it that every thread about AI invariably has some complete moron that thinks that any AI that doesn't have human-level intelligence is somehow worthless? We are at least 50 years from achieving that, probably more. Real, true intelligence requires levels of complexity that are hard to imagine. Having a computer solve even a constrained problem like playing Starcraft well requires a ton of work and interesting research. It doesn't matter that much of it is hard-coded to solve a particular problem, it's just a first step. We are already seeing some emergent behaviour in advanced bots like the Overmind and that's truly fascinating and more than I really expected from all of this. Personally I'm absolutely thrilled to see where things will go now that many of the teams have the basics out of the way and can concentrate on making their bots even smarter and more versatile.
As far as what I would like to see: a detailed write-up from each team on how they went about making their bots, design philosophies, etc., like the article on the Overmind but not just from the winner. Maybe some sort of documentary that could be put on YouTube would be cool too, with interviews with the developers and maybe some graphics to explain certain concepts, to make it all more accessible to a wider audience. Someone could also contact one of the science magazines like New Scientist to do a piece, or even a TV station like the National Geographic channel. I'm not sure what the funding and organization behind this competition is like, but there are some fantastic PR opportunities here.
@Goragoth: That would be awesome (since that would mean I get an interview :D ZotBot FTW).
Regarding what I have read in the previous posts: my bot, and pretty much all of the other bots, already have an algorithm to analyze maps. My bot will spend the first 30 seconds analyzing a map if it has never played on it before.
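For those asking about map analysis: many bots lean on the BWTA terrain-analysis library rather than rolling their own, and BWTA caches its results per map, which is a big part of why the first game on a new map is slow. A rough sketch of the usual setup (not my bot's exact code):

```cpp
// Sketch of the common BWTA-based map analysis step, not any specific bot's code.
#include <BWAPI.h>
#include <BWTA.h>

void analyzeCurrentMap()
{
    BWTA::readMap();    // pull the map data out of BWAPI
    BWTA::analyze();    // slow the first time on a map; cached afterwards

    // After analysis we can reason about regions, chokepoints, and bases.
    BWAPI::Broodwar->sendText("Regions: %d, chokepoints: %d, base locations: %d",
                              (int)BWTA::getRegions().size(),
                              (int)BWTA::getChokepoints().size(),
                              (int)BWTA::getBaseLocations().size());
}
```

Since analyze() blocks, a lot of bots kick it off in a separate thread so the game keeps running while the analysis finishes.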
As was stated before, the micro actions themselves are easy, but knowing when to micro is hard. I should probably post a video of my Psi Storm micro :D An 80-supply army beats 200/200 Hydras with 80 food of Siege Tanks supporting them :D
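The "when" is just a heuristic in the end. Something along these lines, where the cluster radius and the 4-unit threshold are arbitrary placeholders rather than my actual numbers:

```cpp
// Toy "when to storm" trigger; radius and threshold values are assumptions.
#include <BWAPI.h>
using namespace BWAPI;

void maybeStorm(Unit templar)
{
    if (templar->getEnergy() < TechTypes::Psionic_Storm.energyCost())
        return;

    const int stormRadius = 48;        // ~1.5 tiles, rough guess at the AoE
    Unit bestTarget = nullptr;
    int bestCount = 0;

    for (Unit enemy : templar->getUnitsInRadius(9 * 32))   // roughly cast range
    {
        if (!Broodwar->self()->isEnemy(enemy->getPlayer()) || enemy->isUnderStorm())
            continue;
        // Count how many units (of either side) are packed around this one.
        // It's a crude clump detector; a real bot would avoid storming its own army.
        int clustered = (int)Broodwar->getUnitsInRadius(enemy->getPosition(),
                                                        stormRadius).size();
        if (clustered > bestCount) { bestCount = clustered; bestTarget = enemy; }
    }

    if (bestTarget && bestCount >= 4)  // only storm worthwhile clumps
        templar->useTech(TechTypes::Psionic_Storm, bestTarget->getPosition());
}
```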
I'll probably post here again when I have more time.
On February 05 2011 18:13 Goragoth wrote: Why is it that every thread about AI invariably has some complete moron that thinks that any AI that doesn't have human-level intelligence is somehow worthless? We are at least 50 years from achieving that, probably more. Real, true intelligence requires levels of complexity that are hard to imagine. Having a computer solve even a constrained problem like playing Starcraft well requires a ton of work and interesting research.
This is why we are starting with a fixed task. I'm not sure about your timeline for general intelligence, but yes, playing expert StarCraft is only the beginning of building human-level AI systems.
@gen.Sun: There are some bots that have released their code. I am in the midst of creating a new bot from scratch; the only benefit I have from being in the previous year's competition is my knowledge. I'm sure if you worked on it a bit you could have a really good bot, perhaps even the winning one :D
Could you guys possibly give data about which terrain is good or bad for specific races, or some tactics on how to fight on certain terrain? The bots never used Dark Swarm or Plague against hostiles; it seemed like they were being held back by some restrictions.
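For reference, here's a rough sketch of what a minimal Dark Swarm rule could look like in BWAPI. The "3 ranged attackers nearby" trigger is just a made-up placeholder, and I can't say why last year's bots didn't cast it:

```cpp
// Trivial Dark Swarm sketch for a Defiler; the trigger condition is invented.
#include <BWAPI.h>
using namespace BWAPI;

void maybeDarkSwarm(Unit defiler, Unit frontlineUnit)
{
    if (defiler->getEnergy() < TechTypes::Dark_Swarm.energyCost())
        return;
    if (frontlineUnit->isUnderDarkSwarm())
        return;                                    // already covered

    int rangedThreats = 0;
    for (Unit enemy : frontlineUnit->getUnitsInRadius(8 * 32))
    {
        if (Broodwar->self()->isEnemy(enemy->getPlayer()) &&
            enemy->getType().groundWeapon().maxRange() > 32)
            ++rangedThreats;                       // crude "ranged ground attacker" test
    }

    if (rangedThreats >= 3)
        defiler->useTech(TechTypes::Dark_Swarm, frontlineUnit->getPosition());
}
```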
If most of the competitors can improve their macro (expansions, workers actually working, and units actually moving) and improve their pushes too, it will be very cool.