BW AI Videos Thread - Page 3
aeroH
United States1034 Posts
thanks for putting them in one thread
Spazer
Canada8029 Posts
I found the reliance on mnm kinda strange. Has mech just not been implemented yet, or does the AI really perform better with mnm?
3clipse
Canada2555 Posts
On December 12 2009 19:55 MamiyaOtaru wrote: given the limitations and difficulty inherent in programming BW AI this is amazing
Yeah, the only full game I've watched is the TvZ and it blew my mind. I can't imagine the final product of all these projects.
Flicky
England2657 Posts
It seems so far that if they're being countered, they'll just keep going down the same tech path.
djsherman
United States140 Posts
I'm expecting the results of the competition to lead to commentary-worthy VODs.
+ Show Spoiler +
The bot loses, obviously. I was just impressed it lasted 11 minutes.
djsherman
United States140 Posts
My long term plans are to have the bot learn from mining replays of professional players. The bot will select a standard build given the map and match up and then adapt to the opponent based on actions that professional players have taken. Right now this is completely theoretical (I'm a PhD student), but I expect to advance the field of game AI.
wswordsmen
United States987 Posts
On December 13 2009 01:33 djsherman wrote: Right now the EISBot uses a fixed strategy, where certain tech buildings are built at specific supply timings. Also, the placing of mines is completely reactive, based on detecting an enemy. So once an enemy is in the range of a vulture, the vulture will plant mines and then flee. Tanks have a similar behavior and will siege as soon as an enemy is in range and unsiege once an enemy is no longer in range. So this explains why the bot is still predictable. My long term plans are to have the bot learn from mining replays of professional players. The bot will select a standard build given the map and match up and then adapt to the opponent based on actions that professional players have taken. Right now this is completely theoretical (I'm a PhD student), but I expect to advance the field of game AI.
That explains why the bot fell apart to Mutalisks. If the C player didn't know it was a bot, I would guess he was very surprised that a player that defended that well against his attack couldn't deal with Mutalisks.
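The vulture and tank behaviors djsherman describes are purely reactive trigger rules, which is exactly why the bot stays predictable. A minimal sketch of that control loop, assuming made-up unit and order names (this is not EISBot's actual code or API):

```python
# Hedged sketch of the reactive behaviors described above:
# vultures plant mines and flee when an enemy enters range;
# tanks siege when an enemy is in range and unsiege when none is.
# All names and the 1-D positions are illustrative only.

from dataclasses import dataclass

@dataclass
class Unit:
    kind: str            # "vulture" or "tank"
    pos: float           # 1-D position, enough for the sketch
    attack_range: float
    sieged: bool = False

def nearest_enemy(unit, enemy_positions):
    """Distance to the closest visible enemy (infinity if none)."""
    return min((abs(unit.pos - p) for p in enemy_positions), default=float("inf"))

def react(unit, enemy_positions):
    """Return the order this purely reactive controller would issue."""
    in_range = nearest_enemy(unit, enemy_positions) <= unit.attack_range
    if unit.kind == "vulture":
        # Plant mines on contact, then run away.
        return "plant_mines_and_flee" if in_range else "patrol"
    if unit.kind == "tank":
        if in_range and not unit.sieged:
            unit.sieged = True
            return "siege"
        if not in_range and unit.sieged:
            unit.sieged = False
            return "unsiege"
        return "hold"
    return "idle"
```

Because the rule only looks at the current frame, an opponent who baits the trigger (e.g. poking a tank so it sieges, then attacking elsewhere) exploits it every time, which matches the predictability complaint in the thread.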
butter
United States785 Posts
On December 13 2009 01:33 djsherman wrote: My long term plans are to have the bot learn from mining replays of professional players. The bot will select a standard build given the map and match up and then adapt to the opponent based on actions that professional players have taken. Right now this is completely theoretical (I'm a PhD student), but I expect to advance the field of game AI.
I wish you luck. Developing an AI that plays like a skilled human is a much harder (and more interesting?) problem than simply producing a competitive AI.
Traveler
United States451 Posts
They don't adapt at all, and they also have no concept of positioning, either building-wise or unit-wise. Otherwise the micro ones are incredible, even though they also seem to have some problems with prioritizing.
SWPIGWANG
Canada482 Posts
On December 13 2009 01:33 djsherman wrote: My long term plans are to have the bot learn from mining replays of professional players. The bot will select a standard build given the map and match up and then adapt to the opponent based on actions that professional players have taken. Right now this is completely theoretical (I'm a PhD student), but I expect to advance the field of game AI.
I'm not sure this is a good plan, since the capacities of a human and a computer are very different. Look at successful chess programs, for example: the way a computer plays is very different from the way a human does, with different kinds of strengths and weaknesses. I don't think trying to emulate a human would result in a strong opponent or really advance the field of AI research (it would probably end up as some decision-tree type of thing with a lot of handcrafting, given resource and time constraints).
I think the best way to build a strong StarCraft AI is to first divide the game into subproblems (e.g. muta vs marines), figure out which ones can be solved, and design build orders around the things the AI is good at while avoiding the things it cannot cope with. Build orders are often the major weakness of human players. However, for a computer, even something as basic as getting units unjammed or coping with a lone guardian shooting at your CC will take work, let alone things like avoiding a flank. AI is very, very stupid, and the simplest things for a human are hard for an AI.
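At its simplest, the replay-learning plan debated here (pick a standard build per map and matchup, based on what professionals did) reduces to a lookup over mined replay statistics. A toy sketch under that assumption; the build names, numbers, and class are invented for illustration and come from no real replay data:

```python
# Toy build selection from mined replay statistics: choose the
# opening with the best observed win rate for a (map, matchup)
# pair, falling back to a default when no data exists.
# All builds and counts here are hypothetical.

from collections import defaultdict

class BuildSelector:
    def __init__(self):
        # (map, matchup) -> build name -> [wins, games]
        self.stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))

    def record(self, game_map, matchup, build, won):
        """Tally one mined replay result."""
        entry = self.stats[(game_map, matchup)][build]
        entry[0] += int(won)
        entry[1] += 1

    def select(self, game_map, matchup, default="1 rax FE"):
        """Pick the build with the highest win rate, or the default."""
        builds = self.stats.get((game_map, matchup))
        if not builds:
            return default
        return max(builds, key=lambda b: builds[b][0] / builds[b][1])
```

SWPIGWANG's objection maps directly onto this sketch: win-rate statistics encode what worked for humans, not what a bot with weak micro and pathing can actually execute, so the "best" human build may be a poor fit for the machine.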
ithron
Norway19 Posts
The comment at the end is just too hilarious.
niteReloaded
Croatia5281 Posts
(and it will make for some sick sick replays.. imagine wraith micro at 5 different places at once)