Zerg Build Order optimizer. - Page 17
Forum Index > SC2 General
Dharmok
Netherlands57 Posts
10 OV
10 Pool
12 Gas
14 Queen
16 Gas
16 OV
16 Lair
23 OV
23 Spire

On Steppes I was able to start the mutas at 6:26 (same as your build order). But since it uses a Queen, you're in a lot better shape. Makes me wonder if something like a fast expand build using 4 geysers might be feasible as well... Anyway, I find the builds this program generates pretty inspiring.
azzu
Germany141 Posts
10 - Overlord
10 - ExtractorTrick
14 - SpawningPool
13 - Extractor
12 - Extractor
16 - Lair
18 - Overlord
20 - Spire
19 - Queen
24 - Overlord
24 - Overlord
25 - 5 Mutas
@6:20 - Mutas pop

http://pastebin.com/11Pp87Um

Also note that you're ~15 seconds slower if you do this in-game.
Dudemeister
Sweden314 Posts
10 ET
11 Ovie
11 Gas
11 Pool
11 Ling
2 drones
14 Ling
15 Banes nest
14 Ling
15 Ling
16 Gas
15 Ling
16 Ling
Make 12 banes

Time: 4:40
Andre112
Canada52 Posts
Goal: 30 drones, 2 queens, 2 hatcheries (1 at nat), 2 pairs of lings (before 4:30), ling speed, and maybe lair too (I'm not sure about the timing). I want to know which BO it comes up with. Is it 15 hatch > 14 pool > 13 gas, or 14 gas > 14 pool > 21 hatch, or 14 pool > 16 hatch > 17 gas, or something crazy?
PepperoniPiZZa
Sierra Leone1660 Posts
How come your AI takes so many hours to calculate? Are you doing real-time simulations? I'm asking because my CPU can do billions of operations per second, so millions of build orders should fit in there. What part consumes so much time?
Bumblebees
United States328 Posts
On October 19 2010 02:26 kaliax wrote: Uh, so this is not minimax search. You're describing some sort of tree search with pruning and a heuristic evaluation function (wtf is a "minmax weighting function"?), but not minimax, which requires an adversary who is trying to minimize your score (i.e. chess), or at least a model which creates a zero-sum game (i.e. modeling player vs nature, and trying to minimize the worst possible outcome, which again does not make sense here since all actions aside from the opponent's actions (which we aren't modeling) are within the player's control). Thank you for the post OP, looks great, will keep checking this.

Extensions or modifications to a search algorithm do not make it no longer that algorithm. There are many forms of minimax, and essentially the only requirement is that the search's primary function is to compare two optimal outcomes arising from evaluating two opposing developments. There are plenty of minimax-based searches (such as alpha-beta and its variants) with various extensions (heuristics, quiescence, iterative deepening, aspiration searches, etc.). Nearly all of them can be added to a basic minimax, though the performance will not be enhanced nearly as much as by moving to even a simple alpha-beta, which is still a form of minimax.

Also, in a build order search there IS an opponent, and it is quite simply another build order. The evaluation is the time to reach a specific outcome. Simply comparing two build orders starting from a zero-state to a specific depth is enough. There are plenty of ways to greatly speed this up, even just a simple negascout with principal variation and very little else.

I haven't read the whole thread, but if the OP is using a genetic algorithm, as seemed to be implied at the beginning of the thread, the aforementioned method would be significantly faster.
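For readers unfamiliar with the family of searches being argued about, here is a minimal sketch of negamax with alpha-beta pruning (the symmetric formulation of minimax mentioned above). This is generic illustration code, not anyone's actual build calculator; the `Node` class is a toy stand-in for a real game state.

```python
# Minimal negamax with alpha-beta pruning, the family of minimax
# searches discussed above.  `Node` is a toy stand-in for a game state.

class Node:
    """Toy game-tree node; `value` is scored for the side to move at it."""
    def __init__(self, value=None, kids=()):
        self.value, self.kids = value, list(kids)

def negamax(node, depth, alpha, beta):
    """Return the best achievable score for the side to move."""
    if depth == 0 or not node.kids:
        return node.value            # heuristic "weighting function"
    best = float("-inf")
    for child in node.kids:
        score = -negamax(child, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:            # cutoff: the opponent avoids this line
            break
    return best
```

On a tiny two-ply tree this picks the branch whose worst case (after the opponent's best reply) is highest, which is all minimax fundamentally does; alpha-beta just skips branches that can no longer affect that choice.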
Bumblebees
United States328 Posts
On October 20 2010 03:21 PepperoniPiZZa wrote: Hi, I'm not too much into Starcraft but I like AI and I'd like to ask a question. How come your AI takes so many hours to calculate? Are you doing real-time simulations? I'm asking because my CPU can do billions of operations per second, so millions of build orders should fit in there. What part consumes so much time?

You have it a bit reversed. Given "billions of calculations", even a relatively static calculation of a build order will take "millions of calculations". Ideally a well-written program should be able to evaluate many (likely tens or hundreds of thousands of) static points in time. Every point in time needs to be evaluated (ideally at increments of time equivalent to the income of 5 minerals/gas given the current mining rate). Once you consider the branching that occurs for every state that needs to be evaluated, the time increases significantly as you move forward in 'game time'. Many, many millions of game states need to be evaluated.

If the OP is using a genetic algorithm, then you have to consider each possible build order that could achieve the outcome and essentially work backwards to narrow it down to a select field of "fit" options. Either way you look at it, the program has to prune or create branches to determine the ideal outcome from millions and millions of potential solutions.
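The point about stepping the game state forward in small time increments can be illustrated with a toy forward simulation. The starting bank, costs, and income rate below are made-up placeholder numbers, not real SC2 values; a real simulator would also model larva, supply, and build times, each adding branching at every step.

```python
# Toy forward simulation of a build order in fixed time steps.
# Costs, income rate, and starting bank are illustrative placeholders,
# NOT real SC2 data.

def simulate(build, drones=6, tick=1.0, income_per_drone=0.7):
    """Return the game time at which every item in `build` has been paid for.

    `build` is a list of (name, mineral_cost) pairs, paid in order as
    soon as the bank allows.  Each 'drone' item adds a worker.
    """
    minerals, t, i = 50.0, 0.0, 0
    while i < len(build):
        name, cost = build[i]
        if minerals >= cost:          # afford the next item now
            minerals -= cost
            if name == "drone":
                drones += 1           # more workers -> faster income
            i += 1
            continue
        minerals += drones * income_per_drone * tick  # one time step
        t += tick
    return t
```

Even this crude model shows why ordering matters: paying for the cheap economy item first finishes the whole list sooner than saving for the expensive structure first, because the extra worker compounds every tick. Multiply one such rollout by every possible ordering and you get the "millions of game states" described above.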
Lomilar
United States130 Posts
There's been tremendous support for this tool and lots of help from ~25 alpha testers. Hoping to get a version out to you guys by the end of this week.
Subversion
South Africa3627 Posts
On October 20 2010 09:32 Lomilar wrote: I love you guys. There's been tremendous support for this tool and lots of help from ~25 alpha testers. Hoping to get a version out to you guys by the end of this week.

You are the man. Can't wait for this.
Almania
145 Posts
On October 20 2010 04:31 Bumblebees wrote: I haven't read the whole thread, but if the OP is using a genetic algorithm, as seemed to be implied at the beginning of the thread, the aforementioned method would be significantly faster.

Can you prove this to us? I.e., write a BO calculator faster than Lomilar's?
Schnullerbacke13
Germany1199 Posts
On October 20 2010 04:31 Bumblebees wrote: Extensions or modifications to a search algorithm do not make it no longer that algorithm. There are many forms of minimax [...] I haven't read the whole thread, but if the OP is using a genetic algorithm, as seemed to be implied at the beginning of the thread, the aforementioned method would be significantly faster.

1st: By "weighting function" I mean a method computing a value of "how good" a build variant is.

2nd: Of course I implemented some variant of minimax. However, I do not care what it is called. It works :-D

3rd: People seem to confuse some things: I am not the OP. I wrote a distinct build calculator, independent of the OP, using a minimax (or whatever you want to call it), and yes: it seems to run faster (~minutes) than the genetic algos. However, the performance depends on how aggressively the pruning of "weaker" build variants is done. Sometimes a build looks bad in the beginning (e.g. early hatch, queen) and gets good at a later point; the less aggressively pruning is performed, the more likely the "best" build is found, and the higher the computation time. So I usually use some fast settings for initial discovery (3 minutes), but make a long run (1 day) to ensure there is not another, better build.
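The pruning trade-off described above can be sketched as a beam search: keep only the `width` best-scoring partial builds at each step. This is a generic sketch, not this poster's actual calculator; `expand` and `score` are hypothetical stand-ins for move generation and the "weighting function".

```python
# Beam search over partial builds: keep only the `width` best-scoring
# candidates at each step.  `expand` and `score` are hypothetical
# stand-ins for move generation and the "weighting function" above.

def beam_search(start, expand, score, depth, width):
    frontier = [start]
    for _ in range(depth):
        candidates = [child for state in frontier for child in expand(state)]
        if not candidates:
            break
        # Aggressive pruning = small width: fast, but a line that only
        # pays off later (e.g. early hatch) may be discarded right here.
        frontier = sorted(candidates, key=score, reverse=True)[:width]
    return max(frontier, key=score)
```

With `width=1` this degenerates to a greedy search and can miss a build that scores poorly early but best overall; widening the beam recovers it at proportionally higher cost, which matches the fast-settings-then-one-day-run workflow described above.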
kaliax
United States48 Posts
Wondering if you could share some of the technical details of your genetic algo implementation. I'm doing some research on the topic for an unrelated project and would be fascinated to know how you applied it here.
kaliax
United States48 Posts
On October 20 2010 04:31 Bumblebees wrote: Extensions or modifications to a search algorithm do not make it no longer that algorithm. There are many forms of minimax [...] There are plenty of ways to greatly speed this up, even just a simple negascout with principal variation and very little else.

Of course you can model this problem as a variant of minimax; I wasn't disputing that. I was just stating that the post I quoted was not actually minimax, but just a heuristic tree search with pruning. Moot point anyway. You bring up a great point with negascout, though; it'd be great to see an implementation of that here.
Bumblebees
United States328 Posts
On October 20 2010 11:44 Almania wrote: Can you prove this to us? I.e., write a BO calculator faster than Lomilar's?

There is nothing to prove. Use of a genetic algorithm is only ever going to be faster in a two-opponent game if there is some form of mutation occurring that limits the accuracy or ability of a static comparison between branches. Build order evaluation is completely static, as the current position can always be evaluated against a previous game state, and it becomes even more effective than GA because there is a defined cutoff that limits branching and allows for effective search extensions. Of course GA can fairly easily be made to fit most game-theory problems, but the question isn't really "can it work?" For 99% of problems where some form of minimax is directly applicable, it will be faster.

And no, I won't write one. I spent many years of my life working on, with, and discussing chess software. I have other things in my life I'd rather focus on in my spare time, such as actually playing StarCraft 2.

Edit: I wanted to point out that if we were trying to figure out the best response to an adaptive build order*, then some sort of genetic algo would be quite applicable. This would be even more computationally intense, though, as it would need to take into account the map, unit pathing/speed, unit loss, etc. to be even remotely useful. Essentially it would be an AI playing the game.

* This is assuming that the 'opponent' is constantly changing his response to the information that can be gained from the 'player's' emerging build order.
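For reference in this GA-versus-minimax debate, here is the generic shape of a genetic algorithm over build orders. This is a sketch of the general technique, not Lomilar's actual implementation; in a real optimizer `fitness` would be a full build simulation, while here it is just a placeholder argument, and all parameter values are arbitrary.

```python
import random

# Generic genetic-algorithm loop over build orders: selection,
# single-point crossover, and per-gene mutation.  A sketch of the
# general technique, not the OP's implementation; `fitness` would be
# a full build simulation in a real optimizer.

def evolve(actions, fitness, pop_size=50, length=12,
           generations=100, mutation_rate=0.1, rng=random):
    def random_build():
        return [rng.choice(actions) for _ in range(length)]

    def mutate(build):
        return [rng.choice(actions) if rng.random() < mutation_rate else a
                for a in build]

    def crossover(a, b):
        cut = rng.randrange(1, length)      # single-point crossover
        return a[:cut] + b[cut:]

    population = [random_build() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]        # keep the fittest half
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

The contrast with the search-based approach is visible in the structure: nothing here compares states against a previous game state or cuts off branches; it only resamples around whatever already scores well, which is why the posts above expect a pruned tree search to converge faster on a static problem like this.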
osten
Sweden316 Posts
This is absolutely perfect! Can't wait to try it out. Also veeery interested in how this thing works!