The AI
AI's most obvious advantage is that a computer could make a million moves in the time that a human makes two moves.
suppose we limit the AI's actions per minute (APM). how far must we throttle the AI's APM to achieve parity with humans?
could an AI with 60 APM defeat Flash, the most accomplished human player (250+ APM)?
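a minimal sketch of how such an APM cap might be enforced in a test harness: a token-bucket throttle that permits at most `apm` actions per rolling minute. the class and its interface are illustrative assumptions, not part of any real bot API.

```python
import time

class ApmLimiter:
    """token-bucket throttle: allows at most `apm` actions per 60 seconds."""
    def __init__(self, apm, clock=time.monotonic):
        self.capacity = apm           # burst size: one minute's worth of actions
        self.tokens = float(apm)
        self.rate = apm / 60.0        # tokens replenished per second
        self.clock = clock
        self.last = clock()

    def try_act(self):
        """returns True if the agent may act now, False if it is over budget."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True               # action permitted
        return False                  # over the APM budget; action dropped
```

a 60-APM agent built this way averages one action per second; the harness simply drops (or queues) anything beyond that.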
Simulation
how does the simulator learn?
suppose we could simulate 1,000,000,000 StarCraft games, learning only by trial and error. does this brute-force, 'purely empirical' approach fail the test of efficiency?
suppose we could 'magically' intuit the correct answer to one question. how many simulations are required to arrive at the same answer with 95% certainty?
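a back-of-the-envelope sense of scale, assuming the question reduces to detecting a win-rate edge over a 50/50 baseline: the normal approximation below (with worst-case Bernoulli variance) is a rough sketch, not a full sequential analysis.

```python
import math
import statistics

def games_needed(edge, confidence=0.95):
    """
    rough sample size for a coin-flip (Bernoulli) comparison: how many games
    until a true win-rate edge of `edge` (e.g. 0.01 = one percentage point)
    separates from 50/50 at the given one-sided confidence level.
    uses the normal approximation with worst-case variance p(1-p) <= 0.25.
    """
    z = statistics.NormalDist().inv_cdf(confidence)   # one-sided z-score
    return math.ceil((z * 0.5 / edge) ** 2)
```

`games_needed(0.01)` lands in the thousands and `games_needed(0.001)` in the hundreds of thousands, a hint at why billions of trial-and-error games dwarf one 'magical' intuition.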
suppose 10,000,000,000 simulations yield that you should scout at 8 drones in ZvR.
now the simulation employs 8 drone scouting 'going forward'.
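the 'going forward' behavior can be sketched as a simple bandit rule: exploit the best-known timing most of the time, but keep a small exploration rate so the conclusion can still be overturned. the function and its arguments are hypothetical, not from any real simulator.

```python
import random

def epsilon_greedy(wins, games, epsilon=0.05, rng=random):
    """
    pick a scouting timing: mostly the empirically best one so far
    ('8 drones going forward'), occasionally a random alternative so the
    conclusion stays revisable. `wins`/`games` map timing -> counts.
    """
    if rng.random() < epsilon:
        return rng.choice(list(games))            # keep exploring
    # exploit: highest observed win rate (unplayed timings get tried first)
    return max(games, key=lambda t: wins[t] / games[t] if games[t] else float("inf"))
```

with `epsilon=0` this is exactly the "employ 8 drone scouting going forward" policy; any positive epsilon leaves the door open to revision.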
Questions
does your simulator answer questions one at a time? how does the simulator prioritize questions? what if the simulator neglects essential questions? how does the simulator detect 'acceptable solutions' that are not the correct solution?
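one illustrative answer to the prioritization question: score each open question by a crude value-of-information estimate and work through a priority queue. the scoring (estimated impact divided by simulation cost) is an assumption made for the sketch, not a claim about any real simulator.

```python
import heapq

def prioritize(questions):
    """
    order open balance questions by a crude value-of-information score:
    estimated win-rate impact divided by estimated simulation cost.
    each question is a (name, impact, cost) tuple; the scoring is illustrative.
    """
    heap = [(-impact / cost, name) for name, impact, cost in questions]
    heapq.heapify(heap)                        # max-heap via negated scores
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

the hard part the text points at is unsolved by this sketch: a question the scorer never sees (an 'essential question' the simulator neglects) never enters the queue at all.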
Human Brain
the human brain features some remarkable learning mechanisms. a human could be expected to say: "i've run 6,000,000,000 simulations testing whether 8 drone scouting is the optimal ZvR solution. however, i'm happy scouting with 8 drones or 11 drones. if i scout somewhere in the 8-11 drone spectrum, then this is optimal given current knowledge. investigating this question is no longer the optimal allocation of my time. now, i should answer whether to open 13 pool or 15 hatchery ZvR. also: is an engineering bay or pylon blocking a hatchery 'game-breaking'?"
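the stopping rule in that monologue can be written down directly: declare a question closed once every candidate sits within an indifference tolerance of the best one. the tolerance value here is an arbitrary assumption.

```python
def indifferent(win_rates, tolerance=0.01):
    """
    the 'human' stopping rule from the text: if every candidate in the
    8-11 drone spectrum sits within `tolerance` win rate of the best one,
    further simulation of this question is no longer worth the time.
    `win_rates` maps candidate -> observed win rate.
    """
    best = max(win_rates.values())
    return all(best - wr <= tolerance for wr in win_rates.values())
```

once `indifferent` returns True for the scouting question, attention moves to the next question (13 pool vs 15 hatchery), which is the reallocation the human describes.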
what constitutes game-breaking?
Humans vs AI
could we test RTS games with AI and exclude human testers? my current estimation is: AI is okay, humans are necessary.
one major consideration is that we're creating a human-friendly, human-enjoyable game.
a working definition of game-breaking: "could a human player reasonably be expected to handle this scenario? could a human player reasonably be expected to recover from it?"
suppose that a 60-APM AI could defeat Flash playing at 250 APM. that's quite a feat. however, suppose we employ the same AI to arrive at conclusions like: is an engineering bay or pylon blocking a hatchery game-breaking?
the AI advantage is enough to defeat Flash while the AI plays at less than 1/4th the APM. could we translate that same advantage into saying that the AI could recover from scenarios where recovery would be impossible for humans? how well does the AI gauge its own potential? in some sense this is like asking: "when do i all-in?"
suppose you're in the AI's position. you beat Flash every game despite playing at less than 1/4th his speed. when do you make an all-in gamble? the question doesn't make sense. if your expectation is that you will win with 100% certainty, then 'all-in gambles' do not apply in your case.
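the last point can be made concrete with a two-line expected-value comparison (all numbers illustrative, not derived from any real matchup): an all-in only makes sense when its expected win probability exceeds the standard line's, and at a 100% standard win rate no gamble can clear that bar.

```python
def all_in_is_worthwhile(p_standard, p_allin_wins, lands_prob):
    """
    compare win probability of standard play vs an all-in gamble.
    the all-in wins with probability `p_allin_wins` if it lands
    (probability `lands_prob`) and loses outright otherwise.
    """
    ev_allin = lands_prob * p_allin_wins    # expected win probability of gambling
    return ev_allin > p_standard
```

for a player winning 40% of games, a 70%-to-land all-in that wins 90% when it lands is worth taking; for a player winning 100%, nothing is.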
imagine we employ this AI to create a game, and imagine that we specify game-breaking this way: game-breaking is when my AI loses. does this make sense? in games where the AI is playing against humans, it's clear that the AI easily recovers from positions that would not be recoverable for humans.
balancing an RTS with this formula gives an "OK" to a lot of scenarios that are very unreasonable when creating a game humans will play.
Other Questions
there are some proponents of IQ tests who argue that intelligence is whatever intelligence tests measure. this is erroneous (and narcissistic).
likewise, if we define our simulator to be optimal then whatever the simulator calls the best solution is what we call the best solution. this is erroneous.
by that definition, we deny the possibility that one could 'magically' intuit the correct solution (or that a better simulator exists).
if better solutions exist, how does our simulator make revisions? should we deem all solutions equally accurate? if not, how do we identify 'strong' solutions vs 'weak' solutions?