There is a branch of mathematics called 'Game Theory' which is concerned with strategic decisions, especially under uncertainty.
I am writing today to share some of these ideas and attempt to apply them to Starcraft.
There are two basic notions of optimality or "best play" in cases of uncertainty.
The first notion is "best response".
If I believe you're going to play a certain way (say, 4pool) and I know that some other strategy (say, early bunker) counters it, I can just play this other strategy.
The second notion is Nash equilibrium, which really just means "mutual best response".
In this kind of situation, neither player can change to another strategy to win more and both are "countering" the opponent as hard as possible.
A mutual best response may not always be obvious.
For example, in Rock-paper-scissors, if I play Rock and you play Paper, you're playing the best response to my strategy, but I have a better response to yours: Scissors.
No "pure strategy" (always playing a single move) mutual best-response exists in RPS.
Instead, mutual best response takes the form of a "mixed equilibrium" where I play Rock/paper/scissors each 1/3 of the time and so do you.
In this scenario, we both win/lose/draw 1/3 of the time and neither player can switch to a different strategy that improves his win-rate (unless the other player also switches strategy).
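To make the indifference concrete, here's a quick sanity check in Python (nothing Starcraft-specific, just the usual +1/0/-1 payoffs):

```python
# Quick sanity check that the 1/3-1/3-1/3 mix is a mutual best response
# in plain Rock-paper-scissors.  Payoffs are from my point of view:
# +1 = win, 0 = draw, -1 = loss.
payoff = {
    ("rock", "rock"): 0,      ("rock", "paper"): -1,     ("rock", "scissors"): 1,
    ("paper", "rock"): 1,     ("paper", "paper"): 0,     ("paper", "scissors"): -1,
    ("scissors", "rock"): -1, ("scissors", "paper"): 1,  ("scissors", "scissors"): 0,
}

opponent_mix = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}

# Expected payoff of each of my pure moves against the opponent's mix.
for my_move in ("rock", "paper", "scissors"):
    ev = sum(prob * payoff[(my_move, his_move)]
             for his_move, prob in opponent_mix.items())
    print(my_move, round(ev, 3))
# All three come out 0.0: no pure (or mixed) strategy beats the mix, and by
# symmetry the same holds for the opponent, so it's a mutual best response.
```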
The 1/3 1/3 1/3 mix is a product of the symmetry of the payoffs (win, lose or draw).
If instead Rock wins are twice as good as Paper/Scissors wins, some other mix will emerge as the mutual best response.
Working it out (keeping the game zero-sum, so a Rock win gains 2 and losing to Rock costs 2), the equilibrium is to play Rock 1/4 of the time, Paper 1/2 and Scissors 1/4, which is kind of interesting in that it's the counter to Rock that gets boosted; Rock itself actually gets played less, not more.
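Here's a small sketch that grinds out those indifference equations numerically, again assuming the zero-sum reading (a Rock win gains 2, losing to Rock costs 2):

```python
import numpy as np

# Weighted RPS where a Rock win is worth 2 and the other wins are worth 1.
# Rows/columns are ordered [Rock, Paper, Scissors]; entries are my payoff.
A = np.array([
    [ 0, -1,  2],   # Rock:     draws Rock, loses to Paper, beats Scissors for 2
    [ 1,  0, -1],   # Paper:    beats Rock, draws Paper, loses to Scissors
    [-2,  1,  0],   # Scissors: loses 2 to Rock, beats Paper, draws Scissors
])

# At the mixed equilibrium every move in the mix earns the same expected
# payoff (0, since the game is symmetric).  Two indifference equations plus
# "probabilities sum to 1" pin the mix down.
M = np.vstack([A[0], A[1], np.ones(3)])
b = np.array([0, 0, 1])
mix = np.linalg.solve(M, b)
print(mix)        # [0.25 0.5  0.25]
print(A @ mix)    # all (numerically) zero: no pure move beats this mix
```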
In a certain Starcraft matchup and situation, if I know that a certain build (say, center 2gate) will have a 60% winrate against my opponent's mix of strategies, and the other builds I might play have a lower winrate, I should just play that build (ie always center 2gate).
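The arithmetic behind "just play that build" is nothing fancier than a weighted average of winrates; a toy sketch, with build names and numbers invented purely for illustration:

```python
# My winrate with each of my builds against each of the opponent's builds.
# All of these numbers are made up; in practice you'd estimate them from
# replays or stats sites.
winrate = {
    "center 2gate": {"12 hatch": 0.60, "9 pool": 0.45, "overpool": 0.55},
    "forge FE":     {"12 hatch": 0.50, "9 pool": 0.40, "overpool": 0.50},
    "1gate core":   {"12 hatch": 0.45, "9 pool": 0.55, "overpool": 0.50},
}

# What I believe the opponent's current mix to be.
opponent_mix = {"12 hatch": 0.5, "9 pool": 0.3, "overpool": 0.2}

# Expected winrate of each of my builds against that mix; the best response
# is simply whichever build scores highest.
for build, vs in winrate.items():
    ev = sum(opponent_mix[b] * wr for b, wr in vs.items())
    print(f"{build}: {ev:.3f}")
# center 2gate comes out on top here, so "always center 2gate" is the best
# response -- until the opponent notices and shifts his mix.
```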
My opponent should react by playing things that counter 2gate more frequently (driving its winrate down) and things that counter my other builds less frequently (driving their winrates up).
As he does this, either I can play center 2gate less frequently and we approach some mutual best-response scenario (what poker players call "game theory optimal"), or I can try to get my opponent to counter the center 2gate "too often" and play some counter to THAT.
Poker players call it "chasing each other through donkeyspace" when 2 players are each playing in an exploitable way to try to maximally exploit the other player.
One interesting feature of mixed equilibria is that both players are actually indifferent among their individual options.
Being a mutual best response, my mix is in particular a best response to yours.
For that best response to include Rock, Paper and Scissors all with positive frequency, they have to have the same expected return (or in Starcraft, winrate); otherwise I'd just drop the worse options.
This gives us some help in finding least exploitable play.
If a certain mix of builds or strategies is least exploitable, then each build in the opponent's mix should have the same winrate against it (and each build not in his mix should have a lower winrate).
So to find least-exploitable play in a certain situation:
1) If a certain option is always best no matter what the opponent does, play it
2) Otherwise, randomize among your "good" options at frequencies such that the opponent's "good" options all have the same winrate against your mix and his "bad" options have a lower winrate
Then if you want to abandon least-exploitable play to try and punish some tendency of the opponent, you can.
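To see what step 2 looks like with actual numbers, here's a minimal two-build sketch (builds and winrates made up):

```python
import numpy as np

# Rows = my builds, columns = the opponent's builds, entries = my winrate.
#                  vs "2 hatch muta"  vs "lurker all-in"
W = np.array([
    [0.60,             0.40],   # my "center 2gate"
    [0.45,             0.55],   # my "fast expand"
])

# Step 2: pick frequencies so that both of the opponent's options give him
# the same winrate against my mix, i.e. my expected winrate is the same
# whichever column he chooses:
#   p*0.60 + (1-p)*0.45 = p*0.40 + (1-p)*0.55
diff = W[:, 0] - W[:, 1]              # [ 0.20, -0.10]
p = -diff[1] / (diff[0] - diff[1])    # frequency of my first build
mix = np.array([p, 1 - p])
print(mix)       # [0.333... 0.666...]
print(mix @ W)   # [0.5 0.5] -- same winrate whichever build he picks
```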
One final note:
Mutual best response involves each player playing as if the other player knows his strategy (not necessarily the choice he makes among random options, but the % mix he will employ).
If you want to play in a least-exploitable way, you should imagine that the other player knows your % mix of builds (or strategies, if the branching point comes later, like goon/obs/DT in PvP).
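One way to picture it: score any mix you might announce by the winrate it still guarantees when the opponent plays his best counter to it. The least-exploitable mix is the one that makes that guaranteed number as high as possible. A sketch, reusing the made-up 2x2 winrates from above:

```python
import numpy as np

# Made-up winrates again: rows = my builds, columns = his builds.
W = np.array([
    [0.60, 0.40],
    [0.45, 0.55],
])

def guaranteed_winrate(mix):
    # The opponent, knowing my mix, picks the column that is worst for me.
    return (mix @ W).min()

print(f"{guaranteed_winrate(np.array([1.0, 0.0])):.3f}")   # always build 1 -> 0.400
print(f"{guaranteed_winrate(np.array([0.5, 0.5])):.3f}")   # coin flip     -> 0.475
print(f"{guaranteed_winrate(np.array([1/3, 2/3])):.3f}")   # equilibrium   -> 0.500
# The least-exploitable mix is exactly the one that maximizes this number.
```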