There will always be a lot of talk about trends from a statistical perspective. People love to throw numbers around to support their argument, because numbers cannot lie, right? Well, yes, numbers themselves cannot lie, but with incorrect application or selection, they can paint an incomplete or incorrect picture. So all I want to do is clarify basic terms and statistical concepts for everyone, so that y'all know enough to intelligently understand and apply statistical tests to data.
Terms
People love to banter over what the various terms mean in statistics, because, in many cases, there is some wiggle room. What I will provide here are clear definitions of what they are supposed to mean, and how we can use them appropriately.
1) "Probability"
Probability, in general, comes from mathematical theories about how random events should behave given a large amount of data. What you should know is that "a probability" refers to the chance of an event relative to all possible events; for example, the number of heads out of total coin flips. So we would say the probability of getting heads when you flip a coin is .5. Probabilities are always between 0 and 1.
2) "Odds"
Odds are one of the most misused terms. Odds are a RATIO, whereas probability is a PROPORTION. A ratio is the chance of one event relative to another event, where the two events are mutually exclusive. The odds of flipping a coin and getting heads are 1 (1 chance at heads / 1 chance at tails = 1). Odds are always between 0 and positive infinity.
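To make the distinction concrete, here is a minimal Python sketch (the function names are my own, just for illustration) converting between the two:

```python
def probability_to_odds(p):
    """Convert a probability (a proportion between 0 and 1) to odds (a ratio)."""
    return p / (1 - p)

def odds_to_probability(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# A fair coin: the probability of heads is .5, so the odds of heads are 1.
print(probability_to_odds(0.5))  # 1.0
# A .75 probability corresponds to odds of 3 ("3 to 1").
print(probability_to_odds(0.75))  # 3.0
```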
3) "Mean"
The mean is the average value of all your data points, whether in a sample or population. This value can be dramatically skewed by extreme values at the high and low ends of your data set.
4) "Median"
The actual 'middle' point. Imagine you wrote down all your data points in a line and removed one from each end until you had 1 (or 2) left. The number left (or the average of the 2) is the median. It is often a more robust summary of the sample/population than the mean. When the median and the mean are statistically significantly (we will discuss this term later) different, your sample/population typically has a non-normal distribution.
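As a sketch of how an outlier pulls the two apart, using Python's standard library and some made-up win percentages:

```python
from statistics import mean, median

# Hypothetical win percentages for five players; one outlier skews the mean.
win_pcts = [50, 50, 50, 50, 95]

print(mean(win_pcts))    # 59 -- pulled up by the single 95
print(median(win_pcts))  # 50 -- unaffected by the outlier
```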
5) "Variance"
Imagine you rolled a die 12 times and got only 3s and 4s. The variance of that data set is low, because 3 and 4 are close together. If you roll a die 12 times and get each face twice, that data has a higher variance. Low variance suggests that the data is clustered around the mean. High variance suggests that the data is spread out. We can account for differing variances if we apply our test statistics correctly.
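The dice example above can be checked directly with Python's standard library (`pvariance` is the population variance):

```python
from statistics import pvariance

# Twelve die rolls: only 3s and 4s (clustered) vs. each face twice (spread out).
clustered = [3, 4] * 6
spread = [1, 2, 3, 4, 5, 6] * 2

print(pvariance(clustered))  # 0.25
print(pvariance(spread))     # ~2.92
```

Both data sets have a mean of 3.5, but the second is far more spread out around it.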
6) "Test Statistic"
The mathematical test or equation we will use to analyze the data. There are an incredible number of these, and selecting an appropriate one is one of the challenges of data analysis.
7) "Sample"
We can say that a sample, referred to as 'n', is a representative portion of the true population, where the population is, effectively, everyone. So if you took 100 players out of all the SC2 players, that would be your sample. If we select our sample correctly, it should accurately reflect our true population, with some exceptions. Importantly, the ONLY difference between a sample and a population for statistical analysis is in the names given to the variables representing the variance and mean. Almost all statistical tests CAN STILL apply even if you know the entire population, because you are not just examining whether the data can represent a population but also whether the data is consistent with a given distribution: normal, random, or non-random, for example. If our sample is too small, our tests lose Power and test statistics often cannot provide us statistically significant results.
8) "Null hypothesis"
A null hypothesis is what we expect to be true (or what should be true, in some cases). For example, that all three races in SC2 are equally powerful and this would be represented by equal success, in terms of ELO, win percentage, etc. A simple example is the null hypothesis that probability of getting heads when flipping a coin should be .5.
9) "P-value"
A p-value is the cornerstone of statistical analysis. What a p-value (not alpha) refers to is the probability that, if the null hypothesis were true and you collected an analogous set of data (concerning the same topic, within a similar range, and with a similar sample size), your test statistic would find a difference AS or MORE extreme than the one you observed.
10) "Power"
The ability of a test statistic to detect a difference between the sample(s) and the population at a given null hypothesis, expected difference, and sample size, assuming a difference exists. Ways to increase power include increasing your sample size, expecting a larger difference between your sample(s) and the population, or accepting a higher alpha.
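To illustrate the sample-size effect, here is a rough Python simulation. The specifics are my own assumptions for illustration: a coin whose true heads probability is .6, tested against a null of .5 with a normal-approximation z-test.

```python
import math
import random

def z_test_p_value(heads, n, p0=0.5):
    """Two-sided p-value for a coin with null P(heads) = p0, via normal approximation."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (heads / n - p0) / se
    # Standard normal CDF, built from math.erf.
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

def estimated_power(n, true_p=0.6, alpha=0.05, trials=1000, seed=42):
    """Fraction of simulated samples of size n that reject the null at the given alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        heads = sum(rng.random() < true_p for _ in range(n))
        if z_test_p_value(heads, n) < alpha:
            rejections += 1
    return rejections / trials

# A .6 coin is much easier to distinguish from a fair coin with 200 flips than with 25.
print(estimated_power(25))   # low power
print(estimated_power(200))  # much higher power
```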
11) "Statistical significance"
The bombshell. This refers to whether or not, given an appropriate test statistic, the examiner can make a mathematically supported conclusion to reject the null hypothesis. We can ONLY reject the null hypothesis or say there is not enough evidence to reject the null hypothesis. We can never know the 'truth' so what we have to settle for is whether or not our test statistic gave us a different answer than expected. If the combination of sample, expected result, and data lack sufficient power, finding statistical significance from your data is less likely or can even be impossible.
12) "alpha"
Alpha is set to .05 by convention, which means that, if we were to run this test on new data again and again, 5 percent of results would NOT contain the value of the true population. What this means, in reverse, is that if our test statistic provides us statistically significant results, we can say that there is a 95% chance that our results contain the true mean, where our results are a value and a 95% confidence interval (a description of which I will add later; it is complicated and rarely necessary outside of publications).
13) "confounder"
Any factor that might account for your result other than what you are testing. If you were to have Idra and I play 100 games, and I won 50, you might conclude we are equally skilled. If he were drunk at the time, or if I cheese'd him every game, that might be a confounder. These can be explained and controlled for if you are careful in your analysis, experiment, and data collection.
14) "bias"
Any factor that skews the data of your sample and eliminates its ability to be 'externally valid,' meaning able to adequately reflect the true population. Bias may occur in selection, testing, data collection, pretty much anywhere. Common biases might be an imbalanced map pool or a tournament's unequal race or matchup distribution. The big one you will have to deal with is non-random sample selection. If you want to examine Terran imbalance in the whole population, you cannot look at just the top percentage, because you are actually forcing a bias onto your sample. You can never actually be sure that a non-randomly selected sample reflects your population. http://en.wikipedia.org/wiki/Bias_(statistics)
Data Sets and Appropriate Statistical Tests
I will not be able to explain all of these in sufficient detail, but what I can do is explain which ones you can use for what kind of data. These will be the most common tests we can apply in RTS.
1) The Student's t-test
For a one-sample data set, like if I were to play Idra in a series, we can examine whether or not the results are consistent with an expected result. If we have a single matchup with a given sample size and expected result under a single paradigm, we can use a one-sample t-test to compare the data we have against an expectation. For example, our null hypothesis could be that the map pool is balanced ZvT. We could look at the win percentages for ZvT over all the maps, compare those to the expected .5 result across all maps, and we would be able to find a t-value. http://en.wikipedia.org/wiki/Student's_t-test#Independent_one-sample_t-test
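As a sketch, here is the one-sample t statistic computed by hand in Python, using made-up ZvT win rates for five maps:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """t = (sample mean - hypothesized mean) / (sample std dev / sqrt(n))."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / sqrt(n))

# Hypothetical ZvT win rates across five maps, against a null of .5 (balanced).
zvt_by_map = [0.48, 0.52, 0.55, 0.47, 0.58]
t = one_sample_t(zvt_by_map, 0.5)
print(t)  # ~0.96 -- look this up in a t-table with n - 1 = 4 degrees of freedom
```

A t-value this small is nowhere near significance at alpha = .05, so with this (made-up) data we could not reject the null hypothesis that the pool is balanced.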
For a two-sample data set, like if I wanted to compare whether Terran's success is significantly different from Zerg's, we can see if two populations are actually different from one another. In most cases, the size and variance of each sample (the Terran results and Zerg results, individually) will be different, and we can account for that with proper statistical understanding. We would set our null hypothesis to be that Terran and Zerg have equal win rates, variances, and distributions, and examine the rest of the data based on that. http://en.wikipedia.org/wiki/Student's_t-test#Independent_two-sample_t-test
Our results will be in the form of a t-value, which can be converted to a p-value with a table.
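For the two-sample case, one common variant that does not assume equal variances is Welch's t statistic; here is a sketch with made-up per-tournament win rates for each race:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic; does not assume equal sample variances."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Hypothetical per-tournament win rates for Terran and Zerg.
terran = [0.55, 0.60, 0.52, 0.58]
zerg = [0.48, 0.45, 0.50, 0.47]
print(welch_t(terran, zerg))  # ~4.3
```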
2) The chi-squared test or "goodness of fit"
This test can be used to examine whether or not a series of data points conforms to an expectation. For example, if we want to examine whether or not all three races have equal win percentages, this is most appropriate. Going back to our last example, if we want to prove Terran is imbalanced, we would need to show that it varies statistically significantly from the appropriate 'goodness of fit' model with regard to not just Zerg but Protoss as well, and that, given the whole dynamic, there is a demonstrable difference. Importantly, this CANNOT account for population variance the way appropriate two-sample tests can. This means that it may provide a statistically conclusive result that is still not enough to actually make the conclusion with any external validity, meaning that it represents the population correctly. http://en.wikipedia.org/wiki/Pearson's_chi-square_test
This will give us a chi-squared value, which can be used to determine a p-value from a table.
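As a sketch of the arithmetic, with made-up win totals for the three races out of 1000 games:

```python
def chi_squared(observed, expected):
    """Pearson's chi-squared statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical wins per race out of 1000 games, vs. equal expected wins.
wins = [340, 310, 350]
expected = [1000 / 3] * 3
print(chi_squared(wins, expected))  # ~2.6, with 3 - 1 = 2 degrees of freedom
```

With these (made-up) numbers, a chi-squared value of about 2.6 at 2 degrees of freedom is well short of significance at alpha = .05, so we could not reject the null hypothesis of equal win rates.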
I will add more to this as time goes on and demand increases.
Conclusions
The rule is this... You need to correctly select a test, apply it correctly, and then understand its limitations with regard to your data. Even if you pick the 'right' test and have adequate data, you cannot conclude anything but one specific result from any single test. I could show that Terran outperforms Zerg by a two-sample t-test, but Protoss' success is a confounder. I could show that Terran outperforms both Zerg and Protoss in terms of mean win percentage, but this would not take into account sample variance, meaning a few extremely well performing Terrans could skew the mean (a confounder). No test is perfect, so be open to the fact that your data is not conclusive. It rarely will be. The correct response is to explain why your data is adequate, why the confounders aren't actual confounders, why the possible bias is not actual bias, etc.
If people ask for specific explanations, I am happy to provide them. I hope this offers some clarity into the nature of statistics and statistical discussion.
Cheers!