Ladder Anxiety is a term from the Starcraft community for distress caused by playing ranked ("ladder") games, and the resulting inclination to avoid them. It is usually purely emotional, but can include physical symptoms such as cold extremities, rapid breathing, and fatigue. The advice given in response alternates between folksy relaxation techniques and exhortations to "man up", yet the anxiety remains a common reaction -- and exactly the reaction a game developer should not want. I have seen the same discussion come up in the League of Legends community, which suggests it is not isolated to Starcraft, and I suspect the plague of poor sportsmanship that infects online gaming shares a contributing factor with ladder anxiety -- the negative emotional response that manifests in some people as ladder anxiety manifests in others as a desire to insult opponents, make excuses, gloat, or otherwise behave in a "toxic" manner.
A feature in a game which motivates a person to avoid playing is an error of design. The common community response to this complaint makes the assumption that something is wrong with the player, not the game. I disagree. I think that, with some small changes that take into account human nature, game developers can create a much larger population of satisfied, competitive gamers. They'd have a better ladder system, too.
What is a rating system?
Put simply, a rating system is a method of creating a dynamic, ongoing hierarchy among a group of competitors. Based on previous performance, players are given a numerical rating which represents their skill level and is continuously updated after each match based on the result and the skill level of their opponent. The primary benefit of such a system to a competitive online game is matchmaking -- it's little fun for novices to play against experts or vice versa, and being able to play an evenly skilled opponent with the click of a button adds greatly to the value and longevity of a competitive game.
But this benefit is largely unappreciated by the mass of gamers, who value rating systems solely for quantifying skill. Instead of using the rating to find appropriate opponents or to aid the learning process, they treat it as an appraisal of self-worth. Players lose interest in having fun or learning the game and focus entirely on increasing their rating -- not their skill level, but their rating. It is exceedingly common for people to take rating drops caused by misfortune (such as technical troubles) as a personal slight. This even extends past the realm of actual misfortune and into taking offense at perfectly valid behavior, such as an opponent employing tactics they consider "unfair". In truth, being rated below your skill level is little more than an inconvenience -- you'll be matched with easier opponents until your performance raises your rating to an appropriate level.
A skill rating is not a posterboard of gold stars, and the goal of a competitive hobby is not to accumulate rating points like coins in Super Mario Brothers. Regardless of how precise the now famous "10,000 hour rule" is, acquiring skill in a difficult activity is a long, hard road. The unfortunate fact is that an improving player in a truly accurate, honest rating system can expect his rating to increase at roughly the rate people actually improve: slowly.
Primer on Probability
Even though this is perhaps too elementary for the people likely to be reading this kind of article, I'm going to cover it anyway to ensure everyone is on the same page. Take a coinflip as an example -- a random event with two equally likely outcomes (50/50). Even though there are only two outcomes and they should occur equally often, flipping a coin four times will not necessarily produce a string of alternating results (heads, then tails, then heads, then tails). A person flipping a coin could easily see heads all four times and mistakenly conclude that the coin is flawed or has a head on both sides. Despite the simplicity of a coinflip, it would take a large number of trials to be confident that the observed results accurately reflect the truth. Not only are human perception and memory simply unable to handle that amount of data, but our brains are built for prediction. This makes humans remarkably bad at making sense of a series of events whose outcomes are subject to randomness -- at least when they must rely on subjective experience. We fare far better when we rely on Excel.
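The point about streaks deserves emphasis, and a short simulation makes it concrete (Python here, purely for illustration): even a perfectly fair coin produces long runs of identical results far more often than intuition suggests.

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical results in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)  # fixed seed so the experiment is repeatable
trials = 10_000
# Flip a fair coin 20 times per trial; count the trials that contain
# a streak of four or more identical results somewhere in the sequence.
hits = sum(
    longest_streak([random.random() < 0.5 for _ in range(20)]) >= 4
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of 20-flip sequences contain a streak of 4+")
```

Most 20-flip sequences contain at least one such streak, yet a player who hits four losses in a row will rarely shrug it off as ordinary chance.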
Ratings are meant to reflect the probability of one player defeating another: two players with the same rating should win equally often, while a higher-rated player should win more often. More often, not always. This means that losing regularly is expected. Not only are individual losses unavoidable, but streaks are as well, bouncing your rating up and down around where it's meant to be. This is called variance. While chance will come close enough to evening out over the long run, a person experiencing these results in real time can't help but misinterpret them. A streak of victories is taken to represent a surge in ability; a streak of defeats is taken as an unfortunate injustice or a personal failure.
Neither is accurate. If you were to wake up one day marginally better than your current rating suggests, your performance would not produce win after win in a direct path toward your new, true rating. Instead, you would win slightly more often than before until your rating reflected your newfound skill.
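The mapping from rating difference to win probability is usually expressed with Elo's logistic expected-score formula, which can be sketched in a few lines (the 1500 baseline and 400-point scale are the conventional chess values, used here as an assumption):

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expected score: the probability that A beats B
    (with draws counted as half a win), on the usual 400-point scale."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Equal ratings mean a coin flip; a 100-point edge makes
# the stronger player roughly a 64% favorite -- not a lock.
print(expected_score(1500, 1500))              # 0.5
print(round(expected_score(1600, 1500), 2))    # 0.64
```

Note how flat the curve is near the middle: even a clearly stronger player is expected to drop a third of their games, which is exactly why streaks of losses are routine rather than alarming.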
Loss Aversion
People are irrational. This shouldn't be news to anyone. One of these universal irrational tendencies is a psychological principle called loss aversion. Put simply, humans place more value on avoiding the loss of something than they would on acquiring that same thing in the first place. For example, a typical person will be more upset by losing $20 than they would be happy upon finding $20. This preference affects our behavior, and it is so powerful that it can lead to decision-making which is quite ludicrous when analyzed objectively.
What does this have to do with rating systems?
Let's look at the player experience of a rated game in either Starcraft II or League of Legends. While the two use different systems, each gives players a rating which increases after each win and decreases after each loss. Not only is the rating adjusted, but for the sake of clarity the change is displayed quite prominently to each player, alongside the new rating, on the statistics screen that follows each match. This feedback is intended to be motivational: the joy of gaining points synergizes with the natural joy of winning, increasing one's desire to play. Upon losing, however, the player has the exact opposite response. Because of loss aversion, these two possibilities do not cancel each other out -- losing is the more significant factor.
In an accurate rating system, hitting the "Play" button matches you up with a person who is approximately your skill level. This means someone against whom your chances of winning should be as close to 50-50 as possible. If you win, you will be rewarded with points. If you lose, you will be punished and have points taken away from you. Even though you will win and lose about the same number of points in the long run (assuming your skill remains static), psychologically you put more value on the points you currently have than any points you may win. Losing ten points hurts more, a lot more in fact, than winning ten points feels good. From an emotional perspective, this is a losing proposition. Unless your victories will substantially outnumber your losses (which should not happen except for the few very best players), participation is a recipe for misery. If you're in the business of cooking up fun, this isn't something you want in the oven.
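A toy calculation makes the asymmetry concrete. The loss-aversion coefficient of 2 below is an assumed, commonly cited ballpark, not a measured value; the point is the sign of the result, not its magnitude:

```python
# Assumed loss-aversion coefficient: losses are felt about twice
# as strongly as equivalent gains (a rough prospect-theory figure).
LOSS_AVERSION = 2.0

def felt_value(points):
    """Subjective value of a point change: losses carry extra weight."""
    return points if points >= 0 else LOSS_AVERSION * points

# A fair 50/50 match worth +/- 10 rating points:
expected_points = 0.5 * 10 + 0.5 * (-10)
expected_feeling = 0.5 * felt_value(10) + 0.5 * felt_value(-10)
print(expected_points)    # 0.0  -- the rating math is perfectly fair
print(expected_feeling)   # -5.0 -- the experience is not
```

The rating system breaks even; the player does not. Under any loss-aversion coefficient greater than 1, the expected emotional value of an evenly matched game is negative.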
Ironically, instant feedback is misleading
The Elo rating system (the most popular and widespread, though nearly always with adaptations) was adopted by the USCF in 1960 as an improvement over the rating system previously in use. This was long before personal computers were everywhere, and one of Elo's strengths is that it relies on relatively simple arithmetic. That mattered because rating adjustments were done by an actual person (perhaps with a calculator) thumbing through sheets of chess results after a tournament. Logistically, this meant that rating adjustments were neither instantaneous nor made for individual matches. Waiting for the results to be processed blunts the negative response, since the material loss -- the decrease in points -- is not immediately attached to the emotional impact of losing. Handling an entire tournament in one adjustment meant that the change in rating reflected not a single game but the handful played across the event. Processing games in batches also makes for less variance in each adjustment -- ratings do not bounce all over the place, and each change carries more significance.
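To show just how simple that arithmetic is, here is a sketch of a single-game Elo adjustment (K=32 is a common choice for developing players, used here as an assumption; federations vary it by rating and experience):

```python
def elo_update(rating, opponent, score, k=32):
    """One-game Elo adjustment: new rating = old + K * (actual - expected).
    `score` is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    return rating + k * (score - expected)

# Against an evenly matched opponent (expected score 0.5),
# a win or a loss moves the rating by exactly +/- K/2 = 16 points.
print(elo_update(1500, 1500, 1))   # 1516.0
print(elo_update(1500, 1500, 0))   # 1484.0
```

One multiplication and one addition per player -- easy for a tournament official with a calculator, which is a large part of why the system spread.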
This is where the irony comes in. Many people would assume that the more immediate and clear the feedback, the more reliable and helpful it will be. With ratings, this isn't the case. Showing the adjustment for each individual game is a lie. Once your rating has been established in an accurate system and you are paired against similar opponents, you are meant to lose from time to time. Often, in fact! About as much as you win! The outcome of one or two games carries little, if any, statistical significance. However, this is lost on anyone without a solid understanding of both the rating system and statistics -- which is almost everyone. All they see is a mean robot taking away gold stars every time they lose a game.
What's the solution?
Simple: batch adjustments. Ratings (once established) would work better if they changed on, say, a weekly basis rather than game by game, and it would be best to update even less frequently for players who do not play many games.
This idea is actually built into the Glicko rating system and called the "rating period". Any matches that take place during one rating period are to be considered simultaneous, so changes are made based on chunks of games rather than one by one. The official paper outlining the Glicko system recommends a rating period which has, on average, 10-15 games per player.
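A simplified sketch of batching in that spirit might look like the following. This uses Elo-style arithmetic for clarity, not the actual Glicko math (which also tracks a rating deviation per player); treating the period's games as simultaneous means every expected score is computed against the rating held at the start of the period.

```python
def batch_update(rating, results, k=32):
    """One adjustment for a whole rating period (a batch of games).
    `results` is a list of (opponent_rating, score) pairs, with score
    1 for a win, 0.5 for a draw, 0 for a loss. All expected scores use
    the pre-period rating, so the games count as simultaneous.
    Elo-style sketch only -- not the real Glicko formulas."""
    surplus = sum(
        score - 1 / (1 + 10 ** ((opp - rating) / 400))
        for opp, score in results
    )
    return rating + k * surplus

# A week of ten evenly matched games -- six wins, four losses --
# collapses into one modest upward adjustment instead of ten swings.
week = [(1500, 1)] * 6 + [(1500, 0)] * 4
print(batch_update(1500, week))   # 1532.0
```

The player sees a single number move once a week, in a direction that summarizes real performance, rather than ten jolts of feedback dominated by noise.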
While removing instant adjustments is an improvement to a rating implementation, it isn't necessarily an improvement in overall game design. Instantaneous feedback is an important part of game design. The problem is that losses are not just unavoidable -- a good matchmaking system guarantees they will happen about half the time. Punishing an unpreventable event (by taking points away) is akin to alternating randomly between smacking a puppy on the nose and giving it a treat -- the natural response is fear and anxiety. The idea that a player just needs to perform better to obtain the reward (and avoid the punishment) doesn't hold, since better performance will only result in more difficult opposition.
Instead, developers need to find other ways to integrate feedback and rewards into the competitive gaming process that bear in mind the reality of participating in a competitive hobby. Even for the best of us, losses happen and are obviously a suboptimal result. No one likes to lose, even if they've grown to accept it. Rubbing it in by taking points away does not accomplish anything. The satisfaction derived from this kind of hobby comes from ongoing self improvement and testing yourself, and the design should encourage this aspect of the experience.