Now, unlike with rocketry, which is directly relevant to my line of work, I wouldn't claim much depth of expertise in AI. I did some academic study in the area in my school years, and it has had intermittent relevance to several projects I've worked on. I know far more about AI than the average person or hobbyist, but I also know for a fact that there are others on this very website whose AI expertise dwarfs my own. Still, I'm confident enough in my own knowledge to try to give a good overview of what the field entails.
This one is going to be a one-off blog post; unlike with the rocket blogs, I don't know enough about AI to write a whole series and still have interesting things to say by the end of it. But for those who are interested in AI and don't yet know enough to get started, I hope this will be a helpful springboard towards further study. If nothing else, perhaps it will help you appreciate what goes into making those Starcraft bot tournaments. In any case, I hope you enjoy.
Introduction
A deceptively difficult question to answer is: what is artificial intelligence (AI)? A short, and technically correct, answer is that it is the discipline concerned with getting computers to accomplish tasks that traditionally require human intelligence. If you find that definition frustratingly vague, you're not alone - that's just a reflection of how difficult it is to pin down what AI work is actually about.
An interesting and important phenomenon in AI work is known as the AI effect - in a nutshell, once computers can do a task that previously required human intelligence (usually by cruder means than humans use), the public discounts that task as "not actually AI." In a way, this effect has followed the development of the entire discipline of computer science from its foundations - and it should be no surprise that the early pioneers of computer science were the very same folk who are considered the pioneers of artificial intelligence. Seen this way, AI is the field that focuses on not-yet-developed sub-disciplines of software which, once developed, expand the scope of what software is capable of accomplishing.
Within the field itself, this ambiguity about the scope of AI has always been a problem. The best, and most common, framing of what AI actually covers was codified relatively recently in one particularly important textbook: Artificial Intelligence: A Modern Approach. UC Berkeley's CS 188, a class taught from that book, is one of the most widely used introductions to the field. For anyone who wants a meaningful course of study in AI, those two resources are just about the best place to get started. The class is free, and as for the book - buy an old edition, use an online copy, and so on - by no means should money be a limiting factor here.
My only problem with the materials listed above is that when everyone uses the same teaching tools, there's an unfortunate tendency towards groupthink. There are folk who were in AI well before this book came out and have their own perspective, and there are folk who only stumbled into the field after it came out and never knew any other way. I am of the latter kind, and I won't pretend my insights into the field are all that unique - I simply never became enough of an expert in AI to provide any; I have merely done enough to become familiar with the field and to complete a few AI-related projects in my line of work. I am sure there are others here who know more, given that we do have an AI scene here on TL itself.
The aggregate contribution of artificial intelligence to science and technology is ubiquitous; anyone living in the modern world, and especially anyone working in a technical field, can attest to the increasing importance of software in their life - and the development of software largely follows advancements in AI. Like any field with such promise, the industry has many sorts: brilliant thinkers, gigantic egos, clever hucksters, political rent-seekers, interested investors, dreamers, philosophers - you name it. If history is any precedent, much of what is promised will never come to fruition, but what does materialize will still be a critical contribution to the field.
The rest of this blog post is essentially in two parts. In the first part, I'll go over a few of the most important concepts in AI - environments, knowledge and decision-making, and machine learning. In the second part, I'll talk about three of the most popular pursuits within the field - social media, gaming, and self-driving cars. There is, of course, no way I can do justice to every aspect of the field in a single blog post - but for those who want to know more, there are plenty of high-quality, freely available resources online; the above link is a great place to start, and it points to a huge number of further resources for study. Even research papers are generally quite accessible if you're sufficiently interested in learning more about AI.
Fundamentals: Environments
For any AI, you have to know what kind of environment it's operating in before you can figure out how it's supposed to do its job. Only once you define the conditions under which the AI operates can you even begin to devise a scheme by which it can do what it's meant to do. And of course you don't always have access to complete knowledge about the environment itself - so you also have to know how you are perceiving that environment.
Environments are far-ranging - they can be as simple and straightforward as a chess board, or as complicated as the road on a busy city street or a conversation with a real human being. They can also be any number of things you wouldn't normally consider to be tied to AI, such as the innards of a database of emails or the contents of a message board. If you're developing a method as a proof-of-concept for a certain AI problem-solving tactic, you might even want to construct an artificially simple environment - such as the Wumpus World pictured below.
The monster is smelly, the pits are breezy, and you are a blind man who wants gold.
In some environments, a chessboard for example, everything you want to know is right before your eyes - there is nothing you need to know beyond what's on the board in front of you. For others, such as Starcraft, things are a bit more complicated: you can't see everything at once, your "optimal" strategy depends a lot on what your opponent does, the environment isn't turn-based, what you think is true at one point has no guarantee of being true later, and so on. And in many real-life scenarios, where you can't directly perceive a code-based reality such as a video game and instead have to deal with sensors such as audio and video inputs, you get further complications. Each additional complexity makes it harder to build good AI solutions, and it should be no surprise that the latter problems are considered more difficult than the former. Every one of these complications has to be addressed - and in fact each one opens up its own sub-discipline of AI. You want to simplify the world you have into something you can work with - and that's a whole job in and of itself. Once you have that, you can start distilling it down into a strategy for solving your specific problem. And the mother of all those strategies is of course the fabled general AI, the computer-based thinking machine that will be on par with the general intelligence of humans.
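To make the idea a bit more concrete, here's a minimal sketch of the agent-environment loop that underlies most AI formulations - the environment hands the agent what it can perceive, the agent picks an action, and the loop repeats. This is a toy of my own invention (a number-guessing game), not code from any real AI library, but the same skeleton scales up to chess, Starcraft, or a self-driving car.

```python
# A minimal, hypothetical agent-environment loop; the class and method
# names here are illustrative, not taken from any particular AI library.

import random

class GuessingEnvironment:
    """A tiny environment: a hidden number the agent has to find."""
    def __init__(self, low=1, high=100):
        self.low, self.high = low, high
        self.secret = random.randint(low, high)

    def percept(self):
        # The agent only sees the legal range, not the secret itself,
        # so the environment is partially observable from its point of view.
        return (self.low, self.high)

    def act(self, guess):
        # Feedback the agent can use to update what it knows.
        if guess == self.secret:
            return "correct"
        return "too low" if guess < self.secret else "too high"

class BinarySearchAgent:
    """An agent that narrows down its belief with each new percept."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def choose_action(self):
        return (self.low + self.high) // 2

    def update(self, guess, feedback):
        if feedback == "too low":
            self.low = guess + 1
        elif feedback == "too high":
            self.high = guess - 1

env = GuessingEnvironment()
agent = BinarySearchAgent(*env.percept())
while True:
    guess = agent.choose_action()
    feedback = env.act(guess)
    if feedback == "correct":
        print("Found it:", guess)
        break
    agent.update(guess, feedback)
```

The agent never sees the hidden number directly - it only narrows down what it knows from the feedback it gets, which is the same basic game every partially observable environment forces you to play.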
Fundamentals: Knowledge and Decision-Making
So, now that we have the environment down, what do we do with it? The short answer is that we have the AI do stuff in it. The "how" depends very much on what the environment tells us about what needs doing and what doesn't. In general, the gist is that the AI has to make some sort of decision - how fast to drive, what move to make in a game of chess, when to stim marines, which emails to highlight as important, and so on.
In principle, that's a fairly straightforward idea - you only have so many ways you can act, and you just have to pick the best actions under those constraints. In reality, finding that "best" way is a computational nightmare. Even in a game as relatively simple for computers as chess, there are generally dozens of possible moves, dozens of possible responses to each of those moves, dozens of possible responses to each of those in turn - ad infinitum. Out of all those possibilities, which move should you make right now? Even the best modern computers can't go through every possible continuation to figure out which one is best; that is an exponentially growing list of possibilities that simply is not feasible to analyze. To get to the point where you can actually play a game of chess, you have to reduce your consideration to only a select few of the most promising moves.
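To put some rough numbers on that: chess positions are commonly said to offer around 35 legal moves on average, so the number of continuations grows as roughly 35 to the power of the search depth. A quick back-of-the-envelope calculation (the 35 is a ballpark figure, not a precise constant):

```python
# Rough illustration of the exponential blow-up in game-tree search.
# 35 is a commonly cited average branching factor for chess; treat it
# as a ballpark figure, not a precise constant.
branching_factor = 35
for depth in range(1, 9):
    print(f"{depth} moves deep: ~{branching_factor ** depth:,} positions")
# By 8 half-moves deep we're already past two trillion positions,
# which is why exhaustive search is off the table.
```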
The way decisions get made, essentially, is to throw out as many nonviable ideas as you can within the constraints of the environment you've defined, and then search whatever is left for the best remaining option. Easier said than done, of course - the computation is still often pretty tough (Go, for example, is a different beast than chess, despite having much the same kind of environment, simply by virtue of being more complex computationally), and knowing what you can safely ignore is an art and a science in and of itself. But through a combination of brute computation and such intuition-like shortcuts, the field of AI moves forward.
Same general idea, but a whole different beast computationally.
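The classic embodiment of "throw out what you can and search the rest" is minimax search with alpha-beta pruning. Here's a sketch of it on a deliberately tiny take-away game rather than chess, just so it stays self-contained and runnable; for a real game you would swap in actual move generation and a heuristic evaluation, but the pruning idea is the same:

```python
# Minimax with alpha-beta pruning, demonstrated on a deliberately tiny game
# ("take 1-3 stones, whoever takes the last stone wins") so the whole thing
# is self-contained. For chess you'd swap in real move generation and a
# heuristic evaluation, but the pruning logic stays the same.

def alphabeta(stones, alpha, beta, maximizing):
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return -1 if maximizing else 1
    if maximizing:
        best = float("-inf")
        for take in (1, 2, 3):
            if take > stones:
                break
            best = max(best, alphabeta(stones - take, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return best
    else:
        best = float("inf")
        for take in (1, 2, 3):
            if take > stones:
                break
            best = min(best, alphabeta(stones - take, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break  # prune: we already have a better option elsewhere
        return best

# Score each opening move from 10 stones: +1 means a guaranteed win with best play.
for take in (1, 2, 3):
    score = alphabeta(10 - take, float("-inf"), float("inf"), False)
    print(f"Take {take} first: value {score}")
```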
In the context of AI, knowledge is information that you know - or assume - to be true about the environment you're in. If, for example, you know that certain chess moves are illegal, you don't make them. If you know crashing in a car is bad, you won't try to drive in ways that cause crashes. This can be codified in many ways depending on the application you're working on - as a series of logical statements that the AI must follow, as heuristic rules that the AI obeys, as an incentive that internally rewards certain behavior, and so on.
In short: put your information together so you know which decisions you definitely won't be making, and your job becomes a whole lot easier - maybe even possible!
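As a toy illustration of that pruning, here's a little sketch in the spirit of the Wumpus World from earlier - it's my own simplified version of the rules, not code from any textbook, but it shows how a single piece of knowledge ("no breeze means no adjacent pits") immediately rules some moves in and leaves others unjustified:

```python
# A toy example of knowledge pruning decisions, loosely in the spirit of the
# Wumpus World pictured earlier: a visited square with no breeze means none of
# its neighbors contain a pit, so those neighbors are provably safe to enter.
# The grid, coordinates, and rules here are my own simplification.

def neighbors(square):
    x, y = square
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

# Knowledge gathered so far: squares we've visited, and whether we felt a breeze there.
visited = {(0, 0): False, (1, 0): False, (0, 1): True}  # True = breeze felt

# Rule: no breeze on a square => no pit on any adjacent square.
provably_safe = set()
for square, breeze in visited.items():
    if not breeze:
        provably_safe |= neighbors(square)

# Suppose we're standing on (0, 1) and deciding where to step next.
candidate_moves = neighbors((0, 1)) - set(visited)
print("Safe moves we can commit to:", sorted(candidate_moves & provably_safe))
print("Moves we can't yet justify:  ", sorted(candidate_moves - provably_safe))
```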
Fundamentals: Machine Learning
In the modern day, the subfield of machine learning is almost indistinguishable from the idea of artificial intelligence in general. That's not strictly true, but a cursory view of the field will quickly show you why people conflate the two. Essentially, machine learning is concerned with building computers that get better at a task with more practice - learning, if you will. This can lead to emergent behavior: behavior that was never explicitly programmed into the computer, but that the system settles on, after sufficient practice, as the best known way to solve the task. This basic concept mimics a lot of the learning power that humans themselves have, so it shouldn't be a surprise that it's an important area of research. Only, instead of the biological means that humans use, AI learns with the help of mathematics - probability, in particular.
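As a tiny example of "getting better with practice," here's a sketch of my own (all the numbers are made up) of an agent learning which of two slot machines pays out more often, purely from trial and error - a stripped-down version of what's known as a bandit problem:

```python
# A toy "learning from practice" example: a simple epsilon-greedy agent
# estimating which of two actions pays off more, purely from experience.
# All the numbers here are made up for illustration.

import random

true_payout_probs = [0.3, 0.6]   # unknown to the agent
estimates = [0.0, 0.0]           # the agent's running estimates
counts = [0, 0]
epsilon = 0.1                    # how often the agent explores at random

for step in range(10_000):
    if random.random() < epsilon:
        action = random.randrange(2)               # explore
    else:
        action = estimates.index(max(estimates))   # exploit the best guess so far
    reward = 1 if random.random() < true_payout_probs[action] else 0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned estimates:", [round(e, 2) for e in estimates])
# With enough trials these approach 0.3 and 0.6, and the agent ends up
# favoring the better machine - behavior nobody explicitly programmed in.
```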
The field of machine learning is in many ways just a fancy form of statistics. The tools are the same - probability theory, data collection, and data analysis - only applied to the similar, but not identical, problem of determining desired behavior (as opposed to the statistical problem of making sense of data). It draws a lot of inspiration from Bayesian statistics - which I have previously described in this post. Although that post discusses the idea in the context of elections rather than AI, the fundamental concept is the same: make assumptions, get data, and improve your working assumptions based on the results you get.
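In that same "assume, observe, update" spirit, here's a tiny Bayesian sketch - the standard textbook coin-flip setup with a Beta prior, chosen for simplicity rather than anything specific to that post:

```python
# A tiny Bayesian update: prior belief + observed data -> posterior belief.
# Uses the standard Beta-Binomial conjugate pair; numbers are illustrative.

# Prior: Beta(2, 2) - we mildly expect the coin to be roughly fair.
alpha, beta = 2, 2

# Data: we observe 8 heads and 2 tails.
heads, tails = 8, 2

# Posterior: with a Beta prior and coin-flip data, updating is just addition.
alpha_post, beta_post = alpha + heads, beta + tails

prior_mean = alpha / (alpha + beta)
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"Prior estimate of P(heads): {prior_mean:.2f}")         # 0.50
print(f"Posterior estimate of P(heads): {posterior_mean:.2f}")  # ~0.71
```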
In a nutshell, machine learning is a way to deal with uncertainty, whatever form that uncertainty may take. It may be uncertainty in your knowledge about the world, or in your ability to perceive it, or in truly random events themselves. In any case, you won't be able to attain perfection - but if your learning is done effectively, you will be able to replicate some of that intuitive "knack" that humans develop over a lifetime of learning of their own.
Applications
There is a wide range of applications in which some form of AI is used; it would be nigh impossible to cover them all. I will be focusing on just three.
The first we're going to talk about is social media - your Facebook, Twitter, and to some extent even your Google and your news feed. This one, broadly speaking, deals with people and what they do, informed by what you can mine out of what they post and view online. Conceptually it might seem almost boring, if not for the fact that this is the subfield of AI that really brings home the bacon. When you have a good handle on people and their habits, what is that useful for? Advertising, of course! It should be no surprise that the major players in AI are almost exclusively companies that make their fortune primarily off of internet advertising - that's where the money in this kind of work comes from.
The big blue-sky AI idea of the future that gets the most focus these days is, of course, self-driving cars. In principle, the idea isn't too bad - stay on the road, follow well-defined rules of traffic, and you start to have a working self-driving car. There are already working prototypes doing test drives on real roads. So what's the problem? The driving environment, while generally calm and straightforward, is occasionally extremely unpredictable and possibly fatal. Insurance folk and government regulators are going to need a lot of convincing before they let these things on the road in large numbers - and they're not wrong to be cautious about a technology that isn't even close to ready for rare, dangerous situations.
As a gaming forum, the application most interesting to us is of course... gaming. It's worth entertaining one related question first: why do so many serious research institutions dedicate so much real AI research labor to board games and video games? The answer is actually fairly simple: games provide a simplified environment in which important aspects of AI research - computation, dealing with uncertainty, getting enough samples to learn, and so on - can be tested and developed. I have even seen AI work in which a Minecraft model was used to help teach a bipedal robot how to stand up - without having to damage physical equipment on every single iteration of a task that can be effectively faked inside a video game. Chess helped drive the development of computational game-playing, Go (with the recent AI champion, AlphaGo) helped drive important machine learning methods (specifically within the subfield of deep learning) - and within video games, the next frontier is our own bread and butter: Starcraft.
There's a lot about Starcraft that is interesting from a computational perspective: real-time decision-making, imperfect information, an almost infinite range of potential decisions at any given instant, and so on. Cracking Starcraft would mean outdoing many of the standard "intellectual" advantages that humans hold in the game. And so it should be no surprise that it is a prize sought by many.
The Starcraft: Brood War scene has one particularly useful tool that makes it a great game for AI development: BWAPI. In a nutshell, it's an interface into the Brood War engine that an AI can use to perceive the game environment. The scene's long-standing tradition of developing custom content has helped build a rather substantial AI community around Brood War, and their bots play matches against each other nearly non-stop as a way to gauge each other's progress and find ways to improve their strategies. Starcraft, compared to the already complex challenges of chess and Go, is a particularly tough nut to crack, for all the reasons listed above. But little by little, these Brood War AIs are getting better... perhaps one day they will even be able to compete with the big dogs and actually win games against highly skilled human players?
Conclusion
Relative to my rocket blogs, this post is a fairly short one. That's not for lack of possible content; AI is a large field with much that would interest a wide range of people. But it's a tough topic to cover in the proper depth - partly because I am but a novice in the field, and partly because it very quickly descends into an unpleasant amount of highly technical minutiae. Nevertheless, I hope I've given you an interesting read - a starting point to learn more, a way to understand what the whole fuss about Starcraft AI is about, or just some food for thought for the next time you philosophize with your friends about what the world will look like when the AI take over. There's an important, albeit small, subset of our gaming community that is very involved in all this - and they do some interesting work that I'm sure many of you could appreciate once you understand the gist of what it's all about.