If you like tanks and AI, you should look into a game called Robocode.
My colleague and I just released an article explaining the implementation of feedforward neural networks, reinforcement learning, and NeuroEvolution of Augmenting Topologies (NEAT - a technique only about ten years old).
The article can be found on the Robocode wiki (it's the most recently added article).
And a promo video here:
I'm currently starting to write my thesis with some PhD students who are working on balance between unit types (i.e. StarCraft's units: marines, zerglings, etc.).
That's actually really, really cool. Thanks so much for sharing. I can't wait to get further into my computer engineering degree so I can hopefully play with cool stuff like this.
While video game graphics have been improving at astounding rates, gameplay and especially computer AI seem to have been neglected along the way. StarCraft II, Civ 5, etc. These games are supposed to captivate us with their strategy, not woo us with graphics porn! We need more emphasis on gameplay and AI over graphics. Your video reminds me of a simpler video game era and I applaud your efforts. Thanks for sharing and good luck with your studies.
The reason AI doesn't improve at the same rate as graphics and gameplay is that programmers are too scared to try new things - everything (on a budget and a tight timeline) needs to be debuggable. Newer AI (as in the video) can start acting weird, and it can be troublesome to debug. For example, we rewarded the tank for movement that maximized its speed and minimized the number of times it crashed into walls. It evolved into a tank driving in circles - maximizing speed and certainly never running into walls. Gah! So we told it to minimize turning as well.
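To make that concrete: a shaped fitness in that spirit might look something like the sketch below. This is not the exact equation from our article (those are on the wiki) - the weights and names are invented for illustration.

```java
/**
 * Minimal sketch of a shaped movement fitness, in the spirit of what we
 * did - NOT the exact equations from the article (those are on the wiki).
 * Weights and names are invented for illustration.
 */
public class MovementFitness {
    static final double SPEED_WEIGHT = 1.0;      // reward distance covered
    static final double WALL_HIT_PENALTY = 50.0; // punish wall collisions
    static final double TURN_PENALTY = 0.1;      // the fix for the circling tank

    static double evaluate(double distanceCovered, int wallHits, double degreesTurned) {
        return SPEED_WEIGHT * distanceCovered
                - WALL_HIT_PENALTY * wallHits
                - TURN_PENALTY * degreesTurned;
    }

    public static void main(String[] args) {
        // Before the turn penalty, the circling exploit scored well;
        // with it, driving in tight circles is no longer a winning strategy.
        System.out.println("circler: " + evaluate(900, 0, 36000)); // -2700.0
        System.out.println("cruiser: " + evaluate(900, 2, 400));   //   760.0
    }
}
```

The point is that every term you add closes one exploit and potentially opens another, which is exactly what makes this hard to debug on a deadline.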
We had guest lecturer Morten Heiberg (ex-IO Interactive lead programmer, of the Hitman and Kane & Lynch series) tell us they shipped only very simple AI for exactly these reasons.
Complex AI is really only seen in games like Black & White (remember the god game?). I hope more publishers dare to take risks.
Happy that you guys like it! Lots of new knowledge can still be brought into the field of AI, so please do start studying it.
It's too bad the creature in Black & White 2 was dumbed down so much. That ruined the game for me =(.
Wait, it was dumbed down? I only played the first one... Damn, I remember teaching it to pick up villagers and throw them into the water.
On December 15 2010 05:46 AcrossFiveJulys wrote: How did you represent the state space/action space for the NN/RL? What RL algorithm did you use?
Edit: ah, never mind, I see you posted a link to a wiki with some papers. I'll read through those when I have a chance.
The inputs to the aiming ANN were the previous positions of the enemy tank relative to our own tank (we experimented with how many previous positions were optimal), plus the power of the shot we were firing. For the RL algorithm we used a standard Q-table, which we later found wasn't really necessary since the values can easily be calculated directly, but that was part of the learning experience.
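For reference, the tabular Q-learning we mean is the completely standard kind; here's a minimal sketch (the state/action discretization and parameters are placeholders, not the ones from our article):

```java
import java.util.Random;

/**
 * Sketch of standard tabular Q-learning, as used for the aiming task.
 * State/action sizes and parameters are placeholders, not the actual
 * discretization from the article.
 */
public class QTable {
    private final double[][] q;         // q[state][action]
    private final double alpha = 0.1;   // learning rate
    private final double gamma = 0.9;   // discount factor
    private final double epsilon = 0.1; // exploration rate
    private final Random rng = new Random();

    QTable(int numStates, int numActions) {
        q = new double[numStates][numActions];
    }

    /** Epsilon-greedy: mostly take the best-known action, sometimes explore. */
    int chooseAction(int state) {
        if (rng.nextDouble() < epsilon) {
            return rng.nextInt(q[state].length);
        }
        int best = 0;
        for (int a = 1; a < q[state].length; a++) {
            if (q[state][a] > q[state][best]) best = a;
        }
        return best;
    }

    /** One-step Q-learning update after observing (state, action, reward, nextState). */
    void update(int state, int action, double reward, int nextState) {
        double maxNext = q[nextState][0];
        for (double v : q[nextState]) maxNext = Math.max(maxNext, v);
        q[state][action] += alpha * (reward + gamma * maxNext - q[state][action]);
    }
}
```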
For NEAT we evolved in phases: first teaching the tank not to run into walls, then to chase the opponent tank, then to avoid shots, etc., updating its fitness function at each phase.
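In code terms, the phasing just means swapping the fitness evaluator between stages. A rough sketch, with hypothetical Genome/Population interfaces standing in for whatever your NEAT library provides:

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

/**
 * Sketch of evolving in phases by swapping the fitness function between
 * stages. Genome and Population are hypothetical stand-ins for whatever
 * your NEAT library provides.
 */
public class PhasedEvolution {
    interface Genome { }
    interface Population {
        void breedNextGeneration(ToDoubleFunction<Genome> fitness);
    }

    static void evolve(Population pop,
                       List<ToDoubleFunction<Genome>> phaseFitnesses,
                       int generationsPerPhase) {
        // e.g. phaseFitnesses = [avoidWalls, chaseEnemy, dodgeShots, ...]
        for (ToDoubleFunction<Genome> fitness : phaseFitnesses) {
            for (int gen = 0; gen < generationsPerPhase; gen++) {
                pop.breedNextGeneration(fitness); // select/mutate under this phase's goal
            }
        }
    }
}
```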
What was the fitness equation? Is hitting the target in the middle more valuable than hitting it near the edge? I did my undergrad work on AI. (I haven't gone to grad school - I don't want to be a prof.)
Lol, I don't think punishing it for running into walls is a good idea. Running into a wall doesn't immediately correspond to a worse position. Instead, adding an input variable for how close it is to a wall will help the AI more: it will eventually learn that being too close to a wall corresponds to a less optimal strategy, but that not moving at all is even worse. This is why I like reinforcement learning better than a simple ANN. ANNs tend to converge towards local maxima by maximizing immediate rewards, whereas the randomness introduced in a TD algorithm helps it get closer to a global maximum, since individual moves matter less than the overall goal. An ANN-only algorithm will get close to a wall and think it is doing worse, when in actuality running into a wall is fine if it helps you learn. I think mixing in algorithms like Monte Carlo methods will produce better results.
I feel like there is some algorithm out there waiting to be discovered still.
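For concreteness, the wall-distance input I mean could be as simple as the sketch below. In a real bot the field dimensions would come from Robocode's Robot API (getBattleFieldWidth()/getBattleFieldHeight()); here they're just constants.

```java
/**
 * Sketch: give the learner an input for how close it is to a wall,
 * instead of only punishing collisions after the fact. Field size
 * is hard-coded here; a real bot would query the Robot API.
 */
public class WallDistanceInput {
    static final double FIELD_W = 800, FIELD_H = 600; // default Robocode field

    /** Distance from (x, y) to the nearest of the four walls. */
    static double nearestWallDistance(double x, double y) {
        return Math.min(Math.min(x, FIELD_W - x), Math.min(y, FIELD_H - y));
    }

    /** Normalized to [0, 1] so it can be fed straight into a network or state. */
    static double wallInput(double x, double y) {
        return nearestWallDistance(x, y) / (Math.min(FIELD_W, FIELD_H) / 2);
    }

    public static void main(String[] args) {
        System.out.println(wallInput(400, 300)); // dead center -> 1.0
        System.out.println(wallInput(10, 300));  // hugging a wall -> ~0.033
    }
}
```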
Actually, for movement it was NEAT and not just a "simple ANN". Punishing it for driving into walls was needed, since hitting a wall drains energy from the tank. The fitness equations (all of them) can be found in the article. Lots of math there :p.
And yeah, I agree there's lots of stuff left to be discovered in the field of AI - I hope we'll write a new subcategory for NEAT in my thesis.
On December 15 2010 10:32 Qzy wrote: The article and video were just mentioned on RoboWiki's twitter :p... /proud
And I found this thread via @pavelsavara (one of the Robocode devs) =) Great work on that vid - it was easy to see the passion for your work in those meticulous debugging graphics. (I'm Voidious, the one behind @robowiki, btw.) Are you going to enter any of your bots in the rumble?
It's interesting to note that the best Robocode bots use relatively simple classification algorithms compared to the field of machine learning in general - things like k-nearest neighbors. How you distill the game state into relevant inputs and outputs is super important (GuessFactor is a prime example). You don't have much CPU time to make a decision. And you're continually gathering data, so your system needs to learn quickly but also scale well enough to leverage huge amounts of data when it's available.
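To give a flavor of it, here's a bare-bones k-NN targeting sketch. A GuessFactor is the enemy's observed firing-angle offset normalized to [-1, 1] by the maximum escape angle; you log (situation, GuessFactor) pairs and, when you fire, aim at the GF suggested by the k most similar past situations. The features and k below are arbitrary, and real bots do quite a bit more.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/**
 * Stripped-down sketch of k-nearest-neighbors targeting. A GuessFactor is
 * the enemy's observed firing-angle offset normalized to [-1, 1] by the
 * maximum escape angle. Feature choice and k are arbitrary here.
 */
public class KnnTargeting {
    static class Observation {
        final double[] features;   // e.g. normalized distance, lateral velocity...
        final double guessFactor;  // where the enemy actually ended up
        Observation(double[] features, double guessFactor) {
            this.features = features;
            this.guessFactor = guessFactor;
        }
    }

    private final List<Observation> log = new ArrayList<>();
    private final int k = 25;

    /** Log a completed wave: the situation and the GF that would have hit. */
    void record(double[] features, double guessFactor) {
        log.add(new Observation(features, guessFactor));
    }

    /** Predict a GF as the mean of the k most similar past situations. */
    double predict(double[] features) {
        if (log.isEmpty()) return 0.0; // no data yet: aim head-on
        List<Observation> sorted = new ArrayList<>(log);
        sorted.sort(Comparator.comparingDouble(o -> distance(o.features, features)));
        int n = Math.min(k, sorted.size());
        double sum = 0;
        for (int i = 0; i < n; i++) sum += sorted.get(i).guessFactor;
        return sum / n;
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}
```

To turn the predicted GF back into a firing angle, multiply by the maximum escape angle (asin(8.0 / bulletSpeed), since tanks top out at velocity 8) and by the enemy's orbit direction.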
And yeah, you should all check out Robocode / the RoboWiki. :-P It's pretty easy to get started, with plenty of room for depth. Though I've basically been on hiatus from Robocode since I first got an SC2 beta key, so of course I can understand why you wouldn't... =)