A while ago I was working on an integrated client. We actually got it done: you could request and play matches over the network, but then I suspended the project for other reasons. It is called StarCraft Human 'N' AI League, or SCHNAIL. Here is the first video:
And shortly after, an update:
Now I'm working on it again. I have reworked the client to run fully locally, downloading the bots itself. As of this writing, I can run most bots. I don't have a video of it yet, but you will hear about this soon enough. As you can see, you don't need to mess around with downloading and configuring anything: just select a bot and a map, and press play. I will publish an article about it in the near future. As you can see, it is 1.16. As soon as BWAPI gets Remastered support, I will make the switch.
On November 20 2019 10:36 Broodwar4lyf wrote: I'm trying to download the SAIDA bot (arguably the best one ever) but it seems it's gone. Also, those library errors when executing the exe are a pain in the ass.
Most of those bots only work in 1.16.1.
I have 1.16.1 but still run into a lot of missing .dll errors when running bots that ship as executable files, e.g. Krasi0 and Adias.
This is a topic that pops up time and again: we should modify the Brood War API to restrict things in a certain way, so bots will behave more like humans! Bots have unfair advantages!
The goal is to have human-like games, is it not?
Everyone seems to have a different opinion on this topic, but personally I would say the main challenge for StarCraft AI is for AIs to beat humans by outsmarting them; they don't necessarily have to play human-like, unless you want to use them as practice partners while preparing to play against humans.
FYI, in the context of SC2, Oriol Vinyals from DeepMind recently gave a presentation on AlphaStar at KHIPU, and in the video at http://tv.vera.com.uy/video/55389 at 1:04:46 he says (paraphrasing):
Why does fairness matter? Why are you limiting your agent? Why not simply treat it as a reinforcement learning problem and just try to win and see what comes out? The problem, I think, especially in games like StarCraft, is that it's a game that's been designed with some of these limitations in mind. So what we wouldn't do is break the game, so to speak. As I was saying, I don't want "rock" to suddenly be very powerful, because then the game of rock-paper-scissors becomes uninteresting.

I think the reason we try to impose as many fairness constraints as seem reasonable is mostly so that we don't break a game that has been carefully designed and actually evolved through many years. The game was not only released; they actually patched it based on balance issues between races that people found, so it's a really complicated process.

Actions per minute is one obvious place to impose limits on agents, but there are actually quite a few more, like how precise the actions are. People who play, if they get under pressure, might start making mistakes, and so on. It's really hard, even if we had the robotic arm, to really say "this is exactly how people attack or play the game", so we need something that is reasonable in that respect, but there's always going to be more. If you see an image, do you add noise to the image? Certainly, we have more precision in the center of our eyes. It's a very cool problem to start thinking about and one that I think more people might start discussing.
Oriol then also elaborated on the issue of fairness regarding how the agents play two hundred game-years, as opposed to a professional player, who plays the game a lot but on the order of years, not hundreds of years. He then said (paraphrasing):
It would certainly be interesting to limit the amount of experience of agents and there's some papers that do that. We didn't do it in this project but if you limit it, clearly, the ceiling of performance would be quite lower but it would still probably be above average play, I would say, in terms of percentile.
So, the primary reason they added limitations for APM, camera, etc. was to try to stop AlphaStar settling on an uninteresting strategy, not because humans would complain that AlphaStar's interface to the SC2 API was unfair compared to a human's interface to the SC2 UI, though I expect that was the secondary reason. Note that I referred to the human's interface to the SC2 UI, not the limitations of humans except insofar as they relate to that interface. It's about the capabilities of the interface, not the capabilities of the human. Oriol does talk specifically about the limitations of humans, but personally I am more interested in seeing what AIs would be capable of using the SC2 UI by running as software on the PC (just using pixels, virtual mouse & keyboard control, and perhaps virtual audio) than in also trying to limit their mechanical capabilities to be more human-like (robot arms controlling a physical keyboard and mouse, a video camera watching a physical monitor, a microphone, etc.).
AlphaStar's APM throttling and camera limitations are just simple ways to make the capabilities of the SC2 API behave more like the capabilities of the SC2 UI. E.g. in the SC2 UI you need to move the camera to get information and select your units on a different part of the map, which can take many frames, but the SC2 API can select and command units all over the map within a single frame; hence the camera limitations and APM throttling. If AlphaStar had just used pixels and virtual mouse & keyboard control (and perhaps virtual audio), like DeepMind did for their Atari work (rather than an API with a raw data interface containing much more highly structured data as input and output), personally I wouldn't care if they removed the APM throttling logic, because throttling APM would then be an unnecessary limitation imposed above and beyond the interface to the SC2 UI.
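To illustrate the kind of mechanism involved, an APM throttle can be sketched as a sliding-window limit on actions per game frame. This is a hypothetical sketch, not AlphaStar's actual implementation; the class name, parameters, and the example rate are my assumptions:

```python
from collections import deque

class ApmThrottle:
    """Sliding-window action limiter: allow at most `max_actions`
    actions per `window_frames` game frames (illustrative sketch)."""

    def __init__(self, max_actions: int, window_frames: int):
        self.max_actions = max_actions
        self.window_frames = window_frames
        self.recent = deque()  # frame numbers of recently issued actions

    def try_act(self, frame: int) -> bool:
        # Forget actions that have fallen out of the window.
        while self.recent and frame - self.recent[0] >= self.window_frames:
            self.recent.popleft()
        if len(self.recent) < self.max_actions:
            self.recent.append(frame)
            return True
        return False  # over budget: the agent must wait

# Example: SC2 runs at about 22.4 frames per second, so a cap of
# roughly 300 APM would be ApmThrottle(max_actions=300, window_frames=1344).
```

An agent would call `try_act` before issuing each command and hold the command back whenever it returns `False`, which naturally spreads actions out the way a human's hands do.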
^^ I can't beat the updated bots without "gaming" a bot's entire strategy. I would like to watch bots fight against other bots, and I have tested a lot, but Locutus keeps winning on FS. I'm still trying to run ADIAS (a SAIDA clone), and the SAIDA bot itself if I knew where to download it.
To answer your questions, recent versions of the binaries (but not the source code) of BananaBrain, Locutus, adias (which is currently identical to the version of SAIDA that was used in the SSCAIT 2018/19 tournament, just renamed), and krasi0 can be downloaded from https://sscaitournament.com/index.php?action=scores. I won't explain how to install and run them, though. SSCAIT streams bot-vs-bot matches 24/7 at https://sscaitournament.com or https://www.twitch.tv/sscait. If you want to see particular bots play against each other, except around the time of SSCAIT's annual tournament, you can vote on which bots will play in the next game via https://sscaitournament.com/index.php?action=voteForPlayers. SSCAIT and other bot-vs-bot ladders like BASIL (https://basil.bytekeeper.org/) also provide replays.
The binaries for AlphaStar haven't been published, and probably won't be. DeepMind published the pseudo-code, a detailed neural network architecture specification, hyperparameters, implementation details, etc. in a paper, but not the full source code.
On November 22 2019 14:09 Quatari wrote: To answer your questions, recent versions of the binaries (but not the source code) of BananaBrain, Locutus, adias, and krasi0 can be downloaded from https://sscaitournament.com/index.php?action=scores. [...]
I'm not having success running Adias or SAIDA: the former keeps telling me the .dll it loads is "nothing", and SAIDA just drops back to the desktop. My computer says something about resolution, but I've had that error with some other bots and they resume playing in about 5 seconds. I hope someday there's a program where you can simply play with or against bots any time. The "nothing" DLL error makes no sense, since I've already pointed the DropLauncher program at the DLL.
@Broodwar4lyf In addition to depending on BWAPI, many bots depend on DLLs such as a particular version of the Visual C++ Redistributable(s) or Qt, or BWAPI-related library DLLs (especially Java bots). I suggest you try installing the redistributables at http://www.cs.mun.ca/~dchurchill/starcraftaicomp/all_vcredist_x86.zip and try again. If that doesn't work, perhaps try copying the files from https://github.com/Games-and-Simulations/sc-docker/tree/master/docker/dlls into your StarCraft program folder (the same folder as StarCraft.exe). Also check that you're using the correct version of BWAPI if you haven't already (each bot depends on a particular version of BWAPI). Depending on what the individual bot depends on, it may or may not work. SCHNAIL aims to avoid all these problems and make it easy to play vs bots, and I am looking forward to it.
On December 05 2019 16:58 Peter767 wrote: Artificial Intelligence is one of the newest and broadest topics nowadays. I learned the basics of AI from Facebook and found some new and exciting information there.
Awesome initiative! I would love to be able to practice against AIs with selectable (approximate) MMR ranges, maybe one for each rank (F, E, D, ...) up to however good they get lol.
I think this could make team games, like a co-op, fun too! Me and my friends used to do 3v5 comps on large maps, which start out really fun when you're a complete newbie, but soon after midgame the default comps basically do nothing. (Another aside idea: adaptive difficulty, so the AI constantly changes its difficulty in-game to keep the game going. If it's killing you too fast, it slows down its macro; if the human(s) are winning, it cranks up its micro and multitasking, etc.)
How can people help support your work? (edit - found your Patreon )
I think this could make team games, like a co-op, fun too! Me and my friends used to do like 3v5 comps on large maps, which start out really fun when you're a complete newby, but soon enough after midgame the default comps basically do nothing.
Well, I hate to ruin your fun, but most bots only support 1v1 Melee matches. That's not to say this is impossible in the future!
(Another aside idea: adaptive difficulty, so the AI constantly changes its difficulty in-game to keep the game going. If it's killing you too fast, it slows down its macro, if the human(s) are winning, it cranks up its micro and multitasking, etc.)
Some bots have opponent modeling, which basically does this: the more you play against them, the better they get. And generally, bots with learning enabled will get tougher over time. How to handle this is one of the big questions for me.
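The adaptive-difficulty idea from the post above could be sketched as a simple feedback controller that nudges a handicap knob (say, an internal "effort" level that scales the bot's APM cap or macro speed) toward a target win probability. Everything here, names and thresholds included, is hypothetical; no current bot is known to work exactly this way:

```python
class DifficultyController:
    """Nudge a 0..1 'effort' level so the bot's estimated win
    probability stays near a target (hypothetical sketch)."""

    def __init__(self, target_win_prob: float = 0.5, step: float = 0.05):
        self.effort = 0.5            # 0 = easiest, 1 = full strength
        self.target = target_win_prob
        self.step = step

    def update(self, est_win_prob: float) -> float:
        # Bot winning too hard? Ease off. Bot losing badly? Try harder.
        if est_win_prob > self.target + 0.1:
            self.effort = max(0.0, self.effort - self.step)
        elif est_win_prob < self.target - 0.1:
            self.effort = min(1.0, self.effort + self.step)
        return self.effort
```

The bot would call `update` every few seconds with its own win-probability estimate and scale its micro and multitasking by the returned effort, which keeps games close without the human ever picking a difficulty setting.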
How can people help support your work? (edit - found your Patreon )
Much appreciated! A lot of work went into this, and there is much more to come. Every penny helps.