|
On March 18 2016 22:14 necrosexy wrote:On March 18 2016 15:07 Mendelfist wrote:On March 18 2016 14:16 necrosexy wrote:On March 18 2016 09:38 Veldril wrote: I have a feeling that most people either underestimate the complexity of Go (due to its not being well-known in the West), overestimate the complexity of Starcraft (due to not understanding heuristics or bias), or don't understand how the new AI technology works (due to not having read the Nature paper yet).
Out of curiosity, how many people here have read or skimmed through the Nature paper that describes how AlphaGo works? http://www.teamliquid.net/forum/viewpost.php?post_id=25502046 Read page 2, section A of the PDF. You are confusing state space with complexity. What's the state space for throwing a basketball in real life? That would be utterly impossible for an AI, right? Didn't realize there was an AI that can beat NBA players! I didn't mention beating NBA players. I said "throwing a basketball". How large do you think the state space is for throwing a basketball? How many discrete situations can occur, and what relevance do you think that has for how hard it is to do? I'm trying to tell you that the state space size for Starcraft is a red herring.
|
On March 18 2016 22:49 Mendelfist wrote:On March 18 2016 22:14 necrosexy wrote:On March 18 2016 15:07 Mendelfist wrote:On March 18 2016 14:16 necrosexy wrote:On March 18 2016 09:38 Veldril wrote: I have a feeling that most people either underestimate the complexity of Go (due to its not being well-known in the West), overestimate the complexity of Starcraft (due to not understanding heuristics or bias), or don't understand how the new AI technology works (due to not having read the Nature paper yet).
Out of curiosity, how many people here have read or skimmed through the Nature paper that describes how AlphaGo works? http://www.teamliquid.net/forum/viewpost.php?post_id=25502046 Read page 2, section A of the PDF. You are confusing state space with complexity. What's the state space for throwing a basketball in real life? That would be utterly impossible for an AI, right? Didn't realize there was an AI that can beat NBA players! I didn't mention beating NBA players. I said "throwing a basketball". How large do you think the state space is for throwing a basketball? How many discrete situations can occur, and what relevance do you think that has for how hard it is to do? I'm trying to tell you that the state space size for Starcraft is a red herring. I was joking, because your analogy is terrible (e.g., the goal is static, and there is complete map information).
State space is a rough measure of complexity. Of course it's not comprehensive (notice it's merely the first thing discussed in the report I linked), but the disparity between SC and chess/Go is absurd -- even if you took a fraction of it. And bear in mind the estimates excluded other factors that would have made it even worse!
|
On March 19 2016 10:31 necrosexy wrote: State space is a rough measure of complexity.
No it isn't. Trying to put a number on Starcraft's state space size is ridiculous, as is trying to do it for ANY real-world problem. It doesn't tell you anything about how hard the problem is, because it's for all practical purposes always infinite. Starcraft is more similar to real-world problems than Go, which I'm sure is why DeepMind thinks it's an interesting problem. For continuous problems you will have to find another number than state space.
|
They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance.
|
On March 20 2016 08:55 mechengineer123 wrote: They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance.
I know some players who had 900+ APM playing BW and they still sucked. Lots of people have tried to make really hard AIs for BW, and they were still beatable. I'd like to see one advanced enough to even come close in BW.
|
On March 20 2016 08:55 mechengineer123 wrote: They either have no idea what they're talking about or they're deliberately just giving their standard PR answers ("I will do my best! I will be victorious!"). AI would destroy humans without a single doubt. The only interesting question would be how low you could limit the AI's APM before humans stand a chance. Humans would keep destroying AI without a single doubt.
Very convincing, right?
|
The AI has advantages and disadvantages versus a human. For one, its micro, build order, and macro will be perfect, so it won't make mistakes. In turn it will be very effective at whatever build it does, which means the majority of players would start losing. Maybe not the top-level players, but a majority of players will lose to it. This is a very expensive project; an AI coded at that level could accomplish tactics that even a human can't perform, and once it starts to adopt those tactics there is no hope. You could have put your research into the human brain, which could have benefited us more, but if you want an AI to beat us at a game while we have large issues around the world, I don't know. What happened with Deep Blue will most probably happen again: it will find the solution and leave zero chance of an error, as all humans make errors, that's what humans are like. But then again it was coded by humans, so it depends on who codes it. That said, I'm on the fence with this one; it could go either way, as some of the top players are very intelligent people and I'm sure they have something up their sleeve.
|
On March 20 2016 16:36 stapla05 wrote: The AI has advantages and disadvantages versus a human. For one, its micro, build order, and macro will be perfect, so it won't make mistakes. In turn it will be very effective at whatever build it does, which means the majority of players would start losing. Maybe not the top-level players, but a majority of players will lose to it. This is a very expensive project; an AI coded at that level could accomplish tactics that even a human can't perform, and once it starts to adopt those tactics there is no hope. You could have put your research into the human brain, which could have benefited us more, but if you want an AI to beat us at a game while we have large issues around the world, I don't know. What happened with Deep Blue will most probably happen again: it will find the solution and leave zero chance of an error, as all humans make errors, that's what humans are like. But then again it was coded by humans, so it depends on who codes it. That said, I'm on the fence with this one; it could go either way, as some of the top players are very intelligent people and I'm sure they have something up their sleeve. I think you're missing the point on AI development. The point is making an AI capable of learning and adapting, and making decisions based on that. After a certain point it can start learning about more complex things than simple computer games. They're not trying to make an AI that concentrates on a single game.
|
Hey everyone. I'm Dave Churchill; I organize and run the AIIDE Starcraft AI Competition, and I also wrote UAlbertaBot. I've noticed a lot of misinformation in this thread, so rather than reply to everything individually, I decided to take the time to write a detailed history of Starcraft AI competitions for those who are interested. You can find it here:
http://webdocs.cs.ualberta.ca/~cdavid/starcraftaicomp/history.shtml
In answer to Boxer's claim: I think it is foolish to say that AI will *never* beat humans at Starcraft; however, I feel that this is still quite a few years away. Maybe 5-10 years (unless DeepMind is able to do something miraculous akin to AlphaGo, but that seems unlikely). I also believe that the first bot to beat expert humans will probably end up heavily abusing micromanagement to do so, so we will probably then enter a philosophical debate about what is 'fair' when it comes to dexterity-based games.
Also, most people seem to be confused as to the objective that most of us in the RTS AI field have. Most of us are not really trying to make the best Starcraft bots possible, but instead to come up with new AI algorithms for solving hard problems, and then use Starcraft as a test-bed for those algorithms. We could have much stronger bots if we spent countless hours hard-coding strategies and rules, but that isn't very interesting from a true artificial intelligence point of view.
Thanks for all the discussion, it's great to see so many people interested in the topic!
|
Massive amounts of misinformation in this thread indeed, especially when it comes to deep reinforcement learning.
To elaborate on the current state of RTS AI, this recent article is well worth your time: 'RTS AI: Problems and techniques' richoux.fr
|
Obviously an AI could be coded to easily be better than any human. The question is how hard it is to create that AI.
I feel creating this is more along the lines of 'lots of work', compared to an AI for something like chess or Go, which requires a lot of thought and knowledge.
For example, in SC there are builds that you can use on certain maps against certain races. Coding an AI to use those builds would be rather easy. Then you could code in functions that know how to adapt strategy based on certain situations (for example a one-base all-in, or if someone cannon rushes you at X minutes at X spot on X map, there is a way to handle it as efficiently as possible). Then you code in functions knowing when to engage, when to run, what positions to fortify, etc. Then you code functions that understand the map. Then you can code micro functions; I'd imagine probe/drone micro would be so effective that human players would need to send two workers to deny one from permanently harassing, or blink micro would be perfect (even with low-ish APM). Then you code functions that abuse the fact that humans can't multitask as much (hitting many different spots at once). You can then even write functions knowing how certain people play, expecting certain strategies, knowing what they struggle with (aka the AI would never forget). Then you can even write a function that can parse tens of thousands of games and better understand opponents and strategies... etc.
So overall, I feel an SC2 AI would just take a lot of man-hours. Unlike a game like chess or Go, where each move exponentially increases the possibilities (so the AI needs to be smart enough to trim out all the obviously bad moves), in a game like SC2 horribly bad moves are much more obvious.
If there were a simple way to use something like C++ to code an AI (hooking into the game and getting the data through a nice clean interface), I am sure more people (like myself) would mess around and build Diamond/Masters-level AIs.
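For Brood War at least, exactly this kind of interface exists: BWAPI exposes the live game state to C++ code (it comes up again later in the thread). As a rough illustration of the rule-based approach described above, here is a minimal sketch of an onFrame handler; the class name and the thresholds are made up for the example, not taken from any actual bot:

    #include <BWAPI.h>

    // Minimal rule-based bot sketch using BWAPI callbacks.
    class RuleBasedExampleModule : public BWAPI::AIModule
    {
    public:
        void onFrame() override
        {
            for (auto &unit : BWAPI::Broodwar->self()->getUnits())
            {
                // Rule 1: keep training workers while minerals allow it.
                if (unit->getType().isResourceDepot() && unit->isIdle() &&
                    BWAPI::Broodwar->self()->minerals() >= 50)
                {
                    unit->train(unit->getType().getRace().getWorker());
                }
                // Rule 2: send idle workers back to the nearest mineral patch.
                else if (unit->getType().isWorker() && unit->isIdle())
                {
                    unit->gather(unit->getClosestUnit(BWAPI::Filter::IsMineralField));
                }
                // Rule 3: crude timing attack once an arbitrary supply count is reached.
                else if (unit->canAttack() && unit->isIdle() &&
                         BWAPI::Broodwar->self()->supplyUsed() >= 100)
                {
                    unit->attack(BWAPI::Position(BWAPI::Broodwar->enemy()->getStartLocation()));
                }
            }
        }
    };

Every "function" listed above (scouting, adaptation, opponent modelling) would be another rule or module layered on top of a loop like this, which is exactly why it ends up being a lot of man-hours rather than one clever trick.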
|
BoxeR: "AlphaGo won't beat humans in StarCraft"
Of course not, BoxeR; it can only play Go.
|
Thanks for the article; I was hoping someone who creates bots would post in this thread.
After reading the article and the paper posted just afterwards, I have some questions:
- I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article.
|
On March 22 2016 07:10 MadMod wrote:Thanks for the article; I was hoping someone who creates bots would post in this thread. After reading the article and the paper posted just afterwards, I have some questions: - I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article.
If by theory-based you mean complex algorithms: my bot uses pathfinding algorithms like A* for things like wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs. bot, a heuristic-based approach will still get you further at the moment.
There were some papers about learning from replays, but no top bot that I know of used replay analysis.
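To make the influence-map idea above a bit more concrete, here is a minimal sketch of the data structure involved (the class name, decay factor, and linear falloff are illustrative choices, not LetaBot's actual implementation):

    #include <vector>
    #include <cmath>

    // Toy influence map: enemy units stamp influence that falls off with
    // distance, and all values decay over time so stale information fades.
    class InfluenceMap
    {
    public:
        InfluenceMap(int width, int height) : w(width), h(height), grid(width * height, 0.0f) {}

        // Called every update so old sightings gradually lose weight.
        void decay(float factor = 0.95f)
        {
            for (float &v : grid) v *= factor;
        }

        // Stamp influence around a position in tile coordinates (radius > 0).
        void addInfluence(int x, int y, float strength, int radius)
        {
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    int tx = x + dx, ty = y + dy;
                    if (tx < 0 || ty < 0 || tx >= w || ty >= h) continue;
                    float dist = std::sqrt(float(dx * dx + dy * dy));
                    if (dist > radius) continue;
                    grid[ty * w + tx] += strength * (1.0f - dist / radius);
                }
        }

        float at(int x, int y) const { return grid[y * w + x]; }

    private:
        int w, h;
        std::vector<float> grid;
    };

A bot can then make strategic calls by reading the map back out, for example expanding to the candidate base location with the lowest accumulated enemy influence.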
|
On March 22 2016 09:58 LetaBot wrote:On March 22 2016 07:10 MadMod wrote:Thanks for the article; I was hoping someone who creates bots would post in this thread. After reading the article and the paper posted just afterwards, I have some questions: - I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article. If by theory-based you mean complex algorithms: my bot uses pathfinding algorithms like A* for things like wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs. bot, a heuristic-based approach will still get you further at the moment. There were some papers about learning from replays, but no top bot that I know of used replay analysis.
How are you doing this? All 100% using the editor, or using third-party tools?
|
|
Segment starts after the first commercial, about 10 minutes in. Not really a mention; he just uses SC as part of his joke, saying he can't even beat computers in it or in FIFA.
|
On March 22 2016 09:58 LetaBot wrote:On March 22 2016 07:10 MadMod wrote:Thanks for the article; I was hoping someone who creates bots would post in this thread. After reading the article and the paper posted just afterwards, I have some questions: - I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article. If by theory-based you mean complex algorithms: my bot uses pathfinding algorithms like A* for things like wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs. bot, a heuristic-based approach will still get you further at the moment. There were some papers about learning from replays, but no top bot that I know of used replay analysis.
That is very interesting. Creating a good search space for MCTS seems extremely hard. It would be awesome to see a very adaptable bot though.
I get the feeling from your answer that the current, more adaptable bots play better against humans compared to the less adaptable ones, even though they are not the best in bot vs. bot. Is this true?
|
On March 19 2016 16:56 Mendelfist wrote:On March 19 2016 10:31 necrosexy wrote: State space is a rough measure of complexity.
For continuous problems you will have to find another number than state space.
I kind of wonder if there is such a thing as a continuous problem... at least for playing games based on thought.
|
On March 22 2016 12:15 Hotshot wrote:On March 22 2016 09:58 LetaBot wrote:On March 22 2016 07:10 MadMod wrote:Thanks for the article; I was hoping someone who creates bots would post in this thread. After reading the article and the paper posted just afterwards, I have some questions: - I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article. If by theory-based you mean complex algorithms: my bot uses pathfinding algorithms like A* for things like wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs. bot, a heuristic-based approach will still get you further at the moment. There were some papers about learning from replays, but no top bot that I know of used replay analysis. How are you doing this? All 100% using the editor, or using third-party tools?
This is for Brood War. I use the Brood War Application Programming Interface (BWAPI).
On March 23 2016 05:36 MadMod wrote:On March 22 2016 09:58 LetaBot wrote:On March 22 2016 07:10 MadMod wrote:Thanks for the article; I was hoping someone who creates bots would post in this thread. After reading the article and the paper posted just afterwards, I have some questions: - I am still wondering how the theory-based bots fare versus the more heuristic-based ones. Are there any bots that use theory-based approaches on the strategic level?
- There was some mention of bots learning from replays; do you know if this was successful?
I could track this down myself, but then I would have to wade through a lot of hard-to-interpret papers, so I am hoping for an answer here. Thanks again for the article. If by theory-based you mean complex algorithms: my bot uses pathfinding algorithms like A* for things like wall-building and optimizing mineral gathering. I am currently working on influence maps and MCTS for the strategic level. For bot vs. bot, a heuristic-based approach will still get you further at the moment. There were some papers about learning from replays, but no top bot that I know of used replay analysis. That is very interesting. Creating a good search space for MCTS seems extremely hard. It would be awesome to see a very adaptable bot though. I get the feeling from your answer that the current, more adaptable bots play better against humans compared to the less adaptable ones, even though they are not the best in bot vs. bot. Is this true?
Yeah, you need to reduce the search space to get good results with MCTS.
For now, the bots that are capable of executing one strategy particularly well have a better chance of defeating a human player. But in a Bo5 the more adaptable bot stands a better chance.
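For context on why the search-space reduction matters: MCTS spends its simulation budget according to a selection rule such as UCT, so the fewer (abstracted) actions each node exposes, the more visits each child gets for the same budget. A minimal sketch of UCT child selection follows; the node layout and exploration constant are generic textbook choices, not LetaBot's code:

    #include <cmath>
    #include <vector>
    #include <limits>

    // One MCTS node: how often it was visited and the total reward backed up through it.
    struct Node
    {
        int visits = 0;
        double totalReward = 0.0;
        std::vector<Node*> children;
    };

    // UCT selection: balance exploitation (average reward) against
    // exploration (rarely visited children).
    Node* selectChild(const Node &parent, double c = 1.41)
    {
        Node *best = nullptr;
        double bestScore = -std::numeric_limits<double>::infinity();
        for (Node *child : parent.children)
        {
            if (child->visits == 0) return child;  // try unvisited children first
            double exploit = child->totalReward / child->visits;
            double explore = c * std::sqrt(std::log((double)parent.visits) / child->visits);
            double score = exploit + explore;
            if (score > bestScore) { bestScore = score; best = child; }
        }
        return best;
    }

With a raw StarCraft action set, every node would have so many children that none of them would accumulate meaningful visit counts, which is why abstracting the action space comes first.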
|