DeepMind sets AlphaGo's sights on SCII - Page 5

kingjames01
March 28 2016 14:05 GMT
#81
On March 28 2016 15:39 JimmyJRaynor wrote:
On March 28 2016 13:58 kingjames01 wrote:
On March 28 2016 13:50 JimmyJRaynor wrote:
On March 28 2016 13:46 kingjames01 wrote:
On March 28 2016 13:39 JimmyJRaynor wrote:
without heuristic functions I think AlphaStar will have trouble "learning" the game.
and with heuristic functions it'll play with the style dictated by them.

That was the prevailing stance before AlphaGo fought Lee Sedol... It didn't hold up to reality though.


who is the winner?
the best Chess AI with heuristics or Alpha?


That's a weird way to defend a point...

Are they playing Chess, Go or StarCraft?


here is why Alpha will have a tough time without heuristics in a game like Chess or Starcraft

https://webdocs.cs.ualberta.ca/~cdavid/pdf/aiide12-combat.pdf

years from now .. who knows.. maybe no heuristics are required. but for now.. Alpha will have a tough time.


Did you even read the paper? You can't just skim the abstract for keywords and post it if the article doesn't actually support your stance.

This paper outlines an alpha-beta pruning method which can be used in small-group combat. The only relevant text in the entire article is:
These advantages come at a cost, namely increased runtime and/or memory requirements. Therefore, it can be challenging to adapt established search algorithms to large scale realtime decision problems, e.g. video games and robotics.



I'm not sure if I'm missing your real point or how familiar you are with the internal workings of AlphaGo. AlphaGo uses a policy network to decide which moves to consider. This plays the same role as heuristic-based pruning. The difference between AlphaGo and previous Go-playing AIs is that AlphaGo's policy network does not rely on domain knowledge (i.e. special rules about Go) explicitly programmed in by the developers. AlphaGo learned how to play Go from an initial seed of sample human games, and then it was set to play against itself.
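To make the parallel concrete, here is a minimal Python sketch of heuristic pruning versus policy-network pruning (illustrative only: hand_written_score and the random "learned" weights are toy stand-ins, not anything from DeepMind):

import random

def hand_written_score(board, move):
    # Stand-in for an expert-designed heuristic (domain knowledge baked in),
    # e.g. "prefer points near the centre of the board".
    return -abs(move - len(board) // 2)

def learned_prior(weights, move):
    # Stand-in for a trained policy network's output probability for a move.
    return weights.get(move, 0.0)

def prune(moves, score, k=3):
    # Both approaches cut the branching factor the same way: keep only the
    # k most promising moves for deeper search.
    return sorted(moves, key=score, reverse=True)[:k]

board = list(range(19 * 19))                    # toy 19x19 board
moves = random.sample(range(len(board)), 12)    # toy legal moves

by_heuristic = prune(moves, lambda m: hand_written_score(board, m))
weights = {m: random.random() for m in moves}   # pretend these were learned
by_policy = prune(moves, lambda m: learned_prior(weights, m))
print(by_heuristic, by_policy)

Either way the search only descends into a handful of branches; the difference is where the ranking comes from.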



IAmWithStupid
March 28 2016 14:08 GMT
#82
Sniper should be a consultant for Google. Just imagine it: the most evil and loathsome zerg helping an AI to defeat humanity.
sertas
March 28 2016 14:23 GMT
#83
It's a big mistake to choose SC2 over BW, btw. One is a complex strategic game and SC2 is just a pure tactical game. Disappointed in Google here.
alukarD
March 28 2016 14:44 GMT
#84
We should really hype this as a community and help it become a mainstream Machines vs Humans event (for the sake of gaining a bigger fanbase for SCII, at least).

Also, if people understood the depth and layers of a complex game such as SCII, they would be more inclined to give it a try; you know, people like challenges.
ZAiNs
March 28 2016 14:46 GMT
#85
On March 28 2016 23:44 alukarD wrote:
We should really hype this as a community and help it become a mainstream Machines vs Humans event (…)

This is years away from happening, if it does happen.
The Bottle
Last Edited: 2016-03-28 15:03:42
March 28 2016 14:50 GMT
#86
On March 28 2016 19:32 Mendelfist wrote:
On March 28 2016 19:15 MockHamill wrote:
Starcraft is so much more complex than Go it is absurd. Basically the number of permutations in the first 3 minutes of Starcraft is larger than in any match of Go.

As someone who has played chess (no rank), Go (10 kyu) and Starcraft (platinum) at a low amateur level, I say you are wrong. At the strategic level Go is enormously more complex and deep than both chess and Starcraft. There is absolutely no comparison. Go completely dwarfs any game that I have come in contact with. Measuring complexity in "number of permutations" is not meaningful. How many permutations are there in a game of billiards? Do you think it would be hard for a computer?


You miss the point.

Number of permutations is indeed important; in fact, it's the most important factor when it comes to making a self-learning AI. You still have to actually use the inputs of a game in order to train the gaming algorithm. The data set is way larger, and the task of purely training a neural network on this is highly intractable. The dimensionality of the space of all possible moves for Starcraft is orders and orders of magnitude larger than that of Go. For instance, in Go you play a single move each turn: with N legal points, you have N possible choices. In Starcraft, if you have N units, there are 2^N possible choices of which units to select, and that includes buildings. In Go, a game lasts a few hundred moves; in Starcraft there are 24 frames per second, and each frame has a possible set of actions. Go is a 19x19 board; Starcraft is a thousands-by-thousands board.

With Go, it is reasonable (though probably still extremely computationally expensive) to set the dimensionality of your data set as all possible moves, then take games as a data set, and straightforwardly train a neural network on it. I'm sure the AlphaGo team was more sophisticated than that and took a crazy number of clever shortcuts. But most of the cleverness there came from finding ways to conserve computational resources, not from trying to think of the deep strategies. How deep the strategies are in Go had virtually nothing to do with the difficulty of their task. Starcraft will be insurmountably harder to train than Go, regardless of whether or not its strategies are deeper (and I agree with you, Go is much deeper in this regard). Like I said, with Go, even though the team was more clever than this, it is feasible to brute-force train a NN on the exact moves. With Starcraft that isn't even remotely feasible, not with the greatest supercomputers in the world. The number of possible moves scales exponentially with the number of units you have, and again exponentially with the number of frames the game takes. They'll have to take extremely different and far more clever approaches to training an SC2 AI, and it will be a hell of a lot more difficult.

I think a lot of people here are thinking about this too much as if it were like teaching a human. If that were so, then yes, Go would be a lot harder to teach the strategies for. But the problem of picking the best strategy, when you have in mind the set of all strategies you can play, is pretty much solved in machine learning already. Any maximum likelihood estimator can make a better decision than pretty much any human, as long as it has the right inputs. Giving it the set of strategies it can use is the real problem, and it is much, much harder to train for an AI than for a human. And ridiculously harder for Starcraft than for Go.
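A back-of-envelope comparison of the numbers in this argument (the unit count and game length are illustrative assumptions):

board_points = 19 * 19        # Go: at most 361 legal choices per turn
n_units = 100                 # a modest StarCraft army plus buildings
subsets = 2 ** n_units        # every subset of units is a distinct selection
frames = 24 * 60 * 15         # decision points at 24 fps over a 15-minute game

print(board_points)                    # 361
print(f"2^{n_units} = {subsets:.2e}")  # ~1.27e+30 selections in a single frame
print(frames)                          # 21600 frames, each a decision point

Even before choosing a command and a map target, unit selection alone dwarfs Go's per-turn branching.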
andrewlt
March 28 2016 15:01 GMT
#87
You can easily cut SC2's difficulty by giving the computer a non-Korean visa.
kingjames01
March 28 2016 15:02 GMT
#88
On March 28 2016 23:50 The Bottle wrote:
Number of permutations is indeed important; in fact, it's the most important factor when it comes to making a self-learning AI. (…)


Though I agree with your basic thoughts, I have to point out that it is NOT POSSIBLE to brute-force solve the game of Go. There are an estimated 10^761 possible games. That is a number beyond our capability to imagine. With current technologies it is impossible to store the entire game tree.
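For a sense of scale (taking the commonly quoted ~10^80 atoms in the observable universe as the yardstick, an assumption for illustration):

games = 10 ** 761     # the estimate quoted above
atoms = 10 ** 80      # rough count of atoms in the observable universe
print(games // atoms) # one game per atom still leaves a factor of ~10^681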
playa
March 28 2016 15:05 GMT
#89
Starcraft 2 doesn't even deserve to have Starcraft in its name. I'd be surprised if DeepMind didn't get the better of the pros, and it will be a black eye for this game. This game used to be chess-like and strategic. Now it's a game with 0 options that lives on the past of BW. Strategy should be stripped from the genre label if SC2 pros lose. DeepMind should challenge pros in ZvP. I wonder how long it would take DeepMind to learn that every opponent will go phoenix into mass immortals.

A true test of AI here, ladies and gents. Is tic-tac-toe next? Look out, world. My only question is whether the AI will BM the creators of this boring, fraudulent strategy game. I'd like to see DeepMind BM someone for wasting its time.

User was temp banned for this post.
The Bottle
Last Edited: 2016-03-28 15:07:19
March 28 2016 15:05 GMT
#90
On March 29 2016 00:02 kingjames01 wrote:


Though I agree with your basic thoughts, I have to point out that it is NOT POSSIBLE to brute-force solve the game of Go. There are an estimated 10^761 possible games. That is a number beyond our capability to imagine. With current technologies it is impossible to store the entire game tree.


The number of board states is irrelevant. It's the number of possible moves you can make, given a board state, that's relevant. That's all you really need to give a training AI. My issue is the dimensionality of a training data set of moves. The number of dimensions of that set is the number of possible moves. For Go, with a given board state, it's actually not that much (in the hundreds, I'd say). For Starcraft, it's absurdly high. Think of the easiest machine learning problem: training a linear model in one dimension. The set of possible parameter values is infinite, but there are only two parameters, and thus it's really quick and easy to train.
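A toy version of that last point (my own construction, not from the post): a 1-D linear model has an infinite range of parameter values but only two parameters, so fitting it is trivial.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)   # ground truth: slope 3, intercept 0.5

slope, intercept = np.polyfit(x, y, 1)        # closed-form least squares
print(round(float(slope), 2), round(float(intercept), 2))   # ~3.0, ~0.5

What hurts is not the size of each dimension's range but the number of dimensions in the move encoding, which for Starcraft is enormous.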
The_Red_Viper
Last Edited: 2016-03-28 15:12:01
March 28 2016 15:11 GMT
#91
On March 29 2016 00:05 playa wrote:
Starcraft 2 doesn't even deserve to have Starcraft in its name. (…)

I agree completely with you! The same happened with chess and now Go; these games aren't worthy of the title "strategy game" when an AI can beat humans.
I heard the first Go players are already retiring and starting to play BW because they read the BW forums on TL; they came to the conclusion (based on the intelligent posts there) that BW is the epitome of strategic/tactical complexity.
Preach it!
kingjames01
March 28 2016 15:14 GMT
#92
On March 29 2016 00:05 The Bottle wrote:


The number of board states is irrelevant. It's the number of possible moves you can make, given a board state, that's relevant. (…)


Upon re-reading your previous post, I see that you were in fact discussing possible moves, not possible board states.
Mendelfist
Last Edited: 2016-03-28 16:34:40
March 28 2016 16:12 GMT
#93
On March 28 2016 23:50 The Bottle wrote:
Number of permutations is indeed important; in fact, it's the most important factor when it comes to making a self-learning AI. You still have to actually use the inputs of a game in order to train the gaming algorithm. The data set is way larger, and the task of purely training a neural network on this is highly intractable.

The problem is only more complex if you count every possible game state as a "possible move". That's a ridiculous approach to the problem, in the same way that it's ridiculous to try it for billiards, which you didn't comment on. I have no notion that Starcraft is an easy problem for an AI, or even easier than Go.

This is a different problem, in the same way that Go is a different problem than chess, and that's no doubt why DeepMind thinks it's interesting. Starcraft is much more similar to real-world problems than Go. The number of possible moves is for all practical purposes infinite, and there is no point in putting a number on it or saying that it has anything to do with complexity or how hard it is. It's a meaningless number. If we doubled the precision of all game state variables, the number of possible game states would increase dramatically. Does that mean that the problem becomes much harder? No. It's exactly the same problem. The infinite state space makes it belong to a different class of problems, an unsolved one. I'll not argue that.
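A tiny illustration of the precision argument (my construction, not Mendelfist's): halving the bits per state variable changes the count of representable states astronomically, yet the fitted model is the same.

import numpy as np

rng = np.random.default_rng(2)
x64 = rng.uniform(-1, 1, 1000)   # "high precision" state variable
x32 = x64.astype(np.float32)     # half the bits, vastly fewer representable values
y = 2.0 * x64 + rng.normal(0, 0.05, 1000)

print(np.polyfit(x64, y, 1)[0])  # slope ~2.0
print(np.polyfit(x32, y, 1)[0])  # same problem, same answer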
The Bottle
Last Edited: 2016-03-28 16:44:36
March 28 2016 16:33 GMT
#94
On March 29 2016 01:12 Mendelfist wrote:
The problem is only more complex if you count every possible game state as a "possible move". (…)


I'm not talking about the number of possible game states. That's not important for a machine learning algorithm. What matters is the number of possible moves you can make, that is, the way you encode a particular action. This is essential for making a training data set for your algorithm to learn from.

Your billiards example doesn't work here, because there's no self-learning AI algorithm for billiards, at least not that I know of. (If there is, let me know.) The ones that exist simply do an on-the-spot calculation for each move. And while it is intractable to consider all possible moves, they take a clever shortcut (one example is taking a large sample of N moves, properly separated in the angle-power space by k-means clustering, and picking the best one). It's scripting, similar to what Starcraft AI does now. The other problem with the billiards example is that each move can be completely encoded by three continuous variables (position along the rectangle, angle of shot, and power). Even though they're continuous and have an infinite set of values, encoding this into a data set of moves for a given game is extremely easy: it's an N×3 matrix of values, where N is the number of turns and 3 is the dimensionality of a move. (Although given the nature of billiards, I'm not sure self-learning would be any better than scripting; probably worse, in fact.) There's no analogous simple reduction like that in Starcraft.

The reason self-learning is so much harder than scripting is that you have to be much more meticulous about giving your algorithm the necessary degrees of freedom. Because you can't feed it any particular strategy, it has to be able to infer those strategies from knowing the moves it can make, and from using historic data to calculate which combination of moves maximizes the probability of success. They will have to find clever ways to transform the game input data in order to remove redundancies and coarsen the scale of discrete moves. I'm sure they did something like that with Go already, but it will be substantially harder for SC2. You say it's just a different problem; sure. But it's a much, much harder problem, one I'm not quite sure they'll solve, even knowing their success with Go.
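To make the billiards encoding concrete (a toy construction under the post's assumptions: one shot is fully described by position, angle, and power):

import numpy as np

rng = np.random.default_rng(0)
n_turns = 20
game = np.column_stack([
    rng.uniform(0.0, 1.0, n_turns),        # cue position along the table edge
    rng.uniform(0.0, 2 * np.pi, n_turns),  # shot angle in radians
    rng.uniform(0.0, 1.0, n_turns),        # normalized shot power
])
print(game.shape)   # (20, 3): a whole game in a tiny, fixed-width matrix

No comparably small fixed-width row exists for a StarCraft action, which spans a unit subset, a command, and a map target, issued every frame rather than every turn.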
Shuffleblade
March 28 2016 16:43 GMT
#95
Why is everyone discussing APM? Shouldn't EAPM be the more suitable metric? I doubt the AI will spam keys, so even if you cap APM it will still outmatch a human easily: 500 APM from a human could be 200 EAPM, while 500 APM from an AI could actually be 500 EAPM, so the AI would be allowed more than double the effective actions of a human.
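In numbers (the ratios are illustrative only):

def eapm(apm, effective_ratio):
    # Effective APM: the share of actions that actually do something (no spam).
    return apm * effective_ratio

human = eapm(500, 0.4)   # 500 raw APM, 60% spam -> 200 EAPM
bot = eapm(500, 1.0)     # same cap, no spam -> 500 EAPM
print(human, bot, bot / human)   # same APM cap, 2.5x the meaningful actions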
lichter
March 28 2016 17:08 GMT
#96
One aspect that people also misunderstand in comparing Go and SC2 is that of input/output.

For Go, it is relatively simple. The input is the game board at a given game state, which is all the current pieces in their positions. The output is a position on the board that corresponds to the move made that turn. The cycle goes back and forth until the game is resolved. This means that the AI and the human have exactly the same input/output, and the game is 100% strategy.

For SC2, will that same stipulation hold? Will they require the AI to use the same input/output as a human being? I believe they should; otherwise it's too big an advantage for the AI, and it will no longer be a test of strategy but rather of the efficiency of the input/output each side is using.

First, the input. There's the screen, minimap, control groups, unit selection, command card, supplies. That's a lot of information that the AI must learn to interpret. They aren't going to program into the AI what these things mean and how to use them; the AI has to learn that on its own. There's also the limitation that the AI can only look at one sector of the map at any given time. It has to learn how to deal with that as well.

For output, it's just as difficult. How would they program the AI so that its outputs mimic those of a human being? I don't even know how they'd do that. Limiting APM is one way, but that's an artificial cap and I don't know if it's going to be effective. This is a problem they have to figure out, and I doubt anyone here has an answer. The way they output has to be as similar as possible. Even then a machine will still have the advantage of 100% accuracy (perfect force fields, perfectly timed casting, perfect micro given APM restrictions).

The AI has to learn this input/output system on its own, from scratch. There is a lot more to consider compared to Go. This obstacle alone would be an amazing one to solve.

This is an important part of the discussion because the purpose of this test is to show advances in AI learning through strategy games. Unless the game is as close to 100% strategy as possible, like Go, it's pointless to play.
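One way to picture the constraint lichter describes (purely hypothetical types, invented here for illustration; nothing official):

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Observation:
    screen_pixels: bytes        # only the currently viewed sector of the map
    minimap_pixels: bytes       # low-resolution overview, like a human sees
    supply: Tuple[int, int]     # (used, cap), read off the UI

@dataclass
class Action:
    kind: str                   # "move_camera" | "select" | "command" | "hotkey"
    target: Tuple[int, int]     # screen coordinates, like a mouse click

MAX_APM = 300                   # an artificial cap, as the post notes

def may_act(actions_this_minute: int) -> bool:
    # Capping the count is easy; making timing and accuracy human-like
    # is the unsolved part.
    return actions_this_minute < MAX_APM

If the agent is restricted to Observation in and Action out, the contest stays about strategy rather than I/O bandwidth.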
rockslave
March 28 2016 17:09 GMT
#97
On March 28 2016 19:32 Mendelfist wrote:
As someone who has played chess (no rank), Go (10 kyu) and Starcraft (platinum) at a low amateur level, I say you are wrong. At the strategic level Go is enormously more complex and deep than both chess and Starcraft. (…)


Thank you for this post. It's so annoying to see so many people saying that Go is easy.

The reason Go was such an interesting game for AI to tackle is precisely that it has so many possible moves that plain backtracking search is hopeless.
rockslave
March 28 2016 17:10 GMT
#98
On March 29 2016 02:08 lichter wrote:
One aspect that people also misunderstand in comparing Go and SC2 is that of input/output. (…)


I would bet they won't even try to solve that. It's beside the point. They will probably just use the API and have the AI access unit positioning directly.
lichter
March 28 2016 17:12 GMT
#99
On March 29 2016 02:10 rockslave wrote:
I would bet they won't even try to solve that. It's beside the point. They will probably just use the API and have the AI access unit positioning directly.


I feel like that would be disingenuous to the purpose of this project. The amazing thing about Go is that it's 100% a battle of the mind, because the input/output is the same whether the player is a computer or a human. Unless they do the same for SC2, it won't be as meaningful as the breakthrough with Go.
Mendelfist
Last Edited: 2016-03-28 17:23:07
March 28 2016 17:14 GMT
#100
On March 29 2016 01:33 The Bottle wrote:
I'm not talking about the number of possible game states. That's not important for a machine learning algorithm. What matters is the number of possible moves you can make, that is, the way you encode a particular action. This is essential for making a training data set for your algorithm to learn from.

Why do you assume that the best way to solve this problem is to throw every single game state variable or pixel at a neural net and then hope that it somehow works out?

Your billiards example doesn't work here, because there's no self-learning AI algorithm for billiards, at least not that I know of.

Then imagine one, one that learns, for example, by self-play. Do you really think it would have a hard time finding "the right moves" just because they are infinite in number? Edit: And the question is not whether it can be better than a script, which can be near perfect. The question is whether you think it's a hard problem.

They will have to find clever ways to transform the game input data in order to remove redundancies and coarsen the scale of discrete moves. I'm sure they did something like that with Go already, but it will be substantially harder for SC2. You say it's just a different problem; sure. But it's a much, much harder problem, one I'm not quite sure they'll solve, even knowing their success with Go.

Yes, THIS is the problem, and once you have done this the "number of possible moves" in the original problem is irrelevant. That only tells you that you have THIS problem on your hands and that you can't solve it with ordinary search algorithms. You have to find a way to reduce the original problem by levels of abstraction. I don't know if there is a way for an AI to find these abstractions by itself. Maybe that's how AlphaGo works. In any case, I'm not in the least convinced that it's as hard as you are trying to make it sound. In the simplest form you can have an ordinary scripted bot that asks an AI for advice: "attack now", "build sentries", "expand there", etc. Or you could throw every pixel at it, like you want. I don't think that would work. Or something in between. How about that?
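A minimal sketch of that "scripted bot that asks an AI for advice" idea (hypothetical throughout: the advisor here is random where a real one would be a trained model):

import random

HIGH_LEVEL_ACTIONS = ["attack", "expand", "build_army", "scout", "defend"]

def advisor(features):
    # Stand-in for a learned policy over abstract decisions; a real one
    # would map game features to an action instead of choosing randomly.
    return random.choice(HIGH_LEVEL_ACTIONS)

def scripted_executor(action):
    # Hand-written micro: turns one abstract decision into the many
    # concrete per-frame commands the learned model never has to emit.
    return f"running script for '{action}'"

features = {"minerals": 600, "army_supply": 40, "enemy_seen": True}
print(scripted_executor(advisor(features)))

The learned component then faces a branching factor of five, not 2^N, which is exactly the reduction by abstraction described above.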