DeepMind sets AlphaGo's sights on SCII - Page 7

Forum Index > SC2 General
The Bottle
Profile Joined July 2010
242 Posts
Last Edited: 2016-03-28 20:26:34
March 28 2016 20:24 GMT
#121
On March 29 2016 04:32 Mendelfist wrote:
On March 29 2016 03:49 The Bottle wrote:
The more complex the game is (and I mean this in terms of permutations of moves, not in terms of heuristic strategy), the harder this task is. Thus it will be monumentally difficult for Starcraft.

And I'm saying that you're just making things up. An intractable large number of possible moves (or number of input variables) doesn't necessarily mean that the problem is hard (although it is a requirement) and reducing Starcraft to a problem on a higher level of abstraction than pixels or coordinates isn't necessarily very hard either. At least you haven't showed any arguments for it. Once you have moved to a high abstraction level Starcraft IS simple compared to Go, which cannot be reduced to builds or strategies in any similar way. This is the reason why I think it's at least possible that Starcraft is even easier to master than Go for an AI.

it is incredibly difficult to reduce the space in an intelligent enough way to minimally reduce the information of best moves possible

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.


You said "reducing Starcraft to a problem on a higher level of abstraction than pixels or coordinates isn't necessarily very hard either". For scripting that's true. For supervised and reinforcement learning... well... I really think you have to work a lot harder to justify this statement. Unless you think scripting is the way that they're going to beat a pro player, in which case... well, I agree with all of Boxer's objections to that.

I can try to explain why moving Starcraft to a high level of abstraction to make a training set for a learning algorithm is incredibly difficult, but it's hard to do so without explicitly explaining how a neural network or Monte Carlo tree search works. Without explaining them, I can say that with these algorithms you need to associate some sort of metric with the set of legal moves for a given board state. For AlphaGo this is not hard to do, and in fact they explain exactly how they do it in the paper I linked in my last post. And if the space of actions is too large they can use sampling methods (hence the Monte Carlo tree search). However, intelligent actions cannot be so sparse within the space of all actions that the vast majority of Monte Carlo iterations fail to find even one across many samples. Not unless you significantly reduce that space and coarse grain it.
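The sampling concern in the paragraph above can be put in rough numbers. This is a back-of-envelope sketch, not a model of StarCraft's actual action space; the fractions below are invented for illustration.

```python
# If "intelligent" actions make up a fraction p of the action space, the
# chance that n uniform Monte Carlo samples find at least one is
# 1 - (1 - p)^n. When p is tiny, most rollouts never see a good action.
def p_hit(p_good: float, n_samples: int) -> float:
    """Probability that at least one of n uniform samples is a good action."""
    return 1.0 - (1.0 - p_good) ** n_samples

# Dense good actions (Go-like): sampling is near-certain to find one.
dense = p_hit(0.1, 50)
# Sparse good actions (one in a million): a thousand samples almost never do.
sparse = p_hit(1e-6, 1000)
```

The numbers make the asymmetry vivid: uniform sampling works only when reasonable moves are a fat slice of the space, which is exactly what a coarse graining has to manufacture.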

The space reduction and coarse graining is extremely difficult, though. You can, for example, decide that sending a specific army composition to attack a general area is as fine-grained as you want to get. That would work fine for some army compositions (like a WoL-style Protoss deathball) but it certainly wouldn't work for others (like ling-bane, tank-medivacs, or disruptors). You have to tune the degrees of freedom quite finely if you don't want an army to simply throw the game by not even using 10% of its utility. If you coarse grain the learning process to a very high level of macro strategy then it would be easy to train, but you would never get good micro (or any micro, really) without adding scripting to it. (Which may not be off the table.) However, if you think that with pure learning and no scripting you can coarse grain the training sets to very high-level macro strategy and still train an algorithm that can beat pros, then we probably just completely disagree on the importance of small-scale micro.
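The trade-off described above can be made concrete with a hypothetical macro-level action type (all names here are invented): once "send this composition to that area" is the finest move available, two situations that demand very different micro collapse into the same action.

```python
from dataclasses import dataclass
from typing import Tuple

# A hypothetical coarse-grained "macro move" at the granularity discussed
# above: an order type, an army mix, and a map region -- no unit-level control.
@dataclass(frozen=True)
class MacroMove:
    order: str                     # e.g. "attack", "harass", "expand"
    composition: Tuple[str, ...]   # army mix, abstracted from exact counts
    region: str                    # map region rather than pixel coordinates

# The scheme treats these as the same move, even though in-game one might be
# a clean flank and the other a baneling detonation on your own lings.
a = MacroMove("attack", ("zergling", "baneling"), "enemy_third")
b = MacroMove("attack", ("zergling", "baneling"), "enemy_third")
assert a == b  # the coarse graining has erased the micro that decides the fight
```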

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.

But you do that by approximating the best possible set of moves you can make. That's what a machine learning algorithm does. You start with an ideal (in this case, there exists a set of moves that are maximally good) and you try to approximate this ideal as well as you can.
BronzeKnee
Profile Joined March 2011
United States, 5219 Posts
March 28 2016 20:30 GMT
#122
On March 29 2016 04:15 WinterViewbot420 wrote:
Terran drops could be very abusive by this robot if we assume it's not restricted.


Well, then the computer would be cheating. It needs to have the same restrictions a human has: a mouse, a keyboard, and a monitor. Alternatively, I believe we'd be better off letting humans control the game with their minds. That would be a fairer match, with only the monitor being the limiting factor.
chocorush
Profile Joined June 2009
694 Posts
Last Edited: 2016-03-28 20:44:03
March 28 2016 20:43 GMT
#123
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.
Mendelfist
Profile Joined September 2010
Sweden, 356 Posts
March 28 2016 20:57 GMT
#124
On March 29 2016 05:24 The Bottle wrote:
You said "reducing Starcraft to a problem on a higher level of abstraction than pixels or coordinates isn't necessarily very hard either". For scripting that's true. For supervised and reinforcement learning... well... I really think you have to work a lot harder to justify this statement. Unless you think scripting is the way that they're going to beat a pro player, in which case... well, I agree with all of Boxer's objections to that.

No, I'm not talking about a pure scripted bot, nor a pure AI. I'm imagining a hybrid, as I hinted earlier. And yes, I have a basic layman's knowledge of both neural nets and the Monte Carlo search method. I have followed the development of Go programs since the very beginning and saw the Monte Carlo search revolution myself, but I'm not even sure Monte Carlo or any variant of current tree search methods is applicable to Starcraft. I can't see how. I know I'm not being very specific, but today's Starcraft scripts can play a decent game without ANY AI techniques at all, just hard-coded rules. I think that using just a policy network would help a lot. At what "granularity" you want to apply it depends on how much money you want to throw at the problem.
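The hybrid sketched here (a learned policy choosing among hard-coded behaviors) might look roughly like the following. Everything in it is a stand-in: the executor names, the fixed probabilities, and the greedy selection are placeholders for a trained policy network, not anyone's actual design.

```python
# Hypothetical hybrid controller: a policy network picks a high-level option,
# scripted rules handle the execution. All names and values are invented.
SCRIPTED_EXECUTORS = {
    "defend": lambda state: "siege tanks behind the mineral line",
    "expand": lambda state: "build a hatchery at the nearest base",
    "attack": lambda state: "a-move the main army to the enemy third",
}

def policy_network(state):
    """Stand-in for a trained policy net: returns a distribution over the
    scripted options. A real network would condition on `state`."""
    return {"defend": 0.5, "expand": 0.3, "attack": 0.2}

def hybrid_step(state):
    probs = policy_network(state)
    choice = max(probs, key=probs.get)        # greedy selection, for the sketch
    return SCRIPTED_EXECUTORS[choice](state)  # scripted execution of the choice
```

The "granularity" question in the post is exactly the question of how many entries `SCRIPTED_EXECUTORS` has and how much each one hides from the learner.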

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.

But you do that by approximating the best possible set of moves you can make. That's what a machine learning algorithm does. You start with an ideal (in this case, there exists a set of moves that are maximally good) and you try to approximate this ideal as well as you can.

With unlimited resources perhaps, but a compromise may be good enough to beat a world champion, as Lee Sedol discovered. The games used to train the policy network were games from public Go servers. They weren't even professional games. It was good enough.
todespolka
Profile Joined November 2012
221 Posts
March 28 2016 21:00 GMT
#125
Yes, input needs to be limited. It's similar to chess: if we let the machine check an unlimited number of moves, it will find the winning move by brute force. That is a weak form of intelligence and has more in common with a machine. The interesting question is whether it can find a solution in fewer than N moves, or in the case of SC2, fewer than N inputs.

The second point is unit recognition. A human generally focuses on the center of the screen. The AI will take in the whole screen and have a clear advantage.

The third point is on-the-fly adaptation under incomplete information. The advantage goes clearly to the human brain, because it is literally made for that task. Consider the micro you do: even if it's muscle memory, you adapt on the fly, e.g. skill recognition: How well does he micro? Can I outmicro him? How can I minimize my costs?


The question I am interested in is whether it will find a better strategy than what we call standard today.
Fecalfeast
Profile Joined January 2010
Canada, 11355 Posts
March 28 2016 21:03 GMT
#126
It's like everyone is just willfully ignoring lichter and the fact that this AI will have to learn how to select its command center, how to create hotkeys, and how to navigate the interface based only on visual input. They will make it use at least an emulation of a keyboard and mouse. It will not have access to the game code. Any deviation from this would defeat the entire purpose of the DeepMind AI. If you watch the video where they show off the Atari AI, for the first two hundred games of Breakout it played, it could hardly hit the ball with the paddle.

If Google can manage an AI that teaches itself how to become the Automaton 2000, with scouting and decision making, from scratch, I will still be completely amazed.

Anyone who has doubts should watch the video from the 9:14 mark.

Moderator | INFLATE YOUR POST COUNT; PLAY TL MAFIA
todespolka
Profile Joined November 2012
221 Posts
March 28 2016 21:09 GMT
#127
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!
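For what it's worth, the "few lines of code" claim is plausible: a per-unit splash-dodge rule really is tiny. A toy sketch with invented distances, not actual game logic:

```python
import math

# Toy splash-dodge rule: if a unit is inside the blast radius, step it
# radially away from the impact point. Positions are 2-D tuples; the
# numbers are illustrative, not real game values.
def dodge(unit_pos, splash_pos, splash_radius, step=1.0):
    dx = unit_pos[0] - splash_pos[0]
    dy = unit_pos[1] - splash_pos[1]
    dist = math.hypot(dx, dy)
    if dist >= splash_radius:
        return unit_pos                      # already safe, don't move
    if dist == 0.0:
        dx, dy, dist = 1.0, 0.0, 1.0         # on top of the blast: pick a direction
    return (unit_pos[0] + step * dx / dist,  # move directly away from the impact
            unit_pos[1] + step * dy / dist)
```

With no APM cap, running this every frame for every unit is trivial for a machine and impossible for a human, which is the whole point being made above.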
chocorush
Profile Joined June 2009
694 Posts
March 28 2016 21:26 GMT
#128
On March 29 2016 06:09 todespolka wrote:
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!


Is that really the end result of unlimited apm? I expect the computer to learn that focusing only on micro won't win games. And how will it attack if it's scripted to stay out of range? It will still need to learn army composition and learn how to engage strategically.

Just requiring it to follow the same rules as humans like only being able to micro units while its focus is on the screen will require it to have to learn to ration its focus as a resource.

I don't expect it to be able to micro while thinking efficiently, unless it learns to do a lot of meaningless spamming while it decides the next best optimal move, which is pretty humanlike already.
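The "ration its focus as a resource" constraint proposed above can be modeled very simply. A toy sketch, with invented region names, in which moving the camera spends an action from the same budget as the commands themselves:

```python
# Toy model of screen focus as a resource: the agent may only act on the
# region its camera is on, and each camera move costs one action from the
# same APM budget as the commands. All names and budgets are invented.
def ration_focus(commands, apm_budget, start_focus):
    """commands: list of (screen_region, action) pairs the agent wants to issue.
    Returns the actions it manages to issue before the budget runs out."""
    focus, issued, budget = start_focus, [], apm_budget
    for region, action in commands:
        if region != focus:
            if budget == 0:
                break
            budget -= 1          # the camera move spends an action
            focus = region
        if budget == 0:
            break
        budget -= 1              # the command itself spends an action
        issued.append(action)
    return issued
```

Under this rule, micro in two places at once is no longer free: every screen switch is an action the agent cannot spend on units, which is roughly the human condition.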
Karis Vas Ryaar
Profile Blog Joined July 2011
United States, 4396 Posts
Last Edited: 2016-03-28 21:30:12
March 28 2016 21:29 GMT
#129
On March 29 2016 06:26 chocorush wrote:
On March 29 2016 06:09 todespolka wrote:
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!


Is that really the end result of unlimited apm? I expect the computer to learn that focusing only on micro won't win games. And how will it attack if it's scripted to stay out of range? It will still need to learn army composition and learn how to engage strategically.

Just requiring it to follow the same rules as humans like only being able to micro units while its focus is on the screen will require it to have to learn to ration its focus as a resource.

I don't expect it to be able to micro while thinking efficiently, unless it learns to do a lot of meaningless spamming while it decides the next best optimal move, which is pretty humanlike already.


As Artosis put it in two articles on ESPN, not limiting APM would be like putting a world-class runner in a race against a car. Completely unfair. (Good articles, by the way.)
"I'm not agreeing with a lot of Virus's decisions but they are working" Tasteless. Ipl4 Losers Bracket Virus 2-1 Maru
chocorush
Profile Joined June 2009
694 Posts
Last Edited: 2016-03-28 21:36:49
March 28 2016 21:36 GMT
#130
On March 29 2016 06:29 Karis Vas Ryaar wrote:
On March 29 2016 06:26 chocorush wrote:
On March 29 2016 06:09 todespolka wrote:
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!


Is that really the end result of unlimited apm? I expect the computer to learn that focusing only on micro won't win games. And how will it attack if it's scripted to stay out of range? It will still need to learn army composition and learn how to engage strategically.

Just requiring it to follow the same rules as humans like only being able to micro units while its focus is on the screen will require it to have to learn to ration its focus as a resource.

I don't expect it to be able to micro while thinking efficiently, unless it learns to do a lot of meaningless spamming while it decides the next best optimal move, which is pretty humanlike already.


As Artosis put it in two articles on ESPN, not limiting APM would be like putting a world-class runner in a race against a car. Completely unfair. (Good articles, by the way.)


Putting humans against computers in chess is also completely unfair. That doesn't make the AI problem illegitimate, and it's not like the technology is even there to make the right decision fast enough. If AI takes one second to decide an optimal move, how much APM does it really have?
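The closing question is just arithmetic: if every action waits on a fresh decision, latency puts a hard ceiling on effective APM. The latencies below are illustrative, not measurements.

```python
# Decision latency as an APM cap: one decision per action means
# 60 / (seconds per decision) actions per minute, at most.
def effective_apm(seconds_per_decision: float) -> float:
    return 60.0 / seconds_per_decision

# One second per move caps the AI at 60 APM, well below pro play;
# only at tens of milliseconds per move does superhuman APM appear.
slow = effective_apm(1.0)
fast = effective_apm(0.05)
```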
JeffKim
Profile Blog Joined November 2013
Korea (South), 36 Posts
March 28 2016 22:13 GMT
#131
They need to focus on BW first.

People in this thread seem to fail to acknowledge key points of AI.

Artosis does as well in his article "series".
The Bottle
Profile Joined July 2010
242 Posts
Last Edited: 2016-03-28 23:02:40
March 28 2016 22:56 GMT
#132
On March 29 2016 05:57 Mendelfist wrote:
On March 29 2016 05:24 The Bottle wrote:
You said "reducing Starcraft to a problem on a higher level of abstraction than pixels or coordinates isn't necessarily very hard either". For scripting that's true. For supervised and reinforcement learning... well... I really think you have to work a lot harder to justify this statement. Unless you think scripting is the way that they're going to beat a pro player, in which case... well, I agree with all of Boxer's objections to that.

No, I'm not talking about a pure scripted bot, nor a pure AI. I'm imagining a hybrid, as I hinted earlier. And yes, I have a basic layman's knowledge of both neural nets and the Monte Carlo search method. I have followed the development of Go programs since the very beginning and saw the Monte Carlo search revolution myself, but I'm not even sure Monte Carlo or any variant of current tree search methods is applicable to Starcraft. I can't see how. I know I'm not being very specific, but today's Starcraft scripts can play a decent game without ANY AI techniques at all, just hard-coded rules. I think that using just a policy network would help a lot. At what "granularity" you want to apply it depends on how much money you want to throw at the problem.

We are not trying to find the best moves possible. We are trying to beat the world champion, or someone similar. You are again making this harder than it is.

But you do that by approximating the best possible set of moves you can make. That's what a machine learning algorithm does. You start with an ideal (in this case, there exists a set of moves that are maximally good) and you try to approximate this ideal as well as you can.

With unlimited resources perhaps, but a compromise may be good enough to beat a world champion, as Lee Sedol discovered. The games used to train the policy network were games from public Go servers. They weren't even professional games. It was good enough.


A hybrid of scripting at the small scale and deep learning at the large scale may be the best way to go, but there's still a huge problem, one that doesn't exist in simply defined discrete board games like Go. As you can probably predict I'm going to say, that problem is the coarse graining of the data itself. I think you're really underestimating the scale of this problem.

The problem is divided into two parts. Actually defining the set of "rules" (i.e. possible actions) that the computer can take, and then getting an algorithm to recognize, from the metadata of historic games, when those actions have actually been taken (i.e. transforming the set of discrete actions in the game into the actions that you created in your state space of moves).

The first is problematic because it's difficult to make the choices. You have to strike a balance between making the set of actions robust enough that the algorithm can actually learn interesting strategies on its own, yet constrained enough that the computation is tractable. Any time you design a new "move" you have to ask yourself, "How badly did I just restrict the freedom of the algorithm? I just set move A and move B to be equivalent, but how many scenarios exist in which the outcomes of those moves are drastically different?" With a game like Starcraft, which is extremely sensitive to small missteps, you're pretty much never going to do this the "right" way. But will the cumulative flaws of your coarse graining scheme be small enough not to completely botch the execution? That's hard to answer, and I honestly don't know. But this is a huge problem, and it doesn't even exist for Go.

The second part is actually recognizing when the set of "moves" that you defined are actually executed based on the metadata of a game. The only real objective way to store the data of a game is to store the exact actions, since machines don't recognize heuristics such as "he transitioned from ultras to mutas and then diverted his forces to his main to harass his fourth". So you have to be able to recognize what set of actions correspond to what "move" in your coarse grained space, and that's well out of the realm of the methods they used for the Go algorithm.

Now maybe the above that I described is doable. (Assuming they do a mix of deep learning for high level strategy and scripting for small scale actions). In fact I hope it is, and I hope they make an amazing algorithm that can beat anyone. But it's way harder. And, what I have been trying to argue here is that the reason that it's harder is because of the sheer complexity of the space of possible actions in Starcraft, compared to Go. The entire reason the problems I listed above exist is because of that complexity. And the more sensitive the outcome of the game is to small deviations in those discrete actions, the harder it is to coarse grain them. And this is all just to create a workable training set for the model, whereas doing the same in Go is pretty much trivial, and explained in a couple of sentences in the paper.
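The second problem raised above, recognizing coarse "moves" in a replay's raw action log, can be made concrete with a toy labeler. The event format, thresholds, and move names below are all invented; the point is how crudely hand-written rules carve up real action sequences.

```python
# Toy replay labeler: map a slice of raw (command, unit_type) events onto a
# hand-defined coarse move. Thresholds and names are invented for illustration.
def label_coarse_move(actions):
    """actions: list of raw (command, unit_type) events from a replay slice.
    Returns a heuristic high-level label, or None when no rule matches --
    and most real sequences match nothing cleanly, which is the problem."""
    trained = [unit for cmd, unit in actions if cmd == "train"]
    attacks = [unit for cmd, unit in actions if cmd == "attack"]
    if trained.count("mutalisk") >= 5 and not attacks:
        return "muta_switch"
    if len(attacks) >= 8:
        return "major_engagement"
    return None
```

A heuristic like "he transitioned from ultras to mutas and diverted forces to harass the fourth" would need many such rules stacked together, each one a fresh opportunity to mislabel the training data.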
ThunderBum
Profile Joined November 2010
Australia, 192 Posts
March 28 2016 23:32 GMT
#133
We see videos of micro where a-moved banelings follow marines off creep and lose. Maybe a human player won't a-move his banelings off creep? Or where zerglings avoid splash from tanks; can they also do that against marines? Or medivac+tank micro against projectile attacks; maybe the human will target the medivac or build air units?

The amazing micro tricks an ai can do are limited in application but we see the videos and focus so much on how amazing one side of the micro is that we neglect to consider that the other player wouldn't take that engagement.
redviper
Profile Joined May 2010
Pakistan, 2333 Posts
March 28 2016 23:47 GMT
#134
On March 28 2016 13:46 a4bisu wrote:
APM is not the point. A large portion of human APM is meaningless and just serves as warm-up for the spikes in big fights.

SC is a totally different game from Go. People say it is not "perfect information", meaning the AI does not know exactly what its opponent is doing in an SC game, not as completely as the black and white stones on a Go board.

Suppose two medivacs are approaching the zerg AI's second base. The AI does not know what's inside: marines, marines and mines, or nothing? The AI needs to predict the possible drop location: the main base, the second base, or one for each? All the possible scenarios require defense strategies, with consideration of limited resources and efficiency. All these scenarios develop in real time and can shift from one to another within milliseconds as the medivacs make a boosted turn.


People keep bringing this up, but how do people do it? They use their experience to predict what's going to happen. Sometimes they are wrong, sometimes they are right. The AI will be the same, probably a bit better, because its recall of existing information in the game is perfect (unlike a person, who can forget some small thing).
intotheheart
Profile Blog Joined January 2011
Canada, 33091 Posts
March 29 2016 00:07 GMT
#135
Do we know if they'll cap the EAPM from any official sources?
kiss kiss fall in love
kingjames01
Profile Blog Joined April 2009
Canada, 1603 Posts
March 29 2016 00:16 GMT
#136
On March 29 2016 09:07 IntoTheheart wrote:
Do we know if they'll cap the EAPM from any official sources?


Nothing yet from anything that I've read.
Who would sup with the mighty, must walk the path of daggers.
DuckloadBlackra
Profile Joined July 2011
225 Posts
March 29 2016 00:24 GMT
#137
On March 29 2016 06:36 chocorush wrote:
On March 29 2016 06:29 Karis Vas Ryaar wrote:
On March 29 2016 06:26 chocorush wrote:
On March 29 2016 06:09 todespolka wrote:
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!


Is that really the end result of unlimited apm? I expect the computer to learn that focusing only on micro won't win games. And how will it attack if it's scripted to stay out of range? It will still need to learn army composition and learn how to engage strategically.

Just requiring it to follow the same rules as humans like only being able to micro units while its focus is on the screen will require it to have to learn to ration its focus as a resource.

I don't expect it to be able to micro while thinking efficiently, unless it learns to do a lot of meaningless spamming while it decides the next best optimal move, which is pretty humanlike already.


As Artosis put it in two articles on ESPN, not limiting APM would be like putting a world-class runner in a race against a car. Completely unfair. (Good articles, by the way.)


Putting humans against computers in chess is also completely unfair. That doesn't make the AI problem illegitimate, and it's not like the technology is even there to make the right decision fast enough. If AI takes one second to decide an optimal move, how much APM does it really have?


Invalid comparison: computers in chess are not AI. They're exactly what we're trying to avoid.
DuckloadBlackra
Profile Joined July 2011
225 Posts
Last Edited: 2016-03-29 00:28:00
March 29 2016 00:27 GMT
#138
On March 29 2016 06:26 chocorush wrote:
On March 29 2016 06:09 todespolka wrote:
On March 29 2016 05:43 chocorush wrote:
I think it will be more interesting to see how the AI does first before we start worrying about handicapping for it to be "fair" (is the point of showing an AI that can play Starcraft supposed to be fair?).

Considering how many seconds even the simplest move took to compute in Go, I'm not convinced that we would need to worry about too high of an APM rate to worry about the engineering problem more than the computer science problem.


The problem is the following. If you allow unlimited APM, it will move all units away from splash and keep units at a distance. That's a few lines of code; they won't even need a real AI, and the human will be helpless. But that has more in common with a machine than with an intelligence. It's not the question we seek. We want to know if it can solve the problem in fewer than N inputs. That requires strategy!


Is that really the end result of unlimited apm? I expect the computer to learn that focusing only on micro won't win games. And how will it attack if it's scripted to stay out of range? It will still need to learn army composition and learn how to engage strategically.

Just requiring it to follow the same rules as humans like only being able to micro units while its focus is on the screen will require it to have to learn to ration its focus as a resource.

I don't expect it to be able to micro while thinking efficiently, unless it learns to do a lot of meaningless spamming while it decides the next best optimal move, which is pretty humanlike already.


That kind of micro is so stupidly effective that it would hardly need to play intelligently to secure the win. It defeats the purpose of what we're trying to achieve here.
chocorush
Profile Joined June 2009
694 Posts
March 29 2016 00:34 GMT
#139
On March 29 2016 09:24 DuckloadBlackra wrote:
On March 29 2016 06:36 chocorush wrote:
On March 29 2016 06:29 Karis Vas Ryaar wrote:
As Artosis put it in two articles on ESPN, not limiting APM would be like putting a world-class runner in a race against a car. Completely unfair. (Good articles, by the way.)


Putting humans against computers in chess is also completely unfair. That doesn't make the AI problem illegitimate, and it's not like the technology is even there to make the right decision fast enough. If AI takes one second to decide an optimal move, how much APM does it really have?


Invalid comparison, computers in chess are not AI. They're exactly what we're trying to avoid.


Please explain how computer chess is not AI if you want to invalidate the comparison.
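A quick sanity check on the "how much APM does it really have?" question above, with an assumed (not quoted) decision time:

```python
# Back-of-envelope: an AI that needs one full second per decision
# acts at 1 action/sec, i.e. 60 APM -- well under the ~300+ APM
# typical of progamers. The decision time here is an assumption.
decision_time_s = 1.0
actions_per_minute = 60 / decision_time_s
print(actions_per_minute)  # 60.0
```

So unless decision latency drops well below a second, raw APM is not automatically the AI's advantage.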
Fecalfeast
Profile Joined January 2010
Canada11355 Posts
March 29 2016 00:36 GMT
#140
On March 29 2016 09:34 chocorush wrote:
Please explain how computer chess is not AI if you want to invalidate the comparison.

Apologies for answering something not directed at me, but one of the main goals of DeepMind's AI is to create "general AI" rather than "focused AI", so while a chess bot is certainly AI, it's not the type of AI that DeepMind is looking to create. The video I linked earlier explains this in more depth before the Atari part.