Go - AlphaGo (Google) vs Lee Sedol (world champ)

Forum Index > General Games
BlueRoyaL
Profile Blog Joined February 2006
United States2493 Posts
March 09 2016 17:06 GMT
#1
Hey guys,

I'm not sure how many of you are aware of the old Chinese game "Go", or of the current Bo5 match going on between Google's deep learning AI, AlphaGo, and the world champion, Lee Sedol, hailing from South Korea.

Late last year, AlphaGo was able to knock off Europe's champion 5 to nil. So what's the big deal this time around?

Simply put, the level of competition. Europe's champion, Fan Hui, is a 2-dan professional; Lee Sedol is 9-dan. The statistical probability of a 9-dan player beating a 2-dan player is over 95%.

Last night, the first game was played, with AlphaGo taking the early lead 1-0. This marks an amazing point of progress for AI, and more specifically, for deep learning. The reason AI hasn't been able to handily beat professional Go players until very recently is simply the complexity of the game. The number of possible permutations in Go exceeds the number of atoms in the universe (and for marked effect, by several orders of magnitude at that!). If you're a visual person, Go has about 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 possible permutations. So it's pretty clear that computers can't win these games through brute-force computation. The computational power to do so simply doesn't exist, especially considering that these matches are played with an allotted total time for each player, similar to professional chess matches.
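To get a feel for those numbers, here is a quick back-of-the-envelope check in Python. It uses 3^361 (each of the 361 points on a 19x19 board being empty, black, or white) as a crude upper bound on board configurations, not the exact count of legal positions:

```python
# Crude scale check: 3^361 configurations as an upper bound on 19x19 boards.
board_configs = 3 ** 361
atoms_in_universe = 10 ** 80  # commonly cited rough estimate

print(len(str(board_configs)))                 # 173 digits, i.e. about 10^172
print(board_configs > atoms_in_universe ** 2)  # True: beyond (atoms)^2
```

So even squaring the number of atoms in the universe still falls short, which is why pure brute force is hopeless here.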

Google's AlphaGo has gotten to this point through deep learning. It's able to view recordings of professional matches and learn from them. Not only can it learn from the moves and mistakes of its own games, it also has the huge advantage that it can play against itself at speeds incomprehensible to us simple humans.

Anyways, I thought some of you might find it interesting. I don't play Go much myself, and I don't have the time to watch the live streams of the matches. But I think it's fascinating because at some point, there may not exist a game out there that some form of AI can't beat us at.

Here's a few extras (including live stream link for those of you interested):

1st match recording:
www.youtube.com

List of live recordings and their schedules:
www.youtube.com

An awesome Wired article, detailing the history of Go and why (at that time, 2014) computers still can't beat humans:
www.wired.com

An ArsTechnica article about this current matchup:
arstechnica.com

And an awesome and relevant xkcd image (although it now needs updating!):

Hope this has stirred up some interest. These are amazing times we live in!
WHAT'S HAPPENIN
Powerpill
Profile Blog Joined October 2002
United States1692 Posts
March 09 2016 17:42 GMT
#2
Skynet begins with Go? Seriously though, I was impressed with Deep Blue all those years ago and am equally impressed now (Go, I think, would be much harder to program than a chess AI, as you said, with the near-infinite possible permutations).
The pretty things are going to hell, they wore it out but they wore it well
RoyGBiv_13
Profile Blog Joined August 2010
United States1275 Posts
Last Edited: 2016-03-09 19:27:21
March 09 2016 19:26 GMT
#3
The algorithm that AlphaGo used is a Monte Carlo tree search algorithm. In this algorithm, you start with a couple guesses as to reasonable plays based on heuristics, then evaluate each of those guesses by picking random counter moves for each potential move and seeing what can work and what can't. What AlphaGo has improved upon since this algorithm first came into Go in 2006 is two neural networks, and the training techniques for each one:
1) A network for predicting good potential moves in a present situation
2) A network for evaluating a board position

These neural networks make the tree search algorithm much more effective by eliminating obviously bad moves from the search and allowing the algorithm to evaluate board positions without having to simulate more future plays.
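The "random counter moves" idea above can be sketched in a few lines of Python on a toy game (Nim: take 1-3 stones, taking the last stone wins). This is only the flat random-playout step, not AlphaGo's full tree search or its networks:

```python
import random

def random_playout(stones, mover_is_me):
    """Finish the game with uniformly random moves; True if 'me' wins."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return mover_is_me  # whoever just moved took the last stone
        mover_is_me = not mover_is_me

def best_move(stones, playouts=2000):
    """Score each legal move by random playouts and pick the best one."""
    scores = {}
    for move in range(1, min(3, stones) + 1):
        left = stones - move
        if left == 0:
            scores[move] = 1.0  # taking the last stone wins outright
        else:
            # after our move, the opponent moves next (mover_is_me=False)
            wins = sum(random_playout(left, False) for _ in range(playouts))
            scores[move] = wins / playouts
    return max(scores, key=scores.get)
```

From 3 stones it takes all 3 (an immediate win); with more playouts the scores approach each move's true win rate against random play. The neural networks in AlphaGo replace both the "which moves to try" and "finish the game randomly" parts with learned estimates.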

Also: https://twitter.com/deeplearning4j/status/706541229543071745
Any sufficiently advanced technology is indistinguishable from magic
ne4aJIb
Profile Blog Joined July 2011
Russian Federation3209 Posts
March 09 2016 19:47 GMT
#4
I guess they mean Brood War
Bisu,Best,Stork,Jangbi and Flash, Fantasy, Leta, Light and Jaedong, Hydra, Zero, Soulkey assemble in ACE now!
TheEmulator
Profile Blog Joined July 2010
28085 Posts
March 09 2016 19:48 GMT
#5
I watched a bit of the first match and will hopefully catch more of the next one. I have a feeling Lee Sedol is gonna get 5-0'd though.
Administrator
nepeta
Profile Blog Joined May 2008
1872 Posts
March 09 2016 19:58 GMT
#6
Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.
Broodwar AI :) http://sscaitournament.com http://www.starcraftai.com/wiki/Main_Page
Oshuy
Profile Joined September 2011
Netherlands529 Posts
March 09 2016 20:10 GMT
#7
On March 10 2016 04:26 RoyGBiv_13 wrote:
The algorithm that AlphaGo used is a Monte Carlo tree search algorithm. In this algorithm, you start with a couple guesses as to reasonable plays based on heuristics, then evaluate each of those guesses by picking random counter moves for each potential move and seeing what can work and what can't. What AlphaGo has improved upon since this algorithm first came into Go in 2006 is two neural networks, and the training techniques for each one:
1) A network for predicting good potential moves in a present situation
2) A network for evaluating a board position

These neural networks make the tree search algorithm much more effective by eliminating obviously bad moves from the search and allowing the algorithm to evaluate board positions without having to simulate more future plays.

Also: https://twitter.com/deeplearning4j/status/706541229543071745


The learning of the various networks is a fun part. First supervised learning to get a network that correctly predicts the next "human" move (they let it train on a games database until it matched the human move ~60% of the time), then several copies of the network play against each other in reinforcement learning on the whole game (when a game is won, the network changes from that game are weighted more). The best network after ... lots of games was elected to predict the potential moves.

The second network is then used on the complete set, trained until it accurately provides an outcome (color winning) from a given position.

Combine the two with a standard Monte Carlo and you get a very good engine.
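The supervised step can be caricatured with a lookup table instead of a deep network: "given this position, which move did humans most often play?". Everything below (the positions, moves, and game records) is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Made-up "game records" as (position, move the human played) pairs.
games = [
    [("empty board", "D4"), ("D4", "Q16")],
    [("empty board", "D4"), ("D4", "Q4")],
    [("empty board", "Q16")],
]

move_counts = defaultdict(Counter)
for game in games:
    for position, human_move in game:
        move_counts[position][human_move] += 1

def predict(position):
    """Predict the most common human reply seen in the records."""
    return move_counts[position].most_common(1)[0][0]

print(predict("empty board"))  # D4 (played in 2 of the 3 records)
```

The real policy network generalizes to positions it has never seen, which a lookup table cannot; but "train until it matches the human move ~60% of the time" is exactly this kind of prediction task.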

The only surprising part for me is how well the convergence of the two networks seems to have worked. In my day the networks tended to spend a few days learning and end up with barely better results than randomness (except on a select few problems). Then again, we worked with 3 layers and 50 neurons total (not including the programmer's).
Coooot
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
Last Edited: 2016-03-09 20:12:34
March 09 2016 20:11 GMT
#8
I would imagine the real development will be when AlphaGo knows the Human has lost before the player has made his/her move or counter.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Manit0u
Profile Blog Joined August 2004
Poland17238 Posts
March 09 2016 20:16 GMT
#9
Snakes & Ladders - lol. I guess you can put Monopoly in there too...
Time is precious. Waste it wisely.
DarkPlasmaBall
Profile Blog Joined March 2010
United States44116 Posts
March 09 2016 21:30 GMT
#10
Yeah there are a few TL threads about Go

Like this one: http://www.teamliquid.net/forum/games/89504-tl-go-group?page=9#165
"There is nothing more satisfying than looking at a crowd of people and helping them get what I love." ~Day[9] Daily #100
mahrgell
Profile Blog Joined December 2009
Germany3943 Posts
March 09 2016 21:49 GMT
#11
Or: http://www.teamliquid.net/forum/games/129753-go-igo-weiqi-baduk which is even on the first page of this subforum... (for obvious reasons)
Yurie
Profile Blog Joined August 2010
11790 Posts
March 09 2016 21:51 GMT
#12
Sad the first game recording is so low quality, audio choppy and cuts to the wrong camera all the time.
mahrgell
Profile Blog Joined December 2009
Germany3943 Posts
March 09 2016 22:15 GMT
#13
For those looking for an (English) live video analysis of the game addressed to a more Go-educated crowd:

The same channel will broadcast future games as well. But as I said, you should have a decent understanding of Go to be able to follow that broadcast, while the regular stream is addressed more at beginners.
For those just looking for a quick summary and analysis of the game in written form, you can look here:
https://gogameguru.com/alphago-defeats-lee-sedol-game-1/
It should be noted that this summary is by another 9p, but it actually summarizes a lot of different Korean media/professional opinions on the match.
datscilly
Profile Blog Joined November 2007
United States528 Posts
March 09 2016 22:16 GMT
#14
On March 10 2016 06:51 Yurie wrote:
Sad the first game recording is so low quality, audio choppy and cuts to the wrong camera all the time.


Check out American Go Association's stream vod: https://www.youtube.com/watch?v=6ZugVil2v4w
It starts an hour into the match, commentary is quicker and less oriented towards beginners.

Interesting thing about that xkcd comic: in Arimaa a bot beat the best human players just last year (in April). So the field of AI in games appears to be advancing rapidly.

I'm personally excited by the possibility of AI becoming superhumanly good at other tasks, as well as games. Like how driverless cars (I hope the term "auto" catches on) will be better than humans, but for medicine, or scientific research.
SoSexy
Profile Blog Joined February 2011
Italy3725 Posts
March 10 2016 08:27 GMT
#15
The robot won again o.o 2-0
Dating thread on TL LUL
TheEmulator
Profile Blog Joined July 2010
28085 Posts
March 10 2016 08:28 GMT
#16
AlphaGo owns. We're doomed.
Administrator
purakushi
Profile Joined August 2012
United States3300 Posts
March 10 2016 08:30 GMT
#17
LLWWW...

Sigh, but no really. Humanity lost. Probably 0-5.
T P Z sagi
SChlafmann
Profile Joined September 2011
France725 Posts
March 10 2016 10:34 GMT
#18
Close match still, if I understood correctly. Some poor-quality news website says that Lee lost by only 2 points after a 5-hour game. Crazy shit. I believe that if Lee doesn't win the next one it will be a clean sweep, 5-0.
Otherwise 4-1, but I don't see him defeating the AI twice.
Damn.
"More GG, more skill" - Nope! Chuck Testa - #BISU2013
ETisME
Profile Blog Joined April 2011
12355 Posts
March 10 2016 11:02 GMT
#19
Could there exist a person who excels at beating computer Go? Since, as some said, Go is a less logic-driven game compared to chess.
其疾如风,其徐如林,侵掠如火,不动如山,难知如阴,动如雷震。
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
March 10 2016 13:04 GMT
#20
I wonder if they're going to continue developing AlphaGo. Maybe it was only a proof of concept type product? For instance, chess engines are not that interesting any more for research programs. I kinda hope they won't, I felt like chess was more mysterious before computers.
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
Hesmyrr
Profile Blog Joined May 2010
Canada5776 Posts
March 10 2016 15:05 GMT
#21
So... should we discuss the games here on the Go thread?
"If watching the MSL finals makes you a progamer, then anyone in Korea can do it." - Ha Tae Ki
Andre
Profile Blog Joined August 2009
Slovenia3523 Posts
Last Edited: 2016-03-10 15:13:43
March 10 2016 15:08 GMT
#22
On March 10 2016 04:58 nepeta wrote:
Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.

Really? That's quite fascinating. What level do the current AIs play at? Years ago when I watched, I think there was an AI that got to D+ by cheesing. But then again iCCup rankings aren't what they used to be, so I assume it's easier now.

As for Go, I only watched a little because I don't understand the game much. But is it true that some of the moves AlphaGo made in the midgame were considered 'bad', yet were instrumental in winning it the game later on?
You must gather your party before venturing forth.
LaNague
Profile Blog Joined April 2010
Germany9118 Posts
March 10 2016 15:21 GMT
#23
We really need that stuff in games; in-game AI is still at 1990s level.
bardtown
Profile Joined June 2011
England2313 Posts
March 10 2016 16:41 GMT
#24
Can the best AIs really not beat the best players in StarCraft? Intuitively you'd think the AI would have no problem, because it could just abuse micro way beyond what any human is capable of, as we saw with marines perfectly splitting against banelings, for example. StarCraft is an analogue game while chess/Go are digital, but in this case that would be to the advantage of the AI.


nimbim
Profile Blog Joined June 2009
Germany983 Posts
March 10 2016 16:57 GMT
#25
There is just a lot more to StarCraft than micro. That siege-splash-avoiding micro needs to read the targeting info directly from the tanks, so it's not exactly micro the way a human handles the problem. Strategy, making decisions that affect the whole map and understanding all the implications: that kind of thinking is too much for an AI, at least for now. Furthermore, in StarCraft the player has incomplete information, which complicates matters even more.
If Google poured resources into developing a StarCraft AI, I think it could be as successful as AlphaGo, but it would still take some time to pull off.
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
Last Edited: 2016-03-10 18:16:11
March 10 2016 18:13 GMT
#26
Too bad it wasn't Ke Jie.
Moderator
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-10 19:09:20
March 10 2016 19:07 GMT
#27
On March 10 2016 04:58 nepeta wrote:
Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.


It is not an issue of time (or processing power). It is an issue of money. And the money is a function of how much promotion an AI beating a top human at Brood War would yield (which is near 0).

You can brute force/Monte Carlo Go. You can't do the same for Brood War. Well, you can try but it makes even less sense. But I guess in a way you can brute force anything eventually. Just record the mouse movement of all top SC games ever play and data mine it.
chocorush
Profile Joined June 2009
694 Posts
March 10 2016 19:40 GMT
#28
On March 11 2016 04:07 trulojucreathrma.com wrote:
On March 10 2016 04:58 nepeta wrote:
Idk about calvinball, but after the latest broodwar AI conference, broodwar got estimated to take another 5-20 years to be 'humanly solved'. We've sent them an invitation though, just in case their brute force methods won't work for broodwar.


It is not an issue of time (or processing power). It is an issue of money. And the money is a function of how much promotion an AI beating a top human at Brood War would yield (which is near 0).

You can brute force/Monte Carlo Go. You can't do the same for Brood War. Well, you can try but it makes even less sense. But I guess in a way you can brute force anything eventually. Just record the mouse movement of all top SC games ever play and data mine it.


I think computers will get pretty confused by all the click spamming that pros do just to stay warmed up. There's just too much noise to reasonably understand what pros are doing just by mining their inputs.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
March 10 2016 19:47 GMT
#29
Ofc, but that's how they solve Go currently. That's why I said you can, but can't.

If you take tons of programmers and set up some good AI, you don't need to do crazy calculations.
sertas
Profile Joined April 2012
Sweden881 Posts
March 10 2016 19:52 GMT
#30
Why are people saying that the computer uses brute force? It's not even near using brute force in chess or Go. It uses very smart parameters for deciding on what move to make, making its calculations much more accurate than brute force. Computers would still get absolutely destroyed in chess if they were using brute force, but they don't.
PhoenixVoid
Profile Blog Joined December 2011
Canada32740 Posts
March 10 2016 20:05 GMT
#31
AlphaGo pulled off a move that hadn't been tried despite hundreds of years of progress in the game. Quite fascinating how quickly AI has developed: from a game where people believed it would be incredibly difficult for a computer to beat even a medium-tier player, to now stumping a 9-dan pro.
I'm afraid of demented knife-wielding escaped lunatic libertarian zombie mutants
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
Last Edited: 2016-03-10 21:27:39
March 10 2016 21:27 GMT
#32
Someone had to have played the move at some point, seeing how the game has existed for over 3,000 years, no?

Upon learning that Google Deepmind, Alphabet’s artificial intelligence wing, won the first of five matches against the 33-year-old grandmaster of the ancient Chinese game Go with its AlphaGo AI program, Musk sent his congratulations via Twitter to the A.I. company, of which he was once an early investor before Google bought it back in 2014.

Go champion Lee Sedol predicted he’d sweep the machine in a 5-0 in a Tyson-style knockout, but had to resign the first round, following a three and a half hour stand-off. There are four more rounds to go, but this is the first time a computer program has ever been able to best such a skilled player in Go, a game conceived roughly 3,000 years ago and considered much harder to master than chess.

If this comes off as a sign of the impending robot apocalypse, don’t fret, Musk is worried about this too.

While the billionaire tech company mogul was quick to give praise, tweeting, “Experts in the field thought AI was 10 years away from achieving this,” he’s also highly concerned about the pitfalls of A.I. and the dystopian future it could breed.


Source
"Smokey, this is not 'Nam, this is bowling. There are rules."
Yacobs
Profile Joined March 2010
United States846 Posts
March 10 2016 21:46 GMT
#33
On March 11 2016 05:05 PhoenixVoid wrote:
AlphaGo pulled off a move that hadn't been tried despite hundreds of years of progress in the game. Quite fascinating how quickly AI has developed: from a game where people believed it would be incredibly difficult for a computer to beat even a medium-tier player, to now stumping a 9-dan pro.


Details please?
mierin
Profile Joined August 2010
United States4943 Posts
March 10 2016 23:47 GMT
#34
Dude watching that video was like a 4 hour braingasm. That commentator dude also struck me as solid, a lot like the Grubby of Go players.
JD, Stork, Calm, Hyuk Fighting!
ejozl
Profile Joined October 2010
Denmark3344 Posts
March 11 2016 00:02 GMT
#35
How big is the advantage of being the starting player in Go?
SC2 Archon needs "Terrible, terrible damage" as one of it's quotes.
chocorush
Profile Joined June 2009
694 Posts
March 11 2016 00:30 GMT
#36
It's somewhere between 5 and 7 points, given two equally skilled players. At the rate AlphaGo is improving, I'm sure our understanding of the starting advantage will vastly improve.
Uvantak
Profile Blog Joined June 2011
Uruguay1381 Posts
March 11 2016 01:16 GMT
#37
On March 11 2016 09:02 ejozl wrote:
How big is the advantage of being the starting player in Go?

Pretty big, but Go is no chess; here you are not really playing to kill enemy pieces, but to control territory (points). At the end of the game, once both players agree the game is over, they count their points, and then White gets 7.5 free extra points, which are there to address the inherent advantage of Black making the first move (the extra points are called komi).
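A worked example of that komi arithmetic (the area counts below are invented for illustration; only the 7.5 komi matches the rules described above):

```python
komi = 7.5        # White's compensation for moving second
black_area = 184  # hypothetical area count for Black
white_area = 177  # hypothetical area count for White

margin = black_area - (white_area + komi)
print(margin)  # -0.5: White wins by half a point
```

Because komi ends in .5 while area counts are whole numbers, the margin can never be zero, so a half-point komi also rules out drawn games.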
@Kantuva | Mapmaker | KTVMaps.wordpress.com | Check my profile to see my TL map threads, and you can search for KTV in the Custom Games section to play them.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 02:43:44
March 11 2016 02:30 GMT
#38
On March 11 2016 04:52 sertas wrote:
Why are people saying that the computer uses brute force? It's not even near using brute force in chess or Go. It uses very smart parameters for deciding on what move to make, making its calculations much more accurate than brute force. Computers would still get absolutely destroyed in chess if they were using brute force, but they don't.



I actually published a paper where I used Monte Carlo to simulate molecular modeling. I think I know what it is.
In layman's terms it can be called brute forcing, especially if you compare human thinking to computer thinking. Imagine a human trying to run a Monte Carlo algorithm. You'd go insane.

I can kind of see that if the possibility space of Go is really that big, humans prefer a small segment of that possibility space.

If the AI finds a possibility space alien to human players, but that is solid in itself, by accident or by design, the human player will suddenly not have their usual game sense.
I don't know if that is how Go can work, but I can see how that may be possible.

raNazUra
Profile Joined December 2012
United States10 Posts
March 11 2016 03:19 GMT
#39
AlphaGo is definitely not using brute force. MCTS was first developed with completely random playouts, which is why it was called Monte Carlo, but it turns out it works a lot better if you have intelligent playouts, so long as they're still fast enough.

Short summary of how AlphaGo works: It learns a deep neural net that takes as input board states and outputs a predicted move, which it trains using tens of thousands of recorded professional Go games. It actually learns two of these, one bigger and slower, but more accurate, and one that is faster that it can use for the playouts of MCTS. With these (and a bit of retraining to incentivize winning rather than accurate prediction), it then plays itself millions of times to generate a huge amount of data mapping board states to wins or losses, then learns another deep neural net that predicts the value of a board state. These two networks were respectively called the "policy net" and the "value net" by the DeepMind guy on the interview yesterday.

All of that is trained offline before a game. Using those two networks, AlphaGo does game tree search (MCTS) during a game to decide the best move. But it prunes the game tree using its policy net, so it only explores moves that are likely moves for an expert to play, based on what it has learned. That's why it's not really brute force in the way the term is usually applied, because it's only thinking about reasonable moves. The final move selected is a balance between the results of the game tree search and the evaluation of the position after the move by the value net. There's clearly a lot more complications that this, but that's the base approach.
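The pruning idea can be sketched on a toy game (Nim: take 1-3 stones, taking the last stone wins): a depth-limited negamax that only expands the moves a stand-in "policy" ranks highest and scores leaf positions with a stand-in "value". Both stand-ins below simply hard-code the known Nim strategy (leave a multiple of 4), which is a far cry from AlphaGo's learned networks, but the control flow has the same shape:

```python
def policy_scores(stones):
    """Stand-in policy net: prefer moves that leave a multiple of 4."""
    return {m: (1.0 if (stones - m) % 4 == 0 else 0.1)
            for m in range(1, min(3, stones) + 1)}

def value(stones):
    """Stand-in value net: side to move wins iff stones % 4 != 0 (Nim fact)."""
    return 1.0 if stones % 4 else 0.0

def search(stones, depth, top_k=2):
    """Negamax that expands only the top_k policy moves (pruned search)."""
    if stones == 0:
        return 0.0  # previous player took the last stone; side to move lost
    if depth == 0:
        return value(stones)  # too deep: trust the value estimate
    scores = policy_scores(stones)
    candidates = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return max(1.0 - search(stones - m, depth - 1, top_k) for m in candidates)

print(search(5, 3))  # 1.0: 5 stones is a win for the side to move
print(search(4, 3))  # 0.0: 4 stones is lost against good play
```

With `top_k=2` the search looks at only two of the three legal moves at every node, which is the whole point: the policy prunes the tree before the search ever touches it.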

At a talk at a recent AI conference I attended, the CEO of DeepMind said that they actually evaluated fewer game states in their games against Fan Hui than Deep Blue did, and that was on 20-year-old technology. That means their search is most definitely pretty intelligent, and not brute-forcing the game.
Speak the truth, even if your voice shakes
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 03:25:29
March 11 2016 03:24 GMT
#40
Well, Deep Blue had a human playing.


Btw, brute force doesn't mean it is stupid. As long as you randomly iterate, I think you can call it brute forcing.
Oshuy
Profile Joined September 2011
Netherlands529 Posts
March 11 2016 04:46 GMT
#41
On March 11 2016 12:24 trulojucreathrma.com wrote:
Well, Deep Blue had a human playing.

Btw, brute force doesn't mean it is stupid. As long as you randomly iterate, I think you can call it brute forcing.


Most of the time, "brute force" does mean stupid. Brute-forcing is what you do when you revert to exploring every possibility available until you reach your stop condition, abandoning anything but the most basic algorithm and relying on raw computing power to find a solution. In a game, it would be a tree search of possible moves without cutting any branches, for example.

If you tag as "brute force" any iterative analysis of future lines based on candidate moves, humans are also "brute forcing".
Coooot
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 07:32 GMT
#42
I'd say brute force is when you don't use heuristics. I.e. when you try to brute-force a PIN code and just enter all 10,000 possible 4-digit codes until you find the one.

As soon as heuristics come into play, it's no longer brute forcing imo, but it's up for debate. You could argue the algorithm trying out all the possible moves, and picking the best one according to some heuristics, is similar to trying all pin codes until it works.
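The PIN-code case really is heuristic-free exhaustion, which makes it easy to write down (the check function here is a stand-in for a real lock):

```python
def crack_pin(check):
    """Try all 10,000 four-digit codes in order; no heuristic at all."""
    for code in range(10_000):
        guess = f"{code:04d}"  # zero-padded: 0000, 0001, ..., 9999
        if check(guess):
            return guess

secret = "0427"
print(crack_pin(lambda g: g == secret))  # 0427
```

Nothing here narrows the search; the worst case is always all 10,000 attempts, which is exactly what makes it brute force in the strict sense.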
Chocolate
Profile Blog Joined December 2010
United States2350 Posts
Last Edited: 2016-03-11 07:56:57
March 11 2016 07:50 GMT
#43
On March 10 2016 05:11 {CC}StealthBlue wrote:
I would imagine the real development will be when AlphaGo knows the Human has lost before the player has made his/her move or counter.

I wouldn't be so sure. The algorithm (to my understanding) is mostly "pattern finding". Being able to determine definitively the outcome of a game given certain circumstances would likely require an extremely expensive depth-first search over the remaining game states. By that I mean, the AI could one day be able to say "99% chance of victory" at a certain point, but could likely never determine the inevitable outcome of a game from a nontrivial starting state.

To all of you crying "brute force":

A brute force solution would look like this: given the current board state, determine all possible next states, then all the next states for those, etc. until you have computed every possible end state, then use that to inform you as to what next state to choose.

Obviously AlphaGo cannot do that, because when the game is not near the end, Go just has too many possible permutations to compute.
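Even on a toy game you can watch that exhaustive scheme blow up. Counting every node in the full game tree of "take 1-3 stones from a pile of n" (no pruning, no memoization):

```python
def nodes(stones):
    """Count every node in the exhaustive game tree (no pruning)."""
    return 1 + sum(nodes(stones - m) for m in range(1, min(3, stones) + 1))

print(nodes(3))   # 8
print(nodes(4))   # 15
print(nodes(20))  # already hundreds of thousands of nodes
```

Each extra stone roughly doubles the tree, so the count grows exponentially in the pile size; on a 19x19 Go board with hundreds of moves per position, this kind of full enumeration is out of the question.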
Furikawari
Profile Joined February 2014
France2522 Posts
March 11 2016 08:48 GMT
#44
AlphaGo "knew" game 2 was won: in the late game it made some suboptimal moves just to settle the center. In Go this kind of play is often made by humans when they know they've won for sure. You simplify the game so as not to give your opponent a chance to pull out a tricky sequence that could reverse the result.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 12:21:09
March 11 2016 12:08 GMT
#45
So you can only brute force when you move through all of possibility space in a totally arbitrary manner, ignoring all information about where in the possibility space the solution is most likely to be found?


I disagree. Monte Carlo randomly picks something. That's not thinking. That's relying on sheer calculation power to be able to evaluate so many positions. You force a solution through sheer calculation power.


It is like a human Chess/Go player deciding what move to make by having a hundred (trillion) million people play all his candidate moves and then playing the move that wins most often.

Brute force isn't a technical term. I can use it just fine.
Now, in many game AIs you have no really smart algorithms, because there is no definite way to measure a position; you have to evaluate it. But in the physical sciences you can measure, and there Monte Carlo is brute force.
And in the mind of a lay person, it also is.
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 12:19 GMT
#46
On March 11 2016 21:08 trulojucreathrma.com wrote:
So you can only brute force when you move through all of possibility space in a totally arbitrary manner, ignoring all information about where in the possibility space the solution is most likely to be found?


Yes, that is the definition of brute force. Search by exhaustion. No reasoning.

You can disagree with the semantics if you like, most people won't.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 12:29:57
March 11 2016 12:21 GMT
#47
That's stupid to say. Also, many people don't agree. Go google "Monte Carlo" "brute force".
If only exhaustive methods are brute force, then that word loses 99.999% of its meaning. When do you ever do an exhaustive search?
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 12:26 GMT
#48
It is used all the time in cryptography and security. If we're gonna throw google terms around, have a look at "brute force attack"
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
March 11 2016 12:30 GMT
#49
They don't use it. That's why our encryption is safe.
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 12:37 GMT
#50
Dude, it is used all the time, and what you call "safe" encryption is never safe for long periods.

Some years ago the 'DES' algorithm was a standard for symmetric encryption. Then it got brute-forced.
So Triple-DES and AES were introduced. With current computing power, they cannot be brute-forced. Yet. NIST estimates Triple-DES will be brute-forced by 2030. And of course who knows if the NSA has a supercomputer that can do it already.

Encryption is never safe.
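A toy illustration of a brute-force attack in this sense (a hypothetical 2-byte XOR "cipher", nothing like real DES or AES): try every key in the key space until one decrypts to something recognizable.

```python
from itertools import product

# Toy brute-force key search. The "cipher" is a repeating 2-byte XOR,
# chosen only because its key space (256**2 = 65536 keys) is small enough
# to exhaust instantly; real ciphers are designed so this is infeasible.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def brute_force(ciphertext: bytes, known_plaintext: bytes) -> bytes:
    # Exhaust the whole key space, one candidate key at a time.
    for key in (bytes(k) for k in product(range(256), repeat=2)):
        if xor_cipher(ciphertext, key) == known_plaintext:
            return key
    raise ValueError("key not found")

secret = xor_cipher(b"attack at dawn", b"\x13\x37")
print(brute_force(secret, b"attack at dawn") == b"\x13\x37")  # True
```

The only thing that protects a real cipher from this loop is the size of its key space, which is why growing computing power keeps retiring old algorithms.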
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
March 11 2016 12:47 GMT
#51
DES was developed in the 1970s.
Furikawari
Profile Joined February 2014
France2522 Posts
March 11 2016 12:50 GMT
#52
Laurens, Truloblablah is obviously trolling you.
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 12:51 GMT
#53
Yes, and it was considered "secure" until 1998 or so. Just like you think our current encryption is safe. Wait some years until computational power has increased to the point where our current encryption can be brute-forced, and the cycle continues.

Hence my point that brute-force is used all the time. And calling AlphaGo brute force is an insult to the team behind it.
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 12:51 GMT
#54
On March 11 2016 21:50 Furikawari wrote:
Laurens, Truloblablah is obviously trolling you.


Oh.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 13:02:11
March 11 2016 13:00 GMT
#55
My name was randomly generated. What you see is your own bias.

So DES was declared no longer secure before you were born. Not sure why you bring it up.
I can say that the electrical telegraph is no longer used. But then you bring up that it is, by hobbyists. It's disingenuous to bring that up considering the nature of the debate we were having.
Laurens
Profile Joined September 2010
Belgium4539 Posts
March 11 2016 13:03 GMT
#56
I was born in '91, and in the context of our discussion it was very clear why I brought it up.

But I do believe you are trolling now, so I'll just stop responding.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 13:15:22
March 11 2016 13:08 GMT
#57
Really? PM me when you publish your first MC paper.

User was warned for this post
johanes
Profile Blog Joined May 2008
Czech Republic2227 Posts
Last Edited: 2016-03-11 14:36:49
March 11 2016 14:32 GMT
#58
when is the 3rd game to be played?

EDIT:

ahh its tomorrow

The matches will be held at the Four Seasons Hotel, Seoul, South Korea, starting at 1pm local time
(4am GMT; day before 11pm ET, 8pm PT) on March 9th, 10th, 12th, 13th and 15th.
beg
Profile Blog Joined May 2010
991 Posts
March 11 2016 14:37 GMT
#59
On March 11 2016 21:50 Furikawari wrote:
Laurens, Truloblablah is obviously trolling you.

They were having a rather mature conversation until you showed up.

It's interesting how you escalated things to the point where one of them is actually thinking he's being trolled, lol.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 15:24:55
March 11 2016 15:15 GMT
#60
On March 11 2016 22:03 Laurens wrote:
I was born in 91, and in the context of our discussion it was very clear why i brought it up.

But I do believe you are trolling now, so I'll just stop responding.



Ahuh ahum, I am sorry for being so mature. I don't really care about your future publications on Monte Carlo anymore

No! You are a troll!! omgz


I guess you are allowed to call people trolls, but calling a Monte Carlo algorithm 'brute force' is a deeply cutting, demeaning insult.

I like Monte Carlo. MCMCs are about the only real comp sci algorithms we use in our lab.
Google is obviously using something much more advanced. But we are a simple lab where no one even has a comp sci degree.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 11 2016 15:27 GMT
#61
Can we please stop mentioning brute force? Deep reinforcement learning with two networks - the kind of technique used by DeepMind - specifically has one neural network dedicated to the task of 'guessing' which few moves to explore next. These chosen few are then sent to another part of the algorithm for evaluation. Effectively this is smart pruning of the tree of possibilities, and is nothing like plain MC methods, whose error shrinks only like O(1/sqrt(n)) in the number of samples.

If you want to find out how this was first exploited in chess, check out Matthew Lai's 'Giraffe' chess engine from September 2015; it's on arXiv.
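A minimal sketch of that pruning idea (not DeepMind's actual code; the game, the policy scores, and the top_k cutoff are hypothetical stand-ins): a "policy" ranks candidate moves, and only the best few are searched deeper instead of every legal move.

```python
# Policy-guided pruning: score candidate moves with a (toy) policy, keep
# only the top-k, and recurse on those. States here are plain integers and
# "moves" are child states, purely for illustration.
def policy(state, moves):
    # Hypothetical prior: pretend moves nearer the middle of the list
    # score higher, the way a trained network would favour plausible moves.
    centre = len(moves) / 2
    return {m: 1.0 / (1 + abs(i - centre)) for i, m in enumerate(moves)}

def search(state, legal_moves, evaluate, depth, top_k=3):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    prior = policy(state, moves)
    # Smart pruning: expand only the k moves the policy likes best.
    chosen = sorted(moves, key=prior.get, reverse=True)[:top_k]
    return max(search(m, legal_moves, evaluate, depth - 1, top_k)
               for m in chosen)

# Toy game: each state s branches to s*2 and s*2+1 until s >= 8;
# the evaluation is just the state value itself.
legal = lambda s: [s * 2, s * 2 + 1] if s < 8 else []
value = lambda s: s
print(search(1, legal, value, depth=4, top_k=1))  # 15
```

The search tree shrinks from branching^depth to top_k^depth nodes; the quality of the result then hinges entirely on how good the policy's guesses are, which is what the training is for.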
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
stuchiu
Profile Blog Joined June 2010
Fiddler's Green42661 Posts
March 11 2016 15:42 GMT
#62
It's a narrative thing. People want the computer to be about brute force.
Moderator
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-11 15:46:41
March 11 2016 15:42 GMT
#63
Well, to be fair to those that brought up 'brute force' first (it wasn't me), they weren't referring to AlphaGo or DeepMind.

On March 12 2016 00:42 stuchiu wrote:
It's a narrative thing. People want the computer to be about brute force.



No. It is about context. It is not a technical term. Apparently, in cryptography, only exhaustive search counts as 'brute force'.

In the physical sciences, we call everything that's computationally expensive but simple to implement 'brute force'.

We can code for years and get all the laws of physics right, then get an answer at little computational cost. Or we can implement something simple and just run it for a relatively long time.

Of course, calculating something from first principles is impossible. You fail at that by the time you get to the level of a water molecule.
Hesmyrr
Profile Blog Joined May 2010
Canada5776 Posts
March 11 2016 17:52 GMT
#64
http://media.daum.net/m/channel/view/media/20160311060203778

Loooooooooooool at all the salt.
"If watching the MSL finals makes you a progamer, then anyone in Korea can do it." - Ha Tae Ki
PhoenixVoid
Profile Blog Joined December 2011
Canada32740 Posts
March 11 2016 17:56 GMT
#65
On March 12 2016 02:52 Hesmyrr wrote:
http://media.daum.net/m/channel/view/media/20160311060203778

Loooooooooooool at all the salt.

Hard to see the salt when it's in a language few people on this website can understand.
I'm afraid of demented knife-wielding escaped lunatic libertarian zombie mutants
Hesmyrr
Profile Blog Joined May 2010
Canada5776 Posts
Last Edited: 2016-03-11 18:30:45
March 11 2016 18:25 GMT
#66
To briefly translate: a random lawyer who specializes in IT is complaining that AlphaGo vs Lee Sedol is a loaded game where it's 100% impossible for the human to win, and that Google is insulting the entire Baduk community with chicanery of historic proportions.

He argues that since AlphaGo is connected to the internet, the AI can basically overpower a human with the force of sheer numbers. Bear with me here. More specifically, AlphaGo doesn't make a move by predicting Lee Sedol's future moves, but by calculating the ideal move after looking at the last play Lee Sedol made. Therefore, since AlphaGo uses brute force to analyze all the possibilities, it is not a true AI.

The fact that AlphaGo is using cloud computing is directly against the principles of Baduk, where it's supposed to be a fair 1v1 with no external advice. "Google says AlphaGo does not use a brute force algorithm, but it's receiving advice from another program that is using brute force. This is blatant cheating. Because AlphaGo can run thousands of AlphaGos at the same time over the internet, and can add more computers to its resource network when running out of time, it's impossible for it to lose on time, unlike Lee Sedol," says this lawyer, adding that "Google offered a million dollars, but if Google wins, it will make far higher profits from being the frontrunner in AI technology."

He concludes that Google should publicly apologize to Lee Sedol, Fan Hui, and the entire Baduk community in general, since the company is deceiving them with an AlphaGo that does not truly understand how Baduk works and cannot be considered a true AI.


The true gold is the 1000+ netizen comments underneath that unambiguously blame Google for being a lying piece of shit, say that what AlphaGo is doing is the same thing as bringing a textbook to an exam, and that it should be disconnected from the internet for the rest of the match so it becomes Lee Sedol vs a one-laptop program.

It's beautiful.
"If watching the MSL finals makes you a progamer, then anyone in Korea can do it." - Ha Tae Ki
Nakama
Profile Joined May 2010
Germany584 Posts
Last Edited: 2016-03-11 18:47:25
March 11 2016 18:47 GMT
#67
Funny how some ppl in here think a machine can "play" Go......
But I guess it's normal when science and philosophy come close to each other and the scientist tries to be a philosopher, or vice versa....
Chocolate
Profile Blog Joined December 2010
United States2350 Posts
March 11 2016 20:26 GMT
#68
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point
mierin
Profile Joined August 2010
United States4943 Posts
Last Edited: 2016-03-11 20:36:44
March 11 2016 20:35 GMT
#69
It almost seems to me that playing against this thing would be like playing against the luckiest idiot savant ever. Like, the program doesn't know HOW to play at all... all it knows is that X move is statistically the best given where the pieces are on the board. Would personifying it be kind of like saying AlphaGo just plays where it "feels best" every time, without understanding a single intricacy of the game besides where pieces can physically be played and what it means to win?
JD, Stork, Calm, Hyuk Fighting!
Gorsameth
Profile Joined April 2010
Netherlands21593 Posts
March 11 2016 21:20 GMT
#70
Is what AlphaGo does really so different from what an actual human player does? Take a potential move, evaluate how it would play out, and discard it if it is not good enough?
The difference is that, as a computer, AlphaGo can do that same process far faster and more extensively than a human could, but the basic principle is the same.
It ignores such insignificant forces as time, entropy, and death
Nakama
Profile Joined May 2010
Germany584 Posts
March 11 2016 21:24 GMT
#71
On March 12 2016 05:26 Chocolate wrote:
Show nested quote +
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes, I have to admit that, but hey, it's the internet, and there is no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it gets things wrong... and I was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGO uses "brute force", so I expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking" etc. can only be meant metaphorically, so in the end AlphaGO uses "brute force" to achieve/mimic what a human being does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what you call brute force) and the method AlphaGO uses, and that's why some of you got mad about it, but if you think about it there is not much difference between those two methods, and I think brute force is an accurate way of describing the difference between the method AlphaGo uses and the one Lee Sedol is using.
Goolpsy
Profile Joined November 2010
Denmark301 Posts
March 11 2016 22:01 GMT
#72
Brute force is as precise as saying "the primary technique it uses is programmed algorithms". Brute force 'might' be true by 'some definitions', but it's honestly such a basic and inaccurate description that it's like saying Flash uses his mouse to become a champion :S
Glacierz
Profile Blog Joined May 2010
United States1244 Posts
Last Edited: 2016-03-11 22:21:18
March 11 2016 22:13 GMT
#73
I've finally finished reading the paper:
http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html

For those who are interested in AI and statistics, it is definitely worth reading.

First and foremost, it's worth pointing out that according to the paper, their setup only had 40 search threads, 1,202 CPUs and 176 GPUs. I don't think this is even remotely close to a supercomputer in today's age. The computing power probably isn't even as strong as Deep Blue, which was built decades ago.

One of the greatest challenges in Go is how to evaluate a given board state. The number of potential moves is large and it is incredibly difficult to assign a score to any board position. The board gets easier and easier to evaluate the deeper you go down the tree (fewer moves are possible towards the endgame). The early/midgame, where the decision tree has many branches, makes any brute force algorithm infeasible.

In layman's terms, neural networks allowed the program to develop a strong ability to predict moves by "guessing". This ability is bootstrapped from a large library of recorded professional games and then reinforced by playing games against itself, assigning a probabilistic score to each simulated situation. This is how the program "learns" on its own. The use of Bayesian conditional probability is what differentiates this program from other brute force algorithms on the market.

In a live game, it evaluates only the moves with high values/payoffs; this reduces the number of branches on the search tree and allows the program to analyze them to much greater depth. This ultimately results in board values that are much more accurate. I think this process is very similar to what a human does, which is to focus on only a handful of key possibilities. A brute force approach would have been to analyze all possible moves to a much shallower depth, resulting in less reliable value networks.

The key here is that the program retains its past "training" when it plays, so it spends much less time evaluating situations it has seen before.
The value network was trained on 30 million mini-batches of 32 positions, using 50 GPUs, for one week.


AlphaGo incorporates so many modern AI techniques, and the fact that it works this well is truly revolutionary.
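The width-vs-depth trade-off described above can be put in back-of-the-envelope numbers (the branching factors and the position budget here are illustrative assumptions, not figures from the paper):

```python
# With a fixed budget of positions to evaluate, pruning to a few promising
# moves per ply buys far greater search depth than expanding every move.
def positions(branching, depth):
    return branching ** depth

budget = 10 ** 9  # assumed budget of positions we can afford to evaluate

# Go's opening has on the order of ~250 legal moves per turn.
full_width_depth = 0
while positions(250, full_width_depth + 1) <= budget:
    full_width_depth += 1

# Suppose the networks narrow that to ~5 candidate moves per turn.
pruned_depth = 0
while positions(5, pruned_depth + 1) <= budget:
    pruned_depth += 1

print(full_width_depth, pruned_depth)  # 3 12
```

Same budget, four times the lookahead: that is the entire argument for spending effort on good move prediction rather than raw enumeration.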
ZigguratOfUr
Profile Blog Joined April 2012
Iraq16955 Posts
March 11 2016 22:29 GMT
#74
On March 12 2016 06:24 Nakama wrote:
Show nested quote +
On March 12 2016 05:26 Chocolate wrote:
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes i have to admit that but hey its the internet and there is no way to discuss this topic in any reasonable way in a forum like this without beeing so simplisitc that it gets wrong... and i was just baffled by the reaction and arguments of some folks in here when some other dude called the mehtod AlphaGO uses "brute force" so i expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine" and therefore words like "smart" "evaluation" "decision" "thinking" etc.can only be meant metaphorically so in the end AlphaGO uses "brute force" to achieve/mimic what a human beeing does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what u call brute force) and the method AlphaGO uses and thats why some of u got mad about it but if u think about it there is not much diffrence between those two methods and i think brute force is an accurate way of describing the diffrence between the method alphaGo uses and the one lee sedol is using.


Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.
mierin
Profile Joined August 2010
United States4943 Posts
March 12 2016 00:08 GMT
#75
On March 12 2016 07:29 ZigguratOfUr wrote:
Show nested quote +
On March 12 2016 06:24 Nakama wrote:
On March 12 2016 05:26 Chocolate wrote:
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes i have to admit that but hey its the internet and there is no way to discuss this topic in any reasonable way in a forum like this without beeing so simplisitc that it gets wrong... and i was just baffled by the reaction and arguments of some folks in here when some other dude called the mehtod AlphaGO uses "brute force" so i expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine" and therefore words like "smart" "evaluation" "decision" "thinking" etc.can only be meant metaphorically so in the end AlphaGO uses "brute force" to achieve/mimic what a human beeing does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what u call brute force) and the method AlphaGO uses and thats why some of u got mad about it but if u think about it there is not much diffrence between those two methods and i think brute force is an accurate way of describing the diffrence between the method alphaGo uses and the one lee sedol is using.


Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.


I feel the same way, but that also makes me respect human brains even more. Everything in Go happens in one dimension, even though it's on a 2-D board. Stones can only move in a 1-D fashion. In StarCraft, every unit on the map is represented in 2-D and has those degrees of freedom. So what if a computer can beat us at Go? It's a revolution, but unfortunately I don't know enough about computers to say how much harder it is for this type of AI to consider moves in more than one dimension.
JD, Stork, Calm, Hyuk Fighting!
Petrosidius
Profile Joined March 2016
United States10 Posts
March 12 2016 00:30 GMT
#76
How exactly is it in one dimension? The board is two-dimensional, and stones don't move at all. They are just placed at an (x,y) location each turn and potentially removed if captured. Units in StarCraft also exist in an (x,y) coordinate system and are removed when their HP reaches 0. Obviously StarCraft is more complex in that the units move and fire projectiles and such, but it's not a dimension higher than Go.
angrybacon
Profile Joined March 2010
United States98 Posts
March 12 2016 01:11 GMT
#77
On March 12 2016 09:30 Petrosidius wrote:
How exactly is it in one dimension? The board is two dimensions, and stones don't move at all. They are just placed in an (x,y) location each turn and potentially removed if captured. Units in starcraft also exist in an (x,y) coordinate system and are removed if their HP reaches 0. Obviously starcraft is more complex in that the units move and fire projectiles and such but it's not a dimension higher than go.


You can consider the Go board to be one-dimensional because the two dimensions are both discrete. You can take the 19 rows, place them end to end, and reduce them to a computationally equivalent single row of 361 cells. This is called vectorization.

This only applies when the dimensions are discrete and finite.
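A quick sketch of that flattening (the helper names here are mine): a 19x19 board and a length-361 vector carry exactly the same information, and indexing converts both ways.

```python
# Vectorization of a Go board: the 2-D grid and its flattened 1-D form are
# computationally equivalent because both dimensions are discrete and finite.
SIZE = 19

def to_index(row, col):
    return row * SIZE + col      # (row, col) -> 0..360

def to_coords(index):
    return divmod(index, SIZE)   # 0..360 -> (row, col)

board_2d = [[0] * SIZE for _ in range(SIZE)]
board_2d[3][16] = 1              # place a stone at row 3, column 16
flat = [cell for row in board_2d for cell in row]

assert flat[to_index(3, 16)] == 1
assert to_coords(to_index(3, 16)) == (3, 16)
print(len(flat))  # 361
```

Nothing is lost in either direction, which is why "1-D vs 2-D" is not a real complexity difference for discrete boards.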
Nakama
Profile Joined May 2010
Germany584 Posts
March 12 2016 01:57 GMT
#78
On March 12 2016 07:29 ZigguratOfUr wrote:
Show nested quote +
On March 12 2016 06:24 Nakama wrote:
On March 12 2016 05:26 Chocolate wrote:
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes i have to admit that but hey its the internet and there is no way to discuss this topic in any reasonable way in a forum like this without beeing so simplisitc that it gets wrong... and i was just baffled by the reaction and arguments of some folks in here when some other dude called the mehtod AlphaGO uses "brute force" so i expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine" and therefore words like "smart" "evaluation" "decision" "thinking" etc.can only be meant metaphorically so in the end AlphaGO uses "brute force" to achieve/mimic what a human beeing does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what u call brute force) and the method AlphaGO uses and thats why some of u got mad about it but if u think about it there is not much diffrence between those two methods and i think brute force is an accurate way of describing the diffrence between the method alphaGo uses and the one lee sedol is using.


Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.



My point is that AlphaGO has no "decision making process" which is even suitable to compare to what we as humans do... it's a machine, and if we talk about it like it "makes decisions", "acts", etc., we mean it in a metaphorical way, or otherwise our speech about it makes no sense.
Petrosidius
Profile Joined March 2016
United States10 Posts
March 12 2016 02:17 GMT
#79
On March 12 2016 10:11 angrybacon wrote:
Show nested quote +
On March 12 2016 09:30 Petrosidius wrote:
How exactly is it in one dimension? The board is two dimensions, and stones don't move at all. They are just placed in an (x,y) location each turn and potentially removed if captured. Units in starcraft also exist in an (x,y) coordinate system and are removed if their HP reaches 0. Obviously starcraft is more complex in that the units move and fire projectiles and such but it's not a dimension higher than go.


You can consider the Go board to be one dimensional because the two dimensions are both discrete. You can take the 19 rows, and place them end to end and reduce them to a computationally equivalent single 361 row. This is called vectorization.

This process only applies when the dimensions are discrete and finite.


Every StarCraft map is also finite and discrete. Maybe much bigger than a Go board, but it's finite and there is a minimum x and y distance.
PhoenixVoid
Profile Blog Joined December 2011
Canada32740 Posts
Last Edited: 2016-03-12 03:27:00
March 12 2016 02:49 GMT
#80
A relevant read for the site.

http://uk.businessinsider.com/google-deepmind-could-play-starcraft-2016-3



"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.

...

"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.

Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.

"You have to keep track of things happening off the screen," Dean says.

It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.


Though I wouldn't take it as an absolute promise until we get confirmation.
I'm afraid of demented knife-wielding escaped lunatic libertarian zombie mutants
Superbanana
Profile Joined May 2014
2369 Posts
Last Edited: 2016-03-12 03:03:40
March 12 2016 02:51 GMT
#81
For those questioning the "brute force" part, I just checked Wikipedia to get some numbers.

"November 2006 match between Deep Fritz and world chess champion Vladimir Kramnik, the program ran on a personal computer containing two Intel Core 2 Duo CPUs, capable of evaluating only 8 million positions per second, but searching to an average depth of 17 to 18 plies in the middlegame thanks to heuristics"

Deep Fritz, in 2006, was superior to Deep Blue (from 1996!) and ran on a Core 2 Duo. 8 million positions per second is already completely out of human reach. Even in chess, heuristics were the key step, and in Go, more is required. Anyway, a potato can evaluate millions of moves while the human is racing against time to check them one by one.

I don't even know why people make such a big deal of it. It's not a fair match, and I don't think it's supposed to be. Just don't be fooled into swallowing the idea that it's a battle of "minds" or that Google is actually challenging the man. The computer is not even emulating the human thought process. They know they will win; the point is to showcase what they are capable of to the general public. But discussing the fairness of a match between a 1,202-CPU computer using cloud computing and a human makes no sense.

Your cellphone can brute-force you in a simple arithmetic contest while running YouTube; the point is that, despite that, nobody ever did it with Go before because "the technology just wasn't there yet". You cannot calculate your way to a win at Go. Simple calculation is not enough for a human OR a computer (at least for now).
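The "thanks to heuristics" part of that Deep Fritz quote can be illustrated with classic alpha-beta pruning (a generic sketch on a toy game tree, not Deep Fritz's actual engine code): whole branches are skipped once it's clear the opponent would never allow that line, which is how engines reach 17-18 plies instead of a handful.

```python
# Alpha-beta pruning: minimax that cuts off branches which cannot affect
# the final decision. States, children, and values here are toy stand-ins.
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:   # prune: the minimizer avoids this line
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, value))
        beta = min(beta, best)
        if alpha >= beta:       # prune: the maximizer avoids this line
            break
    return best

# Toy tree: node n branches to 2n and 2n+1 until n >= 8; leaf value is n.
kids = lambda n: [n * 2, n * 2 + 1] if n < 8 else []
leaf = lambda n: n
print(alphabeta(1, 10, float("-inf"), float("inf"), True, kids, leaf))  # 13
```

The pruned subtrees are exactly the "moves nobody would play" that a pure brute-force search would waste its budget on.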
In PvZ the zerg can make the situation spire out of control but protoss can adept to the situation.
Wegandi
Profile Joined March 2011
United States2455 Posts
Last Edited: 2016-03-12 05:11:38
March 12 2016 05:10 GMT
#82
Almost all of the games where AI beats humans are information-symmetric. I'll be impressed when AI can beat the best players of all time heads-up at asymmetric-information games like poker, Magic: The Gathering, etc. more than 50% of the time.
Thank you bureaucrats for all your hard work, your commitment to public service and public good is essential to the lives of so many. Also, for Pete's sake can we please get some gun control already, no need for hand guns and assault rifles for the public
rabidch
Profile Joined January 2010
United States20289 Posts
Last Edited: 2016-03-12 05:24:32
March 12 2016 05:24 GMT
#83
People questioning the use of the words "brute force" actually know what "brute force" means in computer science. How AlphaGo got very strong is due to its hyperbolic-time-chamber ability to train against itself and other AIs/emulated players very quickly in a short amount of time. The computing power AlphaGo has been given over time isn't sufficient to make the program one of the best in the world by simply stupidly calculating all possible moves on the board.
LiquidDota Staff
Only a true king can play the King.
andrewlt
Profile Joined August 2009
United States7702 Posts
March 12 2016 06:26 GMT
#84
On March 12 2016 10:57 Nakama wrote:
Show nested quote +
On March 12 2016 07:29 ZigguratOfUr wrote:
On March 12 2016 06:24 Nakama wrote:
On March 12 2016 05:26 Chocolate wrote:
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes i have to admit that but hey its the internet and there is no way to discuss this topic in any reasonable way in a forum like this without beeing so simplisitc that it gets wrong... and i was just baffled by the reaction and arguments of some folks in here when some other dude called the mehtod AlphaGO uses "brute force" so i expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine" and therefore words like "smart" "evaluation" "decision" "thinking" etc.can only be meant metaphorically so in the end AlphaGO uses "brute force" to achieve/mimic what a human beeing does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what u call brute force) and the method AlphaGO uses and thats why some of u got mad about it but if u think about it there is not much diffrence between those two methods and i think brute force is an accurate way of describing the diffrence between the method alphaGo uses and the one lee sedol is using.


Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.



My point is that AlphaGO has no "decision making process" which is even suitable to compare it to what we as humans do... its a machine and if we talk about it like it "makes decisions" "acts" etc. we mean it in a metaphorical way or otherwise our speech about it makes no sense.


And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.

If I learned anything from reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, and a meaning that you are pointlessly trying to argue the semantics of.
betaflame
Profile Joined November 2010
175 Posts
March 12 2016 07:03 GMT
#85
On March 12 2016 15:26 andrewlt wrote:
And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.

If I learned anything in reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, a meaning that you are pointlessly trying to argue the semantics of.


That might be true, that it is an "industry-specific" term, but it is still frustrating to see people calling AlphaGo brute force, because that is simply not doing it justice: it is just not "brute force" in a computer-science context (which this is, since we're talking about AI/ML).

In a sense, AlphaGo does have a "decision making process", since it decides that some moves give a higher probability of victory than others. AlphaGo is basically doing what Lee Sedol's brain is doing, but on a far more precise level; and it stops well short of brute force, since that would mean trying all possible variations, which it just isn't doing. AlphaGo's algorithm is far more intelligent than a simple "brute force" mechanism.
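The gap betaflame describes can be made concrete with a toy node count. This is a sketch of the scaling argument only, with made-up branching factors; AlphaGo's real method pairs policy/value networks with Monte Carlo tree search rather than the naive pruning shown here:

```python
def positions_examined(moves_per_node: int, depth: int) -> int:
    """Number of positions visited in a uniform game tree of the given depth."""
    if depth == 0:
        return 1  # only the current position
    return 1 + moves_per_node * positions_examined(moves_per_node, depth - 1)

# "Brute force": expand every legal move at every position.
brute_force = positions_examined(8, 5)
# Policy-guided: a learned policy keeps only the 2 most promising moves.
guided = positions_examined(2, 5)

print(brute_force)  # 37449
print(guided)       # 63
```

At Go's real branching factor (roughly 250 legal moves per position) the exhaustive count is astronomically larger, which is why pruning with a learned policy, rather than raw enumeration, is the interesting part.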
Taosu
Profile Joined August 2010
Ukraine1074 Posts
March 12 2016 07:44 GMT
#86
Lee Sedol should work on improving his APM
Also fan of Hyuk, Pure, free, Action, Stats, Leta, Horang2, Snow, Flying, Shuttle, Movie, Paralyze
strongwind
Profile Joined July 2007
United States862 Posts
March 12 2016 08:18 GMT
#87
Crazy 3-0. The pained look on Sedol's face... no shame, though; losing to 1,200 CPUs is nothing to hang your head about
Taek Bang Fighting!
JazVM
Profile Joined October 2012
Germany1196 Posts
March 12 2016 08:26 GMT
#88
Frankly, I have no clue about Go, but I wonder if Go engines will revolutionize the way the game is played, the same way they did with chess.
mind mind mind mind mind mind
b0lt
Profile Joined March 2009
United States790 Posts
March 12 2016 09:24 GMT
#89
On March 12 2016 14:10 Wegandi wrote:
Almost all of the games where AI beats humans are information-symmetric. I'll be impressed when an AI can beat the best players of all time heads-up at asymmetric-information games like poker, Magic: The Gathering, etc. more than 50% of the time.


Heads-up limit hold'em poker is solved.
Makro
Profile Joined March 2011
France16890 Posts
March 12 2016 10:04 GMT
#90
lee sedol went full foreigner
Matthew 5:10 "Blessed are those who are persecuted because of shitposting, for theirs is the kingdom of heaven".
TL+ Member
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
March 12 2016 11:13 GMT
#91
On March 12 2016 10:57 Nakama wrote:
My point is that AlphaGO has no "decision making process" that is even suitable to compare with what we as humans do... it's a machine, and if we talk about it like it "makes decisions", "acts", etc., we mean it in a metaphorical way, or otherwise our speech about it makes no sense.

What makes the process of AlphaGO so different from a human's?
The harder it becomes, the more you should focus on the basics.
Gorsameth
Profile Joined April 2010
Netherlands21593 Posts
March 12 2016 11:21 GMT
#92
On March 12 2016 14:10 Wegandi wrote:
Almost all of the games where AI beats humans are information-symmetric. I'll be impressed when an AI can beat the best players of all time heads-up at asymmetric-information games like poker, Magic: The Gathering, etc. more than 50% of the time.

Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available.
Over a sample size large enough to even out the element of chance, a computer will win, no doubt about it.
It ignores such insignificant forces as time, entropy, and death
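Gorsameth's "large enough sample size" point is just the law of large numbers. A quick simulation, with a made-up 52% per-hand edge standing in for the computer's advantage, shows how an edge that is nearly invisible over ten hands becomes close to a sure thing over a thousand:

```python
import random

random.seed(42)
EDGE = 0.52  # hypothetical per-hand win probability of the stronger player

def favorite_ahead(hands: int, sessions: int = 2000) -> float:
    """Fraction of simulated sessions the favorite finishes with a winning record."""
    ahead = 0
    for _ in range(sessions):
        wins = sum(random.random() < EDGE for _ in range(hands))
        if 2 * wins > hands:  # strictly more wins than losses
            ahead += 1
    return ahead / sessions

short_run = favorite_ahead(10)    # luck dominates over a handful of hands
long_run = favorite_ahead(1000)   # variance evens out and the edge shows

print(short_run, long_run)
```

Over ten hands the favorite only ends up ahead a bit less than half the time; over a thousand it is close to certain.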
rabidch
Profile Joined January 2010
United States20289 Posts
March 12 2016 11:41 GMT
#93
On March 12 2016 11:49 PhoenixVoid wrote:
A relevant read for the site.

http://uk.businessinsider.com/google-deepmind-could-play-starcraft-2016-3

"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.

...

"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.

Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come in to conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.

"You have to keep track of things happening off the screen," Dean says.

It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.


Though I wouldn't take it as an absolute promise until we get confirmation.

very amazed that jeff dean of all people is talking about starcraft as the next target.

google trying to destroy korean esports???
LiquidDota Staff | Only a true king can play the King.
Grettin
Profile Joined April 2010
42381 Posts
March 12 2016 12:11 GMT
#94
On March 12 2016 20:41 rabidch wrote:
very amazed that jeff dean of all people is talking about starcraft as the next target.

google trying to destroy korean esports???


Give Flash a couple of months to get back to form and a BO5 against AlphaGo. Yes please.

+ Show Spoiler +
Who am I kidding. Even EffOrt or Bisu would be enough
"If I had force-fields in Brood War, I'd never lose." -Bisu
rabidch
Profile Joined January 2010
United States20289 Posts
Last Edited: 2016-03-12 12:17:04
March 12 2016 12:16 GMT
#95
On March 12 2016 21:11 Grettin wrote:
Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.

+ Show Spoiler +
Who am i kidding. Even EffOrt or Bisu would be enough

to be serious though, once DeepMind gets over the initial hurdle of limited information and studying build orders, it won't even be fair in either SC2 or BW because of the perfect-micro aspect. They'd have to give the AI a lot of handicaps.
LiquidDota Staff | Only a true king can play the King.
Grettin
Profile Joined April 2010
42381 Posts
March 12 2016 12:28 GMT
#96
On March 12 2016 21:16 rabidch wrote:
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI


Very true. It always reminds me of the Automaton2000 videos of marine-split micro. Regardless, I think it would be entertaining to see what would happen.

"If I had force-fields in Brood War, I'd never lose." -Bisu
rabidch
Profile Joined January 2010
United States20289 Posts
March 12 2016 12:35 GMT
#97
On March 12 2016 21:28 Grettin wrote:
Very true. Always reminds me of the Automaton2000 videos about the marine split micro. Regardless, i think it would be entertaining to see what would happen.

https://www.youtube.com/watch?v=DXUOWXidcY0

what I'd love to see is whether the AI can find different build orders and create new strategies, e.g. the fast corsair strategies in PvZ
LiquidDota Staff | Only a true king can play the King.
Grettin
Profile Joined April 2010
42381 Posts
Last Edited: 2016-03-12 12:48:48
March 12 2016 12:47 GMT
#98
On March 12 2016 21:35 rabidch wrote:
what i'd love to see is if the AI can find different build orders to try and create new strategies, ie like the fast corsair strategies in pvz


Definitely, and this would probably end up happening too. I noticed a comment on the Reddit thread about AlphaGo's third victory that sums this up well, I think.

Just remember, this is not the end of Go. As it was in chess, computers will gradually go from our nemesis to part of Go culture, assisting us and enhancing the game for human play.
"If I had force-fields in Brood War, I'd never lose." -Bisu
Taf the Ghost
Profile Joined December 2010
United States11751 Posts
March 12 2016 13:34 GMT
#99
On March 12 2016 21:16 rabidch wrote:
to be serious though, once deepmind gets over the initial hurdle of limited information and studying build orders it wont even be fair in either SC2 or BW because of the perfect micro aspect. theyd have to give a lot of handicaps to the AI


The one problem with the DeepMind-vs-SC-pro idea is that DeepMind should be limited to the input speed of a keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks that a player simply isn't allowed to perform because of the interface. That's not really a competition at that point; it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.

But there would also be some fairly serious technical issues to work through. It's one thing to have access to the direct API of SC:BW; it's wholly another to find a way to play a match instantly, which is what would be required for it to constantly run the simulations it would need to "learn" the game.

Lastly, on the Go matches: considering they're throwing a parallelized supercomputer at the problem, even if Moore's Law holds for the next decade, putting that processing power in the hands of home users, there's still the tens of millions of dollars in engineering that went into the code to make this work. That's never going to be common.
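The interface handicap Taf describes could be enforced mechanically at the action layer. The limiter below is purely hypothetical (DeepMind has published no such API); it just sketches capping an agent to a human-plausible actions-per-minute budget:

```python
class ApmLimiter:
    """Drop agent actions that exceed a fixed actions-per-minute budget,
    mimicking the physical bound a keyboard and mouse impose on a human."""

    def __init__(self, apm: int):
        self.min_interval_ms = 60_000 // apm  # minimum gap between actions
        self.next_allowed_ms = 0

    def try_act(self, now_ms: int) -> bool:
        """Return True if an action issued at time now_ms fits the budget."""
        if now_ms >= self.next_allowed_ms:
            self.next_allowed_ms = now_ms + self.min_interval_ms
            return True
        return False

limiter = ApmLimiter(apm=300)        # 300 APM = one action per 200 ms
attempts = range(0, 2000, 50)        # the agent tries to act every 50 ms
allowed = sum(limiter.try_act(t) for t in attempts)
print(allowed)  # 10 actions pass in those 2 seconds -> 300 APM effective
```

An agent that "wants" to issue thousands of micro commands per minute gets squeezed down to whatever budget the organizers pick, which is one way to make a man-vs-machine match about decisions rather than mechanics.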
rabidch
Profile Joined January 2010
United States20289 Posts
Last Edited: 2016-03-12 15:35:25
March 12 2016 15:27 GMT
#100
On March 12 2016 22:34 Taf the Ghost wrote:
The one problem with the DeepMind vs SC pro thought is that DeepMind should be required to be limited to the input speed of the keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks that a Player simply isn't allowed to because of the interface. That's not really a competition at that point, it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.

But there'd also be some fairly serious technical issues to work through. It's one thing to have access to the direct API of SC:BW, it's wholly another thing to find a way to be able to play a match instantly. That's what would be required for it to constantly run simulations it would need to "learn" the game.

Lastly, on the Go matches, considering they're throwing a parallelized super-computer at the problem, even if Moore's Law holds for the next decade, thus making the processing power available to the home user, there's still the issue of 10s of millions in programming money that went into the code to make this work. That's never going to be common.

Google would have to work something out with Blizzard to do it legally anyway, but if they really wanted to crack open BW to suit their needs, they could certainly do it.

And lastly, Google is making a huge advertisement for the power of cloud computing. Moore's Law will not hold up, at least for now; however, companies have found it lucrative to sell processing power through cloud computing. Perhaps one day, big enough server farms and more efficient parallelism will make AlphaGo's improvements available to the average person.
LiquidDota Staff | Only a true king can play the King.
Garrl
Profile Blog Joined February 2010
Scotland1972 Posts
March 12 2016 16:35 GMT
#101
Presumably BW checks for inputs every frame rather than using an event-driven model? So it would be limited to the native FPS of BW?

*** http://www.starcraftai.com/wiki/Frame_Rate has some resources on this
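Garrl's guess can be illustrated in a few lines. Both the polling model and the ~24 fps figure are assumptions for the sketch, not Brood War's actual engine internals: if commands are only picked up at frame boundaries, any flood of inputs collapses onto at most FPS command ticks per second:

```python
FPS = 24                    # assumed frame rate on BW's "fastest" speed
FRAME_MS = 1000 // FPS      # ~41 ms between polling ticks

def frames_that_see_input(input_times_ms, window_ms=1000):
    """Map each input to the next frame boundary at which it would be polled."""
    seen = set()
    for t in input_times_ms:
        frame = t // FRAME_MS + 1          # processed on the *next* frame tick
        if frame * FRAME_MS <= window_ms:  # tick falls inside the window
            seen.add(frame)
    return len(seen)

# 100 clicks spread over one second still land on at most 24 distinct frames.
clicks = range(0, 1000, 10)
print(frames_that_see_input(clicks))  # 24
```

However fast the agent spams inputs, the number of frames that actually register a command is capped by the frame rate.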
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-12 17:55:39
March 12 2016 17:52 GMT
#102
They could use two robot arms with robot hands to operate the mouse and keyboard physically, and use a camera to detect movement.

That's only fair: with chess or Go, moving the piece is immaterial, even for a robot (unless maybe it's speed chess), and a chess or Go board would be very easy to interpret compared to a video-game screen.

The eye-brain-hand coordination challenge is probably as hard a problem as the game itself. If you are going to pick a challenge just because it is hard to solve, to show you can do things never done before, why exclude half of the problem? Let's see if they can build a hand that can do 350 APM across the whole keyboard, using any key combination.
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 12 2016 18:43 GMT
#103
On March 12 2016 22:34 Taf the Ghost wrote:
The one problem with the DeepMind vs SC pro thought is that DeepMind should be required to be limited to the input speed of the keyboard and mouse. It's incredibly dishonest to allow the computer to perform tasks that a Player simply isn't allowed to because of the interface. That's not really a competition at that point, it's simply allowing the computer to abuse parts of the game engine the human player doesn't have access to.

But there'd also be some fairly serious technical issues to work through. It's one thing to have access to the direct API of SC:BW, it's wholly another thing to find a way to be able to play a match instantly. That's what would be required for it to constantly run simulations it would need to "learn" the game.

Lastly, on the Go matches, considering they're throwing a parallelized super-computer at the problem, even if Moore's Law holds for the next decade, thus making the processing power available to the home user, there's still the issue of 10s of millions in programming money that went into the code to make this work. That's never going to be common.


Under the current approach, DeepMind for 2D video games would very likely be restricted to 60 or 120 APM (one keyboard and/or mouse action per rendered frame).

"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
Nakama
Profile Joined May 2010
Germany584 Posts
Last Edited: 2016-03-12 19:23:48
March 12 2016 19:19 GMT
#104
On March 12 2016 15:26 andrewlt wrote:
Show nested quote +
On March 12 2016 10:57 Nakama wrote:
On March 12 2016 07:29 ZigguratOfUr wrote:
On March 12 2016 06:24 Nakama wrote:
On March 12 2016 05:26 Chocolate wrote:
On March 12 2016 03:47 Nakama wrote:
Funny how some ppl in here think a machine can "play" GO......
But i guess its normal when science and philosophy come close to each other and the scientist tries to be a philosopher or visa verse....

Well the important part is that you managed to be pretentious without actually elaborating on your point



Yes, i have to admit that, but hey, it's the internet and there is no way to discuss this topic in any reasonable way in a forum like this without being so simplistic that it gets wrong... and i was just baffled by the reaction and arguments of some folks in here when some other dude called the method AlphaGO uses "brute force", so i expressed it =)

And for me the best way to show my own opinion on this topic was to give the hint that we are talking about a "machine", and therefore words like "smart", "evaluation", "decision", "thinking" etc. can only be meant metaphorically, so in the end AlphaGO uses "brute force" to achieve/mimic what a human being does by thinking.

I am sure there are lightyears between trying out all possible options to solve a game or code (what u call brute force) and the method AlphaGO uses, and that's why some of u got mad about it, but if u think about it there is not much difference between those two methods, and i think brute force is an accurate way of describing the difference between the method AlphaGO uses and the one Lee Sedol is using.


Your definition of "brute force" seems to be so broad as to encompass all of human and machine thinking. When it comes down to it no one understands how humans make decisions. There's no reason to consider AlphaGO's decision making process inferior to the human process if it can obtain better results in this context.



My point is that AlphaGO has no "decision making process" that is even suitable to compare to what we as humans do... it's a machine, and if we talk about it like it "makes decisions", "acts" etc., we mean it in a metaphorical way, or otherwise our speech about it makes no sense.


And the point you are missing is that it really is pretentious to argue semantics about industry jargon as an outsider. I work in aerospace. We have our own acronyms and jargon and code words like every industry does. There are many terms that have a very specific meaning in the aerospace industry.

If I learned anything in reading 5 pages of this thread, it is that the term "brute force" has a specific meaning in the computer industry, a meaning that the people who sound like they work in the industry or follow it closely all use, a meaning that you are pointlessly trying to argue the semantics of.


I agree with you. But the problem arises when people from the industry forget that they use these words with a very specific meaning and then use them in another environment, for example when they try to explain the human brain in analogy to the method AlphaGO uses. Just look at some posts in here and you will see that it's not even a rare case.
Pwere
Profile Joined April 2010
Canada1556 Posts
Last Edited: 2016-03-12 19:36:18
March 12 2016 19:35 GMT
#105
On March 12 2016 20:21 Gorsameth wrote:
On March 12 2016 14:10 Wegandi wrote:
Almost all of the games that have AI beating humans are information symmetrical. I'll be impressed when AI can beat the best players of all-time heads up at asymmetric information games like poker, magic the gathering, etc. more than 50% of the time.

Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available.
Over a large enough sample size to even out the element of chance a computer will win, no doubt about it.
This explanation is wrong. Current AIs do not beat the best poker players in complex variations of poker (NLHE, PLO, etc.). While I'm pretty sure it's more a matter of resources than technology, the reason they're not winning is because poker is a game of balance. Your bluffs have to be balanced with strong hands. Your greedy value bets have to appear in spots you bluff a significant amount. Math for a single hand has almost nothing to do with it.

If a team with the resources of AlphaGo made a PokerAI, they would get there in a few years, at most, but meanwhile, a few dozen players still beat the best AIs.

Same goes for games like Magic or Hearthstone. This is the spot Chess was in ~25 years ago, and now your phones are GM level.
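The "balance" Pwere describes is concrete game theory. In the standard toy river situation, a bettor who bets `b` into a pot `p` keeps the caller indifferent by bluffing a fraction b / (p + 2b) of their betting range. A sketch of that arithmetic (a textbook simplification, not a full poker solver):

```python
def equilibrium_bluff_fraction(bet, pot):
    """Fraction of a river betting range that should be bluffs so the caller
    is indifferent: a call risks `bet` to win `pot + bet`, so indifference
    means f * (pot + bet) = (1 - f) * bet  =>  f = bet / (pot + 2 * bet)."""
    return bet / (pot + 2 * bet)

# Pot-sized bet: one bluff for every two value bets (f = 1/3).
print(round(equilibrium_bluff_fraction(bet=100, pot=100), 3))  # 0.333
```

The point is that this fraction depends only on bet sizing, not on the cards in a single hand, which is why "math for a single hand" is not what separates strong players (or strong AIs) from weak ones.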
Grumbels
Profile Blog Joined May 2009
Netherlands7031 Posts
Last Edited: 2016-03-12 19:39:00
March 12 2016 19:36 GMT
#106
wrong thread :/
Well, now I tell you, I never seen good come o' goodness yet. Him as strikes first is my fancy; dead men don't bite; them's my views--amen, so be it.
BisuDagger
Profile Blog Joined October 2009
Bisutopia19224 Posts
March 13 2016 02:30 GMT
#107
Has anyone considered what a game of go would be like if it was AlphaGo vs AlphaGo?
Moderator | Former Afreeca Starleague Caster: http://afreeca.tv/ASL2ENG2
Sbrubbles
Profile Joined October 2010
Brazil5776 Posts
March 13 2016 02:36 GMT
#108
Except to really high-level players, I assume it would look like your usual high-level game.
Bora Pain minha porra!
MyLovelyLurker
Profile Joined April 2007
France756 Posts
March 13 2016 03:42 GMT
#109
On March 13 2016 11:30 BisuDagger wrote:
Has anyone considered what a game of go would be like if it was AlphaGo vs AlphaGo?


This is how AlphaGo was trained. After an initial phase of learning through playing on an online human server, it has been playing millions of games versus itself in the cloud.
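The self-play loop itself is structurally simple, even though the real system pairs it with deep networks and enormous compute. A toy sketch of the shape of that loop (the game here is an invented stand-in, not DeepMind's pipeline): play the current policy against itself and record each position with the eventual outcome from the mover's point of view, which is the target the value network is trained on.

```python
import random

class ToyGame:
    """Stand-in two-player game (hypothetical, just to show the loop shape):
    players alternate adding 1-3 to a counter; whoever reaches 10 first wins."""
    def __init__(self):
        self.total, self.player = 0, 1

    def moves(self):
        return [1, 2, 3]

    def play(self, m):
        self.total += m
        self.player = -self.player

    def over(self):
        return self.total >= 10

    def winner(self):
        return -self.player  # the player who made the final move

def self_play_episode(policy, rng):
    """Play the current policy against itself; return (state, player, move,
    outcome) tuples, where outcome is +1/-1 from the mover's point of view."""
    game, history = ToyGame(), []
    while not game.over():
        m = policy(game, rng)
        history.append((game.total, game.player, m))
        game.play(m)
    z = game.winner()
    return [(s, p, m, z * p) for (s, p, m) in history]

rng = random.Random(0)
data = self_play_episode(lambda g, r: r.choice(g.moves()), rng)
```

Millions of such episodes, with the policy periodically retrained on the accumulated `data`, is the "playing itself in the cloud" part.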
"I just say, it doesn't matter win or lose, I just love Starcraft 2, I love this game, I love this stage, just play like in practice" - TIME/Oliveira
JazVM
Profile Joined October 2012
Germany1196 Posts
March 13 2016 08:48 GMT
#110
So they let the poor guy win today?
mind mind mind mind mind mind
CCow
Profile Joined August 2010
Germany335 Posts
March 13 2016 08:56 GMT
#111
No, "they" did not.
He just won.
Makro
Profile Joined March 2011
France16890 Posts
March 13 2016 08:57 GMT
#112
honor is safe
Matthew 5:10 "Blessed are those who are persecuted because of shitposting, for theirs is the kingdom of heaven".
TL+ Member
Buddhist
Profile Joined April 2010
United States658 Posts
Last Edited: 2016-03-13 09:02:12
March 13 2016 09:00 GMT
#113
The main advantage a human player would have is exploiting fog of war and strategies that the AI has never seen before. AI will optimize for everything it already knows, it can't optimize for things it doesn't know.

There are other ways to exploit the AI's weaknesses also, for example making seemingly illogical decisions (such as building a CC in the enemy's natural, only to cancel it later); it might trick the AI into believing something is happening that really isn't, causing it to do completely the wrong thing.

The main issue will be that any advantage the human gains has to overcome the incredible micro and macro advantage of the AI's perfect mechanics.

To make the competition fair, it would be sensible to limit the AI to hardware inputs and outputs, meaning it has to read from the monitor and input through mouse and keyboard. You might then also want to set an APM limit, for example of 300.

This will force the AI to work with what a human has available to him, and test the AI's strategic and tactical abilities, rather than raw mechanics.
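An APM cap like the 300 suggested above could be enforced with a simple sliding-window rate limiter; a minimal sketch (hypothetical, just to show the mechanism):

```python
from collections import deque

class ApmLimiter:
    """Sliding-window rate limit: reject any action that would exceed `apm`
    allowed actions within the trailing 60 seconds."""
    def __init__(self, apm=300):
        self.apm = apm
        self.times = deque()  # timestamps of recently allowed actions

    def allow(self, now):
        while self.times and now - self.times[0] >= 60.0:
            self.times.popleft()  # forget actions older than one minute
        if len(self.times) < self.apm:
            self.times.append(now)
            return True
        return False  # over budget: the action is simply dropped

limiter = ApmLimiter(apm=300)
# An AI attempting 1000 actions within a single second gets only 300 through.
allowed = sum(limiter.allow(t / 1000.0) for t in range(1000))
print(allowed)  # 300
```

The interesting design question is what the AI does with a finite budget: it would have to learn to spend actions where they matter most, which is exactly the prioritization skill human players are forced into.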
JazVM
Profile Joined October 2012
Germany1196 Posts
March 13 2016 09:11 GMT
#114
On March 13 2016 18:00 Buddhist wrote:
The main advantage a human player would have is exploiting fog of war and strategies that the AI has never seen before. AI will optimize for everything it already knows, it can't optimize for things it doesn't know.

There are other ways to exploit the AI's weaknesses also, for example making seemingly illogical decisions (such as building a CC in the enemy's natural, only to cancel it later); it might trick the AI into believing something is happening that really isn't, causing it to do completely the wrong thing.
.


But isn't the point of AlphaGo exactly the opposite? It would "know" that the cc-cancel is fake and would play accordingly because it learned it from previous games?
mind mind mind mind mind mind
Salazarz
Profile Blog Joined April 2012
Korea (South)2591 Posts
March 13 2016 09:56 GMT
#115
On March 13 2016 18:00 Buddhist wrote:
The main advantage a human player would have is exploiting fog of war and strategies that the AI has never seen before. AI will optimize for everything it already knows, it can't optimize for things it doesn't know.

There are other ways to exploit the AI's weaknesses also, for example making seemingly illogical decisions (such as building a CC in the enemy's natural, only to cancel it later); it might trick the AI into believing something is happening that really isn't, causing it to do completely the wrong thing.

The main issue will be that any advantage the human gains has to overcome the incredible micro and macro advantage of the AI's perfect mechanics.

To make the competition fair, it would be sensible to limit the AI to hardware inputs and outputs, meaning it has to read from the monitor and input through mouse and keyboard. You might then also want to set an APM limit, for example of 300.

This will force the AI to work with what a human has available to him, and test the AI's strategic and tactical abilities, rather than raw mechanics.


That's not how it works, though. The AI can optimize for things that could 'potentially' be happening in fog of war just like a human can (or better) because it would have access to all the replays of previously played games fed to it. Tricking AIs with seemingly illogical decisions is what people tried a lot in games like Chess and it doesn't work -- a properly built AI will be able to predict potential outcomes of whatever shenanigans you're doing and react accordingly.
CCow
Profile Joined August 2010
Germany335 Posts
March 13 2016 10:01 GMT
#116
It would consider "the best option" for its opponent.
So if it's an obvious fake, I am pretty sure that it would act accordingly.

Also since it can literally react within no time, it will be incredibly tough to keep it from having next to perfect map knowledge at all times.

Limiting the micro might be necessary, but it would set arbitrary limits. You could do that, but it is weird. The reason I feel this is an amazing thing for Go is that it has the potential to show "us humans" the boundaries of what is possible.
Putting arbitrary limits on that would completely destroy that point.
trulojucreathrma.com
Profile Blog Joined December 2015
United States327 Posts
Last Edited: 2016-03-13 12:47:23
March 13 2016 12:44 GMT
#117
You can only play mindgames vs something that knows it can be tricked.


If you build a CC and cancel, the only info you give it is that you have spent 400 minerals less than you normally would. This allows it to cut corners.

The point is exactly that an AI plays like a robot. It won't panic or overreact like a human. When a human makes a mistake, they overcompensate the next game, making another mistake.

An AI can more easily find the optimal path because it has no human bias.



I do agree you can adapt to a bad static AI in an RTS. You can see through its patterns very quickly. If the AI is static, you can do it in 3 games.
Wegandi
Profile Joined March 2011
United States2455 Posts
March 13 2016 13:32 GMT
#118
On March 13 2016 04:35 Pwere wrote:
On March 12 2016 20:21 Gorsameth wrote:
On March 12 2016 14:10 Wegandi wrote:
Almost all of the games that have AI beating humans are information symmetrical. I'll be impressed when AI can beat the best players of all-time heads up at asymmetric information games like poker, magic the gathering, etc. more than 50% of the time.

Poker is a game of percentages. It is trivial for a computer to calculate its chance of winning at any single point in the game and react "perfectly" to the information available.
Over a large enough sample size to even out the element of chance a computer will win, no doubt about it.
This explanation is wrong. Current AIs do not beat the best poker players in complex variations of poker (NLHE, PLO, etc.). While I'm pretty sure it's more a matter of resources than technology, the reason they're not winning is because poker is a game of balance. Your bluffs have to be balanced with strong hands. Your greedy value bets have to appear in spots you bluff a significant amount. Math for a single hand has almost nothing to do with it.

If a team with the resources of AlphaGo made a PokerAI, they would get there in a few years, at most, but meanwhile, a few dozen players still beat the best AIs.

Same goes for games like Magic or Hearthstone. This is the spot Chess was in ~25 years ago, and now your phones are GM level.


Hearthstone is child's play. I would love to watch an AI try to play the Vintage (MtG) Grixis control mirror against LSV. Even the Caw Blade standard mirror would be quite fun. It would have to be a mirror, since that would make it an even playing field, unless you had something close to 50/50 (like, say, Bgx vs UWx control). You can't have the AI play Tron against UG Infect. Even if the AI had perfect information and did everything perfectly, it would still lose 65%+ of the time.

In the more complex formats of Magic there are simply too many branching decisions and too much asymmetric information for the AI to "dominate" like it does in perfect-information games like Chess and Go. There's a reason poker players like David Williams and Efro love playing Magic.
Thank you bureaucrats for all your hard work, your commitment to public service and public good is essential to the lives of so many. Also, for Pete's sake can we please get some gun control already, no need for hand guns and assault rifles for the public
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
Last Edited: 2016-03-15 08:08:18
March 15 2016 08:07 GMT
#119
I watched the VOD from the third game (all of it, stayed up until 4am despite having to go to work at 9 the day after) and got hooked! :o

Now watching game 5 on https://gogameguru.com/alphago-5/

Is Lee Sedol winning? I don't understand....

Where can I play go online (vs comp, but not alphaGo plz)?

edit: Can we live-comment the game?
Furikawari
Profile Joined February 2014
France2522 Posts
March 15 2016 08:12 GMT
#120
To play online you can check KGS or IGS. KGS has a room dedicated to bots.
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 15 2016 08:33 GMT
#121
The commentators' board doesn't match the live game board... it bugs me out! :o

Also, I have no idea how he is calculating those points... But he just said that alphaGo may have a slight advantage. Very close though it seems.
Draconicfire
Profile Joined May 2010
Canada2562 Posts
March 15 2016 09:01 GMT
#122
Gg.
@Drayxs | Drayxs.221 | Drayxs#1802
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
Last Edited: 2016-03-15 09:06:00
March 15 2016 09:05 GMT
#123
Lee resigns just a few moves away from the end (I think?) so we don't get an official count. Seems like alphaGo was winning by just a few points, but that is how it plays apparently: win small, lose big.

alphaGo beats Lee 4-1: WWWLW.

ggs.

press conference up next.
nayumi
Profile Blog Joined May 2009
Australia6499 Posts
Last Edited: 2016-03-15 09:28:10
March 15 2016 09:28 GMT
#124
well Lee might go down in the history of mankind as the only human being to ever beat Alphago ... i guess that's an achievement
Sugoi monogatari onii-chan!
Cascade
Profile Blog Joined March 2006
Australia5405 Posts
March 15 2016 09:39 GMT
#125
On March 15 2016 18:28 nayumi wrote:
well Lee might go down in the history of mankind as the only human being to ever beat Alphago ... i guess that's an achievement

Good point. The last human to take a map from the best computer.
Railgan
Profile Joined August 2010
Switzerland1507 Posts
Last Edited: 2016-03-16 09:26:30
March 16 2016 09:26 GMT
#126
On March 15 2016 18:39 Cascade wrote:
On March 15 2016 18:28 nayumi wrote:
well Lee might go down in the history of mankind as the only human being to ever beat Alphago ... i guess that's an achievement

Good point. The last human to take a map from the best computer.

And in 2 years people will say: "But [insert current champion] could beat Alphago no problem. Lee just played bad."
Grandmaster Zerg from Switzerland!!! www.twitch.tv/railgan // www.twitter.com/railgansc // www.youtube.com/c/railgansc
Glacierz
Profile Blog Joined May 2010
United States1244 Posts
March 16 2016 13:37 GMT
#127
On March 16 2016 18:26 Railgan wrote:
On March 15 2016 18:39 Cascade wrote:
On March 15 2016 18:28 nayumi wrote:
well Lee might go down in the history of mankind as the only human being to ever beat Alphago ... i guess that's an achievement

Good point. The last human to take a map from the best computer.

And in 2 years people will say: "But [insert current champion] could beat Alphago no problem. Lee just played bad."

Ke Jie in China is already claiming that he can beat it. I really hope Google takes up that challenge. My money is on AlphaGo.
Madars
Profile Joined December 2011
Latvia166 Posts
March 19 2016 23:11 GMT
#128
On March 12 2016 21:11 Grettin wrote:
On March 12 2016 20:41 rabidch wrote:
On March 12 2016 11:49 PhoenixVoid wrote:
A relevant read for the site.

http://uk.businessinsider.com/google-deepmind-could-play-starcraft-2016-3



"'StarCraft,' I think, is our likely next target," Google Senior Fellow Jeff Dean said at today's Structure Data event in San Francisco.

...

"The thing about Go is obviously you can see everything on the board, so that makes it slightly easier for computers," Google DeepMind founder Demis Hassabis told The Verge recently.

Meanwhile, games like "StarCraft" and its sequel keep your opponents' moves largely secret, at least until you come into conflict — skilled players watch closely for clues as to their opponents' strategy and try to anticipate their next move.

"You have to keep track of things happening off the screen," Dean says.

It means that Google's DeepMind would have a brand-new challenge of trying to outguess their opponent, and react if and when they come up with something totally crazy. It would test a new set of skills for artificial intelligence.


Though I wouldn't take it as an absolute promise until we get confirmation.

very amazed that jeff dean of all people is talking about starcraft as the next target.

google trying to destroy korean esports???


Give Flash couple of months to get back to form and BO5 against AlphaGo. Yes please.

Who am i kidding. Even EffOrt or Bisu would be enough

I would love to see Flash playing Go.
<3 Alexis Eusebio, Lee Shin Hyung, Choi Seong Hun, Joo Sung Wook, Jang Min Chul, Kim Yoo Jin, Lee Young Ho, Lee Shin Hyung, Yun Young Seo, Kim Joon Ho, Jeong Jong Hyeon, Eo Yoon Su, Johan Lucchesi, Ilyes Satouri
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
November 30 2017 05:34 GMT
#129
Bump!!!

https://www.youtube.com/watch?v=8tq1C8spV_g

Also there is apparently a new version of AlphaGo called AlphaGo Zero.

https://en.wikipedia.org/wiki/AlphaGo_Zero
"Smokey, this is not 'Nam, this is bowling. There are rules."
SlayerS_BunkiE
Profile Blog Joined May 2009
Canada1707 Posts
Last Edited: 2017-11-30 10:45:14
November 30 2017 10:44 GMT
#130
On November 30 2017 14:34 {CC}StealthBlue wrote:
Bump!!!

https://www.youtube.com/watch?v=8tq1C8spV_g

Also there is apparently a new version of AlphaGo called AlphaGo Zero.

https://en.wikipedia.org/wiki/AlphaGo_Zero

Amazing. Completely self-taught? 100% artificial intelligence?
I've had a fascination with Go since reading and watching HnG. Amazing that this is actually happening today, when all I could read back then was how it was still impossible for computers to beat humans in Go.
So computers will be the first to achieve the hand of god...
iloveby.SlayerS_BunkiE[Shield]
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
December 07 2017 18:30 GMT
#131
AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.

The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules to chess before beating the world champion chess program, Stockfish 8, in a 100-game match up.

AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published with Cornell University Library’s arXiv.

“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors that include DeepMind founder Demis Hassabis, who was a child chess prodigy reaching master standard at the age of 13.

“It’s a remarkable achievement, even if we should have expected it after AlphaGo,” former world chess champion Garry Kasparov told Chess.com. “We have always assumed that chess required too much empirical knowledge for a machine to play so well from scratch, with no human knowledge added at all.”

Computer programs have been able to beat the best human chess players ever since IBM’s Deep Blue supercomputer defeated Kasparov on 12 May 1997.

DeepMind said the difference between AlphaZero and its competitors is that its machine-learning approach is given no human input apart from the basic rules of chess. The rest it works out by playing itself over and over with self-reinforced knowledge. The result, according to DeepMind, is that AlphaZero took an “arguably more human-like approach” to the search for moves, processing around 80,000 positions per second in chess compared to Stockfish 8’s 70m.

After winning 25 games of chess versus Stockfish 8 starting as white, with first-mover advantage, a further three starting with black and drawing a further 72 games, AlphaZero also learned shogi in two hours before beating the leading program Elmo in a 100-game matchup. AlphaZero won 90 games, lost eight and drew two.

The new generalised AlphaZero was also able to beat the "super human" former version of itself, AlphaGo, at the Chinese game of Go after only eight hours of self-training, winning 60 games and losing 40 games.

While experts said the results are impressive, and have potential across a wide range of applications to complement human knowledge, professor Joanna Bryson, a computer scientist and AI researcher at the University of Bath, warned that it was "still a discrete task".


Source
"Smokey, this is not 'Nam, this is bowling. There are rules."
Glacierz
Profile Blog Joined May 2010
United States1244 Posts
December 14 2017 19:54 GMT
#132
After reading up on the research around Alphago and its subsequent iterations, it feels like the framework can be generalized to solve most discrete decision making problems.

It can also be used to evaluate whether a game is balanced or not. I'd love to see it applied to other games such as Hearthstone, Poker, etc. in future iterations.
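The balance-evaluation idea above can be made concrete even without a strong agent: run many self-play games between two copies of the same policy and measure how often the first player wins. A sketch (the "game" here is an invented stand-in with a built-in first-move advantage):

```python
import random

def first_player_winrate(play_game, games=5000, seed=1):
    """Estimate balance empirically: play many self-play games and measure
    how often player 1 wins. A balanced game should land near 0.5."""
    rng = random.Random(seed)
    wins = sum(play_game(rng) == 1 for _ in range(games))
    return wins / games

def biased_coin_game(rng):
    """Stand-in 'game' (hypothetical): player 1 wins 55% of the time,
    modelling a small built-in first-move advantage."""
    return 1 if rng.random() < 0.55 else 2

print(first_player_winrate(biased_coin_game))
```

With a strong learned policy in place of the coin flip, a persistent deviation from 0.5 is evidence of imbalance in the game itself rather than in the players.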
andrewlt
Profile Joined August 2009
United States7702 Posts
December 14 2017 20:07 GMT
#133
It's going to learn to use 75 seconds on every Hearthstone turn until the other guy quits.
Glacierz
Profile Blog Joined May 2010
United States1244 Posts
December 15 2017 14:51 GMT
#134
Haha, I think having it learn to play the game optimally is far less interesting than making the best deck, both in arena and constructed.
BrTarolg
Profile Blog Joined June 2009
United Kingdom3574 Posts
Last Edited: 2017-12-22 01:34:46
December 22 2017 01:34 GMT
#135
On December 15 2017 04:54 Glacierz wrote:
After reading up on the research around Alphago and its subsequent iterations, it feels like the framework can be generalized to solve most discrete decision making problems.

It can also be used to evaluate if a game is balanced or not. I'd love to see it applied to other games such as Hearthstone, Poker, etc in future iterations.


as far as my understanding goes, we might be getting much further than that, to the point of being able to solve *most* simulatable games
check out their Atari work, which is where it first got really interesting (inputs = pixels on screen only)

100% certain that they can use the same framework to solve poker and a lot of similar games within that realm (hearthstone included)
though solving hearthstone will of course not be quite as interesting haha

also, i use the word "solve" loosely, in the way that alphazero is "solving" chess and go
TelecoM
Profile Blog Joined January 2010
United States10668 Posts
December 22 2017 01:51 GMT
#136
This is amazing wow.
AKA: TelecoM[WHITE] Protoss fighting
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
Last Edited: 2017-12-22 13:47:54
December 22 2017 13:47 GMT
#137
Heads up the AlphaGo documentary will be on Netflix January 1st.

http://www.imdb.com/streaming/netflix-january-2018/ls027295189/mediaviewer/rm4129634048
"Smokey, this is not 'Nam, this is bowling. There are rules."
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
January 02 2018 05:29 GMT
#138
Documentary is now up. Just finished it, was amazing. Loved the footage of the Korean commentators, especially during the first game: when Lee was blocked by AlphaGo for the first time, the woman was outraged. "How dare it interrupt him!" was hilarious.
"Smokey, this is not 'Nam, this is bowling. There are rules."
BrTarolg
Profile Blog Joined June 2009
United Kingdom3574 Posts
January 02 2018 15:21 GMT
#139
Watched it, was great!

What's absolutely incredible is how much simpler AlphaZero is compared to its previous iterations, and yet it is many orders of magnitude stronger (to give you an idea, it crushes the Lee version 100-0)

This step is just as huge an achievement as AlphaGo itself, and essentially amounts to a fair claim that DeepMind have an algorithm that "solves" a big portion of all conceivable games, within reason
ItsFunToLose
Profile Joined December 2010
United States776 Posts
Last Edited: 2018-01-02 16:29:42
January 02 2018 16:20 GMT
#140
No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input(ie, not hooked deep into the game logic itself granting it obscene levels of predictive omniscience.)

I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.

"skillshots are inherently out of your control whether they hit or not" -PrinceXizor
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
January 02 2018 18:37 GMT
#141
On January 03 2018 01:20 ItsFunToLose wrote:
No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input(ie, not hooked deep into the game logic itself granting it obscene levels of predictive omniscience.)

I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.


What makes you think this? Rocket League (regardless of game mode) is quite simple with regard to physics, rules, and total complexity, especially compared to a game like Starcraft.
The harder it becomes, the more you should focus on the basics.
Gorsameth
Profile Joined April 2010
Netherlands21593 Posts
January 02 2018 18:47 GMT
#142
Rocket League would take some visual recognition if it has to use the screen input, but the rest of it is relative child's play: just basic physics calculations.
It ignores such insignificant forces as time, entropy, and death
emperorchampion
Profile Blog Joined December 2008
Canada9496 Posts
Last Edited: 2018-01-02 19:40:22
January 02 2018 19:39 GMT
#143
On December 22 2017 22:47 {CC}StealthBlue wrote:
Heads up the AlphaGo documentary will be on Netflix January 1st.

http://www.imdb.com/streaming/netflix-january-2018/ls027295189/mediaviewer/rm4129634048


Damn not on Swiss Netflix yet

Was it a good documentary, or something only someone interested in comp sci or go would enjoy?
TRUEESPORTS || your days as a respected member of team liquid are over
BrTarolg
Profile Blog Joined June 2009
United Kingdom3574 Posts
January 06 2018 14:13 GMT
#144
From an AI perspective, it would be 100% possible to make a bot that could beat you in Rocket League using only pixel screen inputs

Such a task, however, would probably require the entire DeepMind team with Google's support working on the problem for a fairly significant amount of time

But we are making our steps there
arb
Profile Blog Joined April 2008
Noobville17921 Posts
January 06 2018 18:40 GMT
#145
On January 03 2018 01:20 ItsFunToLose wrote:
No (man-made) computer will ever beat me in a game of Rocket League 1v1 hoops taking exclusively pixels as input(ie, not hooked deep into the game logic itself granting it obscene levels of predictive omniscience.)

I submit readily to the idea that I currently exist within the framework of a quantum computer universe, and that it is simulating a fraction of the current population and that I have already lost games of Rocket League to such an advanced intelligence.


yeah a computer will never beat me in a fist fight either
Artillery spawned from the forges of Hell
andrewlt
Profile Joined August 2009
United States7702 Posts
January 06 2018 23:12 GMT
#146
Seems the entire point of this research is to help humans with things like folding proteins and designing molecules. It's going to be used to assist in developing new materials and drugs. Solving turn-based games helps. Solving real-time games like Starcraft seems useless.
sabas123
Profile Blog Joined December 2010
Netherlands3122 Posts
January 09 2018 17:14 GMT
#147
On January 03 2018 04:39 emperorchampion wrote:
On December 22 2017 22:47 {CC}StealthBlue wrote:
Heads up the AlphaGo documentary will be on Netflix January 1st.

http://www.imdb.com/streaming/netflix-january-2018/ls027295189/mediaviewer/rm4129634048


Damn not on Swiss Netflix yet

Was it a good documentary, or something only someone interested in comp sci or go would enjoy?

It was a good documentary, it focused more on the story and the context behind the match than anything else. No comp sci knowledge required to enjoy this one
The harder it becomes, the more you should focus on the basics.
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
February 04 2019 16:42 GMT
#148
Bump:

"Smokey, this is not 'Nam, this is bowling. There are rules."
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
May 01 2020 14:27 GMT
#151
The documentary has been uploaded to YouTube.

"Smokey, this is not 'Nam, this is bowling. There are rules."
Normal
Please log in or register to reply.
Original banner artwork: Jim Warren
The contents of this webpage are copyright © 2025 TLnet. All Rights Reserved.