Neural networks

Qzy
Joined July 2010
Denmark, 1121 Posts
Last Edited: 2010-09-17 13:20:14
September 17 2010 13:16 GMT
#1
The TeamLiquid community is pretty smart..

Does anyone understand neural networks and how they work with multiple layers? I've got a bunch of questions just to be able to understand them even slightly - most scientific texts on neural networks are very heavy on the math, but don't do a good job of explaining what the... is going on.

Tabbris
Joined June 2010
Bangladesh, 2839 Posts
Last Edited: 2010-09-17 13:28:38
September 17 2010 13:28 GMT
#2
You should try the TL manpower thread: http://www.teamliquid.net/forum/viewmessage.php?topic_id=84245
Glacierz
Joined May 2010
United States, 1244 Posts
September 17 2010 13:34 GMT
#3
Why not start with Wikipedia?
ZBiR
Joined August 2003
Poland, 1092 Posts
Last Edited: 2010-09-17 13:43:56
September 17 2010 13:42 GMT
#4
It depends on what type of network you have, but in the most basic version, each neuron receives signals from each neuron of the previous layer and multiplies each one by its specific weight (each neuron has a different set of weights for the signals from the previous layer; usually it's the changing weights that are considered the learning element of a network). It then sums them, applies a function to that summed signal, and sends the result to each neuron in the next layer. Simple.
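In code, that basic version might look something like this - a minimal Python/NumPy sketch, where the sigmoid activation and the layer sizes are just illustrative assumptions:

import numpy as np

def layer_forward(inputs, weights, bias):
    # Each row of `weights` is one neuron's personal set of weights for the
    # signals from the previous layer; the weighted sum is squashed by a
    # sigmoid before being sent on to the next layer.
    summed = weights @ inputs + bias
    return 1.0 / (1.0 + np.exp(-summed))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])                                        # 2 inputs
hidden = layer_forward(x, rng.normal(size=(3, 2)), rng.normal(size=3))       # 3 hidden neurons
output = layer_forward(hidden, rng.normal(size=(1, 3)), rng.normal(size=1))  # 1 output
print(output)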
meeple
Joined April 2009
Canada, 10211 Posts
September 17 2010 14:03 GMT
#5
You should go ahead and ask the questions, and state exactly what you do and don't understand about them - you'll have a much better chance of getting a real answer.
Qzy
Joined July 2010
Denmark, 1121 Posts
Last Edited: 2010-09-17 14:27:39
September 17 2010 14:16 GMT
#6
Thanks for the answers so far.

Okay, here are a few questions.

I get the basics of it - but ...

How many neurons should you use with 2 inputs? Do you HAVE to use 2 neurons in the hidden layer, or can you simply use 1? Can you use 5? What's the benefit of using fewer/more?

You can have more hidden layers - but with what benefits? Should each have the same number of neurons as the other hidden layers?

When teaching the NN how to play, for instance, tic-tac-toe, do I give it training examples: this is the input, I expect this output...? How many examples does it need to play decently?

Can it generalize once it has seen a few examples?
Glacierz
Joined May 2010
United States, 1244 Posts
September 17 2010 14:33 GMT
#7
Based on your questions, I suggest you start with Bayesian networks before getting into neural networks.
Qzy
Joined July 2010
Denmark, 1121 Posts
September 17 2010 14:39 GMT
#8
On September 17 2010 23:33 Glacierz wrote:
Based on your questions, I suggest you start with Bayesian networks before getting into neural networks.


Can't :/. I'm taking my specialization course. This week it's neural networks, where we have to make a ludo player (in 1 week); next up is genetic algorithms and then reinforcement learning.

Sigh.
Cambium
Joined June 2004
United States, 16368 Posts
Last Edited: 2010-09-17 14:57:41
September 17 2010 14:55 GMT
#9
On September 17 2010 23:16 Qzy wrote:
Show nested quote +


It's been a while since I took ML, so I don't remember too much about NNs, but I'll give it a shot.

You should have at least N+1 nodes in each layer, where N is your number of input nodes. You can, of course, build a layer with any number of nodes; you just won't reach the accuracy you desire. I remember at one point this made intuitive sense to me, but I don't remember it well enough to explain it back to you. There is no hard restriction on the number of neurons in each layer; it's something you have to experiment with by running the NN multiple times on your training data and choosing the configuration with the highest accuracy. You can do the same with the number of hidden layers. The reason you shouldn't use an excessive number of neurons and hidden layers is to avoid overfitting (I think...).

Each layer can have a different number of nodes, and the optimal number of layers depends largely on your input data, your activation function (such as the sigmoid), and your training method (such as gradient descent). Too few nodes cause underfitting, and too many nodes cause overfitting (again, I think...).

Tic-tac-toe is actually a difficult problem to solve with an NN (I'd actually use a decision tree) since it's adaptive. Your first task would be to digitize all of the moves in a given game (so every game is one piece of training data), and the output would be win, lose, or tie. Alternatively, you can assign each state a value (much like chess) so that every move can be a row in your training data.

I hope this helps.
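As a concrete illustration of the "run the NN multiple times and keep the most accurate" advice, here is a minimal sketch using scikit-learn's MLPClassifier on a toy dataset - the dataset and the hidden-layer sizes tried are arbitrary assumptions, not anything from the assignment:

from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.25, random_state=0)  # toy 2-input problem

# Re-run the network with different hidden-layer sizes and keep the best one.
for size in (1, 2, 5, 20, 100):
    net = MLPClassifier(hidden_layer_sizes=(size,), max_iter=2000, random_state=0)
    acc = cross_val_score(net, X, y, cv=5).mean()
    print(f"{size:3d} hidden neurons: {acc:.3f} mean cross-validated accuracy")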
illu
Joined December 2008
Canada, 2531 Posts
September 17 2010 14:57 GMT
#10
Come to think of it, a professor at the University of Toronto sort of specializes in this subject.
:]
Cambium
Joined June 2004
United States, 16368 Posts
September 17 2010 14:59 GMT
#11
In any case, neural networks are easy to implement in Matlab with the NN toolbox. The difficult part is choosing the right activation function, the number of neurons, and the number of layers (you can just let this run for days on a box).
Qzy
Joined July 2010
Denmark, 1121 Posts
Last Edited: 2010-09-17 15:03:26
September 17 2010 15:02 GMT
#12
On September 17 2010 23:55 Cambium wrote:
Show nested quote +


Thanks, that clears it up a bit..

Right now I have to implement a ludo player - with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and it can generalize from these examples once properly trained (the output has reached the desired values)?
Cambium
Joined June 2004
United States, 16368 Posts
September 17 2010 15:09 GMT
#13
On September 18 2010 00:02 Qzy wrote:
Show nested quote +


Well, you first need to classify your inputs and outputs. In a game of tic-tac-toe, say you are red, the inputs would be along the lines of: how many reds on each line (attack), how many blacks on each line (defence), and maybe a few more. Your output would be a quantification of the state after you place your piece. If you win or prevent a loss, it would probably be the maximum value, and you go from there.

You would need a lot more data than a "few lines"; I would think on the order of hundreds if not thousands. I would try to find existing data for tic-tac-toe and see how experts classified the game. The best way to obtain data is either to find it, or to create an online version and ask your friends to play so that you can record their respective inputs and outputs.
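A hypothetical sketch of that attack/defence encoding in Python - the exact feature set is just one possible choice, not a prescribed one:

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def features(board, me, opponent):
    # board: list of 9 cells, each 'X', 'O' or None.
    # For every line, count my pieces (attack) and the opponent's (defence).
    feats = []
    for line in LINES:
        cells = [board[i] for i in line]
        feats.append(cells.count(me))
        feats.append(cells.count(opponent))
    return feats          # 16 numbers describing the state

board = ['X', 'O', None,
         None, 'X', None,
         None, None, 'O']
print(features(board, 'X', 'O'))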
Qzy
Joined July 2010
Denmark, 1121 Posts
September 17 2010 15:13 GMT
#14
On September 18 2010 00:09 Cambium wrote:
Show nested quote +


Then it's good I've got 3 more days to come up with a ludo player :D

God I love university with their "1 week to understand 50 years of AI, and implement it - kkthxbye".

So lost in this - how is it possible to create a working neural network in a week.. seriously.
Glacierz
Joined May 2010
United States, 1244 Posts
September 17 2010 16:03 GMT
#15
Tic-tac-toe is easily solved by alpha-beta pruning; no need for complex frameworks like NNs.
AcrossFiveJulys
Joined September 2005
United States, 3612 Posts
September 17 2010 16:05 GMT
#16
On September 18 2010 00:09 Cambium wrote:
Show nested quote +


I wouldn't recommend using a neural network for learning a game unless it's integrated as part of a self-play system. If you want to read about a way to kick some serious ass, look up reinforcement learning for backgammon.

Cambium is suggesting that you use the NN as a state utility evaluator (and then presumably use those evaluations to implement minimax?). That is a viable approach, but I think you could implement the evaluation step yourself, get pretty far, and skip the NN part.

If you want to do this the simplest way possible, get someone (or a program) that plays the game pretty well and collect a lot of data saying "at this state, do action a". It will be important to choose your inputs wisely. You could describe the entire board state as your inputs, but that will make it harder for the NN to generalize. Instead you should consider coming up with some features of the state that are interesting.

As for the parameters - the number of hidden nodes, the number of hidden layers, the learning rate, the momentum term, how much data you need, etc. - you have to understand that there isn't hard theory that says you need exactly this much. In practice, getting neural networks to work is a form of black magic: you must empirically determine a good parameter setting through your own intuition and lots and lots of experimentation.
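A bare-bones Python sketch of the "at this state, do action a" data-collection idea above - expert_move here is a random stand-in for whatever strong player you would actually imitate, and win detection is omitted for brevity:

import random

def expert_move(board):
    # Stand-in for the strong player being imitated.
    return random.choice([i for i, c in enumerate(board) if c is None])

def encode(board):
    # Raw board encoding: +1 for X, -1 for O, 0 for empty
    # (a richer feature encoding would likely generalize better).
    return [{None: 0, 'X': 1, 'O': -1}[c] for c in board]

dataset = []
board = [None] * 9
player = 'X'
while None in board:                      # just data collection, no win detection
    move = expert_move(board)
    if player == 'X':                     # record pairs from X's perspective
        dataset.append((encode(board), move))
    board[move] = player
    player = 'O' if player == 'X' else 'X'

print(len(dataset), "pairs of (state features, expert action)")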
Qzy
Joined July 2010
Denmark, 1121 Posts
Last Edited: 2010-09-17 16:10:22
September 17 2010 16:06 GMT
#17
On September 18 2010 01:03 Glacierz wrote:
Tic-tac-toe is easily solved by alpha-beta pruning; no need for complex frameworks like NNs.


You mean minimax, not alpha-beta pruning - alpha-beta pruning just helps speed up the search. But this is about neural networks: always the second-best method, yet very suitable when the search space becomes too big, e.g. in chess.

I'll try training my neural network on some common states in my game where I know some good outputs - and then I hope it can generalize to other states that look similar.

Is this an okay way to do it? It's pretty slow to "hand-feed" it everything.
ToxNub
Joined June 2010
Canada, 805 Posts
Last Edited: 2010-09-17 16:14:04
September 17 2010 16:11 GMT
#18
I've written a few neural networks by hand before. I don't really understand how (or why) you would get a neural network to play tic-tac-toe, though. NNs are supervised learning algorithms, which means you need to already know the answer in advance. Your network just sorta "remembers" the answers you've trained it to remember. Granted, if you get enough data, you can provide a novel input and you MIGHT get a novel output, but for your purposes you will likely just be "remembering" what to do given game state a, b, c... Boring. Not an interesting application of NNs at all :p

When people used NNs to play chess or whatever, it's really not just an NN. The NN in those games is just a function approximator that tells you whether or not a given game state (board) is "good". Chess, backgammon, and tic-tac-toe all rely on sequential moves, which means you've added a temporal element. ML techniques like temporal difference learning have successfully been applied in the form of TD-Gammon and TD-Chess (and probably TD-tic-tac-toe) and are more suited to your needs.
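For a flavour of temporal difference learning, a tabular TD(0) sketch in Python - in TD-Gammon the table is replaced by a neural network, and the learning rate and discount below are arbitrary illustrative values:

from collections import defaultdict

alpha, gamma = 0.1, 1.0        # learning rate and discount factor
V = defaultdict(float)         # state -> estimated value (a table, not an NN)

def td0_update(episode):
    # episode: list of (state, reward-on-entering-state) pairs, terminal last.
    # TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
    for (s, _), (s_next, r) in zip(episode, episode[1:]):
        V[s] += alpha * (r + gamma * V[s_next] - V[s])

td0_update([("start", 0), ("mid", 0), ("won", 1)])   # one toy episode
print(dict(V))   # values of visited states drift toward the eventual reward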
Qzy
Joined July 2010
Denmark, 1121 Posts
Last Edited: 2010-09-17 16:19:13
September 17 2010 16:17 GMT
#19
On September 18 2010 01:11 ToxNub wrote:
Show nested quote +


Actually, I have to implement a ludo player for my AI course - but tic-tac-toe is simpler to start out with (yes, even though it's a complex solution to a simple problem).

Like I wrote earlier, I'll try giving it some states, e.g. the start state, and tell it: the correct output here is to move this piece. Then some more states, and hope it can generalize from those.

Is that even possible?
Glacierz
Joined May 2010
United States, 1244 Posts
Last Edited: 2010-09-17 16:31:13
September 17 2010 16:30 GMT
#20
I think you would need a huge training set for this to work unless you can develop a reasonable set of heuristics.

I had to do this for a reversi player in my AI class; I just used minimax with A-B pruning.
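For reference, minimax with alpha-beta pruning for tic-tac-toe fits in a few lines - a Python sketch, scoring from X's point of view:

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def alphabeta(board, player, alpha=-2, beta=2):
    # Score from X's point of view: +1 X wins, -1 O wins, 0 draw.
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if None not in board:
        return 0
    for i in [i for i, c in enumerate(board) if c is None]:
        board[i] = player
        score = alphabeta(board, 'O' if player == 'X' else 'X', alpha, beta)
        board[i] = None
        if player == 'X':
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if alpha >= beta:
            break             # prune: the opponent will never allow this branch
    return alpha if player == 'X' else beta

print(alphabeta([None] * 9, 'X'))   # 0 - perfect play from both sides is a draw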