Neural networks

Qzy
September 17 2010 13:16 GMT
#1
The TeamLiquid community is pretty smart...

Does anyone understand neural networks and how they work with multiple layers? I've got a bunch of questions just to be able to understand them even slightly - most scientific texts on neural networks are very strong on math but do a poor job of explaining what the... is actually going on.

Tabbris
September 17 2010 13:28 GMT
#2
You should try the TL manpower thread: http://www.teamliquid.net/forum/viewmessage.php?topic_id=84245
Glacierz
September 17 2010 13:34 GMT
#3
Why not start with Wikipedia?
ZBiR
September 17 2010 13:42 GMT
#4
It depends on what type of network you have, but in the most basic version, each neuron receives signals from every neuron of the previous layer, multiplies each one by its specific weight (each neuron has a different set of weights for the signals from the previous layer; usually it's the changing weights that are considered the learning element in a network), and sums them. It then applies a function to that summed signal and sends the result to every neuron in the next layer. Simple.
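
A minimal sketch of that signal flow in Python with NumPy; the sigmoid activation and the layer sizes are illustrative assumptions, not something ZBiR specified:

```python
import numpy as np

def layer_forward(inputs, weights, biases):
    # Each neuron takes every signal from the previous layer, multiplies
    # it by that neuron's own weight for it, sums the results, then
    # applies an activation function (sigmoid here, as an assumption).
    summed = weights @ inputs + biases
    return 1.0 / (1.0 + np.exp(-summed))

rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])          # 2 signals from the input layer
w = rng.normal(size=(3, 2))        # 3 hidden neurons, 2 weights each
b = np.zeros(3)
print(layer_forward(x, w, b))      # 3 signals sent on to the next layer
```

Learning then amounts to adjusting the entries of w and b, which is the "changing weights" part mentioned above.
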
meeple
September 17 2010 14:03 GMT
#5
You should go ahead and ask the questions, and state exactly what you do and don't understand about them; you'll have a much better chance of getting a real answer.
Qzy
September 17 2010 14:16 GMT
#6
Thanks for the answers so far.

Okay, here are a few questions.

I get the basics of it, but...

How many neurons should you use with 2 inputs? Do you HAVE to use 2 neurons in the hidden layer, or can you simply use 1? Can you use 5? What's the benefit of using fewer or more?

You can have more hidden layers - but with what benefits? Should each hidden layer have the same number of neurons as the others?

When teaching the NN how to play, for instance, tic-tac-toe, do I give it training examples: this is the input, I expect this output...? How many examples does it need to play decently?

Can it generalize once it has seen a few examples?
Glacierz
September 17 2010 14:33 GMT
#7
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.
Qzy
September 17 2010 14:39 GMT
#8
On September 17 2010 23:33 Glacierz wrote:
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.


Can't :/. I'm taking my specialization course. This week it's neural networks, where we have to make a ludo player (in 1 week); next up is genetic algorithms, and then reinforcement learning.

Sigh.
Cambium
September 17 2010 14:55 GMT
#9
On September 17 2010 23:16 Qzy wrote:
[...]


It's been a while since I took ML, so I don't remember too much about NNs, but I'll give it a shot.

You should have at least N+1 nodes in each layer, where N is your number of input nodes. You can, of course, build a layer with any number of nodes; you just won't reach the accuracy you desire. I remember at one point this made intuitive sense to me, but I don't remember it well enough to explain it back to you. There is no hard restriction on the number of neurons in each layer; this is something you have to experiment with by running the NN multiple times against your training data and choosing the configuration with the highest accuracy. You can also do this with any number of hidden layers. The reason you shouldn't use an excessive number of neurons and hidden layers is to avoid overfitting (I think...).

Each layer can have a different number of nodes, and the optimal number of layers depends largely on your input data, your activation function (e.g., sigmoid), and your training method (e.g., gradient descent). Too few nodes cause underfitting, and too many nodes cause overfitting (again, I think...).

Tic-tac-toe is actually a difficult problem to solve with a NN (I'd actually use a decision tree) since it's adaptive. Your first task would be to digitize all of the moves in a given game (so every game is one piece of training data), and the output would be win, lose, or tie. Alternatively, you can assign each state a value (much like chess) so that every move can be a row in your training data.
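
As a hedged illustration of the "every move can be a row" option, assuming a numeric encoding of X = 1, O = -1, empty = 0 and one outcome label per state (the encoding scheme is my assumption, not Cambium's):

```python
SYMBOLS = {"X": 1.0, "O": -1.0, " ": 0.0}

def board_to_row(board):
    # Flatten a 3x3 board into 9 numbers the network can consume.
    return [SYMBOLS[cell] for row in board for cell in row]

def game_to_rows(states, outcome):
    # states: the boards seen during one finished game;
    # outcome: +1.0 (win), 0.0 (tie), or -1.0 (loss) from X's point of view.
    # Every move becomes one (input row, target) pair of training data.
    return [(board_to_row(b), outcome) for b in states]

opening = [["X", " ", " "], [" ", " ", " "], [" ", " ", " "]]
print(game_to_rows([opening], outcome=0.0))
```
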

I hope this helps.
illu
September 17 2010 14:57 GMT
#10
Come to think of it, a professor at the University of Toronto sort of specializes in this subject.
:]
Cambium
September 17 2010 14:59 GMT
#11
In any case, neural networks are easy to implement in Matlab with the NN toolbox. The difficult part is choosing the correct activation function, the number of neurons, and the number of layers (you can just let this run for days on a box).
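
For anyone without Matlab, a from-scratch sketch of the same idea in Python with NumPy: a tiny 2-3-1 network trained on XOR by plain gradient descent. The architecture, learning rate, and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)        # hidden layer: 3 neurons
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)        # output layer: 1 neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                             # learning rate (arbitrary)

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                         # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)              # backprop: output delta
    d_h = (d_out @ W2.T) * h * (1 - h)               # hidden delta
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))                                  # should approach 0, 1, 1, 0
```
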
Qzy
September 17 2010 15:02 GMT
#12
On September 17 2010 23:55 Cambium wrote:
[...]


Thanks, that clears it up a bit.

Right now I have to implement a ludo player, with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and can it then generalize from those once properly trained (once the output matches the desired values)?
Cambium
September 17 2010 15:09 GMT
#13
On September 18 2010 00:02 Qzy wrote:
[...]


Well, you first need to classify your inputs and outputs. In a game of tic-tac-toe, say you are red: your inputs would be along the lines of how many reds are on each line (attack), how many blacks are on each line (defence), and maybe a few more. Your output would be a quantification of the state after you place your piece. If you win or prevent a loss, it would probably be the maximum value, and you go from there.

You would need a lot more data than a "few lines"; I would think on the order of hundreds if not thousands of examples. I would try to find existing data for tic-tac-toe and see how experts classified the game. The best way to obtain data is either to find it or to create an online version and ask your friends to play, so that you can record their respective inputs and outputs.
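
A hedged sketch of those line-count features for tic-tac-toe (written with X and O instead of red and black; the exact feature set is an assumption):

```python
# The 8 lines of tic-tac-toe: 3 rows, 3 columns, 2 diagonals.
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +
         [[(r, c) for r in range(3)] for c in range(3)] +
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])

def line_features(board, me, opponent):
    # For each line, count my pieces ("attack") and theirs ("defence"):
    # 16 informative numbers instead of 9 raw cells.
    feats = []
    for line in LINES:
        cells = [board[r][c] for r, c in line]
        feats.append(cells.count(me))
        feats.append(cells.count(opponent))
    return feats

board = [["X", " ", "O"],
         [" ", "X", " "],
         [" ", " ", " "]]
print(line_features(board, "X", "O"))
```
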
Qzy
September 17 2010 15:13 GMT
#14
On September 18 2010 00:09 Cambium wrote:
[...]


Then it's good I've got 3 more days to come up with a ludo player :D

God, I love university with their "1 week to understand 50 years of AI, and implement it - kkthxbye".

So lost in this - how is it possible to create a working neural network in a week... seriously?
Glacierz
September 17 2010 16:03 GMT
#15
Tic-tac-toe is easily solved by alpha-beta pruning; no need for complex frameworks like NNs.
AcrossFiveJulys
September 17 2010 16:05 GMT
#16
On September 18 2010 00:09 Cambium wrote:
[...]


I wouldn't recommend using a neural network for learning a game unless it's integrated as part of a self-play system. If you want to read about a way to kick some serious ass, look up reinforcement learning for backgammon.

Cambium is suggesting that you use the NN as a state utility evaluator (and then presumably use those evaluations to implement minimax?). That is a viable approach, but I think you could implement the evaluation step yourself, get pretty far, and skip the NN part.

If you want to do this the simplest way possible, get someone (or a program) that plays the game pretty well and collect a lot of data saying "at this state, do action a". It will be important to choose your inputs wisely. You could describe the entire board state as your inputs, but that will make it harder for the NN to generalize. Instead you should consider coming up with some features of the state that are interesting.

As for the parameters (the number of hidden nodes, hidden layers, the learning rate, the momentum term, how much data you need, etc.), you have to understand that there isn't hard theory that says you need exactly this much. In practice, getting neural networks to work is a form of black magic: you must empirically determine a good parameter setting through your own intuition and lots and lots of experimentation.
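
That experimentation loop might look like the sketch below; train_nn, validation_error, train_data, and held_out_data are hypothetical stand-ins for your own code, and the candidate values are arbitrary:

```python
from itertools import product

hidden_sizes = [2, 5, 10]          # candidate settings (arbitrary)
learning_rates = [0.01, 0.1, 0.5]
momenta = [0.0, 0.9]

best = None
for size, lr, mom in product(hidden_sizes, learning_rates, momenta):
    net = train_nn(train_data, hidden=size, lr=lr, momentum=mom)  # hypothetical helper
    err = validation_error(net, held_out_data)                    # hypothetical helper
    if best is None or err < best[0]:
        best = (err, size, lr, mom)                               # keep the best so far

print("best (error, hidden, lr, momentum):", best)
```

Scoring on held-out data rather than the training set is what keeps a sweep like this from simply rewarding overfitting.
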
Qzy
September 17 2010 16:06 GMT
#17
On September 18 2010 01:03 Glacierz wrote:
Tic-tac-toe is easily solved by alpha-beta pruning, no need for complex frameworks like NN


You mean minimax, not alpha-beta pruning; alpha-beta pruning just helps to speed up the search. But this is about neural networks: always the second-best method, but very suitable when the search space becomes too big, e.g. chess.
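
For comparison with the NN approach, a hedged sketch of that search method: minimax with alpha-beta pruning for tic-tac-toe, representing the board as a 9-character string (my choice, for brevity):

```python
def winner(b):
    # b is a 9-char string; the slices are the 3 rows, 3 columns, 2 diagonals.
    for line in (b[0:3], b[3:6], b[6:9], b[0::3], b[1::3], b[2::3], b[0::4], b[2:7:2]):
        if line[0] != " " and line.count(line[0]) == 3:
            return line[0]
    return None

def alphabeta(board, player, alpha, beta):
    w = winner(board)
    if w == "X": return 1                    # X maximizes, O minimizes
    if w == "O": return -1
    if " " not in board: return 0            # draw
    for i, cell in enumerate(board):
        if cell != " ":
            continue
        child = board[:i] + player + board[i + 1:]
        score = alphabeta(child, "O" if player == "X" else "X", alpha, beta)
        if player == "X":
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if beta <= alpha:                    # prune: the opponent won't allow this line
            break
    return alpha if player == "X" else beta

print(alphabeta(" " * 9, "X", -2, 2))        # 0: perfect play ends in a draw
```
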

I'll try training my neural network with some common states from my game where I know good outputs, and then I hope it can generalize to other states that look similar.

Is this an okay way to do it? It's pretty slow to "hand-feed" it everything.
ToxNub
September 17 2010 16:11 GMT
#18
I've written a few neural networks by hand before. I don't really understand how (or why) you would get a neural network to play tic-tac-toe, though. NNs are supervised learning algorithms, which means you need to already know the answer in advance. Your network just sort of "remembers" the answers you've trained it to remember. Granted, if you get enough data, you can provide a novel input and you MIGHT get a novel output, but for your purposes you will likely just be "remembering" what to do given game state a, b, c... Boring. Not an interesting application of NNs at all :p

When people use NNs to play chess or whatever, it's really not just a NN. The NN in those games is really just a function approximator that tells you whether or not a given game state (board) is "good". Chess, backgammon, and tic-tac-toe all rely on sequential moves, which means you've added a temporal element. ML techniques like temporal difference learning have been applied successfully in the form of TD-Gammon and TD-Chess (and probably TD-tic-tac-toe) and are more suited to your needs.
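
A hedged sketch of the temporal-difference idea in its simplest, tabular form; TD-Gammon replaces this lookup table with a neural network trained on the same error signal, and the 0.1 learning rate here is an arbitrary assumption:

```python
ALPHA = 0.1   # learning rate (arbitrary)

def td_update(values, prev_state, next_state, reward=0.0):
    # TD(0): nudge the previous state's value toward the reward received
    # plus the estimated value of the state that followed it.
    v_prev = values.get(prev_state, 0.0)
    v_next = values.get(next_state, 0.0)
    values[prev_state] = v_prev + ALPHA * (reward + v_next - v_prev)

values = {}
td_update(values, "state_a", "state_b")                # mid-game: no reward yet
td_update(values, "state_b", "terminal", reward=1.0)   # the game was won
print(values)   # value starts flowing backward from the terminal state
```
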
Qzy
September 17 2010 16:17 GMT
#19
On September 18 2010 01:11 ToxNub wrote:
[...]


Actually I have to implement a ludo player for my AI course - but tic-tac-toe is simpler to start out with (yes, even though it's a complex solution to a simple problem).

Like I wrote earlier, I'll try giving it some states, e.g. the start state, and tell it: the correct output here is to move this piece. And some more states, and hope it can generalize from them.

Is that even possible?
Glacierz
September 17 2010 16:30 GMT
#20
I think you would need a huge training set for this to work unless you can develop a reasonable set of heuristics.

I had to do this for a reversi player in my AI class; I just used minimax with A-B pruning.
ToxNub
September 17 2010 16:43 GMT
#21
I am not familiar with ludo, but I'll give what advice I can.

If the game is dead simple and you can play it optimally, sure, you can tell it "given game state a, make move 3" in your training examples. This is the trivial case where the NN just memorizes your input/output mappings. Depending on your teacher, he may not accept this. If you have enough examples, you might be able to give it a state it's never seen, but it's hard to say whether it will be able to generalize. Usually you need a lot of training data for that, and it sounds to me like you would have to generate it on your own.

If you can't play it optimally yourself (and hence can't provide good training data), or you actually want it to LEARN how to play ludo (rather than telling it how to play ludo), you need to come up with a way to provide feedback. You need to be able to say "this is your goal", and let it figure out how to get there on its own. The goal of learning, at its heart, is to get the program to do something you did not explicitly write. If you just give it all the answers, you really just have a sounding board. So for tic-tac-toe you need to abstract a "move". What defines a bad move and what defines a good move? Obviously a move that ends with a loss is a bad move, and a move that wins the game is a good move. Unfortunately, there is much more to it than that. What defines a move that leads to a good move? What defines a move that is good now but loses you the game next move? These are not simple questions, and I'll stop rambling on and confusing you. Basically, this is where you have to get creative.
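
One simple, hedged way to make "a move that leads to a good move" concrete: after a game finishes, spread the final outcome backward over every state that occurred, discounting moves further from the end. The discount factor is an arbitrary assumption:

```python
GAMMA = 0.9   # discount factor (arbitrary)

def label_game(states, final_outcome):
    # states: every game state from the first move to the last;
    # final_outcome: +1.0 for a win, 0.0 for a tie, -1.0 for a loss.
    # Moves closer to the end receive more of the final outcome.
    labels = []
    for steps_from_end, state in enumerate(reversed(states)):
        labels.append((state, final_outcome * GAMMA ** steps_from_end))
    return list(reversed(labels))

print(label_game(["s0", "s1", "s2"], final_outcome=1.0))
# approximately [('s0', 0.81), ('s1', 0.9), ('s2', 1.0)]
```
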