Neural networks

Blogs > Qzy
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2010-09-17 13:20:14
September 17 2010 13:16 GMT
#1
The TeamLiquid community is pretty smart..

Does anyone understand neural networks and how they work with multiple layers? I've got a bunch of questions just to be able to understand them even slightly - most scientific texts on neural networks are very strong on the math, but don't do a good job of explaining what the... is going on.

TG Sambo... Intel classic! Life of lively to live to life of full life thx to shield battery
Tabbris
Profile Blog Joined June 2010
Bangladesh2839 Posts
Last Edited: 2010-09-17 13:28:38
September 17 2010 13:28 GMT
#2
You should try the TL manpower thread: http://www.teamliquid.net/forum/viewmessage.php?topic_id=84245
Glacierz
Profile Blog Joined May 2010
United States1245 Posts
September 17 2010 13:34 GMT
#3
Why not start from Wikipedia
ZBiR
Profile Blog Joined August 2003
Poland1092 Posts
Last Edited: 2010-09-17 13:43:56
September 17 2010 13:42 GMT
#4
It depends on what type of network you have, but in the most basic version each neuron receives a signal from every neuron of the previous layer, multiplies each one by its specific weight (each neuron has a different set of weights for the signals from the previous layer; it's usually the changing weights that are considered the learning element in a network), and sums them. It then applies a function to that summed signal and sends the result to each neuron in the next layer. Simple.
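That description maps directly onto code. Here is a minimal sketch in Python - the weights, biases, and layer sizes are arbitrary illustrations, and the sigmoid is just one common choice of squashing function:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward_layer(inputs, weights, biases):
    """One layer: every neuron takes every input from the previous layer,
    multiplies each by its own weight, sums them (plus a bias), and
    squashes the sum before passing it on."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> hidden layer of 3 neurons -> 1 output neuron
hidden = forward_layer([0.5, -1.0],
                       [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]],
                       [0.0, 0.1, -0.1])
output = forward_layer(hidden, [[0.5, -0.6, 0.9]], [0.2])
```

Training is then the process of adjusting those weight lists so the final output moves toward the desired output.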
meeple
Profile Blog Joined April 2009
Canada10211 Posts
September 17 2010 14:03 GMT
#5
You should go ahead and ask the questions... and state what exactly you don't understand or what you do understand about them and you'll have a much better chance of getting a real answer.
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2010-09-17 14:27:39
September 17 2010 14:16 GMT
#6
Thanks for the answers so far.

Okay, here are a few questions.

I get the basics of it - but...

How many neurons should you use with 2 inputs? Do you HAVE to use 2 neurons in the hidden layer, or can you simply use 1? Can you use 5? What's the benefit of using fewer/more?

You can have more hidden layers - but with what benefits? Should each hidden layer have the same number of neurons as the others?

When teaching the NN how to play, for instance, tic-tac-toe, do I give it training examples - this is the input, I expect this output...? How many examples does it need before it plays decently?

Can it generalize once it has seen a few examples?
Glacierz
Profile Blog Joined May 2010
United States1245 Posts
September 17 2010 14:33 GMT
#7
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
September 17 2010 14:39 GMT
#8
On September 17 2010 23:33 Glacierz wrote:
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.


Can't :/. I'm following my specialization course. This week it's neural networks, where we have to make a ludo player (in 1 week); next up is genetic algorithms, and then reinforcement learning.

Sigh.
Cambium
Profile Blog Joined June 2004
United States16368 Posts
Last Edited: 2010-09-17 14:57:41
September 17 2010 14:55 GMT
#9
On September 17 2010 23:16 Qzy wrote:
[quoted post #6]


It's been a while since I took ML, so I don't remember too much about NNs, but I'll give it a shot.

You should have at least N+1 nodes in each layer, where N is your number of input nodes. You can, of course, build a layer with any number of nodes; you just won't reach the accuracy you desire. I remember at one point this made intuitive sense to me, but I don't remember it well enough to explain it back to you. There is no hard restriction on the number of neurons in each layer - this is something you have to determine by experiment, running the NN multiple times against your training data and choosing the configuration with the highest accuracy. You can do the same for the number of hidden layers. The reason you shouldn't use an excessive number of neurons and hidden layers is to avoid overfitting (I think...).

Each layer can have a different number of nodes, and the optimal number of layers depends largely on your input data, your activation function (e.g. sigmoid), and your training method (e.g. gradient descent). Too few nodes cause underfitting, and too many nodes cause overfitting (again, I think...).

Tic-tac-toe is actually a difficult problem to solve with a NN (I'd actually use a decision tree) since it's adaptive. Your first task would be to digitize all of the moves in a given game (so every game is one piece of training data), and the output would be win, loss, or tie. Alternatively, you can assign each state a value (much like chess) so that every move can be a row in your training data.
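The "digitize the moves" suggestion above can be sketched concretely. This is only one hypothetical encoding, assuming a 3x3 board stored as a 9-character string:

```python
# Encode a board as 9 numbers: +1 for our mark, -1 for theirs, 0 for empty.
def encode(board, us='X', them='O'):
    return [1 if c == us else (-1 if c == them else 0) for c in board]

# Label whole games with their outcome, as suggested above.
OUTCOME = {'win': 1.0, 'tie': 0.5, 'loss': 0.0}

board = "XOX OX  O"                     # read left-to-right, top-to-bottom
row = (encode(board), OUTCOME['loss'])  # one piece of training data
```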

I hope this helps.
When you want something, all the universe conspires in helping you to achieve it.
illu
Profile Blog Joined December 2008
Canada2531 Posts
September 17 2010 14:57 GMT
#10
Come to think of it, a professor at University of Toronto sort of specializes in this subject.
:]
Cambium
Profile Blog Joined June 2004
United States16368 Posts
September 17 2010 14:59 GMT
#11
In any case, neural networks are easy to implement in Matlab with the NN toolbox. The difficult part is choosing the correct activation function, the number of neurons, and the number of layers (you can just let this run for days on a box).
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2010-09-17 15:03:26
September 17 2010 15:02 GMT
#12
On September 17 2010 23:55 Cambium wrote:
[quoted post #9]

Thanks, that clears it up a bit.

Right now I have to implement a ludo player - with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and it can generalize from those examples once properly trained (once the output has reached the desired values)?
Cambium
Profile Blog Joined June 2004
United States16368 Posts
September 17 2010 15:09 GMT
#13
On September 18 2010 00:02 Qzy wrote:
[quoted post #12]

Well, you first need to decide how to represent your inputs and outputs. In a game of tic-tac-toe, say you are red, it would be along the lines of: how many reds on each line (attack), how many blacks on each line (defence), and maybe a few more. Your output would be a quantification of the state after you place your piece. If you win or prevent a loss, it would probably be the maximum value, and you go from there.

You would need a lot more data than a "few lines"; I would think on the order of hundreds if not thousands. I would try to find existing data for tic-tac-toe and see how experts classified the game. The best way to obtain data is to either find it, or to create an online version and ask your friends to play so that you can record their respective inputs and outputs.
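The attack/defence line counts described above could be computed like this - a hypothetical sketch using X/O marks rather than red/black, again with the board as a 9-character string:

```python
# The 8 winning lines of a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def line_features(board, us='X', them='O'):
    """Per line: how many of our marks (attack) and how many of theirs (defence)."""
    attack = [sum(board[i] == us for i in line) for line in LINES]
    defence = [sum(board[i] == them for i in line) for line in LINES]
    return attack + defence

features = line_features("XOX OX  O")   # 16 numbers a NN could consume
```

Features like these compress the raw board into quantities that correlate with winning, which is exactly what makes generalization easier.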
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
September 17 2010 15:13 GMT
#14
On September 18 2010 00:09 Cambium wrote:
[quoted post #13]


Then it's good I've got 3 more days to come up with a ludo player :D

God, I love university and its "1 week to understand 50 years of AI, and implement it - kkthxbye".

So lost in this - how is it possible to create a working neural network in a week... seriously.
Glacierz
Profile Blog Joined May 2010
United States1245 Posts
September 17 2010 16:03 GMT
#15
Tic-tac-toe is easily solved by alpha-beta pruning, no need for complex frameworks like NN
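For reference, a complete minimax search with alpha-beta pruning for tic-tac-toe fits in a few lines. A sketch, with the board as a 9-character string of 'X', 'O', and spaces:

```python
def winner(b):
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
    for i, j, k in lines:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def alphabeta(b, player, alpha=-2, beta=2):
    """Score a position from X's point of view: +1 win, -1 loss, 0 tie."""
    w = winner(b)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in b:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    for i, c in enumerate(b):
        if c != ' ':
            continue
        score = alphabeta(b[:i] + player + b[i + 1:], nxt, alpha, beta)
        if player == 'X':
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if alpha >= beta:
            break   # prune: the opponent will never allow this branch
    return alpha if player == 'X' else beta

value = alphabeta(' ' * 9, 'X')   # perfect play from the empty board: 0 (tie)
```

No training data is needed at all; the search simply enumerates the game tree, which is why it is the easier tool for a game this small.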
AcrossFiveJulys
Profile Blog Joined September 2005
United States3612 Posts
September 17 2010 16:05 GMT
#16
On September 18 2010 00:09 Cambium wrote:
[quoted post #13]

I wouldn't recommend using a neural network for learning a game unless it's integrated as part of a self-play system. If you want to read about a way to kick some serious ass, look up reinforcement learning for backgammon.

Cambium is suggesting that you use the NN as a state-utility evaluator (and then presumably use those evaluations to implement minimax?). That is a viable approach, but I think you could implement the evaluation step yourself, get pretty far, and skip the NN part.

If you want to do this the simplest way possible, get someone (or a program) that plays the game pretty well and collect a lot of data saying "at this state, do action a". It will be important to choose your inputs wisely. You could describe the entire board state as your inputs, but that will make it harder for the NN to generalize. Instead you should consider coming up with some features of the state that are interesting.

As for the parameters - the number of hidden nodes, hidden layers, learning rate, momentum term, how much data you need, etc. - you have to understand that there isn't a hard theory that says you need exactly this much. In practice, getting neural networks to work is a form of black magic: you must empirically determine a good parameter setting through your own intuition and lots and lots of experimentation.
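That "lots and lots of experimentation" can at least be automated. A skeleton of such a sweep - the scoring function here is a stand-in; in practice it would train the network with those settings and return held-out accuracy:

```python
import itertools

def validation_accuracy(hidden_nodes, learning_rate, momentum):
    """Stand-in for: train the NN with these settings and return accuracy
    on held-out data. Stubbed with a made-up formula so the sketch runs."""
    return 1.0 - abs(hidden_nodes - 8) * 0.05 - abs(learning_rate - 0.1)

grid = itertools.product([2, 4, 8, 16],      # hidden nodes
                         [0.01, 0.1, 0.5],   # learning rate
                         [0.0, 0.9])         # momentum
best = max(grid, key=lambda params: validation_accuracy(*params))
```

The grid values above are arbitrary; the point is that the search loop, not intuition alone, picks the configuration.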
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2010-09-17 16:10:22
September 17 2010 16:06 GMT
#17
On September 18 2010 01:03 Glacierz wrote:
Tic-tac-toe is easily solved by alpha-beta pruning, no need for complex frameworks like NN


You mean minimax, not alpha-beta pruning - alpha-beta pruning just helps speed up the search. But this is about neural networks: always the second-best method, yet very suitable when the search space becomes too big, e.g. chess.

I'll try training my neural network on some common states in my game where I know good outputs - and then I hope it can generalize to other states that look like them.

Is this an okay way to do it? It's pretty slow to "hand feed" it everything.
ToxNub
Profile Joined June 2010
Canada805 Posts
Last Edited: 2010-09-17 16:14:04
September 17 2010 16:11 GMT
#18
I've written a few neural networks by hand before. I don't really understand how (or why) you would get a neural network to play tic-tac-toe, though. NNs are supervised learning algorithms, which means you need to already know the answer in advance. Your network just sort of "remembers" the answers you've trained it on. Granted, with enough data you can provide a novel input and you MIGHT get a novel output, but for your purposes you will likely just be "remembering" what to do given game state a, b, c... Boring. Not an interesting application of a NN at all :p

When people use NNs to play chess or whatever, it's really not just a NN. The NN in those games is just a function approximator that tells you whether or not a given game state (board) is "good". Chess, backgammon, and tic-tac-toe all rely on sequential moves, which means you've added a temporal element. ML techniques like temporal difference learning have been applied successfully in the form of TD-Gammon and TD-Chess (and probably TD-tic-tac-toe) and are better suited to your needs.
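The temporal-difference idea mentioned above boils down to one update rule: nudge the value of a state toward the reward received plus the (discounted) value of the state that followed. A toy sketch with a lookup table standing in for the network:

```python
# TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# TD-Gammon used a neural network for V; a dict works for a toy example.
def td_update(values, state, next_state, reward, alpha=0.1, gamma=1.0):
    v = values.get(state, 0.0)
    target = reward + gamma * values.get(next_state, 0.0)
    values[state] = v + alpha * (target - v)

values = {'near_win': 1.0}
td_update(values, 'opening', 'near_win', reward=0.0)
# 'opening' drifts toward the value of the state it led to
```

The state names here are illustrative placeholders; in a real player each state would be an encoded board position.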
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2010-09-17 16:19:13
September 17 2010 16:17 GMT
#19
On September 18 2010 01:11 ToxNub wrote:
[quoted post #18]


Actually, I have to implement a ludo player for my AI course - but tic-tac-toe is simpler to start out with (yes, even though it's a complex solution to a simple problem).

Like I wrote earlier, I'll try giving it some states, e.g. the start state, and tell it: the correct output here is to move this piece. And some more states, and hope it can generalize from them.

Is that even possible?
Glacierz
Profile Blog Joined May 2010
United States1245 Posts
Last Edited: 2010-09-17 16:31:13
September 17 2010 16:30 GMT
#20
I think you would need a huge training set for this to work unless you can develop a reasonable set of heuristics.

I had to do this for a reversi player in my AI class; I just used minimax with alpha-beta pruning.
ToxNub
Profile Joined June 2010
Canada805 Posts
Last Edited: 2010-09-17 16:52:26
September 17 2010 16:43 GMT
#21
I am not familiar with ludo, but I'll give what advice I can.

If the game is trivially simple and you can play it optimally, sure, you can tell it "given game state a, make move 3" in your training examples. This is the trivial case where the NN just memorizes your input/output mappings. Your teacher may not accept this. If you have enough of them, you might be able to give it a state it's never seen, but it's hard to say whether it will be able to generalize. Usually you need a lot of training data for that, and it sounds to me like you would have to generate it on your own.

If you can't play it optimally yourself (and hence can't provide good training data), or you actually want it to LEARN how to play ludo (rather than telling it how to play ludo), you need to come up with a way to provide feedback. You need to be able to say "this is your goal" and let it figure out how to get there on its own. The goal of learning, at its heart, is to get the program to do something you did not explicitly write. If you just give it all the answers, you really just have a sounding board. So for tic-tac-toe you need to abstract a "move". What defines a bad move and what defines a good move? Obviously a move that ends with a loss is a bad move, and a move that wins the game is a good move. Unfortunately, there is much more to it than that. What defines a move that leads to a good move? What defines a move that is good now but loses you the game next move? These are not simple questions, and I'll stop rambling on and confusing you. Basically, this is where you have to get creative.
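One common way to make "a move that ends in a loss is bad" concrete, sketched under the assumption that each finished game yields a single final reward: credit every position in the game with that outcome, discounted so earlier moves receive weaker praise or blame.

```python
def label_game(states, final_reward, gamma=0.9):
    """Turn one finished game into (state, value) training pairs,
    discounting the final reward by distance from the end of the game."""
    labels = [(state, final_reward * gamma ** steps_from_end)
              for steps_from_end, state in enumerate(reversed(states))]
    return list(reversed(labels))

pairs = label_game(['opening', 'midgame', 'winning_move'], final_reward=1.0)
# earliest state gets the weakest credit, the winning move the strongest
```

This sidesteps hand-labelling individual moves: only whole-game outcomes are needed, and the discount factor encodes the guess that later moves mattered more.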