Neural networks

Qzy
September 17 2010 13:16 GMT
#1
The TeamLiquid community is pretty smart...

Does anyone understand neural networks and how they work with multiple layers? I've got a bunch of questions just to be able to understand them even slightly - most scientific texts on neural networks are very heavy on math but don't do a good job of explaining what is actually going on.

Tabbris
September 17 2010 13:28 GMT
#2
You should try the TL manpower thread: http://www.teamliquid.net/forum/viewmessage.php?topic_id=84245
Glacierz
September 17 2010 13:34 GMT
#3
Why not start from Wikipedia?
ZBiR
September 17 2010 13:42 GMT
#4
It depends on what type of network you have, but in the most basic version, each neuron receives signals from each neuron of the previous layer, multiplies each one by its specific weight (each neuron has a different set of weights for the signals from the previous layer; usually it's the changing weights that are considered the learning element in a network) and sums them, then applies a function to that summed signal and sends the result to each neuron in the next layer. Simple.
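As an illustration of what ZBiR describes, here is a minimal sketch in Python/NumPy of one fully connected layer feeding the next. The layer sizes and the sigmoid activation are illustrative assumptions, not something taken from the post:

```python
import numpy as np

def layer_forward(inputs, weights, biases):
    """One fully connected layer: each neuron takes a weighted sum of all
    inputs from the previous layer, adds its bias, and squashes the result
    with an activation function (sigmoid here)."""
    z = weights @ inputs + biases           # weighted sums, one per neuron
    return 1.0 / (1.0 + np.exp(-z))         # sigmoid activation

# Example: 2 inputs feeding a hidden layer of 3 neurons, then 1 output neuron.
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])                   # the two input signals
w_hidden = rng.normal(size=(3, 2))          # one weight per (neuron, input) pair
b_hidden = np.zeros(3)
w_out = rng.normal(size=(1, 3))
b_out = np.zeros(1)

hidden = layer_forward(x, w_hidden, b_hidden)
output = layer_forward(hidden, w_out, b_out)
print(output)
```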
meeple
September 17 2010 14:03 GMT
#5
You should go ahead and ask the questions - and state exactly what you don't understand (or what you do understand) about them - and you'll have a much better chance of getting a real answer.
Qzy
September 17 2010 14:16 GMT
#6
Thanks for the answers so far.

Okay, here are a few questions.

I get the basics of it - but ...

How many neurons should you use with 2 inputs? Do you HAVE to use 2 neurons in the hidden layer, or can you simply use 1? Can you use 5? What's the benefit of using fewer/more?

You can have more hidden layers - but what are the benefits? Should each have the same number of neurons as the other hidden layers?

When teaching the NN how to play, for instance, tic-tac-toe, do I give it training examples ("this is the input, I expect this output")...? How many examples does it need before it plays decently?

Can it generalize, once it has seen a few examples?
Glacierz
September 17 2010 14:33 GMT
#7
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.
Qzy
September 17 2010 14:39 GMT
#8
On September 17 2010 23:33 Glacierz wrote:
Based on your questions, I suggest you start out with Bayesian networks first before getting into neural networks.


Can't :/. I'm following my specialization course. This week it's neural networks, where we have to make a ludo player (in one week); next up is genetic algorithms, and then reinforcement learning.

Sigh.
Cambium
September 17 2010 14:55 GMT
#9
On September 17 2010 23:16 Qzy wrote:
[quoted post #6 above]


It's been a while since I took ML, so I don't remember too much about NNs, but I'll give it a shot.

You should have at least N+1 nodes in each layer, where N is your number of input nodes. You can, of course, build a layer with any number of nodes; you just won't reach the accuracy you desire. I remember at one point this made intuitive sense to me, but I don't remember it well enough to explain it back to you. There is no hard restriction on the number of neurons in each layer; this is something you have to experiment with by running the NN multiple times on your training data and choosing the configuration with the highest accuracy. You can also do this with any number of hidden layers. The reason you shouldn't use an excessive number of neurons and hidden layers is to avoid overfitting (I think...).

Each layer can have a different number of nodes, and the optimal number of layers depends largely on your input data, your activation function (such as a sigmoid), and your training method (such as gradient descent). Too few nodes cause underfitting, and too many nodes cause overfitting (again, I think...).

Tic-tac-toe is actually a difficult problem to solve with an NN (I'd use a decision tree, actually) since it's adaptive. Your first task would be to digitize all of the moves in a given game (so every game is one piece of training data), and the output would be win, lose, or tie. Alternatively, you can assign each state a value (much like chess) so that every move can be a row in your training data.

I hope this helps.
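One way to make the "digitize the moves" idea concrete is sketched below, mixing Cambium's two suggestions: every board state of a finished game becomes one training row, labelled with the game's final result. The +1/-1/0 square encoding and the outcome labels are my own assumptions, not the exact scheme from the post:

```python
# Sketch: encode each board state of a finished tic-tac-toe game as a
# 9-element vector (+1 = my mark, -1 = opponent, 0 = empty) and label every
# state with the game's final outcome (1 = win, 0 = tie, -1 = loss).
def board_to_vector(board):
    symbols = {"X": 1, "O": -1, " ": 0}
    return [symbols[cell] for cell in board]

def game_to_training_rows(states, outcome):
    """states: list of 9-character board snapshots taken after each move."""
    return [(board_to_vector(b), outcome) for b in states]

# Example: a short game that X eventually wins (boards are illustrative).
game = [
    "X        ",
    "X   O    ",
    "XX  O    ",
    "XX  O   O",
    "XXX O   O",
]
for features, target in game_to_training_rows(game, outcome=1):
    print(features, "->", target)
```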
illu
September 17 2010 14:57 GMT
#10
Come to think of it, a professor at the University of Toronto sort of specializes in this subject.
:]
Cambium
September 17 2010 14:59 GMT
#11
In any case, neural networks are easy to implement in Matlab with the NN toolbox. The difficult part is choosing the right activation function, the number of neurons, and the number of layers (you can just let this run for days on a box).
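For anyone without Matlab, a rough free analogue is scikit-learn's MLPClassifier. The sketch below (my own toy example, not from the thread) also shows the "try several hidden-layer sizes and keep whichever scores best on held-out data" approach mentioned earlier:

```python
# Sketch: train small MLPs with different hidden-layer sizes on a toy problem
# and keep whichever generalizes best to a held-out validation set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_size, best_acc = None, 0.0
for size in (1, 2, 5, 10, 20):                      # candidate hidden-layer sizes
    net = MLPClassifier(hidden_layer_sizes=(size,), activation="logistic",
                        max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    acc = net.score(X_val, y_val)                   # validation accuracy
    print(f"hidden neurons = {size:2d}: accuracy = {acc:.3f}")
    if acc > best_acc:
        best_size, best_acc = size, acc

print("best hidden-layer size:", best_size)
```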
Qzy
September 17 2010 15:02 GMT
#12
On September 17 2010 23:55 Cambium wrote:
[quoted post #9 above]


Thanks, that clears it up a bit.

Right now I have to implement a ludo player - with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and it can generalize from these examples once properly trained (i.e. once the output has reached the desired values)?
Cambium
September 17 2010 15:09 GMT
#13
On September 18 2010 00:02 Qzy wrote:
[quoted post #12 above]


Well, you first need to classify your inputs and outputs. In a game of tic-tac-toe, say you are red: the inputs would be along the lines of how many reds are on each line (attack), how many blacks are on each line (defence), and maybe a few more. Your output would be a quantification of the state after you place your piece. If a move wins or prevents a loss, it would probably get the maximum value, and you go from there.

You would need a lot more data than a "few lines"; I would think on the order of hundreds if not thousands. I would try to find existing data for tic-tac-toe and see how experts classified the game. The best way to obtain data is either to find it or to create an online version and ask your friends to play, so that you can record their respective inputs and outputs.
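A sketch of the kind of feature encoding Cambium describes (attack/defence line counts). The exact features chosen here are assumptions on my part, just to show the shape of the idea:

```python
# Sketch: summarize a tic-tac-toe position as a handful of features instead
# of the raw board, per the "how many of mine / theirs on each line" idea.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
         (0, 4, 8), (2, 4, 6)]               # diagonals

def features(board, me="X", opp="O"):
    """board: 9-character string.  Returns counts of open lines where I have
    1/2/3 marks (and the opponent none), and the same for the opponent."""
    mine = [0, 0, 0]
    theirs = [0, 0, 0]
    for line in LINES:
        cells = [board[i] for i in line]
        m, o = cells.count(me), cells.count(opp)
        if m and not o:
            mine[m - 1] += 1        # open line with 1, 2 or 3 of my marks
        if o and not m:
            theirs[o - 1] += 1      # open line with 1, 2 or 3 opponent marks
    return mine + theirs            # 6 input features for the network

print(features("XX  O   O"))        # -> [1, 1, 0, 4, 0, 0]
```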
Qzy
September 17 2010 15:13 GMT
#14
On September 18 2010 00:09 Cambium wrote:
[quoted post #13 above]


Then it's good I've got 3 more days to come up with a ludo player :D

God, I love university and their "1 week to understand 50 years of AI, and implement it - kkthxbye".

So lost in this - how is it even possible to create a working neural network in a week... seriously.
Glacierz
September 17 2010 16:03 GMT
#15
Tic-tac-toe is easily solved by alpha-beta pruning; no need for a complex framework like an NN.
AcrossFiveJulys
September 17 2010 16:05 GMT
#16
On September 18 2010 00:09 Cambium wrote:
[quoted post #13 above]


I wouldn't recommend using a neural network for learning a game unless it's integrated as part of a self-play system. If you want to read about a way to kick some serious ass, look up reinforcement learning for backgammon.

Cambium is suggesting that you use the NN as a state-utility evaluator (and then presumably use those evaluations to implement minimax?). That is a viable approach, but I think you could implement the evaluation step yourself, get pretty far, and skip the NN part.

If you want to do this the simplest way possible, get someone (or a program) that plays the game pretty well and collect a lot of data saying "at this state, do action a". It will be important to choose your inputs wisely. You could describe the entire board state as your inputs, but that will make it harder for the NN to generalize. Instead, you should consider coming up with some features of the state that are interesting.

As for the parameters - the number of hidden nodes, hidden layers, learning rate, momentum term, how much data you need, etc. - you have to understand that there isn't hard theory that says you need exactly this much. In practice, getting neural networks to work is a form of black magic: you have to determine a good parameter setting empirically, through your own intuition and lots and lots of experimentation.
Qzy
September 17 2010 16:06 GMT
#17
On September 18 2010 01:03 Glacierz wrote:
Tic-tac-toe is easily solved by alpha-beta pruning, no need for complex frameworks like NN


You mean minimax, not alpha-beta pruning - alpha-beta pruning just helps speed up the search. But this is about neural networks - often the second-best method, yet very suitable when the search space becomes too big, e.g. chess.

I'll try training my neural network on some common states in my game where I know some good outputs - and then I hope it can generalize to other states that look similar.

Is this an okay way to do it? It's pretty slow to "hand-feed" it everything.
ToxNub
September 17 2010 16:11 GMT
#18
I've written a few neural networks by hand before. I don't really understand how (or why) you would get a neural network to play tic-tac-toe, though. NNs are supervised learning algorithms, which means you need to already know the answer in advance. Your network just sort of "remembers" the answers you've trained it on. Granted, if you have enough data, you can provide a novel input and you MIGHT get a useful novel output, but for your purposes you will likely just be "remembering" what to do given game state a, b, c... Boring. Not an interesting application of an NN at all :p

When people use NNs to play chess or whatever, it's really not just an NN. The NN in those games is just a function approximator that tells you whether or not a given game state (board) is "good". Chess, backgammon, and tic-tac-toe all rely on sequential moves, which means you've added a temporal element. ML techniques like temporal difference learning have been applied successfully in the form of TD-Gammon, TD-Chess (and probably TD-tic-tac-toe) and are better suited to your needs.
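To illustrate the "NN as a board evaluator" idea ToxNub describes: at each turn you score every legal successor state with the value function and play the move leading to the best one. The helpers below - legal_moves, apply_move, and the stub evaluator - are hypothetical placeholders so the sketch runs; in a real system the evaluator would be the trained network (or a TD-learned value function):

```python
# Sketch: greedy one-ply move selection driven by a state-value function.
def evaluate(state):
    """Hypothetical evaluator: higher = better for the player to move.
    A trained neural network would replace this trivial stub."""
    return sum(state)

def legal_moves(state):
    """Hypothetical helper: indices of empty cells in a 9-element board vector."""
    return [i for i, v in enumerate(state) if v == 0]

def apply_move(state, move, player=1):
    nxt = list(state)
    nxt[move] = player
    return nxt

def choose_move(state):
    # Score every successor position and play the move leading to the best one.
    return max(legal_moves(state), key=lambda m: evaluate(apply_move(state, m)))

board = [1, -1, 0,
         0, 1, 0,
         0, 0, -1]                  # +1 = me, -1 = opponent, 0 = empty
print("chosen move:", choose_move(board))
```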
Qzy
September 17 2010 16:17 GMT
#19
On September 18 2010 01:11 ToxNub wrote:
[quoted post #18 above]


Actually I have to implement a ludo player for my AI course - but tic-tac-toe is simpler to start out with (yes, even though it's a complex solution to a simple problem).

Like I wrote earlier, I'll try giving it some states (e.g. the start state) and telling it "the correct output here is to move this piece", and some more states, and hope it can generalize from them.

Is that even possible?
Glacierz
September 17 2010 16:30 GMT
#20
I think you would need a huge training set for this to work unless you can develop a reasonable set of heuristics.

Had to do a reversi player for my AI class; I just used minimax with A-B pruning.
ToxNub
September 17 2010 16:43 GMT
#21
I am not familiar with ludo, but I'll give what advice I can.

If the game is trivially simple and you can play it optimally, then sure, you can tell it "given game state a, make move 3" in your training examples. This is the trivial case where the NN just memorizes your input/output mappings. Depending on your teacher, he may not accept this. If you have enough of these examples, you might be able to give it a state it's never seen, but it's hard to say whether it will be able to generalize. Usually you need a lot of training data for that, and it sounds to me like you would have to generate it on your own.

If you can't play it optimally yourself (and hence can't provide good training data), or you actually want it to LEARN how to play ludo (rather than telling it how to play ludo), you need to come up with a way to provide feedback. You need to be able to say "this is your goal" and let it figure out how to get there on its own. The goal of learning, at its heart, is to get the program to do something you did not explicitly write. If you just give it all the answers, you really just have a sounding board. So for tic-tac-toe you need to abstract a "move". What defines a bad move and what defines a good move? Obviously a move that ends with a loss is a bad move, and a move that wins the game is a good move. Unfortunately, there is much more to it than that. What defines a move that leads to a good move? What defines a move that is good now but loses you the game next move? These are not simple questions, and I'll stop rambling on and confusing you. Basically, this is where you have to get creative.
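One simple way to turn "a move that ends with a loss is a bad move" into actual training targets is to record whole games and credit every position with the final result, discounted the further it is from the end (a very crude version of the reward-propagation idea behind the temporal-difference methods mentioned earlier). The reward values and the discount factor here are my own assumptions, not something prescribed in the thread:

```python
# Sketch: label every position of a finished game with a discounted share of
# the final outcome (+1 win, -1 loss, 0 tie), so early moves get weaker credit.
def discounted_targets(num_positions, outcome, gamma=0.9):
    """The position closest to the end gets `outcome`; earlier positions get
    outcome * gamma, outcome * gamma**2, and so on."""
    return [outcome * gamma ** (num_positions - 1 - i) for i in range(num_positions)]

# Example: a 5-move game that ended in a win.
print([round(t, 3) for t in discounted_targets(5, outcome=+1)])
# -> [0.656, 0.729, 0.81, 0.9, 1.0]
```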