Neural networks

Qzy
September 17 2010 13:16 GMT
#1
The TeamLiquid community is pretty smart...

Does anyone understand neural networks and how they work with multiple layers? I've got a bunch of questions just to be able to understand them even slightly. Most scientific texts on neural networks are very strong on the math, but they don't do a good job of explaining what the... is going on.

Tabbris
September 17 2010 13:28 GMT
#2
You should try the TL manpower thread: http://www.teamliquid.net/forum/viewmessage.php?topic_id=84245
Glacierz
September 17 2010 13:34 GMT
#3
Why not start with Wikipedia?
ZBiR
September 17 2010 13:42 GMT
#4
It depends on what type of network you have, but in the most basic version, each neuron receives signals from each neuron of the previous layer, multiplies each one by its specific weight (each neuron has a different set of weights for the signals from the previous layer; usually it's the changing weights that are considered the learning element of a network), sums them, applies a function to that summed signal, and sends the result to each neuron in the next layer. Simple.
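
To make that description concrete, here is a minimal sketch of the forward pass in Python; the sigmoid activation and the specific weights are assumptions chosen purely for illustration:

    import math

    def neuron_output(inputs, weights, bias):
        # One neuron: weighted sum of the previous layer's signals, then an activation.
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes the sum into (0, 1)

    def layer_output(inputs, layer):
        # A layer is a list of (weights, bias) pairs, one per neuron; every neuron
        # sees every signal from the previous layer but has its own set of weights.
        return [neuron_output(inputs, weights, bias) for weights, bias in layer]

    # 2 inputs -> hidden layer of 3 neurons -> output layer of 1 neuron
    hidden = [([0.5, -0.2], 0.1), ([0.3, 0.8], -0.4), ([-0.6, 0.9], 0.0)]
    output = [([0.7, -1.1, 0.4], 0.2)]
    print(layer_output(layer_output([1.0, 0.0], hidden), output))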
meeple
September 17 2010 14:03 GMT
#5
You should go ahead and ask the questions, and state exactly what you do and don't understand about them; you'll have a much better chance of getting a real answer.
Qzy
September 17 2010 14:16 GMT
#6
Thanks for the answers so far.

Okay, here are a few questions.

I get the basics of it, but...

How many neurons should you use with 2 inputs? Do you HAVE to use 2 neurons in the hidden layer, or can you simply use 1? Can you use 5? What's the benefit of using fewer/more?

You can have more hidden layers, but with what benefits? Should each have the same number of neurons as the other hidden layers?

When teaching the NN how to play, for instance, tic-tac-toe, do I give it training examples ("this is the input, I expect this output")? How many examples does it need to play decently?

Can it generalize once it has seen a few examples?
Glacierz
September 17 2010 14:33 GMT
#7
Based on your questions, I suggest you start with Bayesian networks before getting into neural networks.
Qzy
September 17 2010 14:39 GMT
#8
On September 17 2010 23:33 Glacierz wrote:
Based on your questions, I suggest you start with Bayesian networks before getting into neural networks.


Can't :/. I'm following my specialization course. This week it's neural networks, where we have to make a ludo player (in 1 week); next up is genetic algorithms and then reinforcement learning.

Sigh.
Cambium
September 17 2010 14:55 GMT
#9
On September 17 2010 23:16 Qzy wrote:
[…]


It's been a while since I took ML, so I don't remember too much about NNs, but I'll give it a shot.

You should have at least N+1 nodes in each layer, where N is your number of input nodes. You can, of course, build a layer with any number of nodes; you just won't reach the accuracy you desire. I remember at one point this made intuitive sense to me, but I don't remember it well enough to explain it back to you. There is no hard restriction on the number of neurons in each layer; this is something you have to experiment with by running the NN multiple times against your training data and choosing the configuration with the highest accuracy. You can do the same for the number of hidden layers. The reason you shouldn't use an excessive number of neurons and hidden layers is to avoid overfitting (I think...).

Each layer can have a different number of nodes, and the optimal number of layers depends largely on your input data, your activation function (e.g. sigmoid), and your training method (e.g. gradient descent). Too few nodes cause underfitting, and too many nodes cause overfitting (again, I think...).

Tic-tac-toe is actually a difficult problem to solve with an NN (I'd use a decision tree, actually) since it's adaptive. Your first task would be to digitize all of the moves in a given game (so every game is one piece of training data), and the output would be win, lose, or tie. Alternatively, you can assign each state a value (much like in chess) so that every move can be a row in your training data.
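
The "run it multiple times and keep the best" advice above can be automated. A sketch using scikit-learn's MLPClassifier; the 9-cell board encoding and the random placeholder data are assumptions (real training data would come from labeled games):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.integers(-1, 2, size=(500, 9)).astype(float)  # placeholder boards: +1 mine, -1 theirs, 0 empty
    y = rng.integers(0, 3, size=500)                      # placeholder outcomes: 0 loss, 1 tie, 2 win

    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

    best_score, best_size = -1.0, None
    for hidden in [1, 2, 5, 10, 20]:                      # candidate hidden-layer sizes
        net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000, random_state=0)
        net.fit(X_train, y_train)
        score = net.score(X_val, y_val)                   # held-out accuracy, to catch overfitting
        if score > best_score:
            best_score, best_size = score, hidden
    print("best hidden size:", best_size, "validation accuracy:", best_score)

The same loop extends to any other hyperparameter (number of layers, learning rate), at the cost of a longer search.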
illu
September 17 2010 14:57 GMT
#10
Come to think of it, a professor at the University of Toronto sort of specializes in this subject.
:]
Cambium
September 17 2010 14:59 GMT
#11
In any case, neural networks are easy to implement in Matlab with the NN toolbox. The difficult part is choosing the correct activation function, the number of neurons, and the number of layers (you can just let this run for days on a box).
Qzy
September 17 2010 15:02 GMT
#12
On September 17 2010 23:55 Cambium wrote:
[…]


Thanks, that clears it up a bit.

Right now, I have to implement a ludo player, with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and it can generalize from these examples once properly trained (the output has reached the desired values)?
Cambium
September 17 2010 15:09 GMT
#13
On September 18 2010 00:02 Qzy wrote:
[…]

Right now, I have to implement a ludo player, with a lot of possible states (like chess). Do I simply give it a few examples (inputs and expected outputs), and it can generalize from these examples once properly trained (the output has reached the desired values)?


Well, you first need to classify your inputs and outputs. In a game of tic-tac-toe, say you are red, they would be along the lines of: how many reds on each line (attack), how many blacks on each line (defence), and maybe a few more features. Your output would be a quantification of the state after you place your piece. If you win or prevent a loss, it would probably be the maximum value, and you go from there.

You would need a lot more data than a "few lines"; I would think on the order of hundreds if not thousands of examples. I would try to find existing data for tic-tac-toe and see how experts classified the game. The best way to obtain data is either to find it, or to create an online version and ask your friends to play so that you can record their respective inputs and outputs.
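
A sketch of that kind of feature extraction, counting your pieces and the opponent's pieces on each of the eight lines; the flat 9-cell board and the 'x'/'o' markers are assumptions:

    # The 8 winning lines of a 3x3 board, as index triples into a flat 9-cell list.
    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def line_features(board, me, opponent):
        # Two NN inputs per line: my count ("attack") and the opponent's ("defence").
        feats = []
        for line in LINES:
            cells = [board[i] for i in line]
            feats.append(cells.count(me))
            feats.append(cells.count(opponent))
        return feats

    board = ['x', 'o', ' ',
             ' ', 'x', ' ',
             ' ', ' ', 'o']
    print(line_features(board, 'x', 'o'))  # 16 numbers the network can learn from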
Qzy
September 17 2010 15:13 GMT
#14
On September 18 2010 00:09 Cambium wrote:
[…]


Then it's good I've got 3 more days to come up with a ludo player :D

God, I love university, with their "1 week to understand 50 years of AI, and implement it, kkthxbye".

So lost in this. How is it possible to create a working neural network in a week... seriously?
Glacierz
September 17 2010 16:03 GMT
#15
Tic-tac-toe is easily solved by alpha-beta pruning; no need for complex frameworks like NNs.
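
For reference, a solver along those lines fits in a few lines of Python. A sketch of minimax with alpha-beta cutoffs, using the same flat 9-cell board encoding assumed in the earlier sketch:

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] != ' ' and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def alphabeta(b, player, alpha, beta):
        # Score a position from x's point of view: +1 win, -1 loss, 0 tie.
        w = winner(b)
        if w is not None:
            return 1 if w == 'x' else -1
        if ' ' not in b:
            return 0
        for i in range(9):
            if b[i] == ' ':
                b[i] = player
                score = alphabeta(b, 'o' if player == 'x' else 'x', alpha, beta)
                b[i] = ' '
                if player == 'x':
                    alpha = max(alpha, score)
                else:
                    beta = min(beta, score)
                if alpha >= beta:
                    break  # prune: the opponent would never allow this branch
        return alpha if player == 'x' else beta

    print(alphabeta([' '] * 9, 'x', -2, 2))  # prints 0: perfect play is always a tie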
AcrossFiveJulys
September 17 2010 16:05 GMT
#16
On September 18 2010 00:09 Cambium wrote:
[…]


I wouldn't recommend using a neural network for learning a game unless it's integrated as part of a self-play system. If you want to read about a way to kick some serious ass, look up reinforcement learning for backgammon.

Cambium is suggesting that you use the NN as a state-utility evaluator (and then presumably use those evaluations to implement minimax?). That is a viable approach, but I think you could implement the evaluation step yourself, get pretty far, and skip the NN part.

If you want to do this the simplest way possible, get someone (or a program) that plays the game pretty well and collect a lot of data saying "at this state, do action a". It will be important to choose your inputs wisely. You could describe the entire board state as your inputs, but that will make it harder for the NN to generalize. Instead, you should consider coming up with some features of the state that are interesting.

As for the parameters (the number of hidden nodes, hidden layers, the learning rate, the momentum term, how much data you need, etc.), you have to understand that there isn't hard theory saying you need exactly this much. In practice, getting neural networks to work is a form of black magic: you must empirically determine a good parameter setting through your own intuition and lots and lots of experimentation.
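
A sketch of the state-utility idea in its simplest form: score every successor state with an evaluation function and play the best-scoring move. Here the evaluator is a hand-written heuristic over the per-line counts from the earlier sketch; a trained NN would slot into its place. The board encoding and the power-of-ten weighting are assumptions:

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

    def evaluate(board, me, opponent):
        # Hand-written state utility; this is the function an NN would approximate.
        score = 0
        for line in LINES:
            cells = [board[i] for i in line]
            mine, theirs = cells.count(me), cells.count(opponent)
            if theirs == 0:
                score += 10 ** mine   # reward lines I can still complete
            if mine == 0:
                score -= 10 ** theirs  # penalize lines the opponent threatens
        return score

    def best_move(board, me, opponent):
        # One-ply lookahead: try each legal move and keep the best evaluation.
        best = None
        for i in range(9):
            if board[i] == ' ':
                board[i] = me
                s = evaluate(board, me, opponent)
                board[i] = ' '
                if best is None or s > best[0]:
                    best = (s, i)
        return best[1]

    print(best_move(['x', 'x', ' ', ' ', 'o', ' ', ' ', ' ', 'o'], 'x', 'o'))  # 2: completes the top row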
Qzy
September 17 2010 16:06 GMT
#17
On September 18 2010 01:03 Glacierz wrote:
Tic-tac-toe is easily solved by alpha-beta pruning; no need for complex frameworks like NNs.


You mean minimax, not alpha-beta pruning; alpha-beta pruning just helps speed up the search. But this is about neural networks: always the second-best method, but very suitable when the search space becomes too big, e.g. chess.

I'll try training my neural network on some common states in my game where I know some good outputs, and then I hope it can generalize to more states that look similar.

Is this an okay way to do it? It's pretty slow to "hand feed" it everything.
ToxNub
September 17 2010 16:11 GMT
#18
I've written a few neural networks by hand before. I don't really understand how (or why) you would get a neural network to play tic-tac-toe, though. NNs are supervised learning algorithms, which means you need to already know the answer in advance. Your network just sort of "remembers" the answers you've trained it to remember. Granted, if you get enough data, you can provide a novel input and you MIGHT get a novel output, but for your purposes you will likely just be "remembering" what to do given gamestate a, b, c... Boring. Not an interesting application of NNs at all :p

When people used NNs to play chess or whatever, it's really not just an NN. The NN in those games is really just a function approximator that tells you whether or not a given gamestate (board) is "good". Chess, backgammon, and tic-tac-toe all rely on sequential moves, which means you've added a temporal element. ML techniques like temporal difference learning have successfully been applied in the form of TD-Gammon and TD-Chess (and probably TD-tic-tac-toe) and are more suited to your needs.
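
A sketch of that temporal-difference idea: after a game, nudge each state's value toward the value of the state that followed it, with the final result as the only reward. A lookup table stands in for the NN function approximator here, and the learning rate is an assumed hyperparameter:

    from collections import defaultdict

    values = defaultdict(float)  # state -> estimated value; an NN would replace this table
    ALPHA = 0.1                  # learning rate (assumed)

    def td0_update(states, final_reward):
        # states: the positions of one finished game, in order; reward arrives only at the end.
        for s, s_next in zip(states, states[1:]):
            values[s] += ALPHA * (values[s_next] - values[s])  # move V(s) toward V(s')
        last = states[-1]
        values[last] += ALPHA * (final_reward - values[last])  # terminal state learns the result

    # Toy usage: replay the same won game a few times and watch credit flow backwards.
    for _ in range(100):
        td0_update(["start", "midgame", "winning-end"], final_reward=1.0)
    print(values["start"], values["midgame"], values["winning-end"])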
Qzy
September 17 2010 16:17 GMT
#19
On September 18 2010 01:11 ToxNub wrote:
[…]


Actually, I have to implement a ludo player for my AI course, but tic-tac-toe is simpler to start out with (yes, even though it's a complex solution to a simple problem).

Like I wrote earlier, I'll try giving it some states, e.g. the start state, and tell it "the correct output here is to move this piece", and some more states, and hope it can generalize from them.

Is that even possible?
Glacierz
September 17 2010 16:30 GMT
#20
I think you would need a huge training set for this to work unless you can develop a reasonable set of heuristics.

I had to do this for a reversi player in my AI class; I just used minimax with A-B pruning.