Reinforcement learning

Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
Last Edited: 2011-01-17 20:11:12
January 17 2011 20:08 GMT
#1
Hi my fellow nerds =)

I'm studying for my exam in "modern artificial intelligence in games". I'm a bit confused about some of the many types of reinforcement learning. Perhaps someone knows a good way to tell them all apart? I have some holes in my knowledge; can someone help me fill them?

Q-learning
Q-learning looks at the next state (s_{t+1}) and updates the current state-action value like so:

Q(s_t, a_t) ← Q(s_t, a_t) + α[r_{t+1} + γ·max_a Q(s_{t+1}, a) - Q(s_t, a_t)]

Q-learning uses bootstrapping:
Bootstrapping: estimate how good a state is based on how good we think the next state is.
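In code the update above is one line; here is a minimal tabular sketch (the toy states, actions, and constants are mine, just for illustration):

```python
ALPHA, GAMMA = 0.5, 0.9      # learning rate and discount factor

def q_update(Q, s, a, r, s_next):
    """One Q-learning step: bootstrap from the best action in s_next."""
    best_next = max(Q[s_next].values())
    Q[s][a] += ALPHA * (r + GAMMA * best_next - Q[s][a])

# toy table: two states, two actions, all estimates start at zero
Q = {s: {a: 0.0 for a in ("left", "right")} for s in (0, 1)}
q_update(Q, 0, "right", 1.0, 1)   # observed reward 1 for moving right
```

After one step, Q[0]["right"] moves from 0 toward the target by a factor of ALPHA: 0.5 * (1 + 0.9*0 - 0) = 0.5.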

TD(λ)
Exactly like Q-learning, but uses λ to decide how far back the bootstrapping should reach. TD(0) = Q-learning (I think?).

SARSA
Looks at the quintuple that gives it its name: State(t), Action(t), Reward(t+1), State(t+1), Action(t+1).

Q(s_t, a_t) ← Q(s_t, a_t) + α[r_{t+1} + γ·Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t)]

(What's the difference between SARSA and Q-learning? They look very alike.)
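They are nearly identical; the whole difference is one term in the target. A sketch (toy names mine): Q-learning bootstraps from the best action in the next state, SARSA from the action that is actually taken next.

```python
ALPHA, GAMMA = 0.5, 0.9

def q_learning_update(Q, s, a, r, s_next):
    # off-policy: target assumes the greedy (max) action in s_next
    target = r + GAMMA * max(Q[s_next].values())
    Q[s][a] += ALPHA * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next):
    # on-policy: target uses the action the agent will actually take
    target = r + GAMMA * Q[s_next][a_next]
    Q[s][a] += ALPHA * (target - Q[s][a])

# same experience, different targets when the next action is non-greedy
Q1 = {0: {"a": 0.0}, 1: {"x": 1.0, "y": 0.0}}
Q2 = {0: {"a": 0.0}, 1: {"x": 1.0, "y": 0.0}}
q_learning_update(Q1, 0, "a", 0.0, 1)        # bootstraps from max, Q[1]["x"]
sarsa_update(Q2, 0, "a", 0.0, 1, "y")        # bootstraps from the taken "y"
```

The two tables diverge immediately: Q-learning credits state 0 with the best next value, SARSA with the value of the exploratory action it really took.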

Monte Carlo (MC)
Monte Carlo methods use no bootstrapping.
They update a state purely based on the actual returns observed from performing actions in that state.
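A minimal first-visit Monte Carlo sketch (episode format and names are my own): no bootstrapping, just averaging the actual returns observed after each state.

```python
GAMMA = 1.0   # no discounting, for simplicity

def mc_first_visit(episodes):
    """episodes: list of trajectories [(state, reward_on_leaving), ...].
    Returns the average first-visit return for each state."""
    returns = {}                       # state -> list of observed returns
    for episode in episodes:
        G = 0.0
        first = {}                     # state -> return from its first visit
        for state, reward in reversed(episode):
            G = reward + GAMMA * G
            first[state] = G           # walking backwards, the last write
                                       # is the earliest (first) visit
        for state, g in first.items():
            returns.setdefault(state, []).append(g)
    return {s: sum(g) / len(g) for s, g in returns.items()}

# state "A" earns return 1 in one episode and 2 in another -> average 1.5
V = mc_first_visit([[("A", 0.0), ("B", 1.0)], [("A", 2.0)]])
```

Note that nothing is updated until an episode finishes; that is the opposite of bootstrapping.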

Dynamic Programming
It's a bit out of scope, but I have no idea how it works.

Any input on these subjects is appreciated; many papers on this are poorly explained (well, I think so at least).

Thanks!

*****
TG Sambo... Intel classic! Life of lively to live to life of full life thx to shield battery
darmousseh
Profile Blog Joined May 2010
United States3437 Posts
January 17 2011 20:51 GMT
#2
Dynamic programming is a different topic altogether; it's an algorithmic technique rather than anything specific to AI.

You have Q-learning correct. TD is the more general method: it also updates the previous states, weighted by how relevant the current state is to each earlier state.

Q-learning follows a fixed pattern for how to learn. SARSA is like a dynamic Q-learning method: it learns from what it actually does, which makes it efficient at gathering new information.

Monte Carlo is, as you said, simply a method of evaluating a specific move by taking a huge sample. The best Go program in the world uses Monte Carlo and has no information other than the current state. It can only work in certain situations.

Dynamic programming is any algorithm which solves a problem by solving its individual parts, such as the shortest-path problem.
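The shortest-path example can be sketched as textbook DP, e.g. Bellman-Ford relaxation (just an illustration, not something from the thread):

```python
def shortest_paths(n, edges, source):
    """Bellman-Ford: DP where dist[v] after k rounds is the
    shortest path to v using at most k edges."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # a path has at most n-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax: reuse subproblem dist[u]
                dist[v] = dist[u] + w
    return dist

# 0 -> 2 (cost 1) -> 1 (cost 2) beats the direct edge 0 -> 1 (cost 4)
print(shortest_paths(3, [(0, 1, 4), (0, 2, 1), (2, 1, 2)], 0))   # [0, 3, 1]
```

Each pass solves a bigger subproblem from smaller ones already solved, which is the "individual parts" idea above.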
Developer for http://mtgfiddle.com
ScrubS
Profile Joined September 2010
Netherlands436 Posts
January 17 2011 20:53 GMT
#3
I am not really into all of this, but I find it really interesting. Wikipedia does wonders:

Difference between SARSA and Q-learning:
'The difference may be explained as SARSA learns the Q values associated with taking the policy it follows itself, while Watkin's Q-learning learns the Q values associated with taking the exploitation policy while following an exploration/exploitation policy'

TD is a combination of Dynamic Programming and MC:
'TD resembles a Monte Carlo method because it learns by sampling the environment according to some policy. TD is related to dynamic programming techniques because it approximates its current estimate based on previously learned estimates (a process known as bootstrapping).'

I could probably find more if I kept looking. As I only understand half of this stuff it might not help you, but I really did find it very interesting.
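The quoted "sampling plus bootstrapping" combination shows up directly in the TD(0) prediction update; a toy sketch (numbers are my own):

```python
ALPHA, GAMMA = 0.1, 0.9

def td0_update(V, s, r, s_next):
    # sampling (MC-like): the transition (s, r, s_next) was experienced,
    #   not enumerated from a model of the environment
    # bootstrapping (DP-like): the target leans on the estimate V[s_next]
    V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])

V = {0: 0.0, 1: 1.0}
td0_update(V, 0, 0.5, 1)   # target = 0.5 + 0.9*1.0 = 1.4
```

V[0] moves a tenth of the way toward 1.4, to 0.14, without ever waiting for an episode to end.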

Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
January 17 2011 21:12 GMT
#4
On January 18 2011 05:51 darmousseh wrote:
Q-learning follows a fixed pattern for how to learn. SARSA is like a dynamic Q-learning method: it learns from what it actually does, which makes it efficient at gathering new information.

Thanks, I'm still a bit confused about SARSA. Could you give an example?

darmousseh
Profile Blog Joined May 2010
United States3437 Posts
January 17 2011 21:19 GMT
#5
On January 18 2011 06:12 Qzy wrote:
Thanks, I'm still a bit confused about SARSA. Could you give an example?


If you already have full information about the environment (such as chess), then you would use Q-learning, since you would already know how to exploit the environment. The goal in chess is to capture the opponent's king.

If you have little to no information about the environment, you would likely use SARSA, since implementations typically pair it with a neural network. For example, a maze-solving algorithm that knows nothing about the maze other than simple feedback.
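That maze setup can be sketched with tabular SARSA and an ε-greedy policy instead of a neural network; everything here (the corridor, rewards, constants) is my own toy illustration:

```python
import random

random.seed(0)                          # deterministic toy run
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
N = 5                                   # corridor states 0..4, goal at 4

def step(s, a):
    """The environment gives only simple feedback: -1 per move, done at goal."""
    s2 = max(0, min(N - 1, s + (1 if a == "right" else -1)))
    return s2, (0.0 if s2 == N - 1 else -1.0), s2 == N - 1

def eps_greedy(Q, s):
    if random.random() < EPS:                   # explore
        return random.choice(["left", "right"])
    return max(Q[s], key=Q[s].get)              # exploit current estimates

Q = {s: {"left": 0.0, "right": 0.0} for s in range(N)}
for _ in range(500):                    # SARSA episodes, always starting at 0
    s, a, done = 0, eps_greedy(Q, 0), False
    while not done:
        s2, r, done = step(s, a)
        a2 = eps_greedy(Q, s2)
        # on-policy update: bootstrap from the action actually taken next
        Q[s][a] += ALPHA * (r + GAMMA * Q[s2][a2] - Q[s][a])
        s, a = s2, a2
```

After training, "right" dominates "left" in the states near the goal, even though the agent started with zero knowledge of the corridor.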
Qzy
Profile Blog Joined July 2010
Denmark1121 Posts
January 17 2011 21:39 GMT
#6
On January 18 2011 06:19 darmousseh wrote:
If you already have full information about the environment (such as chess), then you would use Q-learning, since you would already know how to exploit the environment. If you have little to no information about the environment, you would likely use SARSA.

I assume it's due to the exploration vs. exploitation trade-off in Q-learning? SARSA doesn't use such a thing, it builds its own?