US Politics Mega-thread - Page 10051

Read the rules in the OP before posting, please.

In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way; keep it up!

NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious.
Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
iamthedave
Joined February 2011
England, 2814 posts
Last Edited: 2018-03-13 12:20:01
March 13 2018 12:13 GMT
#201001
I'm confused as to why people think Hillary would have made a 'bad' President. Her primary flaws are being as approachable as the Terminator, and decades of vilification and/or dubious behaviour that warrants it.

Her strengths are a vice-like grasp of legislative procedure and policy-making, and I presume deal-making as well given she seems to have a lot of support despite being widely disliked.

Her weaknesses are things a President should ideally possess.

Her strengths are things a President absolutely must have.

Excluding the political association, she'd most likely have been a perfectly competent President who'd have got shit done. Assuming the House didn't follow through on their threat to literally impeach her the moment she got into the White House. And I'll bet she wouldn't have got into a fight with the Mayor of San Juan, either.

But any criticism from the right rings hollow. The absolutely hysterical whinging about Obama is proof that it doesn't matter what they do; that (D) means it'll be presented as hell's own concept. If anything, a Hillary Presidency would probably have generated tons of funny Hillary-centric memes as the tiniest, stupidest details got picked out to be criticised. Obama had dijon mustard and a tan suit; just think of what they'd have found for H.C. Alas, we'll never know.
I'm not bad at Starcraft; I just think winning's rude.
Kickboxer
Joined November 2010
Slovenia, 1308 posts
March 13 2018 12:26 GMT
#201002
Well, or you could say Obama was a warmongering Oreo in a suit who bailed out Wall Street while acting smug on non-issues, and his pal Hillary is criminally insane and gobbling on corporate penis (while also married to the guy who repealed Glass-Steagall). A matter of perspective.

User was temp banned for this post.
Zaros
Joined September 2010
United Kingdom, 3692 posts
March 13 2018 12:47 GMT
#201003
a_flayer
Joined April 2010
Netherlands, 2826 posts
March 13 2018 12:48 GMT
#201004
Oh, shit.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
Adreme
Joined June 2011
United States, 5574 posts
March 13 2018 12:59 GMT
#201005
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.
a_flayer
Joined April 2010
Netherlands, 2826 posts
Last Edited: 2018-03-13 13:43:48
March 13 2018 13:00 GMT
#201006
On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.

Mike Pompeo was recently in the headlines for saying Putin lied about Russia not interfering in the election. He switches his position depending on who says what. If Trump says Russia didn't interfere, it's true. If Putin says it, it's false. Tillerson is probably just sick of it - he never really wanted the job in the first place.

This is a disturbing move, though. Now it's no longer 'corporate oil foreign policy' but rather 'CIA foreign policy'. Fun times. The chances for war with Iran just doubled.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
GreenHorizons
Joined April 2011
United States, 23295 posts
Last Edited: 2018-03-13 13:01:55
March 13 2018 13:01 GMT
#201007
I feel like the shift from CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the State Department should really terrify people.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
ShoCkeyy
Joined July 2008
7815 posts
Last Edited: 2018-03-13 13:04:42
March 13 2018 13:02 GMT
#201008
Fun, let's see how this plays out.

On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.


From what I'm seeing, Trump fired Tillerson.

On March 13 2018 22:01 GreenHorizons wrote:
I feel like the shift from CIA to Secretary of state isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the state department should really terrify people.


It is an issue, especially when it's someone who, in my opinion, barely has any experience to be handling the State...
Life?
{CC}StealthBlue
Joined January 2003
United States, 41117 posts
March 13 2018 13:04 GMT
#201009
Pompeo is more hawkish towards Russia than Tillerson ever was.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Pandemona *
Joined March 2011
Charlie Sheens House, 51490 posts
March 13 2018 13:06 GMT
#201010
Yeah, from what I'm reading it seems the sacking was due to clashes on a more personal scale than a policy one. Might just be my bad news sources though :D
Moderator | Team Liquid Football Thread Guru! - Chelsea FC ♥
farvacola
Joined January 2011
United States, 18832 posts
March 13 2018 13:11 GMT
#201011
I think it was both, though with all the fog surrounding the White House, it's practically impossible to tell.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Joined January 2003
United States, 41117 posts
Last Edited: 2018-03-13 13:14:29
March 13 2018 13:14 GMT
#201012
The grey area is the new CIA director.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Joined July 2010
Germany, 11554 posts
March 13 2018 13:14 GMT
#201013
On March 13 2018 20:59 farvacola wrote:
I think this reliance on the findings of a single (albeit huge in scale and thorough) study presents interpretive problems and it would seem that I'm not alone. For example, a reviewer wrote the following:

This book by John Hattie – Professor of Education at the University of Auckland – is the culmination of more than a decade of research during which he and his team have set out to summarise and synthesise the empirical research on the effects of various educational influences and interventions on student achievement. Probably due to the huge scope of this project – comprising 800 meta-analyses, more than 50,000 smaller studies and more than 80 million pupils – this study has been widely acclaimed. According to a review in the Times Educational Supplement, Hattie’s work “reveals teaching’s Holy Grail”.

Hattie starts from the observation that in education “everything seems to work”, as educational interventions of almost any kind seem to have a positive effect on student achievement. He then proposes to move beyond “everything goes”, towards the development of a barometer of “what works best”. To this end he applies the tools of meta-analysis to a huge body of empirical research and calculates effect sizes (denoted d) for 138 influences in the following domains: student, home, school, teacher, curricula and teaching approaches. Hattie neatly presents the effect sizes in a graphical barometer and convincingly argues that only effect sizes higher than 0.4 are in the so-called zone of desired effects (in other words, are worth the effort). Prior to presenting the barometers and effect size rankings, Hattie develops his visible learning story, which is summarised in the following quote: “Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate and engaging people participating in the act of learning” (p. 22). The visible learning story is illustrated using the example of outdoor training. An instructor teaching rock-climbing will have continuous visual feedback on the success of his teaching efforts (pupils climbing high or falling down) and be able to adjust his teaching accordingly.

I find the visible learning story a convincing story. I believe most teachers will agree with the book’s main message that effective instruction cannot take place without proper feedback from student to teacher on the effectiveness of the instruction. Hattie also convincingly argues that the effectiveness of teaching increases when teachers act as activator instead of as facilitator, a view which I find refreshing in a time when teaching approaches such as problem-based learning have the effect of sidelining the instructor. My problem with the book is, however, that I would have been convinced even without the empirical analysis. If anything, Hattie’s meta-meta-analysis casts a few doubts on the validity of his research, as I will explain below.

My first comment, however, relates to Hattie’s goal in writing this book. He states that his aim is “to develop an explanatory story about key influences on student learning”, not to build another “what works recipe”. Yet this aim fits uneasily with the barometers and rankings which are scattered across the book. By presenting these measures so prominently, the author automatically invites the reader to make a clear distinction between what works and what doesn’t work. If Hattie doesn’t want us to draw such conclusions, he should not have presented the material in this way. Related to this is the tension between story-telling and ranking influences. The visible learning story is told in Chapter 3 and naturally refers to some of the effect sizes calculated in the remainder of the book. Yet the relationship between story and effect sizes remains implicit and qualitative. The reader has no indication or test result of how well the effect sizes fit the visible learning story.

I next turn to the way in which the meta-meta-analysis has been conducted. Hattie discusses the various pros and cons of meta-analysis extensively and concludes that this is a valid research methodology. I will not take issue with this point, as meta-analysis is a generally accepted tool of academic research. As a general statistical point, however, I was surprised that Hattie has chosen to summarise the effect sizes of the 800 meta-analyses using unweighted averages. Small and large meta-analyses have equal weight, while I would assume that the number of studies on which a meta-analysis is based indicates its validity and importance. Instead I would have opted for weighted averaging by number of studies, students or effect sizes. At a minimum, it would be interesting to see whether the results are robust to the choice of averaging.

A great asset of Hattie’s book is the reference list, which allows the inquisitive reader to dig a little bit deeper, by moving from the rankings to the underlying meta-studies. I have done this for the top-ranking influence, which is “self-reported grades” (d = 1.44). This result is dominated by the Kuncel et al. (2005) meta-analysis (d = 3.1) (Kuncel et al. 2005). This paper is about the validity of ex-post self-reported grades (due to imperfect storage and retrieval from memory or intentional deception), not about students’ expectations or their predictive power of their own study performance, as Hattie claims. The paper thus should not have been included in the analysis. My N = 1 sampling obviously has its limits, but this example does raise questions regarding the remaining average effect sizes.

Two final comments relate to the application of Hattie’s work. While it is certainly valuable to know “what works best” in education, educational institutions will need to know not just the benefit of educational interventions, but also their cost. So the question which really needs to be answered is “what works best per monetary unit spent”. On the cost side, however, Hattie’s book is silent. Also, given the importance of two-way feedback in teaching, a major challenge for large-scale educational institutions (such as universities) is to organise feedback in a cost-effective manner.

Visible learning should be lauded for emphasising the importance of the student–teacher relationship and of adequate feedback, but at the same time presents managers with the challenge of organising this feedback in large scale educational settings.


Source


I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".
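As an aside on the weighting issue the reviewer raises above: here is a minimal sketch, with invented numbers (not Hattie's or Kuncel's actual figures), of how an unweighted average of meta-analysis effect sizes can diverge from one weighted by the number of underlying studies.

# Hypothetical effect sizes (Cohen's d) and study counts for a handful of meta-analyses.
# All values are made up purely to illustrate the weighting point from the review.
metas = [
    {"d": 3.1, "n_studies": 12},   # one small but extreme meta-analysis
    {"d": 1.0, "n_studies": 90},
    {"d": 0.8, "n_studies": 150},
    {"d": 0.5, "n_studies": 60},
]

unweighted = sum(m["d"] for m in metas) / len(metas)
weighted = (sum(m["d"] * m["n_studies"] for m in metas)
            / sum(m["n_studies"] for m in metas))

print(f"unweighted mean d:     {unweighted:.2f}")  # 1.35, pulled up by the outlier
print(f"study-weighted mean d: {weighted:.2f}")    # 0.89, closer to the bulk of the evidence

Weighting by number of studies (or pupils) damps the influence of a single extreme meta-analysis, which is exactly the concern the reviewer raises about the d = 3.1 Kuncel et al. (2005) result dominating the "self-reported grades" average.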
m4ini
Joined February 2014
4215 posts
March 13 2018 13:20 GMT
#201014
Please imagine a link to a youtube video of "Another one bites the dust".

Thank god you guys are draining the swamp over there.
On track to MA1950A.
farvacola
Joined January 2011
United States, 18832 posts
March 13 2018 13:22 GMT
#201015
On March 13 2018 22:14 Simberto wrote:
[Nested quote of farvacola's earlier post and the Hattie review snipped; quoted in full above.]
I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".

While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think relegating his views to a position necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Joined January 2003
United States, 41117 posts
March 13 2018 13:24 GMT
#201016
"Smokey, this is not 'Nam, this is bowling. There are rules."
{CC}StealthBlue
Joined January 2003
United States, 41117 posts
March 13 2018 13:28 GMT
#201017
Also, if the name Haspel rings a bell: she was the one involved in the Bush-era torture programs and the destruction of the related tapes.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Joined July 2010
Germany, 11554 posts
March 13 2018 13:35 GMT
#201018
On March 13 2018 22:22 farvacola wrote:
[Nested quotes of the earlier exchange and the Hattie review snipped; quoted in full above.]
While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think relegating his views to a position necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.


I am also fine with the experience of people involved in the topic, which is why I tend to value DPB's opinion on educational matters more than that of someone who is not in any way involved with teaching except having been a student once. How to weigh experience against empirical studies is hard; I would usually tend towards empirical studies, because experience is usually very subjective and not necessarily universal, but both definitely have value in an argument.

I also find that people actually involved in a subject are less likely to assert total dominance, and more likely to try to learn more about parts of that subject that might be new to them, unless it directly and completely contradicts something that they know to be true.

When I said "argue based on feeling", I meant statements like the one made by Sadist, where there is neither any actual experience used as a basis, nor any studies, just an "it seems like it would make sense if stuff worked that way, thus I am going to be convinced that it works that way".
{CC}StealthBlue
Joined January 2003
United States, 41117 posts
March 13 2018 13:38 GMT
#201019
"Smokey, this is not 'Nam, this is bowling. There are rules."
Sadist
Joined October 2002
United States, 7255 posts
March 13 2018 13:39 GMT
#201020
On March 13 2018 20:53 Simberto wrote:
On March 13 2018 20:37 farvacola wrote:
On March 13 2018 20:25 Simberto wrote:
On March 13 2018 13:26 GreenHorizons wrote:
On March 13 2018 13:19 CatharsisUT wrote:
On March 13 2018 08:09 DarkPlasmaBall wrote:
On March 13 2018 07:52 Simberto wrote:
"I would argue raising standards and holding people back a few times would help."

I would like to mention that this is in no way supported by empirical evidence. Holding people back a grade actually reduces the amount of stuff they learn during their next year. Retention is roughly as bad as corporal punishment (very bad) at home with regards to its effect on learning, and way worse than for example television at home.


This is interesting to me. Do you happen to have sources I can read regarding the supposed detrimental effects that holding students back a grade has on their learning? I'm particularly interested in math education as an example, since I feel these negative effects are counterintuitive. Considering the years of math build on each other, I would think it's of the utmost importance for high school students to have a strong foundation in arithmetic and algebra before starting the higher maths, even if that means spending another year (or summer school) on algebra. Students who have weak algebra skills will struggle even more in trigonometry and calculus, for example. You think it would be worse for them to spend more time on algebra than to be pushed through to the next math? Even if they're failing?


As I read Simberto's comment, it seems fairly misleading. I think what he is saying is "if you have a first grader and make them repeat first grade, they will learn less in the following year if they repeat first grade vs. going to second grade." That seems obvious and kind of useless. Of course they will. If they learned 70% of the first grade material the first time they went through and got up to 90% the second time through, they only learned 20% of a year's worth the second time! That kid could have gone on to second grade and learned 50% of a new year's worth of stuff.

Of course that is the wrong comparison. What we care about is how that kid does in second grade after a repeat year vs. without.


I think if someone learned 70% of the material, having a system that can't get them the other 30% without repeating the 70% they already know is failing that student.

I think you are right that the statistic as presented is less substantial than implied but I don't think an excellent performance in 2nd grade would mean that holding the kid back was a great choice either.


That is how I meant it. And no, I don't think that is the wrong comparison. The point of school is learning stuff. You automatically learn more stuff the more time you spend, so the only valid metric of good education is "stuff learned/time spent", not something totally arbitrary like "stuff learned/grade you are in". Otherwise, the optimal system would be to just have children repeat a grade over and over until they learn everything there is to learn in that grade. "Hey, our second graders know quantum mechanics, because they have been in second grade for 57 years!" Obviously that is a reduction to the absurd, but I think it clarifies why I think the valid timeline is the life of the student, not the position in the education system.

The goal should be to have the maximum learning effect in each year. And retention is really bad at that. But of course, putting children into progressively harder grades while they lack parts of the necessary background knowledge is also not optimal. It is just less bad than retention. A better system would find ways to allow students to fill the holes in their knowledge instead of putting them into situations where they are basically forced to fail, or situations where they are stuck repeating the same stuff that they already know most of the time.
Your conclusions conflict with your reasoning, Simberto. If a student's educational timeline is more important and figurative than his or her position in the education system, then holding students back or otherwise stopping them from taking part in the routine step-based grade system shouldn't be as bad as you're suggesting. Further, you're making value judgments in terms of the adequacy of a student's knowledge as it matches up with the mechanics of failure and/or being held back a grade. For example, "repeating the same stuff" deserves a lot of qualification; are the students literally repeating the same material or is the teacher tailoring some of the repeat material? Further, say that the student did very poorly the prior year because their homelife fell apart and they literally paid almost no attention at school that whole time; does "repeating the same stuff" still ring as negatively then? I'd say no.

More generally, I think the point Sadist and others were getting at is that the US absolutely has a problem with the concept of failure as it relates to worth and place in society, and though holding students back a grade seems like strong medicine, allowing students to graduate from high school while barely being able to read, write, or do math is weak medicine by the same margin. Further, this stigma associated with poor academic performance spills over into our problem with vocations and trade skills, so to the extent that holding a student back makes less sense than sending them somewhere their talents are better put to use, I suppose I agree with your criticism.


But as the Hattie meta-study shows, if you hold students back for a year, they learn less during the year they just repeated. That was my whole point. That seems like a bad system. And I agree that the standard stepladder is probably not the best system for enhancing students' abilities. And I also agree that even within the stepladder system, you need a way to deal with students who fall further and further behind due to a lack of the knowledge required to learn the current material. Holding students back a year is just really bad at doing that and basically wastes a whole year of the student's time for small gains.

Off the top of my head, all of the following sound instinctively better (though one would obviously have to test whether they actually work): additional schooling during the summer break, additional basic classes during the following year, or differentiating classes during the following year based on student ability.

The problem is that all of the above require effort and money (for additional teacher hours). Having the student repeat a year isn't as directly expensive (though if you actually calculate the cost of a complete additional year's worth of teacher hours, that isn't negligible either; it just seems free because you can sit the student in a class that already exists), and it is organisationally easy.




I think the idea that you only measure their learning in the following year (if I understand the study) is pointless.

The real measure would be how well they perform over the rest of their academic career, not just one year.

I'm not sure how you measure that exactly, but the point of repeating a grade isn't just to fix a one-year problem. It's to try to jar the person into realizing that they need to fix a lifelong problem.
How do you go from where you are to where you want to be? I think you have to have an enthusiasm for life. You have to have a dream, a goal and you have to be willing to work for it. Jim Valvano