US Politics Mega-thread - Page 10051

Read the rules in the OP before posting, please.

In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way, keep it up!

NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious.
Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
iamthedave
Profile Joined February 2011
England2814 Posts
Last Edited: 2018-03-13 12:20:01
March 13 2018 12:13 GMT
#201001
I'm confused as to why people think Hillary would have made a 'bad' President. Her primary flaws are being about as approachable as the Terminator, and decades of vilification and/or the dubious behaviour that warrants it.

Her strengths are a vice-like grasp of legislative procedure and policy-making, and I presume deal-making as well, given that she seems to have a lot of support despite being widely disliked.

Her weaknesses are in qualities a President should ideally have.

Her strengths are in qualities a President absolutely must have.

Excluding the political association, she'd most likely have been a perfectly competent President who'd have got shit done, assuming the House didn't follow through on their threat to literally impeach her the moment she got into the White House. And I'll bet she wouldn't have got into a fight with the Mayor of Puerto Rico, either.

But any criticism from the right rings hollow. The absolutely hysterical whinging about Obama is proof that it doesn't matter what they do; that (D) means it'll be presented as hell's own concept. If anything, a Hillary Presidency would probably have generated tons of funny Hillary-centric memes as the tiniest, stupidest details got picked out to be criticised. Obama had dijon mustard and a tan suit; just think of what they'd have found for H.C. Alas, we'll never know.
I'm not bad at Starcraft; I just think winning's rude.
Kickboxer
Profile Blog Joined November 2010
Slovenia1308 Posts
March 13 2018 12:26 GMT
#201002
Well, or you could say Obama was a warmongering Oreo in a suit who bailed out Wall Street while acting smug on non-issues, and his pal Hillary is criminally insane and gobbling on corporate penis (while also married to the guy who repealed Glass-Steagall). A matter of perspective.

User was temp banned for this post.
Zaros
Profile Blog Joined September 2010
United Kingdom3692 Posts
March 13 2018 12:47 GMT
#201003
a_flayer
Profile Blog Joined April 2010
Netherlands2826 Posts
March 13 2018 12:48 GMT
#201004
Oh, shit.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
Adreme
Profile Joined June 2011
United States5574 Posts
March 13 2018 12:59 GMT
#201005
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.
a_flayer
Profile Blog Joined April 2010
Netherlands2826 Posts
Last Edited: 2018-03-13 13:43:48
March 13 2018 13:00 GMT
#201006
On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.

Mike Pompeo was recently in the headlines for saying Putin lied about Russia not interfering in the election. He switches his position depending on who says what. If Trump says Russia didn't interfere, it's true. If Putin says it, it's false. Tillerson is probably just sick of it - he never really wanted the job in the first place.

This is a disturbing move, though. Now it's no longer 'corporate oil foreign policy' but rather 'CIA foreign policy'. Fun times. The chances for war with Iran just doubled.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
GreenHorizons
Profile Blog Joined April 2011
United States23206 Posts
Last Edited: 2018-03-13 13:01:55
March 13 2018 13:01 GMT
#201007
I feel like the shift from CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the State Department should really terrify people.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
ShoCkeyy
Profile Blog Joined July 2008
7815 Posts
Last Edited: 2018-03-13 13:04:42
March 13 2018 13:02 GMT
#201008
Fun, let's see how this plays out.

On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.


From what I'm seeing, Trump fired Tillerson.

On March 13 2018 22:01 GreenHorizons wrote:
I feel like the shift from CIA to Secretary of state isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the state department should really terrify people.


It is an issue, especially when, in my opinion, it's someone who barely has the experience to be handling State...
Life?
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
March 13 2018 13:04 GMT
#201009
Pompeo is more hawkish towards Russia than Tillerson ever was.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Pandemona *
Profile Blog Joined March 2011
Charlie Sheens House51481 Posts
March 13 2018 13:06 GMT
#201010
Yeah, from what I'm reading it seems the sacking was due to clashes more on a personal scale than a policy one. Might just be my bad news sources though :D
Moderator | Team Liquid Football Thread Guru! - Chelsea FC ♥
farvacola
Profile Blog Joined January 2011
United States18825 Posts
March 13 2018 13:11 GMT
#201011
I think it was both, though with all the fog surrounding the White House, it's practically impossible to tell.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
Last Edited: 2018-03-13 13:14:29
March 13 2018 13:14 GMT
#201012
The grey area is the new CIA director.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Profile Blog Joined July 2010
Germany11498 Posts
March 13 2018 13:14 GMT
#201013
On March 13 2018 20:59 farvacola wrote:
I think this reliance on the findings of a single (albeit huge in scale and thorough) study presents interpretive problems and it would seem that I'm not alone. For example, a reviewer wrote the following:

Show nested quote +
This book by John Hattie – Professor of Education at the University of Auckland – is the culmination of more than a decade of research during which he and his team have set out to summarise and synthesise the empirical research on the effects of various educational influences and interventions on student achievement. Probably due to the huge scope of this project – comprising 800 meta-analyses, more than 50,000 smaller studies and more than 80 million pupils – this study has been widely acclaimed. According to a review in the Times Educational Supplement, Hattie’s work “reveals teaching’s Holy Grail”.

Hattie starts from the observation that in education “everything seems to work”, as educational interventions of almost any kind seem to have a positive effect on student achievement. He then proposes to move beyond “everything goes”, towards the development of a barometer of “what works best”. To this end he applies the tools of meta-analysis to a huge body of empirical research and calculates effect sizes (denoted d) for 138 influences in the following domains: student, home, school, teacher, curricula and teaching approaches. Hattie neatly presents the effect sizes in a graphical barometer and convincingly argues that only effect sizes higher than 0.4 are in the so-called zone of desired effects (in other words, are worth the effort). Prior to presenting the barometers and effect size rankings, Hattie develops his visible learning story, which is summarised in the following quote: “Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate and engaging people participating in the act of learning” (p. 22). The visible learning story is illustrated using the example of outdoor training. An instructor teaching rock-climbing will have continuous visual feedback on the success of his teaching efforts (pupils climbing high or falling down) and be able to adjust his teaching accordingly.

I find the visible learning story a convincing story. I believe most teachers will agree with the book’s main message that effective instruction cannot take place without proper feedback from student to teacher on the effectiveness of the instruction. Hattie also convincingly argues that the effectiveness of teaching increases when teachers act as activator instead of as facilitator, a view which I find refreshing in a time when teaching approaches such as problem-based learning have the effect of sidelining the instructor. My problem with the book is, however, that I would have been convinced even without the empirical analysis. If anything, Hattie’s meta-meta-analysis casts a few doubts on the validity of his research, as I will explain below.

My first comment, however, relates to Hattie’s goal in writing this book. He states that his aim is “to develop an explanatory story about key influences on student learning”, not to build another “what works recipe”. Yet this aim fits uneasily with the barometers and rankings which are scattered across the book. By presenting these measures so prominently, the author automatically invites the reader to make a clear distinction between what works and what doesn’t work. If Hattie doesn’t want us to draw such conclusions, he should not have presented the material in this way. Related to this is the tension between story-telling and ranking influences. The visible learning story is told in Chapter 3 and naturally refers to some of the effect sizes calculated in the remainder of the book. Yet the relationship between story and effect sizes remains implicit and qualitative. The reader has no indication or test result of how well the effect sizes fit the visible learning story.

I next turn to the way in which the meta-meta-analysis has been conducted. Hattie discusses the various pros and cons of meta-analysis extensively and concludes that this is a valid research methodology. I will not take issue with this point, as meta-analysis is a generally accepted tool of academic research. As a general statistical point, however, I was surprised that Hattie has chosen to summarise the effect sizes of the 800 meta-analyses using unweighted averages. Small and large meta-analyses have equal weight, while I would assume that the number of studies on which a meta-analysis is based indicates its validity and importance. Instead I would have opted for weighted averaging by number of studies, students or effect sizes. At a minimum, it would be interesting to see whether the results are robust to the choice of averaging.

A great asset of Hattie’s book is the reference list, which allows the inquisitive reader to dig a little bit deeper, by moving from the rankings to the underlying meta-studies. I have done this for the top-ranking influence, which is “self-reported grades” (d = 1.44). This result is dominated by the Kuncel et al. (2005) meta-analysis (d = 3.1). This paper is about the validity of ex-post self-reported grades (due to imperfect storage and retrieval from memory or intentional deception), not about students’ expectations or their predictive power of their own study performance, as Hattie claims. The paper thus should not have been included in the analysis. My N = 1 sampling obviously has its limits, but this example does raise questions regarding the remaining average effect sizes.

Two final comments relate to the application of Hattie’s work. While it is certainly valuable to know “what works best” in education, educational institutions will need to know not just the benefit of educational interventions, but also their cost. So the question which really needs to be answered is “what works best per monetary unit spent”. On the cost side, however, Hattie’s book is silent. Also, given the importance of two-way feedback in teaching, a major challenge for large-scale educational institutions (such as universities) is to organise feedback in a cost-effective manner.

Visible learning should be lauded for emphasising the importance of the student–teacher relationship and of adequate feedback, but at the same time presents managers with the challenge of organising this feedback in large scale educational settings.


Source


I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".
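
The reviewer's statistical objection quoted above, unweighted versus weighted averaging of effect sizes across meta-analyses, is easy to illustrate. Below is a minimal Python sketch with hypothetical numbers; the effect sizes and study counts are invented for illustration and are not taken from Hattie's data.

    # Hypothetical d values from three meta-analyses and the number of
    # underlying studies in each (illustrative numbers, not Hattie's).
    effect_sizes = [3.1, 0.9, 0.7]
    num_studies = [12, 300, 150]

    # An unweighted mean treats a 12-study meta-analysis the same as a 300-study one.
    unweighted = sum(effect_sizes) / len(effect_sizes)

    # A weighted mean lets larger bodies of evidence count for more.
    weighted = sum(d * n for d, n in zip(effect_sizes, num_studies)) / sum(num_studies)

    print(f"unweighted mean d = {unweighted:.2f}")  # ~1.57, pulled up by the small outlier
    print(f"weighted mean d   = {weighted:.2f}")    # ~0.89, closer to the bulk of the evidence

As the reviewer suggests, checking whether the rankings are robust to this choice of averaging would be a simple but informative sensitivity test.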
m4ini
Profile Joined February 2014
4215 Posts
March 13 2018 13:20 GMT
#201014
Please imagine a link to a YouTube video of "Another One Bites the Dust".

Thank god you guys are draining the swamp over there.
On track to MA1950A.
farvacola
Profile Blog Joined January 2011
United States18825 Posts
March 13 2018 13:22 GMT
#201015
On March 13 2018 22:14 Simberto wrote:
Show nested quote +
I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".

While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned into thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable sources of information, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
March 13 2018 13:24 GMT
#201016
"Smokey, this is not 'Nam, this is bowling. There are rules."
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
March 13 2018 13:28 GMT
#201017
Also, if the name Haspel rings a bell: she was the one involved in the Bush-era torture programs and the destruction of the related tapes.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Profile Blog Joined July 2010
Germany11498 Posts
March 13 2018 13:35 GMT
#201018
On March 13 2018 22:22 farvacola wrote:
Show nested quote +
While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned into thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable sources of information, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.


I am also fine with the experience of people involved in the topic, which is why I tend to value DPB's opinion on educational matters more than that of someone who is not in any way involved with teaching except having been a student once. How to weigh experience against empirical studies is hard; I would usually lean towards empirical studies, because experience is usually very subjective and not necessarily universal, but both definitely have value in an argument.

I also find that people actually involved in a subject are less likely to assert total dominance, and more likely to try to learn more about parts of that subject that might be new to them, unless it directly and completely contradicts something they know to be true.

When I said "argue based on feeling", I meant statements like the one made by Sadist, where there is neither any actual experience used as a basis nor any studies, just an "it seems like it would make sense if stuff worked that way, thus I am going to be convinced that it works that way".
{CC}StealthBlue
Profile Blog Joined January 2003
United States41117 Posts
March 13 2018 13:38 GMT
#201019
"Smokey, this is not 'Nam, this is bowling. There are rules."
Sadist
Profile Blog Joined October 2002
United States7219 Posts
March 13 2018 13:39 GMT
#201020
On March 13 2018 20:53 Simberto wrote:
Show nested quote +
On March 13 2018 20:37 farvacola wrote:
On March 13 2018 20:25 Simberto wrote:
On March 13 2018 13:26 GreenHorizons wrote:
On March 13 2018 13:19 CatharsisUT wrote:
On March 13 2018 08:09 DarkPlasmaBall wrote:
On March 13 2018 07:52 Simberto wrote:
"I would argue raising standards and holding people back a few times would help."

I would like to mention that this is in no way supported by empirical evidence. Holding people back a grade actually reduces the amount of stuff they learn during their next year. Retention is roughly as bad as corporal punishment at home (very bad) with regard to its effect on learning, and far worse than, for example, television at home.


This is interesting to me. Do you happen to have sources I can read regarding the supposed detrimental effects that holding students back a grade has on their learning? I'm particularly interested in math education as an example, since I feel these negative effects are counterintuitive. Considering the years of math build on each other, I would think it's of the utmost importance for high school students to have a strong foundation in arithmetic and algebra before starting the higher maths, even if that means spending another year (or summer school) on algebra. Students who have weak algebra skills will struggle even more in trigonometry and calculus, for example. You think it would be worse for them to spend more time on algebra than to be pushed through to the next math? Even if they're failing?


As I read Simberto's comment, it seems fairly misleading. I think what he is saying is "if you have a first grader and make them repeat first grade, they will learn less in the following year if they repeat first grade vs. going to second grade." That seems obvious and kind of useless. Of course they will. If they learned 70% of the first grade material the first time they went through and got up to 90% the second time through, they only learned 20% of a year's worth the second time! That kid could have gone on to second grade and learned 50% of a new year's worth of stuff.

Of course that is the wrong comparison. What we care about is how that kid does in second grade after a repeat year vs. without.


I think if someone learned 70% of the material, having a system that can't get them the other 30% without repeating the 70% they already know is failing that student.

I think you are right that the statistic as presented is less substantial than implied but I don't think an excellent performance in 2nd grade would mean that holding the kid back was a great choice either.


That is how I meant it. And no, I don't think that is the wrong comparison. The point of school is learning stuff. You automatically learn more stuff the more time you spend, so the only valid metric of good education is "stuff learned/time spent", not something totally arbitrary like "stuff learned/grade you are in". Otherwise, the optimal system would be to just have children repeat a grade over and over until they learn everything there is to learn in that grade. "Hey, our second graders know quantum mechanics, because they have been in second grade for 57 years!" Obviously that is a reduction to the absurd, but I think it clarifies why I consider the valid timeline to be the life of the student, not the position in the education system.

The goal should be to have the maximum learning effect in each year, and retention is really bad at that. But of course, putting children into progressively harder grades while they lack parts of the necessary background knowledge is also not optimal; it is just less bad than retention. A better system would find ways to allow students to fill in the holes in their knowledge instead of putting them into situations where they are basically forced to fail, or situations where they are stuck repeating the same stuff that they already know most of the time.
Your conclusions conflict with your reasoning, Simberto. If a student's educational timeline is more important and figurative than his or her position in the education system, then holding students back or otherwise stopping them from taking part in the routine step-based grade system shouldn't be as bad as you're suggesting. Further, you're making value judgments in terms of the adequacy of a student's knowledge as it matches up with the mechanics of failure and/or being held back a grade. For example, "repeating the same stuff" deserves a lot of qualification; are the students literally repeating the same material or is the teacher tailoring some of the repeat material? Further, say that the student did very poorly the prior year because their homelife fell apart and they literally paid almost no attention at school that whole time; does "repeating the same stuff" still ring as negatively then? I'd say no.

More generally, I think the point Sadist and others were getting at is that the US absolutely has a problem with the concept of failure as it relates to worth and place in society, and though holding students back a grade seems like strong medicine, allowing students to graduate from high school while barely being able to read, write, or do math is weak medicine by the same margin. Further, this stigma associated with poor academic performance spills over into our problem with vocations and trade skills, so in the sense that holding a student back makes less sense than sending them somewhere where their talents are better put to use, then I suppose I agree with your criticism.


But as the Hattie meta-study shows, if you hold students back for a year, they learn less during the year they just repeated. That was my whole point. That seems like a bad system. And I agree that the standard stepladder is probably not the best system for enhancing students' abilities. And I also agree that even within the stepladder system, you need a way to deal with students who fall further and further behind due to a lack of the knowledge required to learn the current material. Holding students back a year is just really bad at doing that and basically wastes a whole year of the student's time for small gains.

Off the top of my head, all of the following sound instinctively better (though one would obviously have to test whether they actually work or not): additional schooling during the summer break, additional basic classes during the following year, differentiating the classes during the following year based on student ability.

The problem is that all of the above require effort and money (for additional teacher hours). Having the student repeat a year isn't as directly expensive and is organisationally easy (though if you actually calculate the cost of a complete additional year's worth of teacher hours, that isn't negligible either; it just seems free because you can sit the student in a class that is already there).




I think the idea that you only measure their learning in the following year (if I understand the study) is pointless.

The real measure would be how well they perform over the rest of their academic career, not just one year.

I'm not sure how you measure that exactly, but the point of repeating a grade isn't just to fix a one-year problem. It's to try to jar the person into realizing that they need to fix a lifelong problem.
How do you go from where you are to where you want to be? I think you have to have an enthusiasm for life. You have to have a dream, a goal and you have to be willing to work for it. Jim Valvano