US Politics Mega-thread - Page 10051

Read the rules in the OP before posting, please.

In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way, keep it up!

NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious.
Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
iamthedave
Profile Joined February 2011
England, 2814 Posts
Last Edited: 2018-03-13 12:20:01
March 13 2018 12:13 GMT
#201001
I'm confused as to why people think Hillary would have made a 'bad' President. Her primary flaws are being about as approachable as the Terminator, plus decades of vilification and/or dubious behaviour that warrants it.

Her strengths are a vice-like grasp of legislative procedure and policy-making, and I presume deal-making as well, given she seems to have a lot of support despite being widely disliked.

Her weaknesses are in things a President should ideally possess.

Her strengths are in things a President absolutely must have.

Excluding the political association, she'd most likely have been a perfectly competent President who'd have got shit done. Assuming the House didn't follow through on their threat to literally impeach her the moment she got into the White House. And I'll bet she wouldn't have got into a fight with the Mayor of Puerto Rico, either.

But any criticism from the right rings hollow. The absolutely hysterical whinging about Obama is proof that it doesn't matter what they do: that D next to the name means it'll be presented as hell's own concept. If anything, a Hillary presidency would probably have generated tons of funny Hillary-centric memes as the tiniest, stupidest details got picked out to be criticised. Obama had Dijon mustard and a tan suit; just think of what they'd have found for H.C. Alas, we'll never know.
I'm not bad at Starcraft; I just think winning's rude.
Kickboxer
Profile Blog Joined November 2010
Slovenia, 1308 Posts
March 13 2018 12:26 GMT
#201002
Well, or you could say Obama was a warmongering Oreo in a suit who bailed out Wall Street while acting smug on non-issues, and his pal Hillary is criminally insane and gobbling on corporate penis (while also married to the guy who repealed Glass-Steagall). A matter of perspective.

User was temp banned for this post.
Zaros
Profile Blog Joined September 2010
United Kingdom, 3692 Posts
March 13 2018 12:47 GMT
#201003
a_flayer
Profile Blog Joined April 2010
Netherlands, 2826 Posts
March 13 2018 12:48 GMT
#201004
Oh, shit.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
Adreme
Profile Joined June 2011
United States, 5574 Posts
March 13 2018 12:59 GMT
#201005
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.
a_flayer
Profile Blog Joined April 2010
Netherlands, 2826 Posts
Last Edited: 2018-03-13 13:43:48
March 13 2018 13:00 GMT
#201006
On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.

Mike Pompeo was recently in the headlines for saying Putin lied about Russia not interfering in the election. He switches his position depending on who says what. If Trump says Russia didn't interfere, it's true. If Putin says it, it's false. Tillerson is probably just sick of it - he never really wanted the job in the first place.

This is a disturbing move, though. Now it's no longer 'corporate oil foreign policy' but rather 'CIA foreign policy'. Fun times. The chances for war with Iran just doubled.
When you came along so righteous with a new national hate, so convincing is the ardor of war and of men, it's harder to breathe than to believe you're a friend. The wars at home, the wars abroad, all soaked in blood and lies and fraud.
GreenHorizons
Profile Blog Joined April 2011
United States, 23515 Posts
Last Edited: 2018-03-13 13:01:55
March 13 2018 13:01 GMT
#201007
I feel like the shift from CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the State Department should really terrify people.
"People like to look at history and think 'If that was me back then, I would have...' We're living through history, and the truth is, whatever you are doing now is probably what you would have done then" "Scratch a Liberal..."
ShoCkeyy
Profile Blog Joined July 2008
7815 Posts
Last Edited: 2018-03-13 13:04:42
March 13 2018 13:02 GMT
#201008
Fun, let's see how this plays out.

On March 13 2018 21:59 Adreme wrote:
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.


From what I'm seeing, Trump fired Tillerson.

On March 13 2018 22:01 GreenHorizons wrote:
I feel like the shift from CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.

The idea of the person who was running the CIA running the State Department should really terrify people.


It is an issue. Especially when, in my opinion, it's someone who barely has any experience to be handling State...
Life?
{CC}StealthBlue
Profile Blog Joined January 2003
United States, 41117 Posts
March 13 2018 13:04 GMT
#201009
Pompeo is more hawkish towards Russia than Tillerson ever was.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Pandemona *
Profile Blog Joined March 2011
Charlie Sheens House, 51493 Posts
March 13 2018 13:06 GMT
#201010
Yeah, from what I'm reading it seems the sacking was due to clashes more on a personal scale than a policy scale. Might just be my bad news sources though :D
Moderator | Team Liquid Football Thread Guru! - Chelsea FC ♥
farvacola
Profile Blog Joined January 2011
United States, 18840 Posts
March 13 2018 13:11 GMT
#201011
I think it was both, though with all the fog surrounding the White House, it's practically impossible to tell.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Profile Blog Joined January 2003
United States, 41117 Posts
Last Edited: 2018-03-13 13:14:29
March 13 2018 13:14 GMT
#201012
The grey area is the new CIA director.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Profile Blog Joined July 2010
Germany, 11686 Posts
March 13 2018 13:14 GMT
#201013
On March 13 2018 20:59 farvacola wrote:
I think this reliance on the findings of a single (albeit huge in scale and thorough) study presents interpretive problems and it would seem that I'm not alone. For example, a reviewer wrote the following:

This book by John Hattie – Professor of Education at the University of Auckland – is the culmination of more than a decade of research during which he and his team have set out to summarise and synthesise the empirical research on the effects of various educational influences and interventions on student achievement. Probably due to the huge scope of this project – comprising 800 meta-analyses, more than 50,000 smaller studies and more than 80 million pupils – this study has been widely acclaimed. According to a review in the Times Educational Supplement, Hattie’s work “reveals teaching’s Holy Grail”.

Hattie starts from the observation that in education “everything seems to work”, as educational interventions of almost any kind seem to have a positive effect on student achievement. He then proposes to move beyond “everything goes”, towards the development of a barometer of “what works best”. To this end he applies the tools of meta-analysis to a huge body of empirical research and calculates effect sizes (denoted d) for 138 influences in the following domains: student, home, school, teacher, curricula and teaching approaches. Hattie neatly presents the effect sizes in a graphical barometer and convincingly argues that only effect sizes higher than 0.4 are in the so-called zone of desired effects (in other words, are worth the effort). Prior to presenting the barometers and effect size rankings, Hattie develops his visible learning story, which is summarised in the following quote: “Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate and engaging people participating in the act of learning” (p. 22). The visible learning story is illustrated using the example of outdoor training. An instructor teaching rock-climbing will have continuous visual feedback on the success of his teaching efforts (pupils climbing high or falling down) and be able to adjust his teaching accordingly.

I find the visible learning story a convincing story. I believe most teachers will agree with the book’s main message that effective instruction cannot take place without proper feedback from student to teacher on the effectiveness of the instruction. Hattie also convincingly argues that the effectiveness of teaching increases when teachers act as activator instead of as facilitator, a view which I find refreshing in a time when teaching approaches such as problem-based learning have the effect of sidelining the instructor. My problem with the book is, however, that I would have been convinced even without the empirical analysis. If anything, Hattie’s meta-meta-analysis casts a few doubts on the validity of his research, as I will explain below.

My first comment, however, relates to Hattie’s goal in writing this book. He states that his aim is “to develop an explanatory story about key influences on student learning”, not to build another “what works recipe”. Yet this aim fits uneasily with the barometers and rankings which are scattered across the book. By presenting these measures so prominently, the author automatically invites the reader to make a clear distinction between what works and what doesn’t work. If Hattie doesn’t want us to draw such conclusions, he should not have presented the material in this way. Related to this is the tension between story-telling and ranking influences. The visible learning story is told in Chapter 3 and naturally refers to some of the effect sizes calculated in the remainder of the book. Yet the relationship between story and effect sizes remains implicit and qualitative. The reader has no indication or test result of how well the effect sizes fit the visible learning story.

I next turn to the way in which the meta-meta-analysis has been conducted. Hattie discusses the various pros and cons of meta-analysis extensively and concludes that this is a valid research methodology. I will not take issue with this point, as meta-analysis is a generally accepted tool of academic research. As a general statistical point, however, I was surprised that Hattie has chosen to summarise the effect sizes of the 800 meta-analyses using unweighted averages. Small and large meta-analyses have equal weight, while I would assume that the number of studies on which a meta-analysis is based indicates its validity and importance. Instead I would have opted for weighted averaging by number of studies, students or effect sizes. At a minimum, it would be interesting to see whether the results are robust to the choice of averaging.

A great asset of Hattie’s book is the reference list, which allows the inquisitive reader to dig a little bit deeper by moving from the rankings to the underlying meta-studies. I have done this for the top-ranking influence, which is “self-reported grades” (d = 1.44). This result is dominated by the Kuncel et al. (2005) meta-analysis (d = 3.1). This paper is about the validity of ex-post self-reported grades (due to imperfect storage and retrieval from memory or intentional deception), not about students’ expectations or the predictive power of their own study performance, as Hattie claims. The paper thus should not have been included in the analysis. My N = 1 sampling obviously has its limits, but this example does raise questions regarding the remaining average effect sizes.

Two final comments relate to the application of Hattie’s work. While it is certainly valuable to know “what works best” in education, educational institutions will need to know not just the benefit of educational interventions, but also their cost. So the question which really needs to be answered is “what works best per monetary unit spent”. On the cost side, however, Hattie’s book is silent. Also, given the importance of two-way feedback in teaching, a major challenge for large-scale educational institutions (such as universities) is to organise feedback in a cost-effective manner.

Visible learning should be lauded for emphasising the importance of the student–teacher relationship and of adequate feedback, but at the same time presents managers with the challenge of organising this feedback in large scale educational settings.


Source


I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".
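
To make the reviewer's point about unweighted versus weighted averaging concrete, here is a minimal sketch in Python with hypothetical numbers (illustrative only, not Hattie's actual data): a single small meta-analysis reporting an outsized effect can dominate an unweighted mean, while a mean weighted by the number of underlying studies stays closer to the bulk of the evidence.

```python
# Hypothetical meta-analyses: (effect size d, number of underlying studies).
# The values are made up for illustration; they are not Hattie's data.
meta_analyses = [
    (3.1, 12),    # a small meta-analysis reporting an unusually large d
    (1.0, 90),
    (0.9, 150),
    (0.8, 200),
]

# Unweighted mean: every meta-analysis counts equally, so the small
# outlier pulls the average up.
unweighted = sum(d for d, _ in meta_analyses) / len(meta_analyses)

# Weighted mean: each meta-analysis counts in proportion to the number
# of studies behind it.
total_studies = sum(n for _, n in meta_analyses)
weighted = sum(d * n for d, n in meta_analyses) / total_studies

print(f"unweighted mean d: {unweighted:.2f}")  # 1.45
print(f"weighted mean d:   {weighted:.2f}")    # 0.93
```

Whether to weight by study count, student count, or number of effect sizes is exactly the robustness question the reviewer raises; the sketch only shows that the choice can move the headline number.
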
m4ini
Profile Joined February 2014
4215 Posts
March 13 2018 13:20 GMT
#201014
Please imagine a link to a YouTube video of 'Another One Bites the Dust'.

Thank god you guys are draining the swamp over there.
On track to MA1950A.
farvacola
Profile Blog Joined January 2011
United States, 18840 Posts
March 13 2018 13:22 GMT
#201015
On March 13 2018 22:14 Simberto wrote:
I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work".

While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.
"when the Dead Kennedys found out they had skinhead fans, they literally wrote a song titled 'Nazi Punks Fuck Off'"
{CC}StealthBlue
Profile Blog Joined January 2003
United States, 41117 Posts
March 13 2018 13:24 GMT
#201016
"Smokey, this is not 'Nam, this is bowling. There are rules."
{CC}StealthBlue
Profile Blog Joined January 2003
United States, 41117 Posts
March 13 2018 13:28 GMT
#201017
Also, if the name Haspel rings a bell, she was the one involved in the Bush torture programs and the destruction of said tapes.
"Smokey, this is not 'Nam, this is bowling. There are rules."
Simberto
Profile Blog Joined July 2010
Germany, 11686 Posts
March 13 2018 13:35 GMT
#201018
On March 13 2018 22:22 farvacola wrote:
While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.


I am also fine with the experience of people involved in the topic, which is why I tend to value DPB's opinion on educational matters more than that of someone who is not in any way involved with teaching except having been a student once. How to weigh experience against empirical studies is hard; I would usually tend towards empirical studies, because experience is usually very subjective and not necessarily universal, but both definitely have value in an argument.

I also find that people actually involved in a subject are less likely to assert total dominance, and more likely to try to learn more about parts of that subject that might be new to them, unless it directly and completely contradicts something that they know to be true.

When I said "argue based on feeling", I mean statements like the one made by Sadist, where there is neither any actual experience used as a basis, nor any studies, just an "it seems like it would make sense if stuff worked that way, thus I am going to be convinced that it works that way".
{CC}StealthBlue
Profile Blog Joined January 2003
United States, 41117 Posts
March 13 2018 13:38 GMT
#201019
"Smokey, this is not 'Nam, this is bowling. There are rules."
Sadist
Profile Blog Joined October 2002
United States, 7298 Posts
March 13 2018 13:39 GMT
#201020
On March 13 2018 20:53 Simberto wrote:
On March 13 2018 20:37 farvacola wrote:
On March 13 2018 20:25 Simberto wrote:
On March 13 2018 13:26 GreenHorizons wrote:
On March 13 2018 13:19 CatharsisUT wrote:
On March 13 2018 08:09 DarkPlasmaBall wrote:
On March 13 2018 07:52 Simberto wrote:
"I would argue raising standards and holding people back a few times would help."

I would like to mention that this is in no way supported by empirical evidence. Holding people back a grade actually reduces the amount of stuff they learn during their next year. Retention is roughly as bad as corporal punishment at home (very bad) with regard to its effect on learning, and way worse than, for example, television at home.


This is interesting to me. Do you happen to have sources I can read regarding the supposed detrimental effects that holding students back a grade has on their learning? I'm particularly interested in math education as an example, since I feel these negative effects are counterintuitive. Considering the years of math build on each other, I would think it's of the utmost importance for high school students to have a strong foundation in arithmetic and algebra before starting the higher maths, even if that means spending another year (or summer school) on algebra. Students who have weak algebra skills will struggle even more in trigonometry and calculus, for example. You think it would be worse for them to spend more time on algebra than to be pushed through to the next math? Even if they're failing?


As I read Simberto's comment, it seems fairly misleading. I think what he is saying is "if you have a first grader and make them repeat first grade, they will learn less in the following year if they repeat first grade vs. going to second grade." That seems obvious and kind of useless. Of course they will. If they learned 70% of the first grade material the first time they went through and got up to 90% the second time through, they only learned 20% of a year's worth the second time! That kid could have gone on to second grade and learned 50% of a new year's worth of stuff.

Of course that is the wrong comparison. What we care about is how that kid does in second grade after a repeat year vs. without.


I think if someone learned 70% of the material, having a system that can't get them the other 30% without repeating the 70% they already know is failing that student.

I think you are right that the statistic as presented is less substantial than implied but I don't think an excellent performance in 2nd grade would mean that holding the kid back was a great choice either.


That is how I meant it. And no, I don't think that that is the wrong comparison. The point of school is learning stuff. You automatically learn more stuff the more time you spend, so the only valid metric of good education is "stuff learned/time spent", not something totally arbitrary like "stuff learned/grade you are in". Otherwise, the optimal system would be to just have children repeat a grade over and over until they learn everything there is to learn in that grade. "Hey, our second graders know quantum mechanics, because they have been in second grade for 57 years!" Obviously that is a reduction to the absurd, but I think it clarifies why I think that the valid timeline is the life of the student, not the position in the education system.

The goal should be to have the maximum learning effect in each year. And retention is really bad at that. But of course, putting children into progressively harder grades while they lack parts of the necessary background knowledge is also not optimal. It is just less bad than retention. A better system would find ways to allow the students to fill in the holes in their knowledge instead of putting them into situations where they are basically forced to fail, or situations where they are stuck repeating the same stuff that they already know most of the time.
Your conclusions conflict with your reasoning, Simberto. If a student's educational timeline is more important and figurative than his or her position in the education system, then holding students back or otherwise stopping them from taking part in the routine step-based grade system shouldn't be as bad as you're suggesting. Further, you're making value judgments in terms of the adequacy of a student's knowledge as it matches up with the mechanics of failure and/or being held back a grade. For example, "repeating the same stuff" deserves a lot of qualification; are the students literally repeating the same material or is the teacher tailoring some of the repeat material? Further, say that the student did very poorly the prior year because their homelife fell apart and they literally paid almost no attention at school that whole time; does "repeating the same stuff" still ring as negatively then? I'd say no.

More generally, I think the point Sadist and others were getting at is that the US absolutely has a problem with the concept of failure as it relates to worth and place in society, and though holding students back a grade seems like strong medicine, allowing students to graduate from high school while barely being able to read, write, or do math is weak medicine by the same margin. Further, this stigma associated with poor academic performance spills over into our problem with vocations and trade skills, so in the sense that holding a student back makes less sense than sending them somewhere where their talents are better put to use, then I suppose I agree with your criticism.


But as the Hattie meta-study shows, if you hold students back for a year, they learn less during the year they just repeated. That was my whole point. That seems like a bad system. And I agree that the standard stepladder is probably not the best system for enhancing students' abilities. And I also agree that even within the stepladder system, you need a way to deal with students who fall further and further behind due to a lack of the knowledge required to learn the current material. Holding students back a year is just really bad at doing that and basically wastes a whole year of the student's time for small gains.

Off the top of my head, all of the following sound instinctively better (though one would obviously have to test whether they actually work or not): additional schooling during the summer break, additional basic classes during the following year, differentiating the classes during the following year based on student ability.

The problem is that all of the above require effort and money (for additional teacher hours). Having the student repeat a year isn't as directly expensive (though if you actually calculate the cost of a complete additional year's worth of teacher hours, that isn't negligible either; it just seems free because you can sit the student in a class that is already there) and is organisationally easy.




I think the idea that you only measure their learning in the following year (if I understand the study correctly) is pointless.

The real measure would be how well they perform for the rest of their academic career, not just one year.

I'm not sure how you measure that exactly, but the point of repeating a grade isn't just to fix a one-year problem. It's to try to jar into the person that they need to fix a lifelong problem.
How do you go from where you are to where you want to be? I think you have to have an enthusiasm for life. You have to have a dream, a goal and you have to be willing to work for it. Jim Valvano