|
Read the rules in the OP before posting, please. In order to ensure that this thread continues to meet TL standards and follows the proper guidelines, we will be enforcing the rules in the OP more strictly. Be sure to give them a re-read to refresh your memory! The vast majority of you are contributing in a healthy way; keep it up! NOTE: When providing a source, explain why you feel it is relevant and what purpose it adds to the discussion if it's not obvious. Also take note that unsubstantiated tweets/posts meant only to rekindle old arguments can result in a mod action.
I'm confused as to why people think Hillary would have made a 'bad' President. Her primary flaws are being as approachable as the Terminator, plus decades of vilification and/or the dubious behaviour that warrants it.
Her strengths are a vice-like grasp of legislative procedure and policy-making, and I presume deal-making as well, given she seems to have a lot of support despite being widely disliked.
Her weaknesses are in things a President should ideally possess.
Her strengths are in things a President absolutely must have.
Excluding the political association, she'd most likely have been a perfectly competent President who'd have got shit done, assuming the House didn't follow through on its threat to literally impeach her the moment she got into the White House. And I'll bet she wouldn't have got into a fight with the Mayor of San Juan, either.
But any criticism from the right rings hollow. The absolutely hysterical whinging about Obama is proof that it doesn't matter what they do; the (D) means it'll be presented as hell's own concept. If anything, a Hillary Presidency would probably have generated tons of funny Hillary-centric memes as the tiniest, stupidest details got picked out to be criticised. Obama had Dijon mustard and a tan suit; just think of what they'd have found for H.C. Alas, we'll never know.
|
Well, or you could say Obama was a warmongering Oreo in a suit who bailed out Wall Street while acting smug on non-issues, and his pal Hillary is criminally insane and gobbling on corporate penis (while also married to the guy who repealed Glass-Steagall). A matter of perspective.
User was temp banned for this post.
|
Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.
|
On March 13 2018 21:59 Adreme wrote: Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that. Mike Pompeo was recently in the headlines for saying Putin lied about Russia not interfering in the election. He switches his position depending on who says what: if Trump says Russia didn't interfere, it's true; if Putin says it, it's false. Tillerson is probably just sick of it; he never really wanted the job in the first place.
This is a disturbing move, though. Now it's no longer 'corporate oil foreign policy' but rather 'CIA foreign policy'. Fun times. The chances for war with Iran just doubled.
|
I feel like the shift from the CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.
The idea of the person who was running the CIA running the state department should really terrify people.
|
Fun, let's see how this plays out.
On March 13 2018 21:59 Adreme wrote: Of course Tillerson had to step down, he dared to criticize Russia. It cannot be a coincidence that he was asked to step down so soon after that.
From what I'm seeing, Trump fired Tillerson.
On March 13 2018 22:01 GreenHorizons wrote: I feel like the shift from the CIA to Secretary of State isn't going to be as problematic in many people's eyes as I think it should be.
The idea of the person who was running the CIA running the state department should really terrify people.
It is an issue, especially when it's someone who, in my opinion, barely has the experience to be handling the State Department...
|
Pompeo is more hawkish towards Russia than Tillerson ever was.
|
Yeah, from what I'm reading it seems the sacking was due to clashes on a more personal scale than a policy scale. Might just be my bad news sources though :D
|
I think it was both, though with all the fog surrounding the White House, it's practically impossible to tell.
|
The grey area is the new CIA director.
|
On March 13 2018 20:59 farvacola wrote: I think this reliance on the findings of a single (albeit huge in scale and thorough) study presents interpretive problems, and it would seem that I'm not alone. For example, a reviewer wrote the following: This book by John Hattie – Professor of Education at the University of Auckland – is the culmination of more than a decade of research during which he and his team have set out to summarise and synthesise the empirical research on the effects of various educational influences and interventions on student achievement. Probably due to the huge scope of this project – comprising 800 meta-analyses, more than 50,000 smaller studies and more than 80 million pupils – this study has been widely acclaimed. According to a review in the Times Educational Supplement, Hattie’s work “reveals teaching’s Holy Grail”.
Hattie starts from the observation that in education “everything seems to work”, as educational interventions of almost any kind seem to have a positive effect on student achievement. He then proposes to move beyond “everything goes”, towards the development of a barometer of “what works best”. To this end he applies the tools of meta-analysis to a huge body of empirical research and calculates effect sizes (denoted d) for 138 influences in the following domains: student, home, school, teacher, curricula and teaching approaches. Hattie neatly presents the effect sizes in a graphical barometer and convincingly argues that only effect sizes higher than 0.4 are in the so-called zone of desired effects (in other words, are worth the effort). Prior to presenting the barometers and effect size rankings, Hattie develops his visible learning story, which is summarised in the following quote: “Visible teaching and learning occurs when learning is the explicit goal, when it is appropriately challenging, when the teacher and student both seek to ascertain whether and to what degree the challenging goal is attained, when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate and engaging people participating in the act of learning” (p. 22). The visible learning story is illustrated using the example of outdoor training. An instructor teaching rock-climbing will have continuous visual feedback on the success of his teaching efforts (pupils climbing high or falling down) and be able to adjust his teaching accordingly.
I find the visible learning story a convincing story. I believe most teachers will agree with the book’s main message that effective instruction cannot take place without proper feedback from student to teacher on the effectiveness of the instruction. Hattie also convincingly argues that the effectiveness of teaching increases when teachers act as activator instead of as facilitator, a view which I find refreshing in a time when teaching approaches such as problem-based learning have the effect of sidelining the instructor. My problem with the book is, however, that I would have been convinced even without the empirical analysis. If anything, Hattie’s meta-meta-analysis casts a few doubts on the validity of his research, as I will explain below.
My first comment, however, relates to Hattie’s goal in writing this book. He states that his aim is “to develop an explanatory story about key influences on student learning”, not to build another “what works recipe”. Yet this aim fits uneasily with the barometers and rankings which are scattered across the book. By presenting these measures so prominently, the author automatically invites the reader to make a clear distinction between what works and what doesn’t work. If Hattie doesn’t want us to draw such conclusions, he should not have presented the material in this way. Related to this is the tension between story-telling and ranking influences. The visible learning story is told in Chapter 3 and naturally refers to some of the effect sizes calculated in the remainder of the book. Yet the relationship between story and effect sizes remains implicit and qualitative. The reader has no indication or test result of how well the effect sizes fit the visible learning story.
I next turn to the way in which the meta-meta-analysis has been conducted. Hattie discusses the various pros and cons of meta-analysis extensively and concludes that this is a valid research methodology. I will not take issue with this point, as meta-analysis is a generally accepted tool of academic research. As a general statistical point, however, I was surprised that Hattie has chosen to summarise the effect sizes of the 800 meta-analyses using unweighted averages. Small and large meta-analyses have equal weight, while I would assume that the number of studies on which a meta-analysis is based indicates its validity and importance. Instead I would have opted for weighted averaging by number of studies, students or effect sizes. At a minimum, it would be interesting to see whether the results are robust to the choice of averaging.
A great asset of Hattie’s book is the reference list, which allows the inquisitive reader to dig a little bit deeper by moving from the rankings to the underlying meta-studies. I have done this for the top-ranking influence, which is “self-reported grades” (d = 1.44). This result is dominated by the Kuncel et al. (2005) meta-analysis (d = 3.1). That paper is about the validity of ex-post self-reported grades (compromised by imperfect storage and retrieval from memory or by intentional deception), not about students’ expectations of their own study performance or the predictive power of those expectations, as Hattie claims. The paper thus should not have been included in the analysis. My N = 1 sampling obviously has its limits, but this example does raise questions regarding the remaining average effect sizes.
Two final comments relate to the application of Hattie’s work. While it is certainly valuable to know “what works best” in education, educational institutions will need to know not just the benefit of educational interventions, but also their cost. So the question which really needs to be answered is “what works best per monetary unit spent”. On the cost side, however, Hattie’s book is silent. Also, given the importance of two-way feedback in teaching, a major challenge for large-scale educational institutions (such as universities) is to organise feedback in a cost-effective manner.
Visible Learning should be lauded for emphasising the importance of the student–teacher relationship and of adequate feedback, but at the same time it presents managers with the challenge of organising this feedback in large-scale educational settings. Source
I would be totally fine with discussing stuff using additional sources if someone else brought any. The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work"
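To make the reviewer's averaging point concrete, here is a minimal sketch in Python. The effect sizes and study counts are invented for illustration (they are not Hattie's data); the first entry merely mimics a small but extreme meta-analysis like the Kuncel et al. d = 3.1 result:

# Toy comparison of unweighted vs. study-count-weighted averaging of
# effect sizes across meta-analyses. All numbers are invented.

# (effect size d, number of underlying studies) for three hypothetical meta-analyses
meta_analyses = [(3.1, 12), (0.9, 200), (0.6, 150)]

# Unweighted mean: every meta-analysis counts equally, so one small,
# extreme meta-analysis can drag the average up.
unweighted = sum(d for d, _ in meta_analyses) / len(meta_analyses)

# Weighted mean: each meta-analysis counts in proportion to its study count.
weighted = sum(d * n for d, n in meta_analyses) / sum(n for _, n in meta_analyses)

print(f"unweighted mean d = {unweighted:.2f}")  # 1.53
print(f"weighted mean d   = {weighted:.2f}")    # 0.85

The point is only that the ranking of "what works best" can shift with the averaging choice, which is why the reviewer asks whether the results are robust to it.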
|
Please imagine a link to a YouTube video of "Another One Bites the Dust".
Thank god you guys are draining the swamp over there.
|
On March 13 2018 22:14 Simberto wrote: … The problem I am having is that people basically argue completely based on feelings. A single study is better than no study. Of course I could try finding 12 more studies to support my points, but I don't really think it is fair to demand that I put in that much effort while other people argue solely based on "Oh, but I think this should work"
While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.
|
Also, if the name Haspel rings a bell: she was the one involved in the Bush-era torture program and in the destruction of the tapes documenting it.
|
On March 13 2018 22:22 farvacola wrote: … While true in the most general sense, I think it's a mistake to discount the views of individuals who speak from experience as nothing more than feelings turned thoughts. There are a lot of compelling areas of disagreement that turn on practically non-quantifiable information sources, and education, particularly given its "teach to the test" predicament, is one of them in at least some respects. Further, when an actual teacher like DPB speaks on an educational issue, I think treating his views as necessarily subservient to a study is a mistake, especially when the study itself seems to disclaim prescriptive use of its findings.
I am also fine with the experience of people involved in the topic, which is why I tend to value DPB's opinion on educational matters more than that of someone who is not in any way involved with teaching except having been a student once. How to weigh experience against empirical studies is hard; I would usually lean towards empirical studies, because experience is usually very subjective and not necessarily universal, but both definitely have value in an argument.
I also find that people actually involved in a subject are less likely to assert total dominance, and more likely to try to learn more about parts of that subject that might be new to them, unless it directly and completely contradicts something they know to be true.
When I said "argue based on feelings", I meant statements like the one made by Sadist, where there is neither any actual experience used as a basis nor any studies, just an "it seems like it would make sense if stuff worked that way, thus I am going to be convinced that it works that way"
|
On March 13 2018 07:52 Simberto wrote: "I would argue raising standards and holding people back a few times would help." I would like to mention that this is in no way supported by empirical evidence. Holding people back a grade actually reduces the amount of stuff they learn during their next year. Retention is roughly as bad as corporal punishment at home (very bad) with regards to its effect on learning, and way worse than, for example, television at home.
On March 13 2018 08:09 DarkPlasmaBall wrote: This is interesting to me. Do you happen to have sources I can read regarding the supposed detrimental effects that holding students back a grade has on their learning? I'm particularly interested in math education as an example, since I feel these negative effects are counterintuitive. Considering the years of math build on each other, I would think it's of the utmost importance for high school students to have a strong foundation in arithmetic and algebra before starting the higher maths, even if that means spending another year (or summer school) on algebra. Students who have weak algebra skills will struggle even more in trigonometry and calculus, for example. You think it would be worse for them to spend more time on algebra than to be pushed through to the next math? Even if they're failing?
On March 13 2018 13:19 CatharsisUT wrote: As I read Simberto's comment, it seems fairly misleading. I think what he is saying is "if you have a first grader and make them repeat first grade, they will learn less in the following year if they repeat first grade vs. going to second grade." That seems obvious and kind of useless. Of course they will. If they learned 70% of the first grade material the first time they went through and got up to 90% the second time through, they only learned 20% of a year's worth the second time! That kid could have gone on to second grade and learned 50% of a new year's worth of stuff. Of course that is the wrong comparison. What we care about is how that kid does in second grade after a repeat year vs. without.
On March 13 2018 13:26 GreenHorizons wrote: I think if someone learned 70% of the material, having a system that can't get them the other 30% without repeating the 70% they already know is failing that student. I think you are right that the statistic as presented is less substantial than implied, but I don't think an excellent performance in 2nd grade would mean that holding the kid back was a great choice either.
On March 13 2018 20:25 Simberto wrote: That is how I meant it. And no, I don't think that that is the wrong comparison. The point of school is learning stuff. You automatically learn more stuff the more time you spend, so the only valid metric of good education is "stuff learned/time spent", not something totally arbitrary like "stuff learned/grade you are in". Otherwise, the optimal system would be to just have children repeat a grade over and over until they learn everything there is to learn in that grade. "Hey, our second graders know quantum mechanics, because they have been in second grade for 57 years!" Obviously that is a reduction to the absurd, but I think it clarifies why I think that the valid timeline is the life of the student, not the position in the education system. The goal should be to have the maximum learning effect in each year, and retention is really bad at that. But of course, putting children into progressively harder grades while they lack parts of the necessary background knowledge is also not optimal. It is just less bad than retention. A better system would find ways to allow the students to fill up the holes in their knowledge instead of putting them into situations where they are basically forced to fail, or situations where they are stuck repeating the same stuff that they already know most of the time.
On March 13 2018 20:37 farvacola wrote: Your conclusions conflict with your reasoning, Simberto. If a student's educational timeline is more important and figurative than his or her position in the education system, then holding students back or otherwise stopping them from taking part in the routine step-based grade system shouldn't be as bad as you're suggesting. Further, you're making value judgments in terms of the adequacy of a student's knowledge as it matches up with the mechanics of failure and/or being held back a grade. For example, "repeating the same stuff" deserves a lot of qualification; are the students literally repeating the same material, or is the teacher tailoring some of the repeat material? Further, say that the student did very poorly the prior year because their home life fell apart and they literally paid almost no attention at school that whole time; does "repeating the same stuff" still ring as negatively then? I'd say no. More generally, I think the point Sadist and others were getting at is that the US absolutely has a problem with the concept of failure as it relates to worth and place in society, and though holding students back a grade seems like strong medicine, allowing students to graduate from high school while barely being able to read, write, or do math is weak medicine by the same margin. Further, this stigma associated with poor academic performance spills over into our problem with vocations and trade skills, so in the sense that holding a student back makes less sense than sending them somewhere where their talents are better put to use, then I suppose I agree with your criticism.
On March 13 2018 20:53 Simberto wrote: But as the Hattie meta-study shows, if you hold students back for a year, they learn less during the year they just repeated. That was my whole point. That seems like a bad system. And I agree that the standard stepladder is probably not the best system for enhancing students' abilities. And I also agree that even within the stepladder system, you require a way to deal with students who fall further and further behind due to a lack of the knowledge required to learn the current material spiral. Holding students back a year is just really bad at doing that and basically wastes a whole year of the student's time for small gains. Off the top of my head, all of the following sound instinctively better (though one would obviously have to test whether they actually work or not): additional schooling during the summer break, additional basic classes during the following year, differentiating the classes during the following year based on student ability. The problem is that all of the above require effort and money (for additional teacher hours). Having the student repeat a year isn't as directly expensive (though if you actually calculate the cost of a complete additional year's worth of teacher hours, that isn't negligible either; it just seems like it is free because you can just sit the student in a class that is already there) and is organisationally easy.
I think the idea that you only measure their learning the following year (if I understand the study) is pointless.
The real measure would be how well they perform over the rest of their academic career, not just one year.
I'm not sure how you measure that exactly, but the point of repeating a grade isn't just to fix a one-year problem. It's to try to jar into the person that they need to fix a lifelong problem.
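As an aside, the 70%/90%/50% arithmetic from the quoted exchange can be made explicit. A toy model in Python, using only the illustrative percentages from above (not empirical data):

# Toy model of the one-year retention comparison from the quoted exchange.
# The percentages are CatharsisUT's illustrative numbers, not empirical data.

first_pass = 0.70      # fraction of grade-1 material learned on the first pass
after_repeat = 0.90    # fraction known after repeating grade 1
promoted_pace = 0.50   # fraction of grade-2 material a struggling student might learn

# New material gained during that single year, in "years of material"
gain_if_retained = after_repeat - first_pass  # 0.90 - 0.70 = 0.20
gain_if_promoted = promoted_pace              # 0.50

print(f"gain if retained: {gain_if_retained:.2f} years of material")
print(f"gain if promoted: {gain_if_promoted:.2f} years of material")

# The comparison that arguably matters, as noted above, is performance in
# grade 2 and beyond with vs. without the repeat year; this one-year
# snapshot deliberately does not capture that.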
|