
Archive for the ‘Student Assessment’ Category

Canada’s most populous province aspires to education leadership and tends to exert influence right across our coast-to-coast provincial school systems. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September 2017, in the wake of a well-publicized decline in provincial Math test scores, and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry’s much-touted $60-million “renewed math strategy” completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests, conducted by the Education Quality and Accountability Office (EQAO), only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point from the previous year.

The Ontario government’s response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was not only designed to change the channel, but to initiate a “student assessment review” targeting the messenger, the EQAO, and attempting to chip away at its hard-won credibility, built up over the past twenty years. While the announcement conveyed the impression of “open and authentic” consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope.  Reading the paper and digesting the EQAO survey questions, it becomes obvious that the provincial tests are now on trial themselves, and being assessed on criteria well outside their current mandate.

Ontario’s provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million, or $17 per student; the credibility of the test results remains widely accepted, and the testing model continues to be free of interference or manipulation. It’s working the way it was intended: to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but it is still considered the ‘gold standard’ right across Canada. It has succeeded in providing much greater transparency, but, like other such testing regimes, has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results. The Grade 10 Literacy Test remains an embarrassment. The May 2010 EQAO report, for example, revealed that hundreds of students who failed the 2006 test were simply moved along through the system without passing that graduation standard. Consistently, about 19 to 24 per cent of all students, and 56 per cent of Applied-level students, fall short of acceptable literacy, yet graduation rates have risen from 68 per cent to 86 per cent province-wide.

The Ontario Ministry is now ‘monkeying around’ with the EQAO and seems inclined toward either neutering the agency to weaken student performance transparency or broadening its mandate to include assessing students for “social and emotional learning” (SEL), formerly termed “non-cognitive learning.” The “Independent Review of Assessment and Reporting” is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders Michael Fullan, Andy Hargreaves, and Carol Campbell. It’s essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the International Education Panel of Advisors in Scotland attempting to rescue the Scottish National Party’s faltering “Excellence for All” education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages. Most of the questions focus on EQAO testing and accountability and approach the tests through a “student well-being” and “diversity” lens. An “evidence-informed” review of the current model of assessment and reporting is promised, but it’s nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars (achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence) are repeated as secular articles of faith.

Where’s the research to support the proposed direction? The Discussion Paper does provide capsule summaries of two assessment approaches, termed “large-scale assessments” and “classroom assessments,” but critical analysis of only the first of the two. There’s no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged “failure” of Assessment for Learning (AfL). Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate “student well-being” into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment. It’s essentially based upon rather skewed research conducted in support of “broadening student assessments” rather than the latest research on what works in carrying out student assessments in the schools. Critical issues, such as the “numeracy gap” now being seriously debated by leading education researchers and student assessment experts, are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma. Nowhere does the Ontario document recognize Dylan Wiliam’s recent pronouncement that his own creation, Assessment for Learning, has floundered because of “flawed implementation” and unwise attempts to incorporate AfL into summative assessments. Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou. In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures, but fairer for students from underprivileged families. She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a direction that’s different from where the Ontario Ministry of Education appears to be heading. Christodoulou’s Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam.  Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on “generic skills” assessment, ‘rubric-mania,’ impenetrable verbal descriptors, and the mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers.  Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that it’s premature and likely to fail is simply foolhardy.  No education jurisdiction priding itself on being “A Learning Province” would plow ahead when the lights turn to amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance.  It does not really pass the sound research ‘sniff test.’  It looks very much like another Ontario provincial initiative offering a polished, but rather thinly veiled, rationale for supporting the transition away from “large-scale assessment” to “classroom assessment” and grafting unproven SEL competencies onto EQAO, running the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative?  Where’s the sound research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all?  Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle? 



Starting next year, students from Kindergarten to Grade 12 in Canada’s largest province, Ontario, will be bringing home report cards that showcase six “transferable skills”: critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It’s the latest example of the growing influence of education policy organizations, consultants and researchers promoting “broader measures of success” formerly known as “non-cognitive” domains of learning.


In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario’s most influential education lobby group, People for Education, and professionally packaged in its well-funded “Measuring What Matters” assessment reform initiative. In this respect, it’s remarkably similar in its focus to the Boston-based organization Transforming Education. Never a supporter of Ontario’s highly-regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, “moves beyond the 3R’s.”

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five “dimensions of learning” were proposed, in advance of any research being undertaken to confirm their validity or any recognition that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities: “grit”/conscientiousness, character, and “growth mindset.” Those five dimensions (physical and mental health, social-emotional development, creativity and innovation, and school climate) reflected the socially-progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a ‘new catechism.’  The Research Director, Dr. David Hagen Cameron, a PhD in Education from the University of London, hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed “MWM change theory.” His mandate was crystal clear – to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were “field-tested” in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational “interventionist machine,” Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to “find congruence” between MWM priorities and Ministry mandates and how to tackle the tricky business of winning the concurrence of teachers, particularly in overcoming their instinctive resistance to district “education consultants” who arrive promising support but end up extending more “institutional control over teachers in their classrooms.”

Stumbling blocks emerged when the MWM theory met up with the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into “a set of student competencies” and ensuring “supportive conditions” posed immediate difficulties. The MWM reform promoters came four square up against achieving “system coherence” with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to effectively answer questions voicing concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public’s desire for simple, easy to understand reports. 

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book, Grit, the Duckworth and Yeager research report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social and physical “well-being,” they cautioned that not everything that counts can be counted, nor does everything that can be counted count. The two prominent SEL researchers warned that it was premature to proceed with such school system accountability systems. “Our working title,” Duckworth later revealed, “was all measures suck, and they all suck in their own way.”

The Duckworth-Yeager report provided the most in-depth analysis (to date) of the challenges and pitfalls involved in advancing a project like Ontario’s Measuring What Matters. Assessing for cognitive knowledge was long-established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of Student Self-Report and Teacher Questionnaires and Performance Tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).


Duckworth went public with her concerns in February of 2016.  She revealed to The New York Times that she had resigned from a California advisory board fronting a SEL initiative spearheaded by the California Office to Reform Education (CORE), and no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth’s findings credible, stating: “There are so many ways to do this wrong.” The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz’s recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic: so far, it has proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck’s recently expressed concerns about using her “Growth Mindset” research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five “dimensions of learning” have now morphed into five “domains and competencies” making no reference whatsoever to the place of the cognitive domain in the overall scheme. It’s a classic example of three phenomena that bedevil contemporary education policy-making: tautology, confirmation bias, and the sunk cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it’s too late to take a pause, or turn back, may not be the best guarantor of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base: Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers?  Since the best practice research supports a rather costly “multivariate, multi-instrumental approach,” is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives? 


With the release of the 2015 Program for International Student Assessment (PISA) on the horizon, the Organization for Economic Cooperation and Development (OECD) Education Office has stoked up the “Math Wars” with a new study. While the October 2016 report examines a number of key questions related to teaching Mathematics, OECD Education chose to highlight its findings on “memorization,” presumably to dispel perceptions about “classroom drill” and its use in various countries.

The OECD, which administers the PISA assessments every three years to 15-year-olds from around the globe, periodically publishes reports looking at slices of the data. Its October 2016 report, Ten Questions for Mathematics Teachers and How PISA Can Help Answer Them, based upon the most recent 2012 results, tends to zero in on “memorization” and attempts to show that high-performing territories, like Shanghai-China, Korea, and Chinese-Taipei, rely less on memory work than lower-performing places like Ireland, the UK, and Australia.

American Mathematics educator Jo Boaler, renowned for “Creative Math,” jumped upon the PISA Study to buttress her case against “memorization” in elementary classrooms. In a highly contentious November 2016 Scientific American article, Boaler and co-author Pablo Zoido contended that PISA findings confirmed that “memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.” Students who relied on memorization, they further argued, were “approximately half a year behind students who used relational and self-monitoring strategies” such as those in Japan and France.

Australian education researcher Greg Ashman took a closer look at the PISA Study and called into question such hasty interpretations of the findings. Figure 1.2, “How teachers teach and students learn,” caught his eye, and he went to work interrogating the survey responses on “memorization” and the axes used to present the data. The PISA analysis, he discovered, also did not include an assessment of how teaching methods might be correlated with PISA scores in Mathematics. Manitoba Mathematics professor Robert Craigen spotted a giant hole in the PISA analysis, noting that the “memorization” data related to the “at-home strategies of students,” not their instructional experiences, and may well indicate that students who are improperly instructed in class resort to memorization on their own.

What would it look like, Ashman wondered, if the PISA report had plotted how students performed in relation to the preferred methods used on the continuum from “more student-oriented instruction” to “more teacher-directed instruction”? Breaking down all the data, he generated a new graph that actually showed how teaching method correlated with math performance and found a “positive correlation” between teacher-directed instruction and higher Math scores. Correlations, he duly noted, do not necessarily imply causal relationships, but the data clearly favoured a higher ratio of teacher-directed activity to student orientation.

Jumping on the latest research to seek justification for her own “meta-beliefs” is normal practice for Boaler and her “Discovery Math” education disciples. After junking, once again, the ‘strawmen’ of traditional Mathematics — “rote memorization” and “drill” — Boaler and Zoido wax philosophical and poetic: “If American classrooms begin to present the subject as one of open, visual, creative inquiry, accompanied by growth-mindset messages, more students will engage with math’s real beauty. PISA scores would rise, and, more important, our society could better tap the unlimited mathematical potential of our children.” That’s definitely stretching the evidence far beyond the breaking point.

The “Math Wars” do generate what University of Virginia psychologist Daniel T. Willingham has aptly described as “a fair amount of caricature.” The recent Boaler-Zoido Scientific American article is a prime example of that tendency. Most serious scholars of cognition tend to support the common ground position that learning mathematics requires three distinct types of knowledge: factual, procedural and conceptual. “Factual knowledge,” Willingham points out, “includes having already in memory the answers to a small set of problems of addition, subtraction, multiplication, and division.” While some students can learn Mathematics through invented strategies, it cannot be relied upon for all children. On the other hand, knowledge of procedures is no guarantee of conceptual understanding, particularly when it comes to complexities such as dividing fractions. It’s clear to most sensible observers that knowing math facts, procedures and concepts is what counts when it comes to mastering mathematics.

Simply ignoring research that contradicts your ‘meta-beliefs’ is common on the Math Education battlefield. Recent academic research on “memorization” that contradicts Boaler and her entourage is simply ignored, even that emanating from her own university. Two years ago, Shaozheng Qin and Vinod Menon of Stanford University Medical School led a team that provided scientifically-validated evidence that “rote memorization” plays a critical role in building capacity to solve complex calculations.

Based upon a clinical study of 68 children, aged 7 to 9, followed over the course of one year, their 2014 Nature Neuroscience study found that memorizing the answers to simple math problems, such as basic addition or multiplication, forms a key step in a child’s cognitive development, helping bridge the gap between counting on fingers and tackling more complex calculations. Memorizing the basics, they concluded, is the gateway to activating the hippocampus, a key brain structure for memory, which gradually expands in “overlapping waves” to accommodate the greater demands of more complex math.

The whole debate over memorization is suspect because of the imprecision in the use of the term. Practice, drilling, and memorization are not the same, even though they get conflated in Jo Boaler’s work and in much of the current Mathematics Education literature. Back in July 2012, D.T. Willingham made this crucial point and provided some valuable points of distinction. “Practice,” as defined by Anders Ericsson, involves performing tasks and receiving feedback on that performance, executed for the purpose of improvement. “Drilling” connotes repetition for the purpose of achieving automaticity, which, at its worst, amounts to mindless repetition or parroting. “Memorization,” on the other hand, relates to the goal of something ending up in long-term memory with ready access, but does not imply using any particular method to achieve that goal.

Memorization has become a dirty word in teaching and learning, laden with so much baggage that it conjures up mental pictures of “drill and kill” in the classroom. The 2016 PISA Study appears to perpetuate such stereotyping and, worst of all, completely misses the “positive correlation” between teacher-directed or explicit instruction and better performance in mathematics.

Why does the PISA Study tend to associate memorization in home-study settings with the drudgery of drill in the classroom?  To what extent does the PISA Study on Mathematics Teaching support the claims made by Jo Boaler and her ‘Discovery Math’ advocates? When it comes to assessing the most effective teaching methods, why did the PISA researchers essentially take a pass? 


Educational talk about “grit” (being passionate about long-term goals, and showing the determination to see them through) seems to be everywhere in and around schools. Everywhere, that is, except in the rather insular Canadian educational world. Teaching and measuring social-emotional skills are on the emerging policy agenda, but “grit” is (so far) not among them.

Grit is trendy in American K-12 education, and school systems are scrambling to get on board. A 2007 academic article, researched and written by Angela Duckworth, made a compelling case that grit plays a critical role in success. Author Paul Tough introduced grit to a broad audience in his 2013 book How Children Succeed: Grit, Curiosity, and the Hidden Power of Character, which went on to spend a year on the New York Times bestseller list. In the same year, Duckworth herself gave a TED talk, which has been viewed more than 8 million times online.

Since then, grit initiatives have flourished in United States school systems. Some schools are seeking to teach grit, and some districts are attempting to measure children’s grit, with the outcome contributing to assessments of school effectiveness. Angela Duckworth’s new book, Grit: The Power of Passion and Perseverance, is one of the hottest North American non-fiction titles this publishing season.  In spite of the flurry of public interest, it has yet to register in the Canadian educational domain.

Over the past three years, the Ontario-based People for Education (P4ED) advocacy organization has been pursuing the goal of broadening the existing measures of student success to embrace “social-emotional skills” or competencies. With a clear commitment to “move beyond the ‘3R’s” and redefine the established testing/accountability framework, P4ED founder Annie Kidder and the well-funded Toronto-centred research team have been creating a “broad set of foundational skills” and developing a method of “measuring schools’ progress toward those goals.”

The Ontario P4ED initiative, billed as “Measuring What Matters” (MWM), proposes a draft set of “Competencies and Skills” identified as Creativity, Citizenship, Social-Emotional Learning, and Health — all to be embedded in what is termed “quality learning environments” both in schools and the community. The proposed Ontario model makes no reference whatsoever to cognitive learning and subject knowledge or to the social-emotional aspects of grit, perseverance or work ethic.

The P4ED project has a life of its own, driven by a team of Canadian education researchers with their own well-known hobby horses. Co-Chair of the MWM initiative, former BC Deputy Minister of Education Charles Ungerleider, has assembled a group of academics with impeccable “progressive education” (anti-testing) credentials, including OISE teacher workload researcher Nina Bascia and York University self-regulation expert Stuart Shanker.

A 2015 MWM project progress report claimed that the initiative was moving from theory to practice with “field trials” in Ontario public schools. It simply reaffirmed the proposed social-emotional domains and made no mention of Duckworth’s research or her “Grit Scale” for assessing student performance on that benchmark. While Duckworth is cited in the report, it is for a point unrelated to her key research findings. The paper also assumes that Ontario is a “medium stakes” testing environment in need of softer, non-cognitive measures of student progress, an implicit criticism of the highly regarded Education Quality and Accountability Office (EQAO) system of provincial achievement testing.

Whether “grit” or any other social-emotional skills can be taught — or reliably measured — is very much in question. Leading American cognitive learning researcher Daniel T. Willingham’s latest American Educator essay (Summer 2016) addresses the whole matter squarely and punches holes in the argument that “grit” can be easily taught, let alone assessed in schools. Although Willingham is a well-known critic of “pseudoscience” in education, he does favour utilizing “personality characteristics” for the purpose of “cultivating” in students such attributes as conscientiousness, self-control, kindness, honesty, optimism, courage and empathy, among others.

The movement to assess students for social-emotional skills has also raised alarms, even among the biggest proponents of teaching them. American education researchers, including Angela Duckworth, are wary that the terms used are unclear and that the first battery of tests is faulty as an assessment measure. She recently resigned from the advisory board of a California project, claiming the proposed social-emotional tests were not suitable for measuring school performance. “I don’t think we should be doing this; it is a bad idea,” she told The New York Times.

Why are leading Canadian educators so committed to developing “social-emotional” measures as alternatives to current student achievement assessment programs? Should social-emotional competencies such as “joy for learning” or “grit” be taught more explicitly in schools? How reliable are measures of such “social-emotional skills” as creativity, citizenship, empathy, and self-regulation?

Read Full Post »

Alberta’s most unlikely hero, Physics teacher Lynden Dorval, has finally been vindicated. Two years after he was fired in September 2012 by the Edmonton Public Schools for giving his high school students zeros for incomplete work, an Alberta appeal tribunal ruled on August 29, 2014 that he was “unfairly dismissed” and restored his lost salary and pension. There is justice, it seems, in the education world. The bigger question is: how did it happen, and will it encourage more teachers to stand up against eroding educational standards?

The Physics teacher at Ross Sheppard High School was a 33-year veteran with an “unblemished” teaching record. He stood his ground when a new Principal arrived and intervened to end a common practice that taught students a valuable life lesson: failing to hand in an assignment or missing a test without a valid reason would result in a mark of zero. In Dorval’s case, he even gave students fair warning and a second chance before taking that step. It worked because Dorval, according to the tribunal, had “the best record in the school and perhaps the province for completion rates.”

The “no zeros” issue came to a head when the school’s computer-generated reports were programmed to substitute “blanks” for zeros, eliminating the practice. Dorval considered banning zeros “a stupid idea” and said he “simply couldn’t follow it.” Two other teachers did the same but escaped any repercussions.

The Alberta tribunal’s decision supported Dorval because he had raised very legitimate questions about whether the policy was good for students.  In the wording of the decision, “the school board did not act reasonably in suspending the teacher. The implementation of the new assessment policy has several demonstrable problems.” Specifically, since there was “no accountability or penalty for missing assignments in the new policy, there was little incentive for a student to actually complete the assignment.”

The written ruling was particularly harsh in its criticism of the principal and former superintendent Edgar Schmidt.  It agreed that Dorval was made an example for challenging the principal’s authority and found that the policy was imposed without proper consultation with teachers, students, or parents. Even more telling, the tribunal was very critical of the Edmonton board for denying Dorval due process during its September 2012 dismissal hearing.

The sheer idiocy of the Edmonton Public Schools student assessment policy was clear to most outside the system. Faced with a groundswell of resistance, the Edmonton board of elected trustees itself backtracked, approving a revised student assessment policy (protecting the Lynden Dorvals) and explicitly allowing zero as a possible mark.

School system Student Evaluation policy remains a total mystery to most parents and even to tuned-in high school students. Over the past two decades, provincial testing programs and school-based student evaluation have been moving in opposite directions.

Provincial tests such as the Ontario EQAO assessments hold students accountable for measuring up to criteria-referenced standards, while school board consultants promote the new “Assessment for Learning” (AfL) theories, pushing up graduation rates through a combination of “no fail” and “do-over” student evaluation practices. Defenders of such ‘soft, pass everyone’ practices, like AfL consultant Damian Cooper, tend to see enforcing higher standards as a dire threat to student self-esteem.

Public school authorities have a way of silencing independent-minded teachers, and many pay a professional price for openly expressing dissenting views. A small number of those educators stumble upon Canadian independent schools, which generally thrive on giving teachers the freedom to challenge students and to actually teach. Thousands of public school teachers just accept the limits on freedom of expression, soldier on, and mutter under their breath, “I’m a teacher, so I’m not allowed to have an opinion.”

Why did Lynden Dorval become an Alberta teacher hero?  It comes down to this: He said “No” to further erosion of teacher autonomy and standards.


Read Full Post »

A Calgary Catholic District school, St. Basil Elementary and Junior High, made headlines in late October when principal Craig Kittelson sent a letter to Grade 7 to 9 parents announcing the elimination of the academic honour roll and end-of-year awards ceremonies.  The controversial Letter to Parents cited the work of American popular writer Alfie Kohn, including the contention that “dangling rewards in front of children are at best ineffective, and at worst counterproductive.”  A Postmedia news story by Trevor Howell in the Calgary Herald and the National Post gave extensive coverage to the eruption of “parent outrage” over both the decision and the way it was summarily announced to the community.

Axing the Academic Honour Roll reignited a public debate over the common practice of giving awards as an incentive to encourage academic achievement. The Calgary Catholic District School Board was caught flat-footed by the outrage. Scrambling for a plausible explanation, the National Post turned to Alfie Kohn’s leading Canadian disciple, Red Deer elementary teacher Joe Bower, who operates the blog for the love of learning. While news reports referenced Joe Bower’s 2007 move to end awards ceremonies at Red Deer’s Westpark Middle School, they made no mention of his related initiatives abandoning homework and refusing to give grades. Nor did the media report that he did so after experiencing an epiphany while reading Kohn’s article “The Costs of Overemphasizing Achievement.”

After “discovering” Kohn, Bower has been on a mission. He’s become a serial @AlfieKohn retweeter, while bouncing from school to school and ending up teaching special needs kids in ungraded classes at the Red Deer Regional Hospital. In September 2013, Bower published a co-edited collection of so-called “progressive education” articles entitled de-testing and de-grading schools, complete with a glowing foreword by none other than his mentor, Alfie Kohn. Almost simultaneously, the Canadian Education Association published a feature article by Bower in Education Canada (Fall 2013), “Telling Time with a Broken Clock,” on the trouble with standardized testing. Kohn’s fingerprints are all over Bower’s articles and posts, hammering away at the evils of academic rewards, homework, and student testing of any kind. It makes you wonder whether this once-repudiated, retooled agenda is actually the hidden curriculum of the CEA and its acolytes.

Whatever got into the Calgary Catholic District School Board to actually sanction the axing of academic awards? When pressed for a rationale, the CCDSB posted a rather bizarre summary of the “education research” intended to support the decision and come to the rescue of Kittelson, the beleaguered school principal. Surveying that short brief makes for fascinating reading: it leads off by quoting American radical critic John Taylor Gatto, a leading “unschooler” opposed to compulsory schooling, then cherry-picks evidence from Alfie Kohn’s favourite sources. As a validation for the policy, it’s a classic example of a selective, politically-driven education research “mash-up” — the very kind that has landed education research in such bad odour in academe.

Just when it appeared that America’s leading progressive gadfly was fading in influence, Bower and a new generation of disciples are taking up the cause. Having heard Alfie Kohn speak at a Quebec English Teachers Conference in Montreal in the early 2000s, I have seen, first-hand, his tremendous gifts as an orator and felt the allure of his iconoclastic ideas, until I began to consider the consequences of putting those ideas into actual practice. Born in Miami Beach, Florida, the preppy-looking, reed-thin author and lecturer, now in his late 50s, has authored a dozen books with catchy titles such as No Contest: The Case Against Competition (1986), Punished by Rewards: The Trouble with Gold Stars (1993), The Case Against Standardized Testing (2000), The Homework Myth (2006), and Feel Bad Education (2011). He has staying power, judging from the steady stream of simple Kohn axioms spewing out of Bower and his other camp followers.

Like most educational evangelicals, Kohn has undeniable appeal, especially to North American teachers, tapping into their very real feelings of alienation, powerlessness, and resistance to imposed change. He finds a ready audience because he has identified a vein of dissent and resistance running through the rank-and-file teacher forces, often manifested in opposition to top-down educational decision-making. Academic critics like Daniel Willingham, author of Why Don’t Students Like School, point out that Kohn is effective as an agent provocateur and likely “not bad for you or dangerous to your children.” He raises important questions, but, according to Willingham, “should not be read as a guide to the answers” because his writings “cannot be trusted as an accurate summary of the research literature.” In his reply to Willingham, Kohn held his ground, while conceding that some of his distillations run the risk of oversimplifying complex issues.

One of the most incisive assessments of Alfie Kohn comes from Michael J. Petrilli of The Thomas B. Fordham Institute, an American education gadfly of a different stripe. Writing in the March 2012 issue of Wisconsin Interest, Petrilli hit the mark: “Kohn’s arguments are half-crazy and half-true, which is what makes him so effective — and so maddening.” He also provides a useful corrective to Kohn’s particular educational worldview. “What fuels the modern school reform movement,” he claims, “is not acquiescence to Corporate America but outrage over the nation’s lack of social mobility.” You can be sure this will not appear on Joe Bower’s blog or in one of his next tweets.

What fuels American education gadfly Alfie Kohn’s zealous contrarianism and various progressive education crusades? How much of Kohn’s core progressivist ideology is rooted in the teachings of John Dewey and Jean Piaget — and what proportion is pure creative imagination? What has Kohn actually contributed to the education world in terms of sound policy ideas? What explains his continuing influence and undeniable capacity to attract new adherents?

Read Full Post »

The arrival of Nova Scotia Student Report Cards in June 2013 provoked quite a reaction from parents, particularly those with students in Atlantic Canada’s largest school board. “Ridiculous.” “Meaningless.” “Mumbo-jumbo.” Those were just a few of the words used by Halifax region parents to describe the computer-generated InSchool reports utilizing PowerSchool, the province’s new $6-million student information system. After an initial attempt by Deputy Education Minister Carole Olsen to deflect the stinging criticism, Education Minister Ramona Jennex, Chair of CMEC, was compelled to intervene, promising to look into the concerns.

Education officials in Nova Scotia were clearly taken aback by the reaction and acted like it was news to them that the canned reports were incomprehensible to most parents and virtually every student, certainly in the elementary grades. Education Reporter Frances Willick deserves credit for unearthing the latest parental outcry, but the concerns are not new, nor will they be fixed by a few cosmetic changes on student reports.

Some twenty years ago, educators were confronted with a wave of educational reform focused on introducing Outcome Based Education (OBE) and increasing demand for standardized testing to provide more assurance of student achievement levels. Most educational administrators and consultants reacted instinctively against such intrusions and adapted by developing some rather ingenious counter measures.  Unable to stop the advance of standardized assessments, they resorted to retaining control of student reporting and turning it to different purposes.

Student report cards have been filled with gibberish since at least the early 1990s. Much of it can be traced back to a February 1993 Ontario Ministry of Education document known as The Common Curriculum, Grades 1-9. That document introduced teachers to the term “learning outcomes” and attempted to destream Grade 9 and replace a subject-based curriculum with more holistic cross-curricular understandings. In the case of Language Learning Outcomes, reading was downgraded and the word “spelling” dropped from the elementary lexicon. After parents rose up calling for “a set curriculum with specific goals,” the so-called “dumbed-down curriculum” was shelved, but a new student evaluation system based upon “meeting provincial outcomes” survived.

Intended as a means of providing parents with regular communication reflecting measurable standards, OBE student report cards became quite the reverse. School curriculum was re-written around “learning outcomes” and professional development was geared to teaching teachers the new “educratic” language. Communicating with parents gradually morphed from providing personal, often candid comments about students, to tiny snippets reflecting the “expected outcomes.” When Student Reports became standardized and machine-generated, the “student outcomes” jargon became entrenched and accepted, rather sadly, as a demonstration of teaching competence.

Standardizing report cards became a vehicle for implementing the new orthodoxy. In Canada’s provincial systems, OBE was captured by educators who opposed testing and sought to undercut its influence with “assessment for learning.” Grading and ranking students, once the staple of teaching, became dirty words, and the initial standardized reports sought to replace marks with measures of formative assessment. Ontario’s first Standardized Report Card, eventually rejected, attempted to introduce a new set of incredibly vague measures such as “developing,” “developed,” and “fully developed” understandings.

Today’s standardized report cards, as bad as they are, could well have been worse. Many professional teachers, particularly in high schools, resisted the “dumbing down” of curriculum and the invasion of ‘politically-correct’ report writing. The leading teachers’ unions, the OSSTF, BCTF, and NSTU, opposed student reporting that chipped away at teacher autonomy and consumed more and more of a teacher’s time to complete. In Nova Scotia, for example, the Grade 9 to 12 reports may be littered with edu-babble comments, but they still provide percentage grades. In Grades 1 to 8, the reports still retain a watered-down letter grade system.

A closer look at the Nova Scotia PowerSchool report card reveals that the grading system has also been impacted. In Grades 1 to 8, for example, the range of grades has been narrowed to reflect the goal of equality of outcomes. The highest grade possible is now “A,” which signifies “meeting learning outcomes,” and “F” has been eliminated entirely. No one, it seems, can be outstanding or “exceed expected outcomes.”

The current flap over Nova Scotia report cards is simply the latest manifestation of a much deeper problem. Education authorities favour standardized student information systems, and Nova Scotia’s 2010 full adoption of PowerSchool was mostly driven by the need to track student attendance. The reporting module, taken from the template with few adaptations, was essentially an afterthought. It is quite clear now that PowerSchool was a Trojan Horse for the advance of standardized reports and the latest wave of “canned reporting” and “robo comments.”

When can we finally secure personalized, common-sense Student Report Cards? Is it just a matter of linguistic cosmetics or part of a much deeper problem entrenched in the current educational system? What needs to be undone before we restore sanity to student reporting?

Read Full Post »
