
Canada's most populous province aspires to education leadership and exerts influence across provincial school systems from coast to coast. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September 2017, in the wake of a well-publicized decline in provincial Math test scores, and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry's much-touted $60-million "renewed math strategy" completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests, conducted by the Education Quality and Accountability Office (EQAO), only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point from the previous year.

The Ontario government's response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was designed not only to change the channel but also to initiate a "student assessment review" targeting the messenger, the EQAO, and chipping away at the hard-won credibility it has built up over the past twenty years. While the announcement conveyed the impression of "open and authentic" consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope. Reading the paper and digesting the EQAO survey questions makes it obvious that the provincial tests are themselves now on trial, being assessed on criteria well outside their current mandate.

Ontario's provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million a year, or $17 per student; the credibility of the test results remains widely accepted, and the testing model continues to be free of interference or manipulation. It's working the way it was intended: to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but it is still considered the 'gold standard' right across Canada. It has succeeded in providing much greater transparency but, like other such testing regimes, has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results. The Grade 10 Literacy Test remains an embarrassment. A May 2010 EQAO report, for example, revealed that hundreds of students who failed the 2006 test were simply moved along through the system without meeting that graduation standard. Consistently, about 19 to 24 per cent of all students, and 56 per cent of students in Applied programs, fall short of acceptable literacy, yet graduation rates have risen from 68 per cent to 86 per cent province-wide.

The Ontario Ministry is now 'monkeying around' with the EQAO and seems inclined either to neuter the agency, weakening the transparency of student performance reporting, or to broaden its mandate to include assessing students for "social and emotional learning" (SEL), formerly termed "non-cognitive learning." The "Independent Review of Assessment and Reporting" is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders Michael Fullan, Andy Hargreaves, and Carol Campbell. It's essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the International Education Panel of Advisors in Scotland attempting to rescue the Scottish National Party's faltering "Excellence for All" education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages. Most of the questions focus on EQAO testing and accountability and approach the tests through a "student well-being" and "diversity" lens. An "evidence-informed" review of the current model of assessment and reporting is promised, but it's nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars (achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence) are repeated as secular articles of faith.

Where's the research to support the proposed direction? The Discussion Paper does provide capsule summaries of two assessment approaches, termed "large-scale assessments" and "classroom assessments," but offers critical analysis of only the first of the two. There's no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged "failure" of Assessment for Learning (AfL). Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate "student well-being" into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment. It's essentially based upon rather skewed research conducted in support of "broadening student assessments" rather than the latest research on what works in carrying out student assessments in the schools. Critical issues such as the "numeracy gap," now being seriously debated by leading education researchers and student assessment experts, are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma. Nowhere does the Ontario document recognize Dylan Wiliam's recent pronouncement that his own creation, Assessment for Learning, has floundered because of "flawed implementation" and unwise attempts to incorporate AfL into summative assessments. Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou. In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures but also fairer to students from underprivileged families. She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a different direction from the one in which the Ontario Ministry of Education appears to be heading. Christodoulou's Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam. Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on "generic skills" assessment, 'rubric-mania,' impenetrable verbal descriptors, and mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers. Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that doing so is premature and likely to fail is simply foolhardy. No education jurisdiction priding itself on being "A Learning Province" would plow ahead when the lights turn to amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance. It does not really pass the sound-research 'sniff test.' It looks very much like another Ontario provincial initiative offering a polished but rather thinly veiled rationale for the transition away from "large-scale assessment" to "classroom assessment" and for grafting unproven SEL competencies onto the EQAO, at the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative? Where's the research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all? Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle?


Starting next year, students from Kindergarten to Grade 12 in Canada's largest province, Ontario, will be bringing home report cards that showcase six "transferable skills": critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It's the latest example of the growing influence of education policy organizations, consultants, and researchers promoting "broader measures of success," formerly known as "non-cognitive" domains of learning.


In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario's most influential education lobby group, People for Education, and professionally packaged in its well-funded Measuring What Matters assessment reform initiative. In this respect, it is remarkably similar in focus to the Boston-based organization Transforming Education. Never a supporter of Ontario's highly regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, "moves beyond the 3R's."

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five "dimensions of learning" were proposed, in advance of any research being undertaken to confirm their validity and without acknowledging that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities of 'grit'/conscientiousness, character, and "growth mindset." Those dimensions, among them physical and mental health, social-emotional development, creativity and innovation, and school climate, reflected the socially progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a 'new catechism.' The Research Director, Dr. David Hagen Cameron, a PhD in Education from the University of London hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed "MWM change theory." His mandate was crystal clear: to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were "field-tested" in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational "interventionist machine," Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to "find congruence" between MWM priorities and Ministry mandates and how to tackle the tricky business of winning the concurrence of teachers, particularly by overcoming their instinctive resistance to district "education consultants" who arrive promising support but end up extending more "institutional control over teachers in their classrooms."

Stumbling blocks emerged when the MWM theory met the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into "a set of student competencies" and ensuring "supportive conditions" posed immediate difficulties. The MWM reform promoters came foursquare up against the problem of achieving "system coherence" with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to effectively answer questions voicing concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public's desire for simple, easy-to-understand reports.

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book Grit, the Duckworth and Yeager research report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social, and physical "well-being," they cautioned that not everything that counts can be counted, and not everything that can be counted counts. The two prominent SEL researchers warned that it was premature to proceed with such school system accountability schemes. "Our working title," Duckworth later revealed, "was all measures suck, and they all suck in their own way."

The Duckworth-Yeager report provided the most in-depth analysis to date of the challenges and pitfalls involved in advancing a project like Ontario's Measuring What Matters. Assessing for cognitive knowledge was long established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of student self-report and teacher questionnaires and of performance tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).


Duckworth went public with her concerns in February 2016. She revealed to The New York Times that she had resigned from a California advisory board fronting an SEL initiative spearheaded by the California Office to Reform Education (CORE) and that she no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth's findings credible, stating: "There are so many ways to do this wrong." The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz's recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic because, so far, it's proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck's recently expressed concerns about using her "Growth Mindset" research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five "dimensions of learning" have now morphed into five "domains and competencies," making no reference whatsoever to the place of the cognitive domain in the overall scheme. It's a classic example of three phenomena that bedevil contemporary education policy-making: tautology, confirmation bias, and the sunk-cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it's too late to pause or turn back may not be the best guarantors of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base: Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character, and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers? Since best-practice research supports a rather costly "multivariate, multi-instrumental approach," is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives?
