
The latest student achievement results, featured in the April 30, 2018 Pan-Canadian Assessment Program (PCAP) 2016 report, prove, once again, how system-critical testing is for K-12 education. Students in every Canadian province except Ontario saw gains in Grade 8 scores from 2010 to 2016, and we are now much the wiser. That educational reality check simply confirms that it’s no time to be jettisoning Ontario’s Grade 3 provincial tests and chipping away at the reputation of the province’s independent testing agency, the Education Quality and Accountability Office (EQAO).

The plan to end Grade 3 provincial testing arrived with the final report of Ontario: A Learning Province, produced by OISE professor Carol Campbell and her team of six supposedly independent advisors, including well-known change theorists Michael Fullan, Andy Hargreaves and Jean Clinton. Targeting of the EQAO was telegraphed in an earlier discussion paper, but the consultation phase focused ostensibly more on “broadening measures of student success” beyond achievement and into the largely uncharted realm of “social and emotional learning” (SEL).

The final report stunned many close observers in Ontario who expected much more from the review, in particular an SEL framework for assessment and a new set of “student well-being” reports for the 2018-19 school year. Tampering with Grade 3 testing made former Ontario Deputy Minister Charles Pascal uncomfortable because it interfered with diagnosis for early interventions.

It also attracted a stiff rebuke from the world’s leading authority on formative assessment, British assessment specialist Dylan Wiliam. He was not impressed at all with the Campbell review committee report. While it was billed as a student assessment review, Wiliam noted that none of the committee members is known for expertise in assessment, testing or evaluation.

Education insiders were betting that the Kathleen Wynne Liberal-friendly review team would simply unveil the plan for “broader student success” developed by Annie Kidder and her People for Education lobby group since 2012 and known as the “Measuring What Matters” project. It is now clear that something happened to disrupt the delivery of that carefully nurtured policy baby. Perhaps the impending Ontario provincial election was a factor.

Social and emotional learning is now at the very core of Ontario’s Achieving Excellence and Equity agenda and it fully embraces “supporting all students” and enabling them to achieve “a positive sense of well-being – the sense of self, identity, and belonging in the world that will help them to learn, grow and thrive.”

The Ontario model, hatched by the Education Ministry in collaboration with People for Education, is based upon a psycho-social theory that “well-being” has “four interconnected elements” critical to student development, with self/spirit at the centre. The whole formulation reflects the biases of its architects, since grit, growth mindset, respect and responsibility are nowhere to be found in the preferred set of social values inculcated in the system. Whatever the rationale, proceeding to integrate SEL into student reports and province-wide assessments is premature when recognized American experts Angela Duckworth and David Scott Yeager warn that the “generic skills” are ill-defined and possibly unmeasurable.

Evidence-informed researchers such as Daisy Christodoulou, author of Making Good Progress (2017), do not support the proposed change in Ontario’s student assessment focus. Generic or transferable-skills approaches like the one Ontario is considering generate generic feedback of limited value to students in the classroom. Relying too heavily on teacher assessments is also unwise because, as Christodoulou reminds us, disadvantaged students tend to fare better on larger-scale, objective tests. The proposed prose descriptors will, in all likelihood, be jargon-ridden, unintelligible to students and parents, and particularly inaccessible to students struggling in school.

One of the reasons Ontario has been recognized as a leading education system is its success over the past 20 years in establishing an independent EQAO with a professionally sound provincial testing program in Grades 3, 6, and 9, plus a Grade 10 literacy test that needs improvement. Legitimate teacher concerns about changes that increase marking loads do need to be addressed in any new student assessment plan, and so do objections over the fuzzy, labour-intensive SEL student reports.

The proposal to phase out Ontario provincial testing may already be dead in the water. If it is, the April 30, 2018 editorial in The Toronto Star was surely a contributing factor. If the Wynne Liberals go down to defeat in the June 2018 election, the whole plan will likely be shelved or completely revamped by a new government.

Whether you support the EQAO or not, the agency has succeeded in establishing reliable quality standards for student performance in literacy and mathematics. Abandoning Grade 3 testing and gutting the EQAO is not only ill-conceived but ill-advised. Without the PCAP and provincial achievement benchmarks, we would be flying blind into the future.

What can possibly be gained from eliminating system-wide Grade 3 provincial assessments? How does that square with research suggesting early assessments are critical in addressing reading and numeracy difficulties? Without Ontario, would it be possible to conduct comprehensive Grade 3 benchmarking across Canada? If staff workload is the problem, aren’t there other ways to address that matter? And whatever happened to the proposed Social and Emotional Learning (SEL) assessments and reports?


Starting next year, students from Kindergarten to Grade 12 in Canada’s largest province, Ontario, will be bringing home report cards that showcase six “transferable skills”: critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It’s the latest example of the growing influence of education policy organizations, consultants and researchers promoting “broader measures of success” formerly known as “non-cognitive” domains of learning.


In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario’s most influential education lobby group, People for Education, and professionally packaged in its well-funded Measuring What Matters assessment reform initiative. In this respect, it is remarkably similar in focus to the Boston-based organization Transforming Education. Never a supporter of Ontario’s highly regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, “moves beyond the 3R’s.”

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five “dimensions of learning” were proposed, in advance of any research being undertaken to confirm their validity, and without any acknowledgement that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities of “grit”/conscientiousness, character, and “growth mindset.” Those dimensions, among them physical and mental health, social-emotional development, creativity and innovation, and school climate, reflected the socially progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a ‘new catechism.’ The Research Director, Dr. David Hagen Cameron, who holds a PhD in Education from the University of London and was hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed “MWM change theory.” His mandate was crystal clear: to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were “field-tested” in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational “interventionist machine,” Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to “find congruence” between MWM priorities and Ministry mandates and how to tackle the tricky business of winning the concurrence of teachers, particularly in overcoming their instinctive resistance to district “education consultants” who arrive promising support but end up extending more “institutional control over teachers in their classrooms.”

Stumbling blocks emerged when the MWM theory met the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into “a set of student competencies” and ensuring “supportive conditions” posed immediate difficulties. The MWM reform promoters ran squarely up against the problem of achieving “system coherence” with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to effectively answer concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public’s desire for simple, easy-to-understand reports.

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book Grit, the Duckworth and Yeager report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social and physical “well-being,” they cautioned that not everything that counts can be counted, nor does everything that can be counted count. The two prominent SEL researchers warned that it was premature to proceed with such school-system accountability schemes. “Our working title,” Duckworth later revealed, “was all measures suck, and they all suck in their own way.”

The Duckworth-Yeager report provided the most in-depth analysis to date of the challenges and pitfalls involved in advancing a project like Ontario’s Measuring What Matters. Assessing for cognitive knowledge was long established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of student self-reports, teacher questionnaires and performance tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).


Duckworth went public with her concerns in February 2016. She revealed to The New York Times that she had resigned from a California advisory board fronting an SEL initiative spearheaded by the California Office to Reform Education (CORE), and that she no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth’s findings credible, stating: “There are so many ways to do this wrong.” The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz’s recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic: so far, it has proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck’s recently expressed concerns about using her “Growth Mindset” research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five “dimensions of learning” have now morphed into five “domains and competencies” making no reference whatsoever to the place of the cognitive domain in the overall scheme. It’s a classic example of three phenomena that bedevil contemporary education policy-making: tautology, confirmation bias and the sunk-cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it’s too late to pause or turn back may not be the best guarantors of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base: Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers? Since the best-practice research supports a rather costly “multivariate, multi-instrumental approach,” is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives?
