Archive for the ‘Standardized Testing’ Category

The ongoing COVID-19 crisis may be claiming another victim in one of Canada’s leading education provinces – sound, reliable, standards-based and replicable summative student assessment. After thwarting a 2017-18 Learning Province plan to subvert the province’s Grade 3 provincial student assessment and broaden the ‘measures of success,’ Ontario’s Doug Ford government and its education authorities appear to be falling into a similar trap.

What’s most unexpected is that the latest lubricant on the slippery slope toward ‘accountability-free’ education may well have been applied in Doug Ford’s Ontario under a government ostensibly committed to ‘back-to-basics’ and ‘measurable standards’ in the K-12 school system.


All K-12 provincial tests, administered by the Education Quality and Accountability Office (EQAO), were the first to go, rationalized as a response to the pandemic and its impact upon students, teachers, and families. More recently, Ontario’s education ministry opened the door to cancelling final exams by giving school boards the right to replace exam days with in-class instructional time.


Traditional examinations, the long-established benchmark for assessing student achievement, simply disappeared, for the second assessment cycle in a row, going back to the onset of the COVID-19 outbreak. Major metropolitan school districts, led by the Toronto District School Board, Peel District School Board and their coterminous Catholic boards, jumped in quickly to suspend exams in favour of what were loosely termed “culminating tasks” or “demonstrations of learning.”


Suspending exams was hailed in a Toronto Star news report as “a rare bright spot” for Ontario high school students. Elsewhere the decision to eliminate exams, once again, elicited barely a whimper, even from the universities. “Nobody’s missed standardized tests or final exams,” University of Ottawa professor Andy Hargreaves noted rather gleefully during the October 29-30 Canadian EdTech Summit.


Suspending examinations has hidden and longer-term consequences not only for students and teachers, but for what remains of school-system accountability. What’s most surprising, here in Canada, is that such decisions are rarely evidence-informed or predicated on the existence of viable, proven and sustainable alternatives.


Proposing to substitute culminating projects labelled as “demonstrations of learning” is based upon the fallacious assumption that teacher assessments are better than final exams. Cherry-picking a recent sympathetic research study, such as a May 2019 Journal of Child Psychology and Psychiatry article highlighting exam stress, may satisfy some, but it is no substitute for serious research into the effectiveness of previous competency-based “culminating activity” experiments.


Sound student evaluation is based upon a mix of assessment strategies, ranging from formative (daily interaction and feedback) assessment to standardized tests and examinations (summative assessment). It is highly desirable to base student assessment upon a suitable combination of reasonably objective testing instruments as well as teacher-driven subjective assessment. UK student assessment expert, Daisy Christodoulou, puts it this way: “Tests are inhuman – and that is what is good about them.”


Teacher-made and teacher-evaluated assessments appear, on the surface, gentler and fairer than exams, but such assumptions can be misleading, given the weight of research supporting “level playing field” evaluations. The reality is that teacher assessments tend to be more impressionistic, not always reliable, and can produce outcomes less fair to students.


Eliminating provincial tests and examinations puts too much emphasis on teacher assessment, a form of student evaluation with identified biases. A rather extensive 2015 student assessment literature review, conducted by Professor Rob Coe at the Durham University Centre for Evaluation and Monitoring, identifies the typical biases. Compared to standardized tests, teacher assessment tends to exhibit biases against exceptional students, specifically those with special needs, challenging behaviour, language difficulties, or personality types different from their teacher’s. Teacher-marked evaluations also tend to reinforce stereotypes, such as the notions that boys are better at math or that racialized students underperform in school.


Replacing final exams with teacher-graded ‘exhibitions’ or ‘demonstrations of learning mastery’ sounds attractive, but is fraught with potential problems, judging from their track record since their inception in the late 1980s. Dreamed up by Dr. William Spady, the North American father of Outcome-Based Education, assessments of student competencies based upon ‘demonstrations of learning’ have a checkered history. The OBE system’s time-consuming measurement of hundreds of competencies eventually finished it off with classroom teachers.


A more successful version of DOLM (Demonstration of Learning Mastery), developed by Deborah Meier, Theodore Sizer and the Coalition of Essential Schools (1988-2016), was piloted in small schools with highly-trained teachers. Such exhibitions were far from improvisational; they were “high stakes, standards aligned assessments” aimed at securing “commitment, engagement and high-level intellectual achievement” and conceived as “a fulcrum for school transformation.” Systemic distrust, aggravated by testing and accountability, Meier conceded, “rendered attempts to create such contexts infertile.”


Constructing summative evaluation models to replace final exams is not easy, and the task has defeated waves of American assessment reformers. The Kentucky Commonwealth Accountability Testing System (CATS) and its predecessor, KIRIS (1992-1998), serve as a case in point. Like most of these first-generation reforms, the KIRIS experiment was widely considered a failure. Its performance-based tools were found to be unreliable, professional development costs ran too high, and two elements of the program, Mathematics Portfolios and Performance Events, were summarily abandoned. Writing portfolios continued under CATS, but a 2008 audit revealed wide variations in marking standards and lengthy delays in returning the marked results of open-answer questions.


Most of the recent generation of initiatives were sparked by a January 2015 white paper, “Performance Assessments: How State Policy Can Advance Assessments for 21st Century Learning,” produced by two leading American educators, Linda Darling-Hammond and Ace Parsi. Seven American states were granted a waiver under the Every Student Succeeds Act (ESSA) to experiment with such competency-based assessment alternatives.


Constructing a state model compliant with established national standards in New Hampshire proved to be an insurmountable challenge. While supported by Monty Neill and Fair Test Coalition advocacy forces, New Hampshire’s Performance Assessments for Competency Education (PACE) system ran into significant problems trying to integrate Classroom-Based Evidence (CBE) with state testing criteria and expectations. Establishing evaluation consistency and “comparability” across schools and districts ultimately sank the experiment. Because PACE was anchored in state standards, it required external moderation, including re-scoring of classroom-based work. Serving two masters created heavier teacher marking loads and made the system unsustainable. Federal funding for such competency-based assessment experiments was cut in December 2019, effectively ending support for the initiative.


Provincial tests and exams exist for a reason: they ensure that we do not fly blind into the future. Replacing final exams with a patchwork solution is not a wise option this school year. Simply throwing together culminating student activities to replace examinations is, judging from past experiments, a recipe for inconsistency, confusion, and ultimate failure.


Teachers will, as always, do their best and especially so given the current turbulent circumstances. Knowing what we know about student assessment, let’s not pretend that the crisis measures are better than traditional and more rigorous systems that have stood the test of time.

What are the fundamental purposes of summative student assessment? Should provincial tests and final exams be suspended during the second year of the COVID-19 pandemic? Where’s the research to support the effectiveness of alternative ‘demonstration of learning’ strategies? Are we now on the slippery slope toward ‘accountability-free’ education?

Read Full Post »

Ontario’s Mathematics program for Kindergarten to Grade 12 has just undergone a significant revision in the wake of the continuing decline in student performance in recent years. On June 24, 2020, Education Minister Stephen Lecce unveiled the new mathematics curriculum for elementary school students with a promised emphasis on the development of basic concepts and fundamental skills. In a seemingly contradictory move, the Minister also announced that the government was cancelling next year’s EQAO testing in Grades 3 and 6 to give students and teachers a chance to get used to the new curriculum.

While the Doug Ford Government was elected in June 2018 on a “Back to the Basics” education pledge, the new mathematics curriculum falls considerably short of that commitment. While the phrase “back to the basics” adorned the media release, the actual public message to parents and the public put more emphasis on providing children with practical skills. Financial literacy will be taught at every grade level and all students will learn coding or computer programming skills, starting in Grade 1 in Ontario schools. A more detailed analysis of the actual math curriculum changes reveals a few modest steps toward reaffirming fundamental computation skills, but all cast within a framework emphasizing the teaching of “social-emotional learning skills.” 

The prevailing “Discovery Math” philosophy enshrined in the 2005 Ontario curriculum may no longer be officially sanctioned, but it remains entrenched in current teaching practice. Simply issuing provincial curriculum mandates will not change that unless teachers themselves take ownership of the curriculum changes. Cutting the number of learning outcomes for Grades 1 to 8 down to 465 “expectations” of learning, some 150 fewer than back in 2005, will be welcomed, especially if it leads to greater mastery of fewer outcomes in the early grades.

The parents’ guide to the new math curriculum, released with the policy document, undercuts the “back to basics” commitment and tilts in a different direction. The most significant revamp is not the reintroduction of times tables, teaching fractions earlier on, or emphasizing the mastery of standard algorithms. It is the introduction of a completely new “strand” with the descriptor “social-emotional learning skills.” That new piece is supposedly designed to help students “develop confidence, cope with challenges, and think critically.” It also embodies the ‘discovery learning’ approach of encouraging students to “use strategies” and “be resourceful” in “working through challenging problems.”

Ontario’s most influential mathematics curriculum consultants, bracing for the worst, were quick to seize upon the unexpected gift.  Assistant professor of math education at the Ontario Institute for Studies in Education (OISE), Mary Reid, widely known for supporting the 2005 curriculum philosophy, identified the “social-emotional learning” component as “critically important” because it would “help kids tremendously.” That reaction was to be expected because Reid’s research focuses on “math anxiety” and building student confidence through social-emotional learning skills development.

Long-time advocates for higher math standards such as Math teacher Barry Garelick and Ottawa parent Clive Packer saw the recommended approach echoing the prevailing ‘discovery math’ ideology.  Expecting to see a clear statement endorsing mastering the fundamentals and building confidence through enhanced competencies, they encountered documents guiding teachers, once again, toward “making math engaging, fun and interesting for kids.” The whole notion that today’s math teachers utilizing traditional methods stress “rote memorization” and teach kids to “follow procedure without understanding why” is completely bogus. Such caricatures essentially foreclose on serious discussion about what works in the math classroom.

How does the new Ontario math curriculum compare with the former 2005 curriculum?  Identifying a few key components allows us to spot the similarities and differences:

Structure and Content:

  • New curriculum: “clear connections show how math skills build from year to year,” consistent for English-language and French-language learners.
  • Former 2005 curriculum: Difficult to make connections from year-to-year, and inconsistencies in expectations for English-speaking and French-speaking learners.

Multiplication and division:

  • Grade 3, new curriculum: “recall and demonstrate multiplication facts of 2, 5, and 10, and related division facts.” In graduated steps, students learn multiplication facts, starting with 0 × 0 and building to 12 × 12, to “enhance problem solving and mental math.”
  • Grade 3, 2005 curriculum: “multiply to 7 x 7 and divide to 49 ÷ 7, using a variety of mental strategies (e.g., doubles, doubles plus another set, skip counting).” No explicit requirement to teach multiplication tables.

Fractions:

  • Grade 1, new curriculum: “introduced to the idea of fractions, through the context of sharing things equally.”
  • Grade 1, 2005 curriculum: Vague reference – “introducing the concept of equality using only concrete materials.”

Measurement of angles:

  • Grade 6, new curriculum: “use a protractor to measure and construct angles up to 360°, and state the relationship between angles that are measured clockwise and those that are measured counterclockwise.”
  • Grade 6, 2005 curriculum: “measure and construct angles up to 180° using a protractor, and classify them as acute, right, obtuse, or straight angles.”

Graphing data:

  • Grade 8, new curriculum: “select from among a variety of graphs, including scatter plots, the type of graph best suited to represent various sets of data; display the data in the graphs with proper sources, titles, and labels, and appropriate scales; and justify their choice of graphs”
  • Grade 8, 2005 curriculum: “select an appropriate type of graph to represent a set of data, graph the data using technology, and justify the choice of graph”

Improvements in the 2020 Math curriculum are incremental at best and likely insufficient to make a significant difference. Providing students with effective instruction in mathematics is, after all, what ultimately leads to confidence, motivation, engagement, and critical thinking. Starting with confidence-building exercises gets it backwards. Elementary mathematics teachers will be guided, first, toward developing social and emotional learning (SEL) skills: (1) identify and manage emotions; (2) recognize sources of stress and cope with challenges; (3) maintain positive motivation and perseverance; (4) build relationships and communicate effectively; (5) develop self-awareness and a sense of identity; (6) think critically and creatively. Upon closer scrutiny, these are generic skills that are not only problematic but entirely unmeasurable.

The fundamental question raised by the new Ontario math curriculum reform is whether it is equal to the task of improving stagnating student test scores. Student results in English-language schools in Grade 3 and Grade 6 mathematics, on EQAO tests, slid consistently from 2012 to 2018. Back in 2012, 68 per cent of Grade 3 students met provincial standards; by 2018, that figure had dropped to 58 per cent. In Grade 6 mathematics, it was worse, plummeting from 58 per cent to 48 per cent meeting provincial standards. On international tests, Ontario’s Programme for International Student Assessment (PISA) Math scores peaked in 2003 at 530, dropped to 509 by 2015, then recovered slightly in 2018 to 514, consistent with the provincial slide (see graph by Greg Ashman). Tinkering with math outcomes and clinging to ineffective “mathematical processes” will likely not be enough to change that trajectory.

Building self-esteem and investing resources in more social and emotional learning (SEL) is not enough to turn around student math achievement. Yet, reviewing the new mathematics curriculum, the Ontario curriculum designers seem to have lost their way. It all looks strangely disconnected from the supposed goal of the reform — to raise provincial math standards and improve student performance on provincial, national, and international assessments.

What’s the real purpose of the new Ontario mathematics curriculum reform?  Does the latest curriculum revision reflect the 2018 commitment to move forward with fundamentals or is it a thinly-disguised attempt to integrate social and emotional learning into the program?  Where is the evidence, in the proposed curriculum, that Ontario education authorities are laser focused on improving math standards? Will this latest reform make much of a difference for students looking for a bigger challenge or struggling in math? 

Read Full Post »

Canada’s most populous province aspires to education leadership and tends to exert influence far beyond our coast-to-coast provincial school systems. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September of 2017, in the wake of a well-publicized decline in provincial Math test scores and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry’s much-touted $60-million “renewed math strategy” completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests — conducted by the Education Quality and Accountability Office (EQAO) — only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students had met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point from the previous year.

The Ontario government’s response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was not only designed to change the channel, but to initiate a “student assessment review” targeting the messenger, the EQAO, and attempting to chip away at its hard-won credibility, built up over the past twenty years. While the announcement conveyed the impression of “open and authentic” consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope.  Reading the paper and digesting the EQAO survey questions, it becomes obvious that the provincial tests are now on trial themselves, and being assessed on criteria well outside their current mandate.

Ontario’s provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million a year, or $17 per student; the credibility of the test results remains widely accepted; and the testing model continues to be free of interference or manipulation. It’s working the way it was intended — to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but it is still considered the ‘gold standard’ right across Canada. It has succeeded in providing much greater transparency, but — like other such testing regimes — has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results. The Grade 10 Literacy Test remains an embarrassment. In May 2010, for example, an EQAO report revealed that hundreds of students who failed the 2006 test were simply moved along through the system without passing that graduation standard. Consistently, about 19 to 24 per cent of all students — and 56 per cent of Applied-stream students — fall short of acceptable literacy, yet graduation rates have risen from 68 per cent to 86 per cent province-wide.

The Ontario Ministry is now ‘monkeying around’ with the EQAO and seems inclined toward either neutering the agency to weaken student performance transparency or broadening its mandate to include assessing students for “social and emotional learning” (SEL), formerly termed “non-cognitive learning.” The “Independent Review of Assessment and Reporting” is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders, Michael Fullan, Andy Hargreaves, and Carol Campbell. It’s essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the International Education Panel of Advisors in Scotland attempting to rescue the Scottish National Party’s faltering “Excellence for All” education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages. Most of the questions focus on EQAO testing and accountability and approach the tests through a “student well-being” and “diversity” lens. An “evidence-informed” review of the current model of assessment and reporting is promised, but it’s nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda, espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars – achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence – are repeated as secular articles of faith.

Where’s the research to support the proposed direction? The Discussion Paper does provide capsule summaries of two assessment approaches, termed “large-scale assessments” and “classroom assessments,” but critical analysis of only the first of the two. There’s no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged “failure” of Assessment for Learning (AfL). Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate “student well-being” into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment. It’s essentially based upon rather skewed research conducted in support of “broadening student assessments” rather than the latest research on what works in carrying out student assessments in the schools. Critical issues such as the “numeracy gap” now being seriously debated by leading education researchers and student assessment experts are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma. Nowhere does the Ontario document recognize Dylan Wiliam’s recent pronouncement that his own creation, Assessment for Learning, has floundered because of “flawed implementation” and unwise attempts to incorporate AfL into summative assessments. Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou. In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures, but fairer for students from underprivileged families. She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a direction that’s different from where the Ontario Ministry of Education appears to be heading. Christodoulou’s Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam.  Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on “generic skills” assessment, ‘rubric-mania,’ impenetrable verbal descriptors, and the mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers.  Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that it’s premature and likely to fail is simply foolhardy.  No education jurisdiction priding itself on being “A Learning Province” would plow ahead when the lights turn to amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance.  It does not really pass the sound research ‘sniff test.’  It looks very much like another Ontario provincial initiative offering a polished, but rather thinly veiled, rationale for supporting the transition away from “large-scale assessment” to “classroom assessment” and grafting unproven SEL competencies onto EQAO, running the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative?  Where’s the sound research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all?  Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle? 

Read Full Post »

Starting next year, students from Kindergarten to Grade 12 in Canada’s largest province, Ontario, will be bringing home report cards that showcase six “transferable skills”: critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It’s the latest example of the growing influence of education policy organizations, consultants and researchers promoting “broader measures of success” formerly known as “non-cognitive” domains of learning.

In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario’s most influential education lobby group, People for Education, and professionally packaged in its well-funded Measuring What Matters assessment reform initiative. In this respect, it’s remarkably similar in its focus to the Boston-based organization Transforming Education. Never a supporter of Ontario’s highly-regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, “moves beyond the 3R’s.”

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five “dimensions of learning” were proposed, in advance of any research being undertaken to confirm their validity and without any acknowledgement that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities: “grit”/conscientiousness, character, and “growth mindset.” The proposed dimensions, among them physical and mental health, social-emotional development, creativity and innovation, and school climate, reflected the socially-progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a ‘new catechism.’  The Research Director, Dr. David Hagen Cameron, a PhD in Education from the University of London, hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed “MWM change theory.” His mandate was crystal clear – to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were “field-tested” in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational “interventionist machine,” Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to “find congruence” between MWM priorities and Ministry mandates and how to tackle the tricky business of winning the concurrence of teachers, particularly in overcoming their instinctive resistance to district “education consultants” who arrive promising support but end up extending more “institutional control over teachers in their classrooms.”

Stumbling blocks emerged when the MWM theory met the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into “a set of student competencies” and ensuring “supportive conditions” posed immediate difficulties. The MWM reform promoters came up squarely against the problem of achieving “system coherence” with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to answer effectively the questions voicing concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public’s desire for simple, easy-to-understand reports.

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book GRIT, the Duckworth and Yeager research report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social and physical “well-being,” they claimed “not that everything that counts can be counted or that everything that can be counted counts.” The two prominent SEL researchers warned that it was premature to proceed with such school system accountability systems. “Our working title,” she later revealed, “was all measures suck, and they all suck in their own way.”

The Duckworth-Yeager report provided the most in-depth analysis (to date) of the challenges and pitfalls involved in advancing a project like Ontario’s Measuring What Matters. Assessing for cognitive knowledge was long-established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of Student Self-Report and Teacher Questionnaires and Performance Tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).

Duckworth went public with her concerns in February of 2016. She revealed to The New York Times that she had resigned from a California advisory board fronting an SEL initiative spearheaded by the California Office to Reform Education (CORE), and that she no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth’s findings credible, stating: “There are so many ways to do this wrong.” The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz’s recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic because, so far, it’s proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck’s recently expressed concerns about using her “Growth Mindset” research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five “dimensions of learning” have now morphed into five “domains and competencies,” making no reference whatsoever to the place of the cognitive domain in the overall scheme. It’s a classic example of three phenomena which bedevil contemporary education policy-making: tautology, confirmation bias, and the sunk-cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it’s too late to take a pause, or turn back, may not be the best guarantor of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base: Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers?  Since the best practice research supports a rather costly “multivariate, multi-instrumental approach,” is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives? 

Read Full Post »

Today’s business leaders have a clear sense of where a better future lies for Canadians, especially those in Atlantic Canada. The Canadian Chamber of Commerce initiative Ten Ways to Build a Canada That Wins has identified a list of key opportunities Canada, and the Atlantic Region, can seize right now to “regain its competitiveness, improve its productivity and grow its economy.” Competitiveness, productivity and growth are the three cornerstones of that vision for Canada at 150 and this much is also clear – it cannot be done without a K-12 and Post-Secondary education system capable of nurturing and sustaining that vision.

Yet the educational world is a strange place with its own tribal conventions, familiar rituals, ingrained behaviours, and unique lexicon. Within the K-12 school system, educational reform evolves in waves where “quick fixes” and “fads” are fashionable and yesterday’s failed innovations can return, often recycled in new guises.

Today’s business leaders – like most citizens – also find themselves on the outside looking in, puzzled by why our provincial school systems are so top-down, bureaucratic, distant and seemingly impervious to change. Since Jennifer Lewington and Graham Orpwood described the school system as a “fortress” maintaining clear boundaries between “insiders and outsiders” back in 1993, not much has changed. Being on an “advisory committee” gives you some access, but can easily become a vehicle for including you in a consultation process with conclusions pre-determined by the system’s insiders and serving the interests of the educational status quo.

Provincial education authorities, pressed by concerned parents, business councils and independent think tanks like the Atlantic Institute for Market Studies (AIMS) have embraced standardized testing in the drive to improve literacy and numeracy, fundamentals deemed essential for success in the so-called “21st century knowledge-based economy.” Student testing and accountability may be widely accepted by the informed public, but they are far from secure. Provincial teachers’ unions remain unconvinced and continue to resist standardized testing and to propose all kinds of “softer” alternatives, including “assessment for learning,” “school accreditation,” and broadening testing to include “social and emotional learning.”

Two decades ago, the Metropolitan Toronto Learning Partnership was created and, to a large extent, that education-business alliance has tended to set the pattern for business involvement in public education. Today The Learning Partnership has expanded to become a national charitable organization dedicated to supporting, promoting and advancing publicly funded education in Canada. With the support of major corporate donors, the LP brings together business, government, school boards, teachers, parents, labour and community organizations across Canada in “a spirit of long term committed partnerships.” It’s time to ask whether that organization has done much to improve student achievement levels and to address concerns about the quality of high school graduates.

A change in focus and strategy is in order if the business voice for education reform is to be heard and heeded in the education sector. Our public school system is simply not good enough. Penetrating the honey-coated sheen of edu-babble and getting at the real underlying issues requires some clear-headed independent analysis. We might begin by addressing five significant issues that should be elevated to the top of the education policy agenda:

  • declining enrollment and school closures – and the potential for community-hub social enterprise schools;
  • the sunk cost trap – and the need to demonstrate that education dollars are being invested wisely;
  • the future of elected school boards – and alternatives building upon school-based governance and management;
  • the inclusive education morass – and the need to improve intensive support services;
  • the widening attainment-achievement gap – and the need to improve the quality of high school graduates.

In each case, in-depth analysis brings into sharper relief the critical need for a business voice committed to major surgery – educational restructuring and curriculum reform from the schools up rather than the top down.

The education system in Atlantic Canada, for example, has come a long way since the 1990s when the whole domain was essentially an “accountability-free zone.” Back in 2002, AIMS began to produce and publish a system of high school rankings that initially provoked howls of outrage among school board officials.  Today in Atlantic Canada, education departments and school boards have all accepted the need for provincial testing regimes to assess Primary to Grade 12 student performance, certainly in English literacy and mathematics.

Prodded and cajoled by the annual appearance of AIMS’s High School Report Cards, school boards became far more attuned to the need for improvement in student achievement results. While we have gained ground on standardized assessment of student achievement, final high school examinations have withered and, one by one, been eliminated, while graduation rates have gone through the roof, especially in the Maritime provinces. Without an active and engaged business presence, provincial tests assessing student competence in mathematics and literacy may be imperiled. Student assessment reform aimed at broadening the focus to “social and emotional learning” poses another threat. Most recently, a Nova Scotia School Transitions report issued in June 2016 proposed further “investment” in school-college-workplace bridging programs without ever assessing or addressing the decline in the preparedness of those very high school graduates.

Today, new and profoundly important questions are being raised:  What has the Learning Partnership actually achieved over two decades? What have we gained through the provincial testing regimes — and what have we lost?  Where is the dramatic improvement in student learning that we have been expecting?  If students and schools continue to under-perform, what comes next?  Should Canadian education reformers and our business allies begin looking at more radical reform measures such as “turnaround school” strategies, school-based management, or charter schools? 

Where might the business voice have the biggest impact? You would be best advised either to engage in these wider public policy questions or simply to lobby and advocate for respect for the fundamentals: good curriculum, quality teaching, clear student expectations, and more public accountability. Standing on the sidelines has only served to perpetuate the status quo in a system that, first and foremost, serves the needs of educators rather than students and local school communities.

Revised and condensed from an address to the Atlantic Chamber of Commerce, June 6, 2017, in Summerside, PEI.

Read Full Post »

“Learning isn’t a destination, starting and stopping at the classroom door. It’s a never-ending road of discovery and wonder that has the power to transform lives. Each learning moment builds character, shapes dreams, guides futures, and strengthens communities.” Those inspiring words and the accompanying video, Learning makes us, left me tingling like the ubiquitous ‘universal values’ Coke commercials.

Eventually, I snapped out of it – and realized that I’d been transported into the global world of British-based Pearson Education, the world’s largest learning and testing corporation, and drawn into its latest stratagem: the allure of 21st-century creativity and social-emotional learning. The age of Personalized (or Pearsonalized) learning “at a distance” was upon us.

Globalization has completely reshaped education policy and practice, for better or worse. Whatever your natural ideological persuasion, it is now clear in early 2017 that the focus of K-12 education is on aligning state and provincial school systems with the high-technology economy and the instilling of workplace skills dressed-up as New Age ’21st century skills’ – disruptive innovation, creative thinking, competencies, and networked and co-operative forms of work.

The rise to dominance of “testopoly” from No Child Left Behind (NCLB) to the Common Core Standards assessment regime, and its Canadian variations, has made virtually everyone nervous, including legions of teachers and parents. Even those, like myself, who campaigned for Student Achievement Testing in the 1990s, are deeply disappointed with the meagre results in terms of improved teaching and student learning.

The biggest winner has been the learning corporation giants, led by Pearson PLC, who now control vast territories in the North American education sector. After building empires through business deals to digitalize textbooks and develop standardized tests with American and Canadian education authorities, and the Organization for Economic Cooperation and Development (OECD), the company was again reinventing itself in response to the growing backlash against traditional testing and accountability.

Critics on the education left, most notably American education historian Diane Ravitch and BCTF research director Larry Kuehn, were among the first to flag and document the rise of Pearson Education, aptly dubbed “the many-headed corporate hydra of education.” A June 2012 research report for the BCTF by Donald Gutstein succeeded in unmasking the hidden hand of Pearson in Canadian K-12 education, especially after its acquisition, in 2007, of PowerSchool and Chancery Software, the two leading computerized student information tracking systems.

More recently, New York journalist Owen Davis has amply demonstrated how  Pearson “made a killing” on the whole American testing craze, including the Common Core Standards assessment program. It culminated in 2013, when Pearson won the U.S. contract to develop tests for the Partnership for Assessment of Readiness for College and Careers, or PARCC, as the only bidder.

When the pendulum started swinging back against testing from 2011 to 2013, Pearson PLC was on the firing line in the United States but remained relatively sheltered in Canada. From Texas to New York to California, state policy makers scaled back on standardized assessment programs, sparked by parent and student protests. In Canada, the Toronto-based People for Education lobby group, headed by veteran anti-tester Annie Kidder, saw an opening and began promoting “broader assessment” strategies encompassing “social-emotional learning” or SEL. Pearson bore the brunt of parent outrage over testing and lost several key state contracts, including the biggest in Texas, the birthplace of NCLB.

Beginning in 2012, Pearson PLC started to polish up its public image and to reinvent its core education services. Testing only represented 10 per cent of Pearson’s overall U.S. profits, but the federal policy shift represented by the 2015 Every Student Succeeds Act (ESSA) tilted in the direction of reducing “unnecessary testing.” The company responded with a plan to shift from multiple-choice tests to “broader measures of school performance,” such as school climate, a survey-based SEL metric of students’ social and emotional well-being. 

“For the past four years, Pearson’s Research & Innovation Network has been developing, implementing, and testing assessment innovations,” Vice President Kimberly O’Malley recently reported. This new Pearson PLC Plan is closely aligned with ESSA and looks mighty similar to the Canadian People for Education “Broader Measures” model being promoted by Annie Kidder and B.C. education consultant Charles Ungerleider. Whether standardized testing recedes or not, it’s abundantly clear that “testopoly” made Pearson and the dominance of the learning corporations is just entering a new phase.

How did Pearson and the learning corporations secure such control over, and influence in, public education systems?  What’s behind the recent shift from core knowledge achievement testing to social-emotional learning?  Is it even possible to measure social-emotional learning and can school systems afford the costs of labour-intensive “school improvement” models?  Will the gains in student learning, however modest, in terms of mathematics and literacy, fade away under the new regime? 

Read Full Post »

“Canadians can be proud of our showing in the 2015 Programme for International Student Assessment (PISA) report,” declared Science consultant Bonnie Schmidt and former Council of Ministers of Education (CMEC) director Andrew Parkin in their first-off-the-mark December 6, 2016 response to the results. “We are,” they added, “one of only a handful of countries that places in the top tier of the Organization for Economic Cooperation and Development (OECD) in each of the three subjects tested: science, reading and math.”

“Canada” and “Canadian students,” we were told, were once again riding high in the once-every-three-years international test sweepstakes. If that effusively positive response had a familiar ring, it was because it followed the official line advanced by a markedly similar CMEC media release, issued a few hours before the commentary.

Since our students, all students in each of our ten provincial school systems, were “excelling,” it was time for a little national back-slapping. There’s one problem with that blanket analysis: it serves to maintain the status quo, engender complacency, obscure the critical Mathematics scores, and disguise the lopsided nature of student performance from region to region.

Hold on, not so fast, CMEC — the devil is in the real details, more clearly portrayed in the OECD’s own “Country Profile” for Canada. Yes, 15-year-olds in three Canadian provinces (Alberta, British Columbia, and Quebec) achieved some excellent results, but overall Mathematics scores were down, and students in over half of our provinces trailed off into mediocrity in terms of performance. Our real success was not in performance, but rather in reducing the achievement gap adversely affecting disadvantaged students.

Over half a million 15-year-olds in more than 72 jurisdictions all over the world completed PISA tests, and Schmidt and Parkin were not alone in making sweeping pronouncements about why some countries moved up and others down in the global rankings.

Talking in aggregate terms about the PISA performance of 20,000 Canadian students in ten different provinces can be, and is, misleading, when the performance results in mathematics continue to lag, Ontario students continue to underperform, and students in two provinces, Manitoba and Saskatchewan, struggle in science, reading, and mathematics.  Explaining all that away is what breeds complacency in the school system.

My own PISA 2015 forecast was way off-base — and taught me a lesson. After the recent TIMSS 2015 Mathematics results, released in November 2016, an East Asian sweep led by Singapore and Korea seemed like a safe bet. How Finland performs also attracts far less attention than it did in its halcyon days back in 2003 and 2006. The significant OECD pivot away from excellence to equity caught me napping, and I completely missed the significance of moving (2012 to 2015) from pencil-and-paper to computer-based tests.

Some solace can be found in the erroneous forecasts of others. The recent Alberta Teachers’ Association (ATA) “Brace Yourself” memo, with its critique of standardized testing, seemed to forecast a calamitous drop in Alberta student performance levels. It only happened in Mathematics.

Advocates of the ‘Well-Being’ curriculum and broader assessment measures, championed by Toronto’s People for Education, will likely be temporarily thrown off-stride by the OECD’s new-found commitment to assessing equity in education. It will be harder now to paint PISA as evil and to discredit PISA results based upon such a narrow range of skills in reading, math and science.

The OECD’s “Country Profile” of Canada is worth studying carefully because it aggregates data from 2003 to 2015, clarifies the trends, and shows how Canadian students continue to struggle in mathematics far more than in reading and science.

Canadian students may have finished 12th in Mathematics with a 516 aggregate score, but the trend line continues to be in decline, down from 532 in 2003. Digging deeper, we see that students in only two provinces, Quebec (544) and BC (522), actually exceeded the national mean score. Canada’s former leader in Mathematics performance, Alberta, continued its downward spiral from the lofty heights of 549 (2003) to 511 (2015).

Since Ontario students’ provincial mathematics scores are declining, experts will be poring over the latest PISA results to see how bad it is in relation to the world’s top performing systems. No surprises here: Ontario students scored 509, finishing 4th in Canada, and down from 530 on PISA 2003. Excellence will require a significant change in direction.

The biggest discovery in post-2015 PISA analysis was the positive link between explicit instruction and higher achievement in the 2015 core assessment subject, science. The most important factor linked with high performance remains SES (socio-economic status), but teacher-guided instruction was weighted close behind, and students taught with minimal direction, in inquiry or project-based classes, simply performed less well on the global test.

The results of the 15-year-olds are largely determined over 10 years of schooling, and are not necessarily the direct consequence of the latest curriculum fad such as “discovery math.”

It’s better to look deeper into what this cohort of students were learning when they first entered the school system, in the mid-1990s. In the case of Canadian students, for example, student-centred learning was at its height, and the country was just awakening to the value of testing to determine what students were actually learning in class.

Where student results are outstanding, as in Singapore and Estonia, the success is not solely attributable to the excellence of teaching or the rigour of the math and science curriculum.

We know from the “tutoring explosion” in Canada’s major cities that the prevalence of private tuition classes after school is a contributing factor, and may explain the current advantage still enjoyed in mathematics by Pacific Rim students.

Children of Chinese heritage in Australia actually outperformed students in Shanghai on the 2012 PISA test, and we need to explore whether that may be true for their counterparts in Greater Vancouver. The so-called “Shanghai Effect” may be attributed as much to “tiger mothers” as it is to the quality of classroom instruction.

Whether Canada and Canadians continue to exhibit high PISA self-esteem or have simply plateaued does not matter as much as what we glean over the next few years from studying best international practice in teaching, learning, and assessment.

Surveying PISA student results, this much is clear: standing still is not an option in view of the profound changes that are taking place in life, work, and society.

Read Full Post »

With the release of the 2015 Program for International Student Assessment (PISA) on the horizon, the Organization for Economic Cooperation and Development (OECD) Education Office has stoked up the “Math Wars” with a new study. While the October 2016 report examines a number of key questions related to teaching Mathematics, OECD Education chose to highlight its findings on “memorization,” presumably to dispel perceptions about “classroom drill” and its use in various countries.

The OECD, which administers the PISA assessments every three years to 15-year-olds from around the globe, periodically publishes reports looking at slices of the data. Its October 2016 report, Ten Questions for Mathematics Teachers and How PISA Can Help Answer Them, based upon the most recent 2012 results, tends to zero in on “memorization” and attempts to show that high-performing territories, like Shanghai-China, Korea, and Chinese-Taipei, rely less on memory work than lower-performing places like Ireland, the UK, and Australia.

American Mathematics educator Jo Boaler, renowned for “Creative Math,” jumped upon the PISA study to buttress her case against “memorization” in elementary classrooms. In a highly contentious November 2016 Scientific American article, Boaler and co-author Pablo Zoido contended that PISA findings confirmed that “memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.” Students who relied on memorization, they further argued, were “approximately half a year behind students who used relational and self-monitoring strategies” such as those in Japan and France.

Australian education researcher Greg Ashman took a closer look at the PISA study and called into question such hasty interpretations of the findings. Figure 1.2 (How teachers teach and students learn) caught his eye, and he went to work interrogating the survey responses on “memorization” and the axes used to present the data. The PISA analysis, he discovered, also did not include an assessment of how teaching methods might be correlated with PISA scores in Mathematics. Manitoba Mathematics professor Robert Craigen spotted a giant hole in the PISA analysis and noted that the “memorization” data related to the “at-home strategies of students,” not their instructional experiences, and may well indicate that students who are improperly instructed in class resort to memorization on their own.

What would it look like, Ashman wondered, if the PISA report had plotted how students performed in relation to the preferred methods used on the continuum from “more student-oriented instruction” to “more teacher-directed instruction”? Breaking down all the data, he generated a new graph showing how teaching method correlated with math performance and found a “positive correlation” between teacher-directed instruction and higher Math scores. “Correlations,” he duly noted, “do not necessarily imply causal relationships,” but the pattern clearly favoured a higher ratio of teacher-directed activity to student orientation.
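
Ashman did not publish his re-analysis as code, but the underlying exercise, plotting an instruction-style index against mean mathematics scores and computing a simple correlation, is easy to sketch. What follows is a minimal Python sketch using invented placeholder numbers (not actual PISA 2012 figures) purely to illustrate the correlate step:

    # Hedged illustration only: the instruction-style indices and mean scores
    # below are invented placeholders, not actual PISA 2012 figures.
    from statistics import correlation  # standard library, Python 3.10+

    # Hypothetical ratio of teacher-directed to student-oriented instruction
    # (higher = more teacher-directed), one value per imaginary school system.
    teacher_directed_index = [0.8, 1.1, 1.3, 1.6, 1.9, 2.2]

    # Hypothetical mean mathematics scores for the same imaginary systems.
    mean_math_score = [478, 492, 501, 513, 526, 534]

    # Pearson's r: a positive value means systems reporting more
    # teacher-directed instruction also tend to post higher math scores.
    r = correlation(teacher_directed_index, mean_math_score)
    print(f"Pearson correlation: {r:.2f}")

As Ashman himself cautioned, a positive Pearson’s r of this kind describes an association across systems, not a causal effect of teacher-directed instruction.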

Jumping on the latest research to seek justification for her own “meta-beliefs” is normal practice for Boaler and her “Discovery Math” education disciples. After once again junking the ‘straw men’ of traditional Mathematics, “rote memorization” and “drill,” Boaler and Zoido wax philosophical and poetic: “If American classrooms begin to present the subject as one of open, visual, creative inquiry, accompanied by growth-mindset messages, more students will engage with math’s real beauty. PISA scores would rise, and, more important, our society could better tap the unlimited mathematical potential of our children.” That is definitely stretching the evidence far beyond the breaking point.

The “Math Wars” do generate what University of Virginia psychologist Daniel T. Willingham has aptly described as “a fair amount of caricature.” The recent Boaler-Zoido Scientific American article is a prime example of that tendency. Most serious scholars of cognition tend to support the common-ground position that learning mathematics requires three distinct types of knowledge: factual, procedural and conceptual. “Factual knowledge,” Willingham points out, “includes having already in memory the answers to a small set of problems of addition, subtraction, multiplication, and division.” While some students can learn Mathematics through invented strategies, that approach cannot be relied upon for all children. On the other hand, knowledge of procedures is no guarantee of conceptual understanding, particularly when it comes to complexities such as dividing fractions. It’s clear to most sensible observers that knowing math facts, procedures and concepts is what counts when it comes to mastering mathematics.

Simply ignoring research that contradicts your ‘meta-beliefs’ is common on the Math Education battlefield. Recent academic research on “memorization” that contradicts Boaler and her entourage is simply ignored, even when it emanates from her own university. Two years ago, Shaozheng Qin and Vinod Menon of Stanford University Medical School led a team that provided scientifically validated evidence that “rote memorization” plays a critical role in building the capacity to solve complex calculations.

In their 2014 Nature Neuroscience study, based upon a clinical study of 68 children, aged 7 to 9, followed over the course of one year, Qin, Menon and their colleagues found that memorizing the answers to simple math problems, such as basic addition or multiplication, forms a key step in a child’s cognitive development, helping bridge the gap between counting on fingers and tackling more complex calculations. Memorizing the basics, they concluded, is the gateway to activating the hippocampus, a key brain structure for memory, which gradually expands in “overlapping waves” to accommodate the greater demands of more complex math.

The whole debate over memorization is suspect because of the imprecision in the use of the term. Practice, drilling, and memorization are not the same, even though they get conflated in Jo Boaler’s work and in much of the current Mathematics Education literature. Back in July 2012, D.T. Willingham made this crucial point and provided some valuable points of distinction. “Practice,” as defined by Anders Ericsson, involves performing tasks and receiving feedback on that performance, executed for the purpose of improvement. “Drilling” connotes repetition for the purpose of achieving automaticity, which, at its worst, amounts to mindless repetition or parroting. “Memorization,” on the other hand, relates to the goal of something ending up in long-term memory with ready access, but does not imply using any particular method to achieve that goal.

Memorization has become a dirty word in teaching and learning, laden with so much baggage that it conjures up mental pictures of “drill and kill” in the classroom. The 2016 PISA study appears to perpetuate such stereotyping and, worst of all, completely misses the “positive correlation” between teacher-directed or explicit instruction and better performance in mathematics.

Why does the PISA Study tend to associate memorization in home-study settings with the drudgery of drill in the classroom?  To what extent does the PISA Study on Mathematics Teaching support the claims made by Jo Boaler and her ‘Discovery Math’ advocates? When it comes to assessing the most effective teaching methods, why did the PISA researchers essentially take a pass? 

Read Full Post »

American education historian Diane Ravitch once enjoyed a reputation as one of the leading public intellectuals of our time. After four decades of impressive historical research and compelling writing pushing at the boundaries of education reform, she has now emerged almost unrecognizable as the fiercest critic of school reform in the United States. Her two most recent books, The Death and Life of the Great American School System (2010) and the sequel Reign of Error (2013), bear witness to that radical transformation and provide clues to the fundamental question: What in the world has happened to Diane Ravitch?

Her 2010 national best seller, The Death and Life of the Great American School System, marks a radical break in her reform advocacy. Much of the book is a revisionist interpretation of the previous decade of education reform, but it also represents a startling about-face. The leading advocate of testing and accountability emerges, almost born-again, as a fierce critic of the Barack Obama–Arne Duncan ‘Race to the Top’ reform agenda, especially standardized testing, school choice and the closure of low-performing schools. “I too had fallen for the latest panaceas and miracle cures,” she confesses, but, as time wore on, simply “lost the faith” (pp. 3 and 4).

Always known for her independent, contrarian streak, Ravitch was again swimming against the tide. Under George W. Bush’s No Child Left Behind (NCLB), she contended, the whole standards movement had been “hijacked” by the testing movement. Instead of focusing upon curriculum reform, “standardized test scores” were considered “the primary measure of school quality.” “Good education,” she wrote, “cannot be achieved by a strategy of testing children, shaming educators, and closing schools” (p. 111). Charter schools, according to Ravitch, had strayed from the original concept best articulated in 1988 by then American Federation of Teachers president Albert Shanker. Instead of becoming a vehicle for empowering teachers to initiate innovative methods of reaching disaffected students, the movement evolved into a means of advancing privatization, producing an “education industry” dominated by entrepreneurs, philanthropists, and venture capitalists (pp. 123-4). Test-based accountability, Ravitch now claimed, narrowed the curriculum and was being used in inappropriate ways to identify ‘failing schools,’ fire educators, determine bonuses, and close schools, distorting the purpose of schooling altogether (p. 167).

Ravitch focuses much of her scathing criticism on what she terms the “Billionaire Boys’ Club.” Since the turn of the millennium, she claims, the traditional educational foundation world has been significantly changed by the emergence of a new breed of venture philanthropists. By 2002, the Bill and Melinda Gates Foundation, the Walton Family Foundation, and the Eli Broad Foundation had emerged to frame and dominate the school reform agenda. School choice, turnaround school strategies, and competitive market incentives were all harnessed in mostly failed attempts to leverage improved student test scores. With so much money and power aligned against the neighbourhood public school and the teaching profession, she bluntly forecast, “public education itself is placed at risk” (p. 222).

Ravitch’s The Death and Life of the Great American School System harkened back to A Nation at Risk and made a compelling case that American school reform has lost its way. In rejecting the charter school panacea and test-based accountability, she sets out a reasonable, balanced approach to educational improvement. Raising academic standards utilizing the Common Core Curriculum continues to be the centrepiece of her reformist philosophy, but she is less sanguine about the likelihood of reaching a national consensus, settling for a sound, balanced curriculum including history, civics, geography, literature, the arts and sciences, foreign languages, and physical/health education. “If our schools had an excellent curriculum, appropriate assessment and well-educated teachers,” she concludes, “we would be way ahead of where we are now in renewing our school system” (p. 239).

Swept up in the wave of public reaction to her 2010 book, Ravitch sought to answer the question posed but not fully explored: where should American education be heading? A completely transformed, fiery warrior emerges in Reign of Error, a book with an attention-grabbing, inflammatory subtitle: “The Hoax of the Privatization Movement and the Danger to America’s Public Schools.” Expanding upon her critique of the American venture philanthropists, she restates her strong opposition to blind faith in charters, testing excesses, shuttering ‘failing’ schools, and removing ‘bad’ teachers. Without the same tone of authenticity and humility, Reign of Error descends into polemic and reads, for the most part, like an angry diatribe. Not quite prepared to provide a constructive path forward, she simply sets out to crush her former allies, now seen as enemies, real and imagined.

In the opening chapter of Reign of Error, Diane Ravitch stuns the reader by claiming that there is “no crisis” in American education. “Public education is not broken,” she writes. “It is not failing or declining. Our urban schools are in trouble because of concentrated poverty and racial segregation….Public education is in crisis only so far as society is and only so far as this new narrative of crisis has de-stabilized it” (p. 4). In her book introduction, she also states: “I do not contend that schools are fine just as they are. They are not. American education needs higher standards for those who enter the teaching profession. It needs higher standards for those who become principals and superintendents. It needs stronger and deeper curriculum in every subject…” (p. xii). You will look in vain, as New Jersey teaching expert Grant Wiggins (2013) noted, for any serious discussion of how to tackle that second set of problems.

The “crisis” myth, according to the newly radicalized Diane Ravitch, is only sustained by “orchestrated attacks” on teachers and principals. “These attacks,” she declares, “create a false sense of crisis and serve the interests of those who want to privatize the public schools.” In an attempt to overturn the prevailing narrative, she argues that these ‘outsiders’ represent not reform but the status quo in education. Together, they form a dangerous bipartisan alliance committed to “corporate reform” and encompassing a broad spectrum from Education Secretary Duncan to Louisiana Governor Bobby Jindal and the Bezos Foundation, from the Hoover Institution to Hollywood, purveyors of films like Waiting for Superman. Since education is not really in crisis, Ravitch contends that all of these interests are destroying the public school system while pursuing an illusion.

Making such claims can win you legions of followers inside the system, but it can also damage your credibility as a respected scholar purporting to present an “evidence-based” assessment of the state of education. In the chapter entitled “The Facts about the International Test Scores,” Ravitch’s analysis simply does not hold water, especially when it comes to the mathematics scores of U.S. students compared with other top-performing countries. While U.S. grade 4 students do perform reasonably well on basic operations, they are not competitive with the Taiwanese, for example, at higher performance levels. Seventeen-year-old Americans, not referenced by Ravitch, have stagnated in reading and mathematics since the first tests in the early 1970s.

In defending teacher autonomy, Ravitch tends to ignore research on the impact of effective teaching on student achievement levels. If New Zealander John Hattie (2008) is correct, teaching may well account for 30 per cent or more of student improvement, and highly effective teachers can add an extra year or two of growth in achievement level. Without advocating for the firing of teachers on the basis of ‘half-baked’ test-based assessment systems, there is much evidence that poor performance is tolerated for a variety of reasons. National estimates from the U.S. Department of Education confirm that, on average, school districts dismiss only 1.4% of tenured teachers and 0.7% of probationary teachers each year.

Instead of focusing so much on the sinister influence of the “Billionaire Boys’ Club,” Ravitch might have been more convincing if she had actually produced a coherent reform agenda based upon curriculum improvement and enhanced teacher effectiveness. More vigorous advocacy on her part might have bolstered, and possibly salvaged, more of the Common Core Curriculum, which she campaigned so hard to get on the national policy agenda. Rather than tackling the structural problems, Ravitch might have exerted more impact by venturing into what Larry Cuban (2013) terms the “Black Box” of the classroom. Improving teaching pedagogy, student assessment, and the consistency of teaching, educators like Wiggins insist, would do far more to advance school improvement and student learning, whatever the form or organization of the school.

Over the past five years, Diane Ravitch has become more of an education reform warrior than a credible scholar, especially when she ventures well outside the field of educational history. Since discovering Twitter five years ago, she has become a serial tweeter, spewing out snappy 140-character comments, and regularly goes ad hominem with those holding opposing views. Standing on the Save Our Schools rally platform on the Ellipse in July 2011, Ravitch spoke for only eight minutes, all in punchy protest sentences. Slogans and sloganeering, as Brian Crittenden reminded us back in 1969, are no substitute for serious thinking and confronting the many contradictions in educational discourse.

American education reform today is a contested terrain occupied by tribalists. Side-stepping critical education reform issues such as teacher quality that might offend camp followers is right out of character for Ravitch, the once independently minded public intellectual. Former reform allies like Frederick Hess, a respected conservative policy analyst, who welcomed The Death and Life of the Great American School System, now chastise her for becoming a virtual mouthpiece of the teachers’ unions. Whether you think education is in crisis or not, Ravitch’s latest books provide an inventive, perplexing re-interpretation, but will do little to help us overcome the current impasse.

Why is American education reform such a polarized field of public policy?  What happens to respected scholars like Diane Ravitch when they get absorbed into the Manichean world view?  Whatever happened to Ravitch’s deep commitment to putting higher standards and curriculum reform before teacher autonomy and advocacy? Will the tribalism fostered in the School Wars ultimately lead anywhere?

Read Full Post »

Today the Organization for Economic Cooperation and Development (OECD) has succeeded in establishing the Program for International Student Assessment (PISA) test and national rankings as the “gold standard” in international education. Once every three years since 2000, PISA provides us with a global benchmark of where students 15 years of age rank in three core competencies: reading, mathematics, and science. Since its inception, United States educators have never been enamoured with international testing, in large part because American students rarely fare very well.

So, when the infamous OECD PISA Letter was published in early May 2014 in The Guardian and later The Washington Post, the initial signatory list contained the names of some familiar American anti-testing crusaders, such as Heinz-Dieter Meyer (SUNY, Albany), David Berliner (Arizona State University), Mark Naison (BAT, Fordham University), Noam Chomsky (MIT) and Alfie Kohn, the irrepressible education gadfly. That letter, addressed to Andreas Schleicher, OECD, Paris, registered serious concerns about “the negative consequences of the PISA rankings” and appealed for a one-cycle (three-year) delay in the further implementation of the tests.

The global campaign to discredit PISA earned a stiff rebuke in Canada. On June 11 and June 18, 2014, the C.D. Howe Institute released two short commentaries demonstrating the significant value of PISA test results and effectively countering the appeal of the anti-PISA Letter. Written by Education Fellow John Richards, the two-part report highlighted the “Bad News” in Canada’s PISA results and then proceeded to identify What Works (specific lessons to be learned) based upon an in-depth analysis of the triennial tests. In clear, understandable language, Richards identified four key findings to guide policies formulated to “put Canadian students back on track.”

The call for a pause in the PISA tests was clearly an attempt to derail the whole international movement to establish benchmarks of student performance and some standard of accountability for student achievement levels in over 60 countries around the world. It was mainly driven by American anti-testers; the two Canadian-based signatories were radical, anti-colonialist academics, Henry Giroux (English and Cultural Studies, McMaster University) and Arlo Kempf (Visiting Professor and Program Coordinator, School and Society, OISE).

Leading Canadian educationists like Dr. Paul Cappon (former CEO, Canadian Council on Learning) and even School Change guru Michael Fullan remain supporters of comparative international student assessments. That explains why no one of any real standing or clout from Canada was among the initial group and why, by late June, only 32 Canadian educationists could be found among the 1,988 signatories from all over the globe. Most of the home-grown signatories were well-known educators in what might be termed the “accountability-free” camp, many of them, like E. Wayne Ross (UBC) and Marc Spooner (U Regina), fierce opponents of “neo-liberalism” and its supposed handmaiden, student testing.

John Richards’ recent C.D. Howe commentaries should, at least temporarily, silence the vocal band of Canadian anti-testers. His first commentary made very effective use of PISA student results to bore deeply into our key strengths and issues of concern, province by province, focusing particularly on student competencies in mathematics. That comparative analysis is fair, judicious, and research-based, in sharp contrast to the honey-coated PISA studies regularly offered up by the Council of Ministers of Education, Canada (CMEC).

The PISA results tell the story. While he finds Canadian students overall “doing reasonably well,” the main concern is statistical declines in all provinces in at least one subject, usually either mathematics or reading. Quebec leads in Mathematics, but in no other subject. Two provinces (PEI and Manitoba) experienced significant declines in all three subject areas. Performance levels have sharply declined (by over 30 points) in mathematics in both Manitoba and Canada’s former leader, Alberta. Such results are not a ringing endorsement of the Mathematics curriculum based upon the Western and Northern Canada Protocol (WNCP).

The warning signs are, by now, well known, but the real value in Richards’ PISA results analysis lies in his very precise explanation of the actual lessons to be learned by educators. What really matters, based upon PISA results, are public access to early learning programs, posting of school-level student achievement results, paying professional-level teacher salaries, and the competition provided by achievement-oriented private and independent (not-for-profit) schools. Most significantly, his analysis confirms that smaller class sizes (below 20 pupils per class) and increased mathematics teaching time have a negligible effect on student performance results.

The C.D. Howe PISA results analysis hit home with The Globe and Mail, drawing a favourable editorial, but was predictably ignored by the established gatekeepers of Canada’s provincial education systems. Why the reluctance to confront such research-based, common-sense findings? “Outing” the chronic under-performance of students from certain provinces (PEI, Manitoba, New Brunswick, and Nova Scotia) is taboo, particularly inside the tight CMEC community and within self-referenced Canadian Education Association (CEA) circles. For the current Chair of CMEC, Alberta Education Minister Jeff Johnson, any public talk of Alberta’s precipitous decline in Mathematics is anathema.

Stung by the PISA warning shots, Canada’s provincial education gatekeepers tend to be less receptive to sound, research-based, practical policy correctives. That is a shame, because the John Richards reports demonstrate that both “sides” in the ongoing Education War are half-right and that, by mixing and matching, we could fashion a much more viable, sustainable, effective policy agenda. Let’s tear up the tiresome Neo-Con vs. Anti-Testing formulas and re-frame education reform around what works: broader access to early learning, open accountability for student performance levels, respectable, professional-level teacher salaries, and useful competition from performance-driven private and independent schools.

What’s the recent American public noise over “PISAfication” all about, anyway? Why do so many North American educators still tend to dismiss the PISA test and the sound, research-based studies stemming from the international testing movement? To what extent do John Richards’ recent C.D. Howe Institute studies suggest the need for a total realignment of provincial education reform initiatives?

Read Full Post »
