
Posts Tagged ‘Daisy Christodoulou’

Parents, students and educators are beginning to confront the hidden costs of the COVID-19 pandemic in Canadian K-12 education.  The initial school shutdown from March to June 2020 precipitated a prolonged period of improvised and spotty ‘home learning,’ followed by further experiments in hybrid blended learning, compounded by extended holiday breaks carrying on into 2021.  All of this will have profound implications for student learning and generate new priorities for the ‘Great Reset’ in 2021.

What’s gradually emerging, from U.S. state to state, Canadian province to province, is a clearer picture of the “COVID-19 slide” setting back learning for all students, but particularly for those from disadvantaged, racialized and marginal communities. Postponing provincial assessments simply delays the time of reckoning.

Looking ahead, it’s time to confront the profound impact of the COVID-19 onslaught on the ‘pandemic generation’ of students and educators scrambling to adjust to unexpected ‘pivots’ from one instructional mode to another, amounting to ‘on-again, off-again’ regular classroom instruction.

Signs of the COVID-19 slide are beginning to emerge as student impact studies gradually surface, albeit mostly in U.S. states rather than here in Canada. Early on, an April 2020 Northwest Evaluation Association (NWEA) study rang the alarm bell with some outsized statistical projections of potential learning loss. A McKinsey & Company research summary published in December 2020 provided more reliable estimates of the total potential learning loss to the end of the school year in June 2021.

While the initial worst-case NWEA forecast scenarios have been averted, the cumulative learning loss could still be substantial, especially in mathematics, with students, on average, likely to lose 5 to 9 months of learning by year’s end. Among Black American students, the learning loss in mathematics averages 6 months to a year. “While all students are suffering,” the McKinsey & Company researchers claim, “those who came into the pandemic with the fewest academic opportunities are on track to exit with the greatest learning loss.”

Comparable Canadian research on learning loss is hard to find, and national media coverage, echoing education faculty research agendas, tends to focus more on the impact on student well-being than on evidence of learning loss. One CBC Radio podcast, posted in November 2020 and billed as “COVID Slide’s Impact on Kids’ Learning,” presented some evidence of the problem, then defaulted to standard pre-pandemic responses, dismissing learning loss concerns and instead focusing on children’s anxieties, mindfulness exercises, and reducing stress through broader and ‘softer’ student assessments.

Two promising Alberta research studies, cited in passing in the CBC Radio podcast, should not be overlooked. Conducted by University of Alberta educational psychology professor George Georgiou, those studies demonstrate that young readers are lagging behind the learning curve in the wake of the pandemic. 

The first study compared September 2020 literacy test scores on reading accuracy, fluency and comprehension with results from the previous three years. Students in Grades 2 and 3 performed consistently worse across the three measures and, on average, performed 6 to 8 months below their grade level.

Professor Georgiou’s second study, funded by Alberta Education, followed 1,000 Grade 1 students on multiple reading tasks from September 2019 until February 2020. He used those results to identify at-risk students and then tested them again in September 2020. Just 85 of 409 children, or roughly 20 per cent, were reading at an average level. Some 60 per cent of the children scored lower in September than in January of 2020, before the pandemic.

School shutdowns and the default to online learning have contributed to the problem. Effective early reading instruction requires face-to-face interventions, preferably with literacy specialists, and that was missing during home learning. No one was prepared for the abrupt shift from in-person to online learning, nor were most elementary teachers skilled enough to implement alternative digital learning programs. 

International research corroborates the early American and Alberta findings and demonstrates conclusively that school closures contributed to an actual COVID slide. In Belgium, where schools closed for 3 months in 2020, learning losses were identified in the final year of Primary School in both mathematics and the Dutch language, particularly in schools with  disadvantaged student populations.

A Baseline Writing assessment for Year 7 pupils in the United Kingdom, where schools were shuttered for 2 months, revealed that students had actually gone backwards. The mean score for Year 7 pupils in November 2020 was roughly equivalent to the Year 5 standard in November 2019. The Year 7 cohort, according to UK writing expert Daisy Christodoulou, were 22 months below their expected level of competency in writing.

Setting new priorities will be critical in the COVID-19 education reset and in preparing for the 2021-22 school year. Shoring up the educational foundations in mathematics and reading will be critical in countering the COVID slide, and completing the transition to a technology-enabled system is now a matter of urgent necessity. Some of the more exciting innovations can wait while the shaken system requires stabilizers, socio-economic disparities grow, and students need help to re-engage and ‘catch up’ on post-pandemic learning.

What’s standing in the way of addressing the COVID-19 Slide in Student Learning? Why is most of the serious research into COVID “Learning Loss” coming from American education authorities, policy think-tanks, and independent research organizations? If provincial testing is suspended in 2020-21, how will we ever know the impact of the repeated school disruptions? What’s standing in the way of tackling the problem and embarking upon ‘learning recovery’ plans?

Read Full Post »

Mr. Zero to Hero: Alberta Physics teacher Lynden Dorval, May 2012

Suspending Alberta diploma exams in October and November 2020 is understandable in the midst of a global pandemic, but it will have unintended consequences. Replacing exams with sound, reliable, standards-based and replicable alternative forms of summative assessment is a formidable challenge. Taking a longer-term view, it will most likely only exacerbate the gradual and well-documented slide in the province of Alberta’s graduation standards.

While some students and their parents retained the right to write exams, the die is cast, and the move may also sound the death knell for final exams in a province once hailed for having Canada’s best education system. Eliminating final exams, as demonstrated in my new book The State of the System, has hidden, longer-term consequences, significantly contributing to the ‘big disconnect’ between rising student attainment (i.e., graduation rates and averages) and stagnating or declining achievement.

Critics of exams contend that formal, time-limited assessments cause stress and can affect student well-being. Such claims are disputed by Canadian teen mental health experts, including Stan Kutcher and Yifeng Wei, as well as cognitive scientists like Erin Maloney who cite evidence-based research demonstrating that tests and exams are examples of the “normal stress” deemed essential to healthy human development.

Sound student evaluation is based upon a mix of assessment strategies, including standardized tests and examinations. Testing remains a critical piece, countering more subjective forms of assessment. UK student assessment expert, Daisy Christodoulou, puts it this way: “Tests are inhuman – and that is what is good about them.”

While teacher-made and evaluated assessments appear, on the surface, to be more gentle and fairer than exams, such assessments tend to be more impressionistic, not always reliable, and can produce outcomes less fair to students. They are also laden with potential biases.

A rather extensive 2015 student assessment literature review, conducted by Professor Rob Coe at Durham University, identifies the typical biases. Compared to standardized tests, teacher assessment tends to exhibit biases against exceptional students, specifically those with special needs, challenging behaviour, language difficulties, or personality types different from their teacher’s. Teacher-marked evaluations also tend to reinforce stereotypes, such as the notions that boys are better at math or that racialized students underperform in school.

Grade inflation has been an identified and documented concern in high schools since the 1980s, long before the current pandemic education crisis. Two Canadian sociologists, James Côté and Anton Allahar, authors of Ivory Tower Blues (2007), pinpointed the problem of high school students being “given higher grades for less effort” and expecting the same in Ontario universities. One authoritative study, produced at Durham University in the UK, demonstrated that an ‘A’ grade in 2009 was roughly equivalent to a ‘C’ grade in 1980.

What has happened to Alberta high school graduation standards? Back in 2011, Maclean’s magazine ranked Alberta as Canada’s best system of education based upon the performance of its graduating students. With compulsory provincial exams in place in the core subjects, some 20 per cent of Alberta’s Grade 12 students achieved an ‘A’ average, compared to roughly 40 per cent of students across Ontario high schools.

Grading standards in Alberta were demonstrably more rigorous than those in Ontario and other provinces. The University of Calgary’s Dean of Arts described Ontario high schools as being engaged in “an arms race of ‘A’s.” A 2011 University of Saskatchewan admissions study of 12,000 first-year university students’ grades reported that Alberta high school graduates dropped 6.4 percentage points, compared to as much as 19.6 points for those from other provinces. In 2017-18, a leaked University of Waterloo admissions study revealed that the average Ontario student’s grades dropped 16 per cent.

‘No fail’ and ‘no zero’ student assessment policies proliferated in the early 2000s, and most of the resistance stemmed from secondary school teachers, particularly in Alberta. Senior-grade subject teachers in Mathematics and Science were at the forefront of the underground battles over teachers’ autonomy in the classroom. Constraining teachers from assigning ‘zeros’ for incomplete or missing work proved to be the biggest bone of contention.

It flared up in Alberta in May 2012 when Edmonton physics teacher Lynden Dorval, a thirty-three-year veteran with an unblemished teaching record, was suspended, then fired, for continuing to award zeroes, refusing to comply with a change in school assessment policy. It all came to a head when the school board’s computer-generated reports substituted blanks for zeroes. An Alberta tribunal found that Dorval gave students fair warning, and that his methods worked because he had “the best record in the school and perhaps the province for completion rates.” The previously obscure Alberta Physics teacher went from “zero to hero” when he was exonerated, but it proved to be a small victory on the slippery slope to dumbed-down standards.

Grade inflation seeped into Alberta high schools when the province reduced the weighting of diploma exams from 50 per cent to 30 per cent of the final subject grade. In June 2016, under the new policy, 96 per cent of Math 30-1 students were awarded a passing grade, compared to 71 per cent of those who took the diploma exam, a gap of 25 percentage points. The same pattern was evident in Nova Scotia up until June 2012, when the province eliminated all Grade 12 provincial exams. Since Nova Scotia moved its provincial exams from Grade 12 to Grade 10, that province’s graduation rates have skyrocketed from 88.6 per cent to 92.5 per cent in 2014-15.
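To see why the weighting change matters, consider a quick back-of-the-envelope calculation. The course and exam marks below are invented purely for illustration; only the 50 per cent and 30 per cent weightings come from the policy change described above.

```python
# Hypothetical marks, purely for illustration; only the 50% and 30%
# exam weightings come from the Alberta policy change described above.
school_mark = 72   # teacher-awarded course mark (hypothetical)
exam_mark = 48     # diploma exam mark (hypothetical)

final_50 = 0.5 * school_mark + 0.5 * exam_mark   # old 50/50 blend
final_30 = 0.7 * school_mark + 0.3 * exam_mark   # new 70/30 blend

print(f"50% exam weight: final grade {final_50:.0f}")   # 60
print(f"30% exam weight: final grade {final_30:.0f}")   # 65
```

The same weak exam performance costs this hypothetical student far less under the 30 per cent weighting, which is exactly how inflated school-awarded marks come to dominate the blended grade.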

While far from perfect, exams provide not only a more rigorous form of summative assessment, but also a fairly reliable benchmark of how students perform across a provincial system. It is, after all, next to impossible to establish comparability or benchmarks for alternatives such as uneven and highly idiosyncratic ‘demonstrations of learning.’

The Alberta system, once rated Canada’s best on the basis of its graduation standards, is gradually losing its edge. Suspending the diploma exams in 2020-21 may turn out to be a temporary blip or stand as further evidence of an abandonment of more rigorous graduation standards.

Why did Alberta lose its undisputed status as Canada’s best education system? How important were final exams in solidifying that province’s graduation standards? What is the connection between final diploma exams and two key performance indicators — grade inflation and graduation rates? Why have the universities remained relatively silent while evidence accumulates testifying to the softening of graduation standards?

Read Full Post »

The ongoing COVID-19 crisis may be claiming another victim in one of Canada’s leading education provinces – sound, reliable, standards-based and replicable summative student assessment. After thwarting a 2017-18 Learning Province plan to subvert the province’s Grade 3 provincial student assessment and broaden the ‘measures of success,’ Ontario’s Doug Ford government and its education authorities appear to be falling into a similar trap.

What’s most unexpected is that the latest lubricant on the slippery slope toward ‘accountability-free’ education may well have been applied in Doug Ford’s Ontario under a government ostensibly committed to ‘back-to-basics’ and ‘measurable standards’ in the K-12 school system.


All K-12 provincial tests, administered by the Education Quality and Accountability Office (EQAO), were the first to go, rationalized as a response to the pandemic and its impact upon students, teachers, and families. More recently, Ontario’s education ministry opened the door to cancelling final exams by giving school boards the right to replace exam days with in-class instructional time.


Traditional examinations, the long-established benchmark for assessing student achievement, simply disappeared, for the second assessment cycle in a row, going back to the onset of the COVID-19 outbreak. Major metropolitan school districts, led by the Toronto District School Board, Peel District School Board and their coterminous Catholic boards, jumped in quickly to suspend exams in favour of what were loosely termed “culminating tasks” or “demonstrations of learning.”


Suspending exams was hailed in a Toronto Star news report as “a rare bright spot” for Ontario high school students. Elsewhere the decision to eliminate exams, once again, elicited barely a whimper, even from the universities. “Nobody’s missed standardized tests or final exams,” University of Ottawa professor Andy Hargreaves noted rather gleefully during the October 29-30 Canadian EdTech Summit.


Suspending examinations has hidden and longer-term consequences not only for students and teachers, but for what remains of school-system accountability. What’s most surprising, here in Canada, is that such decisions are rarely evidence-informed or predicated on the existence of viable, proven and sustainable alternatives.


Proposing to substitute culminating projects labelled as “demonstrations of learning” is based upon the fallacious assumption that teacher assessments are better than final exams. Cherry-picking a recent sympathetic research study, such as a May 2019 Journal of Child Psychology and Psychiatry article highlighting exam stress, may satisfy some, but it is no substitute for serious research into the effectiveness of previous competency-based “culminating activity” experiments.


Sound student evaluation is based upon a mix of assessment strategies, ranging from formative (daily interaction and feedback) assessment to standardized tests and examinations (summative assessment). It is highly desirable to base student assessment upon a suitable combination of reasonably objective testing instruments as well as teacher-driven subjective assessment. UK student assessment expert, Daisy Christodoulou, puts it this way: “Tests are inhuman – and that is what is good about them.”


Teacher-made and evaluated assessments appear, on the surface, to be more gentle and fairer than exams, but such assumptions can be misleading, given the weight of research supporting “level playing field” evaluations. The reality is that teacher assessments tend to be more impressionistic, not always reliable, and can produce outcomes less fair to students.


Eliminating provincial tests and examinations puts too much emphasis on teacher assessment, a form of student evaluation with identified biases. A rather extensive 2015 student assessment literature review, conducted by Professor Rob Coe at the Durham University Centre for Evaluation and Monitoring, identifies the typical biases. Compared to standardized tests, teacher assessment tends to exhibit biases against exceptional students, specifically those with special needs, challenging behaviour, language difficulties, or personality types different from their teacher’s. Teacher-marked evaluations also tend to reinforce stereotypes, such as the notions that boys are better at math or that racialized students underperform in school.


Replacing final exams with teacher-graded ‘exhibitions’ or ‘demonstrations of learning mastery’ sounds attractive, but it is fraught with potential problems, judging from their track record since their inception in the late 1980s. Assessing student competencies through ‘demonstrations of learning,’ an approach dreamed up by Dr. William Spady, the North American father of Outcome-Based Education, has a checkered history. Grappling with the OBE system and its time-consuming measurement of hundreds of competencies ultimately turned classroom teachers against it.


A more successful version of DOLM (Demonstration of Learning Mastery), developed by Deborah Meier, Theodore Sizer and the Coalition of Essential Schools (1988-2016), was piloted in small schools with highly-trained teachers. Such exhibitions were far from improvisational but rather “high stakes, standards aligned assessments” which aimed at securing “commitment, engagement and high-level intellectual achievement” and were conceived as “a fulcrum for school transformation.” Systemic distrust, aggravated by testing and accountability, Meier conceded, “rendered attempts to create such contexts infertile.”


Constructing summative evaluation models to replace final exams is not easy, and it has defeated waves of American assessment reformers. The Kentucky Commonwealth Accountability and Testing System (CATS), 2007-2008, and its predecessor, KIRIS (1992-1998), serve as a case in point. Like most of these first-generation reforms, the KIRIS experiment was widely considered a failure. Its performance-based tools were found to be unreliable, professional development costs ran too high, and two elements of the program, Mathematics Portfolios and Performance Events, were summarily abandoned. Writing portfolios continued under CATS, but a 2008 audit revealed wide variations in marking standards and lengthy delays in returning the marked results of open-answer questions.


Most of the recent generation of initiatives were sparked by a January 2015 white paper, “Performance Assessments: How State Policy Can Advance Assessments for 21st Century Learning,” produced by two leading American educators, Linda Darling-Hammond and Ace Parsi. Seven American states were granted a waiver under the Every Student Succeeds Act (ESSA) to experiment with such competency-based assessment alternatives.


Constructing a state model compliant with established national standards in New Hampshire proved to be an insurmountable challenge. While supported by Monty Neill and FairTest advocacy forces, New Hampshire’s Performance Assessments for Competency Education (PACE) system ran into significant problems trying to integrate Classroom-Based Evidence (CBE) with state testing criteria and expectations. Establishing evaluation consistency and “comparability” across schools and districts ultimately sank the experiment. It was anchored in state standards and required external moderation, including re-scoring of classroom-based work. Serving two masters created heavier teacher marking loads and made the system unsustainable. Federal funding for such competency-based assessment experiments was cut in December 2019, effectively ending support for that initiative.


Provincial tests and exams exist for a reason and ensure that we do not fly blind into the future. Replacing final exams with a patchwork solution is not really a wise option this school year. Simply throwing together culminating student activities to replace examinations is, judging from past experiments, most likely a recipe for inconsistency, confusion, and ultimate failure.


Teachers will, as always, do their best and especially so given the current turbulent circumstances. Knowing what we know about student assessment, let’s not pretend that the crisis measures are better than traditional and more rigorous systems that have stood the test of time.

What are the fundamental purposes of summative student assessment? Should provincial tests and final exams be suspended during the second year of the COVID-19 pandemic? Where’s the research to support the effectiveness of alternative ‘demonstration of learning’ strategies? Are we now on the slippery slope toward ‘accountability-free’ education?

Read Full Post »

Laptops, tablets, and SMART boards were all hailed in the early 2000s as the harbingers of a new era of technology-driven educational transformation. They were just the latest in successive waves of technological innovation forecast to improve K-12 education. Billions of education dollars have been invested in education technology in recent decades, and yet a 2015 Organization for Economic Cooperation and Development (OECD) report demonstrated that such investments have led to “no appreciable improvements” in educational achievement.

As a new high school English teacher in London, UK, back in 2007-08, Daisy Christodoulou was typical of most educators at the time. She was wowed by whiteboard technology and committed to taking advantage of the latest ed tech gadget to facilitate interactive student learning. Once in the classroom, in spite of her best intentions, Daisy used it as a regular classroom projector and rarely touched the more sophisticated features. She was not alone, because that’s exactly what most of us did in those years.

Optimistic forecasts of the transformative power of classroom computers and Internet access never materialized. Spending on IT in U.K. schools quadrupled during the SMART Board phase, but it was a bust, dismissed in 2018 as another example of “imposing unwanted technology on schools.” A $1.3-billion 2013 Los Angeles Unified School Board deal with Apple and Pearson Learning to supply iPads was jettisoned a year later because of security vulnerabilities, incomplete curricula, and inadequate teacher training. Many onlookers wondered: if the giants can’t make it work, can anyone?

The promised ed-tech revolution that never seems to arrive is the central focus of Daisy Christodoulou‘s latest book, Teachers vs. Tech?, released just as the COVID-19 school shutdown thrust millions of teachers into the largely uncharted territory of e-learning on the fly.  It also raises the vitally important, but discomforting question: Why has education technology failed in the past, and is it destined to fail in the future? We may well find out with the biggest global experiment in ed-tech e-learning now underway.

Christodoulou’s Teachers vs. Tech? tackles what has become the central issue in the unsettling and crisis-ridden  COVID-19 education era.  It’s an instantly engaging, highly original, and soundly researched guide to identifying the obstacles to harnessing ed-tech in schools, a deadly-accurate assessment of why teachers retain a healthy skepticism about the marvels of ed tech, and a constructive prescription for re-purposing those 21st century machines.

What’s absolutely refreshing about Teachers vs Tech? is the author’s consistent commitment to reasonably objective, evidence-based analysis in a field dominated by tech evangelists and tech fear-mongers. Common claims that teachers are conservative and change-averse by nature, or that education is a “human” enterprise immune to technology, do not completely explain the resistance to ed tech interventions. New technologies come with embedded educational pedagogy, she contends, that embraces pseudoscientific theory and cuts against the grain for most classroom teachers.

Christodoulou effectively challenges ed tech innovations free-riding on unfounded educational theories. Over the past 70 years or so, she correctly reports, cognitive science and psychology have discovered much about how the human mind works and how learning happens. Many of these discoveries came out of scientific investigations associated with Artificial Intelligence (AI) and information technology. What’s peculiar about this is, in Christodoulou’s words, “the gap between what we know about human cognition and what often gets recommended in education technology.”

Education technology is rife with fancy gadgets and fads, most of which are promoted by ed tech evangelists, school change theorists, or learning corporations. The author finds it very odd that “the faddiest part of education” is the aspect supposedly rooted in scientific research. “Far from establishing sound research-based principles,” she writes, “technology has been used to introduce yet more pseudoscience into the education profession.” There’s still hope, in her view, that the evidence-based research underpinning learning will eventually find its way into the new technologies.

She does not shy away from tackling the most significant and disputed issues in the integration of education technology into teaching and learning. What are the biggest lessons from the science of learning?  Can technology be effectively used to personalize learning? What’s wrong with saying ‘Just Google It’?  How can technology be used to create active learning? Do mobile smart devices have any place in the classroom? Can technology be employed to build upon the expertise of teachers? How can technology improve student assessment for teachers? All of these questions are answered with remarkably clear, well-supported answers.

The book makes a strong and persuasive case for incorporating the science of learning into technology-assisted classroom teaching. Drawing upon her first book, Seven Myths about Education (2013), Christodoulou explains how cognitive science has shed new light on the efficacy of explicit instruction for improving student learning. Direct instruction is judged to be more effective in developing long-term memory to overcome the limitations of short-term memory. Her plea is for ed tech and its associated software to tap more into that form of pedagogy.


Teachers will be drawn to her thought-provoking chapter on the use and misuse of smart devices in today’s classrooms. Jumping right into the public debate, Christodoulou demonstrates how today’s mobile phones interfere with learning because they are “designed to be distracting” and absorb too much time inside and outside of school. Citing a 2017 meta-review of the research produced by Paul A. Kirschner and Pedro De Bruyckere, she points out the “negative relationship” between academic achievement and social network activity among young people. Popular claims that adolescents are better at “multi-tasking” are judged to be completely unfounded. She favours, on balance, either strictly limiting smart devices or convincing the tech giants to produce devices better suited to teaching and learning environments.

Christodoulou identifies, with remarkable precision, what technology can bring to teaching and student assessment. Teachers, she shows, have real expertise in what works with students, but they also have blind spots. While there is no substitute for human interaction, ed tech can help teachers to develop more consistency in their delivery and to tap into students’ long-term memory.

One of the author’s greatest strengths is her uncanny ability to discover, home in on, and apply technological solutions that make teaching more meaningful and fulfilling and less onerous when it comes to workload and paperwork. Spaced repetition algorithms are highlighted as a specific example of how technology can aid teachers in helping students to retain knowledge. As Education Director of No More Marking, she makes a compelling case for utilizing online comparative judgement technology to improve the process and reliability of student grading.
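For readers curious about the mechanics, spaced repetition is at heart a simple scheduling algorithm: material a student recalls correctly is reviewed at progressively longer intervals, while anything forgotten drops back into frequent rotation. The minimal Leitner-style sketch below is offered purely as an illustration; the box intervals and names are my own assumptions, not the algorithm behind any particular product Christodoulou discusses.

```python
from datetime import date, timedelta

# Days between reviews for each Leitner box (assumed values, for illustration).
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

class Card:
    """One item of knowledge (e.g., a vocabulary term) being memorized."""
    def __init__(self, prompt, answer):
        self.prompt = prompt
        self.answer = answer
        self.box = 1              # every new card starts in box 1 (daily review)
        self.due = date.today()

    def review(self, answered_correctly, today=None):
        """Promote the card on success, demote it to box 1 on failure,
        and schedule the next review accordingly."""
        today = today or date.today()
        if answered_correctly:
            self.box = min(self.box + 1, max(INTERVALS))
        else:
            self.box = 1
        self.due = today + timedelta(days=INTERVALS[self.box])

def due_cards(cards, today=None):
    """Return the cards the student should review today."""
    today = today or date.today()
    return [c for c in cards if c.due <= today]

# Usage: a one-card deck reviewed successfully once.
deck = [Card("photosynthesis", "how plants convert light into chemical energy")]
for card in due_cards(deck):
    card.review(answered_correctly=True)
    print(card.prompt, "-> next review on", card.due)
```

Even this toy version captures the teacher-friendly appeal: the software handles the bookkeeping of who needs to review what, and when.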

Christodoulou’s Teachers vs Tech? provides a master class on how to clear away the obstacles to improving K-12 education through the effective and teacher-guided use of technology. Popular and mostly fanciful ed tech myths are shredded, one at a time, and summarized succinctly in this marvelous concluding passage:

“Personalization is too often interpreted as being about learning styles and student choice. The existence of powerful search engines is assumed to render long-term memory irrelevant. Active learning is about faddish and trivial projects. Connected devices are seen as a panacea for all of education’s ills, when they may just make it easier for students to get distracted.”

Implementing ed tech that flies in the face of, or discounts, teacher expertise lies at the heart of the problem. “Successful disruptive innovation solves a problem better than the existing solution,” Christodoulou claims. “Too many education technology innovations just create new problems.” ‘Looking it up on Google,’ she points out, is actually just “a manifestation of discovery learning, an idea which has a long history of failure.”

Technology skeptics expecting another critique of the dominance of the technology giants will be disappointed. The title, Teachers vs. Tech?, ends with a well-placed question mark.  While most of the current ed tech innovations perpetuate an “online life” that is “not on the side of the evidence,” Daisy Christodoulou shows conclusively that we (educators) have only ourselves to blame. “If they’re promoting bad ideas,” she notes, ” it’s at least partly because we’ve made it easy for them to do so.”

What’s the source of the underlying tension between teachers and education technology?  What has contributed to teachers’ skepticism about the marvels of ed-tech innovation?  How was the teachers vs tech tension played out during the COVID-19 school shutdown?  If the latest ed-tech toys and software were programmed with educationally sound, evidence-based pedagogy, would the response of educators be any different?  

Read Full Post »

A tectonic shift is underway in global K-12 education in response to the rapid and unpredictable spread of the frightening COVID-19 pandemic. Schools, colleges and universities have shut down almost everywhere, leaving students, teachers and families in uncharted territory. With our educational institutions closed, parents are stepping up to provide improvised ‘homebound’ education and educators are abruptly transitioning, almost by default, to e-learning in the form of distance education or video-enhanced online programs. Provincial school authorities are playing catch-up and trotting out hastily-packaged Learn at Home distance learning programs to fill the extended interruption of regular, in-person classes.

Alberta’s Chief Medical Officer of Health, Dr. Deena Hinshaw, gave the first signal on Saturday March 14 of a significant change in the official public health response to the pandemic. Public health officials right across Canada are now routinely forecasting lengthy school closures beyond two weeks and possibly until the end of the year.

Closing schools for an additional two weeks after March break came first, and now educators are scrambling to make the sometimes rough and difficult transition to providing e-learning for students unable to report to ‘bricks-and-mortar’ schools. Some school districts may be able to patch together short-term e-learning modules, but few are prepared for the shift to online learning on a system-wide scale.

The global COVID-19 pandemic looks like the realization of the wildest dream of the purveyors of technology-driven “disruptive innovation.” Almost overnight, online learning is no longer competing with face-to-face, in-person classes, because those classes are cancelled. Now, it’s down to two options — distance and online teaching or nothing at all. It’s happening so fast that even champions of radical technology innovation such as Michael B. Horn of the Christensen Institute are fearful that it may actually backfire.

Transitioning online cannot happen overnight. Recognized experts on digital learning, including the University of Limerick’s Ann Marcus-Quinn, warn that technology is essentially a tool and that transitioning is far more complex than simply swapping traditional textbook content for digital material.

“Online teaching takes preparation and planning,” says Michael K. Barbour, co-author (with Randy LaBonte) of the annual report The State of Online Learning in Canada. It requires “the careful consideration of the tools,” their strengths and limitations, and the adoption of “pedagogical strategies” best suited to the means of delivery. “The situation we currently find ourselves in is one of triage,” Barbour claims. “It isn’t online teaching, it is remote teaching in an emergency situation.”

Closing schools makes good sense in the midst of acute public health emergencies if it helps to save lives. Yet it does not necessarily have to mean suspending all teacher-guided instruction and learning.  While Alberta announced on March 15, 2020 that all of its K-12 schools and day care centres were closed indefinitely, elementary and secondary teachers are at school and engaged in developing plans for e-learning to support students.  In the case of the Calgary Board of Education, the top priority became gearing up to offer learning online, especially for high school students in their Grade 12 graduating year.

Much can be learned from the abrupt change to distance learning in countries ravaged by the pandemic.  Surveying the challenges faced by China over the first month of school closures, Adam Tyner, a former American visiting scholar at Shanghai’s Fudan University, identified  some vitally important lessons.

  • Expand your learning management system capabilities so that teachers can post videos and interactive content, students can submit work, and teachers and students can easily engage in ongoing communication. Upgrade your limited, ‘bare-bones’ student information management system by adding a new module, and hold teacher training sessions to bring teachers up to speed on how to utilize the tech tools;
  • Increase your bandwidth and assume that not all students own smartphones or have computers at home.  Regular television stations can be required to air community programming and to include televised elementary school lessons, on a rotating basis, grade-by-grade during the daytime hours. Secure free internet access, for the duration of the crisis, following the lead of major Chinese providers such as Huawei.
  • Encourage teacher experimentation with every means of communication to maintain active links with students.  Lessons and teacher-guided activities can be delivered in small videos or on podcasts, and mini-lessons or discussions carried out utilizing Zoom and other commercial apps.
  • Address the digital access gap: Purchasing 4G-equipped tablets and service may help to bridge the “digital divide” between ‘haves’ and ‘have-nots’ when it comes to access to technology and the Internet.
  • Plan for Learning-Challenged Students: Switching from in-class to distance online learning is jolting for many students, and particularly for those who are struggling, need more attention, and perform better in guided activities.
  • Tailoring E-Learning for High School: Teenage students experiencing more freedom than usual need more motivational strategies, ongoing monitoring, and accountability to keep them on track with their learning plans.

Ministries of education and school leaders are gradually recovering from the school culture shock delivered by a totally unexpected and dire public health emergency. Some school district superintendents have lost their bearings and continue to promote conventional system-bound thinking in a rapidly changing educational order. With students being educated at home during the regular school hiatus, e-learning has emerged, almost by default. First off the mark were Alberta and New York City schools; Ontario is now on board with the March 20, 2020 launch of the first phase of its Learn at Home e-learning initiative.

New challenges are surfacing as high-tech entrepreneurs and dominant learning corporations such as Nelson LC see an opportunity to expand their market share in K-12 education.  Educational leaders, closely aligned with learning corporations and working through the C21Canada CEO Academy, see an opening to advance “21st century learning” as the best preparation for the workplace of the future. Teachers’ concerns, on this score, about the encroachment of corporate interests and the fuzziness of such programs are well founded.

Educational technology has its place when it’s serving the needs of teachers rather than complicating and overburdening their working lives. A brand-new book, Daisy Christodoulou’s Teachers vs. Tech?, tackles the question squarely and demonstrates its value, particularly in the case of spaced repetition adaptive algorithms and comparative judgement assessment.
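Comparative judgement, for its part, replaces absolute marking with repeated ‘which of these two pieces is better?’ decisions, and a statistical model then converts the pattern of wins into a quality scale. The sketch below illustrates that second step using a standard Bradley-Terry model fitted with a simple iterative update; the essay names and the add-half smoothing are assumptions for illustration, and this is not a description of No More Marking’s actual system.

```python
from collections import defaultdict

def comparative_judgement_scores(judgements, n_iter=200):
    """Turn pairwise 'winner beat loser' judgements into a score per script.

    judgements: list of (winner, loser) tuples from human judges.
    Returns {script: strength}; higher means judged better overall.
    Uses the classic iterative (minorization-maximization) update for the
    Bradley-Terry model, with +0.5 smoothing so scripts that never win
    keep a small positive score instead of collapsing to zero.
    """
    wins = defaultdict(float)
    pair_counts = defaultdict(int)
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    strength = {s: 1.0 for s in scripts}
    for _ in range(n_iter):
        updated = {}
        for s in scripts:
            denom = 0.0
            for t in scripts:
                if t == s:
                    continue
                n = pair_counts[frozenset((s, t))]
                if n:
                    denom += n / (strength[s] + strength[t])
            updated[s] = (wins[s] + 0.5) / denom if denom else strength[s]
        mean = sum(updated.values()) / len(updated)
        strength = {s: v / mean for s, v in updated.items()}  # keep scores on a common scale
    return strength

# Usage: four quick judgements on three hypothetical essays.
judgements = [("essay_A", "essay_B"), ("essay_A", "essay_C"),
              ("essay_B", "essay_C"), ("essay_A", "essay_B")]
print(comparative_judgement_scores(judgements))
```

Because each script’s score is anchored by many quick relative decisions from multiple judges, the resulting ranking can be more reliable than a single marker’s absolute grade, which is the appeal Christodoulou makes for the approach.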

Seasoned technology learning analysts, such as Harry Fletcher-Wood, recognize that online learning has, so far, over-promised and under-delivered when it comes to improving teaching and raising student achievement. Practicing classroom educators like Minnesota K-6 teacher Jon Gustafson are actively engaged in translating and adapting “effective principles of instruction” to online and blended learning. Eschewing jazzy e-learning strategies such as student-centred “PBL/inquiry projects” and video chats, Gustafson is applying best practice, including retrieval practice, explicit writing instruction, and formative assessment.

Getting schools, teachers and students prepared for a longer period of distance learning is fast becoming a priority for provincial education policy-makers and school-level management and curriculum leaders. Let’s hope that evidence-based pedagogy and best teaching practice do not get swept aside in the transformation to e-learning in K-12 education.

How is student learning changing in response to the COVID-19 pandemic crisis?  What is emerging in the hiatus to fill the gap left by the prolonged cancellation of K-12 schools?  Should classroom educators be wary of learning corporations appearing bearing charitable gifts to school systems?  Why are teachers so skeptical of system-wide e-learning and online learning panaceas? Going forward, will teachers and ed tech find a way to live in peaceful coexistence in K-12 education? 

Read Full Post »

University of Kentucky student assessment guru Thomas R. Guskey is back on the Canadian professional development circuit with a new version of what looks very much like Outcomes-Based Education. It is clear that he has the ear of the current leadership in the Education Department of Prince Edward Island. For two days in late November 2018, he dazzled a captive audience of over 200 senior Island school administrators with his stock presentations extolling the virtues of mastery learning and competency-based student assessment.

P.E.I.’s Coordinator of Leadership and Learning Jane Hastelow was effusive in her praise for Guskey and his assessment theories. Tweets by educators emanating from the Guskey sessions parroted the gist of his message. “Students don’t always learn at the same rate or in the same order,” Guskey told the audience. So why do we teach them in grades, award marks, and promote them in batches?

Grading students and assigning marks, according to Guskey, can have detrimental effects on children. “No research,” he claims, “supports the idea that low grades prompt students to try harder. More often, low grades lead students to withdraw from learning.”

Professional learning, in Guskey’s world, should be focused not on cognitive or knowledge-based learning, but on introducing “mastery learning” as a way of advancing “differentiated instruction” classrooms. “High-quality corrective instruction,” he told P.E.I. educators, “is not the same as ‘re-teaching.’” It is actually a means of training teachers to adopt new approaches that “accommodate differences in students’ learning styles, learning modalities, or types of intelligence.”

Guskey is well-known in North American education as the chief proponent for the elimination of percentage grades. For more than two decades, in countless PD presentations, he has promoted his own preferred brand of student assessment reform. “It’s time,” he insists, “to abandon grading scales that distort the accuracy, objectivity and reliability of students’ grades.”

Up-and-coming principals and curriculum leads, most without much knowledge of assessment, have proven to be putty in his hands. So what’s the problem? Simply put, Dr. Guskey’s theories, when translated into student evaluation policy and reporting, generate resistance among engaged parents looking for something completely different – clearer, understandable, jargon-free student reports with real marks. Classroom teachers soon come to realize that the new strategies and rubrics are far more complicated and time-consuming, often leaving them buried in additional workload.

Guskey’s student assessment theories do appeal to school administrators who espouse progressive educational principles. He specializes in promoting competency-based education grafted onto student-centred pedagogy or teaching methods.

Most regular teachers today are only too familiar with top-down reform designed to promote “assessment for learning” (AfL) and see, first hand, how it has led to the steady erosion of teacher autonomy in the classroom.

While AfL is a sound assessment philosophy, pioneered by the leading U.K. researcher Dylan Wiliam since the mid-1990s, it has proven difficult to implement. Good ideas can become discredited by poor implementation, especially when formative assessment becomes just another vehicle for a new generation of summative assessment used to validate standards.

Education leaders entranced by Guskey’s theories rarely delve into where it all leads for classroom teachers.  In Canada, it took the “no zeros” controversy sparked in May 2012 by Alberta teacher Lynden Dorval to bring the whole dispute into sharper relief. As a veteran high school Physics teacher, Dorval resisted his Edmonton high school’s policy which prevented him from assigning zeros when students, after repeated reminders, failed to produce assignments or appear for make-up tests.

Teachers running smack up against such policies learn that the ‘research’ supporting “no zeros” policy can be traced back to an October 2004 Thomas Guskey article in the Principal Leadership magazine entitled “Zero Alternatives.”

Manitoba social studies teacher Michael Zwaagstra analyzed Guskey’s research and found it wanting. Guskey’s claim that awarding zeros is a questionable practice rested on a single 20-year-old opinion-based presentation by an Oregon English teacher to the 1993 National Middle School conference. Guskey’s subsequent books either repeat that reference or simply restate his hypothesis as an incontestable truth.

Guskey’s theories are certainly not new. Much of the research dates back to the early 1990s and the work of William Spady, a Mastery Learning theorist known as the prime architect of the ill-fated Outcomes-Based Education (OBE) movement. OBE was best exemplified by the infamous, mind-boggling systematized report cards loaded with hundreds of learning outcomes, and it capsized in the early 2000s in the wake of a storm of public and professional opposition in Pennsylvania and a number of other states.

The litmus test for education reform initiatives is now set at a rather low bar – “do no harm” to teachers or students. What Thomas Guskey is spouting begs for more serious investigation. One red flag is his continued reference to “learning styles” and “multiple intelligences,” two constructs that lack supporting evidence and are now widely considered abandoned theories.

Guskey’s student assessment theories fly mostly in the face of the weight of recent research, including that of Dylan Wiliam. Much of the best research is synthesized in Daisy Christodoulou’s 2017 book, Making Good Progress? Such initiatives float on unproven theories, lack supporting evidence-based research, chip away at teacher autonomy, and leave classroom practitioners snowed under with heavier ‘new age’ marking loads.

A word to the wise for  P.E.I. Education leadership – look closely before you leap. Take a closer look at the latest research on teacher-driven student assessment and why OBE was rejected twenty years ago by classroom teachers and legions of skeptical parents.

What’s really new about Dr. Thomas Guskey’s latest project known as Competency-Based Assessment? What is its appeal for classroom teachers concerned about time-consuming, labour-intensive assessment schemes?  Will engaged and informed parents ever accept the elimination of student grades? Where’s the evidence-based research to support changes based upon such untested theories? 

Read Full Post »

Ontario now aspires to global education leadership in the realm of student evaluation and reporting. The latest Ontario student assessment initiative, A Learning Province, announced in September 2017 and guided by OISE education professor Dr. Carol Campbell, cast a wide net encompassing classroom assessments, large-scale provincial tests, and national/international assessment programs. That vision for “student-centred assessments” worked from the assumption that future assessments would capture the totality of “students’ experiences — their needs, learning, progress and well-being.”

The sheer scope of the whole project not only deserves much closer scrutiny, but needs to be carefully assessed for its potential impact on frontline teachers. A pithy statement by British teacher-researcher Daisy Christodoulou in January 2017 is germane to the point: “When government get their hands on anything involving the word ‘assessment’, they want it to be about high stakes monitoring and tracking, not about low-stakes diagnosis.” In the case of Ontario, pursuing the datafication of social-emotional learning and the mining of data to produce personality profiles is clearly taking precedence over the creation of teacher-friendly assessment policy and practices.

One of the reasons Ontario has been recognized as a leading education system is its success over the past 20 years in establishing an independent Education Quality and Accountability Office (EQAO) with an established and professionally-sound provincial testing program in Grades 3, 6, 9 and 10. Whether you support the EQAO or not, most agree that it has succeeded in establishing reliable benchmark standards for student performance in literacy and mathematics.

The entire focus of Ontario student assessment is now changing. Heavily influenced by the Ontario People for Education Measuring What Matters project, the province is plunging ahead with Social and Emotional Learning (SEL) assessment, embracing what Ben Williamson aptly describes as “stealth assessment” – a set of contested personality criteria utilizing SEL ‘datafication’ to measure “student well-being.” Proceeding to integrate SEL into student reports and province-wide assessments is also foolhardy when American experts Angela Duckworth and David Scott Yeager warn that the ‘generic skills’ are ill-defined and possibly unmeasurable.

Social and emotional learning is now at the very core of Ontario’s Achieving Excellence and Equity agenda and it fully embraces “supporting all students” and enabling them to achieve “a positive sense of well-being – the sense of self, identity, and belonging in the world that will help them to learn, grow and thrive.” The Ontario model is based upon a psycho-social theory that “well-being” has “four interconnected elements” critical to student development, with self/spirit at the centre. Promoting student well-being is about fostering learning environments exhibiting these elements:

Cognitive: Development of abilities and skills such as critical thinking, problem solving, creativity, and the ability to be flexible and innovative.

Emotional: Learning about experiencing emotions, and understanding how to recognize, manage, and cope with them.

Social: Development of self-awareness, including the sense of belonging, collaboration, relationships with others, and communication skills.

Physical: Development of the body, impacted by physical activity, sleep patterns, healthy eating, and healthy life choices.

Self/Spirit:  Recognizing the core of identity, which has “different meanings for different people, and can include cultural heritage, language, community, religion or a broader spirituality.”

Ontario’s new student report cards, proposed for 2018-19 implementation, will incorporate a distinct SEL component with teacher evaluations on a set of “transferable skills,” shifting the focus from organization and work habits to “well-being” and associated values, while retaining grades or marks for individual classes. The Ontario Education “Big Six” Transferable Skills are: critical thinking, innovation and creativity, self-directed learning, collaboration, communication, and citizenship. Curiously absent from the Ontario list of preferred skills are those commonly found in American variations on the formula: grit, growth mindset, and character.

The emerging Ontario student assessment strategy needs to be evaluated in relation to the latest research and best practice, exemplified in Dylan Wiliam’s student assessment research and Daisy Christodoulou’s 2017 book Making Good Progress: The Future of Assessment for Learning.  Viewed through that lens, the Ontario student assessment philosophy and practice falls short on a number of counts.

  1. The Generic Skills Approach: Adopting this approach reflects a fundamental misunderstanding about how students learn and acquire meaningful skills. Tackling problem-solving at the outset, utilizing Project-Based Learning to “solve real-life problems,” is misguided because knowledge and skills are better acquired through other means. The “deliberate practice method” has proven more effective. Far more is learned when students break down skills into a ‘progression of understanding’ — acquiring the knowledge and skill to progress on to bigger problems.
  2. Generic Feedback: Generic or transferable skills prove to be unsound when used as a basis for student reporting and feedback on student progress. Skills are not taught in the abstract, so feedback has little meaning for students. Reading a story and making inferences, for example, is not a discrete skill; it is dependent upon knowledge of vocabulary and background context to achieve reading comprehension.
  3. Hidden Bias of Teacher Assessment: Teacher classroom assessments are highly desirable, but do not prove as reliable as standardized measures administered under fair and objective conditions. Disadvantaged students, based upon reliable, peer-reviewed research, do better on tests than on regular teacher assessments. “Teacher assessment is biased not because it is carried out by teachers, but because it is carried out by humans.”
  4. Unhelpful Prose Descriptors: Most verbal descriptors used in system-wide assessments and reports are unhelpful — they tend to be jargon-ridden, unintelligible to students and parents, and prove particularly inaccessible to students struggling in school. Second-generation descriptors are “pupil friendly” but still prove difficult to use in learning how to improve or correct errors.
  5. Work-Generating Assessments: System-wide assessments, poorly constructed, generate unplanned and unexpected marking loads, particularly in the case of qualitative assessments with rubrics or longer marking times. In the U.K., for example, the use of grade descriptors for feedback proved much more time-consuming than normal grading of written work: primary teachers who spent 5 hours a week on assessment in 2010 found that, by 2013, they were spending 10 hours a week.

What’s wrong with the new Ontario Assessment Plan and what needs rethinking?
  1. The Generic Skills Approach – Teaching generic skills (SEL) doesn’t work and devalues domain-specific knowledge
  2. Social and Emotional Learning (SEL) models — carry inherent biases and are unmeasurable
  3. Breach of Student Security – Data mining and student surveys generate personality data without consent
  4. Erosion of Teacher Autonomy – Student SEL data, generated by algorithms, creates more record-keeping and more marking, and cuts into classroom time.

The best evidence-based assessment research, applied in deconstructing the Ontario Assessment initiative, raises red flags.  Bad student assessment practices, as Wiliam and Christodoulou show, can lead to serious workload problems for classroom teachers. No education jurisdiction that lived up to the motto “Learning Province” would plow ahead when the light turns to amber.

A summary of the researchED Ontario presentation delivered April 14, 2018, at the Toronto Airport Westin Hotel. 

Where is the new Ontario student assessment initiative really heading? Is it a thinly-disguised attempt to create a counterweight to current large-scale student achievement assessments? Is it feasible to proceed with SEL assessment when leading researchers question its legitimacy and validity? Are we running the risk of opening the door to the wholesale mining of student personal information without consent and for questionable purposes? 

Read Full Post »

Canada’s most populous province aspires to education leadership and tends to exert influence far beyond our coast-to-coast provincial school systems. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September of 2017, in the wake of a well-publicized decline in provincial Math test scores and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry’s much-touted $60-million “renewed math strategy” completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests — conducted by the Education Quality and Accountability Office (EQAO) — only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point from the previous year.

The Ontario government’s response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was not only designed to change the channel, but to initiate a “student assessment review” targeting the messenger, the EQAO, and attempting to chip away at its hard-won credibility, built up over the past twenty years. While the announcement conveyed the impression of “open and authentic” consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope. Reading the paper and digesting the EQAO survey questions, one quickly sees that the provincial tests themselves are now on trial, being assessed on criteria well outside their current mandate.

Ontario’s provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million, or $17 per student, the credibility of the test results remains widely accepted, and the testing model continues to be free of interference or manipulation. It’s working the way it was intended — to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but is still considered the 'gold standard' right across Canada.  It has succeeded in providing much greater transparency, but — like other such testing regimes — has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results.  The Grade 10 Literacy Test remains an embarrassment. A May 2010 EQAO report, for example, revealed that hundreds of students who failed the 2006 test were simply moved along through the system without passing that graduation standard. Consistently, about 19 to 24 per cent of all students fall short of acceptable literacy (56 per cent among students in Applied courses), yet graduation rates have risen from 68 per cent to 86 per cent province-wide.

The Ontario Ministry is now 'monkeying around' with the EQAO and seems inclined toward either neutering the agency to weaken student performance transparency or broadening its mandate to include assessing students for "social and emotional learning" (SEL), formerly termed "non-cognitive learning."  The "Independent Review of Assessment and Reporting" is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders, Michael Fullan, Andy Hargreaves, and Carol Campbell.  It's essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the international panel of education advisers in Scotland attempting to rescue the Scottish National Party's faltering "Excellence for All" education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages.  Most of the questions focus on EQAO testing and accountability and approach the tests through a "student well-being" and "diversity" lens.  An "evidence-informed" review of the current model of assessment and reporting is promised, but it's nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars (achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence) are repeated as secular articles of faith.

Where's the research to support the proposed direction?  The Discussion Paper does provide capsule summaries of two assessment approaches, termed "large-scale assessments" and "classroom assessments," but offers critical analysis of only the first of the two.  There's no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged "failure" of Assessment for Learning (AfL).  Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate "student well-being" into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment.  It's essentially based upon rather skewed research conducted in support of "broadening student assessments" rather than the latest research on what works in carrying out student assessments in schools.  Critical issues such as the "numeracy gap," now being seriously debated by leading education researchers and student assessment experts, are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma.  Nowhere does the Ontario document recognize Dylan Wiliam's recent pronouncement that his own creation, Assessment for Learning, has floundered because of "flawed implementation" and unwise attempts to incorporate AfL into summative assessments.  Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou.  In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures, but also fairer for students from underprivileged families.  She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a direction that's different from where the Ontario Ministry of Education appears to be heading. Christodoulou's Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam.  Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on "generic skills" assessment, 'rubric-mania,' impenetrable verbal descriptors, and the mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers.  Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that it's premature and likely to fail is simply foolhardy.  No education jurisdiction priding itself on being "A Learning Province" would plow ahead when the light turns amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance.  It does not really pass the sound research ‘sniff test.’  It looks very much like another Ontario provincial initiative offering a polished, but rather thinly veiled, rationale for supporting the transition away from “large-scale assessment” to “classroom assessment” and grafting unproven SEL competencies onto EQAO, running the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative?  Where's the research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all?  Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle? 

 

Read Full Post »

Making space for creativity in the classroom sounds like common sense. Few educators today would dispute the wisdom of challenging students to think critically and to solve problems in creative ways. But when creativity is elevated to the primary goal of elementary schools, displacing the acquisition of foundational knowledge and skills, it's time to ask deeper and more fundamental questions.

[Photo: Sir Ken Robinson, TED Talk profile]

Teacher Aaron Warner, initiator of the Google-inspired "Genius Hour" at Regina's Douglas Park Elementary School, is definitely a true believer in teaching creativity.  Justifying his two-hour-a-week program in a new book, Kelly Gallagher-Mackay and Nancy Steinhauer's Pushing the Limits (2017), Warner offers this declaratory statement: "Sixty per cent of the jobs of the future haven't been invented yet."  That "insight," we are told, echoes Sir Ken Robinson's contention in "Do Schools Kill Creativity?," the most watched TED Talk of all time.  It is Robinson, of course, who uttered what became that simple, unassailable, unverifiable educational truth: that "creativity" is central to developing education that will "take us to a future we can't grasp."

What's the problem with repeating Robinson's claim and citing a statistic to support that hypothesis? It's a classic example of transforming education, or "building the future schoolhouse," on what Hack Education commentator Audrey Watters has termed a "theory of mythical proportions" instead of on evidence-based policy-making. Citing the statistic that "60% (or 65%) of future jobs have not been invented yet" is doubly problematic because no one can authenticate the research behind that oft-repeated claim.

Two enterprising British teacher-researchers, Daisy Christodoulou and Andrew Old, recently tracked the origin of that statistic and found it essentially without substance. On the BBC World Service program More or Less, aired May 29, 2017, they traced how the statistic originated and got parroted around the globe.  Most fascinating of all, one of the researchers who popularized the claim, Dr. Cathy Davidson of The Graduate Center, CUNY, has now reached similar conclusions and ceased repeating the "65% statistic."

“I haven’t used that figure since about 2012,” Davidson said, in response to the BBC News investigation.  Her explanation of how the statistic disappeared is revealing about the sorry state of educational policy discourse, not only in Canada but across the world.

The disputed statistic was promulgated in Davidson's 2011 book, Now You See It:  How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn.  The figure, she says, didn't originate with her.  She first encountered it in futurist Jim Carroll's book, Ready, Set, Done (2007), and it was eventually traced to an Australian website where the "65%" figure was quoted alongside some visuals and categories of new jobs that hadn't existed before. After Now You See It appeared, the 65% figure kept being quoted, so Davidson attempted to contact the authors of the study to learn more about their findings, but with no luck.  By then, the site was down and even the Innovation Council of Australia had been closed by a new government.

Since the reputed source of the statistical claim had disappeared, Davidson began issuing a disclaimer and stopped repeating the figure. She also embraced "Big Data" and started to deconstruct what the category of "job" really means. Much to the surprise of the British researchers, Davidson welcomed the probing questions, agreed that educators need to be far more careful about their use of statistical claims, and, most significantly, questioned the wisdom of "using statistics like that at all."

[Book cover: Seven Myths About Education]

Why is 65% so problematic?  The BBC researchers, Christodoulou and Old, also did rough calculations by comparing job titles that exist now with those that existed in the past.  They found that maybe one-third of all jobs today are actually "new," even by the most generous count.  That's 33%, not 65%, and hardly justification for turning the entire school system upside down.
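For readers curious about how such a back-of-the-envelope comparison works, here is a minimal sketch in Python. The job titles below are made-up placeholders purely for illustration, not the BBC researchers' actual data; the point is simply the set arithmetic: count the share of current job titles with no counterpart in an earlier list.

# Minimal sketch of the rough "new jobs" comparison described above.
# The titles here are hypothetical placeholders, not real occupational data.
jobs_then = {"teacher", "nurse", "carpenter", "accountant", "farmer", "journalist"}
jobs_now = {"teacher", "nurse", "carpenter", "accountant", "farmer",
            "app developer", "social media manager", "data scientist"}

# A job counts as "new" if its title has no counterpart in the earlier list.
new_jobs = jobs_now - jobs_then
share_new = len(new_jobs) / len(jobs_now)

print(f"New job titles: {sorted(new_jobs)}")
print(f"Share of current jobs that are 'new': {share_new:.0%}")

Run on real occupational classifications rather than toy lists, the same comparison would yield the kind of rough percentage Christodoulou and Old describe.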

No one has yet challenged one of Daisy Christodoulou’s key points in the BBC News broadcast. When asked whether “21st century skills” would last, she responded that, in her judgement, “the alphabet (language) and numbers (numerology)” would outlive us all. Surely that claim deserves a much wider public discussion.

Davidson has abandoned that unverified statistic and changed her rationale for system-wide change in the direction of “21st century learning.” Her brand new book, The New Education: How To Revolutionize the University to Prepare Students for a World in Flux (2017), carefully avoids recycling the statistic and, instead, claims with “intuition” rather than “data” that “closer to 100 per cent of jobs have changed in some way” in recent decades.

The American promulgator of the “65% statistic” has definitely backtracked on one of her best known claims. The whole episode has real implications for Canadian education policy discourse. Indeed, it raises serious questions about a whole set of related claims made in Pushing the Limits that schools have to be “transformed to prepare kids for jobs that don’t exist.”

What is the research base for the popular claim that schools should be transformed to "prepare students for jobs not invented yet"? Should we base system-wide reform on unassailable, unverified claims in Sir Ken Robinson's TED Talks?  Is the spread of the "65% statistic" another example of 'confirmation bias'?  Are promoters of "creativity in schools" expanding the space for creativity or looking to displace foundational skills?  Most significantly, how do we dispel claims made using questionable research data? 


Read Full Post »

The educational world is a strange place with its own tribal conventions, familiar rituals, ingrained behaviours, and unique lexicon. Within the K-12 school system, educational innovations come in waves where “quick fixes” and “fads” are fashionable and yesterday’s failed innovations can return, often recycled in new guises.

Education research is rarely applied where it is needed most: in challenging the assumptions of current orthodoxy and teaching practice. Only one out of every ten curriculum or pedagogical initiatives is ever properly evaluated, according to the Organisation for Economic Co-operation and Development (OECD) education office, which manages the Programme for International Student Assessment (PISA).

Growing numbers of classroom teachers, as well as serious education researchers, are looking for evidence of "what works" before jumping on the latest educational bandwagon. That's the spark that ignited researchED, the British teachers' movement challenging prevailing myths, questioning entrenched theories, and demanding evidence-based teaching practice.

researchED founder Tom Bennett's 2013 book, Teacher Proof, was a direct hit on educational orthodoxy supported by flimsy explanations resting only on questionable social science theories. After a decade of teaching in East London, he knew something was amiss because a succession of pedagogical panaceas such as learning styles, Neuro-Linguistic Programming (NLP), Brain Gym, and 'soft persuasion techniques' simply did not work in the classroom.  His work and that of leading researchED apostles like Daisy Christodoulou and Martin Robinson has now spawned an international movement demanding research-informed teaching practice.

“We believe that the teaching profession is poised and ripe for change,” says Tom Bennett. “It should be a change where teachers and schools are guided by the best evidence available, not just the latest theories. That’s what propels our new, teacher-led organization.”

Surveying the state of Canadian K-12 education and the current alignment of research priorities, one can see why Bennett's prediction may well bear fruit. North American and Canadian education research, mostly the preserve of faculties of education and once described as a "black hole," still gets little or no respect among policy-makers. High-quality research on the effectiveness of reforms is either weak, inconclusive or missing altogether. Is the mindfulness and self-regulation strategy the latest example of that phenomenon?

Much of the field is driven by political or ideological agendas, where action research is used to mount a case for province-wide funding of 'pet projects' or unproven technology-in-the-classroom innovations. Where education projects are supported by sound scholarship and evidence-based research, that research too often has little influence on what is mandated for implementation in the classroom.

School system leaders and their provincial ministers tend to embrace broad, philosophical concepts like "21st century learning" and to mimic initiatives promoted by Pearson Learning, Microsoft and other international learning corporations. Top-down education policy and curriculum mandates like this tend to run aground when they are introduced to teachers as the latest innovation in teaching and learning. Without the active support of committed and engaged teachers, they simply die on the vine and wither away, soon to be replaced by the next panacea.

Out of the testing and accountability movement of the 1990s and early 2000s emerged a 'new managerialism' – a whole generation of education management that mastered the rhetoric and language of "outcomes" and "accountability" with, sad to say, little to show for the massive investment of time and talent.  With standardized testing under fire, education lobby groups such as Ontario-based People for Education are mounting a determined effort to implement 'school change theory' and broaden student assessment to include uncharted domains of social and emotional learning.

researchED is now at the forefront of blowing the whistle on innovations floating on untested theories. Popular notions that "schools are preparing kids for jobs that won't exist" have been found wanting when held up to closer scrutiny. Currently fashionable teaching practices such as "Discovery Math" and "Personalized Learning," at least so far, simply do not pass the research litmus test. It is by no means certain that introducing coding in elementary schools will work when so few teachers in the early grades have any background or training in mathematics or computer science.

Since September 2013, researchED has attracted droves of teachers to conferences in the U.K., Australia, Scandinavia, and the European Union. The next stop in this unique "British education revolution" is Canada.  The movement's founder, Tom Bennett, will be the headliner of the first researchED conference to be held in Canada, on November 10 and 11, 2017, in Toronto. 

ResearchED Toronto aims to attract a brand-new audience of teachers, policy researchers, and reform-minded parents. Tickets for the full conference are available at https://researched.org.uk/event/researched-toronto/. Batten down the hatches: the British are coming, and, once teachers get a taste of the experience, there will be no turning back.

Part Two of a Series on the researchED Movement.

Will the researchED movement find fertile ground in Canada?  Are there signs of a willingness to come together to “work out what works” for teachers and students? How entrenched are the ‘core interests’ upholding the current orthodoxy and inclined to inhabit their own echo chamber?  Will our “urban myths about education” continue to obscure our understanding of what really works in the classroom? 

Read Full Post »

Older Posts »