
Archive for the ‘Student Assessment’ Category

University of Kentucky student assessment guru Thomas R. Guskey is back on the Canadian Professional Development circuit with a new version of what looks very much like Outcomes-Based Education. It is clear that he has the ear of the current leadership in the Education Department of Prince Edward Island. For two days in late November 2018, he dazzled a captive audience of over 200 senior Island school administrators with his stock presentations extolling the virtues of mastery learning and competency-based student assessment.

P.E.I.'s Coordinator of Leadership and Learning Jane Hastelow was effusive in her praise for Guskey and his assessment theories. Tweets by educators emanating from the Guskey sessions parroted the gist of his message. "Students don't always learn at the same rate or in the same order," Guskey told the audience. So why do we teach them in grades, award marks, and promote them in batches?

Grading students and assigning marks, according to Guskey, can have detrimental effects on children. “No research,” he claims, “supports the idea that low grades prompt students to try harder. More often, low grades lead students to withdraw from learning.”

Professional learning, in Guskey's world, should be focused not on cognitive or knowledge-based learning, but on introducing "mastery learning" as a way of advancing "differentiated instruction" classrooms. "High-quality corrective instruction," he told P.E.I. educators, "is not the same as 're-teaching.'" It is actually a means of training teachers to adopt new approaches that "accommodate differences in students' learning styles, learning modalities, or types of intelligence."

Guskey is well-known in North American education as the chief proponent for the elimination of percentage grades. For more than two decades, in countless PD presentations, he has promoted his own preferred brand of student assessment reform. "It's time," he insists, "to abandon grading scales that distort the accuracy, objectivity and reliability of students' grades."

Up-and-coming principals and curriculum leads, most without much knowledge of assessment, have proven to be putty in his hands. So what's the problem? Simply put, Dr. Guskey's theories, when translated into student evaluation policy and reporting, generate resistance among engaged parents looking for something completely different – clearer, understandable, jargon-free student reports with real marks. Classroom teachers soon come to realize that the new strategies and rubrics are far more complicated and time-consuming, often leaving them buried in additional workload.

Guskey’s student assessment theories do appeal to school administrators who espouse progressive educational principles. He specializes in promoting competency-based education grafted onto student-centred pedagogy or teaching methods.

Most regular teachers today are only too familiar with top-down reform designed to promote “assessment for learning” (AfL) and see, first hand, how it has led to the steady erosion of teacher autonomy in the classroom.

While AfL is a sound assessment philosophy, pioneered by the leading U.K. researcher Dylan Wiliam since the mid-1990s, it has proven difficult to implement. Good ideas can become discredited by poor implementation, especially when formative assessment becomes just another vehicle for a new generation of summative assessment used to validate standards.

Education leaders entranced by Guskey’s theories rarely delve into where it all leads for classroom teachers.  In Canada, it took the “no zeros” controversy sparked in May 2012 by Alberta teacher Lynden Dorval to bring the whole dispute into sharper relief. As a veteran high school Physics teacher, Dorval resisted his Edmonton high school’s policy which prevented him from assigning zeros when students, after repeated reminders, failed to produce assignments or appear for make-up tests.

Teachers running smack up against such policies learn that the 'research' supporting "no zeros" policies can be traced back to an October 2004 Thomas Guskey article in Principal Leadership magazine entitled "Zero Alternatives."

Manitoba social studies teacher Michael Zwaagstra analyzed Guskey's research and found it wanting. Guskey's claim that awarding zeros was a questionable practice rested on a single 20-year-old opinion-based presentation by an Oregon English teacher to the 1993 National Middle School conference. His subsequent books either repeat that reference or simply restate the hypothesis as an incontestable truth.

Guskey's theories are certainly not new. Much of the research dates back to the early 1990s and the work of William Spady, a Mastery Learning theorist known as the prime architect of the ill-fated Outcomes-Based Education (OBE) movement. OBE was best exemplified by the infamous, mind-boggling systematized report cards loaded with hundreds of learning outcomes, and it capsized in the early 2000s in the wake of a storm of public and professional opposition in Pennsylvania and a number of other states.

The litmus test for education reform initiatives is now set at a rather low bar – "do no harm" to teachers or students. What Thomas Guskey is spouting begs for more serious investigation. One red flag is his continued reference to "learning styles" and "multiple intelligences," two theories that lack supporting evidence and are now widely considered debunked.

Guskey's student assessment theories fly in the face of the weight of recent research, including that of Dylan Wiliam. Much of the best research is synthesized in Daisy Christodoulou's 2017 book, Making Good Progress. Such initiatives float on unproven theories, lack supporting evidence-based research, chip away at teacher autonomy, and leave classroom practitioners snowed under with heavier 'new age' marking loads.

A word to the wise for P.E.I. education leadership – look closely before you leap. Take a closer look at the latest research on teacher-driven student assessment and at why OBE was rejected twenty years ago by classroom teachers and legions of skeptical parents.

What’s really new about Dr. Thomas Guskey’s latest project known as Competency-Based Assessment? What is its appeal for classroom teachers concerned about time-consuming, labour-intensive assessment schemes?  Will engaged and informed parents ever accept the elimination of student grades? Where’s the evidence-based research to support changes based upon such untested theories? 


Where you live can greatly influence the educational outcomes of your children. Some education observers go so far as to say: "The quality of education is determined by your postal code." In school systems with strict student attendance zones, it is, for all intents and purposes, the iron law of public education.

Students, whatever their background, can overcome significant disadvantages. "Your destiny is in your hands, and don't you forget that," as former U.S. President Barack Obama said famously in July 2009. "That's what we have to teach all of our children! No excuses! No excuses!"

There is a fine line between identifying struggling schools and 'labeling' them. "We identify schools and where they are on the improvement journey," says Elwin LeRoux, Regional Director of Education in Halifax, Nova Scotia. "Yet we are careful not to 'label' some schools in ways that may carry negative connotations and influence student attitudes."

How a school district identifies struggling schools and how it responds is what matters. Accepting socio-economic dictates or ignoring the stark realities is not good enough. It only serves to reinforce ingrained assumptions, contribute to lowered academic expectations, and possibly adversely affect school leadership, student behaviour standards, teacher attitudes, and parent-school relations.

While there are risks involved in comparing school performance, parents and the public are entitled to know more about how students in our public schools are actually performing. The Halifax Chronicle Herald broke the taboo in November 2018, following the path blazed by other daily papers, including The Globe and Mail and the Hamilton Spectator, in providing a school-by-school analysis of school performance in relation to socio-economic factors influencing student success. The series was based upon extensive research conducted for the Atlantic Institute for Market Studies (AIMS).

A Case Study – the Halifax Public School System

The Halifax Regional Centre for Education (formerly the Halifax Regional School Board) enrolls 47,770 students in 135 schools, employs 4,000 school-based teachers, and provides a perfect lens through which to tackle the whole question. Student achievement and attainment results over the past decade, from 2008-09 to 2015-16, have been published in school-by-school community reports and, when aggregated, provide clear evidence of how schools are actually performing in Halifax Region.

Unlike many Canadian boards, the HRCE is organized in an asymmetrical fashion with a mixed variety of organizational units: elementary schools (84), junior high/middle schools (27), senior elementary (7), P-12 academy (1), junior-senior high schools (6), and senior high schools (10). Current student enrolment figures, by school division, stand at 25,837 for Primary to Grade 6, 11,245 for Grades 7 to 9, and 10,688 for Grades 10 to 12 – divisional totals that sum to the system-wide enrolment of 47,770.

Student Achievement and School Improvement

Since November of 2009, the Halifax system has been more open and transparent in reporting on student assessment results as a component of its system-wide improvement plan. Former Superintendent Carole Olsen introduced the existing accountability system along with a new mission that set a far more specific goal: “Every Student will Learn, every School will Improve.”

The Superintendent's 2008-09 report was introduced with great fanfare with an aspirational goal of transforming "Good Schools to Great Schools" and a firm system-wide commitment that "every school, by 2013, will demonstrate improvement in student learning." Following the release of aggregated board-wide data, the HRSB produced school-by-school accountability reports, made freely available not only to the School Advisory Councils (SACs), but to all parents in each school.

Superintendent Olsen set out what she described as "a bold vision" to create "a network of great schools" in "thriving communities" that "bring out the best in us." School-by-school reporting was critical to that whole project. "Knowing how each school is doing is the first important step in making sure resources and support reach the schools – and the students – that need them the most," Olsen declared.

The Established Benchmark – School Year 2008-09

The school year 2008-09, the first year in the HRSB’s system-wide improvement initiative, provided the benchmark, not only for the board, but for the AIMS research report taking stock of student achievement and school-by-school performance over the past decade.

In 2008-09, the first set of student results in the two core competencies, reading and math, demonstrated that HRSB student scores were comparable to other Canadian school systems, but there was room for improvement. In Grade 2 reading, the system-wide target was that 77 per cent of all students would meet established board standards. Only 25 out of some 91 schools (27.5 %) met or exceeded the established target.

While Grade 2 and Grade 5 Mathematics students performed better, problems surfaced at the Grade 8 level, where two out of three schools (67.5 %) failed to meet the HRSB standard. High numbers of Grade 8 students were struggling with measurement, whole number operations (multiplication, division), problem-solving, and communication.

System Leadership Change and Policy Shifts

Schools in the Halifax school system may have exceeded the initial public expectations, but the vast majority of those schools fell far short of moving from “Good Schools to Great Schools.” Some gains were made in student success rates in the two core competencies, reading and mathematics, by the 2013 target year, but not enough to match the aspirational goals set by Superintendent Olsen and the elected school board.

With Olsen’s appointment in September 2012 as Deputy Minister of Education for Nova Scotia, the robust HRSB commitment to school-by-school improvement and demonstrably improved standards in reading and mathematics faltered. Her successor, LeRoux, a 24-year board veteran, espoused more modest goals and demonstrated a more collegial, low-key leadership style. Without comprehensive school system performance reports, the school community reports, appended routinely as PDFs to school websites, attracted little attention.

The “Good Schools to Great Schools” initiative had failed to work miracles. That became apparent in May 2014, following the release of the latest round of provincial literacy assessments.  The formal report to the Board put it bluntly: “A large achievement gap exists between overall board results and those students who live in poverty.”

School administration, based upon research conducted in-house by psychologist Karen Lemmon, identified schools in need of assistance when more than one-third of the family population in a school catchment could be classified as “low income” households. Twenty of its 84 elementary schools were identified and designated as “Priority Schools” requiring more attention, enhanced resources, and extra support programs to close the student achievement gap.

The focus changed, once again, following the release of the 2017-18 provincial results in Grade 6 Math and Literacy. Confronted with those disappointing results, the HRSB began to acknowledge that students living in poverty came disproportionately from marginalized communities.

Instead of focusing broadly on students in poverty, the Board turned its attention to the under-performance of Grade 6 students from African/black and Mi'kmaq/Indigenous communities. For students of African ancestry, for example, Grade 6 Mathematics scores declined by 6 per cent, leaving less than half (49 per cent) meeting provincial standards. What started out as a school improvement project focused on lower socio-economic schools had evolved into one addressing differences along ethno-racial lines.

Summaries of the AIMS Research Report Findings

Stark Inequalities – High Performing and Struggling Schools

Hopeful Signs – Most Improved Schools

Summation and Recommendations – What More Can Be Done?

Putting the Findings in Context

School-by-school comparative studies run smack up against the hard realities of the socio-economic context affecting children's lives and their school experiences. Not all public schools from Pre-Primary to Grade 12 are created equal: some enjoy advantages that far exceed others, while others, in disadvantaged communities, struggle to retain students and are unable, given the conditions, to move the needle on school improvement. So, what can be done to break the cycle?

Questions for Discussion

Comparing school-by-school performance over the past decade yields some startling results and raises a few critical questions:  Is the quality of your education largely determined by your postal code in Canadian public school systems? What are the dangers inherent in accepting the dictates of socio-economic factors with respect to student performance?  What overall strategies work best in breaking the cycle of stagnating improvement and chronic under-performance? Should school systems be investing less in internal “learning supports” and more in rebuilding school communities themselves? 


The latest student achievement results, featured in the April 30, 2018 Pan-Canadian Assessment Program (PCAP) 2016 report, prove, once again, how system-critical testing is for K-12 education. Students in every Canadian province except Ontario saw gains in Grade 8 scores from 2010 to 2016, and we are now much the wiser. That educational reality check simply confirms that it's no time to be jettisoning Ontario's Grade 3 provincial tests and chipping away at the reputation of the province's independent testing agency, the Education Quality and Accountability Office (EQAO).

The plan to end Grade 3 provincial testing arrived with the final report of Ontario: A Learning Province, produced by OISE professor Carol Campbell and her team of six supposedly independent advisors, including well-known change theorists Michael Fullan, Andy Hargreaves and Jean Clinton. Targeting of the EQAO was telegraphed in an earlier discussion paper, but the consultation phase focused ostensibly more on “broadening measures of student success” beyond achievement and into the largely uncharted realm of “social and emotional learning” (SEL).

The final report stunned many close observers in Ontario who expected much more from the review – in particular, an SEL framework for assessment and a new set of "student well-being" reports for the 2018-19 school year. Tampering with Grade 3 testing made former Ontario Deputy Minister Charles Pascal uncomfortable because it interfered with diagnosis for early interventions.

It also attracted a stiff rebuke from the world’s leading authority on formative assessment, British assessment specialist Dylan Wiliam. He was not impressed at all with the Campbell review committee report. While it was billed as a student assessment review, Wiliam noted that none of the committee members is known for expertise in assessment, testing or evaluation.

Education insiders were betting that the Kathleen Wynne Liberal-friendly review team would simply unveil the plan for “broader student success” developed by Annie Kidder and her People for Education lobby group since 2012 and known as the “Measuring What Matters” project. It is now clear that something happened to disrupt the delivery of that carefully nurtured policy baby. Perhaps the impending Ontario provincial election was a factor.

Social and emotional learning is now at the very core of Ontario’s Achieving Excellence and Equity agenda and it fully embraces “supporting all students” and enabling them to achieve “a positive sense of well-being – the sense of self, identity, and belonging in the world that will help them to learn, grow and thrive.”

The Ontario model, hatched by the Education Ministry in collaboration with People for Education, is based upon a psycho-social theory that "well-being" has "four interconnected elements" critical to student development, with self/spirit at the centre. The whole formulation reflects the biases of its architects, since grit, growth mindset, respect and responsibility are nowhere to be found in the preferred set of social values inculcated in the system. Whatever the rationale, proceeding to integrate SEL into student reports and province-wide assessments is premature when recognized American experts Angela Duckworth and David Scott Yeager warn that the 'generic skills' are ill-defined and possibly unmeasurable.

Evidence-informed researchers such as Daisy Christodoulou, author of Making Good Progress (2017), do not support the proposed change in Ontario's student assessment focus. Generic or transferable skills approaches such as those Ontario is considering generate generic feedback of limited value to students in the classroom. Relying too heavily on teacher assessments is unwise because, as Christodoulou reminds us, disadvantaged students tend to fare better on larger-scale, objective tests. The proposed prose descriptors will, in all likelihood, be jargon-ridden, unintelligible to students and parents, and prove particularly inaccessible to students struggling in school.

One of the reasons Ontario has been recognized as a leading education system is because of its success over the past 20 years in establishing an independent EQAO with an established and professionally-sound provincial testing program in Grades 3, 6, and 9 and a Grade 10 literacy test that needs improvement. Legitimate teacher concerns about changes that increase marking loads do need to be addressed in any new student assessment plan and so do objections over the fuzzy, labour-intensive SEL student reports.

The proposal to phase out Ontario provincial testing may already be dead in the water.  If it is, you can guess that the April 30, 2018 editorial in The Toronto Star was definitely a contributing factor.  If the Wynne Liberals go down to defeat in the June 2018 election, the whole plan will likely be shelved or completely revamped by a new government.

Whether you support the EQAO or not, the agency has succeeded in establishing reliable quality standards for student performance in literacy and mathematics. Abandoning Grade 3 testing and gutting the EQAO is not only ill-conceived, but ill-advised. Without the PCAP and provincial achievement benchmarks, we would be flying blind into the future.

What can possibly be gained from eliminating system-wide Grade 3 provincial assessments?  How does that square with research suggesting early assessments are critical in addressing reading and numeracy difficulties?  Without Ontario, would it be possible to conduct comprehensive Grade 3 bench-marking across Canada?  If staff workload is the problem, then aren’t there other ways to address that matter?  And whatever happened to the proposed Social and Emotional Learning (SEL) assessments and reports? 


Ontario now aspires to global education leadership in the realm of student evaluation and reporting. The latest Ontario student assessment initiative, A Learning Province, announced in September 2017 and guided by OISE education professor Dr. Carol Campbell, cast a wide net encompassing classroom assessments, large-scale provincial tests, and national/international assessment programs. That vision for "student-centred assessments" worked from the assumption that future assessments would capture the totality of "students' experiences — their needs, learning, progress and well-being."

The sheer scope of the whole project not only deserves much closer scrutiny, but needs to be carefully assessed for its potential impact on frontline teachers. A pithy statement by British teacher-researcher Daisy Christodoulou in January 2017 is germane to the point: "When governments get their hands on anything involving the word 'assessment', they want it to be about high-stakes monitoring and tracking, not about low-stakes diagnosis." In the case of Ontario, pursuing the datafication of social-emotional learning and the mining of data to produce personality profiles is clearly taking precedence over the creation of teacher-friendly assessment policy and practices.

One of the reasons Ontario has been recognized as a leading education system is because of its success over the past 20 years in establishing an independent Education Quality and Accountability Office (EQAO) with an established and professionally-sound provincial testing program in Grades 3, 6, 9 and 10. Whether you support the EQAO or not, most agree that it has succeeded in establishing reliable benchmark standards for student performance in literacy and mathematics.

The entire focus of Ontario student assessment is now changing. Heavily influenced by the Ontario People for Education Measuring What Matters project, the province is plunging ahead with Social and Emotional Learning (SEL) assessment, embracing what Ben Williamson aptly describes as "stealth assessment" – a set of contested personality criteria utilizing SEL 'datafication' to measure "student well-being." Proceeding to integrate SEL into student reports and province-wide assessments is also foolhardy when American experts Angela Duckworth and David Scott Yeager warn that the 'generic skills' are ill-defined and possibly unmeasurable.

Social and emotional learning is now at the very core of Ontario’s Achieving Excellence and Equity agenda and it fully embraces “supporting all students” and enabling them to achieve “a positive sense of well-being – the sense of self, identity, and belonging in the world that will help them to learn, grow and thrive.” The Ontario model is based upon a psycho-social theory that “well-being” has “four interconnected elements” critical to student development, with self/spirit at the centre. Promoting student well-being is about fostering learning environments exhibiting these elements:

Cognitive: Development of abilities and skills such as critical thinking, problem solving, creativity, and the ability to be flexible and innovative.

Emotional: Learning about experiencing emotions, and understanding how to recognize, manage, and cope with them.

Social: Development of self-awareness, including the sense of belonging, collaboration, relationships with others, and communication skills.

Physical: Development of the body, impacted by physical activity, sleep patterns, healthy eating, and healthy life choices.

Self/Spirit: Recognizing the core of identity which has "different meanings for different people, and can include cultural heritage, language, community, religion or a broader spirituality."

Ontario's new student report cards, proposed for 2018-19 implementation, will incorporate a distinct SEL component with teacher evaluations on a set of "transferable skills," shifting the focus from organization and work habits to "well-being" and associated values, while retaining grades or marks for individual classes. The Ontario Education "Big Six" transferable skills are: critical thinking, innovation and creativity, self-directed learning, collaboration, communication, and citizenship. Curiously absent from the Ontario list of preferred skills are those commonly found in American variations on the formula: grit, growth mindset, and character.

The emerging Ontario student assessment strategy needs to be evaluated in relation to the latest research and best practice, exemplified in Dylan Wiliam’s student assessment research and Daisy Christodoulou’s 2017 book Making Good Progress: The Future of Assessment for Learning.  Viewed through that lens, the Ontario student assessment philosophy and practice falls short on a number of counts.

  1. The Generic Skills Approach: Adopting this approach reflects a fundamental misunderstanding about how students learn and acquire meaningful skills. Tackling problem-solving at the outset, utilizing Project-Based Learning to "solve real-life problems," is misguided because knowledge and skills are better acquired through other means. The "deliberate practice method" has proven more effective. Far more is learned when students break down skills into a 'progression of understanding' — acquiring the knowledge and skill to progress on to bigger problems.
  2. Generic Feedback: Generic or transferable skills prove to be unsound when used as a basis for student reporting and feedback on student progress. Skills are not taught in the abstract, so feedback has little meaning for students. Reading a story and making inferences, for example, is not a discrete skill; it is dependent upon knowledge of vocabulary and background context to achieve reading comprehension.
  3. Hidden Bias of Teacher Assessment: Teacher classroom assessments are highly desirable, but do not prove as reliable as standardized measures administered under fair and objective conditions. Disadvantaged students, based upon reliable, peer-reviewed research, do better on tests than on regular teacher assessments. "Teacher assessment is biased not because it is carried out by teachers, but because it is carried out by humans."
  4. Unhelpful Prose Descriptors: Most verbal descriptors used in system-wide assessments and reports are unhelpful — they tend to be jargon-ridden, unintelligible to students and parents, and particularly inaccessible to students struggling in school. Second-generation descriptors are "pupil friendly" but still prove difficult to use in learning how to improve or correct errors.
  5. Work-Generating Assessments: System-wide assessments, poorly constructed, generate unplanned and unexpected marking loads, particularly in the case of qualitative assessments with rubrics or longer marking time. In the U.K., for example, the use of grade descriptors for feedback proved much more time-consuming than normal grading of written work. Primary teachers who spent 5 hours a week on assessment in 2010 found that, by 2013, they were spending 10 hours a week.

What's wrong with the new Ontario Assessment Plan, and what needs rethinking?
  1. The Generic Skills Approach – Teaching generic skills (SEL) doesn’t work and devalues domain-specific knowledge
  2. Social and Emotional Learning (SEL) models — carry inherent biases and are unmeasurable
  3. Breach of Student Security – Data mining and student surveys generate personality data without consent
  4. Erosion of Teacher Autonomy – Student SEL data generated by algorithms creates more record-keeping and more marking, and cuts into classroom time.

The best evidence-based assessment research, applied in deconstructing the Ontario Assessment initiative, raises red flags.  Bad student assessment practices, as Wiliam and Christodoulou show, can lead to serious workload problems for classroom teachers. No education jurisdiction that lived up to the motto “Learning Province” would plow ahead when the light turns to amber.

A summary of the researchED Ontario presentation delivered April 14, 2018, at the Toronto Airport Westin Hotel. 

Where is the new Ontario student assessment initiative really heading? Is it a thinly-disguised attempt to create a counterweight to current large-scale student achievement assessments? Is it feasible to proceed with SEL assessment when leading researchers question its legitimacy and validity? Are we running the risk of opening the door to the wholesale mining of student personal information without consent and for questionable purposes? 


Millions of Facebook users were profiled by Cambridge Analytica without their knowledge, and that public disclosure has heightened everyone's awareness of not only the trend toward "personality profiling," but the potential for massive invasion of privacy. These controversial actions have exposed the scope of Big Data and the wider aspirations of the data analytics industry to probe into the "hidden depths of people." It has also, as U.K. expert Ben Williamson has reminded us, tipped us off about the growing trend toward personality measurement in K-12 and post-secondary education.

Williamson's 2017 book, Big Data in Education, sounded the alert that the collection and analysis of more personal information from schoolchildren will be a defining feature of education in coming years. And just as the Facebook debacle raises public concerns about the use of personal data, a new international test of 10- and 15-year-olds is to be introduced by the Organization for Economic Cooperation and Development (OECD) – a powerful influence on national education policies at a global scale. Almost without being detected, it is also emerging as a key component of the current Ontario Student "Well-Being" Assessment, initially piloted from 2014 to 2016 by Ontario People for Education as the core objective of its Measuring What Matters project.

Most data collected about students since the 1990s has come from conventional international, national and provincial examinations of knowledge and cognitive skills. Preparing students for success in the 21st century workplace has been a major driver of most initiatives in testing and accountability. International test results such as the OECD's Program of International Student Assessment (PISA) have also become surrogate measures of the future economic potential of nations, feeding a global education race among national education systems.

The advent of Big Data is gradually transforming the nature of student assessment. While the initial phase was focused on stimulating competitive instincts and striving for excellence, more recent initiatives are seeking to “broaden the focus of student assessment” to include what is termed “social and emotional learning (SEL).” Much of the motivation is to secure some economic advantage, but that is now being more broadly defined to help mould students committed to more than individual competitiveness.  With the capacity to collect more “intimate” data about social and emotional skills to measure personality, education policymakers are devising curriculum and assessment programmes to improve personality scores. Despite the Cambridge Analytica controversy, personality data is well on the way to being used in education to achieve a variety of competing political objectives.

The ‘Big Five’ of Personality Profiling

The science of the psychographic profiling employed by Cambridge Analytica is hotly contested. It is, however, based on psychological methods that have a long history of measuring and categorizing people by personality. At its core is a psychological model called the "five factor model" of personality – or the "Big Five." These include "openness," "conscientiousness," "extroversion," "agreeableness" and "neuroticism" (OCEAN). Personality theorists believe these categories are suitable for classifying the full range of human personalities. Psychologists have invented instruments such as the so-called 'Big Five Inventory' to capture OCEAN data for personality modelling.
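To make the mechanics concrete, here is a minimal sketch of how a Big Five-style inventory is typically scored: Likert-scale answers, some items reverse-keyed, averaged into one number per OCEAN trait. The item-to-trait assignments, reverse-keyed items, and sample answers below are hypothetical placeholders for illustration, not the published Big Five Inventory items.

```python
# Minimal sketch of scoring a Big Five-style questionnaire (illustrative only).
# The item-to-trait mapping and reverse-keyed items are hypothetical,
# not the actual published Big Five Inventory.
from statistics import mean

SCALE_MAX = 5  # Likert responses run from 1 (disagree) to 5 (agree)

# trait -> list of (item_id, reverse_keyed)
TRAIT_ITEMS = {
    "openness":          [(1, False), (6, True)],
    "conscientiousness": [(2, False), (7, True)],
    "extroversion":      [(3, False), (8, True)],
    "agreeableness":     [(4, True),  (9, False)],
    "neuroticism":       [(5, False), (10, True)],
}

def score_ocean(responses):
    """Average each trait's items; reverse-keyed items are flipped
    (a response of 1 becomes 5, 2 becomes 4, and so on)."""
    scores = {}
    for trait, items in TRAIT_ITEMS.items():
        values = [
            (SCALE_MAX + 1 - responses[item]) if reverse else responses[item]
            for item, reverse in items
        ]
        scores[trait] = mean(values)
    return scores

# One respondent's answers to the ten hypothetical items.
answers = {1: 4, 2: 5, 3: 2, 4: 3, 5: 1, 6: 2, 7: 1, 8: 4, 9: 5, 10: 5}
print(score_ocean(answers))
# -> {'openness': 4.0, 'conscientiousness': 5.0, 'extroversion': 2.0,
#     'agreeableness': 4.0, 'neuroticism': 1.0}
```

Trait scores of this kind, collected at scale, are the raw material for the personality modelling described above.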

Advent of Stealth Assessment

The upcoming 2018 OECD PISA test will include, for the first time, a battery of questions aimed at assessing "global competencies" with a distinct SEL orientation. In 2019, the OECD plans to launch its international Study of Social and Emotional Learning. Designed as a computer-based self-completion questionnaire, the test is, at its core, a modified version of the Big Five Inventory. The OECD version maps exactly onto the five-factor personality categories, with "emotional stability" substituted in place of "neuroticism." When implemented, the social and emotional skills test will assess students against each of the Big Five categories.

The OECD Education Skills experts, working in collaboration with Pearson International, firmly believe that social and emotional skills are important predictors of educational progress and future workplace performance. Large-scale personality data is clearly seen by the OECD to be predictive of a country's potential social and economic progress. Although the OECD and the Ontario Student Well-Being advocates both claim that it is strictly a test of social and emotional skills, Williamson claims such projects employ the same family of methods used in the Cambridge Analytica personality quiz. Upon closer examination, the same psychological assumptions and personality assessment methods underpin most of the latest education ventures.

The OECD is already a powerful influence on the moulding of national education policies. Its PISA testing has reshaped school curricula, assessments and whole systems in the global education race.  It is increasingly likely that its emphasis on personality testing will, once again, reshape education policy and school practices. Just as PISA has influenced a global market in products to support the core skills of literacy, numeracy and science tested by the assessment, the same is now occurring around SEL and personality development.  Canada’s provincial and territorial ministers of education, working under the auspices of the Council of Ministers of Education, Canada (CMEC) have not only endorsed the OECD’s  proposed “global competencies,” but proposed a variation of their own to guide assessment policy.

The Ontario Student Assessment initiative, announced September 6, 2017, deserves closer scrutiny through the lens of datafication and personality profiling. Its overarching goal bears repeating: "Update provincial assessment and reporting practices, including EQAO, to make sure they are culturally relevant, measure a wider range of learning, and better reflect student well-being and equity." Founder of People for Education Annie Kidder hailed the plan for "embedding" the "transferable skills" and positioning Ontario to take "a leading role in the global movement toward broader goals for education and broader measures of success in our schools."

Critics of large-scale student assessments are quick to identify the underlying influence of "globalization" and the oft-stated goal of preparing students for the highly competitive "21st century workplace." It can be harder to spot currents moving in the opposite direction, heavily influenced by what Kathryn Ecclestone and Dennis Hayes aptly termed the "therapeutic education ethos." Ten years ago, they flagged the rise of a "therapeutic education" movement exemplified by classroom activities and programs, often branded as promoting 'mindfulness,' which pave the way for "coaching appropriate emotions" and transform education into a disguised form of "social engineering" aimed at producing "emotionally literate citizens" who are notably "happy" and experience "emotional well-being."

Preparing students either to be highly competitive human beings or to be creative and cooperative individuals risks re-framing public education in terms of personality modification, driven by ideological motivations, rather than the pursuit of meaningful knowledge and understanding. It treats children as 'guinea pigs' engaged in either market-competition preparation or social engineering, and may well stand in the way of classroom teachers pursuing their own evidence-based, knowledge-centred curriculum aims.

The appropriation and misuse of personality data by Facebook and Cambridge Analytica led to a significant worldwide public backlash. In education, however, tests and technologies to measure student personality, according to Williamson, are passing unchallenged. It is equally controversial to capture and mine students' personality data with the goal of shaping students to "fit into" the evolving global marketplace. Stealth assessment has arrived, and being forewarned is forearmed.

Why is education embracing data mining and personality profiling for schoolchildren? What are the connections between Facebook data mining and recent social-and-emotional learning assessment initiatives? Should students and parents be advised, in advance, when student data is being mined and mapped against personality types? Why have Canadian assessment projects like the Ontario Measuring What Matters – Student Well-Being initiative escaped close scrutiny? Should we be more vigilant in tracking and monitoring the use and abuse of Big Data in education?


Canada’s most populous province aspires to education leadership and tends to exert influence far beyond our coast-to-coast provincial school systems. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September of 2017, in the wake of a well-publicized decline in provincial Math test scores and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry's much-touted $60-million "renewed math strategy" completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests — conducted by the Education Quality and Accountability Office (EQAO) — only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point from the previous year.

The Ontario government's response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was not only designed to change the channel, but to initiate a "student assessment review" targeting the messenger, the EQAO, and attempting to chip away at its hard-won credibility, built up over the past twenty years. While the announcement conveyed the impression of "open and authentic" consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope. Anyone reading the paper and digesting the EQAO survey questions will see that the provincial tests are now on trial themselves, being assessed on criteria well outside their current mandate.

Ontario’s provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million or $17 per student, the credibility of the test results remain widely accepted, and the testing model continues to be free of interference or manipulation.  It’s working the way it was intended — to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but is still considered the 'gold standard' right across Canada. It has succeeded in providing much greater transparency, but – like other such testing regimes – has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results. The Grade 10 Literacy Test remains an embarrassment. In May 2010, for example, the EQAO report revealed that hundreds of students who failed the 2006 test were simply moved along through the system without passing that graduation standard. Consistently, about 19 to 24 per cent of all students – and 56 per cent of Applied-stream students – fall short of acceptable literacy, yet graduation rates have risen from 68 to 86 per cent province-wide.

The Ontario Ministry is now ‘monkeying around’ with the EQAO and seems inclined toward either neutering the agency to weaken student performance transparency or broadening its mandate to include assessing students for “social and emotional learning’ (SEL), formerly termed “non-cognitive learning.”  The “Independent Review of Assessment and Reporting” is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders, Michael Fullan, Andy Hargreaves, and Carol Campbell.  It’s essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the International Education Panel of Advisors in Scotland attempting to rescue the Scottish National Party’s faltering “Excellence for All” education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages. Most of the questions focus on EQAO testing and accountability and approach the tests through a "student well-being" and "diversity" lens. An "evidence-informed" review of the current model of assessment and reporting is promised, but it's nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda, espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars – achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence – are repeated as secular articles of faith.

Where's the research to support the proposed direction? The Discussion Paper does provide capsule summaries of two assessment approaches, termed "large-scale assessments" and "classroom assessments," but critical analysis of only the first of the two. There's no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged "failure" of Assessment for Learning (AfL). Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate "student well-being" into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment. It's essentially based upon rather skewed research conducted in support of "broadening student assessments" rather than the latest research on what works in carrying out student assessments in the schools. Critical issues such as the "numeracy gap," now being seriously debated by leading education researchers and student assessment experts, are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma. Nowhere does the Ontario document recognize Dylan Wiliam's recent pronouncement that his own creation, Assessment for Learning, has floundered because of "flawed implementation" and unwise attempts to incorporate AfL into summative assessments. Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou. In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures, but fairer for students from disadvantaged families. She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a direction that’s different from where the Ontario Ministry of Education appears to be heading. Christodoulou’s Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam.  Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on “generic skills” assessment, ‘rubric-mania,’ impenetrable verbal descriptors, and the mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers.  Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that it’s premature and likely to fail is simply foolhardy.  No education jurisdiction priding itself on being “A Learning Province” would plow ahead when the lights turn to amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance.  It does not really pass the sound research ‘sniff test.’  It looks very much like another Ontario provincial initiative offering a polished, but rather thinly veiled, rationale for supporting the transition away from “large-scale assessment” to “classroom assessment” and grafting unproven SEL competencies onto EQAO, running the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative?  Where’s the sound research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all?  Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle? 


Starting next year, students from Kindergarten to Grade 12 in Canada’s largest province, Ontario, will be bringing home report cards that showcase six “transferable skills”: critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It’s the latest example of the growing influence of education policy organizations, consultants and researchers promoting “broader measures of success” formerly known as “non-cognitive” domains of learning.

In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario's most influential education lobby group, People for Education, and professionally packaged in its well-funded Measuring What Matters assessment reform initiative. In this respect, it's remarkably similar in its focus to the Boston-based organization Transforming Education. Never a supporter of Ontario's highly-regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, "moves beyond the 3R's."

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five "dimensions of learning" were proposed, in advance of any research being undertaken to confirm their validity or any recognition that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities: 'grit'/conscientiousness, character, and "growth mindset." Those dimensions – physical and mental health, social-emotional development, creativity and innovation, and school climate – reflected the socially-progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a ‘new catechism.’  The Research Director, Dr. David Hagen Cameron, a PhD in Education from the University of London, hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed “MWM change theory.” His mandate was crystal clear – to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were “field-tested” in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational "interventionist machine," Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to "find congruence" between MWM priorities and Ministry mandates, and how to tackle the tricky business of winning the concurrence of teachers, particularly overcoming their instinctive resistance to district "education consultants" who arrive promising support but end up extending more "institutional control over teachers in their classrooms."

Stumbling blocks emerged when the MWM theory met the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into "a set of student competencies" and ensuring "supportive conditions" posed immediate difficulties. The MWM reform promoters came up four-square against achieving "system coherence" with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to effectively answer questions voicing concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public's desire for simple, easy-to-understand reports.

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book Grit, the Duckworth and Yeager research report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social and physical "well-being," they cautioned that not everything that counts can be counted, and not everything that can be counted counts. The two prominent SEL researchers warned that it was premature to proceed with such school system accountability systems. "Our working title," Duckworth later revealed, "was all measures suck, and they all suck in their own way."

The Duckworth-Yeager report provided the most in-depth analysis (to date) of the challenges and pitfalls involved in advancing a project like Ontario's Measuring What Matters. Assessing for cognitive knowledge was long established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of student self-reports, teacher questionnaires and performance tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).

Duckworth went public with her concerns in February of 2016. She revealed to The New York Times that she had resigned from a California advisory board fronting an SEL initiative spearheaded by the California Office to Reform Education (CORE), and that she no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth's findings credible, stating: "There are so many ways to do this wrong." The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz's recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic: so far, it has proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck's recently expressed concerns about the use of her "growth mindset" research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five "dimensions of learning" have now morphed into five "domains and competencies" making no reference whatsoever to the place of the cognitive domain in the overall scheme. It's a classic example of three phenomena that bedevil contemporary education policy-making: tautology, confirmation bias, and the sunk-cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it's too late to pause or turn back may not be the best guarantors of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base: Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers?  Since the best practice research supports a rather costly “multivariate, multi-instrumental approach,” is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives? 
