
Ontario now aspires to global education leadership in the realm of student evaluation and reporting. The latest Ontario student assessment initiative, A Learning Province, announced in September 2017 and guided by OISE education  professor Dr. Carol Campbell, cast a wide net encompassing classroom assessments, large scale provincial tests, and national/international assessment programs.  That vision for “student-centred assessments” worked from the assumption that future assessments would capture the totality of “students’ experiences — their needs, learning, progress and well-being.”

The sheer scope of the whole project not only deserves much closer scrutiny, but needs to be carefully assessed for its potential impact on frontline teachers. A pithy statement by British teacher-researcher Daisy Christodoulou in January 2017 is germane to the point: “When government get their hands on anything involving the word ‘assessment’, they want it to be about high stakes monitoring and tracking, not about low-stakes diagnosis.” In the case of Ontario, pursuing the datafication of social-emotional learning and the mining of data to produce personality profiles is clearly taking precedence over the creation of teacher-friendly assessment policy and practices.

One of the reasons Ontario has been recognized as a leading education system is because of its success over the past 20 years in establishing an independent Education Quality and Accountability Office (EQAO) with an established and professionally sound provincial testing program in Grades 3, 6, 9 and 10. Whether you support the EQAO or not, most agree that it has succeeded in establishing reliable benchmark standards for student performance in literacy and mathematics.

The entire focus of Ontario student assessment is now changing. Heavily influenced by the Ontario People for Education Measuring What Matters project, the province is plunging ahead with Social and Emotional Learning (SEL) assessment, embracing what Ben Williamson aptly describes as “stealth assessment” – a set of contested personality criteria utilizing SEL ‘datafication’ to measure “student well-being.” Proceeding to integrate SEL into student reports and province-wide assessments is also foolhardy when American experts Angela Duckworth and David Scott Yeager warn that the ‘generic skills’ are ill-defined and possibly unmeasurable.

Social and emotional learning is now at the very core of Ontario’s Achieving Excellence and Equity agenda and it fully embraces “supporting all students” and enabling them to achieve “a positive sense of well-being – the sense of self, identity, and belonging in the world that will help them to learn, grow and thrive.” The Ontario model is based upon a psycho-social theory that “well-being” has “four interconnected elements” critical to student development, with self/spirit at the centre. Promoting student well-being is about fostering learning environments exhibiting these elements:

Cognitive: Development of abilities and skills such as critical thinking, problem solving, creativity, and the ability to be flexible and innovative.

Emotional: Learning about experiencing emotions, and understanding how to recognize, manage, and cope with them.

Social: Development of self-awareness, including the sense of belonging, collaboration, relationships with others, and communication skills.

Physical: Development of the body, impacted by physical activity, sleep patterns, healthy eating, and healthy life choices.

Self/Spirit:  Recognizing the core of identity which has “different meanings for different people, and can include cultural heritage, language, community, religion or a broader spirituality.”

Ontario’s new student report cards, proposed for 2018-19 implementation, will incorporate a distinct SEL component with teacher evaluations on a set of “transferable skills,” shifting the focus from organization and work habits to “well-being” and associated values, while retaining grades or marks for individual classes. The Ontario Education “Big Six” Transferable Skills are: critical thinking, innovation and creativity, self-directed learning, collaboration, communication, and citizenship. Curiously absent from the Ontario list of preferred skills are those commonly found in American variations on the formula: grit, growth mindset, and character.

The emerging Ontario student assessment strategy needs to be evaluated in relation to the latest research and best practice, exemplified in Dylan Wiliam’s student assessment research and Daisy Christodoulou’s 2017 book Making Good Progress: The Future of Assessment for Learning.  Viewed through that lens, the Ontario student assessment philosophy and practice falls short on a number of counts.

  1. The Generic Skills Approach: Adopting this approach reflects a fundamental misunderstanding about how students learn and acquire meaningful skills. Tackling problem-solving at the outset, utilizing Project-Based Learning to “solve real-life problems,” is misguided because knowledge and skills are better acquired through other means. The “deliberate practice method” has proven more effective. Far more is learned when students break down skills into a ‘progression of understanding’ — acquiring the knowledge and skill to progress on to bigger problems.
  2. Generic Feedback: Generic or transferable skills prove to be unsound when used as a basis for student reporting and feedback on student progress. Skills are not taught in the abstract, so feedback has little meaning for students. Reading a story and making inferences, for example, is not a discrete skill; it is dependent upon knowledge of vocabulary and background context to achieve reading comprehension.
  3. Hidden Bias of Teacher Assessment: Teacher classroom assessments are highly desirable, but do not prove as reliable as standardized measures administered under fair and objective conditions. Disadvantaged students, based upon reliable, peer-reviewed research, do better on tests than on regular teacher assessments. “Teacher assessment is biased not because it is carried out by teachers, but because it is carried out by humans.”
  4. Unhelpful Prose Descriptors: Most verbal descriptors used in system-wide assessments and reports are unhelpful: they tend to be jargon-ridden, unintelligible to students and parents, and prove particularly inaccessible to students struggling in school. Second-generation descriptors are “pupil friendly” but still prove difficult to use in learning how to improve or correct errors.
  5. Work-Generating Assessments: System-wide assessments, poorly constructed, generate unplanned and unexpected marking loads, particularly in the case of qualitative assessments with rubrics or longer marking times. In the U.K., for example, the use of grade descriptors for feedback proved much more time-consuming than normal grading of written work. Primary teachers who spent 5 hours a week on assessment in 2010 found that, by 2013, they were spending 10 hours a week.

What’s wrong with the new Ontario Assessment Plan, and what needs rethinking?
  1. The Generic Skills Approach – Teaching generic skills (SEL) doesn’t work and devalues domain-specific knowledge
  2. Social and Emotional Learning (SEL) models — carry inherent biases and are unmeasurable
  3. Breach of Student Security – Data mining and student surveys generate personality data without consent
  4. Erosion of Teacher Autonomy – Student SEL data, generated by algorithms, creates more record-keeping and more marking, and cuts into classroom time.

The best evidence-based assessment research, applied in deconstructing the Ontario Assessment initiative, raises red flags.  Bad student assessment practices, as Wiliam and Christodoulou show, can lead to serious workload problems for classroom teachers. No education jurisdiction that lived up to the motto “Learning Province” would plow ahead when the light turns to amber.

A summary of the researchED Ontario presentation delivered April 14, 2018, at the Toronto Airport Westin Hotel. 

Where is the new Ontario student assessment initiative really heading? Is it a thinly-disguised attempt to create a counterweight to current large-scale student achievement assessments? Is it feasible to proceed with SEL assessment when leading researchers question its legitimacy and validity? Are we running the risk of opening the door to the wholesale mining of student personal information without consent and for questionable purposes? 


Canada’s most populous province aspires to education leadership and tends to exert influence far beyond our coast-to-coast provincial school systems. That is why the latest Ontario student assessment initiative, A Learning Province, is worth tracking and deserves much closer scrutiny. It was officially launched in September of 2017, in the wake of a well-publicized decline in provincial Math test scores and cleverly packaged as a plan to address wider professional concerns about testing and accountability.

Declining Math test scores among public elementary school students in Ontario were big news in late August 2017 for one good reason: the Ontario Ministry’s much-touted $60-million “renewed math strategy” completely bombed when it came to alleviating the problem. On the latest round of provincial standardized tests, conducted by the Education Quality and Accountability Office (EQAO), only half of Grade 6 students met the provincial standard in math, unchanged from the previous year. In 2013, about 57 per cent of Grade 6 students met the standard. Among Grade 3 students, 62 per cent met the provincial standard in math, a decrease of one percentage point since the previous year.

The Ontario government’s response, championed by Premier Kathleen Wynne and Education Minister Mitzie Hunter, was not only designed to change the channel, but to initiate a “student assessment review” targeting the messenger, the EQAO, and attempting to chip away at its hard-won credibility, built up over the past twenty years. While the announcement conveyed the impression of “open and authentic” consultation, the Discussion Paper made it crystal clear that the provincial agency charged with ensuring educational accountability was now under the microscope.  Reading the paper and digesting the EQAO survey questions, it becomes obvious that the provincial tests are now on trial themselves, and being assessed on criteria well outside their current mandate.

Ontario’s provincial testing regime should be fair game when it comes to public scrutiny. When spending ballooned to $50 million a year in the late 1990s, taxpayers had a right to be concerned. Since 2010, EQAO costs have hovered around $34 million or $17 per student, the credibility of the test results remain widely accepted, and the testing model continues to be free of interference or manipulation.  It’s working the way it was intended — to provide a regular, reasonably reliable measure of student competencies in literacy and numeracy.

The EQAO is far from perfect, but is still considered the ‘gold standard’ right across Canada. It has succeeded in providing much greater transparency, but, like other such testing regimes, has not nudged education departments far enough in the direction of improving teacher specialist qualifications or changing the curriculum to secure better student results. The Grade 10 Literacy Test remains an embarrassment. In May 2010, for example, an EQAO report revealed that hundreds of students who failed the 2006 test were simply moved along through the system without passing that graduation standard. Consistently, about 19 to 24 per cent of all students, and 56 per cent of students in Applied courses, fall short of acceptable literacy, yet graduation rates have risen from 68 per cent to 86 per cent province-wide.

The Ontario Ministry is now ‘monkeying around’ with the EQAO and seems inclined toward either neutering the agency to weaken student performance transparency or broadening its mandate to include assessing students for “social and emotional learning” (SEL), formerly termed “non-cognitive learning.” The “Independent Review of Assessment and Reporting” is being supervised by some familiar Ontario education names, including the usual past and present OISE insiders, Michael Fullan, Andy Hargreaves, and Carol Campbell. It’s essentially the same Ontario-focused group, minus Dr. Avis Glaze, that populates the International Education Panel of Advisors in Scotland attempting to rescue the Scottish National Party’s faltering “Excellence for All” education reforms.

The published mandate of the Student Assessment Review gives it all away in a few critical passages. Most of the questions focus on EQAO testing and accountability and approach the tests through a “student well-being” and “diversity” lens. An “evidence-informed” review of the current model of assessment and reporting is promised, but it’s nowhere to be found in the discussion paper. Instead, we are treated to selected excerpts from official Ontario policy documents, all supporting the current political agenda, espoused in the 2014 document, Achieving Excellence: A Renewed Vision for Education in Ontario. The familiar four pillars, achieving excellence, ensuring equity, promoting well-being, and enhancing public confidence, are repeated as secular articles of faith.

Where’s the research to support the proposed direction? The Discussion Paper does provide capsule summaries of two assessment approaches, termed “large-scale assessments” and “classroom assessments,” but critical analysis of only the first of the two. There’s no indication in A Learning Province that the reputedly independent experts recognize, let alone heed, the latest research pointing out the pitfalls and problems associated with Teacher Assessments (TA) or the acknowledged “failure” of Assessment for Learning (AfL). Instead, we are advised, in passing, that the Ontario Ministry has a research report, produced in August 2017 by the University of Ottawa, examining how to integrate “student well-being” into provincial K-12 assessments.

The Ontario Discussion Paper is not really about best practice in student assessment. It’s essentially based upon rather skewed research conducted in support of “broadening student assessments” rather than the latest research on what works in carrying out student assessments in the schools. Critical issues such as the “numeracy gap,” now being seriously debated by leading education researchers and student assessment experts, are not even addressed in the Ontario policy paper.

Educators and parents reading A Learning Province would have benefited from a full airing of the latest research on what actually works in student assessment, whether or not it conforms with provincial education dogma. Nowhere does the Ontario document recognize Dylan Wiliam’s recent pronouncement that his own creation, Assessment for Learning, has floundered because of “flawed implementation” and unwise attempts to incorporate AfL into summative assessments. Nor does the Ontario student assessment review team heed the recent findings of British assessment expert Daisy Christodoulou. In her 2017 book, Making Good Progress, Christodoulou provides compelling research evidence to demonstrate why and how standardized assessments are not only more reliable measures, but fairer for students from underprivileged families. She also challenges nearly every assumption built into the Ontario student assessment initiative.

The latest research and best practice in student assessment cut in a direction that’s different from where the Ontario Ministry of Education appears to be heading. Christodoulou’s Making Good Progress cannot be ignored, particularly because it comes with a ringing endorsement from the architect of Assessment for Learning, Dylan Wiliam.  Classroom teachers everywhere are celebrating Christodoulou for blowing the whistle on “generic skills” assessment, ‘rubric-mania,’ impenetrable verbal descriptors, and the mountains of assessment paperwork. Bad student assessment practices, she shows, lead to serious workload problems for classroom teachers.  Proceeding to integrate SEL into province-wide assessments when American experts Angela Duckworth and David Scott Yeager warn that it’s premature and likely to fail is simply foolhardy.  No education jurisdiction priding itself on being “A Learning Province” would plow ahead when the lights turn to amber.

The Ontario Student Assessment document, A Learning Province, may well be running high risks with public accountability for student performance.  It does not really pass the sound research ‘sniff test.’  It looks very much like another Ontario provincial initiative offering a polished, but rather thinly veiled, rationale for supporting the transition away from “large-scale assessment” to “classroom assessment” and grafting unproven SEL competencies onto EQAO, running the risk of distorting its core mandate.

Where is Ontario really heading with its current Student Assessment policy initiative?  Where’s the sound research to support a transition from sound, large-scale testing to broader measures that can match its reliability and provide a level playing field for all?  Should Ontario be heeding leading assessment experts like Dylan Wiliam, Daisy Christodoulou, and Angela Duckworth? Is it reasonable to ask whether a Ministry of Education would benefit from removing a nagging burr in its saddle? 

 


Starting next year, students from Kindergarten to Grade 12 in Canada’s largest province, Ontario, will be bringing home report cards that showcase six “transferable skills”: critical thinking, creativity, self-directed learning, collaboration, communication, and citizenship. It’s the latest example of the growing influence of education policy organizations, consultants and researchers promoting “broader measures of success” formerly known as “non-cognitive” domains of learning.

In announcing the latest provincial report card initiative in September 2017, Education Minister Mitzie Hunter sought to change the channel in the midst of a public outcry over continuing declines in province-wide testing results, particularly in Grade 3 and 6 mathematics. While Minister Hunter assured concerned parents that standardized testing was not threatened with elimination, she attempted to cast the whole reform as a move toward “measuring those things that really matter to how kids learn and how they apply that learning to the real world, after school.”

Her choice of words had a most familiar ring because it echoed the core message promoted assiduously since 2013 by Ontario’s most influential education lobby group, People for Education, and professionally packaged in its well-funded Measuring What Matters assessment reform initiative. In this respect, it’s remarkably similar in its focus to the Boston-based organization Transforming Education. Never a supporter of Ontario’s highly-regarded provincial testing system, managed by the Education Quality and Accountability Office (EQAO), the Toronto-based group led by parent activist Annie Kidder has spent much of the past five years seeking to construct an alternative model that, in the usual P4E progressive education lexicon, “moves beyond the 3R’s.”

Kidder and her People for Education organization have always been explicit about their intentions and goals. The proposed framework for broader success appeared, almost fully formed, in its first 2013 policy paper.  After referring, in passing, to the focus of policy-makers on “evidence-based decision making,” the project summary disputed the primacy of “narrow goals” such as “literacy and numeracy” and argued for the construction of (note the choice of words) a “broader set of goals” that would be “measurable so students, parents, educators, and the public can see how Canada is making progress” in education.

Five “dimensions of learning” were proposed, in advance of any research being undertaken to confirm their validity or any recognition that certain competing dimensions had been ruled out, including resilience and its attendant personal qualities of “grit”/conscientiousness, character, and “growth mindset.” Those dimensions, physical and mental health, social-emotional development, creativity and innovation, and school climate, reflected the socially-progressive orientation of People for Education rather than any evidence-based analysis of student assessment policy and practice.

Two years into the project, the Measuring What Matters (MWM) student success framework had hardened into what began to sound, more and more, like a ‘new catechism.’  The Research Director, Dr. David Hagen Cameron, a PhD in Education from the University of London, hired from the Ontario Ministry of Education, began to focus on how to implement the model with what he termed “MWM change theory.” His mandate was crystal clear – to take the theory and transform it into Ontario school practice in four years, then take it national in 2017-18. Five friendly education researchers were recruited to write papers making the case for including each of the domains, some 78 educators were appointed to advisory committees, and the proposed measures were “field-tested” in 26 different public and Catholic separate schools (20 elementary, 6 secondary), representing a cross-section of urban and rural Ontario.

As an educational sociologist who cut his research teeth studying the British New Labour educational “interventionist machine,” Dr. Cameron was acutely aware that educational initiatives usually flounder because of poorly executed implementation. Much of his focus, in project briefings and academic papers from 2014 onward, was on how to “find congruence” between MWM priorities and Ministry mandates, and how to tackle the tricky business of winning the concurrence of teachers, particularly in overcoming their instinctive resistance to district “education consultants” who arrive promising support but end up extending more “institutional control over teachers in their classrooms.”

Stumbling blocks emerged when the MWM theory met up with the everyday reality of teaching and learning in the schools. Translating the proposed SEL domains into “a set of student competencies” and ensuring “supportive conditions” posed immediate difficulties. The MWM reform promoters came four square up against achieving “system coherence” with the existing EQAO assessment system and the challenge of bridging gaps between the system and local levels. Dr. Cameron and his MWM team were unable to effectively answer questions voicing concerns about increased teacher workload, the misuse of collected data, the mandate creep of schools, and the public’s desire for simple, easy to understand reports. 

Three years into the project, the research base supporting the whole venture began to erode, as more critical independent academic studies appeared questioning the efficacy of assessing Social and Emotional Learning traits or attributes. Dr. Angela L. Duckworth, the University of Pennsylvania psychologist who championed SEL and introduced “grit” into the educational lexicon, produced a comprehensive 2015 research paper with University of Texas scholar David Scott Yeager that raised significant concerns about the wisdom of proceeding, without effective measures, to assess “personal qualities” other than cognitive ability for educational purposes.

Coming from the leading SEL researcher and author of the best-selling book, Grit, the Duckworth and Yeager research report in Educational Researcher dealt a blow to all state and provincial initiatives attempting to implement SEL measures of assessment. While Duckworth and Yeager held that personal attributes can be powerful predictors of academic, social and physical “well-being,” they cautioned that not everything that counts can be counted, and not everything that can be counted counts. The two prominent SEL researchers warned that it was premature to proceed with such school-system accountability schemes. “Our working title,” she later revealed, “was all measures suck, and they all suck in their own way.”

The Duckworth-Yeager report provided the most in-depth analysis (to date) of the challenges and pitfalls involved in advancing a project like Ontario’s Measuring What Matters. Assessing for cognitive knowledge was long-established and had proven reasonably reliable in measuring academic achievement, they pointed out, but constructing alternative measures remained in its infancy. They not only identified a number of serious limitations of Student Self-Report and Teacher Questionnaires and Performance Tasks (Table 1), but also provided a prescription for fixing what was wrong with system-wide implementation plans (Table 2).

Duckworth went public with her concerns in February of 2016.  She revealed to The New York Times that she had resigned from a California advisory board fronting a SEL initiative spearheaded by the California Office to Reform Education (CORE), and no longer supported using such tests to evaluate school performance. University of Chicago researcher Camille A. Farrington found Duckworth’s findings credible, stating: “There are so many ways to do this wrong.” The California initiative, while focused on a different set of measures, including student attendance and expulsions, had much in common philosophically with the Ontario venture.

The wisdom of proceeding to adopt SEL system-wide and to recast student assessment in that mold remains contentious. Anya Kamenetz’s recent National Public Radio commentary (August 16, 2017) explained, in some detail, why SEL is problematic because, so far, it’s proven impossible to assess what has yet to be properly defined as student outcomes. It would also seem unwise to overlook Carol Dweck’s recently expressed concerns about using her “Growth Mindset” research for other purposes, such as proposing a system-wide SEL assessment plan.

The Ontario Measuring What Matters initiative, undeterred by such research findings, continues to plow full steam ahead. The five “dimensions of learning” have now morphed into five “domains and competencies” making no reference whatsoever to the place of the cognitive domain in the overall scheme. It’s a classic example of three phenomena which bedevil contemporary education policy-making: tautology, confirmation bias, and the sunk cost trap. Repeatedly affirming a concept in theory (as logically irrefutable truth) without much supporting research evidence, gathering evidence to support preconceived criteria and plans, and proceeding because it’s too late to take a pause, or turn back, may not be the best guarantor of long-term success in implementing a system-wide reform agenda.

The whole Ontario Measuring What Matters student assessment initiative raises far more questions than it answers. Here are a few pointed questions to get the discussion started and spark some re-thinking.

On the Research Base:  Does the whole MWM plan pass the research sniff test? Where do the cognitive domain and the acquisition of knowledge fit in the MWM scheme? If the venture focuses on Social and Emotional Learning (SEL), whatever happened to the whole student resilience domain, including grit, character and growth mindset? Is it sound to construct a theory and then commission studies to confirm your choice of SEL domains and competencies?

On Implementation: Will introducing the new Social Learning criteria on Ontario student reports do any real harm? Is it feasible to introduce the full MWM plan on top of the current testing regime without imposing totally unreasonable additional burdens on classroom teachers?  Since the best practice research supports a rather costly “multivariate, multi-instrumental approach,” is any of this affordable or sustainable outside of education jurisdictions with significant and expandable capacity to fund such initiatives? 

 


Educational talk about “grit” – being passionate about long-term goals, and showing the determination to see them through – seems to be everywhere in and around schools. Everywhere, that is, except in the rather insular Canadian educational world. Teaching and measuring social-emotional skills are on the emerging policy agenda, but “grit” is (so far) not among them.

Grit is trendy in American K-12 education and school systems are scrambling to get on board the latest trend. A 2007 academic article, researched and written by Angela Duckworth, made a compelling case that grit plays a critical role in success. Author Paul Tough introduced grit to a broad audience in his 2013 book How Children Succeed: Grit, Curiosity, and the Hidden Power of Character, which went on to spend a year on the New York Times bestseller list. And in the same year, Duckworth herself gave a TED talk, which has been viewed more than 8 million times online.

Since then, grit initiatives have flourished in United States school systems. Some schools are seeking to teach grit, and some districts are attempting to measure children’s grit, with the outcome contributing to assessments of school effectiveness. Angela Duckworth’s new book, Grit: The Power of Passion and Perseverance, is one of the hottest North American non-fiction titles this publishing season.  In spite of the flurry of public interest, it has yet to register in the Canadian educational domain.

Over the past three years the Ontario-based People for Education (P4ED) advocacy organization has been pursuing the goal of broadening the existing measures of student success to embrace “social-emotional skills” or competencies. With a clear commitment to “move beyond the 3R’s” and redefine the established testing/accountability framework, P4ED founder Annie Kidder and the well-funded Toronto-centred research team have been creating a “broad set of foundational skills” and developing a method of “measuring schools’ progress toward those goals.”

The Ontario P4ED initiative, billed as “Measuring What Matters” (MWM), proposes a draft set of “Competencies and Skills” identified as Creativity, Citizenship, Social-Emotional Learning, and Health — all to be embedded in what is termed “quality learning environments” both in schools and the community. The proposed Ontario model makes no reference whatsoever to cognitive learning and subject knowledge or to the social-emotional aspects of grit, perseverance or work ethic.

The P4ED project has a life of its own, driven by a team of Canadian education researchers with their own well-known hobby horses. Co-Chair of the MWM initiative, former BC Deputy Minister of Education Charles Ungerleider, has assembled a group of academics with impeccable “progressive education” (anti-testing) credentials, including OISE teacher workload researcher Nina Bascia and York University self-regulation expert Stuart Shanker.

A 2015 MWM project progress report claimed that the initiative was moving from theory to practice with “field trials” in Ontario public schools. It simply reaffirmed the proposed social-emotional domains and made no mention of Duckworth’s research or her “Grit Scale” for assessing student performance on that benchmark. While Duckworth is cited in the report, it is for a point unrelated to her key research findings. The paper also assumes that Ontario is a “medium stakes” testing environment in need of softer, non-cognitive measures of student progress, an implicit criticism of the highly regarded Education Quality and Accountability Office system of provincial achievement testing.

Whether “grit” or any other social-emotional skills can be taught — or reliably measured — is very much in question. Leading American cognitive learning researcher Daniel T. Willingham’s latest American Educator essay (Summer 2016) addresses the whole matter squarely and punches holes in the argument that “grit” can be easily taught, let alone assessed in schools. Although Willingham is a well-known critic of “pseudoscience” in education, he does favour utilizing “personality characteristics” for the purpose of “cultivating” in students such attributes as conscientiousness, self-control, kindness, honesty, optimism, courage and empathy, among others.

The movement to assess students for social-emotional skills has also raised alarms, even among the biggest proponents of teaching them. American education researchers, including Angela Duckworth, are leery that the terms used are unclear and the first battery of tests faulty as assessment measures.  She recently resigned from the advisory board of a California project, claiming the proposed social-emotional tests were not suitable for measuring school performance.  “I don’t think we should be doing this; it is a bad idea,” she told The New York Times.

Why are leading Canadian educators so committed to developing “social-emotional” measures as alternatives to current student achievement assessment programs? Should social-emotional competencies such as “joy for learning” or “grit” be taught more explicitly in schools? How reliable are measures of such “social-emotional skills” as creativity, citizenship, empathy, and self-regulation?
