
Student achievement varies a great deal across the Organization for Economic Cooperation and Development (OECD) countries. Good teachers can have a significant impact on their students’ learning and achievement, and there is now research to support that contention. What makes some teachers more effective than others is less clear; it remains a question that cries out for further in-depth study.

A comprehensive research study reported in the latest issue of Education Next (Vol. 19, Spring 2019) tackles that fundamental question on an international comparative scale. Three American researchers, Eric A. Hanushek, Marc Piopiunik, and Simon Wiederhold, not only demonstrate that teachers’ cognitive skills vary widely among developed nations, but also that such differences matter greatly for student performance in school.

Developing, recruiting and training a teacher force with higher cognitive skills (Hanushek, Piopiunik, and Wiederhold 2019) can be critical in improving student learning. “An increase of one standard deviation in teacher cognitive skills,” they claim, “is associated with an increase of 10 to 15 per cent of a standard deviation in student performance.” Comparing reading and math scores in 31 OECD countries, teachers in Finland come out with the highest cognitive skills. One quarter of the gaps in average student performance across countries would be closed if each country raised its teachers’ cognitive skills to the level of Finland’s.
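To make the quoted effect size concrete, here is a small sketch of how a one-standard-deviation improvement in teacher cognitive skills would translate into student test-score points. The student standard deviation of 100 points is an assumption chosen for illustration (PISA-style scales are close to this); the study itself reports effects only in standard-deviation units.

```python
# Illustrative arithmetic only: converts the reported effect size
# (10-15% of a student SD per 1 SD of teacher cognitive skills)
# into raw score points, under an ASSUMED student SD of 100 points.

STUDENT_SD = 100.0                     # assumed SD of student scores (points)
EFFECT_LOW, EFFECT_HIGH = 0.10, 0.15   # effect range reported in the study

def predicted_gain(teacher_sd_increase: float) -> tuple[float, float]:
    """Predicted student-score gain (in points) for a given increase in
    teacher cognitive skills, measured in teacher standard deviations."""
    return (teacher_sd_increase * EFFECT_LOW * STUDENT_SD,
            teacher_sd_increase * EFFECT_HIGH * STUDENT_SD)

low, high = predicted_gain(1.0)   # a one-SD improvement in teacher skills
print(f"Predicted student gain: {low:.0f} to {high:.0f} points")
# → Predicted student gain: 10 to 15 points
```

On this assumed scale, even a half-SD improvement in teacher skills would be worth roughly 5 to 7.5 points per student, which is why the authors treat teacher recruitment as a system-level lever.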

What’s most fascinating about this study is the large role Canadian teachers play in the comparative data analysis for teacher cognitive skills.  Of the 6,402 teacher test-takers in 31 countries, the largest group, 834 (13 per cent), were from Canada. Based upon data gleaned from the OECD Program for the International Assessment of Adult Competencies (PIAAC), we now know where Canadian teachers rank in terms of their numeracy and literacy skills (See Figure 1). We also have a clearer indication of how Canadians with Bachelor’s degrees and Master’s or Doctoral degrees rate in terms of their core cognitive skills.

Teachers from Canada fare reasonably well, placing in the top third in the comparative analysis of cognitive skills. In literacy, teachers in Canada perform above average, with a median score of 308 points out of 500 compared to the sample-wide average of 295 points. If there’s a problem, it’s in numeracy, where they score only marginally above the teacher-wide sample, with a median of 293 points compared to the average of 292. Adult Canadians with Bachelor’s degrees actually outperform teachers in numeracy skills by 7 points. Teachers in Finland and Japan, for example, perform better than Canadians with Master’s or Doctoral degrees.

Since the September 2010 appearance of the McKinsey & Company study “Closing the Talent Gap,” American policy-makers have considered teachers’ own academic performance “a key predictor” of higher student achievement, based upon teacher recruitment practices in countries that perform well on international tests. High-scoring countries like Singapore, Finland and Korea, for example, recruit their teacher force from the top third of their academic cohorts in university.

Securing sound data on the actual quality of recent Canadian teacher education cohorts is challenging because of the paucity of reported information. The claim, advanced in the 2010 McKinsey & Company study, that Canadian teachers come from the “top one third of high school graduates” looks highly suspect.

A September 2008 review of Initial Teacher Education Programs (Gambhir, Evans, Broad, and Gaskell 2008) reported that admission cut-offs ranged from 65 per cent to over 90 per cent, depending upon the faculty of education. Most Canadian universities with Faculty of Education programs, to cite another fact, still have grade cut-off averages for acceptance into Arts and Science that hover between 70 and 75 per cent. With the exception of OISE, Western, Queen’s and UBC, teacher candidates are not drawn from the top third of their academic cohort, particularly in mathematics and sciences.

Differences in teachers’ cognitive skills within a country also seem to have a bearing upon student performance. Plotting the student performance difference between math and reading (at the country level) against the difference in teacher cognitive skills between numeracy and literacy yields some intriguing results (Figure 2). An increase of one standard deviation in teacher cognitive skills is estimated to improve student achievement by 11 per cent of a standard deviation. The data for Canada show a teacher test-score difference between numeracy and literacy of -12 points.

The brand new American study (Hanushek, Piopiunik, Wiederhold 2019) also demonstrates that paying teachers better is a possible factor in attracting and retaining teachers with higher cognitive skills. In terms of wage premiums, teachers’ earnings in higher performing countries are generally higher, as borne out by Ireland, Germany and Korea, where teachers earn 30 to 45 per cent more than comparable college graduates in other jobs.

Teachers in Canada earn 17 per cent more than their comparators, while those in the USA and Sweden earn 22 per cent less. Increasing teacher pay has potential value in the United States where salaries discourage the ‘best and brightest’ from entering teaching. There is a caveat, noted by Hanushek and his research team:  Changes in policy must ensure that “higher salaries go to more effective teachers.”

Do smarter teachers make for smarter students? How sound is the evidence that teachers who know more are actually better teachers? Why do we put so much stock in improving student learning in literacy/reading and mathematics?  What potential flaws can you spot in this type of research? 

 



Equipping the rising generation of students with what are termed “21st century skills” is all the rage. Since the fall of 2010, British Columbia’s Ministry of Education, like many other education authorities, has been virtually obsessed with advancing a B.C. Education Plan championing the latest iteration of a global education transformation movement – technology-based personalized learning.


 

The whole concept of 21st century skills, promoted by the World Economic Forum and the Organization for Economic Cooperation and Development (OECD), rests upon widely-circulated global theories about our fast-changing, technology-driven, unpredictable future. Leading proponents of the new dogma contend that it is now essential to ensure that our youth are “equipped with the right type of skills to successfully navigate through an ever-changing, technology-rich work environment” and ready to “continuously maintain their skills, upskill and/or reskill throughout their working lives.”

Much of the 21st century learning mantra went unchallenged and escaped critical scrutiny until quite recently. Today many of the education researchers challenging the 21st century learning orthodoxy are charter members of researchED, a British grassroots teacher research organization, founded by teacher Tom Bennett five years ago.

A growing number of outstanding education researchers, including Daniel T. Willingham, Dylan Wiliam, and Paul A. Kirschner, have been drawn to researchED because of its commitment to scrutinize prevailing theories, expose education myths, and encourage more evidence-informed curriculum policy and teaching practice. That is precisely why I took the lead in bringing researchED to Canada in November 2017.

British Columbia teachers have given the futuristic B.C. Education Plan a cool reception and are, by every indication, ripe for teacher-led research and curriculum changes that pass the evidence-based litmus test.

A 2017 BCTF survey of teachers gave the B.C. Education Plan mixed reviews and raised serious concerns about the province’s latest iteration of a “21st century skills” curriculum. Teachers’ concerns over “personalized learning” and “competency-based assessment” focused on the “multiple challenges of implementation” without adequate resource support and technology, but much of the strongest criticism was motivated by “confusion” over its purposes, concern over the lack of supporting research, and fears that it would lead to “a less rigorous academic curriculum.”

Such criticisms are well-founded and consistent with new academic research widely discussed in researchED circles and now finding its way into peer-reviewed education research journals. Professor Paul A. Kirschner and his Open University of the Netherlands team are at the forefront of the movement to interrogate the claims and construct an alternative approach to preparing our children for future success.

Research-informed educators are now asking whether the so-called 21st century skills actually exist. If these skills do exist, to what extent are they new or just repackaged from previous generations of attempted reform? Why, they ask, has the number of identified skills ballooned from four in 2009 (Partnership for 21st Century Skills) to 16 in 2016 (World Economic Forum)?

What students need – and most teachers actually want – is what Kirschner has termed “future-proof education.” Based upon recent cognitive science research, he and others are urging teachers to take action themselves to ensure that evidence-informed practice wins the day.

The best way forward may well be deceptively simple: set aside the “21st century skills” paradigm in favour of “the acquisition of knowledge, skills, and attitudes necessary to continue to learn in a stable and enduring way in a rapidly changing world.”

Kirschner and his research team propose a new “future-proof” basis for preparing students for success and fulfillment: 

  1. Cognitive and metacognitive skills are critical. Five of the identified GCM clusters emphasize such skills and suggest emphasizing a progression from concrete cognitive skills to more generic personality competencies.
  2. Authentic learning situations should be a high priority and the driving force for teaching, training, and learning. Such tasks help learners to integrate knowledge, skills, and attitudes, stimulate them to develop coordinate skills, and facilitate transfer of what is learned to new problem situations.
  3. Redesigning schools and professionalizing teachers in 21st century learning strategies are not likely to make much of a difference. Shift the focus to cognitive and metacognitive skills, linking learning with authentic, real-life situations and matching teaching methods with educational contexts and goals.

Rushing headlong into 21st century skills makes little sense to Kirschner and fellow researchers because the most effective and durable initiatives are those that are planned and staged over a longer span of as much as 15 years. He proposes a three-stage approach: 1) laying the building blocks (i.e., concrete cognitive knowledge and skills); 2) developing higher-order thinking and working skills; and 3) tackling bigger problems that require metacognitive competencies and skills. Much of the underlying research is neatly summarized in David Didau’s 2017 taxonomy demonstrating the connection between long-term memory and working memory in teaching and learning.

All of this is just a small taste of my upcoming researchED Vancouver 2019 presentation on the B.C. Education Plan.  It will not only analyze the B.C. version of 21st Century Learning, but attempt to point the province’s education system in the right direction.

Where did the “21st century skills” movement actually originate? Where’s the evidence-based research to support 21st century skills projects such as the B.C. Education Plan? How much of the Plan is driven by the imperatives of technology-based personalized learning and its purveyors? Can you successfully prepare students for careers and jobs that don’t exist? Would we be better advised to abandon “21st century skills” in favour of “future-proof learning”?


Millions of Facebook users were profiled by Cambridge Analytica without their knowledge, and that public disclosure has heightened everyone’s awareness of not only the trend toward “personality profiling,” but also the potential for massive invasion of privacy. These controversial actions have exposed the scope of Big Data and the wider aspirations of the data analytics industry to probe into the “hidden depths of people.” They have also, as U.K. expert Ben Williamson has reminded us, tipped us off about the growing trend toward personality measurement in K-12 and post-secondary education.

Williamson’s 2017 book, Big Data in Education, sounded the alert that the collection and analysis of more personal information from schoolchildren will be a defining feature of education in coming years. And just as the Facebook debacle raises public concerns about the use of personal data, a new international test of 10- and 15-year-olds is to be introduced by the Organization for Economic Cooperation and Development (OECD) – a powerful influence on national education policies on a global scale. Almost without being detected, it is also emerging as a key component of the current Ontario Student “Well-Being” Assessment, initially piloted from 2014 to 2016 by Ontario’s People for Education as the core objective of its Measuring What Matters project.

Most data collected about students since the 1990s has come from conventional international, national and provincial examinations of knowledge and cognitive skills. Preparing students for success in the 21st century workplace has been a major driver of most initiatives in testing and accountability. International test results such as the OECD’s Program of International Student Assessment (PISA) have also become surrogate measures of the future economic potential of nations, feeding a global education race among national education systems.

The advent of Big Data is gradually transforming the nature of student assessment. While the initial phase was focused on stimulating competitive instincts and striving for excellence, more recent initiatives are seeking to “broaden the focus of student assessment” to include what is termed “social and emotional learning (SEL).” Much of the motivation is to secure some economic advantage, but that is now being more broadly defined to help mould students committed to more than individual competitiveness.  With the capacity to collect more “intimate” data about social and emotional skills to measure personality, education policymakers are devising curriculum and assessment programmes to improve personality scores. Despite the Cambridge Analytica controversy, personality data is well on the way to being used in education to achieve a variety of competing political objectives.

The ‘Big Five’ of Personality Profiling

The science of the psychographic profiling employed by Cambridge Analytica is hotly contested. It is, however, based on psychological methods that have a long history for measuring and categorizing people by personality. At its core is a psychological model called the “five factor model” of personality – or the “Big Five.” These include “openness”, “conscientiousness”, “extroversion”, “agreeableness” and “neuroticism” (OCEAN). Personality theorists believe these categories are suitable for classifying the full range of human personalities. Psychologists have invented instruments such as the so-called ‘Big Five Inventory’  to capture OCEAN data for personality modelling.
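To show how instruments in this family turn questionnaire answers into OCEAN scores, here is a minimal sketch: items are rated on a 1-to-5 scale, reverse-keyed items are flipped, and ratings are averaged per trait. The item names, item-to-trait assignments, and reverse-keying below are invented for illustration and are not drawn from the actual Big Five Inventory.

```python
# Hypothetical scoring sketch for a Big Five (OCEAN) style questionnaire.
# Each trait maps to (item_name, is_reverse_keyed) pairs -- all invented.

TRAIT_ITEMS = {
    "openness":          [("item1", False), ("item2", True)],
    "conscientiousness": [("item3", False), ("item4", True)],
    "extroversion":      [("item5", False)],
    "agreeableness":     [("item6", True)],
    "neuroticism":       [("item7", False)],
}

def score_inventory(responses: dict[str, int]) -> dict[str, float]:
    """Average 1-5 ratings per trait, flipping reverse-keyed items (6 - x)."""
    scores = {}
    for trait, items in TRAIT_ITEMS.items():
        values = [(6 - responses[item]) if reverse_keyed else responses[item]
                  for item, reverse_keyed in items]
        scores[trait] = sum(values) / len(values)
    return scores

profile = score_inventory(
    {"item1": 4, "item2": 2, "item3": 5, "item4": 1,
     "item5": 3, "item6": 2, "item7": 4})
print(profile["openness"])   # (4 + (6-2)) / 2 = 4.0
```

The point of the sketch is how little raw material is needed: a handful of self-report ratings yields a five-number personality profile, which is exactly why large-scale questionnaire data lends itself to the kind of modelling described above.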

Advent of Stealth Assessment

The upcoming 2018 OECD PISA test will include, for the first time, a battery of questions aimed at assessing “global competencies” with a distinct SEL orientation. In 2019, the OECD plans to launch its international Study of Social and Emotional Learning. Designed as a computer-based self-completion questionnaire, the test is, at its core, a modified version of the Big Five Inventory. The OECD version maps exactly onto the five-factor personality categories, with “emotional stability” substituted for “neuroticism.” When implemented, the social and emotional skills test will assess students against each of the Big Five categories.

The OECD Education Skills experts, working in collaboration with Pearson International, firmly believe that social and emotional skills are important predictors of educational progress and future workplace performance. Large-scale personality data is clearly seen by the OECD to be predictive of a country’s potential social and economic progress. Although the OECD and the Ontario Student Well-Being advocates both claim that it is strictly a test of social and emotional skills, Williamson claims such projects employ the same family of methods used in the Cambridge Analytica personality quiz. Upon closer examination, the same psychological assumptions and personality assessment methods underpin most of the latest education ventures.

The OECD is already a powerful influence on the moulding of national education policies. Its PISA testing has reshaped school curricula, assessments and whole systems in the global education race.  It is increasingly likely that its emphasis on personality testing will, once again, reshape education policy and school practices. Just as PISA has influenced a global market in products to support the core skills of literacy, numeracy and science tested by the assessment, the same is now occurring around SEL and personality development.  Canada’s provincial and territorial ministers of education, working under the auspices of the Council of Ministers of Education, Canada (CMEC) have not only endorsed the OECD’s  proposed “global competencies,” but proposed a variation of their own to guide assessment policy.

The Ontario Student Assessment initiative, announced September 6, 2017, deserves closer scrutiny through the lens of datafication and personality profiling. Its overarching goal bears repeating: “Update provincial assessment and reporting practices, including EQAO, to make sure they are culturally relevant, measure a wider range of learning, and better reflect student well-being and equity.” Founder of People for Education Annie Kidder hailed the plan for “embedding” the “transferable skills” and positioning Ontario to take “a leading role in the global movement toward broader goals for education and broader measures of success in our schools.”

Critics of large-scale student assessments are quick to identify the underlying influence of “globalization” and the oft-stated goal  of preparing students for the highly competitive “21st century workplace.”  It can be harder to spot currents moving in the opposite direction and heavily influenced by what Kathryn Ecclestone and Denis Hayes aptly termed the “therapeutic education ethos.” Ten years ago, they flagged the rise of  a “therapeutic education” movement exemplified by classroom activities and programs, often branded as promoting ‘mindfulness,’ which pave the way for “coaching appropriate emotions” and transform education into a disguised form of “social engineering” aimed at producing “emotionally literate citizens” who are notably “happy” and experience “emotional well-being.”

Preparing students to be highly competitive human beings or to be creative and cooperative individuals risks re-framing public education in terms of personality modification, driven by ideological motivations, rather than the pursuit of meaningful knowledge and understanding. It treats children as ‘guinea pigs’ engaged in either market-competition preparation or social engineering, and may well stand in the way of classroom teachers pursuing their own evidence-based, knowledge-centred curriculum aims.

The appropriation and misuse of personality data by Facebook and Cambridge Analytica led to a significant worldwide public backlash. In education, however, tests and technologies to measure student personality, according to Williamson, are passing unchallenged. It is equally controversial to capture and mine students’ personality data with the goal of shaping students to “fit into” the evolving global marketplace. Stealth assessment has arrived, and being forewarned is forearmed.

Why is education embracing data mining and personality profiling for schoolchildren? What are the connections between Facebook data mining and recent social-and-emotional learning assessment initiatives? Should students and parents be advised, in advance, when student data is being mined and mapped against personality types? Why have Canadian assessment projects like the Ontario Measuring What Matters – Student Well-Being initiative escaped close scrutiny? Should we be more vigilant in tracking and monitoring the use and abuse of Big Data in education?


“Canadians can be proud of our showing in the 2015 Programme for International Student Assessment (PISA) report,” declared Science consultant Bonnie Schmidt and former Council of Ministers of Education (CMEC) director Andrew Parkin in their first-off-the-mark December 6, 2016 response to the results. “We are,” they added, “one of only a handful of countries that places in the top tier of the Organization for Economic Cooperation and Development (OECD) in each of the three subjects tested: science, reading and math.”

“Canada” and “Canadian students,” we were told, were once again riding high in the once-every-three-years international test sweepstakes. If that effusively positive response had a familiar ring, it was because it followed the official line advanced by a markedly similar CMEC media release, issued a few hours before the commentary.

Since our students, all students in each of our ten provincial school systems, were “excelling,” then it was time for a little national back-slapping. There’s one problem with that blanket analysis: it serves to maintain the status quo, engender complacency, obscure the critical Mathematics scores, and disguise the lopsided nature of student performance from region to region.

Hold on, not so fast, CMEC — the devil is in the details, more clearly portrayed in the OECD’s own “Country Profile” for Canada. Yes, 15-year-olds in three Canadian provinces (Alberta, British Columbia, and Quebec) achieved some excellent results, but overall Mathematics scores were down, and students in over half of our provinces trailed off into mediocrity in terms of performance. Our real success was not in performance, but rather in reducing the achievement gap adversely affecting disadvantaged students.

Over half a million 15-year-olds in more than 72 jurisdictions all over the world completed PISA tests, and Schmidt and Parkin were not alone in making sweeping pronouncements about why Canada and other countries are up and others down in the global rankings.

Talking in aggregate terms about the PISA performance of 20,000 Canadian students in ten different provinces can be, and is, misleading, when the performance results in mathematics continue to lag, Ontario students continue to underperform, and students in two provinces, Manitoba and Saskatchewan, struggle in science, reading, and mathematics.  Explaining all that away is what breeds complacency in the school system.

My own PISA 2015 forecast was way off-base — and taught me a lesson. After the recent TIMSS 2015 Mathematics results, released in November 2016, an East Asian sweep, led by Singapore and Korea, seemed like a safe bet. How Finland performs also attracts far less attention than it did in its halcyon days back in 2003 and 2006. The significant OECD pivot away from excellence to equity caught me napping, and I completely missed the significance of moving (2012 to 2015) from pencil-and-paper to computer-based tests.

Some solace can be found in the erroneous forecasts of others. The recent Alberta Teachers’ Association (ATA) “Brace Yourself” memo, with its critique of standardized testing and assessment, seemed to forecast a calamitous drop in Alberta student performance levels. It only happened in Mathematics.

Advocates of the ‘Well-Being’ curriculum and broader assessment measures, championed by Toronto’s People for Education, will likely be temporarily thrown off-stride by the OECD’s new-found commitment to assessing equity in education. It will be harder now to paint PISA as evil and to discredit PISA results based upon such a narrow range of skills in reading, math and science.

The OECD’s “Country Profile” of Canada is worth studying carefully because it aggregates data from 2003 to 2015, clarifies the trends, and shows how Canadian students continue to struggle in mathematics far more than in reading and science.

Canadian students may have finished 12th in Mathematics with a 516 aggregate score, but the trend line continues to decline, down from 532 in 2003. Digging deeper, we see that students in only two provinces, Quebec (544) and BC (522), actually exceeded the national mean score. Canada’s former leader in Mathematics performance, Alberta, continued its downward spiral from the lofty heights of 549 (2003) to 511 (2015).

Since Ontario students’ provincial mathematics scores are declining, experts will be poring over the latest PISA results to see how bad it is in relation to the world’s top-performing systems. No surprises here: Ontario students scored 509, finishing 4th in Canada, down from 530 on PISA 2003. Excellence will require a significant change in direction.

The biggest discovery in post-2015 PISA analysis was the positive link between explicit instruction and higher achievement in the 2015 core assessment subject, science. The most important factor linked with high performance remains SES (socio-economic status), but teacher-guided instruction ranked close behind, and students taught with minimal direction, in inquiry or project-based classes, simply performed less well on the global test.

The results of the 15-year-olds are largely determined over 10 years of schooling, and are not necessarily the direct consequence of the latest curriculum fad such as “discovery math.”

It’s better to look deeper into what this cohort of students were learning when they first entered the school system, in the mid-1990s. In the case of Canadian students, for example, student-centred learning was at its height, and the country was just awakening to the value of testing to determine what students were actually learning in class.

Where the student results are outstanding, such as Singapore and Estonia, it is not solely attributable to the excellence of teaching or the rigour of the math and science curriculum.

We know from the “tutoring explosion” in Canada’s major cities that the prevalence of private tuition classes after school is a contributing factor, and may explain the current advantage still enjoyed in mathematics by Pacific Rim students.

Children of Chinese heritage in Australia actually outperformed students in Shanghai on the 2012 PISA test, and we need to explore whether that may be true for their counterparts in Greater Vancouver. The so-called “Shanghai Effect” may be attributed as much to “tiger mothers” as it is to the quality of classroom instruction.

Whether Canada and Canadians continue to exhibit high PISA self-esteem or have simply plateaued does not matter as much as what we glean over the next few years from studying best international practice in teaching, learning, and assessment.

Surveying PISA student results, this much is clear: standing still is not an option in view of the profound changes that are taking place in life, work, and society.

 


With the release of the 2015 Program for International Student Assessment (PISA) on the horizon, the Organization for Economic Cooperation and Development (OECD) Education Office has stoked up the “Math Wars” with a new study. While the October 2016 report examines a number of key questions related to teaching Mathematics, OECD Education chose to highlight its findings on “memorization,” presumably to dispel perceptions about “classroom drill” and its use in various countries.

The OECD, which administers the PISA assessments every three years to 15-year-olds around the globe, periodically publishes reports looking at slices of the data. Its October 2016 report, Ten Questions for Mathematics Teachers and How PISA Can Help Answer Them, based upon the most recent 2012 results, tends to zero in on “memorization” and attempts to show that high-performing territories, like Shanghai-China, Korea, and Chinese-Taipei, rely less on memory work than lower-performing places like Ireland, the UK, and Australia.

American Mathematics educator Jo Boaler, renowned for “Creative Math,” jumped upon the PISA study to buttress her case against “memorization” in elementary classrooms. In a highly contentious November 2016 Scientific American article, Boaler and co-author Pablo Zoido contended that PISA findings confirmed that “memorizers turned out to be the lowest achievers, and countries with high numbers of them—the U.S. was in the top third—also had the highest proportion of teens doing poorly on the PISA math assessment.” Students who relied on memorization, they further argued, were “approximately half a year behind students who used relational and self-monitoring strategies” such as those in Japan and France.

Australian education researcher Greg Ashman took a closer look at the PISA study and called into question such hasty interpretations of the findings. Figure 1.2: How teachers teach and students learn caught his eye, and he went to work interrogating the survey responses on “memorization” and the axes used to present the data. The PISA analysis, he discovered, also did not include an assessment of how teaching methods might be correlated with PISA scores in Mathematics. Manitoba Mathematics professor Robert Craigen spotted a giant hole in the PISA analysis and noted that the “memorization” data related to the “at-home strategies of students,” not their instructional experiences, and may well indicate that students who are improperly instructed in class resort to memorization on their own.

What would it look like, Ashman wondered, if the PISA report had plotted how students performed in relation to the preferred methods used on the continuum from “more student-oriented instruction” to “more teacher-directed instruction”? Breaking down all the data, he generated a new graph that showed how teaching method correlated with math performance, finding a “positive correlation” between teacher-directed instruction and higher Math scores. “Correlations,” he duly noted, “do not necessarily imply causal relationships,” but a higher ratio of teacher-directed activity to student orientation clearly went hand-in-hand with better results.

Jumping on the latest research to seek justification for her own ‘meta-beliefs’ is normal practice for Boaler and her “Discovery Math” education disciples. After junking, once again, the ‘strawmen’ of traditional Mathematics — “rote memorization” and “drill” — Boaler and Zoido wax philosophical and poetic: “If American classrooms begin to present the subject as one of open, visual, creative inquiry, accompanied by growth-mindset messages, more students will engage with math’s real beauty. PISA scores would rise, and, more important, our society could better tap the unlimited mathematical potential of our children.” That’s definitely stretching the evidence far beyond the breaking point.

The “Math Wars” do generate what University of Virginia psychologist Daniel T. Willingham has aptly described as “a fair amount of caricature.” The recent Boaler-Zoido Scientific American article is a prime example of that tendency. Most serious scholars of cognition tend to support the common-ground position that learning mathematics requires three distinct types of knowledge: factual, procedural and conceptual. “Factual knowledge,” Willingham points out, “includes having already in memory the answers to a small set of problems of addition, subtraction, multiplication, and division.” While some students can learn Mathematics through invented strategies, it cannot be relied upon for all children. On the other hand, knowledge of procedures is no guarantee of conceptual understanding, particularly when it comes to complexities such as dividing fractions. It’s clear to most sensible observers that knowing math facts, procedures and concepts is what counts when it comes to mastering mathematics.

Simply ignoring research that contradicts your ‘meta-beliefs’ is common on the Math Education battlefield. Recent academic research on “memorization” that contradicts Boaler and her entourage is simply ignored, even that emanating from her own university. Two years ago, Shaozheng Qin and Vinod Menon of Stanford University Medical School led a team that provided scientifically-validated evidence that “rote memorization” plays a critical role in building the capacity to solve complex calculations.

Based upon a clinical study of 68 children, aged 7 to 9, followed over the course of one year, Qin, Menon and their colleagues found in their 2014 Nature Neuroscience study that memorizing the answers to simple math problems, such as basic addition or multiplication, forms a key step in a child’s cognitive development, helping bridge the gap between counting on fingers and tackling more complex calculations. Memorizing the basics, they concluded, is the gateway to activating the “hippocampus,” a key brain structure for memory, which gradually expands in “overlapping waves” to accommodate the greater demands of more complex math.

The whole debate over memorization is suspect because of the imprecision in the use of the term. Practice, drilling, and memorization are not the same, even though they get conflated in Jo Boaler’s work and in much of the current Mathematics Education literature. Back in July 2012, D.T. Willingham made this crucial point and provided some valuable points of distinction. “Practice,” as defined by Anders Ericsson, involves performing tasks and receiving feedback on that performance, executed for the purpose of improvement. “Drilling” connotes repetition for the purpose of achieving automaticity, which, at its worst, amounts to mindless repetition or parroting. “Memorization,” on the other hand, relates to the goal of something ending up in long-term memory with ready access, but does not imply using any particular method to achieve that goal.

Memorization has become a dirty word in teaching and learning laden with so much baggage to the point where it conjures up mental pictures of “drill and kill” in the classroom. The 2016 PISA Study appears to perpetuate such stereotyping and, worst of all, completely misses the “positive correlation” between teacher-directed or explicit instruction and better performance in mathematics.

Why does the PISA Study tend to associate memorization in home-study settings with the drudgery of drill in the classroom?  To what extent does the PISA Study on Mathematics Teaching support the claims made by Jo Boaler and her ‘Discovery Math’ advocates? When it comes to assessing the most effective teaching methods, why did the PISA researchers essentially take a pass? 

 
