
Archive for the ‘Artificial Intelligence (AI)’ Category

One of the world's most infamous digital visionaries, Marc Prensky, specializes in spreading educational future shock. Fresh off the plane from California, the education technology guru who coined the phrase "digital natives" did it again in Fredericton, the quiet provincial capital of New Brunswick. Two hundred delegates attending the N.B. Education Summit (October 16-18, 2019) were visibly stunned by his latest presentation, which dropped what he described as a series of "bombs" in his ongoing campaign of creative disruption.

His introductory talk, "From giving kids content to kids fixing real world problems," featured a series of real zingers. "The goal of education," Prensky proclaimed, "is not to learn, it is to accomplish things." "Doing something at the margins will not work" because we have to "leapfrog over the present to reach the future." "When you look out at a classroom, you see networked kids." Instead of teaching content or developing work-ready skills, we should be preparing students to become "symbiotic human hybrids" in a near-future world.

After two breakfasts with Marc Prensky, totaling more than two hours face-to-face, a few things became crystal clear. The wild success of his obscure 2001 article in On the Horizon on "digital natives" and "digital immigrants" totally surprised him. He is undaunted by the tenacious critics of the research basis of his claims, and he's perfectly comfortable in his role as education's agent provocateur.

Prensky burst onto the education scene nearly twenty years ago. His seminal article was discovered by an Australian Gifted Education association in Tasmania, and it exploded from there. Seven books followed, including Digital Game-Based Learning (2001), From Digital Natives to Digital Wisdom (2012), and Education to Better Their World (2016).

While riding the wave, he founded his Global Future Foundation based in Palo Alto, California, not far from the home of TED Talks guru Sir Ken Robinson. He is now full-time on the speaking circuit and freely admits that he seeks to “drop a few bombs” in his talks before education audiences. Even though he writes books for a living, he confessed to me that he hasn’t “used a library in years.”

Assembled delegates at the recent Summit were zapped by Prensky in a session designed as a wake-up call for educators. About one-third of the delegates were classroom teachers and they, in particular, greeted his somewhat outlandish claims with barely-concealed skepticism.

Listening to students is good practice, but idealizing today's kids doesn't wash with most front-line practitioners. How should we prepare the next generation? "We treat our kids like PETS. Go here, do that… We don't have to train them to follow us. Let's treat them as CAPABLE PEOPLE." Such assumptions about what's happening in classrooms don't go over well with professionals who, day in and day out, model student-focused learning and respect students too much to ever act that way, especially teachers struggling to reach students in today's complex and demanding classroom environments.

Striving for higher student performance standards is not on Prensky’s radar. “Academics have hijacked K-12 education,” he stated. Nor is improving provincial test scores. “We’re not looking to raise PISA scores. That test was designed by engineers – for engineers.” There’s no need to teach content when information is a Google click away, in Prensky’s view.  “All the old stuff is online, so the goal of education is now to equip kids with the power to affect their world.” 

Prensky has survived waves of criticism over the years and remains undaunted by the periodic salvos. Since he invented the term "digital natives" and became their champion, critics have raised six main points about his evolving theory of preparing kids for future education:

  1. The Generational Divide: The generational differences between "digital natives" and pre-iPod "digital immigrants" are greatly exaggerated because digital access and fluency are more heavily influenced by factors of gender, race and socio-economic status. Millennials may use 'social media' technology without mastering the intricacies of digital learning or utilizing its full potential (Reeves 2008, Helsper and Eynon 2009, Frawley 2017).
  2. Video-Game-Based Learning: Unbridled advocacy of video-game-based learning tends to ignore its negative impacts upon teens, including the glorification of violence, video game addiction, and the prevalence of "digital deprivation" as teens retreat into their private worlds (Alliance for Childhood 2004).
  3. Brain Change Theory: Claims that "digital natives" think and process information differently are based upon flimsy evidence and trace back to work by Dr. Bruce Perry, a Senior Fellow at the Child Trauma Academy in Houston, TX, that actually relates more to how fear and trauma affect the brain. This is often cited as an example of "arcade scholarship": cherry-picking evidence and applying it to support one's own contentions (Mackenzie 2007).
  4. Stereotyping of Generations: Young people do not fit neatly into his stereotype of "digital natives" because the younger generation (youth 8-18) is far more complex in its acceptance and use of technology, ranging from light to heavy users of digital tools. Boys who play video games are not representative of the whole generation (Kaiser Family Foundation 2005, Helsper and Eynon 2009).
  5. Disempowering of Teachers: Changing methodology and curriculum to please children may help to advance student engagement, but it denigrates "legacy learning" and reduces teachers to mere facilitators of technology programs and applications. Dismissing "content knowledge" is unwise, especially when the proposed alternative, process learning, is so vacuous (Mackenzie 2007).
  6. Digital Deprivation: Expanded and excessive use of video games and digital toys can foster isolation rather than social connection, which can be harmful to children and teens. Prime examples of those adverse effects are exposure to violence, warped social values, and ethical/moral miseducation (Turkle 1984, Alliance for Childhood 2004).

Most critical assessments of Marc Prensky's case for pursuing "digital wisdom" call into question its efficacy and even its existence. "Digital technology can be used to make us not just smarter but wiser" is his more recent contention. Knowing how to make things is "know-how," but it is only one type of knowledge and hardly a complete picture of what constitutes human wisdom.

Combining technology with human judgement has advanced through AI (artificial intelligence), but it's probably foolhardy to call it "digital wisdom." It implies, to be frank, that only things that can be quantified and turned into algorithms have value, and it denigrates the wisdom of the ages. Championing the inventive mind is fine, but that can also lead to blind acceptance of the calculating, self-interested, and socially-unconscious mind. Where humanity perishes, so do the foundations of civilizations.

Why does digital evangelist Marc Prensky stir up such controversy in the education world? Where's the evidence to support his case for the existence of "digital nativism"? Does "digital wisdom" exist, or is it just a new term for useful knowledge or "know-how"? Should teaching knowledge to students be completely abandoned in the digital education future?


A recent headline in the New Scientist caught the eye of University College London Professor Rose Luckin, widely regarded as the "Dr. Who of AI in Education." It read: "AI achieves its best mark ever on a set of English exam questions." The machine was well on its way to mastering the knowledge-based curriculum tested on examinations. What was thrilling to Dr. Luckin might well be a wake-up call for teachers and educators everywhere.

Artificial Intelligence (AI) is now driving automation in the workplace and the "Fourth Industrial Revolution" is dawning. How AI will impact and possibly transform education is now emerging as a major concern for front-line teachers, technology skeptics, and informed parents. A recent public lecture by Rose Luckin, based upon her new book Machine Learning and Human Intelligence, provided not only a cutting-edge summary of recent developments, but a chilling reminder of the potential unintended consequences for teachers.

AI refers to "technology that is capable of actions and behaviours that require intelligence when done by humans." It is no longer the stuff of science fiction and is popping up everywhere, from voice-activated digital assistants in telephones to automatic passport gates in airports to navigation apps that guide our driving. It's creeping into our lives in subtle and virtually undetectable ways.

AI has not been an overnight success. It originated in the summer of 1956, some 63 years ago, at Dartmouth College in New Hampshire as a summer research project undertaken by ten ambitious scientists. The initial project was focused on AI and its educational potential. The pioneers worked from this premise: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Flash forward to today, and that premise is closer to actual realization.

Dr. Luckin has taken up that challenge and has spent two decades developing "Colin," a robot teaching assistant intended to lighten teachers' workloads. Her creation is software-based and assists teachers with organizing starter activities, collating daily student performance records, assessing the mental state of students, and gauging how well a learner is engaging with lessons.

Scary scenarios are emerging, fueled by a few leading thinkers and technology skeptics. Tesla CEO Elon Musk once warned that AI posed an "existential threat" to humanity and that humans may need to merge with machines to avoid becoming "house cats" to artificially intelligent robots. Theoretical physicist Stephen Hawking forecast that AI will "either be the best thing or the worst thing for humanity." There's no need for immediate panic: current AI technology is still quite limited, remaining mechanically algorithmic and programmed to act upon pattern recognition.
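What does "mechanically algorithmic" look like in practice? Here is a minimal illustrative sketch in Python (mine, not drawn from any system mentioned here) of bare pattern recognition: a nearest-neighbour matcher that labels a new input by copying the label of the most similar stored example, with no understanding anywhere in the loop.

```python
# A toy 1-nearest-neighbour classifier over invented feature vectors --
# pattern matching, not comprehension.
import math

# Hypothetical training examples: (feature vector, label) pairs.
EXAMPLES = [
    ((0.9, 0.1), "cat"),
    ((0.2, 0.8), "dog"),
    ((0.8, 0.3), "cat"),
]

def classify(features):
    # No reasoning involved: just the distance to stored patterns.
    nearest = min(EXAMPLES, key=lambda ex: math.dist(features, ex[0]))
    return nearest[1]

print(classify((0.7, 0.2)))  # -> "cat", purely because it resembles stored "cat" examples
```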

One very astute analyst for buZZrobot, Jay Lynch, has identified the potential dangers in the educational domain:

Measuring the Wrong Things

AI systems tend to gather the data that is easiest to collect rather than the data that is educationally meaningful. In the absence of directly measured student learning, AI relies upon proxies for learning such as student test scores, school grades, or self-reported learning gains. This exemplifies the problem of "garbage in, garbage out."
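To see how the proxy problem plays out, consider this toy simulation (my own invented numbers, nothing measured from real students): test scores are driven partly by test-taking savvy rather than learning, and a model fitted to that proxy duly learns to reward the savvy.

```python
# A toy demonstration of "garbage in, garbage out" with a proxy label.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
learning = rng.normal(size=n)          # unobserved: actual learning gains
savvy = rng.normal(size=n)             # unobserved: test-taking savvy
score = 0.3 * learning + 0.7 * savvy   # observable proxy label (test score)

# Two features a tutoring system might plausibly log:
practice = learning + rng.normal(scale=0.5, size=n)  # tracks real learning
drill = savvy + rng.normal(scale=0.5, size=n)        # tracks test savvy

# Least-squares fit against the proxy, not against learning itself.
X = np.column_stack([practice, drill, np.ones(n)])
w, *_ = np.linalg.lstsq(X, score, rcond=None)
print("weights (practice, drill):", w[0].round(2), w[1].round(2))
# The heavier weight lands on `drill`, so a system optimizing this model
# steers instruction toward test savvy rather than learning.
```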

Perpetuating Bad Ways to Teach

Many AIfE (AI for Education) algorithms are based upon data from large-scale learning assessments and lack an appreciation of, and input from, actual teachers and learning scientists with a grounding in learning theory. AI development teams tend to lack relevant knowledge in the science of learning and instruction. One glaring example was IBM's Watson Element for Educators, which was based entirely upon now-discredited "learning styles" theory and gave skewed advice for improving instruction.
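The underlying failure mode is easy to sketch. The hypothetical snippet below bears no relation to Watson Element's actual code; it simply shows how a discredited premise, once encoded as a field in the data model, constrains every recommendation the system can make.

```python
# A hypothetical recommender with "learning styles" baked into its data model.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    name: str
    learning_style: str  # the discredited premise, stored as if it were fact

CONTENT_BY_STYLE = {
    "visual": ["diagram pack", "video lecture"],
    "auditory": ["podcast", "read-aloud notes"],
    "kinesthetic": ["lab activity", "manipulatives kit"],
}

def recommend(profile: LearnerProfile) -> list[str]:
    # Every learner is filtered through the style label, so the system can
    # only ever act on -- and appear to confirm -- the theory it was built on.
    return CONTENT_BY_STYLE[profile.learning_style]

print(recommend(LearnerProfile("sample learner", "visual")))
```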

Giving Priority to Adaptability rather than Quality

Personalizing learning is the prevailing ideology in the IT sector and it is most evident in AI software and hardware. Meeting the needs of each learner is the priority, and the technology is designed to deliver the 'right' content at the 'right' time. The assumption that the quality of that content is sound is false; in fact, much of it is awful. Quality of content deserves to be prioritized, and that requires more direct teacher input and a better grasp of the science of learning.
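A toy selector makes the point. In this hypothetical sketch (my illustration, not any vendor's algorithm), "personalization" means nothing more than matching item difficulty to estimated ability, so a shoddy drill sheet beats a well-designed lesson whenever its difficulty happens to fit.

```python
# A hypothetical "adaptive" selector: it optimizes the difficulty match
# ("right content at the right time") and never consults quality.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    difficulty: float  # 0..1, how hard the item is
    quality: float     # 0..1, pedagogical quality -- ignored below

CATALOG = [
    Item("Well-designed fractions lesson", difficulty=0.8, quality=0.9),
    Item("Auto-generated drill sheet", difficulty=0.5, quality=0.2),
]

def pick_next(ability: float) -> Item:
    # Pure adaptivity: the closest difficulty wins; quality never enters in.
    return min(CATALOG, key=lambda item: abs(item.difficulty - ability))

print(pick_next(ability=0.5).title)  # -> "Auto-generated drill sheet"
```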

Replacing Humans with Intelligent Agents

The primary impact of AI is to remove teachers from the learning process — substituting “intelligent agents” for actual human beings. Defenders claim that the goal is not to supplant teachers but rather to “automate routine tasks” and to generate insights to enable teachers to adapt their teaching to make lessons more effective.  AI’s purveyors seem blind to the fact that teaching is a “caring profession,” particularly in the early grades.

American education technology critic Audrey Watters is one of the most influential skeptics and she has expressed alarm over the potential unintended consequences: "We should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students' essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to 'intelligent tutoring systems,' 'adaptive learning systems,' or whatever the latest description may be? What sorts of signals are we sending students?" The implicit and disturbing answer: teachers as professionals are virtually interchangeable with robots.

Will teachers and robots come to cohabit tomorrow’s classrooms? How will teaching be impacted by the capabilities of future AI technologies? Without human contact and feedback, will student motivation become a problem in education?  Will AI ever be able to engage students in critical thinking or explore the socio-emotional domain of learning? Who will be there in the classroom to encourage and emotionally support students confronted with challenging academic tasks?
