Archive for the ‘Artificial Intelligence (AI)’ Category

Artificial intelligence has finally arrived in education in a big way, in the form of OpenAI’s ChatGPT. The intelligent chatbot, which surfaced in November 2022, can spin well-written essays and even solve mathematical equations in seconds. It could well be what is known as a ‘game changer’ in education, particularly in the higher grades, colleges and universities.

Teachers everywhere are awakening to a new reality: assignments requiring regurgitation are fast becoming obsolete, and classroom practitioners will have to up their game to stay one step ahead of the robots. That’s a tall order for school systems accustomed to moving at a glacial pace. It’s also a lot to expect of teachers in the wake of the pandemic education setback.

The initial wave of reaction to ChatGPT hit like a tsunami, and most of it sounded apocalyptic. San Francisco high school teacher Daniel Herman’s feature article in The Atlantic, “The End of High School English” (December 9, 2022), predicted the worst. Teenagers, he wrote, have “always found ways around the hard work of actual learning,” from Cliff’s Notes in the 1950s, to “No Fear Shakespeare” in the 1990s, to YouTube videos and analysis in recent years. For most, writing an essay, at home or in school, was the moment of reckoning. You were “on your own with a blank page, staring down a blinking cursor, the essay waiting to be written.”

The arrival of ChatGPT, a tech marvel that can generate sophisticated written answers to virtually any prompt, customized to various writing styles, will likely further erode school writing programs. It may also, in some cases, signal the end of writing assignments altogether, and with them the status of writing as a critical skill, a recognized measure of intelligence, and a teachable craft.

One of North America’s leading authorities on literacy and cognitive science, Natalie Wexler, responded to such dire predictions with a short essay in Forbes (December 21, 2022) in defense of continuing to teach writing at all levels. While ChatGPT may be able to produce good essays, she argued, that does not make teaching writing obsolete.

Millions of students at all levels, including post-secondary education, continue to struggle with their writing. In the United States, Wexler reminded us, only 27 per cent of Grade 8 and Grade 12 students performed at the proficient level or above in recent national assessments. In other words, most students lack proficiency in expressing themselves in writing.

Surveying Canadian student writing assessment scores, based upon incomplete data from province to province, suggests that our students perform only marginally better. One April 2019 York University study of Ontario undergraduate university students found that some 41 per cent of 2,230 students self-reported being “at risk in academic settings because of limited levels of basic skills,” and some 16 per cent indicated that they were totally lacking in the required skills, particularly in writing, test taking, and academic study skills.

Writing involves far more than acquiring a skill. Here Natalie Wexler explains why: “When done well, [writing] isn’t just a matter of displaying what you already know—although it’s crucial to have some pre-existing knowledge of the topic you’re writing about. The process of writing itself can and should deepen that knowledge and possibly spark new insights. So when students use ChatGPT, they’re not just cheating whatever institution is giving them credit for work they haven’t done. They’re also cheating themselves.”

Writing also has significant related benefits. When students write about something they are studying – in any subject – it provides ‘retrieval practice’ and improves their retention of the material. Building their store of knowledge in long-term memory makes it, in turn, easier to acquire more knowledge. “Prior knowledge about a topic is like mental velcro,” Marilyn Jager Adams reminded us. “The relevant knowledge gives the words of the text places to stick and make sense, thereby supporting comprehension…”

Explicit writing instruction, beginning with the sentence, also helps students understand the texts they have been asked to read. “The syntax of written language is more complex than that of spoken language, with constructions like subordinate clauses and the passive voice,” writes Natalie Wexler. “Many students don’t become familiar with that syntax just through reading. But when they learn to use those complex constructions in their writing, they’re in a much better position to understand them when they encounter them in text.”

An initial alarm about ChatGPT was sounded in December 2022 by South Carolina college professor Darren Hick, who caught one of his students cheating by using the chatbot to write an essay for his philosophy class. The essay, on David Hume and the paradox of horror, was found to be machine-generated, and Hick imposed a sanction for plagiarism. It was a test case for what is sure to follow from the Winter Term of 2023 onward. Reacting to reports of such cases, New York City public schools, America’s largest school system, decided to “block” access to ChatGPT in all of its schools.

The incursion of ChatGPT will likely have disastrous consequences if teachers are deterred from teaching or assigning writing in their classes. Coming out of the pandemic, we now have a serious literacy crisis here in Canada, varying in severity from province to province. With so much emphasis on correcting reading deficits, student writing does not get the attention it deserves. The last thing we need is a technological innovation that makes it easier for students to progress without actually mastering writing.

Will classroom teachers be up to combating the invasion of the writing bot? Will school systems attempt to block access to the technological marvel? Will we look for technological patches like Turnitin.com, reprogrammed to detect, identify and counteract plagiarism, the outward expression of breaches in academic integrity? In the initial phase, will it come down to a battle between rival bots, as sketched below?
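
What might such a “rival bot” look like? Detection tools generally hunt for statistical fingerprints of machine-generated prose, such as unusually uniform sentence lengths and limited vocabulary variety. The Python sketch below is a deliberately naive illustration of that idea, not a description of how Turnitin or any real detector works; the signals and thresholds are invented for demonstration.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.
    Human prose tends to vary more than machine prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude
    measure of vocabulary variety."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_generated(text: str) -> bool:
    # Flag if either signal looks suspicious. Threshold values are
    # illustrative guesses, not calibrated ones.
    return burstiness(text) < 4.0 or type_token_ratio(text) < 0.5

essay = ("The novel explores memory. The plot follows two friends. "
         "The ending remains ambiguous. The themes feel universal.")
print(looks_machine_generated(essay))  # True: suspiciously uniform sentences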

Read Full Post »

One of the world’s most infamous digital visionaries, Marc Prensky, specializes in spreading educational future shock.  Fresh off the plane from California, the education technology guru who coined the phrase “digital natives” did it again in Fredericton, the quiet provincial capital of New Brunswick.  Two hundred delegates attending the N.B. Education Summit (October 16-18, 2019) were visibly stunned by his latest presentation which dropped what he described as a series of “bombs” in what has become his ongoing campaign of creative disruption.

His introductory talk, “From giving kids content to kids fixing real world problems,” featured a series of real zingers. “The goal of education,” Prensky proclaimed, “is not to learn, it is to accomplish things.” “Doing something at the margins will not work” because we have to “leapfrog over the present to reach the future.” “When you look out at a classroom, you see networked kids.” Instead of teaching something or developing work-ready skills, we should be preparing students to become “symbiotic human hybrids” in a near-future world.

Having spent two breakfasts, totaling more than two hours, face-to-face with Marc Prensky, I came away with a few things crystal clear. The wild success of his obscure 2001 article in On the Horizon on “digital natives” and “digital immigrants” totally surprised him. He is undaunted by the tenacious critics of the research basis of his claims, and he’s perfectly comfortable in his role as education’s agent provocateur.

Prensky burst on the education scene nearly twenty years ago. His seminal article was discovered by an Australian Gifted Education association in Tasmania, and it exploded from there. Seven books followed, including Digital Game-Based Learning (2001), From Digital Natives to Digital Wisdom (2012), and Education to Better Their World (2016).

While riding the wave, he founded his Global Future Foundation based in Palo Alto, California, not far from the home of TED Talks guru Sir Ken Robinson. He is now full-time on the speaking circuit and freely admits that he seeks to “drop a few bombs” in his talks before education audiences. Even though he writes books for a living, he confessed to me that he hasn’t “used a library in years.”

Assembled delegates at the recent Summit were zapped by Prensky in a session designed as a wake-up call for educators. About one-third of the delegates were classroom teachers and they, in particular, greeted his somewhat outlandish claims with barely-concealed skepticism.

Listening to students is good practice, but idealizing today’s kids doesn’t wash with most front-line practitioners. How should we prepare the next generation? “We treat our kids like PETS. Go here, do that… We don’t have to train them to follow us. Let’s treat them as CAPABLE PEOPLE.” Making such assumptions about what’s happening in classrooms doesn’t go over with professionals who, day in and day out, model student-focused learning and respect students far too much to ever act that way. That is especially true for teachers struggling to reach students in today’s complex and demanding classroom environments.

Striving for higher student performance standards is not on Prensky’s radar. “Academics have hijacked K-12 education,” he stated. Nor is improving provincial test scores. “We’re not looking to raise PISA scores. That test was designed by engineers – for engineers.” There’s no need to teach content when information is a Google click away, in Prensky’s view.  “All the old stuff is online, so the goal of education is now to equip kids with the power to affect their world.” 

Prensky has survived waves of criticism over the years and remains undaunted by the periodic salvos. Since he invented the term “digital natives” and became their champion, six main points of criticism have been raised about his evolving theory of preparing kids for the future of education:

  1. The Generational Divide: The generational differences between “digital natives” and pre-iPod “digital immigrants” are greatly exaggerated because digital access and fluency are more heavily influenced by factors of gender, race and socio-economic status. Millennials may use ‘social media’ technology without mastering the intricacies of digital learning or utilizing its full potential (Reeves 2008, Helsper and Eynon 2009, Frawley 2017).
  2. Video Game-Based Learning: Unbridled advocacy of video game-based learning tends to ignore its negative impacts on teens, including the glorification of violence, video-game addiction, and the prevalence of “digital deprivation” as teens retreat into their private worlds (Alliance for Childhood 2004).
  3. Brain Change Theory: Claims that “digital natives” think and process information differently are based upon flimsy evidence, tracing back to work by Dr. Bruce Perry, a Senior Fellow at the Child Trauma Academy in Houston, TX, that actually relates more to how fear and trauma affect the brain. This is often cited as an example of “arcade scholarship”: cherry-picking evidence and applying it to support one’s own contentions (Mackenzie 2007).
  4. Stereotyping of Generations: Young people do not fit neatly into his stereotype of “digital natives” because the younger generation (youth 8-18) is far more complex in its acceptance and use of technology, ranging from light to heavy users of digital technology. Boys who play video games are not representative of the whole generation (Kaiser Family Foundation 2005, Helsper and Eynon 2009).
  5. Disempowering of Teachers: Changing methodology and curriculum to please children may advance student engagement, but it denigrates “legacy learning” and reduces teachers to mere facilitators of technology programs and applications. Dismissing “content knowledge” is unwise, especially when the proposed alternative, process learning, is so vacuous (Mackenzie 2007).
  6. Digital Deprivation: Expanded and excessive use of video games and digital toys can foster isolation rather than social connection, which can be harmful to children and teens. Prime examples of those adverse effects are exposure to violence, warped social values, and ethical/moral miseducation (Turkle 1984, Alliance for Childhood 2004).

Most critical assessments of Marc Prensky’s case for pursuing “digital wisdom” call into question its efficacy and even its existence. “Digital technology can be used to make us not just smarter but wiser” is his more recent contention. Knowing how to make things is “know-how,” but it is only one type of knowledge and hardly a complete picture of what constitutes human wisdom.

AI (artificial intelligence) has advanced the combination of technology with human judgement, but it’s probably foolhardy to call the result “digital wisdom.” The term implies, to be frank, that only things that can be quantified and turned into algorithms have value, and it denigrates the wisdom of the ages. Championing the inventive mind is fine, but that can also lead to blind acceptance of the calculating, self-interested, and socially-unconscious mind. Where humanity perishes, so do the foundations of civilizations.

Why does digital evangelist Marc Prensky stir up such controversy in the education world? Where’s the evidence to support his case for the existence of “digital nativism”? Does “digital wisdom” exist, or is it just a new term for useful knowledge or “know-how”? Should teaching knowledge to students be completely abandoned in the digital education future?

Read Full Post »

A recent headline in New Scientist caught the eye of University College London Professor Rose Luckin, widely regarded as the “Dr. Who of AI in Education.” It read: “AI achieves its best mark ever on a set of English exam questions.” The machine was well on its way to mastering the knowledge-based curriculum tested on examinations. What was thrilling to Dr. Luckin might well be a wake-up call for teachers and educators everywhere.

Artificial Intelligence (AI) is now driving automation in the workplace, and the “Fourth Industrial Revolution” is dawning. How AI will impact and possibly transform education is now emerging as a major concern for front-line teachers, technology skeptics, and informed parents. A recent public lecture by Rose Luckin, based upon her new book Machine Learning and Human Intelligence, provided not only a cutting-edge summary of recent developments but also a chilling reminder of the potential unintended consequences for teachers.

AI refers to “technology that is capable of actions and behaviours that require intelligence when done by humans.” It is no longer the stuff of science fiction and is popping up everywhere, from voice-activated digital assistants in telephones, to automatic passport gates in airports, to navigation apps that guide us as we drive our cars. It’s creeping into our lives in subtle and virtually undetectable ways.

AI has not been an overnight success. It originated in the summer of 1956, some 63 years ago, at Dartmouth College in New Hampshire, as a summer project undertaken by ten ambitious scientists. The initial project was focused on AI and its educational potential. The pioneers worked from this premise: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Flash forward to today, and that premise is closer to actual realization.

Dr. Luckin has taken up that challenge and has been working for two decades to develop “Colin,” a robot teaching assistant designed to help lighten teachers’ workloads. Her creation is software-based and assists teachers with organizing starter activities, collating daily student performance records, gauging the mental state of students, and assessing how well a learner is engaging with lessons.
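
The details of Colin’s design are not public at this level, but “collating daily student performance records” is, at bottom, a straightforward aggregation task. Here is a minimal Python sketch of what that step might look like; the field names, the flagging rule, and the 0.4 cutoff are all assumptions for illustration, not features of Luckin’s actual system.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LessonRecord:
    student: str
    date: str          # ISO date, e.g. "2019-10-16"
    quiz_score: float  # fraction correct, 0.0 to 1.0
    engagement: float  # share of tasks attempted, 0.0 to 1.0

def collate_day(records: list[LessonRecord]) -> dict[str, dict[str, float]]:
    """Roll one day's records up into per-student averages."""
    by_student: dict[str, list[LessonRecord]] = defaultdict(list)
    for rec in records:
        by_student[rec.student].append(rec)
    return {
        student: {
            "avg_quiz": sum(r.quiz_score for r in recs) / len(recs),
            "avg_engagement": sum(r.engagement for r in recs) / len(recs),
        }
        for student, recs in by_student.items()
    }

def needs_follow_up(summary: dict[str, dict[str, float]],
                    cutoff: float = 0.4) -> list[str]:
    """Flag students whose average engagement falls below a cutoff.
    The 0.4 value is an arbitrary illustration, not a real benchmark."""
    return [s for s, stats in summary.items()
            if stats["avg_engagement"] < cutoff]

day = [
    LessonRecord("Ana", "2019-10-16", quiz_score=0.9, engagement=0.8),
    LessonRecord("Ben", "2019-10-16", quiz_score=0.6, engagement=0.3),
]
print(needs_follow_up(collate_day(day)))  # ['Ben']
```

Even in this toy form, the design choice is visible: the software summarizes and flags, while the judgement about what to do with a flagged student stays with the teacher.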

Scary scenarios are emerging, fueled by a few leading thinkers and technology skeptics. Tesla CEO Elon Musk once warned that AI posed an “existential threat” to humanity and that humans may need to merge with machines to avoid becoming “house cats” to artificially intelligent robots. The late theoretical physicist Stephen Hawking forecast that AI will “either be the best thing or the worst thing for humanity.” There’s no need for immediate panic: current AI technology is still quite limited, remaining mechanically algorithmic and programmed to act upon pattern recognition, as the toy example below illustrates.
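
To make “mechanically algorithmic” concrete, consider a toy classifier (a hypothetical example, not any production system) that sorts prompts purely by word overlap with stored samples. It recognizes surface patterns and nothing else; a prompt outside its patterns simply defeats it.

```python
# A toy "AI": labels a prompt by counting word overlap with stored examples.
# It matches surface patterns; it has no understanding of meaning.
EXAMPLES = {
    "greeting": ["hello there", "good morning", "hi how are you"],
    "homework": ["solve this equation", "write an essay on hamlet"],
}

def classify(prompt: str) -> str:
    words = set(prompt.lower().split())
    best_label, best_overlap = "unknown", 0
    for label, samples in EXAMPLES.items():
        for sample in samples:
            overlap = len(words & set(sample.split()))
            if overlap > best_overlap:
                best_label, best_overlap = label, overlap
    return best_label

print(classify("good morning"))            # greeting
print(classify("write an essay on hume"))  # homework: four words overlap
print(classify("why do we dream"))         # unknown: no stored pattern fits
```

Modern systems are vastly more sophisticated, but the underlying principle, matching new input against previously seen patterns, is the same.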

One very astute analyst for buZZrobot, Jay Lynch, has identified the potential dangers in the educational domain:

Measuring the Wrong Things

AI systems tend to gather the data that is easiest to collect rather than the data that is educationally meaningful. In the absence of directly measured student learning, AI relies upon proxies for learning such as student test scores, school grades, or self-reported learning gains. This exemplifies the problem of “garbage in, garbage out.”

Perpetuating Bad Ways to Teach

Many AI-for-education algorithms are based upon data from large-scale learning assessments and lack an appreciation of, and input from, actual teachers and learning scientists with a grounding in learning theory. AI development teams tend to lack relevant knowledge in the science of learning and instruction. One glaring example was IBM’s Watson Element for Educators, which was based entirely upon now-discredited “learning styles” theory and gave skewed advice for improving instruction.

Giving Priority to Adaptability rather than Quality

Personalizing learning is the prevailing ideology in the IT sector, and it is most evident in AI software and hardware. Meeting the needs of each learner is the priority, and the technology is designed to deliver the ‘right’ content at the ‘right’ time. The assumption that the quality of that content is fine turns out to be false; in fact, much of it is awful. Quality of content deserves to be prioritized, and that requires more direct teacher input and a better grasp of the science of learning.

Replacing Humans with Intelligent Agents

The primary impact of AI is to remove teachers from the learning process — substituting “intelligent agents” for actual human beings. Defenders claim that the goal is not to supplant teachers but rather to “automate routine tasks” and to generate insights to enable teachers to adapt their teaching to make lessons more effective.  AI’s purveyors seem blind to the fact that teaching is a “caring profession,” particularly in the early grades.

American education technology critic Audrey Watters is one of the most influential skeptics, and she has expressed alarm over the potential unintended consequences: “We should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to ‘intelligent tutoring systems,’ ‘adaptive learning systems,’ or whatever the latest description may be? What sorts of signals are we sending students?” The implicit and disturbing answer: teachers, as professionals, are virtually interchangeable with robots.

Will teachers and robots come to cohabit tomorrow’s classrooms? How will teaching be impacted by the capabilities of future AI technologies? Without human contact and feedback, will student motivation become a problem in education?  Will AI ever be able to engage students in critical thinking or explore the socio-emotional domain of learning? Who will be there in the classroom to encourage and emotionally support students confronted with challenging academic tasks?


Read Full Post »