A recent headline in the New Scientist caught the eye of University College London Professor Rose Luckin, widely regarded as the “Dr. Who of AI in Education.” It read: “AI achieves its best mark ever on a set of English exam questions.” The machine was well on its way to mastering the knowledge-based curriculum tested in examinations. What was thrilling to Dr. Luckin might well be a wake-up call for teachers and educators everywhere.
Artificial Intelligence (AI) is now driving automation in the workplace, and the “Fourth Industrial Revolution” is dawning. How AI will impact and possibly transform education is emerging as a major concern for front-line teachers, technology skeptics, and informed parents. A recent public lecture by Rose Luckin, based upon her new book Machine Learning and Human Intelligence, provided not only a cutting-edge summary of recent developments but a chilling reminder of the potential unintended consequences for teachers.
AI refers to “technology that is capable of actions and behaviours that require intelligence when done by humans.” It is no longer the stuff of science fiction: it is popping up everywhere, from voice-activated digital assistants in telephones to automatic passport gates in airports to navigation apps that guide us as we drive. It’s creeping into our lives in subtle and virtually undetectable ways.
AI has not been an overnight success. It originated in the summer of 1956, some 63 years ago, at Dartmouth College in New Hampshire, as a summer project undertaken by ten ambitious scientists. The pioneers worked from this premise: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Flash forward to today, and that premise is closer to actual realization.
Dr. Luckin has taken up that challenge and has been working for two decades to develop “Colin,” a robot teaching assistant to help lighten teachers’ workloads. Her creation is software-based and assists teachers with organizing starter activities, collating daily student performance records, assessing the mental state of students, and assessing how well a learner is engaging with lessons.
Scary scenarios are emerging, fueled by a few leading thinkers and technology skeptics. Tesla CEO Elon Musk once warned that AI posed an “existential threat” to humanity and that humans may need to merge with machines to avoid becoming “house cats” to artificially intelligent robots. Theoretical physicist Stephen Hawking forecast that AI will “either be the best thing or the worst thing for humanity.” There’s no need for immediate panic: current AI technology is still quite limited; it remains mechanically algorithmic, programmed to act upon pattern recognition.
One very astute analyst for buZZrobot, Jay Lynch, has identified the potential dangers in the educational domain:
Measuring the Wrong Things
Gathering data that is easiest to collect rather than educationally meaningful. In the absence of directly measured student learning, AI relies upon proxies for learning such as student test scores, school grades, or self-reported learning gains. This exemplifies the problem of “garbage in, garbage out.”
Perpetuating Bad Ways to Teach
Many AIfE algorithms are based upon data from large-scale learning assessments and lack an appreciation of, and input from, actual teachers and learning scientists with a grounding in learning theory. AI development teams tend to lack relevant knowledge in the science of learning and instruction. One glaring example was IBM’s Watson Element for Educators, which was based entirely upon now-discredited “learning styles” theory and gave skewed advice for improving instruction.
Giving Priority to Adaptability rather than Quality
Personalized learning is the prevailing ideology in the IT sector, and it is most evident in AI software and hardware. Meeting the needs of each learner is the priority, and the technology is designed to deliver the ‘right’ content at the ‘right’ time. The assumption that the quality of that content is fine turns out to be false; in fact, much of it is awful. Quality of content deserves to be prioritized, and that requires more direct teacher input and a better grasp of the science of learning.
Replacing Humans with Intelligent Agents
The primary impact of AI is to remove teachers from the learning process — substituting “intelligent agents” for actual human beings. Defenders claim that the goal is not to supplant teachers but rather to “automate routine tasks” and to generate insights to enable teachers to adapt their teaching to make lessons more effective. AI’s purveyors seem blind to the fact that teaching is a “caring profession,” particularly in the early grades.
American education technology critic Audrey Watters is one of the most influential skeptics, and she has expressed alarm over the potential unintended consequences: “We should ask what happens when we remove care from education – this is a question about labor and learning. What happens to thinking and writing when robots grade students’ essays, for example. What happens when testing is standardized, automated? What happens when the whole educational process is offloaded to the machines – to ‘intelligent tutoring systems,’ ‘adaptive learning systems,’ or whatever the latest description may be? What sorts of signals are we sending students?” The implicit and disturbing answer: teachers as professionals are virtually interchangeable with robots.
Will teachers and robots come to cohabit tomorrow’s classrooms? How will teaching be impacted by the capabilities of future AI technologies? Without human contact and feedback, will student motivation become a problem in education? Will AI ever be able to engage students in critical thinking or explore the socio-emotional domain of learning? Who will be there in the classroom to encourage and emotionally support students confronted with challenging academic tasks?