This post, the second in a three-part series, comes from my keynote address at a recent conference titled “AI is here. Where are we?” hosted by ACS Athens — a Middle States accredited international school in Athens, Greece.
I had the honor of delivering these remarks in advance of a panel on the impact of AI on education.
Part 2: Mirror, Mirror
When faced with a mirror, we have two choices: (a) we can scrutinize ourselves, or (b) we can look somewhere else.
What do those choices look like for educators in the context of artificial intelligence?
When ChatGPT was released, some schools banned its use or decided that students would have to complete assessments the old-fashioned way—in the classroom with pen and paper.
Other schools decided that ChatGPT had opened a door to think differently. Those schools asked students to work with ChatGPT, even if that meant that their teachers needed to experiment with new forms of assessment.
The schools that have embraced AI are looking intently at what’s in the mirror in front of them.
One English Literature teacher told me, “When ChatGPT came out, the first thing I did was prompt it with one of my writing assignments. The result was decent. Not great, but certainly not bad. And that’s when I realized, ‘If an algorithm can write a decent response, maybe that assessment isn’t very good to begin with.’”
As far as I’m concerned, it took a lot of courage for that teacher to gaze into the mirror. Other teachers have told me similar things.
Conversely, I would like to propose that the schools that have banned AI are turning away from the mirror in front of them.
Among other things, they may have decided that ChatGPT is a machine to accelerate cheating, a possibility they want to eliminate.
But is ChatGPT really a cheating machine? Or are we humans the cheaters who will use whatever tools we can find when we aren’t invested in the learning process?
By the way, there is a certain irony in schools banning the use of AI by students. As my 15-year-old son told me, “Sometimes the school work you get requires you to answer like a machine.”
We need to sustain our gaze at this AI mirror, because this technology is only going to get more sophisticated, more powerful, and more pervasive over the next five years.
If we agree to look in the mirror, then we need to ask the essential questions articulated by the hosts of the ACS Athens conference “AI is here. Where are we?”
- What is the role of education in keeping human intelligence relevant in the age of AI?
- How can education bridge the gap between human and artificial intelligence?
- What are the implications of AI on education and the skills required for success in the future?
What is the role of education in keeping human intelligence relevant in the age of AI?
I would like to propose that the only way education can keep human intelligence relevant in the age of AI is to ensure that students learn to work with artificial intelligence.
Not against it. We can’t win that battle.
Nor can we simply give up and allow AI to decide everything for us. That would be even worse.
Instead, the role of education can be reflected in a simple expression:
Human + AI > Human vs. AI
As Reid Hoffman has put it, soon everyone will have an AI co-pilot.
Think of how ubiquitous smartphones have become. AI co-pilots will be just as pervasive, because AI is a tectonic force.
This is an opportunity for teachers to lead the way.
For example, Paul Erb, who teaches at Woodberry Forest School in Virginia, has allowed his students to write their essays with the assistance of generative tools like ChatGPT.
However, he has also told them that their essays must reflect an argument the students themselves believe in, must be written in each student’s own voice, and must reference their class discussions. He tells his students that their essays will be graded for coherence, correctness, familiarity with the text, suitability of the quotations chosen to support their theses, and the clarity and force of a logical argument. He also tells them that their essays should not read like BS. (He uses BS in the technical sense here—as in stuff that’s made up but sounds convincing.)
By itself, AI will generate BS, hallucinations, and C+ work. But a student can potentially accomplish much more by working with an AI co-pilot. And they can likely work faster.
How can education bridge the gap between human and artificial intelligence?
School has always been good at helping kids to learn the “Known Knowns.” Consider the traditional curriculum, which matters tremendously. We need a solid base of knowledge upon which to scaffold and build new learning.
Now we also need to evolve the curriculum model to address the “Known Unknowns”—for example, learning about and with AI.
There are no AI textbooks with the answers in the back—none that will be worth teaching in a traditional class, anyway. Anything that was published recently is already obsolete.
Instead, teachers need to design opportunities for students to experiment with AI. Problem-, project-, and challenge-based learning approaches work well for learning about the “Known Unknowns.”
This is a governance and leadership issue.
Boards of Trustees or Boards of Directors need to work with school leaders to invest resources in “Known Unknown” learning experiences.
What are the implications of AI on education and the skills required for success in the future?
AIs (and the algorithms on which they are built) are decision-making or decision-framing machines.
But as Yuval Noah Harari reminds us, “We had better understand our minds before the algorithms make our minds up for us.”
In other words, we adults need to be willing to look in the mirror and we need to teach our students to look in the mirror.
Specifically, school should be a place where—by design—students ask and answer questions that help us understand how to frame and make the most important decisions in the world:
Who am I? Who are we? What matters to us? What are we going to do about it?
This is a school-wide conversation.
Do your teachers, parents, students, and other stakeholders agree that the purpose of school is to address those questions?
They ought to reflect on an additional insight from Yuval Noah Harari: “Very soon, somebody will have to decide how to use this power [i.e., the power of artificial intelligence] based on some implicit or explicit story about the meaning of life. Philosophers are very patient people, but engineers are far less patient, and investors are the least patient of all. If you don’t know what to do with that power, market forces will not wait a thousand years for you to come up with an answer.”
Check back next week for the final installment of this three-part series examining the impact of AI on education.