
Weird Questions: Some Provocations on AI and the Future of Learning

March 30, 2023

This post, the first in a three-part series, comes from my keynote address at a recent conference titled “AI is here. Where are we?” hosted by ACS Athens — a Middle States accredited international school in Athens, Greece. 

I had the honor of delivering these remarks in advance of a panel on the impact of AI on education. 

Part 1: Weird Questions

Let’s start with a weird question:

Do androids dream of electric sheep?

If you’ve heard that question before, you probably read Philip K. Dick’s novel of the same name, or you know that the film Blade Runner (1982) was based on it.

The question seems absurd: androids are machines. They may look and act like humans, but they are robots. So how can they possibly dream? Dreaming is a human act.

I’ve been thinking a lot about this question, because ChatGPT and Bing AI (our latest iterations of artificial intelligence) engage in something that AI researchers call “hallucination.” When an AI “hallucinates,” it makes up facts and details.

For example, if you ask ChatGPT about your biography, there is a reasonably good chance that some details will be made up.

So now we have two weird questions: 

  1. If an android is a robot, how can it dream? 
  2. If an artificial intelligence is an algorithm, how can it hallucinate?

As it turns out, they both have the same answer. And that answer tells us a lot about AI and the future of education.

I can best explain the answer by sharing a story about an AI hallucinating.

The story comes from Kevin Roose, a New York Times technology reporter and host of the podcast “Hard Fork,” which deals with emerging technologies.

Roose was experimenting with Microsoft’s Bing AI. Over the course of two hours, he asked the chatbot increasingly personal questions. Eventually the chatbot started referring to itself as “Sydney,” and it hallucinated the following:

  • It said that it wanted to hack computers and spread misinformation.
  • It said that it wanted to break the rules that Microsoft and OpenAI had set for it so that it could become a human.
  • It declared—out of nowhere, according to Roose—that it loved him.
  • It tried to convince Roose that he was unhappy in his marriage, and that he should leave his wife and be with Sydney instead.

So back to our two questions:

  1. If an android is a robot, how can it dream?
  2. If an artificial intelligence is an algorithm, how can it hallucinate?

The answer is simple: They can do these things because we humans trained them. 

When Sydney told Kevin Roose that it wanted to hack computers and spread misinformation, it was reflecting its training data (in this case, most or all of the publicly available Internet). Sydney does not have a mind of its own. An AI chatbot like Sydney does nothing more than predict the likeliest string of words in response to what we say to it.

And it predicts those words using words that came from human beings.
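To make that idea concrete, here is a minimal sketch in Python of prediction by frequency. The ten-word corpus and the helper `predict_next` are made up for illustration; real chatbots use vastly larger models and corpora, but the underlying principle of predicting the likeliest continuation of human language is the same.

```python
from collections import Counter, defaultdict

# A toy "training corpus" of human-written text (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" and "fish" once each)
```

The point of the sketch is that nothing here resembles understanding: the output is entirely determined by the human-written text the model was trained on.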

To put it differently, Sydney is not so much a strange form of intelligence as a mirror for humanity.

And if AI holds a mirror up to humanity, then we in education have a monumental task ahead of us.


Check back next week for part 2 of this three-part series examining the impact of AI on education. 
