Picture this. Sometime in the not-too-distant future, the classroom is redefined. Rather than a human teacher, artificial intelligence, or AI for short, is now in charge. Rows of students, bound to their laptops, are watched over by AI, their artificial teacher. Students are told off when they’re not working on the right thing, and given easier or more difficult problems based on their previous answers.
AI has worked out which resources in its collection seem to best engage its students. It does all the marking, writes report cards, and disciplines students for poor performance. Human teachers worldwide have been made redundant in their millions.
The last few years have brought some exciting advances in artificial intelligence, and with them many conversations about how it might render large swaths of the workforce, including teachers, redundant. There is no doubt that recent developments in AI represent a significant advance in computing, allowing us to solve problems with computers in a way that was unimaginable just a few short years ago. It’s also true that these new capabilities will replace some jobs or, perhaps more likely, increase productivity, as computers have done for the last few decades.
But AI isn’t human – not even close. And teaching is a far more human pursuit than the dystopian vision illustrated above. Replacing teachers with AI is premised on the idea that a teacher’s main job is to disseminate information and correct work. That, of course, couldn’t be further from the truth.
The current generation of AI isn’t really as intelligent as the media might have us believe. Its feats have been impressive – from beating grandmasters at chess and Go (the much more difficult ancient Chinese board game), to helping diagnose cancer and generating eerie (yet oddly intriguing) artwork. Yet in all of these cases, these AIs are only smart in a very specific domain – the AI that beats humans at chess can only play chess.
To understand why, let’s explore for a minute how modern AI actually works.
Let’s say we want an AI that can recognise whether or not there is a cat in a particular video on the internet. In order to do this, we have to “train” our AI. To do so, we provide the AI – a computer algorithm – with a so-called “training set”: in this case, a set of videos that are known to either feature, or not feature, cats. During training, we then give a virtual reward to the AI when it correctly detects a cat, not unlike training a dog to sit. That reward mechanism gives the AI (remember, it’s just a computer program, no voodoo here) a hint as to whether or not it is on the right track in its cat predictions.
If the training was successful, the AI should be able to identify cats in any video it subsequently watches. Note that the AI isn’t actually aware of what a cat is and isn’t – it simply has an algorithm (or pattern) for determining the visual cues of a cat. So if you later gave it a hand-drawn abstract image of a cat, it would struggle. Also note, importantly, that it has no concept of a dog, whether the video was shot inside or outside, or whether or not it’s funny or cruel. This is a specific intelligence: it is intelligent in one specific domain – the presence of cats.
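For the technically curious, the training loop described above can be sketched in a few lines of code. This is a deliberately toy illustration, not how a real cat detector is built (those use deep neural networks over raw pixels): it trains a simple perceptron on made-up, hypothetical feature scores standing in for a video’s visual cues.

```python
# A toy sketch of the "training" idea: reward correct predictions by
# leaving the weights alone, and nudge them when the AI gets it wrong.
# The features here are invented stand-ins for real video features.
import random

random.seed(0)

def make_example(is_cat):
    # Hypothetical features, e.g. "fur texture" and "ear shape" scores.
    base = [0.8, 0.9] if is_cat else [0.2, 0.1]
    features = [b + random.uniform(-0.1, 0.1) for b in base]
    return features, 1 if is_cat else 0

# The "training set": examples labelled cat (1) or not-cat (0).
training_set = [make_example(i % 2 == 0) for i in range(100)]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# Training: when the prediction is wrong, error is non-zero, and the
# weights are pushed toward the correct answer. That feedback signal
# is the "virtual reward" described in the text.
for _ in range(10):
    for features, label in training_set:
        error = label - predict(features)  # 0 when correct
        weights = [w + 0.1 * error * f for w, f in zip(weights, features)]
        bias += 0.1 * error

accuracy = sum(predict(f) == y for f, y in training_set) / len(training_set)
print(f"training accuracy: {accuracy:.2f}")
```

Note how narrow the result is: the trained `predict` function maps two numbers to a yes/no answer about cats, and nothing else. There is no concept of dogs, humour, or context anywhere in it.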
Now let us apply this same concept to your students’ latest essays. Let’s say we train an AI on tens of thousands of essays from around the world, and give it an idea of which of those are good or bad. Put aside for a moment the sticky problem of good or bad being a fluid concept. Think about what this artificial intelligence actually knows. It knows only essays, and some arbitrary set of rules, which it has defined itself, about what makes them good or bad. It doesn’t have a general understanding of the context, or the nature of the story. It can’t feel the emotion, and, most importantly, it can’t detect creativity. So what if a student comes up with a truly creative or unique response? If the AI hasn’t seen anything like it before, it will probably assess it as bad, because it doesn’t match any of its known patterns.
In other words, it will reward mediocrity. It takes general intelligence, a nuanced, interconnected understanding of millions of concepts, to comprehend, appreciate and appropriately respond to something creative.
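To make the mediocrity problem concrete, here is a deliberately naive, hypothetical grader. It scores an essay purely by how many of its words appeared in essays previously labelled “good” – a crude stand-in for the pattern matching described above, not any real grading system. The toy essays and the scoring rule are both invented for illustration.

```python
# Hypothetical sketch: a grader that scores essays by word overlap
# with its "good" training essays. Anything that looks unlike the
# training data scores poorly, however brilliant it may be.

good_essays = [
    "the industrial revolution changed society and the economy",
    "the revolution transformed the economy and daily society",
]
good_vocab = set(word for essay in good_essays for word in essay.split())

def score(essay):
    words = essay.split()
    # Fraction of words the grader has already seen in "good" essays.
    return sum(w in good_vocab for w in words) / len(words)

conventional = "the revolution changed the economy and society"
creative = "smokestacks rewrote childhood trading meadows for timetables"

print(score(conventional))  # high overlap with known patterns
print(score(creative))      # novel phrasing, so a low score
```

The conventional essay, which simply recombines familiar phrases, scores highly; the creative one scores near zero precisely because it is original. A pattern matcher cannot tell novelty apart from nonsense.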
And that’s just the simple case of an essay. A more open-ended, constructivist approach to the lesson would pose an even greater challenge for our fledgling AI teacher.
What else isn’t AI all that great at?
Well, as it turns out, pretty much all of the things a teacher actually does. Can it build meaningful, trusting relationships with students? Nope. Can it empathise with those students? Nope. It simply doesn’t have the tools, or the general intelligence, to understand the human condition and use it to motivate, persuade and inspire.
But modern AI isn’t useless in the classroom. Far from it. In the next few years, AI will become the ultimate personal teaching assistant. It will make marking quicker, help identify patterns in the work students do (and have ever done), and compare that with millions of other data points from around the world. It will provide useful advice and insights that save teachers time – time that can be spent applying their uniquely human talents.
Real-life human teachers aren’t about to be replaced by computers, and we should all be thankful for that. Perhaps one day artificial general intelligence will come up to par with human intelligence, but not anytime soon. Besides, if and when it does, our intelligent computer overlords may have alternative plans for us anyway. In the meantime, the best teachers will be the ones who take full advantage of their new semi-intelligent personal assistants, combining the best of what AI can offer with their uniquely human edge.
Byron Scaf is the CEO of Stile, an Australian education start-up that creates STEM curriculum resources used by over 100,000 school students. Born and educated in Melbourne, Byron studied neuroscience and engineering at Melbourne University before joining Better Place, an electric car infrastructure start-up, where he built and oversaw the Australian technical operation. Byron then transitioned from a focus on renewable energy to one of education. In 2012, he developed a learning platform for the Australian Academy of Technology and Engineering’s STELR program, an online STEM resource for Australian schools. Shortly thereafter, Byron was brought on to lead Stile, where he continues to head a team of passionate teachers and engineers. Byron’s vision for Stile is to create a thought-leading education organisation that works collaboratively with teachers, students and school leadership to create resources, professional development opportunities and industry partnerships that best prepare students for the future.