
Editor's note: The following text is a transcript of a podcast story.
NICK EICHER, HOST: Today is Tuesday, October 14th. Good morning! This is The World and Everything in It from listener-supported WORLD Radio. I’m Nick Eicher.
MARY REICHARD, HOST: And I’m Mary Reichard. It’s hard to talk about artificial intelligence without using human terms. And some experts say that’s a good thing…because the best way to keep AI from taking over might be to give it something surprisingly human.
EICHER: WORLD Opinions contributor Maria Baer says it’s an interesting idea…but one that probably won’t work. Still, it reveals something deeper: a God-shaped hole in both the machine and the men who made the machine.
MARIA BAER: Long before we were talking about computers achieving “deep learning,” MIT professor Sherry Turkle began writing about a peculiar problem. Audio here from a HuffPost video posted online.
SHERRY TURKLE: In 1976 it was just the beginning of the personal computing trend. And I saw how people related to their personal computers…
Turkle studies the sociological impacts of digital tech, and she wrote in the 1980s that kids were beginning to refer to their digital games in human terms. They spoke about what their games "knew" or didn't "know," and whether the games "could cheat."
Turkle said this implied a set of parallel risks: that digital tech would cause us to think of machines in increasingly human terms and of humans in increasingly mechanistic terms. Both, she claims, are categorical errors.
And yet, the temptation to talk about AI in terms of what it "knows," what it "says," and the kind of "thinking" it can do is almost insurmountable. Is intelligence really the right word? It feels dystopian and weird, but how else do we describe this thing?
AI is a form of computing that can scour and synthesize impossibly huge sets of data much faster and with far fewer errors than the human brain can. Its essential offering is its speed, and that's not nothing. To be clear, this is not a human action either. But it is like one, and there really isn't anything else it is so nearly like.
Geoffrey Hinton is widely referred to as a “godfather of AI.” On a recent Globe and Mail podcast he said that in order to diminish the existential risks posed by the technology, engineers ought to imbue it with a distinctly human virtue.
GEOFFREY HINTON: We have to face up to fact, they're going to be more intelligent. They're going to have a lot of power. And what examples do you know more intelligent things being controlled by less intelligent things? Well, the only one I know is a mother and baby.
Hinton and many other AI critics contend that AI will, by design, eventually have to choose between competing values. Because AI “learns” over time, we may lose control over which values, and how it weighs them.
His solution is to build AI with a "maternal instinct"—to "teach" it to value humanity's needs over its own. Hinton believes that the only thing that will stop the tech from "replacing" us is its decision to "parent" us.
Hinton is an avowed materialist, yet he has stumbled here upon a sacred and mystical truth. In fact, the "maternal instinct" has long been a jagged stumbling block for evolutionists like Hinton who believe that humans evolved from animals by continually adapting toward survival. There is no plausible reason for selflessness to "evolve" in such a scheme. Self-sacrifice is categorically antithetical to survival.
Evolutionists who are willing to grapple with this question will usually argue that selflessness "promotes social cohesion" or peace or some such, but that's a fully circular argument. It says we adapted to admire selflessness because it's admirable.
The real, non-circular argument is that selflessness is Good because it is what love requires, and because God is Love, God is Good, and He made us in His image. Being human, therefore, means being the only created beings with the capability and the moral imperative to be selfless.
Paul told the Philippians to "look not only to [our] own interests, but also to the interests of others." This kind of exhortation isn't even possible, let alone good, unless we're in a world designed and continuously protected by a God of love.
Fortunately, that’s the world we are in. Geoffrey Hinton is right that without selflessness, all relationships—even a material “relationship” between humans and machines they’ve built—will devolve into power struggles. But he’s wrong that selflessness is the kind of thing that can be “built into” something that’s not human.
It’s because selflessness doesn’t make practical sense that it must be chosen, and chosen again and again. The Holy Spirit has to give it to us, but we must choose to ask. As we barrel forward into a machine-dominated world, beseeching Him again and again for this distinctly human virtue is going to be essential.
I’m Maria Baer.
WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.