AI is not your mom
The “godfather of AI” wants to give the technology a “maternal instinct” in order to protect humanity, but it won’t work
Geoffrey Hinton in Toronto, June 2023 (Wikimedia Commons)

It’s surprisingly difficult to talk about artificial intelligence without using anthropomorphic language. Decades ago, long before we were talking about computers achieving “deep learning,” MIT professor Sherry Turkle began writing about this peculiar problem. Turkle studies the sociological impacts of digital tech, and she wrote in the 1980s that kids were beginning to refer to their digital games in human terms. They spoke about what their games “knew” or didn’t “know,” or whether the games “could cheat.”
Turkle said this implied a set of parallel risks: that digital tech would cause us to think of machines in increasingly human terms and humans in increasingly mechanistic terms. Both, she claimed, are categorical errors.
And yet, the temptation to talk about AI in terms of what it “knows,” what it “says,” and the kind of “thinking” it can do is almost insurmountable. Is intelligence really the right word? It feels dystopian and weird, but how else do we describe this thing?
AI is a form of computing that can scour and synthesize impossibly huge sets of data much faster and with far fewer errors than the human brain is capable of doing. Its essential offering is its speed, and that’s not nothing. To be clear, this is not a human action. But it is like one, and there really isn’t anything else it is so nearly like.
This may explain why Geoffrey Hinton—widely referred to as a “godfather of AI”—said at a recent AI conference that in order to diminish the existential risks posed by the technology, engineers ought to imbue it with a distinctly human virtue: the “maternal instinct.”
Hinton said it’s reasonable to imagine that AI, which is built to “learn,” may one day “overpower” the humans that create it. If we build a machine we can command and then give it a directive as encompassing and vague as “make human work more efficient”—but without sufficient definitions, or limiting principles—what’s to stop the machine from “deciding” humans are in our own way? I can imagine this principle because I see it in my own life. It’s a bit like telling my husband not to let me stay up too late watching TV, despite how strenuously I protest at five minutes to midnight that I “just need to finish this episode.” Hinton (and many other AI critics) contends that AI will, by design, eventually have to choose between competing values. Because AI “learns” over time, we may lose control over which values it holds and how it weighs them.
His solution is to build AI with a “maternal instinct”—to “teach” it to value its “children’s” (humanity’s) needs over its own. “The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,” Hinton said. The only thing that will stop the tech from “replacing” us, he said, is its decision to “parent” us.
Hinton, an avowed materialist, has stumbled here upon a sacred and mystical truth. In fact, the “maternal instinct” has long been a jagged stumbling block for evolutionists—like Hinton!—who believe that humans evolved from animals by continually adapting toward survival. There is no plausible reason for the “evolving” of selflessness in such a scheme. Self-sacrifice is categorically antithetical to survival.
Evolutionists who are willing to grapple with this question will usually argue that selflessness “promotes social cohesion” or peace or some such, but that’s a fully circular argument. It says we adapted to admire selflessness because it’s admirable.
The real, non-circular argument is that selflessness is Good because it is what love requires, and because God is Love, God is Good, and He made us in His image. Being human, therefore, means being the only created beings with the capability and the moral imperative to be selfless. Jesus made two radical, world-changing claims during His ministry: that He was God, and that love, even when it might be considered weakness and vulnerability, is in fact a transcendent strength. Paul told the Philippians to “look not only to [our] own interests, but also to the interests of others.” This kind of exhortation isn’t even possible, let alone good, unless we’re in a world designed and continuously protected by a God of love.
Fortunately, that’s the world we are in. Geoffrey Hinton is right that without selflessness, all relationships—even a material “relationship” between humans and machines they’ve built—will devolve into power struggles. But he’s wrong that selflessness is the kind of thing that can be “built into” something that’s not human.
It’s because selflessness doesn’t make practical sense that it must be chosen, and chosen again and again. The Holy Spirit has to give it to us, but we must choose to ask. As we barrel forward into a machine-dominated world, beseeching Him again and again for this distinctly human virtue is going to be essential.
