
Artificial intelligence and the limits of technology


The buzz around artificial intelligence hasn’t slowed since we last discussed ChatGPT in February. We’re returning to this tech trend to see what’s new, define our terms, and answer some listener questions.


KELSEY REED: Hello, welcome to Concurrently: The News Coach Podcast from WORLD Radio and God’s WORLD News. Our mission is to come alongside you, learning and laboring with you as you disciple kids and teens through culture and current events. I’m Kelsey Reed. I’m here with Jonathan Boes.

JONATHAN BOES: Hello!

KELSEY: Together, we want to model conversation and apply tools you can use at home or in the classroom. We would love for you to send in questions for us to address in future episodes. Please send them to newscoach@wng.org.

One of the things that has been great about our time at conventions so far this season is that I am already having the opportunity to meet with listeners of Concurrently. And what a joy it has been bringing those conversations into life-on-life spaces. So I just want to shout out, first of all, those people who are listening and encouraging us. Thank you. I also want to mention that I have learned we have a number of students in our audience. These are older teens—those who are not in the category of mentors of children, but who are rising adults. They are already participating in this conversation. We are so thankful to have you as listeners, and we want you to know that you are warmly welcome in this space.

JONATHAN: Absolutely. I think it’s worth mentioning that, as an older teenager, even if you’re not actually a teacher or a parent, often there are younger siblings in your life or younger kids at church who are looking up to you as an example of how to respond to things. So certainly, I can see in that age group the benefit of wanting to learn how to respond well to culture and current events, and reflecting that to the younger people in your life.

KELSEY: Yes. You, too, are disciple makers. So thank you for being a part of this process. Please know that what you are learning you are having the opportunity to pass on as well. So thanks for joining us in this community.

JONATHAN: And now we have to pretend we’re cool.

KELSEY: Right? Oh, no.

JONATHAN: So way back—I think it was the fifth episode of our podcast—we discussed the rise of ChatGPT, and specifically what it means for education when kids can ask artificial intelligence to write essays for them. We can link to that episode in the show notes. But since we recorded that episode, ChatGPT and other AI models like DALL-E 2, which produces art, haven’t left the headlines. In fact, the world has been learning more about these technologies even since we recorded earlier this year. And we’ve also received listener responses to our last discussion.

So today, we are revisiting the world of AI, first to tackle some of those listener responses, and then to look at some of the new implications we’ve seen beyond just the realm of education, and how we as parents and educators can respond biblically, wisely in these areas.

So we’re going to start today with a listener response about definitions. This comes from Diane Wolgemuth. I hope I have your last name right, Diane. She writes:

We are grandparents trying to be educated in a world that's moving very fast! We have listened to all your podcasts, and thank you for the work you are doing! Our request is for more definitions. Sometimes you do define terms, but a few times we have had to google things, like “ChatGPT” and “doom-scrolling.”

Thank you again!

Diane Wolgemuth

Thanks, Diane, for listening. That is such a great request. We take definitions seriously here, and we apologize for the times we breezed right over terms we could have defined better. So we want to dive into definitions before we start our discussion today, as we come again to that term you mentioned in your email: ChatGPT. But before we even get into that—Kelsey, we talk about definitions quite a bit here. Why is it important to pause and think about definitions, and even to start a discussion like this by considering them?

KELSEY: Well, definitions are important for a number of reasons. We’ve talked about this a little in previous episodes, where we even had a process for evaluating changing words and meanings. Language often changes rapidly. That’s part of the reason we might need to define our terms: so that we know we’re using the same word the same way. Some of it, though—I am realizing even as we think about Diane’s response—is that sometimes there are words that are new for a generation. These brand-new terms need to be explored so that other generations can have meaningful conversation with us, and with the children they mentor. I was thinking about those various layers just this week. We have seven generations living right now. Words and meanings are culturally contextualized even within generations. So for us to have meaningful conversation, we need to supply a word with its definition, so that we can ride the same rails together and the conversation is productive, rather than devolving into something unhelpful.

JONATHAN: So much of the failure in conversations today, I think, comes from a lack of common ground and definition, where people don’t even know exactly what they’re talking to each other about. They’re just shouting over each other. And it makes conversation hard. It makes connection hard. I think it even makes discipleship hard.

So for today’s discussion, we’re talking about artificial intelligence. And you’ll hear all sorts of words come up in these sorts of discussions. Obviously, we’re talking about ChatGPT and large language models. You hear the terms “deep learning,” “machine learning”—we can’t get into all the nitty-gritty details of all those definitions today, because there’s just so much to unpack there. And honestly, some of it gets really technical, and neither of us is a technology expert. We are learning along with you, listeners, looking for the principles we can draw from these things.

But some simple definitions of artificial intelligence: The Encyclopedia Britannica calls artificial intelligence “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” And if you go to the IBM website, it says it’s “at its simplest form a field which combines computer science and robust datasets to enable problem solving.”

So a very simple definition of artificial intelligence. We all know that computers can perform logic tasks, right? We have a really cheap computer in our hands that’s been around for a long time called a calculator. It can figure out one plus one equals two. That doesn’t require artificial intelligence. But AI can actually take information and draw connections and produce new information. It doesn’t just solve logic problems. It actually creates things that appear as if they were created by intelligence. It can make the sorts of connections we almost associate more with a brain.

And you can get really into the weeds here. Terms like “deep learning” and “machine learning” relate to different technical types of artificial intelligence, different ways scientists build and train these machines. But when we’re talking about things like ChatGPT today, we’re talking about programs that can look at information out in the world—whether text, images, or video—and break it down into building blocks and patterns. Then, based on what you type in and request, they can produce something new out of those building blocks drawn from the world.
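To make that idea concrete, here is a minimal sketch of ours—an editorial illustration, not anything from the episode audio, and nothing like ChatGPT’s actual scale or architecture. A toy word-level model in Python shows the same basic move: count patterns in training text, then generate new text from those patterns.

```python
# Toy word-level bigram model: nothing like ChatGPT's architecture,
# but it illustrates "learn patterns, then generate" at a tiny scale.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog and the dog chased the cat"
)

# "Training": record which words were seen following each word.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": start with a word and keep picking an observed successor.
def generate(start="the", length=10):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no observed continuation; stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate())  # e.g. "the dog chased the cat sat on the mat the dog sat"
```

Everything the toy model can say comes from patterns in its training text—which matters later in this conversation, when the question of bias in training data comes up.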

KELSEY: So these machine learning models—I was trying to get my head wrapped around them as though they were like a sieve. But really, it’s much more than that. They are trained on data to recognize certain things. I loved your metaphor. I’m just going to go straight to it because it’s more tangible for me. It’s as though you fed into a system a bunch of LEGO building blocks, as well as all of the different architectural LEGO plans—the designs for the specific things you get to create—everything it has been trained on. And so when you put a query into it, “Build me a house,” it can refer to all of those plans, and it has all of the building blocks it needs. The comparison for these large language models: it’s like having all these words it can recognize because it’s been trained on them, plus a bunch of human-written material. And so when you ask it, “Write me an essay on Steinbeck,” it’s sourcing a bunch of information it was trained to recognize. Am I getting it right?

JONATHAN: Yes, and even the patterns of how that information is put together. We’re talking here about ChatGPT and other programs like it. GPT-4 is the latest one, and it can do even more than ChatGPT. Those programs are taking language and learning patterns in language to produce text. There are also programs like DALL-E that learn patterns in images, and you can write something like “make me a picture of a McDonald’s hamburger in the style of Van Gogh,” and it will. We talk about “deepfakes,” videos created to look as if they’re videos of someone else. For example: Russia created a fake video of President Zelenskyy of Ukraine, trying to get people to surrender. That was way back at the beginning of the Ukraine invasion. And that is AI taking massive amounts of video—in this case, video of Zelenskyy—breaking it down into those building blocks, and producing something new: a fake video of Zelenskyy that tried to convince people to put down their arms and let Russia invade.

So basically, these machine learning models are taking a bunch of data and learning the connections and the logic so they can produce new things. And those things they are producing—the reason this is in the news so much—are just alarmingly human. Nowadays, the text these models produce is basically indiscernible from human-written text. The images they produce are basically indiscernible from human-created art. And that’s what has a lot of people up in arms. They’re wondering: What does this mean for plagiarism? What does this mean for entry-level jobs? What does this mean for truth, when people can deepfake each other on the internet and make it look like the president of Ukraine is asking his citizens to surrender?

So it’s raising a lot of these questions about the extent of artificial intelligence. And I think it also, for us, led to further research on the limits of this technology. Where are we with it now, actually? Because it’s so science fiction in our brains, right? It seems like this is some sort of AI robot apocalypse. It seems like the limit is off. I think, for a lot of people, that moment when ChatGPT was released was like, “The future is here, AI can do anything.” But we are still living in a world that has limits. And so what in your research, Kelsey, are you seeing as some of these limiters that can help us approach this topic with that calmness we strive for, with this grounded sense of reality in something that seems so fantastical?

KELSEY: So I want to talk about those limits. But even before I do, I’m just thinking, as I listened to you about some of the other reasons why we work towards definition—you mentioned sci-fi. It feels like we’re living in this science fiction world, just the technology that has changed so much in my generation alone. And so for me, the fear can come if I don’t define things well, including defining those limits. And so we strive towards definition, not only of our words, but also—what are the laws that govern it?

So we have observed that these things are learning models. They are machines that learn. So somewhere there is a dominion we are expressing over them. And that’s a part of their limits: human dominion. But of course—we’ll go into this a little bit later—that dominion needs to be exercised responsibly. So just putting a pin in that: Dominion is a part of the limits of this machine age we are in.

JONATHAN: Speaking of definitions: What are we talking about when we talk about “dominion” in this context? Is it a voting system that is the subject of lawsuits?

KELSEY: Oh yes, okay, haha. I do not mean a Dominion voting machine. We are not even going there. I’m talking about that biblical concept of dominion. That is an older word. I’m not sure our most common translations use “dominion” to explain what we’re talking about.

JONATHAN: It’s also a board game.

KELSEY: Yes, it is. We could keep on going. When we’re talking about dominion, we’re talking about that governance that is in human hands to engage in creation. And of course, it’s very quick in our minds to realize that we do not assert governance very well. And we assert it in a broken way. Sin has entered the world. It taints our efforts to subdue, to govern, to steward, to shape things well. So when I say “dominion,” I’m talking about that stewardship of things that we either create or that the Lord has created and has called us to govern.

JONATHAN: So even though this seems fantastical, it falls under the things that God has given us to steward and control, because it’s here.

KELSEY: Yes. And so we make a lot of observations as we seek to govern something well. And that reminds me of one of the governing observations, or governing rules, we have learned about. What I’m about to describe is not like the laws of physics; it doesn’t hold true for all things. It is a rule based on observation rather than a rule for how something will always work. That observational rule is called Moore’s Law, and it relates to how quickly circuitry can get smaller—how quickly the components on dense integrated circuits in particular can double. The observation was that every two years, they doubled.

JONATHAN: Yes. So Moore’s Law, according to Wikipedia—as we talked about in our episode about finding accurate information, a great source. For this, I think it’s pretty reliable. Moore’s Law is basically the observation that the number of transistors in an integrated circuit doubles about every two years. So basically, when this was first observed—I think back in the 1960s—it was that the computing power of what places like IBM are producing, it’s basically doubling every two years.
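As a back-of-the-envelope illustration of that doubling—our sketch, with round numbers; the 2,300-transistor starting figure is roughly Intel’s 4004 chip from 1971, and the rest is pure arithmetic, not a precise historical record—a few lines of Python show how fast “doubling every two years” compounds:

```python
# Moore's Law as simple arithmetic: transistor counts doubling
# roughly every two years. Figures are illustrative only.
start_year = 1971
transistors = 2_300  # roughly the Intel 4004 (1971)

for year in range(start_year, start_year + 51, 10):
    print(f"{year}: ~{transistors:,} transistors")
    transistors *= 2 ** 5  # five two-year doublings per decade
```

Fifty years of that doubling takes you from a few thousand transistors to tens of billions—roughly what actually happened, and why the physical limits discussed next are now in sight.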

KELSEY: Just think about supercomputers. They used to take up rooms, buildings. They were massive. But technology has gotten to the place where we can make chips very much smaller, and make dense circuits that are smaller and more tightly packed. That’s partly because of the silicon chip, and a number of other things that have happened in technology, allowing us to bring what used to fill a room down to the level of personal computers. The exponential growth here is just amazing. And yet that growth has been limited. Part of that is economic. Part of that is the material that’s used. There are limits on how quickly we can make something “smarter.”

JONATHAN: And now we are running into literally the physical limits of how small you can make something. The people making circuits nowadays are literally getting down to the atomic level. And unless you want to get into some really squirrely physics and things about the uncertainty principle and all that, we’re getting close to the limit of how fast we can make these circuits, at least with our current methods. But other methods are being developed. Some of them are still far off. But there are actually physical limiters on what things like machine learning can do, on what a computer can do. And we’re starting to reach those limits, at least some people say. That just kind of reminds us that this isn’t fantasy land. We’re still living in a grounded world that operates on economic rules and physical rules. There is not an unlimited ceiling necessarily for what computers can do. And I think just that knowledge alone helps us bring a level of levelheadedness to this discussion.

KELSEY: So we’ve spoken of the limits that relate to how man’s dominion is asserted over these machines. There are the limits of physical construction materials—you know, do we have anything that can go smaller? There are the limits of finances: What kind of economic resources can we really leverage toward letting this keep growing by leaps and bounds? But then, circling back around to dominion or governing, there are ethical limits here, too. You were talking about something I found fascinating. Studies of the human brain show how much greater capacity the brain has for these operations than a machine does. There have been experiments growing brain tissue in labs and seeing if that tissue can be made to perform functions. There are ethical questions that go with that, and with what we do to develop machines. So if we’re thinking about what it means to limit and steward things well, it behooves us to ask the questions of that ethical domain: Should we do this?

JONATHAN: So there is a lot of information there. Again, neither of us is a technology expert. Neither of us is designing AI programs. These are things we are learning and trying to grapple with. But the basic principle to draw from this, I think, is that whatever innovations come, we are still in a world that is governed by laws and rules, and there are limits on it. And we are still the image-bearers God has put on Earth to guide those things, and God will give us the ability to do that, I think, if we’re pursuing faithfully—however that looks in our own lives.

So we have another question from our ChatGPT episode. This one comes from a woman named Ruth.

KELSEY: She writes:

I have listened to all but the most recent episode. I can understand your effort to not dictate to parents and teachers what to do exactly. I guess I am a little more black & white in my approach to things. For example, in the AI episode, I think it would be appropriate to directly and strongly say that using AI to compose papers for any school assignment is wrong and sinful, that it is deception and stealing. I am sure there are good uses for AI and that students get drawn into the sin of using this for assignments for various reasons. It is good to discuss those reasons. But I do think parents and teachers need to draw a firm and clear line and declare what is clearly wrong to be so.

Thanks for your thoughtful work!

Blessings,
Ruth

JONATHAN: So first thing, Ruth, I would say: absolutely. Plagiarism—having a machine write your paper and then claiming it was written by you—that is wrong. And we would affirm that that is sinful. If we didn’t say that explicitly in the ChatGPT episode, it’s just us assuming, “Yeah, of course plagiarism is wrong.” We were trying to draw out the idea that ChatGPT in itself can be used for good things or bad things. And plagiarism—having it write a paper for you that you are supposed to write yourself—would definitely be one of the bad things. So if we didn’t affirm that clearly in the last episode: Certainly, any form of plagiarism is wrong.

KELSEY: And to echo what we said before, we really want to reiterate the value of the work we have the privilege of doing as human beings. It shapes us. When we create things on our own—even if they’re inspired by others, or others come alongside to help shape them—when we do the work ourselves, we do so much more good to our own selves. So plagiarism is a great loss. It’s not only stealing from another; it’s robbing ourselves of something we would have benefited from greatly.

JONATHAN: So it’s not only sinful, it’s also harming ourselves. If we’re the ones plagiarizing, we’re stealing from ourselves that learning, that growth, that opportunity. So thank you, Ruth, for that response, for helping us be more clear in the way we articulate these things. We certainly don’t want to make it seem like that’s a non-black-&-white issue. Certainly, plagiarism, pretty clear: It’s a wrong thing. It’s a form of lying.

KELSEY: So thank you for giving us the opportunity to clarify some of our thinking. And that leads us to this last question, where we also get the opportunity to clarify some of our words, some of our thinking in another area.

JONATHAN: Yes. This one comes from Steve Thompson. He writes:

Hello, good morning,

Thank you for your work in exploring biblical worldview in this podcast. I was listening here to learn of this news item, rather than another internet search and click.

On today’s World and Everything In It, a news item about how ChatGPT’s responses are politically biased counteracts your statement at this 21:15 minute mark in your conversation about neutrality of the AI program.

Is not ChatGPT a conglomeration of human writings (not ultimately neutral) and programmed by a business interest that by definition cannot be neutral?

Or perhaps I’ve put two points together that aren’t a direct correlation?

Thank you again. Your work for the Kingdom is worthy.

All under grace, in Christ,

Steve Thompson

KELSEY: Steve, thank you so much, again, for the opportunity to think about our thinking and to clarify our words. You are absolutely right. If we proclaim that words generated from human source material are neutral, we are in the wrong. So we want to say: Not only is this tool sourcing material that is not neutral, it was also created by humans, who are not neutral either. You are correct.

One way to say what we are seeking to say is that any tool in and of itself, with no human interaction, is just a tool. It cannot sin. But we ultimately bring ourselves to it. And when we do that, we bring bias. We bring our sin to it. We need to consider how we are influencing all the things we have made.

JONATHAN: And maybe we were just a little loose with the word “neutral,” because that can carry some different shades of meaning. Thinking again about definitions—I think we meant to say “neutral” in terms of, in itself, a tool like ChatGPT isn’t necessarily good or evil. Like you said, it can’t sin. But at the same time, I would say it’s not “neutral” in the sense that it does have tendencies, it does have leanings. It’s not going to come without its own tendencies to cause harm. And so we need to be aware of the ways in which it might lean certain directions that are unhelpful. It’s not neutral in that sense.

One of the ways we’ve seen that is in some of the bias it produces. I’ll say what I also said to Steve. You know, back when we recorded that first episode, we hadn’t even been hearing the news yet about people feeding these political prompts into the system and discerning some political bias in ChatGPT’s responses. That was all still coming out. So we weren’t even thinking in that direction yet. But that’s a great thing to bring out. It also raises questions about how these artificial intelligence programs work.

As we were saying, it’s looking at human-produced data. Steve acknowledged this in his question: It’s a conglomeration of human writings. And think about all the different human texts a machine like ChatGPT might be trained on. People are noticing that ChatGPT will not write a positive essay about Donald Trump. Well, think about the news coverage of Donald Trump—think about what this machine was probably trained on. Whatever you think of Donald Trump, he’s a very controversial figure, and much of the news media does not speak of him favorably. This machine-learning model is going to be trained on all that controversial, unfavorable data. And those are the patterns it’s going to reproduce.

And of course, you also run into the fact that there are programmers trying to avoid controversy. Again, when it comes to a figure like Donald Trump, I almost hesitate to bring him up because people have such strong opinions either way. And I think that’s partly what the developers are trying to avoid. Similarly, with a lot of different controversial topics you ask ChatGPT to address, it hesitates and doesn’t want to give you a clear positive or negative answer. The developers have definitely intervened to try to avoid getting in hot water for ChatGPT giving controversial responses. That being said, it is kind of complicated, because if you ask the machine to write positively about other conservative figures, it will. If you ask it to write negatively about liberal figures, it will. So there’s a lot of complexity here. But I think ultimately, it comes down to knowing the purpose of a tool like ChatGPT. That’s something you brought up, Kelsey.

KELSEY: Yes. And as I’m listening to you, I have to say—you’ve already put it through its paces. You went and checked this data out. You wanted to understand. So I’m thankful for what you’ve done to seek out, “How is this tool actually used?” You know, what was it intended for? And how can I make it work? And you are giving us a wealth of data because you went to it with those questions.

So when we look at a tool, we’re asking: What are you about? What can you do? How should I use you? What should I expect of you? And I tend to maybe camp out in the questions instead of in the answers, but I hope that those questions help you as you not only approach these machine learning models, but other tools in your life. You know, what is a hammer for? It’s not for writing an essay. I’m maybe mixing some of our content into this idea of asking what a hammer is for, but I think it illustrates the point.

JONATHAN: A tool like ChatGPT really isn’t meant to give you political opinions. It’s meant to make polished text. Asking it for them is really, like you’re saying, trying to use a hammer to write an essay. It’s not what it’s built for. But at the same time, to summarize what I was saying earlier, and what Steve affirmed in his question: ChatGPT is being trained on biased human content, and that human bias is going to leak through. That has been a problem with all sorts of AI programs in the past. And again, it puts the responsibility back on us, back on people, because the bias we see in ChatGPT—the non-neutrality of it—ultimately comes back from the bias in humans.
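To see how bias in training data can leak straight into output—again a toy demonstration of ours, not ChatGPT’s actual mechanics, with invented example sentences—consider a pattern-learner fed a skewed set of sentences. It has no opinions of its own; it simply echoes the imbalance it was given:

```python
# Toy demonstration: a "model" that only learns which description
# most often follows a subject will reproduce whatever skew its
# training data contains. All sentences here are invented examples.
from collections import Counter

training_sentences = [
    "the mayor is controversial",
    "the mayor is divisive",
    "the mayor is controversial",
    "the engineer is brilliant",
]

# "Training": tally (subject, description) pairs.
learned = Counter()
for sentence in training_sentences:
    subject, _, description = sentence.removeprefix("the ").partition(" is ")
    learned[(subject, description)] += 1

# "Generation": emit the most common description seen for each subject.
for subject in ("mayor", "engineer"):
    counts = Counter({d: n for (s, d), n in learned.items() if s == subject})
    description, _ = counts.most_common(1)[0]
    print(f"the {subject} is {description}")
# -> "the mayor is controversial" / "the engineer is brilliant":
#    the skew in the data comes back out, though nothing here "chose" a bias.
```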

KELSEY: I hesitate somewhat to use this comparison, but when we are rearing our children, we are influencing how they’re growing up. Now, it is much different when we’re talking about training a non-sentient machine. And yet our culture, who we are as human beings, is imprinted on the things we create. That’s what I mean by “culture,” even that, whatever we have created, whatever we are creating, it all comes out of our current human context, what we’re believing, what we see as a problem that needs fixing, and how we create solutions to that problem. That all comes out of who we are. And so we need to definitely stand back, metaprocess a little bit, which means standing outside of our doing and our thinking, to think about our thinking and our feeling and our doing, to slow that process down and maybe even interrogate ourselves.

JONATHAN: So Steve, thanks for catching us maybe being a little loose with that word, “neutral.” We again affirm that ChatGPT isn’t in itself good or evil, but it does carry a lot of human biases that it has absorbed either from its developers or from the human stuff it has been trained on. And so we always need to be careful. We can never just assume that something like this is going to give us a neutral, non-biased answer, much as we were talking about with Google on our episode with Collin Garbarino and Juliana Chan Erikson. We can’t just trust these machines to give us totally trustworthy, non-biased answers.

KELSEY: This message was approved by our AI overlords.

So I’m just very thankful that we get to have these conversations. What a privilege it is to be able to work out discernment with one another, knowing that we are governed. I think of the Lord’s hand on us, that His knowledge of us is high. We cannot fathom it—I’m back in Psalm 139 again—and that we are limited. Our days are limited. We are taught by Moses, his words in Psalm 90, to number our days that we might get a heart of wisdom. I’m just very grateful that we are not ultimately in charge.

JONATHAN: So thank you to everyone who sent us questions and responses. As we said at the top of the episode, you can reach out to us at newscoach@wng.org. We would love to hear from you. And as this world of AI is growing, we’ve talked a lot about ChatGPT now, but that’s just the tip of the iceberg. There will be many more opportunities to explore technology and the questions it poses for us, and we would love to know what’s on your mind. How can we best join you in this conversation?

KELSEY: So parents, teachers, mentors of kids and teens, and you older students who are joining us and even mentoring those who are coming after you, we want to remind you again: He has equipped you for this work.
Show Notes

The buzz around artificial intelligence hasn’t slowed since we last discussed ChatGPT in February. We’re returning to this tech trend to see what’s new, define our terms, and answer some listener questions.

We would love to hear from you. You can send us a message at newscoach@wng.org. What current events or cultural issues are you wrestling through with your kids and teens? Let us know. We want to work through it with you.

See more from the News Coach, including episode transcripts.


Concurrently is produced by God’s WORLD News. We provide current events materials for kids and teens that show how God is working in the world. To learn more about God’s WORLD News and browse sample magazines, visit gwnews.com.

This episode is sponsored by The Girl Talk Podcast.

“Engaging Conversations About Authentic Faith”

Each week, three ladies from The Light FM team come together to talk openly and honestly about their life and how it intersects with their faith in Jesus Christ. They struggle out loud, celebrate out loud, and they are just like you. Join in the conversation and listen wherever you get your podcasts or learn more at https://thelightfm.org/girltalk/.


WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.
