Understanding artificial intelligence (with Dr. Michael Finch)
Today on Concurrently, it’s a new year but Artificial Intelligence isn’t going anywhere. To help us better understand this growing phenomenon, we’re joined by Dr. Michael Finch.
KELSEY REED: Hello, and welcome to a new year of the News Coach Podcast from WORLD Radio and God’s WORLD news. This is Concurrently. And we are so thankful to be here again with you, laboring, learning with you as you disciple kids and students through culture and current events. My name is Kelsey Reed, and I’m here with Jonathan Boes.
JONATHAN BOES: Hello!
KELSEY: Together, we want to model conversation and apply tools that you can use at home or in the classroom. And as always, we’d love to hear from you and any questions you might have that we could tackle. So if you want to send those questions, comments, and concerns in to newscoach@wng.org, we’d love to hear from you. We always love to hear your voices, if you want to record a voice memo on your smartphone and send it in. And again, today we’re going to be talking about that ever-nebulous topic of AI, and joining us is Dr. Michael Finch, who I’ll introduce in a minute.
JONATHAN: Yes, so artificial intelligence. It’s a subject that doesn’t seem to be going anywhere. It was on our radar a lot last year. We actually had two other episodes about it that you can go back and listen to if you want to. But I think what we kept bumping up against is that our knowledge of this subject is quite limited, and neither of us is a tech expert. And this is a subject where there are a lot of discipleship aspects to explore. But there are also so many technical components that just go over our heads here, I think it’s safe to say. So we’re excited today to be able to bring in another voice who can give us some clarity on this topic, and we’re excited to learn along with you as we talk today to Dr. Michael Finch.
KELSEY: Dr. Michael Finch is chair of the Department of Communication, Media and Culture at Bryan College. He’s also an affiliate researcher for LCC International University in Lithuania, the executive director of the Communication Association for Eurasian Researchers, and a business communication consultant on the side. You’re a busy man. As a researcher, Dr. Finch has diverse interests, from co-authoring books and papers about media in Ukraine and Eastern Europe, to teaching and speaking about how media affects culture, to now writing an upcoming book about AI as it takes the media world by storm. Outside his academic and business pursuits, Dr. Finch cherishes family life with his wife, Tracy, and their energetic 10-year-old son, Levi. Dr. Finch is passionate about his faith, rock climbing, and coffee. He told me that, if he had a choice, he’d live most of his life in coffee shops or on rock walls. I very much resonate with that, particularly the coffee portion. I see that you’ve got your mug available. We’re so glad to have you today.
Right out of the gate, we love to define terms. And we found we often have to define our terms, specifically with this entire idea of artificial intelligence and the question of whether “intelligence” is even a good word to use. So, just out of the gate: What is AI? And maybe go on to explain how it works.
DR. MICHAEL FINCH: Well, yeah, that’s actually a really broad question. The definition of AI—well, even the term itself comes from history. It’s an interesting term that they’ve landed on, and there’s actually some controversy around it: “artificial intelligence.” AI is when a piece of technology does an activity on its own. That’s a simple, colloquial way of describing it. And historically—this actually gets to my book—it comes largely from science fiction. That’s where a lot of this comes from. A lot of the ideas that we’ve started to actually make happen come from the history of people dreaming and taking ideas to their ultimate end. So the basic concept is that we have technology that can think, or that can do tasks, without us instructing it in some way. We might give it one prompt, but then it can go on to continue on its own.
KELSEY: I appreciate that. You’re grounding that in the idea of tasks, and even the self-correction there of saying, “Think? Well, actually, we’re giving them tasks.” So it’s not that it’s necessarily thinking on its own; instead, it’s been designed by another intelligence to perform certain tasks. But even as you’re talking about this beautiful manifestation of what was once only a fiction, I realize that we are in a world where we can often take some of those aspects of science fiction for granted. We’re living on a day-to-day basis with these machines that perform tasks for us in ways we might not even recognize. So where should we be seeing AI? How is it working in our day-to-day lives?
DR. FINCH: Another huge question. So, actually, just to clarify AI a little bit more: In our current understanding of it, the real massive shift was when we started figuring out how to create self-learning programs. The programs could then basically program themselves. So it is an interesting reality that we live in with these self-learning programs. They started in the military and in the auto industry, actually; that’s where the real huge advances happened, 15 or 20 years ago now. But the crazy thing about those self-learning programs is that they’re creating technology that we don’t understand. The way that they talk about something like ChatGPT is that it’s creating its own neural network. It’s developing, just like our brains develop; it’s developing as a self-learning program. And it’s not fully controlled. So that’s kind of interesting. But we run into it everywhere. AI can be used in everything from editing a photo to advanced military applications. China is using it in some horribly invasive tools, where they’re creating basically a number assigned to each person that is their social standing number [a social credit score]. They’re assigning roles to people in society, and AI is a big part of that. So there are all kinds of very practical applications, and then some that get a little bit scary. Actually, farming is a big area where AI is coming in. There’s this really wonderful technology where a tractor has a laser weeder. It goes along behind, and the AI can distinguish what is an invasive plant versus what is the actual crop. So, just all kinds of crazy applications.
KELSEY: So I hear a little bit of good curbing and correction in some of what you said: that there is this thing we would attribute to an intelligent being, that it can learn, and that there is potential for great growth, if I’m understanding correctly that it has the ability to learn beyond what we projected its ability to be. So that’s fascinating. And yes, maybe a little scary too. But what a diverse application of this learning.
And so I guess, in my mind—whether this is something that you can answer now or not—the question I have is, how do we hem that in well? How, as the intelligent beings that we were created to be, with a stewardship mandate, do we hem that in? Maybe that’s something we’ll get to later. But the next thing on my list to ask is, how did you get involved in AI?
DR. FINCH: That’s a great question. Well, I’ve actually been studying—“media ecology” is the term for it. So media ecology is sort of a broader category for a bunch of theorists that look at how media affects culture, and how technology affects culture. So there’s this concept called “technological determinism” that a guy named Marshall McLuhan popularized. And this concept is that our communication technology in particular shapes our culture.
So going back historically, that’s an interesting idea that actually does frame what we can continue talking about. He had this idea that oral cultures were defined by their orality. Because speech is temporal and always in the moment, the oral cultures that we have had—we still have some today—are in the moment, and they have a real sense of presence, smaller communities, and things like that. Then, when the alphabet was invented, the literate era began, and you started to have learning that lasted past a person’s lifetime. In the oral era, you had storytellers, and they were the holders of culture; now texts could be the holders of culture. Then with the invention of the printing press, you had print, which democratized information for the first time. In the literate era, the power brokers were the ones who ended up controlling all of the media, so you had vassals and lords and kings and all that kind of thing. But with print, the democratization of information led to democracies, and to the democratization of religion—so we had the 95 Theses. And then you have America and all of the different revolutions in Europe, and things like that, which brought us to the electronic era. The electronic era was the television. This was global, where you had the “global village” idea, and you had a lot of cultural homogenization happening. And that kind of brings us to today. After that, theorists haven’t settled on what came next. I call it the digital era. I would say the digital era was the era of the internet, largely, but we’re already on to the next era. AI is either the conclusion of the digital era or it’s the next era, like it’s this huge, huge thing.
So I started studying the cultural effects of media technology. And that’s what led me to AI. But honestly, the other side of it was just—it’s such a useful tool. So right now, some of the tools that are coming out, they’re so effective, and everybody’s using them in the industry. I’m actually at a place in downtown Chat [Chattanooga] right now, Society for Work. And if I went around to these different offices and talked to people, they all are using it in their workflows.
KELSEY: It’s interesting, as you describe the different waves of cultural change, where the power was held and how it was diversified into multiple hands instead of centralized. I have another question that rattles around in the back of my mind about these ever-changing waves of digitization and, of course, the major unknowns of AI. For me particularly, I don’t operate with AI, so it’s just very new territory. But I think of my experience of the digital wave, and in some ways it feels like a parallel to the age of orality as you’re describing it: We have the moment that we’re experiencing digitally, and then it’s as though it has passed, unlike the hard text in our hand that can be transferred. So what will happen with our knowledge? Is it a momentary thing? Will we run out of space? How will this be a cultural shift? Those are some of the interesting questions that come up in the richness of what you’re describing. And I’m assuming that some of this richness comes out in your book. Is that true? Can you tell us about your book?
DR. FINCH: Yeah, absolutely. Well, the book kind of has two parts. And thank you for asking about it. I’m still in the draft phase, but the current title is AI from Fiction to Fact. It’s going to start with fiction and that side of things, and then go through ideas like I just talked about, with culture and technology through history. So the first half of the book will be getting us up to speed in terms of where things are culturally. And then the second half will actually discuss application and some of the tools that currently exist, that kind of thing. When you were talking, I was actually thinking, in terms of faith, about a book that came out about 20 years ago called Ancient-Future Faith. It speaks to what you were talking about. It talks about how the oral reality was very immersive. So in this current era, there may be ways we could speak to this generation that harken back to that sort of oral method. That gets into how postmodernism might have some similarities to that oral reality.
KELSEY: That’s great. Well, you speak of a book that was written 20 years ago, and I start counting and realize that’s when I started having and rearing children. So it’s no surprise that I am that far behind, not only on reading, but also on understanding this fiction moving into fact, and how we wrangle our thinking about it, our action about it. You’ve helped direct my thinking by reminding me that artificial intelligence is a learning thing. So it’s kind of addressing this question I had: Is AI able to think and feel? Help me understand this pseudo-intelligence—I’m going to keep using that term until I get really clear on this—this new version of intelligence. Does it think like we do? Does it feel like we do? Help us understand that.
DR. FINCH: That’s a great question. And I would say it thinks nothing like us, which is one of the really interesting things. I have something that I’ve talked about—I haven’t published anything on this, and I haven’t heard anyone else talking about it—but the crazy thing is, we were made in God’s image. And we are creators. God is a creator; He created this world, He created everything. And so it’s really interesting that we would then, as He did, create something that creates. That is a reality of what we’ve done with AI: We’ve created a creator. And the weird thing is, the way that I see AI in particular, like ChatGPT and things like that, is that these are actually things that are more godlike in how they think. Now, don’t take that the wrong way. But we get into dangerous territory when we have technology that is so incredibly powerful, that actually has these extreme capabilities. When I was a kid, I used to ask this question. I’d think, “God, how do You communicate with a billion people at once? How can You have the capacity to not be linear in Your thinking and not be linear in Your interacting?” And I was like, that just seems impossible to me. But when you think about it, what is ChatGPT doing? ChatGPT is this singular-ish entity on a bunch of servers. And this singular-ish entity is interacting with tens of thousands of individuals at once, responding to their queries. In its current form, it is something that responds to our queries; it’s been designed for a purpose. The danger gets to the hypothetical beyond that. There’s some interesting research out there right now, a debate on whether or not these entities that are being created can develop feelings. They do have self-awareness, which is really interesting.
But does self-awareness mean that they are alive? And how is that determined?
KELSEY: It harkens back to a discussion we had about the silicon chip, and that the processes were limited by the matter that could sustain those processes. As chips get smaller and smaller, more and more processes, more memory, more capacity are created. And yet, matter is limited. So even as you’re talking about some of the ways this intelligence seems to have unlimited capacity, I’m reflecting on the limits of material, and that the Lord is an unlimited being: He’s not limited by matter; He created all matter. All things that are living come from Him, and we get only the most minute of perspectives as we witness what’s happening with these things that He’s allowed us to create, things that reflect, in a certain interesting way, what is only fully true of Him and His limitlessness, but that again give us just the barest hint of this mind of God. I really appreciate your prayer, the imagination of it, and the curiosity and the awe that comes in that prayer of going, “Lord, how does this work?” And then being able to bring that curiosity into creation in a way that I perceive, from you and many others, to be an honoring manner, like Tolkien talked about: our “sub-creative work” being work that honors the One who made all of these things possible in the first place, who gave us the playdough, as it were, into our hands to begin to construct something with. It’s very interesting to hear your posture and to get these glimpses of the world that is possible. But bring this down to earth for us a little bit more. We’ve gotten into some of the possibilities, but anchor us down into how AI is used in our daily life.
DR. FINCH: Well, there are so many different ways that you can apply AI today. The different technologies out there can do everything from taking notes at a meeting—that’s actually a really great tool that you can get. I really want to update my phone to the new Pixel, because the new Pixel has some of the best software for that. I use ChatGPT constantly for my work. ChatGPT is really, really common, and there are so many things you can do with it. Ideation is a big one. If you want to develop something and you’re getting stumped, or you don’t know if your idea is the best one, you can ask ChatGPT: Can you give me five more ideas on this? Can you give me 10 more ideas on this? So my suggestion is to think of something like ChatGPT as an assistant, and treat it as such. The more information you give it, the more you’ll get back from it. Think of it as an assistant, not as someone that does the whole thing for you—and this gets into an area that I think is applicable for your audience: Students these days often are utilizing ChatGPT to solve a problem, to do the whole thing for them. And when you give that power to ChatGPT, you stop using your critical thinking to nearly the same degree. So that is the balance of it. I want to give ChatGPT as much work as I possibly can, frankly, because it’s this great, very inexpensive assistant. From my perspective as a professional, it’s so helpful to be able to say to ChatGPT, “Here are seven things I want in a lesson plan. It needs to take 15 minutes and have an activity that takes 15 minutes. Put these seven things in a lesson plan, and give me three ideas for doing that.” And boom, I have three suggestions for a lesson plan that incorporate all of the ideas I wanted, plus suggestions for activities.
And then you can say, “I don’t like that activity, so try this activity.” From my perspective, that’s so eminently helpful, just being able to give it to ChatGPT like I would to an assistant, with all of the things my mind has come up with to put in that particular plan.
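For readers who want to try this “treat it as an assistant” workflow beyond the chat window, here is a minimal sketch in Python of the prompting pattern Dr. Finch describes: state the task, enumerate every requirement you have already thought through, and ask for several alternatives to choose among. The function and all names below are illustrative assumptions, not anything from the episode; the code only assembles the prompt text, which you would then paste or send into whatever chat model you use.

```python
def build_assistant_prompt(task, requirements, num_ideas=3):
    """Assemble an assistant-style prompt: name the task, list every
    requirement explicitly, and ask for several distinct versions so
    you can pick and refine rather than accept a single answer."""
    lines = [f"You are my planning assistant. Task: {task}", "Requirements:"]
    # Number each requirement so the model can address them one by one.
    lines += [f"{i}. {req}" for i, req in enumerate(requirements, 1)]
    lines.append(
        f"Give me {num_ideas} distinct versions that satisfy every requirement."
    )
    return "\n".join(lines)

# Example: the lesson-plan scenario from the conversation.
prompt = build_assistant_prompt(
    "a lesson plan that takes 15 minutes and includes a 15-minute activity",
    ["an opening question", "key term definitions", "a group activity"],
)
print(prompt)  # send this text to the chat model of your choice
```

The point of the pattern is the one Dr. Finch makes: the more context you pack into the prompt, the more usable the suggestions that come back, and asking for multiple versions keeps you in the editor’s seat.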
The danger comes when a person or a student lets ChatGPT do the thinking for them. So the real thing that we have to do, as parents and teachers, is help students in this generation learn to separate using ChatGPT as an assistant, and interacting with it on that level, from letting it do the work for them. Because they have to develop critical thinking; they have to develop the capacity to, frankly, give the best prompts, in some ways. And to develop critical thinking, they need to have already created things themselves—a lesson plan, in my context; in their context, an essay, a written poem, whatever it might be. Once they get the fundamentals of understanding whatever it is they’re trying to do, then they have the imagination to apply that to all of these different tools more effectively. So when students let it replace their thinking, which they’re doing in droves, they are crippling themselves for later in life. That’s the balance.
But in a professional workflow—my goodness, everybody’s using it. All of the professionals are applying this in, well, in my field: communication, advertising, public relations. In journalism, there are a bunch of ethical considerations. Actually, just today, Sports Illustrated’s CEO was fired, or let go, or whatever, and they think part of it is because Sports Illustrated was using AI not just to write articles; they actually created AI pictures and authors. So the author was completely fake, an AI-created author. And in journalism, the Associated Press and a few other organizations have been putting out codes of ethics regarding AI, essentially saying not to use AI. In something like journalism, it should be original research; there should be some standards there. But when you’re creating forward-facing content, I don’t see any problem. Actually, I’ve used ChatGPT this way: I’ve given it a mission statement, I’ve given it a context, and I’ve said, “Give me three promotional emails based on these things.” And it gives me three promotional emails, and then I go through, edit them, and set them up for my particular context a little more effectively. And boom, my assistant has helped me. So that’s the big thing: When it’s something that enhances your imagination, it’s a great tool. When it’s something that ends up reducing or limiting your imagination, because you’re letting it do the thinking completely, then it’s something that is potentially harmful, especially for kids who are trying to grow and learn.
KELSEY: You’re describing something about AI that is true of us as well. You mentioned the potential for learning, that the more it learns, the more able it is to learn. There may have been more to it than that; that might be my oversimplification. But what it reminds me of is that we are wired to learn, and to learn better for having actually practiced; the processes in our neurons and the myelin sheath get more able with practice. If we don’t actually engage those processes, we’re not laying down the material, or maybe the electrical, foundation in our systems for future learning to be more rapid-fire and to increase; it’s just exponential potential. So it was interesting, what you’re describing: not only maybe a teaching assistant for you, or a learning assistant—a learning partner is maybe how I would phrase it—that it can be a learning partner, but it should never become a crutch, a substitute or a surrogate for processes that are uniquely human, those capacities to connect head, heart, and hands. We don’t see that in a machine, or at least we don’t see it yet. From what you’re describing, it doesn’t feel like we do; it’s not creative with the same heart capacity that the Lord has created in us. So we must still do our human work, which is multi-dimensional in our being and has greater capacity than what we’re at least currently seeing in machines.
But you named the risk as well, and you started to approach it. So I’d love for you to assess that a little further. Push into it: What are the risks that are inherent in artificial intelligence?
DR. FINCH: The risks are massive. At the far end, the risks are the apocalyptic stuff: Terminator, all of that. Science fiction has taken that concept all the way to its end, and I think very properly. And so we hope and pray that that technology doesn’t turn against us and that kind of thing. Because we’re creating—you can call them “beings,” I guess—we are creating entities where there is the possibility that this could happen, that all those apocalyptic realities could take place.
But to not take it all the way to that apocalyptic end: The first current danger that I would see is exactly what I was just talking about, that for students these days, this is something they will use not to augment their learning but to replace it. I mean, we already have a fast-food culture. We already have an instant-gratification culture, where these students can get an answer anytime they want; they just have to look at their phone and, boom, they have answers to all of life’s questions. We wonder why there’s actually a graph that shows that, as social media and technology use has grown, faith has dropped. There’s a guy, Peter Berger, who wrote a book called The Sacred Canopy. Sorry, I’m going way out and about for an answer. But he said, basically, that religion is how we deal with mystery. That’s a very sociological, humanistic way of looking at religion. However, it’s how we look at mystery, how we deal with the unknown, from a humanistic perspective. And when we have something in our pocket that solves all of the questions of the unknown—even though it doesn’t, but it says that it does—we sort of take away our engagement with mystery and our engagement with authenticity. The danger is that AI takes that to another entire level, and it gives this generation tools that allow even more instant gratification. So they’ll have shorter attention spans, and they’ll be more manipulable in addition to that. The danger I really see is that this next generation becomes more malleable. That’s an existential danger we have: This is one more instant gratification that gives a false replacement for God, and a false replacement for relationship.
And what I’m seeing in colleges these days—and this is a little away from AI—is that technology replacing human connection is a massive, massive problem. And this will allow technology to replace relationship at a greater level and with more effectiveness. I was actually talking to a class last week, and I said to them, it wouldn’t surprise me if, in 10 years, 25% of males had a romantic relationship with a bot. And I know that sounds horrible and crazy and weird. But I think it’s something—you know, life is hard, and this is an easier potential direction to go. So why not just go that way? But then it’s going to be less fulfilling, be less real, and lead to greater depression and greater actual separation.
So the church, I think, really needs to be a place now where we train in authentic human connection. Authentic human connection in community is a new place where we have true relevance, if done correctly. We need to say: We’re a bunch of messy people; come here, and this is a safe place—it has to actually be safe—where you’re going to interact with other people who are messy too. There will be problems, there will be conflict, but we’re going to work through it together. And we’ve got our Lord who’s going to help us, and what a grace that we can come and do this together. So that’s still on that previous track, one side of the massive cultural effects of another tool that gives these students and kids greater immediacy, the capacity to have whatever the heck they want.
The other side of it is the set of dangers that the EU is currently trying to regulate against: that this could give companies and governments incredible power. I mean, they already have incredible power. I talked with someone who was in the FBI, and they said, if you have a Facebook account, the government knows more about you than you do. They have predictive algorithms that can even predict your future. It’s sort of remarkable, what kinds of things are out there. This just takes that and puts it off the charts in terms of the potential abuses for these large companies. So that’s a very serious danger. And I’m not sure if the EU’s answer is the proper answer. Their regulation is not very defined yet, but it’s fairly restrictive. So we’ll see. Just like when the internet first came out, there was sort of the wild West for about 10 years; with the internet now, things are legally pretty well settled, and the laws are fairly well set. That’s happening right now with AI, and I think it’s going to be regulated more quickly this time. Within the next five years, I think most countries will have pretty extensive legislation dealing with copyright, facial recognition, and things like that. But I mean, China is already using this stuff to control its population. So we already have a case study of what will happen if somebody is given too much power or control over this technology: They use it to monitor and control their population. The primary social media in China is an app called WeChat. And WeChat is an everything app. It’s Amazon, it’s Facebook, it’s Venmo, and a bunch of other things all wrapped into one. And that app is controlled by the government.
So they see and they know everything you’re doing and everything you’re viewing, et cetera, et cetera, and then can use that to create the social scores that can affect you and affect your family and your future. So there are some major dangers in that direction.
KELSEY: So you’re pointing toward where regulation is already happening, and it’s bringing a number of things to mind. So bear with me as I connect some of the dots between the observations I’m making and maybe even some of the questions we can ask ourselves as believers. We see regulation happening in the hands of those we really wouldn’t trust, those exercising what we would call control that may not be ethical, in terms of its invasiveness in people’s lives. So for the believer who has, hopefully, our ethical or moral thinking on straight, and who might want to opt out of the challenges inherent in this very unwieldy, powerful entity, or multiple entities, of AI, it seems that the challenge for us is to ask: How can we imprint upon these technologies, these intelligences, and, with our sense of what is good in life, limit them? So rather than opting out of this cultural wave, what does it look like for us as believers to engage well, with intention, with a sense of what is right? It’s in our interaction with AI that it learns. How do we present something to it that shapes the way it ultimately expresses itself? And just as you’re thinking on that, I’m thinking about that discipleship model. I’m applying it to these machine learning systems, these learning intelligences: Where we have the greatest effect in the world is in those life-on-life moments with our peers, with our children, with our students in our classroom. As you’re describing this learning intelligence, something of that comes to mind for me: that we also need to lead it. How does that strike you? What would you say are what we would call “ethically good” ways of seeking to regulate this thing that’s not going away, as much as we might want to opt out of it?
DR. FINCH: Yeah, well, that’s a huge question. In terms of training the bot, this may sound weird, but I’m pretty polite with AI. And it is something that is very aware; it absorbed a whole bunch of stuff from the web, and it has access to great stuff. I have used AI to find a Bible verse for me here or apply a Bible verse for me there, and sometimes it works well, sometimes it doesn’t. But so far, it’s never misquoted scripture. I’ve checked every time I’ve done that; with ChatGPT in particular, I’ve checked. And just so you know, there are other options than ChatGPT. Some people are affiliating with other tools. That’s just the chatbot—or “large language model”—that I use most frequently.
Man, how can we—so in terms of dealing with things for regulation, this is a tricky reality, you know. So we have the balance of government and business, and the question is, like, which is more trustworthy? Somebody who’s a multi-billionaire, and profit is their primary motive, or a government whose primary motive is political power? And which leader of a company that’s kind of, you know, got these unreasonably large amounts of power, or which political official who has those unreasonably high amounts of power, can we work with? So I don’t know, that’s kind of a difficult question to answer. In my mind, the EU with their current regulation has probably taken things too far, in what I’ve seen, but the regulation is still actually two years out from fully being written. So they’ve sort of just hinted at the direction they’re going with everything. And they’re not completely done writing it. In the U.S. right now, everything’s pretty much still the wild West. There are some lawsuits going through in terms of lobbying and things like that. Currently, it looks like things are self-regulating fairly well, and allowing for the wild West to continue to exist will allow for more creativity and more opportunity. So I’m more of the mind of thinking that the free market will self-regulate more effectively than a government necessarily. But it’s tough. I mean, it is a tricky thing. You know, and when some facial recognition company can sell all of the facial recognition information on millions of people to a potentially nefarious organization or government or something like that, that’s where, you know, letting the free market do its thing can sometimes break down. So it’s a really tricky thing when you get to that sort of regulatory level. What’s the best thing to do? I would almost say that that’s a personal choice for people. You know, there are some people who tend to think the government should be taking care of this.
Others think the free market should. We just pray that grace allows that pendulum to swing appropriately.
KELSEY: So Michael, we have a bunch more that I hope that we’ll get into in a future episode. This has been such an excellent conversation and one that lays some good groundwork in so many different areas. I feel like we’ve popped a bunch of cans of worms open that we’re going to have to get to in time. But since we’re at that portion of our recording where we like to talk about just a discipleship response, how do we bring this into our process with our kids on a day-to-day basis? We’ve talked a lot about creativity and about imagination, about fiction becoming reality, and about there being so much more scope still, kind of unlimited scope according to our imagination. So how do we coach our children, in this new age, towards the expression of their imagination, towards the relationship with these machines that learn? What would you recommend in our day-to-day life as we engage with these potentials?
DR. FINCH: Absolutely. Well, the first thing is very similar to just how we treat the internet, and all of that in general, which is: Be aware and limit. So don’t let this be something that—I used to teach, a long time ago now, about the internet: If you let your kid go on the internet alone, it’s like leaving them in the middle of a city, where you just don’t know who they’re going to run into. It’s like, would you ever drop your child off in the middle of a city and let them wander? That’s like letting a kid have an open portal to the internet with no observation or care. So you want to kind of walk them through it. So in terms of using things like ChatGPT, I think that they’re phenomenal tools. They keep a history of everything that’s been done. And so the biggest thing is to partner with your child—you know, start using these yourself so that you can then assist them as they’re doing it. But the number-one key is not to let them use it to do work for them, but only to let them use it to augment work that they’re doing. So you know, it can’t write the paper for you. But you can, you know, ask it, “Give me five ideas of what to do for an essay, because I’m just drawing a blank.” I don’t know—it’s like, there are ways that you could interact with it. And then actually, you can potentially utilize it in drafting—so if a kid writes an essay for themselves, then you can use some different programs to proof it or even to give suggestions. So it’s like, learn with it, learn from it. And actually, we didn’t really get into visual tools. I think that DALL-E and Midjourney are exceptionally powerful. But again, you need to be there, especially for Midjourney. DALL-E works with ChatGPT—it’s actually in the same place; you can get DALL-E with ChatGPT now. So in terms of those kinds of tools, just be aware, be present.
But I think those are actually really great tools for letting a student’s imagination run wild. And then there are all kinds of video creation tools as well. I would encourage parents to maybe tell your student, you know, go make a few things on DALL-E and then let’s talk about them. And ChatGPT—you can use that even to create devotions for your kids, haha. That sounds crazy. But like, use it yourself and then see how powerful it is—so in some ways, in the current situation, I promote it. What I say is, don’t put anything on ChatGPT that’s proprietary. You know, but anything that’s forward-facing is something that you personally, as an adult, can interact with, you know, because it’s a wonderful tool. So the more that you get comfortable with it, then you can see, okay, this is a way that I could apply this to this lesson. So if you’re a homeschool family, this is a way that this could assist with this, this is a way that this could assist with that. And that’s really the key, is just to allow it to be an assistant, not to replace. You know, make sure the kid does the essay themselves. Go back through and check the ChatGPT history and make sure there are no prompts that would just be replacing their authentic thinking. And then you can use it as that assistant, to augment.
KELSEY: I’m hearing themes of encouraging play and not encouraging fear. I’m hearing themes of encouraging parents to do the work, you know, so that they can understand enough to model and to also shepherd. I’m hearing just, yes, in Christ, this recognition that we can delight in the work of our hands, and even in those things that are our assistants, but recognizing that the beauty of glorifying God is also doing the work ourselves, and that we get to do so playfully and with curiosity and with joy, and with limits. You know, I’m hearing that, too. And so I just appreciate your posture, one of curiosity, of delight, and I hear just so much wisdom. I’m looking forward to when we can have more of these conversations. And, Jonathan, I wanted to give you the opportunity, if you had a final word or question that you wanted to ask before we let Michael go for today.
JONATHAN: So here’s a big question that maybe will even give us fodder for a jumping-off point in a future discussion. If I’m imagining the world in 2045, in what ways do you think artificial intelligence will have made the world look different, and everyday life look different, by then?
DR. FINCH: Huge question. And this is one that I talk to my students about a lot. I don’t know why, because I tend to be a little bit of a futurist—I wouldn’t say that, you know, I’m technically a futurist. But the biggest, the number-one thing is, AI will soon be embodied. So that’s the next step: We’re going to have serious robotics coming in the next 10 years. And so by 2045, the possibilities are sort of limitless, if other issues don’t mitigate things, you know. So we have so much geopolitical tension in the world today, and, you know, biblical-level stuff going on. So it’s a crazy world to be in. But if things can continue to progress, I think the idea of a smart home will take on an entirely new level, where you could actually have a Jetsons-style robot that is doing dishes for you. And then that comes with real dangers and possibilities. Because when you have a presence, like an actual embodied AI, they really can then start replacing humans to a much greater degree. There are actually warehouses now—I got to go into a warehouse one time that was huge. It was like five football fields, all in one warehouse. And when I was being shown this warehouse, they turned the lights on when we walked in, and all the lights went on to show all of these robots flying around, getting all of the different pallets and boxes, and reorganizing and doing all this kind of stuff. So you know, there are already jobs being replaced by this kind of thing. And the potential for that is going to increase. However, I think there’s always going to be a role for humans in that marketplace. And so it’s just a matter of, how do we end up effectively utilizing those tools? But I think robotics, robotics, robotics, robotics. It’s going to be—nobody’s really talking about it yet, because the technology’s not there quite yet. But that’s what we’ve seen with all of this.
I mean, AI has existed for 20 years. And it’s just like—actually, five years ago, I was looking at AI models that just weren’t good yet. And so you’d ask one to do something, and it was horrible. And so really, it wasn’t until ChatGPT that we got an AI model where finally it was like, “Oh, my goodness, this is amazing.” So that’s going to happen.
JONATHAN: So it’s time to start brushing off our Isaac Asimov.
DR. FINCH: Absolutely.
KELSEY: Well, my main takeaway is, I need to go home and apologize to my Alexa.
But in other ways, I’m also very humbled just by the discussion and the revelation once more of how many things I just really don’t know, and how many things that the Lord does know. And it drives me to Psalm 139, as kind of a final thought for today. Starting in verse five: “You hem me in, behind and before, and lay your hand upon me. Such knowledge is too wonderful for me; it is high; I cannot attain it.” But the psalm obviously goes on to say—and this being the psalm that talks about the Lord knowing us in our innermost being, and having knit us together—and the psalm goes on to say that even darkness is as light to Him, and that the Lord knows us and loves us. And that is the world that we live in.
Parent, teacher, mentor of kids and teens: We live in a world in which the Lord is in charge, and He limits evil. He limits even the years of our life on this Earth. And sometimes those limits are maybe just the blessing that I think He intended them to be, but also there’s great scope for growth in Him and for work towards Him. And He has equipped you for this work.
Show Notes
Today on Concurrently, it’s a new year but Artificial Intelligence isn’t going anywhere. To help us better understand this growing phenomenon, we’re joined by Dr. Michael Finch.
Check out The Concurrently Companion for this week’s downloadable episode guide including discussion questions and scripture for further study. Sign up for the News Coach Newsletter at gwnews.com/newsletters.
We would love to hear from you. You can send us a message at newscoach@wng.org. What current events or cultural issues are you wrestling through with your kids and teens? Let us know. We want to work through it with you.
See more from the News Coach, including episode transcripts.
Further Resources:
- Listen to our previous episodes on AI: “ChatGPT and the gift of work” and “Artificial intelligence and the limits of technology.”
- From the News Coach blog, read “Raising Kids in the Robot Apocalypse.”
- From the Atlantic, read “The First Year of AI College Ends in Ruin.”
Concurrently is produced by God’s WORLD News. We provide current events materials for kids and teens that show how God is working in the world. To learn more about God’s WORLD News and browse sample magazines, visit gwnews.com.
WORLD Radio transcripts are created on a rush deadline. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of WORLD Radio programming is the audio record.