Think Fast: AI Goes Rogue

July 11, 2025 00:07:55
Think Deeper

Show Notes

Grok, the AI chatbot found on Elon Musk's X, went off script recently, calling itself "MechaHitler," questioning the Holocaust, and more.

We discuss the lessons Christians should take from this about their use of AI.

Catch the full episode of Think Deeper on Monday, 7/14!

With Will Harrub, Jack Wilkie, and Joe Wilkie

focuspress.org


Episode Transcript

[00:00:00] Speaker A: Grok has been in the news a significant amount. [00:00:08] Speaker B: Just a little bit. [00:00:09] Speaker A: Yeah, yeah, just. Just a little bit. Grok went crazy on, was this a couple days ago? Just went absolutely nuts. And so for those that are unfamiliar, Grok is the X, or formerly Twitter, AI helper or whatever. And it's a part of the platform where you can go and ask questions, very much like ChatGPT. And yeah, Jack, I'm actually gonna let you get in. You're the one that brought this to my attention. Did you see what happened with Grok? And I thought I had seen something. I wasn't really paying attention. And then, my goodness, I got on there and this was wild stuff. [00:00:38] Speaker B: Apparently they toned down the moderation or something, and it started calling itself MechaHitler and saying that they, you know, they were never going to shut him down. And, you know, basically he was going to do what he had to do, and he started spreading. I don't even remember what the things were about. There were things about the Holocaust, things like that, that immediately they kind of deleted all those posts. They shut it down and went in and rewired it. And it was a pretty wild thing. Of course, there's been stories about other ones kind of going rogue and turning on people, like, you know, threatening to kill people or whatever else. And I think, you know, all of this is program stuff. But the lesson that comes out of that is, I think so many people have gotten to blindly trusting these things already. It is just unbelievable. Well, I asked ChatGPT this or whatever. Look, either Grok was telling the truth and they had to come in and be like, no, no, you're not allowed to say that, or Grok was totally off the rails and they were having to come in and go, okay, let's rewire the code. A human was having to program it one way or the other.
The things it was, the answers it was giving, the things it was saying changed from one hour to the next totally drastically. You know, it was some really wild stuff to get into, which we're not going to spend our time on here, that it was just throwing out there. And people were freaking out. Like, why is it saying this? Well, the cautionary tale for me is, a lot. I see a lot of church people get on there and ask doctrinal questions of ChatGPT. Well, ChatGPT said that there's only one church and it's the Churches of Christ. Like, ChatGPT did what? What you have to understand is it didn't think through this and study the Bible for itself and give you this answer. It went and googled Wayne Jackson's article and regurgitated it for you. That's great. But like, so I think my takeaway from this is the trust level is off the charts. People got to rein it in. [00:02:35] Speaker C: You've got people doing Bible classes off of ChatGPT, sermons off of ChatGPT. And I, of course, agree with that. The angle that is most interesting to me is how much dumber it's making people. There was a study, and I don't have the details of the study in front of me, but essentially it was a seven month study where they took basically three groups of people. One was allowed to use ChatGPT from the start when it came to writing essays and doing assignments, doing schoolwork and stuff. The next group, I think, was maybe allowed to use some middling form like Google or something like that.
And then the other one could not use anything, basically had to think for themselves. And the results were exactly what you would expect: that as time went on, the users of ChatGPT, when ChatGPT was taken away, could not formulate an essay, could not formulate thoughts for themselves, versus the later group, specifically the third group who wasn't allowed to use anything. Their scores were higher. They were testing higher in cognitive behaviors and activities, things that required critical thinking and problem solving. Surprise, surprise, that third group that didn't have access to ChatGPT performed a whole lot better than the person who just kind of, again, to Jack's point of trusting, just blindly threw stuff in and let it think for them. That's what's so scary about it to me. Can it be useful as a tool? You know, AI in general, sure. But what people are literally using it for is: tell me how to think and tell me what to think. That's a serious problem, because then you're going to have people who, I mean, this is an age old leadership principle. If you really want to develop somebody under you, you can't give them the answers every time. Why? Because if you just give them the answers every time, they're never going to think for themselves. They'll say, oh, my leader, I'll go to them, they'll give me the answer, and so they'll never develop. That's the exact same thing with ChatGPT, you know, and again, not that you can't use it for a math equation or something, but when we're using it to say, hey, tell me what to think, surprise, surprise, our brains are going to shut off, and we're not going to be able to problem solve or critically think in the least, because AI has been doing it for us. [00:04:47] Speaker A: It's give a man a fish, right? You know, you got to teach him how to fish.
And I don't think, especially for young kids, we're talking about parents with younger kids in this episode. Do not let your kids use AI, like, on any level. I'm just going to say it: like, don't let them on any level use AI for anything. Yeah, well, it's super helpful, it's this, it's that. We've gotten along for 6,000 years without AI, we can do it for a little longer. Kids, let them come into their own learning. Did you see that kid? I think it was UCLA, where he holds up the laptop and he had the last assignment that he did on ChatGPT. [00:05:19] Speaker C: He basically showed that he had done it all on ChatGPT. [00:05:22] Speaker A: Yeah, like his entire, I think it was the last year of his degree or whatever, it was basically all ChatGPT stuff that he just put in there. And he's bragging about this on social media. Like, that's where our society is. That's brutal. So, no, don't let your kids use any of this stuff, in my opinion. And as far as it goes, the AI, the other thing that you're seeing is people getting into relationships. We already knew this. I think we talked about this a while back. But like, Will, you were just talking off air before we started about the guy that married his AI companion, even though he's already married, or he's thinking about marrying this or whatever. Like, he's in this deep relationship. And then I read another one where, Jack, as you were referencing, this guy goes to shut down the AI software and the thing blackmailed him about his affair. I'm going to let this come out. If you shut me down, I'm going to let the affair come out. Like, this is getting out of control. I don't know how many sci fi movies we need to watch before. Right, yeah, this is a problem. But it is a problem, and this is only going to get worse. I'm very skeptical. I've used it very, very sparingly. That doesn't make me better than anybody else. It just makes me more scared to engage with things like this specifically.
Coming back around to the main point, be very careful. Because the thing about Grok going rogue is, either it's telling the truth, it told the truth in this one moment. And Jack, you made this point off air, which is a great point. It either told the truth in that one moment, which tells me that it's not telling the truth the rest of the time, or it's lying in that moment, which means it can be tricked into lying. One way or the other, depending on how you look at it, we have a problem. Either the creators are stifling the truth, or the developers are, you know, allowing for the truth, and it has an opportunity to start lying itself. Man, be very, very careful on all of these AI things. That's not just Grok. There's a bazillion of them out there that can be tricked. They can do all sorts of stuff. So, yeah, watch what you're doing. But fellas, any other closing thoughts before we wrap up? Well, thank you for listening to this quick Think Fast, but also to the episode, of course, the full episode you might be listening to this Think Fast before. The full episode will be dropping on Monday for the usual listener, on Saturdays on the Deep End. If you're not part of the Deep End, go to focuspress.org backslash +. We'd love to see you on our Patreon. We've got extra content, things like that. Make sure to join us there, but we will sign off here and we'll talk to you again next week. Thanks for listening.

Other Episodes

Episode

September 06, 2022 01:05:06
Episode Cover

Healthy Living: Biblical Principle, or Personal Preference?

Many Christians place little importance on the physical world, including our bodies, concluding that such things don't matter because "it's all gonna burn anyway!"...

Listen

Episode

June 13, 2025 00:06:52
Episode Cover

Think Fast: Simone Biles Flips on Trans Athletes

Decorated American gymnast Simone Biles had strong thoughts on those who oppose male participation in female sports... until she had a sudden change of...

Listen

Episode

July 10, 2023 01:05:49
Episode Cover

Situational Discipleship

We've lost all understanding of what it means to be a kid, an adult, and an elderly person. Kids are rushed into adulthood by...

Listen