How does Congress spot AI fakes? The same way you do
Even those with expertise in artificial intelligence struggle to parse what’s real and what’s not
House Speaker Mike Johnson checks his phone in the U.S. Capitol. Associated Press / Photo by J. Scott Applewhite

It’s growing difficult to determine what on the internet is news, what’s artificial intelligence, and what’s a bit of both, even for members of Congress.
Most recently, a fake letter of resignation from Federal Reserve Chairman Jerome Powell carrying an AI-generated government seal tricked at least one lawmaker’s office. News outlets were already speculating that President Donald Trump might break precedent and remove a sitting chairman of the Fed for not lowering interest rates. The image made it seem like that’s exactly what had happened.
“Powell’s out!” Sen. Mike Lee, R-Utah, said in a caption as he reposted the letter on X. He deleted it a few minutes later when it became apparent Powell wasn’t going anywhere.
The letter is a relatively benign example. From Iran-sponsored imagery of bombed buildings in Israel to political satire, AI-generated images are increasingly convincing and can fool anyone, including legislators. Experts say lawmakers should institute safeguards to ensure they only share real material—and protect their reputations in the eyes of their electorate.
“The problems of misinformation and disinformation haven’t changed,” said Jason Davis, a research professor at Syracuse University. “What has changed is the speed and the scale—particularly when you couple it with social media.”
Lawmakers must not only parse what’s real and what’s not; they must also beware of dismissing true information out of skepticism toward AI.
“A lot of people [are] using ‘fake AI’ as a crutch for things that don’t align with their worldview,” Davis said. “It’s an easy cop-out for me to now be like, ‘Yeah, but you know that that’s fake, right? I just don’t believe it. It’s AI.’”
Asked what he does to defend against fake material online, Rep. Michael Cloud, R-Texas, said step one is maintaining a high degree of skepticism. “Make sure you have multiple sources all the time,” he said. “I think people are going to get more savvy to be able to interpret to know what’s real and what’s not.”
Cloud is a member of the House Task Force on Artificial Intelligence. The process he uses to validate online information is no different from what’s available to his constituents.
Rep. Ami Bera, D-Calif., another member of the task force, gave a similar answer.
“You could go to trusted sources,” Bera said. “You could also say, ‘OK, let’s look for second sources, third sources, let’s look for the original source.’ It’s something we have to think of as an institution.”
Davis, the Syracuse research professor, said the country’s leaders should have some additional protections in place. He has helped develop AI detection tools through a partnership between Syracuse and the Department of Defense and believes similar programs can help lawmakers combat fake imagery. The technology uses AI to spot, well, AI.
“Some of those AI-driven detection tools are the best defense in terms of image detection,” Davis said. “We actually have quite a few that are really good—better than 90% and way better than human eyeballs.”
He explained that those detection tools don’t just evaluate an image at face value; they extract data from the video or picture and compare it against patterns unique to the way each AI platform generates material. Trained on those patterns, the tools can determine with a high degree of probability whether content is AI-generated and which platform produced it.
“[Lawmakers] should be seeking those kinds of tools as a support mechanism,” Davis said.
Davis noted that the tools aren’t perfect and are in a constant race to catch up with new AI models. He also suggested that legislators could require AI companies to monitor their own content.
Matthew Jordan, a Pennsylvania State University associate professor who studies the role of media in culture, believes the public is beginning to catch on to some of the more sensational content online, particularly when it comes to politics. He believes consumers will be less likely to tolerate the propagation of AI material posing as reality—especially when it’s coming from lawmakers.
“Generally, I think people are growing convinced that a lot of what they encounter online is AI-generated. I think the populace is not dumb. That’s going to come out in the wash eventually,” Jordan said. “Lawmakers that are aware of this huge kind of cultural shift will get out in front of this.”
For now, it’s still up to lawmakers and their individual offices to confirm the content they repost online is real. Asked what Congress can do to increase the public’s confidence that members won’t be tricked, Rep. Don Beyer, D-Va., said there’s some low-hanging fruit, like the recommendations the AI task force included in its 2024 report.
“For example, in political ads we either prohibited or required that they be labeled,” Beyer said, referring to the task force’s proposals. A similar requirement would apply to general advertising.
“It’s a brave new world out there, especially when it comes to ‘what can you trust?’” Beyer added.
