One of the less surprising things to come out of Hurricane Helene is the abuse of generative AI. Creating as much misinformation as possible around a tragedy seems to be a thing these days, and this disaster is no exception.

Sad Girl In A Boat
At least two troubling images have shown up on Facebook. I was dubious about the image of the little girl and dog as soon as I saw it. The generator got the finger count right, but as is common with images of people, the rest of the person was wonky. The child’s skin looks heavily filtered. That doesn’t guarantee that it’s generative AI, but it’s a good indicator. It just doesn’t look natural. Neither does the dog’s fur.
I’ve read in the past that generative AI has trouble getting lighting right, and this image shows that pretty well. If this were an image of a real rescue, the lighting would clearly be coming from one angle. Instead, this has an almost portrait-like quality to it that you just couldn’t get on a river during a hurricane-induced rainstorm.
I note too that it had been pulled by the original poster—or Facebook—fairly quickly.
Here’s more information on the images at VerifyThis.
“Did You See What Trump Did?”
The second image is much more troubling. It purports to show former President Trump wading through knee-deep floodwaters with another man, possibly a local official.
There are so many problems with that image. Even a brief critical examination should give away the fact that it’s the product of generative AI.
- Does anyone really believe that his U.S. Secret Service (USSS) detail would allow him, or any other person at his level, to wade through floodwaters like he’s strolling the beach at Mar-a-Lago? Check out these images of firefighters performing actual water rescues. First responders wear far more than just a life jacket: a helmet, fall protection, and some sort of waterproof outer gear. Mr. Trump is wearing none of that.
- Floodwaters, especially knee-deep waters, move fast, and move strong. It’s easy to underestimate the power of that much water moving at just a few miles an hour. Plus, when the water is that deep and that dirty, you don’t know where the ground is or how level it is. You can’t see the terrain. That’s why the pros wear fall protection gear with lines attached. That way a fellow rescuer can pull them to safety if they find a drop-off the hard way.
- None of his USSS detail is visible. I get that professional press photographers are really good at cropping unwanted people from an image, but there should be at least a couple of other people visible in the background or in the wings. I just refuse to believe that at this point in his campaign, after two assassination attempts, his protective detail would be so far away from him that they were easily cropped out of an image.
- Both men are wearing jeans and wading through knee-deep water. Yet their jeans don’t look wet at all. There are no water spots anywhere on their pants.
- The writing on the yellow hat is blurred to the point of being unreadable, which is a good indicator of generative AI. A real photo would show that writing clearly, even after compression by Facebook’s algorithms. Ditto the patches on the life jackets.
- The former president would almost certainly be wearing a ball cap, be it a red MAGA hat or a blue hat indicating his status as former president.
- The former president’s hands don’t have enough fingers.
- The lighting is off, as I mentioned about the other photo. Even on an overcast day, there’s a lighting hotspot from the sun, and neither image shows that.
- And lastly, neither one of these images appears on TinEye. If either were real, it would be all over the internet in various crops and sizes. The AP and Reuters would almost certainly have it up, as would at least a few local media outlets. Instead? Crickets.
The big problem with the Trump image—in my mind, anyway—is that his supporters will assume that it’s a real image, demonstrating his leadership skills and showing how wonderful he is. “It’s obvious how much he cares for the people. You’d never see Harris or Walz doing something like this.” It’ll be used to further the division that’s already tearing the country apart, and his supporters won’t consider that the image is fake. Or worse, they won’t care, because “it looks like something he’d do.”
Here’s the VerifyThis explanation of the Trump image.
Why Does It Matter?
There are a couple of problems with generative AI.
The first is that all generative AI engines have to be trained on pre-existing data before they can start “creating” stuff. In almost every case I’m aware of, the companies behind generative AI engines trained their systems on data (images) they scraped from the web without permission from the copyright holders.
Some might ask, “How is that different from someone seeing an image or post and being inspired to create something similar?”
I read “To Build a Fire” years and years ago, probably in elementary school, or maybe junior high (that’s middle school to you young’uns). If I were to write a similar story, I would certainly be influenced by Jack London’s original. But I’d be able to recognize that influence and edit out any overt references to make the story my own. Generative AI doesn’t have that ability. It might well pull complete phrases from the story, essentially plagiarizing it. Nor does it have any life experience of its own to season the story with.
The second big issue is how it’s used. It’s getting easier and easier to create misleading material, and once this false information is out there, it’s next to impossible to shut it down. As Jonathan Swift wrote in 1710:
Few lies carry the inventor’s mark, and the most prostitute enemy to truth, may spread a thousand without being known for the author: besides, as the vilest writer has his readers, so the greatest liar has his believers: and it often happens, that if a lie be believed only for an hour, it has done its work, and there is no farther occasion for it. Falsehood flies, and truth comes limping after it; so that when men come to be undeceived, it is too late; the jest is over, and the tale has had its effect: like a man, who has thought of a good repartee, when the discourse is changed, or the company parted; or like a physician, who has found out an infallible medicine, after the patient is dead.
Paraphrased, “A lie is halfway round the world before the truth has got its boots on.” In this day of rampant and often deliberate misinformation, we must redouble our efforts to push back against both obvious and subtle lies.
Finding Generative AI
Here are a few guides to help you learn to tell generative AI images from real ones.
- CreativeBloq: How to spot AI images: don’t be fooled by the fakes
- NPR: AI-generated images are everywhere. Here’s how to spot them
- Tech.Co: 9 Simple Ways to Detect AI Images (With Examples) in 2024
Jake Tapper at CNN has a very enlightening piece about the risks of deepfake images and videos created with generative AI.
Beyond those guides, here are some suggestions of my own.
- Think objectively about the situation portrayed in the image. Would a former president go wading through floodwaters?
- Look for other uses of the image. Go beyond Facebook. Take the time to check TinEye or Google Image Search. You don’t have to be first to share something, and it’s not a bad thing to take five minutes to verify an image. RevEye is a browser extension for the big three browsers that lets you reverse-search an image across the major search engines, and SightEngine is another tool for checking images. (If you like to tinker, there’s a small script sketch after this list that automates part of that lookup.)
- Even if you don’t search for the actual image, hit your favorite search engine and search competently. If the image shows an especially newsworthy person, a real photo will turn up on a wire service like AP or Reuters. If you don’t find it, ask yourself why not. The conspiratorial answer is that the MSM is suppressing the image for some reason. The objective answer could be that it’s a generative AI image.
- If you find you got snookered by a generative AI image, take it down! Or at least edit the image to show that you know it’s AI. Your edit doesn’t have to be elegant or professional. Download it to your phone and use the phone’s photo editor. Upload it to Canva from your desktop. Toss “AI” text on it in a couple of places, then put it back in your post. Then edit your original post to show that the image was fake, so if someone shared your post, it’s automatically updated on their share. There’s no shame in admitting you were wrong about something. I think that’s one of the biggest problems we have today as Americans. We don’t like to admit we made a mistake. We’re scared of what people might think of us for being wrong, never stopping to worry about what people think of us for sharing a fake image in the first place. I have tons more respect for someone who admits an error than for someone who holds fast to their statements even when proven wrong.
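If you’re comfortable with a little scripting, you can take some of the friction out of that reverse-search step. Below is a minimal Python sketch that opens a suspect image in a couple of reverse-image-search services at once. The image URL is a made-up placeholder, and the query-string formats for TinEye and Google Lens are my assumptions about how those pages accept an image URL, not documented APIs, so double-check them before leaning on this.

```python
# Minimal sketch: open a suspect image in several reverse-image-search
# services at once. The search URL formats below are assumptions, not
# documented APIs -- verify them before relying on this.
import webbrowser
from urllib.parse import quote_plus

# Hypothetical placeholder -- replace with the image you want to check.
IMAGE_URL = "https://example.com/suspicious-flood-photo.jpg"

# Assumed reverse-search entry points, keyed by service name.
SEARCH_PAGES = {
    "TinEye": "https://tineye.com/search?url={img}",
    "Google Lens": "https://lens.google.com/uploadbyurl?url={img}",
}

def open_reverse_searches(image_url: str) -> None:
    """Open each reverse-image-search page for the given image URL."""
    encoded = quote_plus(image_url)
    for name, template in SEARCH_PAGES.items():
        page = template.format(img=encoded)
        print(f"Opening {name}: {page}")
        webbrowser.open_new_tab(page)

if __name__ == "__main__":
    open_reverse_searches(IMAGE_URL)
```

If none of those searches turn up the image on a wire service or a reputable outlet, that’s the “crickets” I mentioned above, and it’s a strong hint you’re looking at a fake.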
Disclaimer
And as a reminder, I do occasionally use generative AI, usually ChatGPT, to help me create post titles and social media posts. I do not knowingly use or allow generative AI images here other than a single image I posted when I was experimenting with SocialBu. Bloggers, if you need images for your posts, go to a site like Pixabay and be sure to filter out AI-generated images.
Writing
I happily note that I have made my writing goal five out of the last six weeks. Even better: the one week I missed (last week), I missed by only 37 words.
I set a goal each year of about 250,000 words between blogging and fiction. That works out to about 4,800 words per week. Since Week 35 of this year, which roughly corresponds to my writing surge that involved the grandkids, I’ve averaged 5,750 words per week. Before that, I averaged 1,031 words per week. I’m thrilled with the improvement, naturally.
If I do a blog post each week of about 1,100 words and reach my annual goal, that would leave me with almost 200,000 fiction words in a year. That’s roughly two completed novels plus either a batch of short stories or a good novella.
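For anyone who likes to see the arithmetic spelled out, here’s a quick back-of-the-envelope check, assuming a 52-week year and a roughly 1,100-word weekly post:

```python
# Back-of-the-envelope check on the writing-goal math (52-week year assumed).
ANNUAL_GOAL = 250_000          # total words: blogging plus fiction
BLOG_WORDS_PER_WEEK = 1_100    # rough length of one weekly blog post

weekly_target = ANNUAL_GOAL / 52
fiction_words = ANNUAL_GOAL - BLOG_WORDS_PER_WEEK * 52

print(f"Weekly target: {weekly_target:,.0f} words")    # about 4,800
print(f"Fiction words in a year: {fiction_words:,}")   # 192,800, i.e. "almost 200,000"
```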
I obviously haven’t achieved that goal yet. But it’s good to have a goal, right?
Ghost continues apace. I’m just under 77,000 words and hitting Chapter 27. Things are starting to happen for and to Keith. The FBI finally gets involved.
Thanks for reading! Feel free to share a thought in the comments. Sign up for my infrequent newsletter here. Find some of my other writing at The Good Men Project, too. Subscribe to the blog via the link in the right sidebar or follow it on Mastodon. You can also add my RSS feed to your favorite reader.
Share your thoughts!