Pictures often have a wax- or plastic-like look in some areas, especially faces. Objects in videos are usually not consistent and change their shapes from frame to frame, or at least from scene to scene. The last Coca-Cola Christmas ad is a good example: the trucks had different numbers and configurations of tires in every shot.
Problems are often hidden by lowering the quality of the picture and making it more grainy, so a good rule of thumb is to be suspicious of every recent but low-quality picture or video. Surveillance footage nowadays doesn't look like it did in the 1990s.
The tire thing is becoming the only type of error I can reliably catch. Look into the details to check for consistency in patterns: radial patterns such as spokes in a wheel, symmetry across left and right features. If there are multiples of an object, look for differences between them. Generative models keep copying source material, but they don't understand the rules. Yes, there are times when things aren't symmetrical or two items are slightly different models/versions, but there's a point where, if two things are very, very similar, there probably won't be a weird difference.
Plus, not only are models getting better at generating images, but users are wising up as well and are using real pictures as the base. Generators will still tweak details into mild fantasy, but it's drastically reduced. In the above blondie onlyfans military pics, I'm not seeing any vehicle detail issues. At least, none with the nose of a HMMWV, and I'm not familiar with the Toyota Land Cruiser. But then the question is, why is "she" in a Land Cruiser? I'm not aware of any regular US service using Toyotas, but there's one that played a role in the Kabul airport… Taking? Evacuation? Even if the original post mentioned that honored truck, it's missing the door graphic that seems to be original. Even then, the one pictured is pristine.
And this is why AI wins. It's the Gish gallop of imagery: it takes so much effort to examine, check, and report errors that it all spreads like wildfire before the first rebuttal gets posted.
In the pictures above, the lighting really is the obvious thing, but AI can usually do lighting better than this. It still won't look quite right, though. And that's where my point applies again: the pictures are very small, especially the first one. You can't make out many of the details that are telling of AI.
However, look at the hairlines of the two dark-haired women. Not only are they identical, they also don't make sense.
In the second picture, it's physically impossible to sit in a car like that. Her back is way too straight, and her legs must be really short. The car interior looks odd to me too, the way the door lines up with the dashboard.
So I'd say AI is getting better at making pictures that leave a good general impression, but the odd details, messy lighting, and "waxy" look are still there.
The biggest giveaway is shots being low resolution or low quality in contexts where it doesn't make sense. Even the shittiest phone cameras capture a LOT of information. If you are looking at something low-res for no particular reason, then the reason might be an AI hiding artifacts in the noise.
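One rough way to put a number on that "too low quality for no reason" feeling is to compare file size against pixel count. This is just a heuristic sketch, not a detector: the function names and the ~1 bit-per-pixel rule of thumb are my own assumptions, and real files vary a lot by format and content.

```python
import struct

def png_dimensions(data: bytes) -> tuple[int, int]:
    # PNG layout: 8-byte signature, then the IHDR chunk
    # (4-byte length, 4-byte "IHDR" type); width and height are
    # big-endian uint32 values at byte offsets 16 and 20.
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])

def bits_per_pixel(file_size: int, width: int, height: int) -> float:
    # Rough density of stored information. A straight-off-the-phone
    # photo usually lands well above ~1 bit per pixel; an image that
    # has been aggressively recompressed (or deliberately degraded to
    # bury artifacts) sits noticeably lower.
    return file_size * 8 / (width * height)
```

For a JPEG you'd parse the SOF marker instead, but the idea is the same: a "recent" photo storing a fraction of a bit per pixel deserves a second look.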
Another thing you see a lot is inconsistency throughout the frame. Zoom in and shit starts to get pretty obvious when you start looking in the corners and shit.
Another thing to look for is weird depth of field: things moving in and out of focus regardless of distance from the "camera". AI is smart enough to know that real cameras have depth of field, but doesn't seem to understand what it actually is. So you will have a foreground subject in focus, something behind them out of focus, but then the background is somehow in focus again. I think this is actually where a lot of the "it just looks fake for some reason" appearance comes in. We know what photos look like even if we don't fully understand why a particular photo looks weird.
This is, of course, after you have scanned for shit that doesn't make sense, like too many fingers, weird random background stuff, and physics-breaking things.
Lots of articles out there about it, but generally you're looking at background details at this stage. Text that's wiggly, pictures that are blobs, things around the periphery of individual scenes are usually where you catch stuff. Models are still really bad at perspective and the ordering of objects.
AI scenes always appear perfectly lit with no sunlight shadows.


For me the dead giveaway is where exactly her ass is sitting in the car. It looks like her lower back is either between the seats or integrated directly with a centre console, and her ass is firmly planted on top of the seatbelt fastener.
This to me looks like a real picture of the interior of a vehicle with an AI “person” grafted on top of it.
No, the car is AI too. Look how the dashboard lines up with the door. Usually you can’t see the side of the dashboard if the door is closed. It would make no sense to build a car like that.
Wow, these are very good.
The first has eyebrows that are too identical, but the second is mostly suspect because of a flawlessness that could just as well be expected from retouched influencer photos.
The two dark-haired women also have identical hairlines with an impossible spike in them. The woman in the second picture sits in an impossible position, and the car door doesn't line up with the actual interior of the car. It also looks like the dashboard is melting behind the steering wheel.
It does seem a little odd to see an American soldier in a car with right-hand steering.
The air vents in that car are supposed to have handles for you to adjust the direction. They’re at different angles so they’ve clearly been rotated by the passengers, but there’s no handle to rotate them.
Some cars don’t have the handle. You just push the flap to open and twist to change direction. Some Fords are like this.
If the money is for nothing, and the chicks are free, it’s AI.
Look at the source. If it’s a reliable source, odds are pretty good it’s not AI generated. If it’s a sketchy source, don’t take it as real. All of the tips other people have given to spot AI generated content can help, but as models improve it’ll get harder to spot, and we’ll eventually have to rely on only trusting media from reliable sources.
Like a dog shooting up a house with a gun in its mouth type of fake?
Are you asking yourself a question?
Yes, I am. I woke up this morning, made myself a cup of coffee, took a good shit, and made myself breakfast. Then I thought it would be a really good idea to turn on my laptop, come to lemmy.world, go to this exact community, and ask myself a question.
You chose wisely.
Don Dickle, the wise