Pictures often have a wax- or plastic-like look in some areas, especially faces. Objects in videos are usually not consistent and change their shapes from frame to frame, or at least from scene to scene. The recent Coca-Cola Christmas ad is a good example: the trucks had a different number and configuration of tires in every shot.
Problems are often hidden by lowering the quality of the picture and making it grainier. So a good rule of thumb is to be suspicious of every recent but low-quality picture or video. Surveillance footage nowadays doesn't look like it did in the 1990s.
The tire thing is becoming the only type of error I can reliably catch. Look into the details to check for consistency in patterns: radial patterns such as the spokes of a wheel, and symmetry across left and right features. If there are multiples of an object, look for differences between them. Generators keep copying source material, but they don't understand the rules. Yes, there are times when things aren't symmetrical, or when two items are slightly different models/versions, but there's a point where if two things are very, very similar, there probably won't be a weird difference between them.
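The symmetry check described above can even be roughed out in code. This is just a toy sketch, not a real detector: it assumes you've already cropped a supposedly symmetric object (a wheel, a face, a grille) into a small grayscale patch, here represented as a plain 2D list of pixel values. A real image would need to be loaded and aligned first.

```python
def asymmetry_score(patch):
    """Mean absolute difference between the left half of a 2D
    grayscale patch and its mirrored right half.
    0.0 means the patch is perfectly left/right symmetric;
    larger values mean the two halves disagree more."""
    width = len(patch[0])
    half = width // 2  # middle column (if any) is ignored
    total = 0
    count = 0
    for row in patch:
        for x in range(half):
            total += abs(row[x] - row[width - 1 - x])
            count += 1
    return total / count

# A perfectly mirrored "wheel" patch scores 0.0 ...
symmetric = [[1, 2, 3, 3, 2, 1],
             [4, 5, 6, 6, 5, 4]]
# ... while a patch with one oddball "spoke" on the right does not.
lopsided = [[1, 2, 3, 3, 9, 1],
            [4, 5, 6, 6, 5, 4]]

print(asymmetry_score(symmetric))  # 0.0
print(asymmetry_score(lopsided))   # > 0, the mismatch shows up
```

Of course a score alone proves nothing, since real objects are rarely pixel-perfect mirrors either; the point is just that "almost identical, with one inexplicable difference" is something you can quantify, not only eyeball.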
Plus, not only are models getting better at generating images, but users are wising up as well and are using real pictures as the base. Generators will still tweak details into mild fantasy, but it's drastically reduced. In the above blondie onlyfans military pics, I'm not seeing any vehicle detail issues. At least, none with the nose of the HMMWV, and I'm not familiar with the Toyota Land Cruiser. But then the question is, why is "she" in a Land Cruiser? I'm not aware of any regular US service using Toyotas, but there's one that played a role in the Kabul airport… Taking? Evacuation? Even if the original post mentioned that honored truck, it's missing the door graphic that seems to be original. Even so, the one pictured is pristine.
And this is why AI wins. It's the Gish gallop of imagery. It takes so much effort to examine, check, and report errors that it all spreads like wildfire before the first rebuttal gets posted.
In the pictures above, the lighting really is the obvious thing, but AI can usually do better than that. Even then it still won't look quite right.
But that's where my point holds again: the pictures are very small, especially the first one. You can't make out many of the details that give AI away.
However, look at the hairlines of the two dark-haired women. Not only are they identical, they also don't make sense.
In the second picture, it's physically impossible to sit in a car like that. Her back is way too straight, and her legs must be really short. The car interior looks odd to me too, the way the door lines up with the dashboard.
So I'd say AI is getting better at making pictures that give a good general impression, but the odd details, the messy lighting, and the "waxy" look are still there.