Wednesday, February 21, 2024

How to Spot Images Created by AI

It’s becoming more and more difficult to believe what you see. Here’s how to determine whether what you’re viewing was made by a person or an AI. And test your skills at identifying AI-generated content by taking our quiz!

By Chandra Steele

When even the pope appears to be donning a new puffer jacket, you can no longer believe your own eyes. AI visuals have progressed quickly from absurdly strange to disturbingly credible, and there are serious repercussions if you can't distinguish genuinely captured photos from those produced by artificial intelligence.

AI tools for creating images—ones you can imagine but might never realize through conventional media like photography and painting—are readily available, either free or inexpensive. Images made by OpenAI's DALL-E 2, Stable Diffusion, Midjourney, and Craiyon can be deceiving at first glance, and perhaps even on the second and third.

Test Your AI-Detection Skills With Our Quiz (Available at PCMag.com)

These text-to-image generators can produce deepfake porn and political propaganda in a matter of seconds, but the harm they cause is long-lasting, and AI is becoming increasingly adept at avoiding detection. The industry is working on watermarking and other techniques to identify AI-generated images, though these marks won't be apparent to the human eye. But there are steps you can take to assess images and improve your chances of not being duped by a robot.

The techniques described here are not perfect, but they will hone your intuition for seeing AI in action.

Flip It and Reverse It

If the image is newsworthy, conduct a reverse image search to see where it came from. Even (make that especially) when a photo is trending on social media, that does not mean it is authentic. If an image seems groundbreaking but you can't find it on a reputable news site, the likelihood that it was fabricated goes up.

Consider the 2001 Great Cascadia earthquake. It never happened, but as pictures of the made-up event began to circulate on the Midjourney subreddit, it started to seem real to people who never looked into it further. The series' depiction of destruction in the Pacific Northwest—collapsed roads, rescue crews, and terrified people on their phones—provides a coherent story that is easily accepted.

Even once a reverse image search reveals the truth, further investigation is still required. When we ran a "photo" with the caption "Amateur video captures the aftermath of the tsunami in Kodiak, AK" through Google's image search, the results included an article from NBC News with the headline "Floods Ravage Western Canada" and a fragment of a recognizable image. The event appears to be real at first view; however, a click reveals that Midjourney created it.
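Reverse image search engines work by comparing compact fingerprints of images rather than raw pixels. The sketch below is a toy illustration of one such fingerprint, the average hash (aHash); real search engines use far more sophisticated descriptors, and representing an image as a small 2D list of grayscale values is a simplifying assumption.

```python
# Toy average-hash (aHash) sketch. An "image" here is a 2D list of
# grayscale values -- a simplifying assumption for illustration.

def average_hash(pixels):
    """Return a bit per pixel: 1 where the pixel >= the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p >= mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [[10, 200], [220, 30]]
near_copy = [[12, 198], [225, 28]]   # lightly recompressed version
unrelated = [[200, 10], [30, 220]]   # different image entirely

h0, h1, h2 = map(average_hash, (original, near_copy, unrelated))
print(hamming(h0, h1))  # small distance: likely the same source image
print(hamming(h0, h2))  # large distance: a different image
```

Because the hash survives small edits like recompression and resizing, a near-copy lands at a tiny Hamming distance from the original, which is how a search engine can recognize a recycled photo under a new caption.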

Hunt for Artifacts

Because artificial intelligence constructs its works from the original contributions of others, discrepancies can surface up close. Zoom in as far as you can on each area of an image as you look for indications of AI. That makes it easier to spot stray pixels, strange edges, and misplaced shapes.
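Zooming helps because magnification turns each source pixel into a visible block, so pixel-level oddities can't hide. A minimal sketch of what an image viewer does when you zoom far in is nearest-neighbor upscaling; as above, the 2D-list "image" is an illustrative assumption.

```python
def zoom(pixels, factor):
    """Nearest-neighbor upscale of a 2D grid of pixel values.
    Each source pixel becomes a factor-by-factor block, which is
    roughly what an image viewer shows when you zoom far in."""
    out = []
    for row in pixels:
        stretched = [p for p in row for _ in range(factor)]
        out.extend(list(stretched) for _ in range(factor))
    return out

# A 2x2 patch blown up to 4x4: each pixel is now four times the area,
# so a single stray pixel becomes an obvious block.
patch = [[0, 255],
         [255, 0]]
for row in zoom(patch, 2):
    print(row)
```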

When you look at this DALL-E 2 image of a crowded museum, you can see that the throng is having a busy weekend day of culture.

(Credit: PCMag / DALL-E 2)

But get closer to that crowd and you can see that each individual person is a pastiche of parts of people the AI was trained on.

(Credit: PCMag / DALL-E 2)

Smooth Operator

In general, AI struggles to render pores and other small imperfections. If something in an image appears too flawless to be true, chances are it isn't real. It may be difficult to tell in a filtered online environment, but this selfie of a fashion influencer generated by Stable Diffusion gives itself away with skin that puts Facetune to shame.

(Credit: PCMag / Stable Diffusion)

Even Khloe Kardashian, who may receive more flak than anybody else for turning those settings all the way to the right, presents a lot more authentic human emotion on Instagram. Contrary to the fake selfie above, her expertly contoured and highlighted face has light and depth, and the skin on her neck and body has some texture and color variation.
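"Too flawless" can be made concrete: real skin keeps some texture, which shows up as pixel-to-pixel variation, while airbrushed or AI-generated skin is often near-uniform. The sketch below flags a patch whose local variance is suspiciously low everywhere; the threshold and the 2D-list image representation are illustrative assumptions, not calibrated values.

```python
from statistics import pvariance

def local_variance(pixels, r, c, size=3):
    """Population variance of the size-by-size window centered at (r, c)."""
    half = size // 2
    window = [pixels[i][j]
              for i in range(r - half, r + half + 1)
              for j in range(c - half, c + half + 1)]
    return pvariance(window)

def looks_airbrushed(pixels, threshold=2.0):
    """True if every interior pixel's neighborhood is nearly uniform.
    The threshold is an arbitrary illustration, not a calibrated value."""
    rows, cols = len(pixels), len(pixels[0])
    return all(local_variance(pixels, r, c) < threshold
               for r in range(1, rows - 1)
               for c in range(1, cols - 1))

smooth = [[128] * 5 for _ in range(5)]            # perfectly uniform "skin"
textured = [[(r * 37 + c * 91) % 40 + 100 for c in range(5)]
            for r in range(5)]                    # patch with visible texture
print(looks_airbrushed(smooth))    # True
print(looks_airbrushed(textured))  # False
```

The same light-and-depth cues the eye picks up on in a genuine photo are, numerically, just this kind of variation.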

Match Game

Closer inspection of an image may also uncover irregularities, such as earrings that don't match or more limbs than usual. Images of a house party that programmer Miles Zimmerman staged with Midjourney went viral. They have a sinister, menacing aspect more akin to John Currin paintings than to reality. But what's even more unsettling is how many teeth and digits the women in the pictures have.

One warning: Midjourney has lately improved significantly at portraying human digits, so determining AI may require more effort than simply counting to ten whenever you see a pair of hands. Consider small aspects like sunlight and reflections that you might otherwise ignore.

Background Details

AI frequently concentrates on generating the foreground of an image, leaving the backdrop hazy or indistinct. Look over that hazy landscape for recognizable topographical features, or for the outlines of signs that don't appear to carry any writing. Catching AI off guard this way is more art than science.

(Credit: PCMag / DALL-E 2)

Using DALL-E 2, we produced the picture of a busy street above. You can spot one significant clue in the foreground: the word "TAXI" painted on the pavement in irregularly spaced letters. There are other hints in the backdrop as well. The figure of the person walking on the left is blurred out, along with all the license plates.

Tool Time

Going by the adage "it takes one to know one," AI-driven tools for identifying AI would seem to be the way to go. There are plenty of them, though they frequently fail to recognize one another's output. Still, they can give you some useful information.

AI or Not

You can drag and drop, upload, or place the URL of a suspicious image into AI or Not to receive its verdict. We uploaded a “photo” of a palomino in a field that was created with Stable Diffusion and got a definitive answer: AI.

(Credit: PCMag / AI or Not)

Hive AI-Generated Content Detection

You can upload or drag and drop photographs into the AI detector from Hive Moderation, a business that offers AI-powered content-moderation solutions. It rated the likelihood that the palomino image was created by AI at 93.8%.

(Credit: PCMag / Hive Moderation)

Hugging Face AI Detector

You can upload or drag and drop suspicious photographs into Hugging Face's AI Detector. We used the identical fake-looking "photo," and the verdict was 90% real and 10% artificial.

(Credit: PCMag / Hugging Face)
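The three detectors disagree sharply on the same palomino image, which is a reason to consult more than one. A naive way to combine their verdicts is to average the "probability of AI" each reports; treating AI or Not's flat "AI" verdict as 1.0 is an assumption, and a plain mean is a toy ensemble (real moderation pipelines weight detectors by measured accuracy).

```python
def combined_ai_probability(scores):
    """Average several detectors' 'probability of AI' scores.
    A simple mean is a naive ensemble, used here only to illustrate
    why a second or third opinion changes the overall verdict."""
    return sum(scores) / len(scores)

# Scores for the same palomino "photo" from the text:
# AI or Not gave a flat "AI" verdict (treated as 1.0 -- an assumption),
# Hive Moderation said 93.8% AI, Hugging Face's detector said 10% AI.
scores = [1.0, 0.938, 0.10]
p = combined_ai_probability(scores)
print(round(p, 3))                               # 0.679
print("likely AI" if p >= 0.5 else "likely real")  # likely AI
```

Even with one detector voting "mostly real," the averaged verdict still leans AI, which matches how a cautious human reader would weigh two strong positives against one negative.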
