
Fact check: AI fakes in Israel's war against Hamas

Ines Eisele | Uta Steinwehr
November 10, 2023

Real or fake? Images generated by artificial intelligence have become a disinformation tool in the war between Israel and Hamas. DW's fact-checking team shows you how to spot them.

A compilation of three AI-generated images referring to Israel's war in Gaza
Images like these are easy to create with the help of AI generators and a little know-how

How do fake stories created with artificial intelligence (AI) work? What narratives do they present? And how dangerous are they?

DW fact checkers answer the most critical questions about the role of AI in the conflict between Israel and Hamas in Gaza.

1) How do AI-generated image fakes work?

AI is everywhere these days — even in wars. Artificial intelligence applications have greatly improved this year, and almost anyone can now use standard AI generators to create images that look real, at least at first glance.

To do this, users "feed" tools such as Midjourney or DALL-E with prompts: written instructions specifying what the image should contain. The tools then convert these prompts into images.

Some create more artistic illustrations, while others create photorealistic images. The generation process is based on what is known as machine learning. 

For example, if a generator is asked to show a 70-year-old man riding a bicycle, it draws on patterns learned from millions of captioned training images to combine those concepts.

Based on this learned information, the AI generator creates the image of the older cyclist. Thanks to ever more input and technical updates, the tools have improved vastly and are constantly learning.

All this applies to images related to the Middle East conflict. 

Here, too, people use such tools to create more or less realistic scenes that, according to our observations, are often intended to capture emotional moments to spread certain narratives. But more on that later.

In a conflict where "emotions are very, very high," says AI expert Hany Farid, disinformation, including its spread through AI images, works exceptionally well. 

Hardened fronts are the perfect breeding ground for creating and disseminating fake content and intensifying emotions, Farid, a professor of digital forensics at the University of California at Berkeley, tells DW.


2) How many AI images of the Israel-Hamas war are in circulation? 

Images and videos created with the help of artificial intelligence have already added to disinformation related to the war in Ukraine — and continue to do so. 

As AI tools have developed rapidly since Russia's 2022 invasion of Ukraine, many observers expected them to play an even greater role in the Israel-Hamas war. However, according to experts, the great flood of AI images has so far failed to materialize.

"In the conflict between Israel and Hamas and related disinformation, we are not seeing a massive use of AI-generated images," says Tommaso Canetta from the European Digital Media Observatory. 

"There are some examples, but it's not much if we compare it to the amount of disinformation that is actually old images and old videos that are now reshared in a misleading way," he adds.

However, this does not mean the technology isn't a factor. Farid explains that he does not consider the number of AI fakes to be the relevant factor.

"You can have two images that go super viral, and hundreds of millions of people see it. It can have a huge impact," he says. 

"So it doesn't have to be a volume game, but I think the real issue we are seeing is just the pollution of the information ecosystem."

3) What narratives do the AI fakes serve in the Israel-Hamas war?

The AI images circulating on social media networks can usually trigger strong emotions. 

Canetta identifies two main categories. One refers to images that focus on the suffering of the civilian population and arouse sympathy for the people shown. The other is AI fakes that exaggerate support for either Israel, Hamas or the Palestinians and appeal to patriotic feelings.

An AI-generated image highlighting tell-tale features indicating that it is a fake
Certain features help identify whether an image was created by AI

The first category includes, for example, the picture above of a father with his five children in front of a pile of rubble. It was shared many times on X (formerly Twitter) and Instagram and seen hundreds of thousands of times in connection with Israel's bombardment of the Gaza Strip. 

In the meantime, the image has been marked, at least on X, with a community note stating that it is fake. It can be recognized as such by various errors and inconsistencies that are typical of AI images.

The man's right shoulder, for instance, is disproportionately high. The two limbs emerging from underneath are also strange — as if they were growing from his sweater. 

Also striking is how the hands of the two boys who have wrapped their arms around their father's neck merge. And there are too many or too few fingers and toes in several of the hands and feet in the picture.

Similar anomalies can also be seen in the following AI fake that went viral on X, which purportedly shows a Palestinian family eating together in the rubble, evoking sympathy for Palestinian civilians.

An image highlighting typical anatomical anomalies in AI-generated images
Deformed faces as well as glitches in arms and hands are typical inconsistencies in AI images like this one

The picture below, which shows soldiers waving Israeli flags as they walk through a settlement full of bombed-out houses, falls into the second category, which is designed to spark feelings of patriotism.

The accounts that share the image on Instagram and X appear to be primarily pro-Israeli and welcome the events depicted. DW also found the picture as an article image in a Bulgarian online newspaper, which did not recognize or label it as AI-generated.

What looks fake here is the way the Israeli flags are waving. The street in the middle also appears too clean, while the rubble looks very uniform. The destroyed buildings look almost like twins, standing at fairly regular intervals. 

All in all, the visual impression is too "clean" to appear realistic. This kind of flawlessness, which makes images look like they have been painted, is also typical for AI.

An AI-generated image of Israeli tanks and soldiers carrying flags as they march through the streets of bombed-out Gaza
AI images circulating on social networks are usually capable of triggering strong emotions in viewers. Image: X

4) Where do such AI images come from?

Private accounts on social media distribute most images created with the help of artificial intelligence. They are posted by both authentic and obviously fake profiles. 

However, AI-generated images can also be used in journalistic products. Whether and in which cases this can be useful or sensible is currently being discussed at many media companies.

The software company Adobe caused a stir when it added AI-generated images to its range of stock photos at the end of 2022. These are labeled accordingly in the database.

An AI-generated image showing two children sitting and staring at burning buildings in Gaza at sunset
An AI-generated image on the Israel-Gaza conflict in Adobe Stock

Adobe now also offers AI images of the Middle East war for sale — for example of explosions, people protesting or clouds of smoke behind the Al-Aqsa Mosque. 

Critics find this highly questionable, yet some online sites and media outlets have continued to use the images without labeling them as AI-generated. The image above, for example, appears on the site "Newsbreak" without any indication that it was computer-generated. DW found this out with the help of a reverse image search.
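Reverse image searches of the kind mentioned above typically rely on perceptual hashing: an image is reduced to a short fingerprint that survives recompression and resizing, so a reshared copy can be matched back to its original. Below is a minimal sketch of one such fingerprint, the average hash (aHash), in plain Python. The 4x4 pixel grids are made-up illustrative data; real systems operate on larger, normalized grayscale images.

```python
def average_hash(pixels):
    """Compute a simple average hash (aHash) of a grayscale pixel grid:
    each bit records whether a pixel is brighter than the grid's mean.
    Near-identical images yield near-identical hashes, so a small
    Hamming distance flags a likely re-upload of the same picture."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": the second is the first with slight noise
# (as after recompression), the third is a different picture.
original = [[10, 200, 30, 220], [15, 190, 25, 210],
            [12, 205, 35, 215], [11, 198, 28, 212]]
reupload = [[12, 198, 32, 218], [14, 192, 27, 208],
            [13, 203, 33, 217], [10, 196, 30, 214]]
different = [[200, 10, 220, 30], [190, 15, 210, 25],
             [205, 12, 215, 35], [198, 11, 212, 28]]

h0, h1, h2 = (average_hash(img) for img in (original, reupload, different))
print(hamming(h0, h1))  # 0 -- the noisy re-upload still matches
print(hamming(h0, h2))  # 16 -- the different image does not
```

This is why old photos reshared under a misleading new label, the most common form of disinformation in this conflict according to Canetta, are comparatively easy to trace, while freshly generated AI images have no original to match against.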

Even the European Parliamentary Research Service, the European Parliament's scientific service, illustrated an online text on the Middle East conflict with one of the fakes from the Adobe database — again without labeling it as an AI-generated image.

Canetta from the European Digital Media Observatory is appealing to journalists and media professionals to be very careful when using AI images, advising against their use, especially when it comes to real events such as the war in Gaza.

The situation is different when the goal is to illustrate abstract topics such as future technologies.

5) How much damage do AI images cause?

The mere knowledge that AI content is circulating makes users uncertain about everything they encounter online. 

UC Berkeley researcher Farid explains: "If we enter this world where it is possible to manipulate images, audio and video, everything is in question. So you're seeing real things being claimed as fake."

That is precisely what happened in the following case: an image allegedly showing the charred corpse of an Israeli baby was shared on X by Israel's Prime Minister Benjamin Netanyahu and the conservative US commentator Ben Shapiro, among others. 

The controversial anti-Israeli influencer Jackson Hinkle then claimed that the image had been created using artificial intelligence. 

A screenshot of an X post by Jackson Hinkle claiming an image retweeted by Israeli Prime Minister Benjamin Netanyahu and US right-wing commentator Ben Shapiro was fake
Anti-Israel agitator Jackson Hinkle presents alleged proof on X that the photo is an AI fake

As alleged proof, Hinkle attached a screenshot of the AI detector "AI or not" to his post, which classified the image as AI-generated. 

Hinkle's claim on X was viewed more than 20 million times and led to heated discussions on the platform. 

In the end, many stated that the image was, in all likelihood, genuine and that Hinkle was, therefore, wrong. Farid also told DW that he could not find any discrepancies in the picture that would indicate an AI fake.

How can that be, you might ask? AI detectors, which can be used to check whether an image or text is possibly AI-generated, are still very error-prone. Depending on the image checked, they get it right or wrong, and they often deliver only probability estimates rather than 100% certainty.

They are therefore at most suitable as an additional tool for checking AI fakes, but definitely not as the only one.
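The point about probabilities can be made concrete. A defensible workflow treats a detector's score as one piece of evidence among several, keeps a deliberately wide "inconclusive" band, and escalates to human review when tools disagree. The sketch below is purely illustrative: the detector names and scores are invented, and the thresholds are arbitrary assumptions.

```python
def verdict(prob_ai, low=0.2, high=0.8):
    """Turn a detector's probability that an image is AI-generated
    into a three-way verdict. The middle band is deliberately wide:
    detectors are error-prone, so borderline scores should be
    reported as inconclusive rather than forced into yes/no."""
    if prob_ai >= high:
        return "likely AI-generated"
    if prob_ai <= low:
        return "likely authentic"
    return "inconclusive"

# Hypothetical scores from two detectors for the same image.
scores = {"detector_a": 0.85, "detector_b": 0.15}
verdicts = {name: verdict(p) for name, p in scores.items()}
print(verdicts)

# When detectors contradict each other, as "AI or not" effectively
# did across repeated checks of the disputed photo, the only
# defensible call is manual review by fact-checkers.
if len(set(verdicts.values())) > 1:
    print("Detectors disagree -> escalate to human fact-checkers")
```

The Hinkle episode described above illustrates the failure mode this guards against: a single detector screenshot was presented as proof, when the same tool later returned the opposite verdict.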

A screenshot of the AI detector "AI or not" classifying the image shared by Benjamin Netanyahu and Ben Shapiro as real
When DW ran the image through "AI or not," the tool declared it "likely human"

DW's fact-checking team was also unable to detect any clear signs of AI-based manipulation in the image, which presumably shows a baby's corpse. 

Curiously, "AI or not" did not classify the image as AI-generated when we tried it out ourselves — and pointed out that the image quality was poor.

Another AI detector, Hive Moderation, also concluded that the image was genuine.

Mina Kirkowa contributed to this report, which was originally written in German.