Fact check: Are ICE fakes trying to drown out real videos?
January 30, 2026
Immigration crackdowns ordered by US President Donald Trump have turned deadly, with a second US citizen killed by Immigration and Customs Enforcement (ICE) agents this month.
Cellphone videos from eyewitnesses in Minneapolis show how several ICE agents tackled 37-year-old ICU nurse Alex Pretti to the ground and then shot him. Contrary to initial statements by the Department of Homeland Security, he had not drawn his gun. Footage from different cellphone recordings shows his gun on a belt, which an ICE officer removed before Pretti was killed.
Earlier this month, ICE agents shot and killed 37-year-old Renee Good as she was driving away in her car.
Videos taken from multiple angles have also discredited the official statement that Good was trying to run over an officer; footage shows the officer shot her point blank through her car window.
Amid these incidents, social media has been flooded with a mix of authentic eyewitness videos and AI‑generated fakes, complicating efforts to understand what actually happened. The Las Vegas Metropolitan Police Department warned it had seen a rise in AI-generated images and video in connection with its forces, adding it "does not participate in proactive immigration enforcement activities." DW Fact check investigated several viral clips.
ICE officers getting arrested by police?
Claim: ICE officers are getting arrested or beaten by police, as seen in several videos on different platforms and posts in different languages (here, here and here).
DW Fact check: Fake
Details such as garbled text give the videos away: the subway signs don't make sense ("Exit Ses"; "Sotreé Seet"; "42eet"), and logos on uniforms are wrong or misspelled ("pice"; "IICE").
Body movements appear unnatural, exaggerated or stiff; for example, a police officer grabs one of the ICE agents with his right hand while his left arm hangs down with little to no movement. Dialogue sounds jumbled, as if the AI omitted one character's response to the other.
ICE agents act like NPCs (non-player characters) in a video game — background figures controlled by the game rather than a player. They don't seem to react to what's happening around them, and their mouth movements while yelling look abrupt and exaggerated.
Similar patterns also appear in other AI-generated content of protesters allegedly confronting ICE officers.
ICE officers entering classrooms or university campuses?
Claim: An ICE officer entered California State University looking for a student, while other agents showed up at a high school's soccer match.
DW Fact check: Fake
Both videos published on the social media platform TikTok are AI-generated. The logo of the AI video generator Sora pops up in the clip of the university classroom — a telltale sign that the footage was created with AI and is not authentic.
The video of ICE agents watching a crowd at a soccer match has an oddly glossy look to it. A search for "Agleca soccer" returns no results. Faces in the crowd appear distorted, and the writing on posters is gibberish. TikTok has also added a warning saying the video contains AI-generated content.
Are AI fakes drowning out real eyewitness footage?
"One of the problems with all of the AI-generated content and fake videos circulating among the real videos is it becomes very difficult to distinguish what is real," said Courtney Radsch, Director of the Center for Journalism and Liberty at the Open Markets Institute in the US.
With AI tools now widely accessible, anyone can fabricate videos that appear real.
Radsch warned that disinformation campaigns may intentionally release fake videos to drown out accurate documentation of deadly ICE encounters.
Brittani Kollar, deputy director of MediaWise, Poynter's digital media literacy project that teaches people how to spot mis- and disinformation, said that when false information goes viral, the verified fact checks tend to reach far fewer people.
"Viral deepfakes can, in fact, drown out real videos when it comes to algorithms because more people are watching the deepfakes," she told DW.
It gets increasingly tough to decipher what's real and what actually happened, which "could undermine legal processes, undermine trust in video evidence, undermine the trust in eyewitness accounts," Radsch warned.
Can we still spot AI-generated fakes?
It’s still possible — but increasingly challenging — as generative tools advance. Although detection tools like Hive Moderation exist, AI generation tools evolve faster than the systems designed to identify them. Earlier giveaways, such as unnatural eye blinking or distorted reflections, now occur less frequently.
Kollar advises looking for clues in the video:
- Watermarks or AI-tool identifiers
- Odd phrasing, distorted text or inconsistent lighting
- Audio in a language you understand
- Captions that seem sensational or lack context
- Reporting from reputable media outlets
- Additional footage or alternate camera angles
- Verification of source accounts
"It's virtually impossible to tell real from fake in many cases if the attempt is to make a deepfake," Radsch added. "Even sophisticated experts can't necessarily do that."
She argued for stronger technical protocols to authenticate real footage.
Why create these videos? Disinformation — and profit
Experts say motivations vary, from malicious actors aiming to disrupt public discourse to trolls seeking chaos.
Creating AI content can also be driven by economic interests, according to Radsch. ICE raids are a lucrative topic for attracting followers and boosting digital advertising revenue.
Ultimately, Radsch said, it doesn't matter all that much who's behind the latest wave of AI content; the deeper issue is collapsing trust: "People are losing faith that facts can be established."
With virtually no guardrails on how AI-generated videos can be monetized, she said, the social media ecosystem incentivizes disinformation for profit. The consequence is alarming: when people can't discern fabricated videos from authentic ones, they often avoid the news altogether.
Rachel Baig and Ines Eisele contributed to this report.
Edited by: Silja Thoms