As AI-generated content grows increasingly prevalent and increasingly shapes public perception, misinformation about the California wildfires is spreading rapidly across social media.
Investigators are exploring various possible ignition sources for the massive, wind-driven fires that have killed at least 24 people, consumed 40,000 acres, and destroyed over 12,300 structures in Los Angeles County, per NBC News.
While the Hurst fire is 95% contained, the Eaton and Palisades fires — among the most destructive in California's history — are not even halfway contained and continue to burn thousands of acres.
Several AI-generated visuals have circulated widely during the fires, including one that falsely depicted the iconic Hollywood sign engulfed in flames, according to CBS News.
Another viral clip falsely claimed to show firefighters using women's handbags to extinguish flames. An LAFD spokesperson clarified that the firefighters were using canvas bags routinely employed to smother small fires.
Research from Everypixel Journal reveals that over 15 billion AI-generated images have been created since 2022, with an average of 34 million generated daily.
LAFD Public Information Officer Erik Scott addressed false claims and misinformation surrounding the fires on X (formerly Twitter), emphasizing the importance of confirming information through direct, official sources.
“We have been made aware that there are inaccurate social media posts circulating suggesting that people can come work in California as part of a clean-up crew in areas that burned in recent wildfires,” Scott said. “There is no truth to this social media post, and there is no need to call and inquire.”
“Another post that circulated starting on January 7 claimed that Los Angeles-based fire agencies were seeking help from the general public in fighting #wildfires sweeping through the area,” he added. “This post is also baseless.”
Syracuse University professor Jason Davis, who specializes in misinformation detection, told CBS News that sharing unverified content, even with trusted friends, can lend it unwarranted credibility.
CBS News recommends looking for visual inconsistencies to identify potential AI-generated content: such images often contain background discrepancies, rendering a central subject in sharp detail while distorting surrounding elements.