AI-driven misinformation on climate change is a growing threat
AI tools like Bard and ChatGPT have been found to generate climate change misinformation that can spread rapidly online, raising concerns about their impact on public opinion.
Stella Levantesi reports for DeSmog.
In short:
- Studies show that AI tools like Bard and ChatGPT can fabricate climate misinformation, making it harder to distinguish real science from fake.
- AI-generated misinformation can be spread via synthetic media, social bots, and algorithms that tailor content based on users’ biases.
- Researchers are developing AI tools to counter misinformation, but they face challenges such as “hallucinations” and the rapid pace of AI advancement.
Key quote:
“... researchers have suggested that AI is being used to emotionally profile audiences to optimize content for political gain.”
— Asheley R. Landrum, associate professor at the Walter Cronkite School of Journalism and Mass Communication and a senior global futures scientist at Arizona State University
Why this matters:
AI-generated climate misinformation threatens to undermine trust in science. Because it can spread quickly and persuasively, especially on social media, it complicates efforts to counter climate disinformation and risks distorting public policy debates.
Related: Fossil fuel industry spreads misinformation to hinder global shift to renewable energy