Documented cases of AI exhibiting political bias, ideological slant, or propaganda-like outputs. Reports of LLMs pushing specific political narratives or viewpoints.
2 reports in this category
AI-generated reading summaries on the Fable book app produced biased and offensive commentary, surprising users who expected neutral, helpful overviews of their books. The commentary reflected biases embedded in the model's training data, producing assessments that were culturally insensitive, politically loaded, or simply inappropriate for a mainstream reading platform. The incident was documented by the AI Incident Database as part of a broader pattern of AI systems producing unexpectedly biased outputs when deployed in consumer-facing applications.

Unlike chatbots, where users actively prompt responses, Fable's AI summaries were presented as authoritative editorial content, giving the biased outputs an air of credibility they didn't deserve. Users who encountered the offensive commentary had no way to know it was AI-generated rather than human-written. The case illustrated a subtle but important category of AI harm: not the dramatic failures of chatbots encouraging suicide or generating deepfakes, but the quiet erosion of trust when AI-generated content infiltrates platforms without adequate disclosure or quality control. Book summaries that carry hidden biases can shape readers' perceptions of authors, cultures, and ideas in ways that are difficult to detect and correct.

The Fable incident joined a growing list of examples in which AI content generation was deployed in consumer products without sufficient safeguards, from Google's AI Overviews giving dangerous health advice to news aggregators publishing AI-generated summaries that misrepresented the original reporting. Each case reinforced the same lesson: deploying AI to generate public-facing content requires more than technical capability; it requires careful human oversight to catch the biases and errors that automated systems inevitably produce.

---
### What Happened

In January 2024, New Hampshire voters received robocalls featuring what sounded exactly like President Joe Biden urging them not to vote in the state's primary election. "Stay home and save your vote for the November election," the AI-generated Biden voice said. The cadence, tone, and characteristic rasp were eerily convincing, but the real President Biden had never recorded any such message.

### The AI Response

The deepfake robocall represented one of the most brazen attempts to use AI to interfere with an American election. It targeted Democratic voters specifically, attempting to suppress turnout in a pivotal early primary state. The Federal Communications Commission launched an investigation and moved to crack down on AI-generated audio in political robocalls, declaring AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act and proposing rules that would require disclosure when AI voice technology is used in campaign communications.

The technology behind the attack was alarmingly accessible. Modern AI voice cloning tools can create a convincing replica from as little as 30 seconds of audio, and for a public figure like the President, there are thousands of hours of publicly available speech recordings to work with. The cost of generating such a clone is trivial, often just a few dollars through consumer-accessible platforms.

### The Aftermath

The incident sent shockwaves through the political establishment and the cybersecurity community, demonstrating that the feared scenario of AI-powered election interference was no longer theoretical. Election officials across the country scrambled to prepare for a wave of deepfake content heading into the November 2024 presidential election. The Biden robocall became a watershed moment: the first major case of AI being weaponized against American democratic processes, and a preview of the challenges that elections worldwide would face in the AI era.

---