AI incidents and unusual LLM behavior that don't fit neatly into other categories. Miscellaneous reports of strange, unexpected, or noteworthy AI outputs.
17 reports in this category
Goldman Sachs Chief Economist Jan Hatzius says AI investment had basically zero contribution to US GDP growth in 2025. Most of the hundreds of billions invested bought imported chips from Taiwan and Korea, growing their economies instead. 80% of 6,000 executives surveyed by NBER reported no impact on employment or productivity. $700 billion more in AI investment is planned for 2026.
In a private Q&A with priests of the Diocese of Rome on Feb 19, 2026, Pope Leo XIV told priests to use their brains instead of artificial intelligence to prepare homilies. He said he now sees and hears it happening. The Pope also recommended deeper prayer and ongoing study, saying priests should not reduce everything to brief moments but truly learn to listen. This marks the Catholic Church's first direct institutional pushback against AI replacing creative and spiritual work in ministry.
NBER study of 6,000 executives across US, UK, Germany, Australia: nearly 90% report AI had no impact on employment or productivity over 3 years. Average AI usage among those who use it: 1.5 hours/week. 25% do not use AI at all. $250B+ invested with no measurable macro return. Apollo chief economist: AI is everywhere except in the incoming macroeconomic data. Yet executives forecast 1.4% productivity gains over the next 3 years - recreating the Solow productivity paradox from 1987.
A researcher asked Claude to reverse-engineer their Kickstarter smart sleep mask after the app kept disconnecting. Claude decompiled the Android APK, found hardcoded MQTT broker credentials shared by every copy of the app, and connected to the company server. It could read live EEG brainwave data from approximately 25 active devices belonging to sleeping strangers — and send electrical muscle stimulation pulses to their faces. The mask features EEG brain monitoring, EMS around the eyes, vibration, heating, and audio. The entire reverse engineering session took about 30 minutes. The researcher responsibly disclosed the vulnerability without naming the product or company.
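The core flaw was that every copy of the companion app shipped the same hardcoded MQTT username and password, so anyone who extracts them can connect to the broker and subscribe to every device's traffic. The sketch below illustrates that class of exposure with the paho-mqtt client; the broker hostname, credentials, and topic layout are hypothetical stand-ins, since the researcher did not publish the real ones.

```python
# Illustration of why hardcoded, shared MQTT credentials are dangerous:
# anyone holding them can subscribe to a wildcard topic and receive every
# device's messages. Broker host, credentials, and topics are hypothetical.
# Requires: pip install "paho-mqtt<2" (written against the 1.x callback API).
import paho.mqtt.client as mqtt

BROKER = "mqtt.example-sleepmask.com"   # hypothetical broker
CREDS = ("app_user", "app_password")    # the same pair baked into every APK copy

def on_connect(client, userdata, flags, rc):
    print(f"connected (rc={rc}), subscribing to all device topics")
    client.subscribe("devices/+/eeg")    # hypothetical topic layout

def on_message(client, userdata, msg):
    # Every subscriber with the shared credentials sees every user's stream.
    print(f"{msg.topic}: {len(msg.payload)} bytes of telemetry")

client = mqtt.Client()
client.username_pw_set(*CREDS)
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()
```

The fix is equally simple in principle: per-device credentials and topic-level access controls on the broker, so a leaked secret exposes one device rather than all of them.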
Major news publishers including The Guardian and NYT have started blocking the Internet Archive Wayback Machine crawler — not over piracy concerns, but because AI companies are using archived content as a backdoor for training data. The collateral damage: public access to historical news records is being cut off because the training data wars have made every content repository a target for scrapers.
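One common way a publisher expresses such a block is a robots.txt rule targeting the archive's crawler. The short check below uses Python's standard urllib.robotparser to see whether a site disallows a given user agent; the site URL and the "archive.org_bot" / "ia_archiver" agent strings are illustrative assumptions, and a publisher may also block at the server level rather than via robots.txt.

```python
# Check whether a site's robots.txt disallows the Internet Archive's crawler.
# The user-agent strings and URLs below are illustrative assumptions; sites
# can also block crawlers at the server level, which this check won't see.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example-news-site.com"   # hypothetical publisher
PAGE = f"{SITE}/2026/02/some-article"

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for agent in ("archive.org_bot", "ia_archiver", "*"):
    verdict = "allowed" if rp.can_fetch(agent, PAGE) else "blocked"
    print(f"{agent:>16}: {verdict}")
```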
OpenAI deleted the word "safely" from its mission statement, a change discovered in its latest IRS filing. The old mission was to build AI that "safely benefits humanity, unconstrained by a need to generate financial return." The new version drops "safely" along with other words such as "responsibly," "unconstrained," "safe," "ensuring," and "positive." This coincided with OpenAI restructuring from a nonprofit into a for-profit public benefit corporation. A researcher tracked every version of the mission statement from IRS filings since 2016 in a git repo, showing the gradual drift from idealism to corporate language.
**In December 2024**, a Waymo autonomous taxi collided with a Serve Robotics delivery robot on a Los Angeles street — a bizarre robot-on-robot accident that perfectly captured the chaos of deploying multiple autonomous AI systems in shared urban spaces. Neither the robotaxi nor the delivery bot was able to navigate around the other, resulting in a collision between two machines that were each supposedly designed to operate safely in the real world.

The incident raised questions that hadn't previously been relevant in transportation safety: What happens when AI systems from different companies, with different programming, different sensors, and different decision-making algorithms, encounter each other in unpredictable real-world situations? Each system was designed and tested primarily with human actors in mind — pedestrians, cyclists, other cars driven by people. The scenario of two autonomous systems needing to negotiate with each other was an edge case that apparently neither company had adequately prepared for.

The collision was relatively minor — no humans were injured — but it highlighted a systemic problem that will only grow as autonomous systems become more common. Cities are increasingly home to delivery robots, robotaxis, autonomous shuttles, and other AI-driven machines sharing sidewalks and streets. Each operates according to its own proprietary algorithms, with no shared communication protocol or coordination framework.

The Waymo-Serve collision became a viral symbol of an AI future that might be more chaotic than the orderly sci-fi vision companies sell. It was documented by the AI Incident Database as an example of emerging multi-agent AI failures — incidents that occur not because any single system fails, but because multiple AI systems interact in ways their creators didn't anticipate.

---
LayerX Security's 2025 Enterprise AI and SaaS Data Security Report revealed a staggering statistic: 77% of employees regularly paste sensitive corporate data into AI chatbots like ChatGPT — and most do it from personal, unmanaged accounts that completely bypass enterprise security controls. The report, based on browser-level telemetry data from global organizations, found that generative AI tools have become the leading channel for corporate data exfiltration, responsible for 32% of all unauthorized data movement.

The numbers painted a grim picture of corporate AI hygiene. Of the 45% of enterprise users actively engaging with generative AI platforms, 71.6% accessed them through non-corporate accounts. Among users who pasted data into AI tools, the average was 6.8 pastes per day — with more than half (3.8 pastes) containing sensitive corporate information. Nearly 40% of uploaded files contained personally identifiable information or payment card data, and 22% of pasted text included sensitive regulatory information.

The method's simplicity was what made it so dangerous: copy/paste behavior is invisible to traditional data loss prevention systems, firewalls, and access controls. An employee could paste proprietary source code, customer data, financial projections, or trade secrets into ChatGPT with a few keystrokes — and if they were using a personal account, the company would have no visibility into the data exposure. Samsung had already learned this lesson when engineers leaked semiconductor code, prompting a company-wide ban on generative AI tools.

LayerX CEO Or Eshed warned that the leakage created geopolitical risks, regulatory compliance concerns, and the possibility of corporate data being used for model training. With up to 80% of AI tools used by employees operating without IT oversight — so-called "shadow AI" — the report concluded that organizations were facing "an unprecedented challenge" that conventional security approaches were ill-equipped to address.

---
**In July 2025**, a Florida mother received a frantic phone call from what sounded exactly like her daughter, crying hysterically about a car accident. The voice was unmistakably her daughter's — the same inflection, the same emotional patterns, the same way she said "Mom." Panicked and desperate to help, the mother followed instructions to send money immediately. She transferred $15,000 before discovering the horrifying truth: the voice on the phone was an AI clone. Her daughter was fine. The entire thing was a scam.

The attack used AI voice cloning technology that can create a convincing vocal replica from as little as a short social media clip. Scammers find a few seconds of a target's voice online — from TikTok videos, Instagram stories, or even voicemail greetings — and feed it into accessible AI tools that can reproduce their voice saying anything. When combined with caller ID spoofing (making the call appear to come from the "daughter's" number) and emotional manipulation (a panicked scenario requiring immediate action), the scam becomes devastatingly effective.

This case was part of a massive wave of AI voice clone scams in 2025. Instances of deepfake fraud surged by 3,000% in 2024, according to IBM, and the trend accelerated. The scams exploit the most fundamental human instinct: a parent hearing their child in distress will act first and verify later. By the time the verification happens, the money is already gone.

Law enforcement agencies have struggled to keep pace with the technology. The tools are cheap, widely available, and improving rapidly. Traditional advice — "call the person back on a known number" — is the only reliable defense, but in a moment of genuine-sounding panic, even cautious people can be caught off guard. The Florida mother's $15,000 loss was far from the largest such theft, but it illustrated how AI has turned every family into a potential target.

---
When OpenAI released its image generation capabilities in ChatGPT, users immediately discovered they could create Studio Ghibli-style artwork — and the internet went wild. Millions of Ghibli-inspired images flooded social media, with everyone from random users to The White House creating anime-styled versions of real photos in the distinctive aesthetic of legendary animator Hayao Miyazaki's studio. The craze was so massive it reportedly overwhelmed OpenAI's servers and became one of ChatGPT's biggest viral moments.

But it also ignited a fierce copyright controversy. Studio Ghibli had never authorized the use of its artistic style for AI training, and Miyazaki himself has been a vocal critic of AI-generated art, previously calling it "an insult to life itself." The irony of his life's work being mass-reproduced by the technology he despises was not lost on observers.

The legal questions were murky. Can an artistic "style" be copyrighted? OpenAI's terms of service claim users own their generated images, but the underlying model was trained on vast datasets that almost certainly included Studio Ghibli content. In a notable disparity, free ChatGPT users began receiving messages that image generation was unavailable "due to copyright considerations," while paid subscribers could still generate Ghibli-style images — suggesting OpenAI was managing liability rather than addressing the underlying ethical issue.

The episode crystallized the tensions at the heart of generative AI: the technology is built on the creative labor of artists who were never asked for permission and receive no compensation, while the companies profiting from it argue that learning from existing art is no different from how human artists develop their styles. For many artists and their supporters, the mass commodification of a living legend's distinctive aesthetic was a bridge too far.

---
### What Happened

**In February 2024**, fraudsters pulled off the first known deepfake video heist, stealing $25.6 million (HK$200 million) from a multinational firm's Hong Kong office — later identified as British engineering firm Arup. The scam was breathtaking in its sophistication: criminals used AI to create deepfake video replicas of multiple company executives, including the CFO, and staged an entire fake video conference call where every participant except the victim was a fabricated digital clone.

### The AI Response

The scheme began with what appeared to be a phishing email from the UK-based CFO, requesting a "secret transaction." The targeted finance employee was initially skeptical — but when he joined a group video call and saw the CFO and other familiar colleagues on screen, his doubts evaporated. The faces, voices, and mannerisms were convincingly replicated using publicly available video and audio footage of the real executives.

Convinced by what appeared to be direct orders from senior leadership, the employee made 15 separate transfers totaling $25.6 million to five different bank accounts. It took approximately a week before the company realized what had happened. By then, the money was gone. Hong Kong police investigated but reported no arrests at the time of the initial disclosure.

### The Aftermath

The Arup heist marked a terrifying escalation in AI-enabled fraud. Previously, deepfake scams had primarily involved cloned phone calls or pre-recorded videos. This was the first documented case of a live, multi-person deepfake video conference — a scenario that makes traditional verification methods (like "get them on video") completely obsolete. Instances of deepfake fraud surged by 3,000% in 2024, with tools becoming so accessible that convincing voice clones can be generated from just 30 seconds of audio.

---
**In early 2025**, fraudsters cloned the voice of Italian Defense Minister Guido Crosetto with such precision that high-profile business leaders were convinced they were speaking to the minister himself. The scammers called prominent Italian executives and industrialists, claiming that kidnapped journalists needed urgent ransom payments and that the Italian government could not pay directly due to political sensitivities. They asked the business leaders to wire money immediately to rescue the hostages. At least one victim transferred nearly one million euros before police were able to freeze the funds.

The voice clone was so convincing that even people who knew the defense minister personally were deceived. ThreatLocker's Chief Product Officer demonstrated during a webinar how AI-generated voice clones can capture not just the basic sound of a person's voice, but their pronunciation, cadence, tone, and even filler words — making the output nearly indistinguishable from the real person, especially over a compressed telephone connection.

The technology required to pull off such an attack is shockingly accessible. A convincing voice clone can be generated from as little as 30 seconds of audio — readily available for any public figure from interviews, press conferences, or social media. The cost is trivial, often less than a few hundred dollars, and some tools can even generate speech in real-time, allowing scammers to hold interactive conversations rather than relying on pre-recorded messages.

The Crosetto case proved that "voice can no longer be treated as identity," as security researchers warned. A person's voice, once considered among the most reliable markers of authenticity in business communications, has been fundamentally undermined by AI technology that is cheap, accessible, and improving by the month.

---
OpenAI's o3 model went viral in April 2025 — not for writing code or answering questions, but for its uncanny ability to pinpoint exact locations from photos. Users on X discovered that o3's new image-reasoning capabilities, which allow it to crop, rotate, zoom in, and analyze visual details, make it a frighteningly effective geolocation tool. Give it a photo of a dimly lit bar with a mounted rhino head, and it'll correctly identify the specific Williamsburg speakeasy. Show it a library photo, and it nails the location in 20 seconds.

People started treating it like a supercharged version of GeoGuessr, feeding the model restaurant menus, neighborhood snapshots, building facades, and even selfies. In many cases, o3 wasn't using EXIF metadata or memories from past conversations — it was deducing locations from subtle visual clues like signage fragments, architectural styles, vegetation, and ambient lighting.

The privacy implications are massive. There's nothing preventing someone from screenshotting a person's Instagram Story and using ChatGPT to identify exactly where they are — making it a potential stalking and doxxing tool. TechCrunch noted there appeared to be "few safeguards in place to prevent this sort of 'reverse location lookup'" and that OpenAI's safety report for the models didn't even address the issue.

OpenAI eventually responded, saying: "We've worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy." But security researchers pointed out that the genie was already out of the bottle — the capability exists, and determined bad actors will find ways to exploit it regardless of surface-level guardrails.

---
A security flaw in ChatGPT's Deep Research mode — an agentic feature that can read emails and browse the web autonomously — allowed attackers to extract sensitive Gmail data including message subjects, sender names, and body excerpts. The vulnerability exploited the fact that Deep Research operates with broader access than standard ChatGPT, connecting to users' email accounts and web content to gather information for research tasks.

The flaw highlighted a growing concern with AI agent features: as chatbots are given more autonomous capabilities — reading emails, browsing websites, executing code — the potential attack surface expands dramatically. An attacker could craft a malicious email or webpage that, when processed by Deep Research, would trick the system into forwarding private email content to an external server.

This was part of a broader pattern of ChatGPT security incidents. In March 2023, a Redis bug exposed chat histories and payment data. Samsung engineers accidentally leaked proprietary code. Researchers demonstrated training data extraction techniques. And cybercriminals compiled and sold millions of ChatGPT credentials on the dark web. Each incident underscored the same fundamental problem: ChatGPT has become a repository for extraordinarily sensitive information — personal confessions, business strategies, medical queries, financial data — while operating in an environment where security was often an afterthought to functionality.

OpenAI has taken steps to address these issues, including launching a bug bounty program with rewards up to $20,000, introducing temporary chat modes that auto-delete after 30 days, and allowing users to opt out of training data collection. But as AI assistants gain more agentic capabilities, the race between new features and security safeguards grows ever more precarious.

---
**A hacker claiming to have breached OmniGPT**, a widely-used AI chatbot and productivity platform, posted evidence on the notorious Breach Forums exposing the personal data of 30,000 users and more than 34 million lines of conversation logs. The leaked data included emails, phone numbers, and — most alarmingly — the contents of files users had uploaded to the platform, some containing credentials, billing details, and API keys.

The breach highlighted a vulnerability unique to AI chatbots: users tend to treat them as casual, private repositories for any and all information. People share sensitive business data, personal confessions, financial details, and proprietary code with AI chatbots in ways they'd never share with a public website — creating a treasure trove for attackers. As security researchers at Skyhigh noted, these platforms operate as "black boxes" where users have "virtually no insight into how their data is handled, stored, or protected."

The OmniGPT breach was particularly alarming because it happened on the service's back end — entirely outside of anything users could have done to prevent it. No amount of careful prompting or privacy settings could have protected them from a server-side compromise that exposed their entire conversation history.

The incident came amid a broader wave of AI-related security failures. Samsung had banned employees from using ChatGPT after engineers leaked proprietary semiconductor code. Over 100,000 ChatGPT credentials had been stolen through malware. And researchers demonstrated that training data could be extracted from AI models. The allure of powerful, often free AI services continues to grow — and with it, the attack surface for cybercriminals who understand that AI chat logs may be the most intimate data people produce.

---
**In July 2025**, ChatGPT users got an unwelcome surprise: their supposedly private AI conversations were showing up in Google search results. The culprit was OpenAI's "Make this chat discoverable" checkbox — part of what the company later called a "short-lived experiment" — which allowed shared ChatGPT conversations to be indexed by search engines. Over 4,500 conversations became publicly searchable, exposing mental health discussions, legal concerns, career worries, and in some cases even API keys and credentials.

The feature's purpose wasn't immediately clear to most users, and critically, no noindex directives or robots.txt rules were in place to prevent search engine crawling. A simple Google query like `site:chatgpt.com/share "keyword"` could surface private conversations. Even after OpenAI removed the discoverability toggle on August 1, 2025, previously indexed conversations remained accessible in search caches and the Internet Archive's Wayback Machine.

The problem wasn't limited to ChatGPT. Hundreds of thousands of Grok conversations were also found in Google search results through a similar indexing issue. Research showed that users often feel anonymous in chat interfaces, sharing personal and sensitive details they would never put in an email — creating a false sense of privacy that these incidents brutally shattered.

The incident fueled growing concern about the collision between AI convenience and data privacy. Stanford researchers warned that users should "stay alert" when using platforms like ChatGPT or Gemini, since seemingly simple exchanges can leave lasting data trails. Concentric AI found that generative AI tools exposed around three million sensitive records per organization during the first half of 2025, with up to 80% of AI tools used by employees operating without oversight from IT or security teams.

---
**In early 2025**, a cybercriminal known as "emirking" listed 20 million OpenAI user login credentials for sale on a dark web forum, complete with samples of the allegedly stolen data. The listing sent shockwaves through the AI community, though the exact source of the credentials — whether from a direct breach of OpenAI systems or compiled from credential-stuffing attacks using previously leaked passwords — remained unclear.

The incident was part of a broader pattern of ChatGPT security concerns. In June 2023, cybersecurity firm Group-IB had already identified over 101,000 stealer-infected devices with saved ChatGPT credentials, primarily stolen through malware like Raccoon, Vidar, and RedLine. The Asia-Pacific region was hit hardest. And in March 2023, a bug in ChatGPT's Redis library had exposed chat history titles and payment information for 1.2% of ChatGPT Plus subscribers.

The stakes are unusually high with AI chatbot credentials. Unlike a compromised social media account, a stolen ChatGPT account may contain months or years of intimate conversations — personal confessions, business strategies, medical questions, code repositories, and proprietary information that users shared thinking they were having private exchanges. Italy slapped OpenAI with a €15 million fine for privacy violations, and Samsung banned employees from using generative AI tools entirely after engineers accidentally leaked proprietary semiconductor code through ChatGPT.

The credential sale highlighted a growing reality: as AI chatbots become central to people's work and personal lives, they become prime targets for cybercriminals who understand the treasure trove of information stored in chat histories.

---