
AI's hall of shame: the biggest security scandals

28 May 2025


If there were an awards ceremony for the most jaw-dropping AI failures of recent years, the competition would be fierce. From uninvited bots eavesdropping on meetings to billion-dollar lawsuits over copyright breaches, it’s clear that the shiny promise of AI has a shadow side - and security is its Achilles’ heel. Here’s our round-up of the most notorious AI mishaps in recent memory. Think of it as a “Wall of Fame,” if your idea of fame includes leaked transcripts, lawsuits, and a very public loss of trust.


🎤 Otter.ai sends a transcript to the wrong person - September 2024

After a Zoom meeting ended, Otter’s AI emailed a full transcript - including post-call private chat - to the wrong attendee. The fallout? A cancelled investment deal and an angry engineer.

🔗 entrepreneur.com


👻 Meeting bots crashing the party - 2023–2024

Bots from Fireflies, Otter, and Read.ai joined meetings uninvited - sometimes even when the user who installed them wasn’t present. Board meetings, faculty reviews, confidential strategy calls - nothing was off-limits.

🔗 chronicle.com


🔓 Granola AI lets anyone access your transcripts - March 2025

Security researchers found a hardcoded API key in Granola AI, letting anyone download private transcripts without authentication. The flaw has since been patched - but only after researchers sounded the alarm.

🔗 vulnu.com
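The Granola incident is an instance of a classic anti-pattern: shipping a secret inside the client, where anyone can extract it and call the backend directly. A minimal sketch of the safer server-side alternative - loading the key from the environment and refusing to start without it. (The variable name `TRANSCRIPT_API_KEY` is illustrative, not Granola's actual configuration.)

```python
import os

def load_api_key() -> str:
    """Load the transcript-service API key from the environment.

    Hardcoding a key in source (e.g. API_KEY = "sk-live-...") ships the
    secret to every user of the app; anyone who extracts it can talk to
    the backend with no further authentication. Reading it from the
    environment keeps the secret out of the shipped artifact.
    """
    key = os.environ.get("TRANSCRIPT_API_KEY")
    if not key:
        raise RuntimeError(
            "TRANSCRIPT_API_KEY is not set - refusing to start without credentials"
        )
    return key
```

Environment-based configuration is only half the fix, of course - the other half is per-user authentication on the server, so that no single shared key can unlock everyone's transcripts.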


🇮🇹 OpenAI fined €15M by Italy - December 2024

Italy’s privacy watchdog ruled that OpenAI had illegally collected and used personal data to train ChatGPT. The €15 million fine followed a temporary national ban.

🔗 reuters.com


🎶 Musicians fight back against AI - April 2024

Over 200 global artists - including Paul McCartney, Elton John, and Dua Lipa - signed a letter demanding stricter copyright protections against AI-generated music.

🔗 theguardian.com


🖼️ Stability AI sued by Getty Images - December 2024

Getty Images filed a $1.7B lawsuit accusing Stability AI of scraping 12 million photos without permission to train its generator.

🔗petapixel.com


📚 Writers unite: OpenAI & Microsoft face legal heat - April 2025

Authors like Sarah Silverman and institutions like The New York Times sued OpenAI and Microsoft for training their tools on protected content - without consent.

🔗 reuters.com


👮 Facial recognition causes wrongful arrests - January 2025

The Washington Post exposed at least eight wrongful arrests caused by flawed facial recognition used in policing. Many of those affected were people of colour.

🔗washingtonpost.com


🏥 Medical data breach via Meta Pixel - May 2024

The health data of 3 million patients was exposed when Advocate Aurora Health embedded Meta Pixel into its websites - unintentionally transmitting protected health information (PHI) to third parties.

🔗 HIPAA Journal


🧾 Otter and MeetGeek banned by UMass - April 2024

UMass Amherst banned these tools over violations of the state’s two-party consent law. Meeting bots were reportedly recording without every participant’s approval.

🔗 The Massachusetts Daily Collegian


🧠 AI hallucinations in medical notes - April 2024

An AP investigation revealed that AI transcription tools can “invent” text that was never spoken in medical notes, posing serious risks to patient safety.

🔗 apnews.com


🧩 The Pattern: Speed over Safety?

All these failures have one thing in common: speed and convenience were prioritised over control and transparency. From startups to Silicon Valley giants, too many AI vendors treat security as an afterthought. In the rush to innovate, privacy gets compromised - and real-world damage follows.

🔐 Ulla: Security-First by Design

While others chase scale, we built Ulla to stay small where it matters: your private data stays private.

  • No OpenAI
  • No third-party APIs
  • On-premises deployment available
  • Ulla only joins meetings by invitation
  • Users stay in full control of their information

So while the “Wall of Shame” keeps growing, we’re staying off it - by design.

Want to see what secure AI actually looks like?

📩 info@ulla.bot | ulla.bot
