Mindset · April 29, 2026

Fear Is the Product

AI companies warn of world-ending models while abandoning safety pledges. Why doomsday messaging serves their market position more than the public.


There is a pattern worth naming. An AI company announces a model so dangerous it cannot be released. Experts sound alarms. The press amplifies the warnings. The company's profile rises. Then, quietly, the model ships anyway. The cycle repeats. What looks like transparency about risk functions, on closer inspection, as a highly effective form of market positioning.

This is the argument at the centre of a BBC Future investigation by Thomas Germain, published April 29, 2026. The immediate trigger is Anthropic's announcement of Claude Mythos, a model the company says finds cybersecurity vulnerabilities at a level that far surpasses human experts. Anthropic warned in an early April blog post that the fallout for economies, public safety, and national security could be severe. The company simultaneously announced a partnership with more than 40 organisations to patch vulnerabilities before the model's capabilities spread. The announcement generated significant media coverage. It also generated serious doubts.

Heidy Khlaaf, chief AI scientist at the AI Now Institute, pointed to a glaring omission in Anthropic's claims: the company provided no false positive rates, which she describes as the single largest indicator of how useful a security tool actually is. This is not some unknown metric, she said. Anthropic did not address the point when asked for comment. Separately, claims surfaced that Anthropic may have withheld a wide release of Mythos because of insufficient computing power, a point the company also declined to address.
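Khlaaf's point is easy to see with a little arithmetic. The sketch below uses invented numbers (Anthropic published none for Mythos) to show how the base rate of real vulnerabilities interacts with a tool's false positive rate: even a detector that catches nearly every real flaw produces mostly noise once false positives creep up.

```python
# Hypothetical illustration: why the false positive rate decides whether
# a vulnerability scanner is useful. All numbers here are invented;
# Anthropic published none for Mythos.

def precision(tpr: float, fpr: float, prevalence: float) -> float:
    """Fraction of flagged findings that are real vulnerabilities.

    tpr: share of real flaws the tool catches (true positive rate)
    fpr: share of clean locations it wrongly flags (false positive rate)
    prevalence: share of scanned locations that actually contain a flaw
    """
    true_alarms = tpr * prevalence
    false_alarms = fpr * (1.0 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Assume a tool that catches 95% of real flaws, and that real flaws
# are rare: 1 in 1,000 scanned locations.
for fpr in (0.001, 0.01, 0.05):
    share_real = precision(tpr=0.95, fpr=fpr, prevalence=0.001)
    print(f"FPR {fpr:.1%}: {share_real:.1%} of alerts point at a real flaw")

# FPR 0.1%: 48.7% of alerts point at a real flaw
# FPR 1.0%: 8.7% of alerts point at a real flaw
# FPR 5.0%: 1.9% of alerts point at a real flaw
```

On those assumed numbers, a 5% false positive rate means roughly fifty false alarms for every real finding. That is the kind of information Khlaaf says any serious claim about a security tool has to carry.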

The Mythos announcement is one data point in a longer sequence. In 2019, when Dario Amodei was an executive at OpenAI, the company declared GPT-2 too dangerous to release because of concerns about malicious applications; it released the model months later. OpenAI CEO Sam Altman said in 2015 that AI would, in his words, probably most likely lead to the end of the world. He later said he loses sleep wondering if he has done something really bad by launching ChatGPT. In 2023, hundreds of tech leaders, including Altman, Amodei, Bill Gates, and Demis Hassabis of Google DeepMind, signed a statement calling AI extinction risk a global priority. That same year, Elon Musk signed a letter calling for a six-month pause on advanced AI development. He announced his own AI company, xAI, less than six months later.

The Attention Economy of Existential Risk

Apocalypse messaging commands a different quality of attention than any product launch could. When a company frames its own technology as a civilisational threat, it stops being a vendor and becomes a protagonist in a story that feels genuinely historic. Shannon Vallor, a professor of the ethics of data and artificial intelligence at the University of Edinburgh, argues this framing has a specific effect on public perception. If you portray these technologies as somehow almost supernatural in their danger, she says, it makes people feel powerless and outmatched. The implied conclusion is that the only protection available comes from the companies themselves.

Distraction as Strategy

Emily M Bender, a professor of computational linguistics at the University of Washington and co-author of the book The AI Con, frames the apocalypse narrative as misdirection. While attention fixes on hypothetical extinction events, present-day harms accumulate with less scrutiny. The source article cites AI-related misdiagnoses in healthcare settings, data centres that could emit more greenhouse gases than entire nations, research linking AI chatbots to psychosis and suicide, and a growing body of work suggesting possible links between AI use and cognitive decline. Bender's characterisation is direct: companies are telling audiences to look over here, away from the environmental destruction and labour exploitation happening now.

Safety as Founding Myth, Then as Liability

Both OpenAI and Anthropic were built on safety-first origin stories. OpenAI launched as a non-profit, promising to develop AI responsibly before less careful players got there first. Anthropic was founded by people who left OpenAI specifically over safety disagreements. Both are now moving toward public stock market listings. The sequence of institutional decisions since those founding moments is telling. Google dropped its stated limits around building AI weapons. OpenAI fought a legal battle to shed its non-profit status. Anthropic abandoned what the source describes as its flagship policy to never train a model it could not guarantee was safe. Vallor's reading is straightforward: look at what an organisation's incentives are, and behaviour follows from there.

Utopia and Apocalypse as a Single Sales Pitch

The fear narrative rarely travels alone. In a 2024 essay, Sam Altman predicted that fixing the climate, establishing a space colony, and discovering all of physics would eventually become commonplace. Dario Amodei wrote of a country of geniuses in a datacenter. Vallor identifies utopia and apocalypse as two sides of the same coin, each operating at a scale too grand for regulation or governance to feel adequate. In either frame, the individual is left to wait for an outcome beyond their control, delivered or prevented by the same set of companies. Even the name Mythos, the article notes, seems designed to inspire something closer to religious awe than product evaluation.

The Governance Argument Hiding in Plain Sight

There is a regulatory dimension to this dynamic that rarely gets stated explicitly. If AI is so powerful that it defies normal human oversight, the logical policy conclusion is that only the companies building it are qualified to oversee it. Critics in the piece argue that this is not an accident. Vallor puts it plainly: every technology save this one, including nuclear and biological weapons, has been subject to governance. Nothing about AI is inherently ungovernable, she says, unless we choose not to govern it. The companies most loudly warning of danger are also the ones most structurally positioned to benefit from a regulatory vacuum.

Altman criticised Anthropic's fear-based marketing in a recent podcast interview, which the source notes with some irony given his own long record of apocalyptic statements. An OpenAI spokesperson pointed to a blog post in which Altman wrote that the company would resist the potential of this technology to consolidate power in the hands of the few. Anthropic, for its part, shared blog posts from third parties supporting Mythos' cyber capabilities but did not address the core critique. Neither company's response engaged directly with the pattern the piece identifies.

Whether stricter regulatory frameworks will materialise is genuinely open. If public scepticism about doomsday messaging continues to grow, as it may given the long list of Silicon Valley predictions that did not arrive on schedule, it could become harder for companies to use fear as a default posture. That said, the incentive structure Vallor describes has not changed. Companies racing toward IPOs while framing their own products as existential risks face no obvious penalty for the contradiction. The more useful question for anyone working in media or marketing may be simpler: when a company tells you to be afraid of what it built, ask who benefits from that fear.