Rage builds against ChatGPT: bad reviews up 775% and record uninstalls in a few days

For months, ChatGPT sat comfortably on millions of phones.

Then a single announcement flipped the mood almost overnight.

Within days, the flagship AI assistant from OpenAI faced a wave of angry users, a flood of one‑star ratings, and mass deletions after news broke of its partnership with the US Department of Defense.

A weekend that changed the mood around ChatGPT

OpenAI’s mobile app had been a fixture in app store rankings, seen by many as a harmless productivity tool. That perception cracked at the end of February 2026.

From 28 February onwards, uninstallations of the ChatGPT app surged. According to figures reported in France, deletions jumped about 295% in a single weekend, an almost unheard-of spike for a mainstream consumer app that had already reached maturity.

Users did not just walk away quietly. Ratings plunged, with bad reviews rising an estimated 775%, turning app stores into a venting space for frustration and distrust.

Until then, criticism of ChatGPT tended to focus on hallucinations, bias, or subscription pricing. The new backlash is different. It is directed less at the product’s technical limits than at OpenAI’s political and ethical choices.

The trigger: a deal with the US Department of Defense

The breaking point was the announcement of a partnership between OpenAI and the US Department of Defense, still referred to in many foreign media reports by its historical name, “the Department of War”.

OpenAI framed the deal as a way to support “defence and national security” use cases. That phrasing immediately raised alarms among users wary of AI being used for warfare, surveillance, or automated decision‑making in conflict zones.

On social media, long‑time fans of ChatGPT expressed a sense of betrayal. Many said they had no problem with AI used for education, writing or coding, but drew a red line at any association with weapons systems or military intelligence.

For a significant share of the audience, the partnership shattered the idea that ChatGPT was a neutral, civilian tool underpinned by purely benevolent intentions.

This controversy layered onto existing scepticism around Sam Altman, OpenAI’s CEO, whose leadership has been repeatedly questioned since the dramatic boardroom struggle in late 2023. For critics, the defence deal is proof that commercial and strategic ambitions now dominate the company’s original safety‑first rhetoric.

Bad reviews: what users actually complain about

From bugs and subscriptions to ethics and trust

Before this episode, negative reviews of ChatGPT’s mobile app often mentioned throttling of free tiers, login issues, or GPT-4 access locked behind a subscription. Those gripes have not disappeared, but they are no longer the main story.

In recent days, user comments have shifted towards:

  • Anger at perceived “militarisation” of a civilian AI tool
  • Demands for clearer transparency on government partnerships
  • Fears of data being shared with defence or intelligence agencies
  • Calls to boycott the app until ethical guarantees are in place

Some reviewers state explicitly that they still find the technology impressive, but will no longer support it financially or keep it on their phone while these deals remain active.

A trust crisis, not just a UX complaint

The jump of roughly 775% in poor ratings is a signal that users are not just mildly annoyed. They are re‑evaluating whether the company’s goals align with their own values.

Apps usually suffer rating slumps after bugs, design overhauls, or price hikes. This storm follows a corporate announcement. That difference matters: fixing interface issues or adding features does not automatically repair a moral fracture.

Competitors sense an opening

ChatGPT’s troubles have inevitably benefited rivals. Anthropic, the company behind the Claude assistant, has publicly stressed that it has not signed a deal with the US Department of Defense.

The company has mentioned disagreements over how AI might be used for surveillance and autonomous weapons, positioning itself as more cautious on military applications.

By distancing itself from defence contracts, Anthropic positions Claude as an alternative for users who want advanced AI without ties to the arms sector.

Other players, from open‑source projects to smaller startups, are also moving quickly to underline their ethical charters. Some emphasise transparent governance structures, shareholder limits, or clear bans on offensive military use.

Sam Altman under renewed scrutiny

Sam Altman has long been a polarising figure in the tech scene. Admired by some as a visionary capable of steering AI into mainstream life, he is criticised by others for blurring the line between safety research and aggressive commercial expansion.

The defence partnership has reignited those debates. Detractors argue that OpenAI has drifted from its original non‑profit mission and now behaves like a conventional contractor vying for state deals, particularly in strategic sectors such as defence.

Supporters counter that national security work can include non‑lethal applications: cybersecurity, logistics, simulation, disaster response planning. They insist that refusing any interaction with defence bodies would shut out AI from fields where it could prevent harm or stabilise crises.

| Concern | Pro‑deal argument | Critic argument |
|---|---|---|
| Use in warfare | Focus on defensive tools only | Tools can migrate to offensive uses |
| Data privacy | Contracts can include strict safeguards | Government access risk remains high |
| Public trust | National security partnerships are normal | Undermines civilian, friendly image of AI |

What this means for everyday users

For most individuals, ChatGPT still behaves the same way on screen. It writes emails, generates lesson plans, drafts code. The algorithms powering your conversations have not suddenly turned into weapons.

The dispute is more about governance and trajectory. Users are essentially asking: if I rely on this tool daily, who ultimately steers its future, and where does the company draw its moral boundaries?

Some are deciding to keep using ChatGPT while putting pressure on OpenAI to release more details on the partnership, including:

  • Clear limits on types of military use
  • Independent audits of safety practices
  • Public reporting on government contracts

Others choose to uninstall the app outright, switching to alternatives that communicate stricter positions on defence work, or to local models that run on their own devices.

Understanding the risks and trade‑offs

When AI companies work with defence institutions, several layered risks add up. Dual‑use technology can support both peaceful and violent ends. A tool trained for analysing satellite images to coordinate humanitarian aid can just as easily support targeting systems.

There is also the risk of function creep. A model first deployed for translation or logistics might later be fine‑tuned for battlefield simulations. Even if a contract initially restricts usage, political pressure or emergencies can erode those constraints.

On the other hand, refusing collaboration completely can have costs. States will pursue AI‑enhanced capabilities regardless. If more safety‑conscious actors refuse to sit at the table, less scrupulous competitors may fill the gap, shaping military AI with fewer ethical brakes.

How users can respond in practical ways

People unsettled by the recent news have several concrete options beyond leaving a one‑star review.

  • Compare privacy policies and ethical statements of different AI tools
  • Use browser versions instead of mobile apps, and limit data sharing where possible
  • Test smaller or open‑source models for sensitive tasks
  • Contact providers asking for explicit military‑use policies
  • Support research groups and NGOs that monitor AI use in warfare

Another path is to separate tasks. Some users now keep a “general‑purpose” assistant for harmless content and rely on local, offline tools for anything involving personal, medical or political data. Splitting usage reduces exposure if a provider’s partnerships change direction again.
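For readers curious what running a model locally actually involves, here is a minimal sketch. It assumes the open‑source Ollama runtime is installed and running on the machine, with a model such as llama3 already pulled; the endpoint and fields follow Ollama’s documented local API, and the prompt never leaves the device. It is one possible setup, not a recommendation of any particular tool.

```python
# Minimal sketch: query a locally hosted model through Ollama's HTTP API.
# Assumes Ollama is running on this machine and `ollama pull llama3` was done.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a local model; the text stays on the device."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    # The /api/generate endpoint returns the completion in the "response" field.
    return resp.json()["response"]

if __name__ == "__main__":
    # Sensitive drafts go to the local model instead of a cloud provider.
    print(ask_local_model("Summarise this private note in two sentences: ..."))
```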

For many, the spike in bad ratings and deletions is less a final break than a warning shot. It shows that the public is watching where generative AI is heading, and that ethical lines drawn by users can trigger swift, measurable backlash when they feel crossed.
