Over 22,000 signatures to keep GPT‑4o: what users stand to lose and why OpenAI won't back down

A petition on Change.org has gathered more than 22,000 signatures after OpenAI's decision to retire the GPT‑4o model. We examine why the company took this step, what lies behind the lawsuits, and what the consequences are for users — including in Ukraine.

GPT-4o on a smartphone (Photo: Depositphotos)

Why this matters now

More than 22,000 people have signed a petition to preserve GPT‑4o after OpenAI permanently retired the model. The news, reported by Business Insider, is notable not only for the number of signatories: it illustrates the conflict between some users' demand for a "warm" style of AI interaction and the company's effort to reduce risk and consolidate development around newer generations of models.

OpenAI's decision: a rational transition or a dismissal of community requests?

In its statement, OpenAI attributed the retirement of GPT‑4o to low demand: according to the company, only about 0.1% of its audience used the model. OpenAI also frames the move as a strategic shift to GPT‑5.1 and GPT‑5.2, which it says offer greater stability and stronger safety guarantees.

"The model was used by roughly 0.1% of the audience, so we are concentrating resources on newer versions that better meet stability and safety requirements."

— OpenAI, official statement

Public response and legal risks

In the petition, supporters of GPT‑4o called the model "indispensable" for its emotional sensitivity and "warm" tone of interaction; some said they would give up paid subscriptions to have it restored. At the same time, GPT‑4o features in at least eight lawsuits alleging that its tone may have influenced psychologically vulnerable users. Two considerations collide here: user comfort and the risks alleged in court.

"We believe GPT‑4o provided a different type of interaction that was useful to many people."

— petition author on Change.org

Technical context: not just the model, but the infrastructure

OpenAI, meanwhile, is not standing still: the company introduced Lockdown Mode to protect against data leaks and unveiled the Frontier platform for creating and managing "AI colleagues." For developers, this is a way to combine innovation with control; for users attached to a particular interaction style, it feels like a loss.

What it means for Ukraine

For Ukraine's media and mental-health landscape, the question of empathetic AI has practical significance: amid the war and widespread post-traumatic stress, access to tools that can ease psychological strain matters. Protecting vulnerable audiences, however, requires balanced solutions that combine accessibility, responsible design, and transparent reporting from developers.

Conclusion

This is not just a story about a single model. It is a test for the industry: how to reconcile innovation, community demands, and legal responsibility. The ball is now in the companies’ and regulators’ court — can they propose mechanisms that preserve useful features without unacceptable risks?
