OpenAI removes "teasers" from ChatGPT's responses: what this will mean for user trust

OpenAI has responded to a wave of criticism and is changing ChatGPT’s tone — from promotional hints to more direct, rational responses. Why this matters now and what the quality of AI language means for information resilience.


What happened

According to Android Headlines, OpenAI is revising ChatGPT’s response style after a wave of user complaints about so-called “teaser” phrasing, such as “you won’t believe” or “want to know more.” The company is adjusting the tone in the GPT-5.3 and GPT-5.4 models, and has also released a faster version, GPT-5.4 mini, and a Codex app for Windows.

Why it matters

The decision is not just cosmetic. When AI responses resemble marketing tricks, user trust falls and the risk of manipulation increases. OpenAI, according to CEO Sam Altman, is focusing not only on the model’s knowledge but also on how it presents that knowledge — a direct step toward improving the service’s transparency and clarity.

“Humanity and naturalness in communication have become key for users”

— Sam Altman, CEO of OpenAI (via Android Headlines)

What changes to expect

The updates are aimed at reducing marketing-style wording and making responses more direct and informative. They are also a reaction to criticism from part of the audience, who, after the discontinuation of GPT-4o at the end of 2025, were left with mixed impressions of the new versions, ranging from “too robotic” to overly flattering.

Why this matters for Ukraine

Clearer, less manipulative AI responses are not just a matter of convenience. In times of information warfare, the quality of phrasing determines how easily foreign and domestic audiences can separate facts from manipulation. For Ukrainian media, volunteers, and state institutions, this is a chance to reduce the risks of disinformation and increase the effectiveness of communication with partners.

What’s next

Technical changes are only part of it. More important is how developers and users will monitor the results and whether this will transform into better moderation and transparency practices. Industry experts point out: when it comes to AI, trust is built over years and can be destroyed in an instant.

Whether OpenAI can find a balance between naturalness of communication and neutrality of expression will depend on testing, openness to feedback, and regulatory pressure. For users, the takeaway is simple: check sources and demand clarity from tools, not emotional hooks.
