Briefly: why this matters to the reader
OpenAI is rolling out a system that predicts a user's age from time of activity, account age, usage dynamics and the declared age. If the estimate is uncertain, the platform will require confirmation via a selfie through the Persona verification service. In parallel, an "adult mode" is being prepared that gates NSFW content behind verification so that minors cannot reach it. This affects child safety, data privacy and content access, issues already on the agenda of EU regulators.
How it works
According to OpenAI, the system does not rely solely on the age entered by the user. The algorithm analyzes behavioral signals — when and how often a person logs in, how the intensity of service use changes over time, and whether there are sudden shifts in patterns. If the model assesses the age as questionable, the user will receive a request for confirmation via Persona (a verification service that uses selfie technology).
"The age verification system is intended to prevent minors from accessing NSFW content."
— Engadget, coverage of OpenAI's announcement
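OpenAI has not published details of its model. Purely as an illustration of the decision flow described above, the logic can be sketched as a heuristic suspicion score; every signal name, weight and threshold below is invented for the example, not taken from OpenAI:

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    declared_age: int                   # age the user entered at signup
    account_age_days: int               # how long the account has existed
    late_night_ratio: float             # share of sessions between 23:00 and 06:00
    school_hours_ratio: float           # share of sessions during weekday school hours
    pattern_shift_score: float          # 0..1, how sharply usage patterns changed recently

def needs_verification(s: UsageSignals, threshold: float = 0.5) -> bool:
    """Return True when behavioral signals contradict the declared age
    strongly enough to trigger a selfie check (hypothetical heuristic)."""
    suspicion = 0.0
    if s.declared_age >= 18:
        # Signals typical of school-age users raise suspicion for
        # accounts that claim to be adult.
        suspicion += 0.4 * s.school_hours_ratio
        suspicion += 0.3 * s.pattern_shift_score
        if s.account_age_days < 30:
            suspicion += 0.2  # brand-new accounts carry little history to trust
    return suspicion >= threshold

# A fresh account claiming adulthood but active mostly during school hours:
print(needs_verification(UsageSignals(19, 10, 0.05, 0.9, 0.6)))  # True
```

In a real system such a score would come from a trained model rather than fixed weights, but the shape of the outcome is the same: the account is either trusted or routed to Persona for a selfie check.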
Potential consequences
For Ukrainian users this means several practical things. First, parents may gain an additional layer of protection for children, but the risk of false positives will also grow, cases where an adult user is wrongly asked to undergo verification. Second, the approach raises new questions about personal data protection: selfies are transmitted to a third party (Persona), so it is important to understand how and where this data is stored and which regulator oversees the processing (especially in the EU context).
Third, for businesses and educational platforms this is another element of compliance: organizations need to review content access policies and integration with verification tools. Finally, this is part of a broader trend: large AI platforms will have to balance openness of services against security and regulatory requirements.
Context and outlook
OpenAI plans to deploy age prediction in EU countries in the coming weeks — a signal that AI regulation is moving into practical implementation. For Ukraine there are two important directions for response: first, state and educational institutions must assess risks to minors; second, businesses and developers should prepare for changes in user identification and privacy policies.
Parallel announcements, an OpenAI competitor to Google Translate and the ChatGPT Go plan introduced in Ukraine, indicate that the platform is strengthening its position while simultaneously introducing access-control tools. Ultimately, the issue is not only technological: who determines a user's age and on what grounds, how their data are protected, and whether verification will become a barrier to legitimate access to information.
Now it's up to regulators and platforms: declarations about safety must be turned into clear rules with privacy guarantees. Whether this mechanism can balance child protection and adults' rights to privacy is the key question for the coming months.