What was decided
Wikipedia has officially banned editors from using large language models (LLMs) to create or rewrite articles. Writing text from scratch with AI is now prohibited; AI may be used only for minor stylistic edits to one's own material, provided the model adds no new content and every change is reviewed by a human.
Why this happened
The decision followed discussion within the editor community and a vote. According to 404 Media, the measure passed with 40 votes in favor and 2 against. The reasons are straightforward but carry deeper consequences: large models often generate false or unverified information (so-called "hallucinations"), and they raise unresolved questions about the licensing of the content used to train AI.
What this means for readers and the media space
In the short term, the policy aims to preserve trust in encyclopedic content. For countries under information pressure, including Ukraine, this matters: high-quality, verified sources reduce the risk of disinformation amplified by automated tools.
Signal to AI companies and the licensing market
Wikipedia has simultaneously urged AI firms to use official paid access to its content. This turns the issue into a legal and commercial one: if models are trained on openly available articles, the content owner can demand payment, or at least transparent terms of use, and that could change the business model of some services.
"I'm not sure AI can create encyclopedic articles without errors."
— Jimmy Wales, co‑founder of Wikipedia
"During the vote the majority supported the new restrictions — 40 votes 'for' and 2 'against.'"
— 404 Media (report on the voting results)
Practice: what can and cannot be done
Editors may use AI for minor stylistic edits to their own texts, provided the model introduces no new facts and every edit undergoes mandatory human review. Using LLMs to generate full articles or to rewrite other editors' material is prohibited.
Conclusion
This decision is not about fear of innovation but about setting standards of trust at a time when automation makes information flows larger and harder to control. In the short term, it is expected to slow the inflow of AI-generated content into the encyclopedia; in the medium term, it should prompt licensing negotiations and more transparent rules of cooperation between knowledge platforms and AI developers. For readers it means one thing: when verified information matters, prefer sources with human review, especially amid information warfare.