What happened
Superhuman has temporarily turned off the Expert Review feature in its writing tool, a decision publicly confirmed by CEO Shishir Mehrotra in a LinkedIn post. The feature used generative AI to produce comments and advice on a user's text, presented as coming from well-known writers and experts.
How the feature worked
The tool analyzed a user's text and selected the names of authors or scholars relevant to its topic. From those names, the AI generated reviews, drawing on publicly available information and language models. According to reports, the names of real authors and scholars appeared in the service without their permission, which drew sharp criticism from the professional community.
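The public descriptions suggest a simple two-step pattern: match the text to relevant names, then prompt a model to write in those names' voices. A minimal sketch of such a pipeline, assuming a hypothetical call_llm stub in place of any real model API (none of these names, keywords, or functions come from Superhuman's actual code):

```python
# Illustrative sketch only; EXPERTS_BY_TOPIC, pick_experts and call_llm
# are assumptions for demonstration, not Superhuman's implementation.

EXPERTS_BY_TOPIC = {
    "fiction": ["a well-known novelist"],
    "science": ["a prominent researcher"],
}

def pick_experts(text: str) -> list[str]:
    """Naive topic match: return expert names whose topic keyword appears."""
    return [name
            for topic, names in EXPERTS_BY_TOPIC.items()
            if topic in text.lower()
            for name in names]

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM API call; returns canned text here."""
    return f"[generated review for prompt: {prompt[:60]}...]"

def expert_review(text: str) -> dict[str, str]:
    reviews = {}
    for expert in pick_experts(text):
        # This prompt construction is the contested step: it asks the model
        # to speak in a named person's voice.
        prompt = f"As {expert}, write a short critique of this draft:\n{text}"
        reviews[expert] = call_llm(prompt)
    return reviews

if __name__ == "__main__":
    print(expert_review("A short story draft about fiction writing."))
```

Nothing in a flow like this checks whether the named person agreed to be represented, which is precisely the gap critics pointed to.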
Reaction and legal risks
Some writers and academics strongly criticized the practice. According to available information, a class-action lawsuit against Superhuman is already being prepared, and intellectual-property lawyers are pointing to issues of consent, authorship, and possible misappropriation of reputation. For the platform this means not only reputational damage but also real legal exposure.
"We turned off Expert Review while we review the feature. We wanted to help users find new ideas, but we are currently re-evaluating the approach and processes."
— Shishir Mehrotra, CEO of Superhuman
Product context
In 2025 Grammarly rebranded as Superhuman. Under the new brand the company combined four products: the Grammarly writing assistant, the Coda workspace, Superhuman Mail, and Superhuman Go. In recent years the company has also promoted AI agents for style checking and plagiarism detection, reinforcing its bet on automating editorial work.
Why this matters for authors and readers
This story is not just about a technical misstep: it is about trust in the tools used by millions of authors, journalists, and editors. When comments are signed with the names of real experts who never consented, attribution breaks down and fact-checking becomes harder. For Ukrainian authors and media it is a reminder to vet their tools and to demand transparency about how AI models other people's voices.
What’s next
Superhuman has several paths forward: increase algorithmic transparency, introduce explicit expert consent and clear labeling of AI-generated content, or risk protracted litigation and a loss of trust. Regulators and professional communities are already paying attention to cases like this, which will shape how quickly industry standards for such services emerge.
Question to the reader: is simply switching the feature off enough, or does the approach to authorship and transparency in AI tools need systemic change?