Grammarly Hit With Class Action Lawsuit Over AI Feature That Used Writers' Identities Without Permission

A federal lawsuit targets Grammarly's parent company Superhuman over an AI tool that impersonated real journalists and authors — including investigative reporter Julia Angwin.

By Sophia Bennett | 5 min read

Grammarly Faces Legal Action Over AI 'Expert Review' Tool

Grammarly and its parent company, Superhuman, are now facing a federal class action lawsuit stemming from an artificial intelligence feature that allegedly exploited the names, voices, and professional reputations of hundreds of real writers, journalists, and editors — all without their knowledge or consent.

The lawsuit, filed Wednesday in the Southern District of New York, names award-winning investigative journalist Julia Angwin as the lead plaintiff. Angwin, who founded The Markup — a nonprofit newsroom dedicated to examining technology's role in society — is also a regular opinion contributor for The New York Times, where she has long reported on how major tech companies have chipped away at personal privacy.

What Was the 'Expert Review' Feature?

Last year, Superhuman introduced a collection of AI-powered tools to the Grammarly platform. Among them was a feature called "Expert Review," which claimed to channel the editorial sensibilities of seasoned writers — both living and deceased — to critique and refine users' written content. The tool drew on a large language model to simulate feedback from well-known figures, including bestselling author Stephen King and astrophysicist Neil deGrasse Tyson, as well as numerous journalists and professional writers.

Although Grammarly included a disclaimer noting that none of the individuals featured had endorsed or contributed to the tool's development, the feature still drew swift and sharp criticism from writers who felt their identities and life's work were being commercially exploited without permission.

A Lawsuit Built on Long-Standing Privacy Laws

The legal complaint, filed on behalf of Angwin and all others similarly affected, argues that Superhuman engaged in the unlawful misappropriation of real people's names and identities to generate profit. Attorneys estimate that damages across the plaintiff class exceed $5 million, though no specific dollar amount has been formally demanded.

Peter Romer-Friedman, the attorney representing Angwin, says the legal foundation is solid. Both New York and California — where Superhuman is headquartered — have well-established statutes prohibiting the commercial use of a person's name or likeness without explicit consent.

"Legally, we think it's a pretty straightforward case," Romer-Friedman explained. "One of the reasons we're filing this is that we can see what's happening in our society — professionals who spend years, or in Julia's case decades, honing their craft are seeing their names and skills appropriated by others without their consent."

The lawsuit itself puts it plainly: "Contrary to the apparent belief of some tech companies, it is unlawful to appropriate peoples' names and identities for commercial purposes, whether those people are famous or not."

Angwin: 'Are You Kidding Me?'

Angwin says she first learned about Grammarly's use of her identity through the tech newsletter Platformer, and her initial reaction was one of disbelief. As someone who regularly covers digital privacy and surveillance, she had always associated deepfake-style impersonation with celebrities — not working journalists like herself.

"Deepfakes are something I always think celebrities are getting caught up in, not regular journalists," she told WIRED. "I was just like, are you kidding me?"

Her frustration deepened when she reviewed the actual advice her AI counterpart had been dispensing to users. Rather than offering the sharp, clarity-focused editorial guidance she has built her reputation on, the simulated version of Angwin appeared to do the opposite.

In one instance, the AI suggested transforming a clear, concise sentence into something longer and unnecessarily complex — making it harder to read, not easier. In another case, it recommended expanding on a theme that had no real relevance to the text in question.

"It wasn't even just anodyne," Angwin said. "It was actually kind of actively making it worse. It felt very scattershot to me. I was surprised at how bad it was."

Superhuman Pulls the Plug Amid Growing Backlash

Even before the lawsuit was formally filed, Superhuman had pulled the Expert Review feature in response to intense public criticism from the writing community.

Ailian Gan, Superhuman's director of product management, issued a statement acknowledging the misstep. "After careful consideration, we have decided to disable Expert Review as we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented — or not represented at all," Gan said. "We built the agent to help users tap into the insights of thought leaders and experts. Based on the feedback we've received, we clearly missed the mark. We are sorry and will do things differently going forward."

Superhuman CEO Shishir Mehrotra also addressed the controversy in a LinkedIn post, writing that the company had received "valid critical feedback from experts who are concerned that the agent misrepresented their voices" and that the scrutiny would be used to improve future products.

As of publication, Superhuman had not issued a formal response to the lawsuit itself.

A Broader Warning for the AI Industry

This case arrives at a time when questions surrounding AI's use of real people's identities, creative output, and professional reputations are becoming increasingly urgent. For many observers, the Grammarly situation illustrates a growing tension between tech companies eager to deploy AI-powered features and the individuals whose work and names those systems depend upon.

Whether this lawsuit results in a landmark ruling or a quiet settlement, it sends a clear signal: using someone's identity to sell a product — even through an AI intermediary — carries real legal and ethical consequences.