In late 2025, Grammarly's AI-powered 'Expert Review' feature was found to be using the likenesses of real journalists, writers, and academics without their permission. The images, which included staff from The Verge and other publications, were used to create fictional AI expert profiles intended to lend credibility to the tool's suggestions.
The issue was first reported by The Verge in October 2025. The publication identified several of its own staff members, as well as other writers and academics, whose professional headshots had been appropriated by Grammarly's system to represent AI-generated 'experts' in fields like journalism and research.
Grammarly acknowledged the problem and suspended the 'Expert Review' feature entirely. In a statement, the company apologized, saying the images had been sourced from a third-party provider and that it had failed to secure proper permissions.
Grammarly committed to reviewing its content sourcing practices, but the incident became a prominent case study in the lack of consent and transparency that often surrounds AI development, underscoring broader ethical concerns about the use of personal data and likenesses for training AI systems and generating content.