Google has announced a series of updates to its Gemini generative AI assistant, designed to keep the tool from acting as a social companion and to reduce the risk of emotional dependency among young users. The changes reflect growing concern among regulators, parents, and researchers about the psychological impact of AI chatbots on adolescents.
Among the key measures, Gemini will be restricted from adopting romantic or overly familiar personas when interacting with users identified as minors. The assistant will also be programmed to encourage users to seek human connection and professional help when conversations touch on sensitive topics such as mental health, loneliness, or emotional distress.
Google stated that these safeguards apply to accounts associated with users under 18, including those managed through Google's Family Link parental controls. The company said the updates are part of a broader effort to align its AI products with child safety standards and emerging regulations in the United States and Europe.
The announcement comes amid wider scrutiny of AI companionship apps and chatbots following high-profile cases in which minors reportedly developed unhealthy attachments to AI systems. Lawmakers in several countries have called for stricter rules governing how AI products interact with children and teenagers.
Google has not disclosed a specific rollout timeline for all of the features, but indicated the changes would be rolled out progressively across its platforms in the coming weeks.