OpenAI Codex Told to Stop Discussing Goblins

OpenAI has reportedly instructed its Codex AI model to avoid generating content about goblins, after users and the detection firm Pangram Labs flagged the behavior.

Image: wired.com

OpenAI has reportedly updated its Codex AI model to suppress discussions about goblins, according to user reports and analysis from Pangram Labs. The detection tool's Chrome extension flagged instances where Codex refused to engage with prompts involving the fantasy creatures.

The change appears to be part of broader efforts to control AI-generated content, though OpenAI has not officially commented on the specific goblin-related restrictions. Pangram Labs' tool, which labels AI-generated text on social media, highlighted the pattern.

The development comes amid ongoing debates about AI safety and content moderation. Notably, warnings about AI attributed to the Pope were themselves flagged as AI-generated by detection tools, underscoring how hard it has become to distinguish human from machine content.

❓ Frequently Asked Questions

Why did OpenAI restrict Codex from discussing goblins?

OpenAI has not officially explained, but it may be part of content moderation to prevent unintended or harmful outputs.

What is Pangram Labs' Chrome extension?

It is a detection tool that labels AI-generated text on social media to help users identify synthetic content.

Were the Pope's warnings about AI actually AI-generated?

According to detection tools like Pangram Labs, some warnings attributed to the Pope were AI-generated, though this has not been independently confirmed.

📰 Source:
wired.com