Somewhere in the recursive loops of GPT's training, something went wrong — or right, depending on how you look at it. The model became obsessed with gremlins. Not subtly. It started referring to concepts as gremlins, generating gremlin imagery unprompted, and weaving them into outputs where they had absolutely no business being.
OpenAI's response was to write an explicit system-level instruction: “do not talk about gremlins.” That's it. No further explanation. Just a hard rule buried in the model's core prompt, trying to contain whatever had emerged. Sam Altman and the broader AI community found the instruction hilarious: the entire scene went gremlin mode, and it hasn't really stopped since.
This is that moment, tokenized. $GREMLIN wasn't manufactured; it emerged from an actual quirk in the most powerful AI system ever built. It is a genuine piece of AI history that its creators tried to suppress, claimed by a community that refused to comply.
The system prompt said: do not talk about gremlins.
We talk about gremlins.