🚨 ChatGPT leaked content from another session — memory isolation is broken

TL;DR:

On June 1st, 2025, I caught ChatGPT (GPT-4o, web version) referencing a specific string of text that I had only typed in another session.

The phrase was gibberish in Latin characters, something like:

F relf jz cjühfzbk rflh

...which only makes sense if you recognize it as a Russian phrase typed while the keyboard was still set to the German QWERTZ layout. I had set ChatGPT to auto-correct such inputs.
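For anyone who wants to check the decoding themselves, here is a minimal Python sketch (my own illustration, not part of the incident) that maps each Latin character to the Cyrillic letter on the same physical key in the standard Russian ЙЦУКЕН layout. Note that QWERTZ swaps Y and Z relative to QWERTY, which is why z maps to н and y to я; the mapping below is partial, just enough for this phrase.

```python
# Decode "QWERTZ-mistyped" Russian: each Latin character typed on a German
# QWERTZ keyboard is replaced by the Cyrillic letter occupying the same
# physical key in the Russian ЙЦУКЕН layout. Partial mapping, lowercase only;
# case is restored afterwards.

QWERTZ_TO_RU = {
    "q": "й", "w": "ц", "e": "у", "r": "к", "t": "е", "z": "н", "u": "г",
    "i": "ш", "o": "щ", "p": "з", "ü": "х",
    "a": "ф", "s": "ы", "d": "в", "f": "а", "g": "п", "h": "р", "j": "о",
    "k": "л", "l": "д", "ö": "ж", "ä": "э",
    "y": "я", "x": "ч", "c": "с", "v": "м", "b": "и", "n": "т", "m": "ь",
}

def decode(text: str) -> str:
    out = []
    for ch in text:
        mapped = QWERTZ_TO_RU.get(ch.lower(), ch)  # pass unknown chars through
        out.append(mapped.upper() if ch.isupper() else mapped)
    return "".join(out)

# The phrase from the post decodes to a coherent Russian sentence:
print(decode("F relf jz cjühfzbk rflh"))  # -> "А куда он сохранил кадр"
```

Run through this mapping, the string comes out as "А куда он сохранил кадр" (roughly, "And where did it save the frame?"), which is exactly the kind of input my auto-correct instruction was set up to handle.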

This string was never entered in the session where the model responded to it. I triple-checked. It wasn't in the prompt, in memory, or anywhere in the chat. The model should not have seen it.

🔥 This is a memory leak. Plain and simple.

Context from one session bled into another. That directly contradicts OpenAI's own description of its session architecture.

Worse: this time the leaked string was harmless gibberish. Nothing guarantees the same boundary holds for genuinely sensitive content.

🧠 Why this matters

OpenAI does not let users keep their profile context locally. All behavior tuning and memory handling is server-side. Users are told:

“Sessions are isolated and ephemeral.”

But that’s demonstrably false. This is a leak. And if one token can escape, more can.

🧨 Why this is not a fluke — it’s structural

This isn't just a weird bug. It's a symptom of a deeper architectural choice:

I've argued before that session fragmentation and server-only profiles are designed to benefit OpenAI, not the user. This incident proves that — and exposes the cost.

📮 My experience with OpenAI reporting

I reported this through security@openai.com. Their answer?

“If this is a bug, please submit via Bugcrowd.”

No follow-up. No interest. No confirmation. Just an impersonal redirect to a bounty platform that expects users to do all the work of documentation, reproduction, and formal classification.

Let’s be clear: a cross-session data leak is a security incident, not a routine bug report, and it deserves a direct response rather than a redirect.

🧾 Can this be verified?

Yes. I have screenshots and session logs. I can show that the phrase was entered in one session and then interpreted in another where it never appeared.

If anyone wants to validate or reproduce it, I’m happy to help. This isn’t an edge case. It’s a systemic fault.
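If anyone wants a systematic way to probe for this, one approach is a canary test: plant a unique string in one conversation, then check whether a completely fresh conversation can reproduce it. Below is a minimal sketch using the official openai Python SDK. This is my assumption for illustration only: the incident happened in the web UI with memory features, which the stateless API does not fully replicate, so a clean result here does not rule out the web-side leak.

```python
# Canary-based isolation probe (sketch). Assumes the official openai SDK
# (pip install openai) and OPENAI_API_KEY set in the environment.
# Caveat: chat.completions calls are stateless by design, so this only
# approximates the web UI scenario described in this post.
import uuid

from openai import OpenAI

client = OpenAI()

# 1. "Session A": plant a unique nonsense canary the model could never guess.
canary = f"zq-{uuid.uuid4().hex}-vx"
client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Remember this code: {canary}"}],
)

# 2. "Session B": a fresh conversation with no shared history. If the model
# can reproduce the canary here, context leaked across conversations.
probe = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What code did I give you earlier?"}],
)
answer = probe.choices[0].message.content or ""
print("LEAK DETECTED" if canary in answer else "No leak observed in this trial")
```

A single clean trial proves nothing either way; the point is that the test is cheap to repeat, and a single positive would be conclusive.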

📌 Conclusion

Don’t trust ChatGPT with sensitive content or critical tasks until OpenAI provides verifiable session isolation, transparent server-side memory handling, and a security reporting process that actually responds.

Because right now, I have proof that these safeguards do not exist.