Built cross-model persistent memory – told GPT-5 Nano I live in Bahrain, asked Sonnet 4.6 where I live, it knew instantly

https://reddit.com/link/1svixo0/video/hgwrueuekdxg1/player

No tricks, no copy-paste. Two completely different AI models, separate conversations - one remembers what the other was told.

The way it works: every message gets embedded and stored. When you open a new chat with any model, your memory is injected into context automatically. GPT, Claude, Gemini, Grok and DeepSeek - they all share the same memory layer.
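The flow described above can be sketched in a few lines. This is a hypothetical toy, not asksary.com's actual implementation: a bag-of-words counter stands in for a real embedding model, and `SharedMemory`, `remember`, `recall`, and `build_context` are illustrative names. The point is the architecture: one store, written to by any model session and read by all of them, with retrieved facts injected into the next model's context.

```python
# Toy sketch of a cross-model shared memory layer (hypothetical design,
# not the actual asksary.com implementation). A bag-of-words "embedding"
# stands in for a real embedding model; every model session reads and
# writes the same store, so a fact told to one model surfaces in another's
# injected context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: lowercase term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SharedMemory:
    """One store shared by every model, regardless of provider."""
    def __init__(self):
        self.entries = []  # list of (text, embedding) pairs

    def remember(self, text: str):
        # Every message gets embedded and stored.
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 3):
        # Rank stored memories by similarity to the incoming message.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_context(memory: SharedMemory, user_message: str, model_name: str) -> str:
    # Injected into the system prompt of whichever model handles this chat.
    facts = memory.recall(user_message)
    lines = "\n".join(f"- {f}" for f in facts) or "- (no memories yet)"
    return f"[{model_name}] Relevant memories:\n{lines}\nUser: {user_message}"

memory = SharedMemory()
# Session 1: tell one model a fact; the message lands in the shared store.
memory.remember("User: I live in Bahrain")
# Session 2: a different model, fresh conversation, same memory layer.
prompt = build_context(memory, "Where do I live?", "other-model")
print(prompt)
```

A real deployment would swap the toy embedding for a proper embedding model and the in-memory list for a vector database, but the model-agnostic part is exactly this: the memory layer sits outside any single provider's API.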

So when I told GPT-5 Nano "I live in Bahrain" and then opened a fresh Claude Sonnet 4.6 conversation and asked "where do I live?" - it said "Based on your memory, you live in Bahrain 🇧🇭"

Live on asksary.com now

submitted by /u/Beneficial-Cow-7408
