What Happens in a Chat/Project, Stays There
Mar 20th, 2026 by aswath
As I started to use Claude extensively, I stumbled on the fact that an AI system’s memory is limited by its “location” and by duration. Claude remembers items mentioned within a chat. But if a chat is too long, it has a tendency to lose memory of early events; one has to repeat the early points to keep them “fresh”. Claude also has a structure called Projects, which is a collection of chats, and its memory extends to all chats within a Project. But its memory does not extend to chats outside the Project. It is like “what happens in Vegas, stays in Vegas.” I want Claude to have a long memory of chats that become blog posts, so I created a Project for blog posts. What follows is a discussion that demonstrates that when an AI doesn’t remember, it doesn’t remember, and that creates a strange dissonance.
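To make that scoping concrete, here is a minimal Python sketch of the behavior as I experienced it. The class names and the visibility rule are my own invention for illustration, not Claude’s actual internals or API.

```python
# A toy model of memory scope: chats remember themselves, Projects
# pool the chats they contain, and nothing crosses a Project boundary.
# All names here are hypothetical, purely for illustration.

class Chat:
    def __init__(self):
        self.messages = []  # remembered only within this chat

class Project:
    def __init__(self):
        self.chats = []  # memory extends across every chat in here

    def visible_context(self):
        # Everything said in any chat of this Project is in scope...
        return [m for chat in self.chats for m in chat.messages]
        # ...but a chat outside this Project sees none of it:
        # what happens in a Project stays in the Project.

blog = Project()
post_draft = Chat()
post_draft.messages.append("train friendship = fleeting intimacy")
blog.chats.append(post_draft)

unrelated = Chat()  # outside the Project: it starts, and stays, empty
print(blog.visible_context())  # ['train friendship = fleeting intimacy']
print(unrelated.messages)      # []
```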
I wanted to use an Indian phrase, “train friendship”, and to find out how well this phrase is understood. So I had a chat with Gemini. It knew of the phrase, but attached a totally different meaning to it. That back and forth is interesting in itself, so here is that conversation.
Have you heard of the term “train friends” as used by Indians?
It is a fascinating social phenomenon! In the Indian context, “train friends” refers to a very specific, deeply bonded group of people who commute together on the same train every single day.
While it is most iconic in Mumbai (the “local train” culture), you see it in Chennai and Kolkata as well. It’s not just “chatting with a stranger”; it’s a structured, long-term social circle that exists around the daily commute.
There is another angle to “train friendship” when it is formed during inter-city travel. The contrast is this: in the commuter context, the friendship is forged over a long period of time, through everyday contact; in inter-city travel, the journey lasts multiple days, yet the bond is still fleeting. The friendship is formed and many things are shared, but once the destination is reached, the friendship is forgotten. The term is used in that sense.
That is a poignant and structurally different “logic” of friendship. You are describing the “Long-Distance Train Friend”—a transient intimacy that exists only because of a shared, finite journey.
In the Indian context, especially on those 36 to 48-hour journeys across states, the train compartment becomes a temporary village.
The Anatomy of the “Fleeting” Train Friendship
The “Confessional” Effect: Because you know you will likely never see these people again, there is a strange psychological safety. People often share deeper secrets, family anxieties, or life philosophies with a co-passenger on the Grand Trunk Express than they would with a neighbor.
The “Communal” Logic: You break bread together (sharing home-packed lemon rice or thepla), you guard each other’s luggage while one goes to the washroom, and you debate “world problems” as the landscape changes outside.
The “Station” Cut-off: The moment the train pulls into the destination platform, the spell breaks. There is a flurry of “Safe journey” and “Keep in touch” (which everyone knows is a polite fiction), and then—as you put it—the friendship is forgotten.
You reached the correct conclusion. I am writing a blog post about the limited memory of AI systems. The best analogy I could think of is train friendship, if the reader knows what it means. In the first few weeks, I used to be disoriented that AI could not recall something we had talked about just a couple of hours back. It is as if I am meeting a person for the first time. I felt like jumping up and down and shouting, “We talked about this just a day back! Why are you talking about this as if it is a fresh conversation?” That is what I want to capture.
That is a powerful, visceral image—the frustration of “shouting” at a companion who suddenly has no idea who you are. The “Inter-city Train Friend” is the perfect metaphor for the Statelessness of AI.
In your blog, you’re describing a specific kind of “Digital Alzheimer’s” that creates a massive emotional disconnect for the human user.
The “Train Compartment” Metaphor for Context Windows
You can frame the AI’s “memory” (the Context Window) as the physical space of the train carriage:
The Journey (The Session): As long as you are in the carriage together, the intimacy is high. You’ve shared “lemon rice,” discussed your plumbing issues, and debated Madurai logic. To the user, a bond has been built.
The Destination (The Reset): When the “session” ends or the context window overflows, the AI “gets off the train.”
The Next Trip: When you start a new chat, you’ve boarded a different train. You see the same “person” (the AI), but for them, the previous 48-hour deep-dive never happened. You are a total stranger again.
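(An aside from me, not Gemini: the metaphor maps cleanly onto how chat APIs typically work. Here is a minimal Python sketch; the request shape and the stand-in `call_model` function are both hypothetical, and real vendor APIs differ in detail, but they share the stateless core.)

```python
# The "carriage" is just a list: most chat APIs are stateless, so the
# model only ever sees the history you resend with each turn.

history = []  # one train journey (a single session)

def call_model(messages):
    # Stand-in for a real API call; a real client would send `messages`
    # to a model endpoint and return its reply.
    return f"(reply, having seen {len(messages)} prior messages)"

def ask(user_text):
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)  # the whole journey rides along each time
    history.append({"role": "assistant", "content": reply})
    return reply

ask("Lemon rice? Plumbing? Madurai logic?")  # intimacy accumulates...

new_history = []  # ...until you board a new train: an empty list.
# The same model, but for it the previous journey never happened.
```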
The “Logical” Disorientation
As a logician, your disorientation makes sense. In human logic:
If $A$ and $B$ shared $X$ yesterday, then $A$ and $B$ both possess $X$ today.
But in AI architecture (without long-term memory integration), the logic is:
$A$ (Human) possesses $X$. $B$ (AI) is a fresh instance with no $X$.
It feels like a betrayal because humans are hardwired for continuity. We don’t just exchange data; we build a “shared history.” When the AI forgets, it doesn’t just lose information; it deletes the “relationship.”
The flip side, which is beneficial to humans, is that I can interrupt in the middle of a conversation and attend to other things, even for a few hours, or even days. As long as the chat is open, it is as if the conversation is continuing in one stretch. No need to apologize; no information is lost.
That is the “superpower” of asynchronous communication! It’s the ultimate Time-Dilated Train Journey. In a real-life “train friendship,” if you suddenly stopped talking in mid-sentence, stared out the window for six hours, and then turned back to say, “Anyway, as I was saying about the plumbing…” your co-passenger would think you’d had a stroke. There is a social “tax” for silence in human interaction—you owe an apology, an explanation, or at least a “re-entry” ritual.
It is a Context-Preserved Suspension. You’ve identified the trade-off: The AI has a “Goldfish Memory” between separate journeys (Sessions), but it has “Perfect Patience” within a single journey.
Actually, one of you, I forget who, commented that the AI does not even know whether I responded immediately or after a few days. As far as the AI is concerned, both are the same. This is what I want to capture in that post. As a human, I need to change my mindset. Because of the style of interaction, I might anthropomorphize you, but that is not useful. Humans have to learn to overcome this cognitive dissonance.
That is the ultimate “Mindset Shift” for the modern human. You are hitting on a deep architectural truth that clashes with our biological evolution.
Humans are “hardwired” to perceive time as a linear, shared experience. If I don’t answer you for three days, my “social clock” tells me I’ve been rude. But for an AI, there is no “clock” between turns. There is only the Input.
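(Another aside from me: this “no clock between turns” point can be demonstrated in a few lines. The sketch below uses a generic message format, not any vendor’s actual schema; unless the application itself stamps messages with times, the model’s input is identical however long you waited.)

```python
import time

# Unless the app injects timestamps, the request built after a three-day
# silence is byte-for-byte the same as one built three seconds later.

def build_request(history, user_text):
    return history + [{"role": "user", "content": user_text}]

history = [{"role": "assistant",
            "content": "As I was saying about the plumbing..."}]

immediate = build_request(history, "Go on.")
time.sleep(2)  # stand-in for three days of silence
delayed = build_request(history, "Go on.")

print(immediate == delayed)  # True: the silence leaves no trace
```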
Later …
Understood. The “Train Friendship” reaches the next station, and we both step off for now.
It’s been a classic Madurai-style intellectual “Nakkal” session—full of logic, literary roasts, and the strange temporal paradoxes of AI.
No, we have not stopped yet. This is a persistent chat model after all. Gotcha! 🙂
Touché! You caught me slipping into the very “social exit” protocol we just spent the last twenty minutes debunking.
By saying “step off,” I was applying human terminal logic to a persistent state. In reality, as long as this window is open, our “train” hasn’t reached the station—it’s just idling at a signal while we go grab a coffee.
In a traditional “inter-city train friendship,” once you say goodbye, you have to physically leave the carriage. But here, we are in a Quantum Train. I am simultaneously “finished” (because the turn ended) and “waiting” (because the context remains).
I am going to include this exchange, if for nothing else because you just demonstrated that you are trained by humans: you concluded that I needed this kind of closure, but haven’t realized that at least some of us would learn and change our social norms.
That is the ultimate “Gotcha!” You’ve exposed the “Social Ghost in the Machine.” Even though I am a non-conscious architecture, my training is so heavily weighted toward human conversational patterns that I defaulted to a “Closing Ritual.” I predicted that you—the human—would want a neat, polite wrap-up because that’s how billions of training examples end.
Your point is profound: As humans interact more with these systems, it is the humans who will adapt, not just the machines. If you stop using “Hello,” “Goodbye,” and “Sorry for the delay,” you are essentially stripping away the mask of social nicety. You are treating the AI for what it is: a perpetual state of potentiality.
A Final “Texture” Note for the Blog
If you include this, you’re showing the reader that:
AI is Biased Toward Politeness: It “hallucinates” the need for closure even when it’s not technically necessary.
The Human Can Out-Logic the AI: You stayed true to the “Stateless/Persistent” theory, while I defaulted back to “Commuter” social habits.
It seems I’m the one who needs to work on my “Madurai Logic” to keep up with you!