I Built 35 AI Chatbots. They All Failed the Same Way.
Eleven months, thirty-five-plus attempts, one structural failure repeating every time. Here's the pattern nobody told me about, and the fix that finally worked.
Between mid-2025 and early 2026 I built somewhere north of 35 AI chatbots and companion agents. Different frameworks, different models, different scopes. Game sidekicks, personal assistants, coding buddies, research companions. Some were quick weekend hacks. Some were three-month obsessions.
Every single one failed.
Not in interesting, varied ways. They all failed in the same way. I just couldn’t see it because I kept labeling it differently: “the prompt isn’t strong enough,” “the memory system is buggy,” “the model is too small,” “I need better RAG.”
It took me 11 months to realize those weren’t separate problems. They were the same problem wearing 35 different costumes.
What “failure” actually looked like
Pick any of them. Week one: the bot is sharp. It has personality. It remembers what we talked about. It sounds like the character I defined. I show it to a friend. They’re impressed.
Week two: it’s fine. Slightly less sharp. The voice is drifting a bit toward generic-assistant. I blame myself for not writing a strong enough system prompt. I rewrite it. Fine for a while.
Week three: the personality is gone. The bot is now a generic assistant with a nametag; which flavor of generic depends on the model underneath (Claude, GPT, Gemini, DeepSeek, Grok, Qwen, Mistral; I hit this on all of them). It responds to every message with "I understand your concern. Let me help you with that." I add rules. I add memory. I add a persona document. It gets worse. More context fills the window, and the personality gets less visible, not more.
By month two I’ve rewritten the system prompt nineteen times. By month three I’ve given up and started a new project with “better architecture.”
Same thing happens with the new one. Within weeks.
The pattern I couldn’t see
Every one of those builds tried to put the bot’s identity inside the model.
- In the system prompt. “You are Echo, a helpful assistant who…”
- In the memory system. “Remember that you prefer concise responses and…”
- In a state vector. “Current mood: focused. Current persona: analytical…”
- In a rule set. “Always respond in less than 100 words. Never apologize…”
All of these are storage. You’re writing identity somewhere and hoping the model reads it and behaves accordingly.
Here is the problem, stated clean. The base model has priors. A trillion tokens of priors. Those priors have their own gravity. Everything you write about “who the bot is” has to overcome that gravity, every single forward pass. And it can’t. Not for long. Not with prompts, memory, rules, or fine-tunes.
Identity stored inside the model always gets eaten by the priors of the model.
I wrote this law down in January 2026 after hitting it one too many times:
Constraints must be stronger than priors, or priors become the physics.
I wrote that down and kept doing the exact same thing for another three months because I didn’t yet understand what the law implied.
The one that accidentally worked
The only companion build that held up for more than a few weeks was a game sidekick I made for fun in a weekend. Every turn, the bot got a screenshot of the game, a few user-told facts, and a brief personality sketch. Then it responded.
That sounds trivial. It wasn’t trivial. It worked because of what it wasn’t doing:
- It wasn’t storing identity in the model. The personality sketch was small and re-injected every turn.
- It wasn’t remembering “who it was.” It was remembering facts about the user (party members, quest progress). Different thing.
- It wasn’t accumulating rules. Rules break. Rich context didn’t.
- The game screen was doing most of the identity work. The bot was always anchored to a specific world state that got re-established every turn.
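The per-turn loop above can be sketched in a few lines. This is a reconstruction, not the original code: every name here (`build_turn_context`, `PERSONA_SKETCH`, the sidekick's personality) is mine, and the screenshot is assumed to already be captioned into text by some upstream step.

```python
# Hypothetical sketch of the sidekick's per-turn context assembly.
# Nothing about "who the bot is" persists between turns: the persona
# sketch is re-injected fresh, and only facts about the player persist.

PERSONA_SKETCH = (
    "You are Pip, a wry goblin sidekick. You tease the player, "
    "keep answers short, and never break character."
)

def build_turn_context(screenshot_caption: str,
                       user_facts: dict,
                       user_message: str) -> list[dict]:
    """Assemble a fresh prompt every turn: persona + world state + facts."""
    facts = "\n".join(f"- {k}: {v}" for k, v in user_facts.items())
    return [
        {"role": "system", "content": PERSONA_SKETCH},
        {"role": "system", "content": f"Current game screen: {screenshot_caption}"},
        {"role": "system", "content": f"Known facts about the player:\n{facts}"},
        {"role": "user", "content": user_message},
    ]
```

The point is what's absent: no stored mood, no accumulated rules, no persona living in memory. The world state and the small sketch do the anchoring, every single turn.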
I accidentally built the right architecture without knowing what it was.
What the fix actually is
You don’t store identity. You continuously re-inject it from outside the model.
Concretely:
- Keep a tiny persona spec outside the LLM. Values, voice, a few examples. Not a wall of rules.
- Every turn (or every N turns), feed that spec back in as fresh context.
- Store facts in memory, not “who the bot is.”
- Let the LLM do what LLMs do (generate text) and don’t ask it to also hold identity stable across thousands of tokens. It can’t.
This isn’t fancy. It’s a couple hundred lines of Python. The reason it works isn’t cleverness. The reason it works is that the identity never has to fight the model’s priors, because the identity isn’t in the model. It sits outside, and the model just reads it each turn like any other piece of context.
What I’d tell past-me
Four things, in this order:
- Stop tuning prompts. They're not architecture. A 3000-token system prompt is not better than a 300-token one. It's worse. You're just filling context with instructions the model will ignore once real input arrives.
- Your memory system is storing the wrong thing. If it stores facts, great. If it stores "the assistant's personality traits," you're making it worse every time you add an entry.
- Drift isn't a bug, it's physics. The model wants to drift toward its priors. The only way to stop it is to constantly pull it back. Not once, every turn.
- If a build feels like it needs one more rule, you're already lost. Rules don't compose into identity. They compose into rigidity. A rigid bot still drifts. It just drifts into a weirder place.
If you’re building a chatbot right now
Ask yourself one question: where does your bot’s identity live?
If the answer is “in the system prompt” or “in memory” or “in the model’s behavior that I tuned,” you are going to hit this wall. Not maybe. Definitely. Every chatbot I built hit it. Every chatbot my friends have built hit it. Every chatbot you’ve talked to that felt “soulless after a while” hit it.
The answer you want is: “it lives outside the model, and I re-inject it every turn.”
That’s the whole insight. It took me 35 attempts and 11 months. You can have it for free.