[redacted]
3 Things
A link-blog, of sorts
[redacted]
Read
Context, Memory, and Voice
Let’s first talk about the world knowledge model that your robot is using to help you. It’s tempting to think of it as a brain: it’s this ginormous file packed full of crazy math that gives you the impression it’s thinking, but it’s not. It gives the appearance of thinking because human language itself contains reasoning patterns, and the models learn to mimic those patterns. More importantly, as of this writing, the model itself remains unchanged; it does not learn. The robot can only retain a limited amount of context, and that context is unique to each conversational session.
I learned this the hard way while working on a project with Claude Code: after about 45 minutes, Claude forgot everything about the project. I’d hit the limit of how much context it could keep for that conversation.
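If it helps to picture the mechanism, here’s a minimal Python sketch of a sliding context window. The budget number and the word-count tokenizer are stand-ins I made up for illustration, not how any real model tokenizes, but the effect is the same: once the conversation outgrows the budget, the oldest messages simply stop existing for the model.

```python
# A minimal sketch of the mechanism, not how Claude actually manages
# memory. Assume a fixed token budget; once the conversation exceeds
# it, the oldest messages fall out of what the model can "see".

CONTEXT_WINDOW = 200  # hypothetical budget, in tokens


def rough_token_count(text: str) -> int:
    # Crude proxy: real tokenizers split on subwords, but word count
    # is close enough to demonstrate the idea.
    return len(text.split())


def visible_history(messages: list[str]) -> list[str]:
    """Return only the most recent messages that fit in the window."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = rough_token_count(msg)
        if used + cost > CONTEXT_WINDOW:
            break  # everything older than this is effectively forgotten
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

Everything the model appears to “remember” has to fit inside that window on every single turn; nothing in this loop updates the model itself.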
It’s good to know how things work, and this piece by Michael Lopp is a great primer on a couple of complexities of LLMs that I’ve seen trip up even technologically savvy people (or worse, leave them mistaking anthropomorphism for intelligence).
When you know how they work, you can set your expectations accordingly and rely less on magical thinking.