Normally, I’d be reading about NPM security breaches and AI security breaches separately, but now I can get them in the same article! Truly amazing how technology has progressed.
Fun times ahead!
Haha. Great summary.
By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, had broadcast the discovery on X (formerly Twitter).
Ha, by an intern
I mean, it’s not that big a deal. However, it would be another thing if the model itself leaked. Now that would be something.
edit: Like I thought, it turns out to be a TS wrapper with more internal prompts. The Fireship video is really funny; they use regex to detect if the user is angry 😭
The harness is as important as the model
As they tell it, Claude Code is over 80% written by the models anyway…
Tool usage is very important. Qwen3.5 (135b) can already do wonderful things on OpenCode.
Like a healthy brain. And just like a healthy brain, it’ll probably still hallucinate and make mistakes:
The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional “store-everything” retrieval.
As analyzed by developers like @himanshustwts, the architecture utilizes a “Self-Healing Memory” system.
We’re gonna make AGI and realize that being stupid sometimes and making mistakes is integral to general intelligence.
being stupid sometimes and making mistakes is integral to general intelligence.
Smart people figured this out a long time ago.
https://www.amazon.com/s?k=nassim+taleb+antifragile&adgrpid=187118826460
https://www.goodreads.com/en/book/show/18378002-intuition-pumps-and-other-tools-for-thinking
That’s what makes us humans at least…
At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.
Actual project knowledge is distributed across “topic files” fetched on-demand, while raw transcripts are never fully read back into the context, but merely “grep’d” for specific identifiers.
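That pointer-index pattern is easy to sketch. Here’s a toy version (all function names and the `topic -> path` line format are my assumptions, not the leaked TS source): a tiny always-loaded index mapping topics to file paths, topic files read only on demand, and transcripts scanned grep-style instead of loaded wholesale.

```typescript
import * as fs from "fs";
import * as path from "path";

// MEMORY.md-style index: one short pointer line per topic, always in context.
// Assumed line format: "topic-name -> topics/topic-name.md"
function parseIndex(indexText: string): Map<string, string> {
  const index = new Map<string, string>();
  for (const line of indexText.split("\n")) {
    const m = line.match(/^(\S+)\s*->\s*(\S+)$/);
    if (m) index.set(m[1], m[2]);
  }
  return index;
}

// Topic files hold the actual knowledge; they are only read when referenced.
function fetchTopic(index: Map<string, string>, topic: string, root: string): string | null {
  const rel = index.get(topic);
  if (!rel) return null;
  return fs.readFileSync(path.join(root, rel), "utf8");
}

// Raw transcripts are never read back in full -- just filtered for a needle,
// like grep, so only the matching lines ever reach the context.
function grepTranscript(transcriptText: string, needle: string): string[] {
  return transcriptText.split("\n").filter((line) => line.includes(needle));
}
```

The index stays cheap because it only ever holds pointers; the expensive content lives behind the lookup.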
This “Strict Write Discipline”—where the agent must update its index only after a successful file write—prevents the model from polluting its context with failed attempts.
For competitors, the “blueprint” is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a “hint,” requiring the model to verify facts against the actual codebase before proceeding.
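The write-discipline rule reduces to an ordering constraint: persist the topic file first, and only on success append the pointer to the index. A hypothetical sketch (function and path names invented, not from the leak):

```typescript
import * as fs from "fs";
import * as path from "path";

// "Strict Write Discipline": the index may only reference files that were
// actually written, so a failed write never pollutes the agent's context.
function saveMemory(root: string, topic: string, content: string): boolean {
  const rel = path.join("topics", `${topic}.md`);
  try {
    fs.mkdirSync(path.join(root, "topics"), { recursive: true });
    fs.writeFileSync(path.join(root, rel), content); // step 1: write the data
  } catch {
    return false; // write failed -> the index is left untouched
  }
  // Step 2: only now record the pointer in the always-loaded index.
  // Per the leak, whatever reads these files later is told to treat them
  // as hints to verify against the codebase, not as ground truth.
  fs.appendFileSync(path.join(root, "MEMORY.md"), `${topic} -> ${rel}\n`);
  return true;
}
```

The “skeptical memory” half lives in the prompts rather than the code: the stored note is a lead to re-check, never an answer.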
It’ll be interesting to see if continue.dev takes advantage of this methodology. My only complaint with it has been context handling.
The code is still on GitHub, just an earlier commit: https://github.com/chatgptprojects/clear-code/tree/627ab39f09681d9c7d6915861d36d361bdc6d889
In this mode, the agent performs “memory consolidation” while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.

this blog post reads like a marketing piece
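Marketing prose or not, the consolidation pass is conceptually just a dedupe-and-reconcile over stored observations. A toy sketch of the idea (not the leaked autoDream code; the contradiction handling here is naive key-based last-write-wins):

```typescript
interface Observation {
  key: string;       // what the note is about, e.g. "build-tool"
  value: string;     // the claim, e.g. "uses pnpm"
  timestamp: number; // when it was recorded
}

// Idle-time "consolidation": collapse repeated observations about the same
// key and resolve contradictions by keeping the most recent claim.
function consolidate(observations: Observation[]): Observation[] {
  const latest = new Map<string, Observation>();
  for (const obs of observations) {
    const existing = latest.get(obs.key);
    if (!existing || obs.timestamp > existing.timestamp) {
      latest.set(obs.key, obs); // newer claim wins the contradiction
    }
  }
  return [...latest.values()];
}
```

The real system presumably uses the model itself to judge contradictions; the point is only that consolidation shrinks memory instead of growing it.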
Best part of the leak, they use regex matches for sentiment lol
I think I saw that one of the keywords was dumbass, and another looked for you calling it a piece of shit
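Going by what commenters have quoted from the leak, the frustration detector is roughly a keyword regex. Something like this, where the two phrases are the ones people spotted and the actual pattern list in the source is presumably longer:

```typescript
// Naive regex sentiment check: no model call, just keyword matching.
// "dumbass" and "piece of shit" are the two triggers mentioned in the thread;
// the real list in the leaked source is likely longer.
const FRUSTRATION_PATTERN = /\b(dumbass|piece of shit)\b/i;

function userSeemsAngry(message: string): boolean {
  return FRUSTRATION_PATTERN.test(message);
}
```

Crude, but it never misfires on a model hallucination and costs nothing per message.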
Lmao, so the LLM framework falls back to similar shit to what ALICE used?
The best learning method is from your own mistakes. So, Claude is still learning.
I was like “Ha ha, nice April Fools’”… Then I kept reading the comments and… WTF‽
This is just the UI right? Or the models too?
Vote, people. There are town and city votes every day, or close to it. Vote!
Actual project knowledge is distributed across “topic files” fetched on-demand, while raw transcripts are never fully read back into the context, but merely “grep’d” for specific identifiers.
Consistent with a lot of the bugs and goofs I’ve heard people with long-running Claude instances will encounter.