The "Resident Evil" actress hand-builds a perfect-score AI with Claude, taking on big companies for just $0.7 a year.
The whole internet is stunned! The star of "Resident Evil" has crossed over into code and, together with Claude, built what is billed as the world's strongest AI memory system, claiming the world's first perfect score. For just $0.7 a year, it promises to give large models permanent memory.
It's a rare sight! Even Hollywood superstars are coding.
These days, the internet has been flooded with an open-source "AI memory system" called MemPalace, billed as the world's first and strongest memory layer for AI.
Unexpectedly, the list of core developers includes an A-list star:
Milla Jovovich, the actress who starred in "The Fifth Element" and "Resident Evil".
By day, she shoots on set, walks the Miu Miu runway, and looks after her children; by night, she throws herself into "vibe coding".
She teamed up with her engineer friend Ben Sigman and, working with Claude, open-sourced this hit project.
On LongMemEval, the most widely recognized and demanding long-term memory benchmark, it answered 500 out of 500 questions correctly, an unprecedented, world-first perfect score.
Now, on GitHub, MemPalace has already attracted 17.9k stars and 2k forks.
GitHub link: https://github.com/milla-jovovich/mempalace
The A-list star has pulled off the crossover!
An A-list actress crosses over and builds a hit AI with Claude
The birth of MemPalace was a bit of an accident.
Half a year ago, Ben Sigman, a veteran engineer and a friend of more than 20 years, introduced Milla to Claude Code for the first time.
A lifelong writer, she immediately realized that Claude Code could turn the imaginative words in her head into real, running code.
However, in the process of trying to build a large-scale game, she hit an "invisible wall".
Milla found that although AI is powerful, it lacks "soul" and "accumulation":
AI can only handle what has already been done; it is the humans using it who truly create something unique and different.
Without our imagination and insatiable curiosity, AI is just a search engine.
This is not just empty talk, but a very specific pain point she encountered in development:
Every time she started a new conversation with the AI, everything discussed before was wiped: the designs, the rejected plans, the ideas that had been tried and failed.
So Milla keenly realized that solving AI's long-term memory problem mattered even more than the game project itself.
She and Ben Sigman decided to change direction and turn this "obstacle" into an independent project.
Milla, as the "architect", reshaped the logic, and Ben implemented the blueprint in code.
After six months of joint effort, this system, called "Memory Palace" (MemPalace), was finally born.
So, what exactly is MemPalace?
The "Memory Palace" is born, scoring 100% and outperforming the SOTA
The inspiration for this name comes from ancient Greece two thousand years ago.
At that time, Greek orators used a method called "Method of Loci" to memorize long speeches —
They would "place" each part of the content in different rooms. When giving a speech, they only needed to walk through the palace in their minds, and the content would be recalled one by one.
So MemPalace borrows the "memory palace" technique, directly structuring the data as a virtual space:
Each project, each person, and each theme is a "wing" in the palace.
There are "rooms" in the wings, classified by theme: one for the authentication system, one for database selection, one for the deployment process, with no limit on the number.
The rooms are connected by "halls", which are divided by memory type: five fixed channels for decisions, milestones, preferences, suggestions, and discoveries.
Between rooms with the same name in different wings, the system will automatically generate "tunnels".
For example, the wing of the person "Kai" has a room called "auth migration", and the wing of the project "Driftwood" also has a room called "auth migration": a tunnel is established automatically, instantly connecting memories of the same thing from different perspectives.
Each room is equipped with a "closet" to store summary indexes; the "drawers" in the closet store the full text of the original conversations without deleting a single word.
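The hierarchy above can be sketched as a small data model. This is a hypothetical sketch (class and field names are invented here, and MemPalace's actual internals may differ), meant only to show how wings, rooms, halls, drawers, and auto-generated tunnels fit together:

```python
from collections import defaultdict

# The five fixed "halls" (memory types) described above.
HALLS = ("decisions", "milestones", "preferences", "suggestions", "discoveries")

class Palace:
    def __init__(self):
        # wing name -> room name -> hall -> list of filed memories
        self.wings = defaultdict(lambda: defaultdict(lambda: {h: [] for h in HALLS}))

    def store(self, wing, room, hall, summary, full_text):
        """File a memory: the summary goes into the room's 'closet' index,
        the untouched full text goes into a 'drawer'."""
        self.wings[wing][room][hall].append({"summary": summary, "drawer": full_text})

    def tunnels(self):
        """Auto-generated links between same-named rooms in different wings."""
        rooms = defaultdict(list)
        for wing, wing_rooms in self.wings.items():
            for room in wing_rooms:
                rooms[room].append(wing)
        return {room: wings for room, wings in rooms.items() if len(wings) > 1}

palace = Palace()
palace.store("Kai", "auth migration", "decisions", "prefers Clerk", "(full chat)")
palace.store("Driftwood", "auth migration", "decisions", "sprint: move auth to Clerk", "(full chat)")
print(palace.tunnels())  # {'auth migration': ['Kai', 'Driftwood']}
```

Because the two rooms share a name across different wings, the "tunnel" between them falls out of the structure for free, no explicit linking step required.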
When searching, AI doesn't need to go through all the data.
It first locates the wing, then enters the room, then opens the drawer, narrowing the scope from the entire database to a precise hit.
The team tested it on more than 22,000 real conversation memories: full-database search recalls 60.9%, while filtering by wing and room lifts recall to 94.8%, a gain of nearly 34 percentage points.
In other words, the structure itself is the retrieval ability.
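A minimal sketch of that narrowing step, under stated assumptions: the real system uses ChromaDB vector search with metadata filters, while here a toy word-overlap score stands in for vector similarity, and the record layout is invented for illustration:

```python
# Toy corpus: each memory tagged with wing/room metadata, roughly how
# MemPalace's ChromaDB records are described. Ranking below is naive
# word overlap, a stand-in for real embedding similarity.
memories = [
    {"wing": "Driftwood", "room": "auth migration", "text": "Kai recommended Clerk over Auth0 for pricing"},
    {"wing": "Driftwood", "room": "database selection", "text": "chose Postgres for the analytics store"},
    {"wing": "Kai", "room": "auth migration", "text": "Kai has 3 years of backend experience"},
]

def search(query, wing=None, room=None):
    # 1) Structural filter first: the "wing -> room" walk.
    pool = [m for m in memories
            if (wing is None or m["wing"] == wing)
            and (room is None or m["room"] == room)]
    # 2) Only then rank what's left, instead of the whole database.
    q = set(query.lower().split())
    return max(pool, key=lambda m: len(q & set(m["text"].lower().split())))

hit = search("why did we pick clerk", wing="Driftwood", room="auth migration")
print(hit["text"])  # Kai recommended Clerk over Auth0 for pricing
```

The point of the design is step 1: by the time any similarity scoring happens, the candidate pool has already shrunk from the whole database to a single room.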
Moreover, all data is stored locally in ChromaDB: no API calls, no cloud round-trips, no storage cost.
For only $0.7 a year, remember everything
Now for a rather striking cost comparison:
By Milla's reckoning, a heavy AI user who keeps every conversation accumulates about 19.5 million tokens of history in half a year.
Simply asking a large model to summarize it all would cost about $507 a year, and summaries lose the key reasoning along the way.
With MemPalace, only about 170 tokens of key facts (your team, projects, preferences...) are loaded each time the AI starts; everything else is retrieved only when needed.
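The article does not state the per-token prices or session counts behind the $507 and $0.7 figures. The back-of-the-envelope check below uses assumed values ($13 per million tokens for summarization, $3 per million input tokens, ~1,400 sessions a year) chosen purely because they reproduce the quoted numbers:

```python
# Back-of-the-envelope check of the article's figures.
# NOTE: the prices and session count below are NOT from the article;
# they are illustrative guesses that happen to match the quoted totals.
tokens_per_half_year = 19_500_000
yearly_tokens = tokens_per_half_year * 2             # 39M tokens/year

summarize_cost = yearly_tokens * 13 / 1_000_000      # summarize-everything approach
startup_facts = 170                                   # tokens loaded per session
sessions_per_year = 1_400                             # assumed: roughly 4 sessions/day
mempalace_cost = startup_facts * sessions_per_year * 3 / 1_000_000

print(round(summarize_cost))       # 507
print(round(mempalace_cost, 2))    # 0.71
```

Whatever the exact prices, the shape of the argument holds: the cost scales with 170 tokens per session rather than with tens of millions of tokens of history.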
AAAK: A "shorthand" for AI
There is also an eye-catching design in MemPalace called AAAK.
This is a compressed dialect specifically written for AI to read, not for humans.
For example, the following English text is said to run about 1,000 tokens:
Priya is the leader of the Driftwood team: Kai (backend, 3 years), Soren (frontend), Maya (infrastructure), and Leo (junior, joined last month). They are working on a SaaS data analysis platform. The current sprint is to migrate authentication to Clerk. Kai recommended Clerk instead of Auth0 based on price and development experience.
Compressed into AAAK, it supposedly comes to only about 120 tokens:
- TEAM: PRI(lead) | KAI(backend,3yr) SOR(frontend) MAY(infra) LEO(junior,new)
- PROJ: DRIFTWOOD(saas.analytics) | SPRINT: auth.migration→clerk
- DECISION: KAI.rec:clerk>auth0(pricing+dx) | ★★★★
According to the project, the information is lossless and the token count drops roughly 8-fold.
Best of all, AAAK is essentially structured text: any large model that can read text (Claude, GPT, Gemini) can understand it directly, with no decoder and no fine-tuning.
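Because AAAK is just structured text, an encoder for it is ordinary string formatting. The sketch below is a hypothetical encoder for the TEAM line shown above (the function name and input layout are invented here, not MemPalace's actual spec):

```python
# A hypothetical AAAK-style encoder: it renders structured facts as compact
# pipe-delimited text, so any text model can read the output with no decoder.
# The field layout is illustrative, modeled on the example line above.
def encode_team(lead, members):
    people = " ".join(f"{m['id']}({m['role']})" for m in members)
    return f"TEAM: {lead}(lead) | {people}"

members = [
    {"id": "KAI", "role": "backend,3yr"},
    {"id": "SOR", "role": "frontend"},
    {"id": "MAY", "role": "infra"},
    {"id": "LEO", "role": "junior,new"},
]
line = encode_team("PRI", members)
print(line)  # TEAM: PRI(lead) | KAI(backend,3yr) SOR(frontend) MAY(infra) LEO(junior,new)
```

Note that this compresses characters, not necessarily tokens: whether the dense punctuation actually saves tokens depends on the tokenizer, which is exactly the point the community later tested.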
The open-source community picked it apart within 48 hours
But the story doesn't end here.
Within 48 hours of MemPalace's launch, the open-source community had wrung every exaggeration out of the project.
The first blow was aimed at AAAK.
AAAK is the "abbreviated dialect" developed for MemPalace. The project initially claimed it achieved "30-fold lossless compression".
The community ran it through a real tokenizer and found that the project's own example saved no tokens at all: the original English text was 66 tokens, while its AAAK encoding came to 73.
Worse, AAAK is lossy, not lossless. On LongMemEval, AAAK mode scored only 84.2%, 12.4 percentage points below the 96.6% of raw mode.
The second blow was aimed at the "+34% palace gain".
This number is a comparison between "searching without filtering" and "searching after filtering with wings and rooms as metadata". Metadata filtering is a standard function of ChromaDB, not an original mechanism of MemPalace. It's useful, but not a moat.