Give Your AI a Memory That Outlives the Chat

Andrej Karpathy (formerly of OpenAI) posted the concept to GitHub earlier this year.
It's called the "LLM wiki." About 1,500 words. It describes a pattern where you build a personal wiki that an LLM maintains for you: persistent, compiled, an artifact that gets richer every time you add to it.
Information is compiled once and kept current, rather than retrieved from scratch for every question.
Most people probably read it, thought "that's interesting," and closed the tab.
I built it. This article shows how to set it up, and what I learned along the way.
Every conversation starts empty.
You open the chat, explain who you are, what you do, what you decided last week. You get useful feedback. You close the tab. Tomorrow you do it all again.
The tools work well, but the layer of context beneath them doesn't exist.
Built-in memory helps a little, it's true.
Claude remembers your name and job title. ChatGPT knows you prefer bullet points. But neither knows the details of the projects you're working on, the deal you're about to close, the sales call you missed last month, or what happened on your team this week.
That kind of context doesn't live anywhere persistent.
The option most developers reach for next is RAG.
RAG is genuinely useful, but it solves a different problem.
It retrieves from scratch for every question. You embed documents, pull back chunks at query time, and hope the right chunks surface. Nothing accumulates.
If the answer you need spans five documents, the LLM has to find and reassemble those pieces every single time.
The vault approach in this article compiles information and keeps it current. When you add something new, the LLM ingests it, reads it, updates the related pages, and flags anything that points in the opposite direction.
The synthesis is already done before you ask your next question.
Karpathy puts it neatly: a wiki is a persistent, curated artifact.
The context is already there. Analysis doesn't vanish with the chat history. It compounds.
Hi! I'm Sara, and I write a hands-on AI blog every week at Learn AI: tools, patterns, and what actually breaks in production. It's free to subscribe.
Structure: two folders and a schema file
The whole structure fits in a single directory tree:
vault/
├── CLAUDE.md ← schema file, entry point for any AI
├── Raw/ ← immutable source documents
│ ├── Meeting Notes/
│ ├── Documents/
│ └── _pending.md ← compilation queue
└── Wiki/ ← LLM-generated, structured, indexed
├── Projects/
├── People/
├── Decisions/
├── _hot.md ← active cache
├── _log.md ← audit trail
└── _index.md ← master index
(This is just an example. Feel free to customize it.)
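To make the layout concrete, here's a minimal one-time bootstrap sketch in Python. It creates the example tree above; the stub file contents are placeholders of my own, not part of the pattern.

```python
from pathlib import Path

VAULT = Path("vault")

# Folder names mirror the example tree above; adjust to taste.
FOLDERS = [
    "Raw/Meeting Notes",
    "Raw/Documents",
    "Wiki/Projects",
    "Wiki/People",
    "Wiki/Decisions",
]

# Stub contents are illustrative placeholders.
FILES = {
    "CLAUDE.md": "# Vault schema\n\nRead Wiki/_hot.md first. Never edit Raw/.\n",
    "Raw/_pending.md": "# Compilation queue\n",
    "Wiki/_hot.md": "# Hot cache\n",
    "Wiki/_log.md": "# Audit trail\n",
    "Wiki/_index.md": "# Master index\n",
}

for folder in FOLDERS:
    (VAULT / folder).mkdir(parents=True, exist_ok=True)

for name, stub in FILES.items():
    path = VAULT / name
    if not path.exists():  # never clobber an existing vault
        path.write_text(stub, encoding="utf-8")
```

Run it once and you have the skeleton; everything after that is filled in by the daily, weekly, and monthly jobs.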
Raw/ is your source of truth.
Meeting transcripts, exported Slack threads, documents dropped in from wherever your work actually happens. The rule is absolute: the AI reads Raw, never edits it. Append-only.
Wiki/ is what the AI builds and maintains. One file per project, person, decision, or domain. It's compiled, indexed, cross-referenced. This is what the AI reads first when you ask a question.

If you've worked with data pipelines, this split is familiar. Raw is your landing zone. Wiki is your curated layer. If the Wiki ever gets corrupted or drifts, you rebuild it from Raw. You never lose the source.
The schema file lives in the root and tells any AI how the vault is organized, what to read first, and what the rules are. I call it CLAUDE.md. If you use Codex, AGENTS.md works. Call it anything, as long as you point the AI at it at the start of each session.
This is the most overlooked part of the implementation, and it's why most setups die silently.
A folder of markdown files is not a system. These three files make it one.

_hot.md is a cache. Every daily run rewrites this file with the most active threads, any key numbers or deadlines that surfaced, and one line on anything urgent. It stays under 500 tokens. When you open a chat and want quick context, the AI reads _hot.md first; no need to load the full Wiki.
_pending.md is the queue. Every time a new file lands in Raw, its filename and date are appended here. When the weekly compilation runs, it reads this file, processes each entry, compiles it into the Wiki, and tags it [COMPILED — 2026-05-01]. Without this file, daily ingestion and weekly compilation can't coordinate. You get orphaned raw files and a Wiki that is weeks behind.
_log.md is the audit trail. Every automated run appends a timestamped entry: what ran, what files were processed, what Wiki pages were created or updated. If the system breaks, this is how you find out where. Karpathy's thread has a helpful tip here: start each log entry with a consistent prefix like ## [2026-05-01] daily-ingest so every log is parseable with basic unix tools.
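As a sketch of that logging convention, here's a small Python helper. The `## [YYYY-MM-DD] job-name` prefix is the convention from the thread; the helper's name and signature are my own invention.

```python
import datetime
from pathlib import Path

LOG = Path("vault/Wiki/_log.md")

def log_run(job: str, lines: list) -> None:
    """Append a timestamped, grep-friendly entry to the audit trail.

    Entries start with '## [YYYY-MM-DD] job-name' so a plain
    `grep '^## \\[' _log.md` lists every run the vault has seen.
    """
    stamp = datetime.date.today().isoformat()
    entry = [f"## [{stamp}] {job}"] + [f"- {line}" for line in lines] + [""]
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as fh:
        fh.write("\n".join(entry) + "\n")

log_run("daily-ingest", [
    "saved 2 transcripts to Raw/Meeting Notes/",
    "rewrote Wiki/_hot.md",
])
```

Because every run goes through the same helper, the log stays uniform enough to audit with one grep.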
A vault without these files gathers dust. With them, you have a working pipeline.
Schema file: teaching any AI how to read your vault
CLAUDE.md is the entry point. Every session starts here.
What goes into it:
- A folder map (what's in Raw, what's in Wiki, what's in each subdirectory)
- Read order (_hot.md always first, then the relevant domain index)
- Hard rules: "never edit files in Raw/", "never invent facts that aren't in source files", "always append to _log.md after each run"
- Index structure (what indexes exist, how they're named)

The schema file is also where you encode your own prompting defaults. I use a well-known pattern, adapted directly into the schema:
I want to [TASK] so that [WHAT SUCCESS LOOKS LIKE].
First, read the uploaded files completely before responding.
DO NOT start executing yet. Ask me clarifying questions so we
can refine the approach together.
Only begin work once we've aligned.
Combined with your schema, every AI that reads your vault already knows to ask before it acts. You stop getting half-baked output from a model that thought it understood the job.
The underlying philosophy, stated plainly:
- Context beats commands. Feed the AI files, not instructions.
- Examples beat instructions. Show what you want, don't explain it.
- Constraints beat rules. Say what the output is NOT, and let the AI choose how.
- Goals beat instructions. Say what you need to achieve, not how.
- State the task and what success looks like. Two sentences.
The automation layer: three cadences, not one
Two failure modes I've seen: you update the vault manually and it's fine for a week, then life happens and it's been three weeks with nothing uploaded.
Or you build one big automated job that ingests, compiles, and checks everything in one pass, and now that daily job is editing Wiki files it shouldn't touch.
The fix is to split the work. Let's walk through each cadence.
Daily (weekday mornings): ingestion only
Pull from your sources. Drop new files in Raw/. Queue them in _pending.md. Rewrite _hot.md based on what came in.
No Wiki editing. The daily run is mechanical, fast, and safe enough to run unattended every day.

Here's what the prompt looks like in action:
Every weekday morning, do the following:
1. Check [your project management tool] for items updated or
created in the last 24 hours.
2. Check [your meeting notes source] for new transcripts. For
each one found, save it as a markdown file in Raw/Meeting Notes/
using the format YYYY-MM-DD — [meeting title].md.
Add a line to Raw/_pending.md with the filename and date.
3. Check [your team communication tool] for messages in key
channels. Extract decisions, action items, and anything
that affects an active project.
4. Check [your email] for flagged or important messages.
Summarize what needs attention.
After completing the above, rewrite Wiki/_hot.md with:
- The most active threads or open decisions from today's scan
- Any key numbers or deadlines that surfaced
- One line on anything urgent
Keep _hot.md under 500 tokens.
Replace the bracketed placeholders with your actual tools. The pattern works whether you're pulling from Linear and Slack, Notion and email, or anything else.
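If you'd rather drive part of the daily run from a script instead of a prompt, here's a rough Python sketch of step 2 and the _hot.md rewrite. The function names and exact paths are my assumptions based on the tree above, not part of the original pattern.

```python
import datetime
from pathlib import Path

RAW = Path("vault/Raw/Meeting Notes")
PENDING = Path("vault/Raw/_pending.md")
HOT = Path("vault/Wiki/_hot.md")

def ingest_transcript(title: str, text: str) -> Path:
    """Save a transcript under Raw/ and queue it in _pending.md."""
    stamp = datetime.date.today().isoformat()
    RAW.mkdir(parents=True, exist_ok=True)
    path = RAW / f"{stamp} — {title}.md"   # matches YYYY-MM-DD — [title].md
    path.write_text(text, encoding="utf-8")
    with PENDING.open("a", encoding="utf-8") as fh:
        fh.write(f"- {path.name} ({stamp})\n")
    return path

def rewrite_hot(threads, deadlines, urgent):
    """Overwrite (never append to) the hot cache with today's snapshot."""
    body = ["# Hot cache", "",
            "## Active threads", *[f"- {t}" for t in threads], "",
            "## Key numbers / deadlines", *[f"- {d}" for d in deadlines], "",
            f"**Urgent:** {urgent}"]
    HOT.parent.mkdir(parents=True, exist_ok=True)
    HOT.write_text("\n".join(body) + "\n", encoding="utf-8")
```

Note that _hot.md is rewritten wholesale each morning, while Raw/ and _pending.md are only ever appended to; that asymmetry is the whole point of the cache.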
Weekly (Monday morning): compilation
Read _pending.md. For each unprocessed file: read it fully, create a structured Wiki page in the appropriate domain folder, update the relevant index, add backlinks to related pages, and mark the entry as compiled.
The weekly job is the synthesis step. It turns raw content into structured knowledge. It's slower, it's more expensive, and it's worth reviewing periodically to check that the AI is filing things correctly.
Monthly (1st of the month): linting
Health check only. Scan the entire Wiki for stale pages (dates or statuses that newer content has superseded), missing backlinks, conflicts between pages, coverage gaps, and orphaned pages not referenced in any index.
Write a report file. Post a plain-English summary. Don't auto-fix anything.
The monthly job never edits Wiki content directly. That boundary is what makes it safe to run unsupervised.

Each cadence has a different risk profile: daily is mechanical, weekly is interpretive, monthly is diagnostic. Mixing them into one job is how vaults get ruined.
On tooling: any system with a scheduler works here. A cron job driving a CLI agent, an MCP-enabled setup, n8n, or a desktop AI tool that supports scheduled jobs.
The prompts above are the logic. The runner is replaceable.
What actually changes
You stop explaining yourself over and over, and conversations change character.
When context is already loaded, you stop using the AI for isolated questions and start using it for real work.
The AI already knows your open projects, your latest decisions, your team. You ask "What should I prioritize today?" and it reads _hot.md and your project files and gives you a grounded answer.
Portability is another win.
Your context lives in a folder on your machine, not inside any AI's memory system. Point a different AI at the same folder and it reads the same files. Change tools whenever you want. The vault goes with you.
A few failure modes to be aware of before building:
_pending.md backs up if your daily intake is too broad and the weekly compilation can't keep up. Tighten what you pull each day.
The Wiki drifts if no one reads _log.md. The monthly linter catches this, but only if you read the report.
The whole system breaks if automation ever touches Raw. One job writing to Raw "just this once" and you've lost the source-of-truth guarantee. That boundary doesn't bend.
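One cheap way to make that boundary mechanical rather than aspirational is to funnel every automated write through a guard. This is a sketch of my own, not part of the original pattern.

```python
from pathlib import Path

RAW = Path("vault/Raw").resolve()

def safe_write(path: Path, text: str) -> None:
    """Write a file, refusing any path that lands inside Raw/.

    Raw/ is append-only for humans and off-limits for automation;
    resolving first defeats '..' tricks in relative paths.
    """
    target = path.resolve()
    if target == RAW or RAW in target.parents:
        raise PermissionError(f"automation must never write to {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(text, encoding="utf-8")
```

If every job writes through this one function, "just this once" becomes a crash instead of a silent corruption.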
The boring part of maintaining a knowledge base was never the reading or the thinking.
It's the bookkeeping: updating cross-references, keeping summaries current, noting when new data contradicts old claims. People abandon wikis because the maintenance load grows faster than the value.
LLMs don't get bored, don't forget to update a cross-reference, and can touch 15 files in one pass.
Karpathy traces this back to Vannevar Bush's Memex concept from 1945: a personal, curated store of information with associative trails between texts. Bush's vision was closer to this than the web ever was. The part he couldn't solve was who did the maintenance.
I built my vault with Claude as the AI layer and a plain markdown editor as the front end.
The pattern works with any AI that reads files and any editor that handles plain text. A folder is just a folder. Files are just text.
You set this up once. After that, your AI stops starting from zero.
Thanks for reading!



