Auto-Generate Cursor Rules and Skills From Your Chat History
Cursor stores agent sessions as JSONL transcripts on disk, which means an agent can read its own history and propose rules and skills based on patterns it spots. Eric Zakariasson's prompt formalizes this -- point Cursor at your transcripts and it emits ready-to-save .cursor/rules/*.mdc and .cursor/skills/<slug>/SKILL.md files for review.
Eric Zakariasson posted a prompt on April 13, 2026 that flips a familiar idea: instead of writing your Cursor rules from scratch, point the agent at its own past transcripts and let it propose them. Cursor's harness is file-first, so session transcripts are readable JSONL on disk that the agent scans with the same tools it uses for code. The prompt walks the transcripts, spots recurring patterns, and emits ready-to-save files in .cursor/rules/ and .cursor/skills/. You review the diff, accept what is useful, and the next chat starts smarter.
Key Takeaways
- Cursor session transcripts are readable JSONL on disk (file-first harness)
- The prompt reads only parent agent transcripts; subagents are skipped (noisy and task-local)
- Outputs go to `.cursor/rules/*.mdc` (always-on) or `.cursor/skills/<slug>/SKILL.md` (loaded when relevant)
- Rules must be general -- no project specifics, no one-off facts
- Same idea works for Claude Code via CLAUDE.md and .claude/skills/
The "Self-Learning Coding Agent" Idea
Zakariasson framed the prompt as a step "towards self-learning coding agents." Every time you correct Cursor, push back on a bad suggestion, or steer it toward a preferred library, you are teaching it something. Most of those lessons evaporate at session end. The next chat starts fresh and you correct the same thing again.
A self-learning agent closes that loop by extracting durable lessons from past chats and writing them where the next chat will read. Rules, skills, and memory files all serve that purpose. The bottleneck used to be that you had to author those files by hand. The new move is to let the agent do the authoring -- it has read every transcript and can propose the rule that would have prevented the correction.
The prompt does not run continuously. You invoke it manually, review the proposed files, and accept what holds up. The agent learns at human pace under human review.
How the Prompt Works (File-First Harness, JSONL Transcripts)
Two facts about Cursor make this work. First, the harness is file-first: when the agent reads or writes, it uses the same file tools you would. There is no separate transcript API. Transcripts live in your .cursor/ directory as plain JSONL files, and the agent reads them with cat or its equivalent. Second, parent agent sessions and subagent sessions are kept separate. Parent transcripts capture conversations you actually had. Subagent transcripts capture noisier task-local work the parent delegated -- much less useful as a source of general rules.
The prompt encodes both facts. It tells the agent to walk parent transcripts only, skip the subagents/ directory, and treat the transcripts as the input dataset. It then constrains output. Files go to .cursor/rules/*.mdc if they are general enough to apply across many tasks, or to .cursor/skills/<slug>/SKILL.md if they describe a reusable procedure that only matters in specific situations. Rules must be general -- no project specifics, no version numbers, no one-off facts -- and the agent is told not to repeat anything already in .cursor/rules/** or AGENTS.md.
Here is the decoded prompt verbatim:
```
## Task
Read my **parent** agent transcripts (JSONL; skip `subagents/`). Output **ready-to-save files** under:
- `.cursor/rules/` → small **`.mdc`** files (split by theme)
- `.cursor/skills/<slug>/SKILL.md` → only if a **reusable procedure** is justified

Rules must be **general** (many tasks). **No** product/feature specifics.

## Ignore
Plans workflows, agent-compat tooling, meta "mine chats" threads, and anything already in `.cursor/rules/**` or `AGENTS.md` (say "already covered: path" instead of repeating).

## Don't output
Transcript IDs, counts, quotes, methodology essays, or evidence sections.
```
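To make the transcript-walking step concrete, here is a minimal Python sketch of what "read parent transcripts, skip `subagents/`" amounts to. The directory layout and the JSONL field names (`role`, `content`) are assumptions for illustration, not Cursor's documented schema -- check your own `.cursor/` directory before relying on them:

```python
import json
from pathlib import Path


def iter_parent_transcripts(root: Path):
    """Yield parsed events from parent-agent transcripts under `root`,
    skipping anything in a subagents/ directory (noisy, task-local)."""
    for path in sorted(root.rglob("*.jsonl")):
        if "subagents" in path.parts:
            continue
        with path.open(encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if line:
                    yield json.loads(line)


def user_messages(root: Path) -> list[str]:
    """Collect user-authored messages -- the raw material for proposed rules."""
    return [
        event.get("content", "")
        for event in iter_parent_transcripts(root)
        if event.get("role") == "user"
    ]
```

The real prompt delegates all of this to the agent's own file tools; the point of the sketch is only that nothing more exotic than globbing and line-by-line JSON parsing is involved.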
Zakariasson shared the prompt as a Cursor deeplink so anyone can open it in a fresh chat with one click. The format is documented:
https://cursor.com/link/prompt?text=<URL-encoded prompt>
The URL is capped at 8000 characters, the prompt is pre-filled but never auto-executed, and you confirm before the agent runs. The desktop equivalent is `cursor://anysphere.cursor-deeplink/prompt?text=...`. Construct your own link by URL-encoding any prompt body into the text parameter.
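The encoding step is ordinary percent-encoding. As a minimal sketch (the URL formats and the 8000-character cap come from the text above; the function name is made up):

```python
from urllib.parse import quote

MAX_DEEPLINK_LEN = 8000  # documented cap on the deeplink URL length


def make_cursor_deeplink(prompt: str, desktop: bool = False) -> str:
    """Build a Cursor prompt deeplink by URL-encoding the prompt body."""
    base = (
        "cursor://anysphere.cursor-deeplink/prompt?text="
        if desktop
        else "https://cursor.com/link/prompt?text="
    )
    url = base + quote(prompt, safe="")
    if len(url) > MAX_DEEPLINK_LEN:
        raise ValueError(f"deeplink is {len(url)} chars; cap is {MAX_DEEPLINK_LEN}")
    return url
```

Anything you can fit under the cap -- including a prompt you have customized for your own repo -- becomes a one-click shareable link.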
Step-by-Step: Running It on Your Codebase
Five steps from click to merged rules.
1. Click the deeplink. Open the `cursor.com/link/prompt?text=...` URL Eric shared, or one you constructed yourself. Cursor opens a new chat with the prompt pre-filled. Nothing has run yet.
2. Confirm to start the run. Once you send the prompt, the agent scans your `.cursor/` chat history. It walks parent transcripts only and skips `subagents/`.
3. Read the proposed diff. The agent emits proposed files as a diff in `.cursor/rules/` and, if any reusable procedures jumped out, in `.cursor/skills/`. Each rule file is small and theme-scoped. Each skill has its own slug directory.
4. Accept, edit, or reject each one. Treat the diff like a code review from a colleague who has read your transcripts. Some rules will be obviously right. Some will need tighter wording. Some will be too project-specific or duplicate something in `AGENTS.md`. Cut those.
5. The next chat sees the new rules. Always-on rules in `.cursor/rules/` apply automatically. Skills in `.cursor/skills/` load when the agent decides the situation matches.
The whole loop usually takes ten or fifteen minutes the first time, mostly spent on the review pass. Subsequent runs go faster because most obvious rules are already saved and the agent learns to write "already covered: path" instead of proposing a duplicate.
Reviewing the Output -- What to Accept, Edit, Reject
The accept-edit-reject pass is where the value gets locked in or lost. A few heuristics from running the prompt repeatedly.
Accept rules that capture a recurring preference and read like a sentence you would have written yourself. "Prefer async/await over .then() chains" is the right shape -- general, applies across tasks, encodes a real opinion you have stated in past chats.
Edit rules that have the right idea but the wrong scope. The agent sometimes proposes a rule correct for the project you were chatting about and wrong as a general policy. Tighten it: keep the principle, drop the project name, drop the version number.
Reject rules that are too narrow or already covered. Anything that boils down to "the user prefers their code to work" is filler. Anything that names a specific function or file is a fact, not a rule. Anything already in AGENTS.md should be dropped to avoid duplicate context.
The same triage applies to proposed skills. A skill earns its slug only if there is a real, repeatable procedure -- "When fixing a flaky test, do these four checks first" is a skill. "Write good code" is not.
Rules vs Skills: Where to Put What
Both surfaces live under .cursor/, but they behave differently and the right home depends on how often the lesson applies.
Rules in .cursor/rules/*.mdc are always-on. Each file is a Markdown body with YAML frontmatter declaring description and optional globs or alwaysApply flags. Rules load into every chat and pay a context cost on every turn. Reserve them for general preferences that apply across tasks: library choices, formatting conventions, "never do X" guardrails, file-organization norms.
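As a sketch of the shape (the `description`, `globs`, and `alwaysApply` frontmatter keys come from the text above; the rule body itself is a made-up example, not output from the prompt):

```markdown
---
description: Prefer pnpm over npm for package operations
globs: "**/package.json"
alwaysApply: false
---

Use `pnpm` for installing, adding, and removing dependencies.
Do not suggest `npm` or `yarn` commands.
```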
Skills in .cursor/skills/<slug>/SKILL.md load dynamically. The first line of SKILL.md is the summary the agent uses to decide if the skill is relevant on a given task. If so, it loads the body. If not, the file stays on disk at zero cost. Reserve skills for reusable procedures that only matter sometimes: a debugging routine for a flaky test class, a checklist before a database migration, conventions for adding a new MCP server.
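A hypothetical skill, to show the shape -- the first line is the summary the agent reads to decide relevance, and the body only loads when it matches:

```markdown
Debug a flaky test by isolating nondeterminism before touching assertions.

When a test fails intermittently:
1. Re-run it alone to rule out test-order coupling.
2. Check for shared state: globals, temp files, database rows.
3. Check for time or randomness dependencies and pin or seed them.
4. Only then adjust the assertion or quarantine the test.
```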
As of Cursor 2.3.35, new rules created via the UI generate a SKILL.md in .cursor/skills rather than an .mdc in .cursor/rules, reflecting the broader shift toward dynamic loading. The auto-generation prompt still emits both, and the choice between them is the same one you would make manually. Always-on goes in rules. Sometimes-on goes in skills.
For background on Cursor's other recent surfaces -- including the agent-generated UIs the editor now produces -- the Cursor canvases tutorial is the longer read.
What Kinds of Patterns Extract Well
Not every recurring pattern deserves a rule. The ones that extract well share a shape: they encode a decision you keep making the same way, and stating the decision once costs much less than restating it every chat.
Library and pattern preferences are the cleanest case. "Use Zod for runtime validation" or "Always use pnpm not npm." If you have made the same correction three times, it is a rule.
Formatting conventions your linter does not catch. Comma placement, test-file naming, whether to wrap conditionals in braces. The agent has watched you reformat its output enough times to write the rule for you.
Repeated debugging steps become skills more often than rules. "When a Convex function fails with a schema error, check the generated types first" is a procedure, not always-on context.
File-organization preferences. Where new components go, how route handlers are named, whether tests live next to source or in __tests__.
"Never do X" guardrails prevent regressions. The agent only proposes these if you have actually corrected it on the same mistake before -- a useful signal the rule is real.
Project-vocabulary mappings for domain languages. "When the user says 'tenant,' they mean an organization." Worth their context cost on projects with heavy jargon.
The patterns that extract poorly look like rules but are really just facts: specific function signatures, file paths, bug symptoms. Those belong in code comments, not in a rule that applies to every chat.
Adapting the Same Idea for Claude Code
The technique is not Cursor-specific. Claude Code has the same primitives: an always-on memory file (CLAUDE.md) and a dynamic skills directory (.claude/skills/<name>/SKILL.md). It also stores sessions as JSONL on disk under ~/.claude/projects/<encoded-cwd>/sessions/. Point a Claude Code agent at those files, give it a prompt with the same shape as Eric's, and you get the same loop: extract general rules, propose them as a diff, review and accept.
The dynamics differ in one important way. Claude Code already has a layered memory system -- global, project, and local -- so a chat-mining prompt must be careful about which layer it writes to. A general preference belongs in the global ~/.claude/CLAUDE.md. A project-specific convention belongs in the repo's ./CLAUDE.md. Without that disambiguation, the agent will pile everything into one file and you end up with the bloat the Claude Code context management guide warns against.
A workable Claude Code variant would point at ~/.claude/projects/, scope to recent sessions, and split outputs by layer: general rules to global CLAUDE.md, project-specific rules to repo CLAUDE.md, procedures to .claude/skills/<name>/SKILL.md. The review pass is the same.
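The "scope to recent sessions" step is a simple mtime filter. A minimal sketch, assuming the `~/.claude/projects/<encoded-cwd>/sessions/*.jsonl` layout described above (verify it against your own installation; the function name and the 14-day default are illustrative):

```python
import time
from pathlib import Path


def recent_claude_sessions(days: int = 14, home: Path = Path.home()) -> list[Path]:
    """List Claude Code session transcripts modified in the last `days` days,
    newest first, across all projects under ~/.claude/projects/."""
    cutoff = time.time() - days * 86400
    root = home / ".claude" / "projects"
    return sorted(
        (p for p in root.glob("*/sessions/*.jsonl") if p.stat().st_mtime >= cutoff),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
```

Feed the resulting paths to the mining prompt instead of the whole history and the run stays fast even after months of sessions.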
The deeper point is that file-first agent harnesses make this pattern possible. As long as the transcript is a file the agent can read, you can prompt the agent to mine it. The prompts will keep getting shorter as agent surfaces standardize, but the loop -- read past chats, propose what you would have written by hand, review and accept -- is the durable shape.
Frequently Asked Questions
Where does Cursor store chat history?
Cursor stores agent sessions as JSONL transcripts on disk, with parent agent transcripts kept separately from subagent transcripts. The files contain user messages, assistant text, and tool call inputs. Tool outputs are excluded because they would be too large to store inline with the rest of the transcript.
What is the difference between .cursor/rules and .cursor/skills?
Rules in `.cursor/rules/*.mdc` are always-on context loaded into every chat. Skills in `.cursor/skills/<slug>/SKILL.md` are loaded dynamically when the agent decides they are relevant. Use rules for general preferences that apply broadly. Use skills for reusable procedures that only matter on specific tasks.
Can I auto-generate rules from chat history in Claude Code too?
Yes, the same idea works. Claude Code stores sessions as JSONL files under `~/.claude/projects/<encoded-cwd>/sessions/`. Point a Claude Code agent at those files and ask it to extract general rules into CLAUDE.md or skills into `.claude/skills/<name>/SKILL.md`. Claude Code already has its own memory system, so the dynamics differ slightly.