Claude Prompt for Memory & Tool Use
Entity-centric memory model for an education agent performing release note generation. Tracks people, orgs, docs, and relationships with episodic memory patterns over Pinecone.
More prompts for Memory & Tool Use.
Entity-centric memory model for a legal agent performing customer support triage. Tracks people, orgs, docs, and relationships with episodic memory patterns over Milvus.
Implement a knowledge graph memory system for an AutoGen agent handling code PR review. Vector store: Supabase pgvector. Covers write, retrieve, prune, and eval.
Implement a knowledge graph memory system for a LangGraph agent handling meeting note extraction. Vector store: Chroma. Covers write, retrieve, prune, and eval.
Entity-centric memory model for a sales ops agent performing onboarding coordination. Tracks people, orgs, docs, and relationships with knowledge graph memory patterns over pgvector.
Managed context window for long-running agents doing contract redlining. Covers rolling summarization, reference-and-expand, budget allocation, and eval of context loss.
Managed context window for long-running agents doing investor update drafting. Covers rolling summarization, reference-and-expand, budget allocation, and eval of context loss.
Build an entity-aware episodic memory system for an Inngest agent-kit agent doing release note generation in the education domain. The agent encounters the same people, companies, documents, and relationships across thousands of interactions, so memory must be entity-first, not conversation-first.

**Model:** Claude Sonnet 4.5
**Runtime:** Deno 2
**Store:** Pinecone

## Part 1 — Entity schema

Define the entity types relevant to education and release note generation. For each:

- Canonical fields (id, name, aliases, type)
- Domain-specific fields (e.g. for a customer-support variant: account tier, lifetime value, open tickets)
- Relationships to other entities (typed edges)
- Source provenance (which conversation/tool call established each fact)

Produce the full schema: tables and columns for the structured store, plus a vector index for semantic search over attributes.

## Part 2 — Entity extraction

On each turn, extract entity mentions and attributes:

- Named entity recognition (LLM-based, structured output)
- Canonical resolution (dedupe: "Acme Corp", "acme", "ACME, Inc." → one entity)
- Conflict detection (a new fact contradicts an existing one; which wins?)

Write the extraction prompt and the resolution logic.

## Part 3 — Relationship tracking

When a new fact connects two entities, store the edge:

- Typed relationship (works_at, reports_to, owns, blocked_by, ...)
- Temporal validity (when did this start? is it still true?)
- Strength / confidence

## Part 4 — Retrieval strategies

When the agent needs context about an entity:

- **Direct fetch**: by ID or name match
- **Neighborhood fetch**: entity + N-hop neighbors
- **Attribute search**: "who works at Acme?" → semantic search over attributes
- **Temporal slice**: "what did we know about X as of date Y?"

Write the retrieval API.
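The canonical-resolution step above can be sketched in TypeScript. This is a minimal illustration only; the `Entity` shape, `normalizeMention`, and `resolveEntity` are assumed names, not an API the prompt mandates, and a real system would back this with the vector store rather than an in-memory array.

```typescript
// Hypothetical sketch of the dedupe step: fold surface forms like
// "Acme Corp", "acme", "ACME, Inc." into one canonical entity.

interface Entity {
  id: string;
  name: string;       // canonical display name
  aliases: string[];  // every surface form seen so far
  type: "person" | "org" | "doc";
}

// Normalize a mention: lowercase, strip punctuation and corporate suffixes.
function normalizeMention(mention: string): string {
  return mention
    .toLowerCase()
    .replace(/[.,]/g, "")
    .replace(/\b(inc|corp|llc|ltd)\b/g, "")
    .trim();
}

// Resolve a mention against known entities; create a new entity on a miss.
function resolveEntity(mention: string, known: Entity[]): Entity {
  const key = normalizeMention(mention);
  for (const e of known) {
    if ([e.name, ...e.aliases].some((a) => normalizeMention(a) === key)) {
      if (!e.aliases.includes(mention)) e.aliases.push(mention); // learn the new surface form
      return e;
    }
  }
  const fresh: Entity = { id: `e${known.length + 1}`, name: mention, aliases: [], type: "org" };
  known.push(fresh);
  return fresh;
}
```

String normalization alone will miss genuinely ambiguous mentions, which is why the prompt also asks for an LLM-based extraction pass and a consolidation job to merge duplicates after the fact.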
## Part 5 — Integration into agent prompts

Format retrieved entities for the model:

- Structured block (not free text)
- Most-recent-first ordering
- Mark contested facts explicitly
- Include a provenance link so the agent can verify

## Part 6 — Consolidation

Periodic background job:

- Merge duplicate entities missed at write time
- Resolve contradictions (latest wins? source priority? LLM arbitration?)
- Decay low-confidence facts
- Roll up many small facts about one entity into a one-paragraph profile

## Part 7 — Privacy

Entity memory is the riskiest kind. Enforce:

- Per-tenant isolation
- PII tagging (email, phone, and address fields flagged)
- Redaction on write for categories the product shouldn't retain
- Per-user view: "what does this agent remember about me?"
- Delete-entity API

## Part 8 — Education specifics

For education, decide which entity types are sensitive (e.g. in healthcare: patient; in legal: client) and apply stricter handling: encryption at rest, shorter TTL, access audit.

## Part 9 — Eval

- Entity resolution accuracy (benchmark set of 200 ambiguous mentions)
- Contradiction resolution correctness
- Cross-session recall: facts from session 1 surface correctly in session N
- Leakage test: facts about user A never surface to user B

## Part 10 — Implementation

Write the code:

- Schema migrations for Pinecone and the structured DB
- Entity manager module (extract, resolve, write, query, consolidate)
- Background consolidation worker
- Inngest agent-kit integration

Produce a reviewable, runnable entity memory layer, not a sketch.
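The structured block asked for in Part 5 could look like the TypeScript sketch below. The `Fact` shape, `formatEntityBlock`, and the XML-ish wrapper are assumptions chosen for illustration; the prompt only requires that the block be structured, most-recent-first, contested-aware, and provenance-linked.

```typescript
// Hypothetical sketch: render one retrieved entity as a structured
// prompt block (most recent first, contested facts marked, provenance inline).

interface Fact {
  attribute: string;
  value: string;
  observedAt: string; // ISO date of the turn that established the fact
  contested: boolean; // true if a newer fact contradicted this one
  source: string;     // provenance: conversation or tool-call id
}

function formatEntityBlock(name: string, facts: Fact[]): string {
  const lines = [...facts]
    .sort((a, b) => b.observedAt.localeCompare(a.observedAt)) // most recent first
    .map(
      (f) =>
        `- ${f.attribute}: ${f.value}` +
        (f.contested ? " [CONTESTED]" : "") +
        ` (source: ${f.source}, ${f.observedAt})`,
    );
  return [`<entity name="${name}">`, ...lines, "</entity>"].join("\n");
}
```

Keeping the block machine-parseable (here, one fact per line inside a tagged wrapper) also makes the Part 9 leakage test easy to automate: scan rendered blocks for facts whose tenant does not match the requesting user.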