
Memory

The Memory feature gives each bot persistent, long-term memory that survives across sessions. A bot can remember facts, preferences, and context between conversations, so you don't have to repeat yourself.

How It Works

Each bot has its own isolated memory store backed by IndexedDB. Memory entries are key-value pairs that persist in the browser across page reloads and sessions.

When a bot has memory entries, they are automatically injected into the system prompt before each message is sent. The model receives the stored context as part of its instructions, giving it access to remembered information without consuming visible chat history.

The injected format looks like this internally:

<memory>
- user_name: Alice
- preferred_language: Python
- project: Building a REST API with FastAPI
</memory>
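A minimal sketch of how such an injection block could be assembled from stored entries (the entry shape and function name are illustrative assumptions, not the platform's actual code):

```typescript
// Hypothetical shape of a stored memory entry (assumed, not the real schema).
interface MemoryEntry {
  key: string;
  value: string;
}

// Render entries into the <memory> block shown above, ready to be
// appended to the system prompt. Returns "" when there is nothing to inject.
function formatMemoryBlock(entries: MemoryEntry[]): string {
  if (entries.length === 0) return "";
  const lines = entries.map((e) => `- ${e.key}: ${e.value}`);
  return ["<memory>", ...lines, "</memory>"].join("\n");
}
```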

Adding Memories

The /remember Command

Type a memory command directly in chat. For example, tell the bot:

bcz_remember("user_name", "Alice")

When the model includes bcz_remember("key", "value") in its response, the platform automatically parses and stores the memory entry. This allows the model to decide what is worth remembering based on your conversations.
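The parsing step could look roughly like this; the regex and quoting rules are assumptions, since the platform's actual grammar for bcz_remember is not documented here:

```typescript
// Hypothetical parser for bcz_remember("key", "value") calls in a model
// response. Assumes double-quoted arguments with no escaped quotes.
function parseRememberCalls(text: string): Array<[string, string]> {
  const pattern = /bcz_remember\(\s*"([^"]*)"\s*,\s*"([^"]*)"\s*\)/g;
  const results: Array<[string, string]> = [];
  for (const m of text.matchAll(pattern)) {
    results.push([m[1], m[2]]);
  }
  return results;
}
```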

Memory Panel in Settings

You can also manage memories manually:

  1. Open Settings
  2. Navigate to the Memory section for the current bot
  3. Add, edit, or remove individual memory entries

Viewing Memories

All memory entries for the current bot are visible in Settings. Each entry shows:

  • Key -- the identifier (e.g., "user_name", "project_context")
  • Value -- the stored information
  • Timestamp -- when the entry was last updated

Editing and Deleting Memories

From the Memory panel in Settings:

  • Edit any memory entry by changing its key or value
  • Delete individual entries with the remove button
  • Clear all memories for the current bot
Warning

Deleting a memory is permanent. There is no undo. If you clear all memories, the bot starts fresh with no remembered context.

Memory Format

Memories are stored as key-value pairs:

| Key | Value | Example Use |
|---|---|---|
| user_name | Alice | Personalize responses |
| preferred_language | Python | Code generation preference |
| timezone | US/Pacific | Time-aware responses |
| project | FastAPI REST API | Ongoing project context |
| coding_style | PEP 8, type hints, docstrings | Code style preferences |

Keys should be descriptive and concise. Values can be any text.

Use Cases

  • Personal preferences -- name, timezone, coding language, communication style
  • Project context -- current project details, tech stack, requirements
  • Recurring instructions -- "always format code with comments", "prefer concise answers"
  • Domain knowledge -- store facts the bot should always know about your environment
  • Conversation summaries -- let the bot maintain a running summary of past discussions
Tip

Start with a few key memories (your name, preferred language, project name) and let them grow naturally. The bot becomes more useful as it accumulates context about your preferences and workflows.

Per-Bot Isolation

Memory is per-bot -- each bot has its own independent memory store. Memories do not leak between bots. This means:

  • Your coding assistant remembers your tech stack
  • Your writing assistant remembers your writing style
  • Neither one sees the other's memories
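The isolation guarantee can be sketched with an in-memory stand-in for the per-bot stores (the real backend is IndexedDB, as described under Storage Details; this class is purely illustrative):

```typescript
// Illustrative stand-in for per-bot memory stores: each bot ID maps to its
// own independent key-value map, so no entries are shared between bots.
class MemoryStores {
  private stores = new Map<string, Map<string, string>>();

  private storeFor(botId: string): Map<string, string> {
    let store = this.stores.get(botId);
    if (!store) {
      store = new Map();
      this.stores.set(botId, store);
    }
    return store;
  }

  remember(botId: string, key: string, value: string): void {
    this.storeFor(botId).set(key, value);
  }

  recall(botId: string, key: string): string | undefined {
    return this.storeFor(botId).get(key);
  }
}
```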

Storage Details

| Property | Detail |
|---|---|
| Backend | IndexedDB (stored as ais-memory-{botId}) |
| Persistence | Survives page reloads, browser restarts, session switches |
| Scope | Local to your browser -- never sent to any server |
| Export | Included in "Export All Data" backup |
| Shared URLs | Memory is NOT included in shared bot URLs |
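Given the ais-memory-{botId} naming scheme in the table, a bot's store name can be derived mechanically (the helper itself is illustrative, not platform code):

```typescript
// Derive the per-bot IndexedDB store name from the documented
// ais-memory-{botId} naming scheme.
function memoryStoreName(botId: string): string {
  return `ais-memory-${botId}`;
}
```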

Tool-Call Integration

Models with tool-calling support can manage their own memory through structured tool calls. When the model outputs a bcz_remember() call in its response, the platform automatically:

  1. Parses the function call
  2. Extracts the key and value
  3. Stores (or updates) the memory entry in IndexedDB

This allows the model to autonomously decide what information is important enough to remember for future conversations.
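Steps 1-3 above can be sketched end to end, using a plain Map in place of the IndexedDB store (the parsing rules and function name are assumptions):

```typescript
// Hypothetical pipeline for tool-call integration: parse bcz_remember calls
// from a model response and store or update each entry. A Map stands in for
// the per-bot IndexedDB store. Returns the number of entries applied.
function applyRememberCalls(
  response: string,
  store: Map<string, string>,
): number {
  const pattern = /bcz_remember\(\s*"([^"]*)"\s*,\s*"([^"]*)"\s*\)/g;
  let applied = 0;
  for (const m of response.matchAll(pattern)) {
    store.set(m[1], m[2]); // step 3: store (or update) the entry
    applied++;
  }
  return applied;
}
```

Because Map.set overwrites existing keys, a repeated bcz_remember call with the same key updates the entry rather than duplicating it, matching the "stores (or updates)" behavior described above.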

Note

Memory injection adds to the system prompt length, which counts toward the model's context window. If you have many memory entries, they consume tokens that could otherwise be used for conversation history.
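As a rough illustration of that cost, you can estimate the tokens an injected block adds from its character count. The ~4 characters-per-token ratio below is a common rule of thumb, not a platform-specific figure:

```typescript
// Estimate the token overhead of injecting a <memory> block into the
// system prompt. Assumes roughly 4 characters per token (a rule of thumb;
// the real count depends on the model's tokenizer).
function estimateMemoryTokens(entries: Array<[string, string]>): number {
  const body = entries.map(([k, v]) => `- ${k}: ${v}`).join("\n");
  const block = `<memory>\n${body}\n</memory>`;
  return Math.ceil(block.length / 4);
}
```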