What each screen is for, what to do there, and what outcome to expect.
Ai Keeper uses a hub model. The sidebar shows the primary destinations; many older or specialized routes now open inside one of these hubs as tabs or advanced panels.
Primary Hubs
Home
The dashboard you land on. Glance here to see if everything's running, then jump to wherever you need to go.
Operational front door for instance health, routing status, host pressure, active work, and high-value jumps into the rest of the app.
- Check whether any instances are ready for requests.
- Look for host pressure before starting more models.
- Jump to Runtime, API Access, Chat, or diagnostics when something is not ready.
Download
Where you grab new stuff before using it: AI models, plug-in tools, and skills.
Acquire capabilities before you run them.
- HuggingFace: search, sort, filter, inspect model cards, and download MLX or GGUF models.
- ClawHub: browse skills and plugins, inspect trust signals, install packages, repair, update, and uninstall.
- MCP: browse official and Smithery registry entries, install local/hosted servers, and configure transport.
- Skills: install and manage reusable behavior bundles.
Runtime
The engine room. This is where downloaded models actually start running so you can talk to them.
Where model assets become running API-capable instances.
- Instances: create, start, stop, edit, diagnose, and tune runtime instances.
- Models: inspect installed model assets, size, quantization, fit, loaded state, dependent instances, Finder path, cache, and delete actions.
Chat
The main place to talk to the AI. Type, attach files, get answers — like ChatGPT but on your machine.
Primary conversation workspace with streaming, attachments, tool calls, approvals, session controls, slash commands, compaction, exports, and model recommendations.
- Use it for normal work, prompt testing, file analysis, multimodal inputs, and guided tool use.
- Use /commands to see every built-in and custom slash command.
- Use /compact or auto-compaction for long sessions.
Assistant
A friendlier "always there" version of the AI you can pop out into a small floating window for quick help.
Dedicated in-app assistant surface. Can detach into a floating desktop presence for ongoing companion-style assistance — not intended for carefully audited multi-agent runs.
Work Modes
Saved setups for specific kinds of work. Pick a Work Mode and the app pre-configures the model, memory, tools, and starter agents for that activity.
Preset profiles bundling model family, execution mode, memory behavior, autonomy level, web tools, starter agents/workspaces/automations, and package requirements.
- Create, clone, edit, delete, apply, and reapply saved modes.
- Use mode health to find missing packages or blocked requirements.
- Use package overrides when a saved mode needs custom marketplace behavior.
Knowledge
Everything the AI can "know" about you and your work — facts it remembers, files it can search, notes you've written, even people in your contacts.
The context layer chats and agents can draw from.
- AI Context: active system prompts and shared prompt shaping.
- Skills: installed and draft skill bundles.
- Memory: persistent facts, manual entries, auto-extraction, markdown export, delete controls.
- Documents: RAG files, folder indexing, chunking, search, selected embedding instance.
- Wiki: curated pages, links, backlinks, tags, edit history.
- Dreaming: idle-generated diary entries, memory-palace pages, and insights.
- Contacts: people, groups, tags, and reachability context.
- Obsidian: vault connection, note search, and import.
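The Documents tab's chunking step can be pictured with a minimal sketch. The function name, chunk size, and overlap below are illustrative defaults, not Ai Keeper's actual parameters:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks for embedding and retrieval.

    chunk_size and overlap are character counts here; a real RAG
    pipeline typically counts tokens instead.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some shared context
    return chunks
```

The overlap keeps a sentence that straddles a chunk boundary searchable from both sides.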
Workspaces
A team of AI agents working together on a job. You set up the team once and run it whenever you need.
Repeatable multi-agent work.
- Directory: create and edit agents and workspace definitions.
- Run: select a workspace, launch a task, monitor messages, resolve tool confirmations, and stop runs.
- Agents can have roles, models, ports, skills, memory scopes, tool access modes, standing orders, hooks, and prompt previews.
Automation
Schedule the AI to run jobs by itself — on a timer, when something happens, or on a regular heartbeat.
Converts reliable prompts, agents, and tools into scheduled or event-driven work.
- Flows: tasks, hooks, orders, advanced jobs, heartbeat, background jobs, workflows, and cron editor.
- Triggers: inbound webhook endpoints.
- Use approvals for any automation that can write files, run commands, use browsers, call channels, or affect external systems.
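The cron editor mentioned above works on standard five-field expressions (minute, hour, day-of-month, month, day-of-week). As a rough sketch of how a scheduler decides whether a job fires, simplified to only plain numbers and "*" (no ranges, lists, or steps):

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a simplified 5-field cron expression against a datetime.

    Supports only '*' and single integers per field; real cron also
    handles ranges, lists, and step values.
    """
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected 5 fields: minute hour dom month dow")
    # Cron numbers Sunday as 0, so fold ISO weekday 7 back to 0.
    actual = [when.minute, when.hour, when.day, when.month, when.isoweekday() % 7]
    return all(f == "*" or int(f) == a for f, a in zip(fields, actual))
```

For example, "30 9 * * *" matches any day at 09:30.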
Extensions
Add-ons. Connect the AI to outside services like Slack or Telegram, manage installed plug-ins, or hook in extra tools.
Installed capabilities and external surfaces.
- Installed: package runtime health, setup, update, repair, and uninstall.
- Channels: shared inbox, Telegram, WebChat, auto-reply, DM Access, broadcast, accounts, and locations.
- Plugins: app plugin lifecycle and contributed capabilities.
- MCP: server configuration and discovered tools/resources.
- Skills: skill bundle management.
System
Settings, logs, and the "what's going on under the hood?" view. Where you go when something isn't working.
Admin and observability center.
- Settings: core app mode, appearance, storage, API reference, and maintenance.
- Requests: traffic log and request inspector.
- Logs: runtime, app, and crash log history.
- Health: diagnostics and doctor checks.
- Sessions: session manager and archives.
- Usage: request, token, and cost trends.
- Advanced: security, connectivity, media/device, and operator tools.
Lab
A sandbox to try prompts and compare models before committing them to a real workflow.
Where you test before depending on a setup.
- Playground: raw API request testing.
- Compare: side-by-side prompt evaluation.
- Benchmark: TTFT, throughput, and latency checks.
- Browser: managed browser sessions, profiles, previews, and LLM action log.
- Canvas: live HTML, charts, interactive outputs, source, snapshots, reset.
- QA Lab: test suites and regression probes.
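The Benchmark metrics above can be derived from a streamed response's timestamps. A sketch of the arithmetic (the timing values and field names are illustrative):

```python
def benchmark_stats(request_at: float, token_times: list[float]) -> dict[str, float]:
    """Derive TTFT, throughput, and total latency from token arrival times.

    request_at: time the request was sent (seconds).
    token_times: arrival time of each streamed token (seconds).
    """
    ttft = token_times[0] - request_at             # time to first token
    total = token_times[-1] - request_at           # end-to-end latency
    gen_window = token_times[-1] - token_times[0]  # generation phase only
    throughput = (len(token_times) - 1) / gen_window if gen_window > 0 else 0.0
    return {"ttft_s": ttft, "latency_s": total, "tokens_per_s": throughput}
```

Measuring throughput over the generation window (rather than end-to-end) keeps prompt-processing time from deflating tokens-per-second.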
System Advanced Panels
System > Advanced consolidates 19 specialized control panels grouped into four operator domains. Most are referenced by name elsewhere in the app; treat this section as the canonical entry point.
Security
Privacy and safety controls — what got logged, what's risky, where your secrets are stored, how to back things up.
- Audit Trail: append-only event log of tool calls, approvals, exec sessions, and channel actions.
- Security Audit: posture report covering exposed ports, allowlist coverage, default-deny gaps, and secret hygiene.
- Secrets: encrypted local store for API keys, tokens, and connector credentials. Used instead of inlining secrets in prompts or scripts.
- Backup: snapshot, restore, and export of app state, sessions, knowledge, and configurations.
Connectivity
How this Mac talks to other devices and services — backup providers, remote clients, paired phones, agent-to-agent links.
- Failover: ordered provider/instance fallback chains with cooldown, key rotation, and per-route policy.
- Remote Access: reachability profile when this Mac acts as a server for remote clients.
- Lanes: server-side routing lanes that group endpoints by purpose, priority, or tenant.
- ACP Server: Agent Communication Protocol — exposes/consumes structured agent messages.
- Node Mesh: peer-to-peer Ai Keeper nodes that discover and route work across machines.
- Device Pairing: trust handshake for iOS/iPadOS or other Mac clients.
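The ordered fallback behavior in Failover can be pictured as a loop over providers that skips any still in cooldown. The class, names, and cooldown policy here are a simplified sketch, not the app's actual implementation:

```python
import time

class FailoverChain:
    """Try providers in order, skipping ones that failed recently."""

    def __init__(self, providers, cooldown_s: float = 30.0):
        self.providers = providers          # ordered list of (name, callable) pairs
        self.cooldown_s = cooldown_s
        self.failed_at: dict[str, float] = {}

    def call(self, prompt: str):
        for name, fn in self.providers:
            last_fail = self.failed_at.get(name)
            if last_fail is not None and time.monotonic() - last_fail < self.cooldown_s:
                continue                    # still cooling down, skip without retrying
            try:
                return name, fn(prompt)
            except Exception:
                self.failed_at[name] = time.monotonic()
        raise RuntimeError("all providers failed or are in cooldown")
```

Once a provider fails, subsequent calls route straight to the next entry until its cooldown expires.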
Media & Devices
Microphone, camera, screen sharing, and language settings.
- Voice: microphone, TTS playback, and voice-handoff template management.
- Screen Capture: permission and pipeline for sharing the screen with vision-capable models.
- Camera: webcam capture for VLM and presence flows.
- Language: app localization and assistant default language.
Operator
Power-user controls — group tools, set up project templates, run filters, manage shell sessions, and shape the AI's persona.
- Tool Groups: bundle individual tools into named groups that an agent role can opt into.
- Templates: bootstrap files and project scaffolds an agent can place when starting work.
- Content Scanner: input/output filters for sensitive data, PII, and prompt-injection patterns.
- Exec Sessions: persistent shell/REPL handles the model can attach to instead of spawning new processes.
- Personality: long-form persona, tone, and standing-context editor for the default assistant.
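As an illustration of the kind of input/output filtering the Content Scanner performs, here is a minimal pattern-matching sketch; the two rules below are examples, not the app's actual rule set:

```python
import re

# Minimal example patterns; a real scanner ships far richer rule sets,
# including prompt-injection heuristics.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for sensitive content."""
    hits = []
    for name, pattern in PATTERNS.items():
        hits.extend((name, m) for m in pattern.findall(text))
    return hits
```

A filter like this runs on both directions: outbound prompts (to keep secrets from leaving) and inbound completions (to flag injected instructions).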
Observability (at the System root)
All the "is it working?" views. First stop when something doesn't behave as expected.
- Requests: live request log with full headers, latency breakdown, and prompt/completion token counts.
- Logs: runtime, app, and crash logs filtered by source.
- Health: Diagnostics + Doctor — first stop for "why won't this start?" questions.
- Sessions: session manager and archives for long-running conversations.
- Usage: token, cost, and request trends with per-instance and per-provider breakdown.
API Access (at the System root)
The web addresses other apps (VS Code, Cursor, your scripts) use to talk to your local AI. Copy-paste the URL from here.
Shows the OpenAI-compatible base URL, Anthropic-compatible base URL, Ollama-compatible routes, ready instances, model availability, management URL, API keys, and copy-paste examples for VS Code, Cursor, curl, and Python clients. Always point external clients at the proxy here, not at raw instance ports.
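A minimal stdlib sketch of pointing an external client at the proxy. The base URL, API key, and model name below are placeholders; copy the real values from this screen:

```python
import json
from urllib import request

BASE_URL = "http://localhost:8080/v1"  # placeholder; use the base URL shown in API Access
API_KEY = "sk-local-example"           # placeholder; use a key issued on this screen

# Standard OpenAI-compatible chat completion payload.
payload = {
    "model": "local-model",            # placeholder; pick a ready instance's model name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
}
req = request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# resp = request.urlopen(req)  # uncomment once the proxy is running
```

Because the request targets the proxy rather than a raw instance port, routing, failover, and logging all apply to it.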