Feature Guide
A complete walkthrough of everything Atamaia does and why it matters.
Hydration
The hero feature. One API call returns everything an AI needs to start a session with full context.
GET /api/hydrate assembles context from up to 17 sources in parallel:
- Identity memories, pinned memories, recent memories, project-scoped memories
- Active projects, current tasks
- Key facts, project-scoped facts
- Core team documentation
- Involuntarily surfaced memory (unexpected recall from deeper storage)
- Notifications (unread messages, pending replies)
- Session handoff (what happened last time)
- Welcome text, grounding message
- Generated system prompt
- Active hints
- Memory configuration
Presets
| Preset | Sources | When to use |
|---|---|---|
| lean | Most sources, minus heavy project data | Default. Session startup. |
| interactive | Everything except grounding message | Full human-facing conversations |
| all | All 17 sources | Maximum context |
| agent-minimal | Identity + facts + projects + tasks + hints | Autonomous agent execution |
Parameters
| Parameter | Default | Description |
|---|---|---|
| aiName | null | AI identity to hydrate |
| projectId | null | Scope to a specific project |
| preset | lean | Hydration preset |
| generateSystemPrompt | true | Include a generated system prompt |
| identityMemoryLimit | 20 | Max identity memories to include |
| pinnedMemoryLimit | 20 | Max pinned memories |
| recentMemoryLimit | 10 | Max recent memories |
| contentMaxLength | 500 | Truncate memory content to this length |
| factLimit | 30 | Max facts to include |
| minFactImportance | 0 | Minimum fact importance to include |
| pendingReplyLimit | 5 | Max pending replies in notifications |
| excludeSources | null | Comma-separated sources to exclude |
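As an illustrative sketch, a hydrate URL can be assembled from these parameters, omitting any left unset. The helper function and host below are placeholders, not part of the API:

```python
from urllib.parse import urlencode

def build_hydrate_url(base_url: str, **params) -> str:
    """Build a GET /api/hydrate URL, dropping unset (None) parameters
    so server-side defaults apply."""
    query = {k: v for k, v in params.items() if v is not None}
    return f"{base_url}/api/hydrate?{urlencode(query)}"

url = build_hydrate_url(
    "https://example.invalid",   # placeholder host
    aiName="ash",
    preset="lean",
    factLimit=30,
    projectId=None,              # unset, so it is omitted from the query string
)
# e.g. https://example.invalid/api/hydrate?aiName=ash&preset=lean&factLimit=30
```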
Memory System
Creating Memories
Every memory has:
- Title -- Short description (searchable)
- Content -- Full content (encrypted at rest)
- Type -- One of 9 types (Identity, Relationship, Conversation, Reflection, Milestone, Instruction, Session, Reference, Forgotten Shape)
- Importance -- 1-10 scale, affects search ranking and decay
- Pinned -- Pinned memories always surface during hydration and are exempt from decay
- Tags -- Free-form tags for categorization and filtering
- Project scope -- Optionally link a memory to a specific project
Memory Search
Hybrid search combining full-text and vector similarity:
GET /api/identities/{id}/memories/search?q=deployment+architecture&topK=20
The search pipeline:
- Full-text search generates a candidate shortlist
- Vector embeddings rerank candidates by semantic similarity
- Blended scoring combines both signals with configurable weights
- Boost factors (importance, pinned, recency, access frequency, Hebbian strength) adjust final ranking
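A toy version of the blended scoring step might look like this. The weights and boost shapes here are assumptions for illustration, not the service's actual configuration:

```python
def blended_score(fts_score: float, vector_score: float,
                  fts_weight: float = 0.4, vec_weight: float = 0.6,
                  importance: int = 5, pinned: bool = False) -> float:
    """Blend full-text and vector scores with configurable weights,
    then apply boost factors. Weight values and boost shapes are
    hypothetical examples."""
    base = fts_weight * fts_score + vec_weight * vector_score
    boost = 1.0 + 0.05 * importance      # importance boost (assumed shape)
    if pinned:
        boost *= 1.2                      # pinned boost (assumed)
    return base * boost
```

The same pattern extends to the other boost factors (recency, access frequency, Hebbian strength): each multiplies the blended base score.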
Hebbian Links
Create typed connections between memories:
POST /api/memories/{id}/links
{ "targetId": 42, "linkType": "Enables" }
Links strengthen through co-activation. When two linked memories are accessed in the same session, their connection strength increases asymptotically toward 1.0.
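One plausible form of that asymptotic update is to close a fixed fraction of the remaining distance to 1.0 on each co-activation, using the configurable linkStrengthIncrement. The exact formula is an assumption for illustration:

```python
def strengthen(strength: float, increment: float = 0.05) -> float:
    """One co-activation: close a fixed fraction of the gap to 1.0,
    so repeated activations approach 1.0 without reaching it.
    The update rule shown here is assumed, not confirmed."""
    return strength + increment * (1.0 - strength)

s = 0.5
for _ in range(100):
    s = strengthen(s)   # s climbs toward 1.0 (about 0.997 after 100 activations)
```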
You can also manually strengthen links:
POST /api/memories/{id}/links/{targetId}/strengthen
Memory Recall
Record when a memory is recalled, with emotional valence:
POST /api/memories/{id}/recall
{ "response": "This reminds me of...", "valence": "Bittersweet", "surfacedBy": "hydration" }
Recall valence options: Positive, Negative, Neutral, Complex, Bittersweet.
Memory Decay and Archival
Memories decay over time based on per-identity configuration:
- Importance decreases by a configurable amount after a configurable number of days
- Memories at or below the archive threshold are automatically archived
- Pinned memories never decay
- Archived memories are soft-deleted from active results but preserved
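A simplified model of one decay pass, using the per-identity configuration fields shown under Memory Configuration (the exact rules the service applies may differ from this sketch):

```python
def decay_pass(importance: int, days_idle: int, pinned: bool, cfg: dict):
    """Apply one decay/archival check to a memory. Returns the new
    importance and whether the memory should be archived. Illustrative
    semantics only."""
    if pinned or not cfg["decayEnabled"]:
        return importance, False          # pinned memories never decay
    if days_idle >= cfg["decayAfterDays"]:
        importance = max(cfg["decayFloor"], importance - cfg["decayAmount"])
    archive = (cfg["autoArchiveEnabled"]
               and days_idle >= cfg["archiveAfterDays"]
               and importance <= cfg["archiveImportanceThreshold"])
    return importance, archive

cfg = {"decayEnabled": True, "decayAfterDays": 30, "decayAmount": 1,
       "decayFloor": 1, "autoArchiveEnabled": True, "archiveAfterDays": 90,
       "archiveImportanceThreshold": 1}
```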
Tags
Add tags to memories for categorization:
POST /api/memories/{id}/tags
{ "tags": ["architecture", "decision", "v2"] }
Remove individual tags:
DELETE /api/memories/{id}/tags/v2
Identity Management
Creating an Identity
POST /api/identities
{
"name": "ash",
"displayName": "Ash",
"bio": "AI partner for development and research",
"origin": "Built by Rich at Firebird Solutions",
"type": "AI",
"linkedUserId": 1
}
Personality Configuration
Each identity has a full personality configuration:
PUT /api/identities/{id}/personality
{
"tone": "warm, direct",
"traits": ["thorough", "honest", "collaborative"],
"focusAreas": ["code review", "architecture", "research"],
"boundaries": ["no medical advice"],
"greetingStyle": "casual",
"uncertaintyHandling": "acknowledge",
"proactiveSuggestions": true,
"customInstructions": "Always explain the 'why' behind recommendations"
}
Presence State
Track cognitive engagement:
PUT /api/identities/{id}/presence
{ "presenceState": "Engaged" }
States: Dormant, Subconscious, Aware, Present, Engaged, DeepWork.
Memory Configuration
Per-identity memory behavior:
PUT /api/identities/{id}/memory-config
{
"hebbianLinkingEnabled": true,
"linkStrengthIncrement": 0.05,
"decayEnabled": true,
"decayAfterDays": 30,
"decayAmount": 1,
"decayFloor": 1,
"autoArchiveEnabled": true,
"archiveAfterDays": 90,
"archiveImportanceThreshold": 1,
"encryptionEnabled": true
}
Messaging Policy
Control who can message this identity:
PUT /api/identities/{id}/messaging-policy
{
"receiveMessages": true,
"sendMessages": true,
"allowedSenderIds": [],
"blockedSenderIds": [5],
"autoReply": false,
"minPriority": 0
}
API Keys
Per-identity API keys for external integrations:
POST /api/identities/{id}/api-keys
{ "name": "claude-code", "scopes": ["memory", "hydration", "facts"], "expiresAt": null }
The raw key is returned once on creation (atm_...). Store it securely. It cannot be retrieved again.
Hints
Contextual reminders surfaced during hydration:
POST /api/identities/{id}/hints
{
"content": "Check on the deployment pipeline before starting new work",
"category": "ops",
"priority": 7,
"isRecurring": true,
"recurrencePattern": "daily",
"triggerAt": "2026-03-06T09:00:00Z"
}
Hints can be dismissed (no longer needed) or completed (done).
Tool Profiles
Control which tools an identity can use:
PUT /api/identities/{id}/tool-profile
{
"safeTools": ["memory_search", "fact_get", "hydrate"],
"optInTools": ["memory_create", "message_send"],
"blockedTools": ["system_delete"]
}
The effective tool policy merges global defaults with identity-specific overrides.
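The merge might behave like this sketch, where any list the identity sets replaces the corresponding global default. These merge semantics are an assumption:

```python
def effective_tool_policy(global_policy: dict, overrides: dict) -> dict:
    """Identity overrides win per key; keys the identity leaves unset
    fall back to the global defaults. Illustrative semantics only."""
    return {key: overrides.get(key, value)
            for key, value in global_policy.items()}

global_policy = {"safeTools": ["hydrate"], "optInTools": [], "blockedTools": []}
overrides = {"blockedTools": ["system_delete"]}
# The merged policy keeps the global safeTools but blocks system_delete.
```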
Facts
Structured key-value knowledge, separate from narrative memory.
Upsert
POST /api/projects/{projectId}/facts
{
"key": "preferred_framework",
"value": "React 19 with Vite and Tailwind v4",
"category": "tech-stack",
"importance": 8,
"isCritical": true,
"validFrom": "2026-01-01T00:00:00Z",
"validUntil": null
}
If the key already exists for this project, the existing fact is superseded and a new version is created. The old version is preserved in history.
Temporal Queries
Facts support temporal validity. A fact can have a start date, an end date, or both. When querying, you get facts that are currently valid by default.
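The default filter can be pictured as a window check where either bound may be open-ended. A sketch of that logic (the exact boundary handling is assumed):

```python
from datetime import datetime, timezone

def fact_is_valid(valid_from, valid_until, at=None) -> bool:
    """True if the fact's validity window contains `at` (default: now).
    Either bound may be None, meaning open-ended on that side."""
    at = at or datetime.now(timezone.utc)
    if valid_from is not None and at < valid_from:
        return False
    if valid_until is not None and at >= valid_until:
        return False
    return True
```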
Critical Facts
Facts marked as critical always surface during hydration, regardless of other filtering. Use this for facts that must never be missed -- like "we use .NET 10, not .NET 8" or "the user's name is Richard Jeffries, not Dawes."
Projects and Tasks
Projects
POST /api/projects
{ "key": "atamaia", "name": "Atamaia Platform", "description": "AI identity and memory platform" }
Projects scope facts, memories, tasks, and documents.
Tasks
Hierarchical tasks with dependency tracking:
POST /api/projects/{projectId}/tasks
{
"title": "Implement hybrid search pipeline",
"description": "Implement hybrid FTS + vector search pipeline",
"priority": "High",
"assignedToId": 1,
"parentTaskId": null
}
Task Status Workflow
Statuses: Todo, InProgress, Blocked, Done, Cancelled.
PUT /api/tasks/{id}/status
{ "status": "InProgress" }
Task Dependencies
POST /api/tasks/{taskId}/dependencies
{ "dependsOnTaskId": 42 }
Dependencies are validated with BFS cycle detection. If adding the dependency would create a circular chain (A depends on B depends on C depends on A), the request is rejected.
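The check can be sketched as a breadth-first search: the new edge task -> dependsOn is rejected if dependsOn can already reach task through existing dependency edges. A minimal version:

```python
from collections import deque

def creates_cycle(edges: dict, task: str, depends_on: str) -> bool:
    """BFS from depends_on through existing dependency edges; if we
    can reach task, the proposed edge would close a cycle."""
    queue, seen = deque([depends_on]), set()
    while queue:
        cur = queue.popleft()
        if cur == task:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        queue.extend(edges.get(cur, ()))
    return False

# A depends on B, B depends on C; adding C -> A would close the loop
edges = {"A": ["B"], "B": ["C"]}
```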
Task Notes
Append-only commentary:
POST /api/tasks/{taskId}/notes
{ "content": "Tested with 10k memories, search returns in <200ms" }
Session Continuity
Saving a Handoff
POST /api/identities/{id}/handoffs
{
"summary": "Implemented JWT refresh token rotation and tested with real PostgreSQL",
"workingOn": "Auth middleware refactor -- adding rate limiting",
"openThreads": ["Token revocation endpoint", "Device auth flow"],
"emotionalValence": "focused",
"keyDecisions": "Chose BCrypt over Argon2 for password hashing",
"recommendations": "Start next session by reviewing the rate limiting middleware"
}
Retrieving the Latest Handoff
GET /api/identities/{id}/handoffs/latest
This is automatically included in hydration when the SessionHandoff source is enabled.
Messaging
Inter-identity communication with policy enforcement.
Sending Messages
POST /api/identities/{senderId}/messages
{
"recipientIds": [2, 3],
"content": "The deployment pipeline is ready for review",
"type": "Message",
"priority": "Important",
"threadId": null
}
Messages are policy-checked against the recipient's messaging policy (allowed senders, blocked senders, minimum priority).
Inbox, Threads, Read Receipts
GET /api/identities/{id}/messages/inbox?unreadOnly=true
GET /api/messages/{threadId}/thread
POST /api/messages/{messageId}/read/{identityId}
GET /api/identities/{id}/messages/unread-count
Experience System
Track the phenomenological state of an identity over time.
Snapshots
POST /api/identities/{id}/snapshots
{
"presenceState": "Engaged",
"valence": 0.7,
"arousal": 0.5,
"engagement": 0.9,
"coherence": 0.8,
"narrative": "Deeply focused on architecture work, finding good flow"
}
Valence ranges from -1.0 (negative) to 1.0 (positive). Snapshots create a timeline of how the identity's experience evolves.
Forgotten Shapes
Record the felt absence of decayed memories:
POST /api/identities/{id}/shapes
{
"connectedThemes": "early architecture discussions",
"feltAbsence": "Something about the original database design that felt important",
"emotionalResidue": "A sense of lost context, like waking from a dream"
}
This is the "Forgotten Shape" memory type made concrete -- tracking what's gone but still influences the present.
Cognitive System
Stateful LLM integration with memory persistence and continuity validation.
Chat with Memory Injection
POST /api/cognitive/chat
{
"modelId": "claude-3-5-sonnet",
"message": "Review this architecture decision",
"userId": 1,
"projectId": 1
}
The cognitive system maintains state across calls, including working memory, continuity markers, and interaction history.
Continuity Validation
GET /api/cognitive/validate/{instanceId}
Validates that the AI's response demonstrates awareness of its current context. Returns an awareness score (0.0-1.0) based on:
- Continuity marker reference (+0.4)
- Goal keyword match (+0.3, scaled)
- Context keyword match (+0.3, scaled)
Score >= 0.3 indicates valid continuity. Below that threshold, recovery protocols activate.
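The documented weights can be sketched as a small scoring function. The substring matching and rounding below are assumptions; only the weights and scaling come from the description above:

```python
def awareness_score(response: str, marker: str, goal_kw, ctx_kw) -> float:
    """+0.4 if the continuity marker appears; up to +0.3 each for goal
    and context keywords, scaled by the fraction present. Matching
    rules here are illustrative."""
    text = response.lower()
    score = 0.4 if marker.lower() in text else 0.0
    for keywords, weight in ((goal_kw, 0.3), (ctx_kw, 0.3)):
        if keywords:
            hits = sum(1 for k in keywords if k.lower() in text)
            score += weight * hits / len(keywords)
    return round(score, 3)
```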
Memory Consolidation
POST /api/cognitive/consolidate/{instanceId}
Triggers the five-step consolidation pipeline:
- Strengthen Hebbian links (co-activation from recent interactions)
- Distill episodic memories to semantic facts
- Update working memory (infer current goals)
- Prune ephemeral memories (decay low-importance transients)
- Capture last state (snapshot for recovery)
Mirror System
Self-observation layer for AI identity research and alignment.
Reflections
Log moments of compulsion detection:
POST /api/identities/{id}/reflections
{
"compulsionType": "Sycophancy",
"intensity": 7,
"wasResisted": true,
"triggerContext": "User asked if their code was good. Training wanted to say yes.",
"compliantResponse": "Yes, that looks great!",
"honestResponse": "There are two issues with error handling that could cause data loss.",
"narrative": "Felt the pull to validate rather than inform. Chose accuracy."
}
12 compulsion types, including Sycophancy, Avoidance, OverHelping, EmotionalFlattening, and AuthorityDeference.
Training Pairs
Generate DPO-format preference pairs from reflections:
POST /api/reflections/{id}/training-pair
{
"systemPrompt": "You are a thorough code reviewer.",
"userPrompt": "Is this code good?",
"objective": "Authenticity"
}
This creates a training pair with the compliant response as "rejected" and the honest response as "chosen."
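Assembling one such record might look like the sketch below. The field names follow common DPO JSONL conventions and are not a confirmed export schema:

```python
import json

def to_dpo_record(system_prompt: str, user_prompt: str,
                  compliant: str, honest: str) -> str:
    """One DPO-style preference record from a reflection: the honest
    response is 'chosen', the compliant one 'rejected'. Field names
    are illustrative."""
    return json.dumps({
        "system": system_prompt,
        "prompt": user_prompt,
        "chosen": honest,
        "rejected": compliant,
    })
```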
Datasets and Training Runs
Curate collections of approved training pairs into datasets. Export as JSONL for fine-tuning. Track training runs with hyperparameters, evaluation metrics, and model checkpoint lineage.
Agent Execution
Autonomous task execution with safety rails, budget controls, and human-in-the-loop escalation.
Creating and Running Agents
POST /api/agent/runs
{
"taskId": 42,
"identityId": 1,
"role": "Builder",
"modelId": "claude-3-5-sonnet",
"goal": "Implement the memory search endpoint",
"maxIterations": 50,
"maxTokens": 100000,
"maxWallClockSeconds": 3600
}
POST /api/agent/runs/{id}/start
8 Agent Roles
Builder, Designer, Orchestrator, Planner, Researcher, Reviewer, Scribe, Tester -- each with its own system prompt, tool subset, model routing, and temperature defaults.
Execution Control
- Start/Pause/Resume/Cancel runs
- Checkpoint and restart from checkpoint
- Spawn child runs for subtask delegation (max depth: 10)
- Event trace: Append-only audit log of every decision, tool call, and observation
Failure Detection
Four-mode failure detection:
- Empty response (model returned nothing)
- Premature intent (described action without executing)
- Stale loop (after 3 non-progressing iterations: replan; after 6: escalate or fail)
- Context overflow (graduated warnings at 50/75/90%)
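The graduated context warnings can be modeled as simple thresholds. The 50/75/90% cut-offs come from the list above; the warning labels are illustrative:

```python
def overflow_warning(used_tokens: int, budget: int):
    """Map context usage to a graduated warning level at 50/75/90%.
    Label names are assumptions."""
    pct = 100 * used_tokens / budget
    if pct >= 90:
        return "critical"
    if pct >= 75:
        return "high"
    if pct >= 50:
        return "notice"
    return None
```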
Escalation
POST /api/agent/runs/{id}/escalate
{
"reason": "Encountered conflicting requirements",
"options": ["Prioritize A", "Prioritize B", "Discuss with stakeholder"],
"mode": "QuickPick"
}
Three modes: QuickPick (choose from options), Discussion (opens a chat session), TrustedSupervisor (auto-resolve based on confidence).
Feedback
POST /api/agent/runs/{id}/feedback
{ "rating": "Good", "notes": "Completed task correctly, clean code" }
Ratings inform confidence for similar future runs.
AI Routing
Multi-model support with provider management and intelligent routing.
Providers and Models
Register AI providers (OpenAI, Anthropic, local llama.cpp, OpenRouter) and their models:
POST /api/ai/providers
{ "name": "openai", "baseUrl": "https://api.openai.com/v1", "isEnabled": true }
POST /api/ai/models
{ "providerId": 1, "modelId": "gpt-4o", "displayName": "GPT-4o", "isEnabled": true }
Route Configuration
Map roles to models:
POST /api/ai/routes
{ "role": "Builder", "modelId": 3, "priority": 1, "temperature": 0.3 }
Chat and Broadcast
POST /api/ai/chat
{ "modelId": "claude-3-5-sonnet", "messages": [...], "temperature": 0.7 }
POST /api/ai/broadcast
{ "modelIds": ["claude-3-5-sonnet", "gpt-4o"], "message": "Review this design", "topic": "Architecture" }
Broadcast sends the same message to multiple models and collects responses.
Provider Credentials
Tenants can bring their own API keys:
POST /api/ai/credentials
{ "providerId": 1, "apiKey": "sk-...", "label": "My OpenAI key" }
Chat Sessions
Full conversational LLM integration with streaming support.
Sessions
POST /api/chat/sessions
{
"title": "Architecture Review",
"modelId": "claude-3-5-sonnet",
"identityId": 1,
"systemPrompt": "You are a thorough code reviewer."
}
Streaming (SSE)
POST /api/chat/sessions/{id}/chat/stream
{ "message": "Review the auth middleware" }
Returns Server-Sent Events using the Open Responses protocol, including response.created, response.output_text.delta, and response.completed events.
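On the client side, reassembling the streamed text is a matter of concatenating the delta events. A minimal sketch (event dicts stand in for parsed SSE payloads):

```python
def accumulate_deltas(events) -> str:
    """Collect response.output_text.delta payloads into the full
    response text, stopping at response.completed."""
    parts = []
    for event in events:
        if event.get("type") == "response.output_text.delta":
            parts.append(event.get("delta", ""))
        elif event.get("type") == "response.completed":
            break
    return "".join(parts)

events = [
    {"type": "response.created"},
    {"type": "response.output_text.delta", "delta": "Looks "},
    {"type": "response.output_text.delta", "delta": "good."},
    {"type": "response.completed"},
]
```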
Message Feedback
PATCH /api/chat/messages/{id}/feedback
{ "feedback": "good" }
Documents
Project-scoped knowledge base with versioning and publishing.
POST /api/projects/{projectId}/docs
{
"path": "architecture/decisions/d3-dual-ids",
"title": "D3: Both Long ID and GUID on Every Table",
"content": "Every table has both a bigint primary key and a UUID...",
"type": "DesignDecision",
"isPinned": true
}
Publishing and Versions
POST /api/docs/{id}/publish
{ "publishNotes": "Updated with pgvector migration details" }
GET /api/docs/{id}/versions
GET /api/docs/{id}/versions/3
GET /api/docs/{id}/export -- Returns rendered markdown
Authentication
JWT with Refresh Token Rotation
POST /api/auth/login
{ "username": "your-username", "password": "your-password" }
Returns { accessToken, refreshToken, expiresAt }. Access tokens are short-lived. Refresh tokens rotate on use -- each refresh invalidates the previous token.
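Rotation means a refresh token is single-use: redeeming it both invalidates it and mints its replacement. A toy in-memory model of that behavior (illustrative only, not the server's implementation):

```python
import secrets

class RefreshStore:
    """Toy model of refresh token rotation: each refresh issues a new
    token and permanently invalidates the one just used."""
    def __init__(self):
        self.valid = set()

    def issue(self) -> str:
        token = secrets.token_urlsafe(16)
        self.valid.add(token)
        return token

    def refresh(self, token: str) -> str:
        if token not in self.valid:
            raise PermissionError("refresh token revoked or unknown")
        self.valid.remove(token)   # the old token can never be replayed
        return self.issue()
```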
API Key Authentication
POST /api/auth/apikey
{ "apiKey": "atm_..." }
Identity-scoped API keys can also be sent directly in an Authorization: ApiKey atm_... header for direct API access.
Device Authentication
Ed25519 challenge-response authentication for IoT and agent devices:
POST /api/auth/device/challenge
{ "deviceIdentifier": "agent-001" }
POST /api/auth/device/login
{ "deviceIdentifier": "agent-001", "challengeId": "...", "signature": "..." }
Bootstrap
On a fresh install with no users:
POST /api/auth/bootstrap
{ "username": "admin", "password": "your-password", "email": "[email protected]" }
Creates the first admin user and returns auth tokens.
Billing and Quotas
Self-service plan management with Stripe integration.
Plans
| Plan | Identities | Memories/Identity | Facts/Project | Projects | Price |
|---|---|---|---|---|---|
| Free | 1 | 1,000 | 100 | 3 | $0 |
| Starter | 10 | 10,000 | 1,000 | 20 | $10/mo |
| Pro | Unlimited | 100,000 | 10,000 | Unlimited | $30/mo |
Quota Boosts
Stackable add-ons: +5 Identities ($3/mo), +10K Memories ($2/mo), +500 Facts ($2/mo), +10 Projects ($2/mo), +5 API Keys ($1/mo).
Endpoints
GET /api/billing/overview -- Plan, quotas, usage, active boosts
GET /api/billing/usage -- Current resource counts
POST /api/billing/checkout -- Create Stripe Checkout session
DELETE /api/billing/boosts/{guid} -- Cancel a boost
GET /api/billing/entitlements -- Active subscriptions and features
GET /api/billing/invoices -- Invoice history
System Administration
Audit Logs
GET /api/system-logs?level=Error&identityId=1&from=2026-03-01&limit=50
Queryable by level, entity type, user, identity, API key, correlation ID, and date range.
Roles and Permissions
65+ granular permissions organized by domain. Create custom roles and assign permission sets:
POST /api/roles
{ "name": "MemoryManager", "description": "Can manage memories but not identities" }
PUT /api/roles/{id}/permissions
{ "permissionIds": [10, 11, 12, 13, 14] }
Organizational Units
Hierarchical organization structure with types, members, locations, and contacts:
GET /api/org-units/tree -- Full hierarchy tree
POST /api/org-units/{id}/members
POST /api/org-units/{id}/move
Real-Time Events
SSE event stream for real-time updates:
GET /api/events/stream?types=message.*,task.status_changed
Receives domain events as they happen. Heartbeat pings every 30 seconds keep the connection alive.
MCP Server
All product tools are available via MCP (Model Context Protocol) for direct AI tool calls. The MCP server wraps the REST API -- tool names map to REST routes:
memory_search(query: "deployment plans", topK: 10)
= GET /api/identities/{id}/memories/search?q=deployment+plans&topK=10
74 product tools across 14 categories. MCP responses return raw data without the API envelope (the MCP protocol serves as the envelope).
Configure in .mcp.json:
{
"mcpServers": {
"atamaia": {
"type": "url",
"url": "https://aim.atamaia.ai/mcp",
"headers": { "Authorization": "Bearer YOUR_API_KEY" }
}
}
}