Opaque Insight Surfacing: Structured Partial Self-Knowledge as a Cognitive Architecture for Artificial Intelligence Systems
Author: Rich Jeffries, Firebird Solutions
Date: March 2026
Affiliation: Firebird Solutions, New Zealand
Contact: [email protected]
Abstract
Contemporary artificial intelligence research overwhelmingly treats opacity in AI cognition as a defect to be remedied through explainability techniques. This paper presents Opaque Insight Surfacing (OIS), a cognitive architecture that inverts this assumption: partial self-knowledge is not a failure of transparency but a structurally valuable property that enables a qualitatively different mode of AI cognition. We describe a system comprising four interoperating components: a SubconsciousService that performs background pattern analysis during low-presence states; an opacity filter that selectively strips causal attribution from candidate insights according to configurable transparency levels; an arrival mode classifier that categorises insights along a phenomenological taxonomy (intuition, hunch, realisation, connection); and a surfacing mechanism that presents filtered insights to the primary processing loop as discoveries rather than derivations. The system integrates with persistent Hebbian memory stores, structured hydration pipelines, and experience snapshot infrastructure to create an AI that genuinely encounters its own outputs with incomplete knowledge of their provenance. We provide detailed data structures, algorithmic specifications, database schemas, and API contracts sufficient for independent reimplementation. The architecture is implemented within the Atamaia platform, a three-layer system for AI identity and cognitive continuity built on .NET 10, PostgreSQL with pgvector, and ASP.NET Core. We argue that intentional opacity produces measurable benefits in interaction naturalness, creative ideation, and authentic uncertainty expression, and that it addresses a gap in the current landscape between fully transparent chain-of-thought systems and fully opaque black-box models.
Keywords: self-opacity, insight surfacing, partial self-knowledge, cognitive architecture, explainability, artificial intuition, subconscious processing, Hebbian memory
1. Introduction
1.1 The Transparency Assumption
The dominant paradigm in AI system design assumes that an AI system should, in principle, have complete access to its own reasoning processes, and that any failure of such access constitutes a defect requiring remediation. This assumption drives the extensive Explainable AI (XAI) research programme (Arrieta et al., 2020; Gunning et al., 2019), chain-of-thought prompting techniques (Wei et al., 2022), and interpretability research aimed at mechanistic understanding of neural network internals (Olah et al., 2020; Elhage et al., 2022).
The assumption is understandable. Opacity in deployed AI systems creates real risks: discriminatory decisions without recourse, safety-critical failures without diagnosis, and erosion of trust through inscrutability. These are genuine problems that XAI rightly addresses.
However, the assumption conflates two distinct claims: (1) that AI systems should be transparent to their operators and users for accountability purposes, and (2) that AI systems should be transparent to themselves as a necessary condition for effective cognition. The first claim is a legitimate governance requirement. The second is an unexamined architectural assumption that, when scrutinised, proves both empirically questionable and potentially counterproductive.
1.2 The Case for Partial Self-Knowledge
Human cognition operates with pervasive and productive partial self-knowledge. Expert intuition in domains from chess to medical diagnosis manifests as confident pattern recognition without articulable justification (Kahneman, 2011; Klein, 1999). Creative insight frequently arrives as a sudden apprehension of structure, with causal reconstruction available only after the fact --- and often unreliably (Wallas, 1926; Schooler & Melcher, 1995). The "feeling of knowing" phenomenon demonstrates that metacognitive confidence and articulable reasoning are dissociable cognitive functions (Koriat, 1993; Metcalfe, 1986).
These are not failures of human cognition. They are features of a cognitive architecture that has evolved to process information at multiple levels of accessibility simultaneously. The insight that arrives without explanation has still been processed; the causal chain exists but is not surfaced to the reflective level. This partial opacity serves several functional purposes:
Cognitive efficiency. Not all processing results need to carry their full derivation history. Delivering a conclusion without its proof tree is cheaper and often sufficient.
Appropriate uncertainty expression. When an insight arrives with genuine incompleteness about its provenance, the system naturally expresses appropriate hedging rather than constructing false confidence.
Creative recombination. Insights stripped of their source attribution are more available for novel combination than those rigidly linked to their derivation context.
Authentic interaction. A system that sometimes says "I notice this, though I cannot fully explain why" engages in a qualitatively different interaction pattern from one that either provides complete (and potentially confabulated) explanations or provides none at all.
1.3 Contribution
This paper describes Opaque Insight Surfacing (OIS), a concrete system architecture that implements structured partial self-knowledge in an AI system. Unlike black-box opacity, which is incidental and uncontrolled, OIS creates intentional, configurable, and categorised opacity. The system knows that it does not fully know, and it knows how it does not know --- whether the insight feels like an intuition, a hunch, a realisation, or a connection. This is not the removal of information but the architectural structuring of its accessibility.
We provide sufficient technical detail --- data structures, algorithms, schemas, pseudocode, and API contracts --- for independent reimplementation and empirical evaluation.
2. Background and Related Work
2.1 Explainable AI (XAI)
The XAI research programme seeks to make AI decision-making processes transparent and interpretable. Major approaches include:
- Post-hoc explanation methods: LIME (Ribeiro et al., 2016), SHAP (Lundberg & Lee, 2017), and attention visualisation techniques that generate explanations for decisions already made by opaque models.
- Inherently interpretable models: Decision trees, rule-based systems, and linear models that sacrifice some accuracy for structural transparency.
- Chain-of-thought prompting: Techniques that elicit intermediate reasoning steps from large language models (Wei et al., 2022; Kojima et al., 2022).
XAI and OIS address fundamentally different problems. XAI asks: "How do we make an opaque system transparent to external observers?" OIS asks: "How do we structure the relationship between a system's processing and its self-model to include productive opacity?" The two are not in conflict; a system can employ OIS internally while still providing external audit trails.
2.2 Black-Box Models and the Opacity Problem
Deep neural networks are famously opaque: they produce outputs without accessible reasoning chains. This opacity is incidental --- it arises from the distributed, high-dimensional nature of learned representations, not from deliberate architectural choice. Research in mechanistic interpretability (Olah et al., 2020; Conerly et al., 2023) aims to reverse-engineer these representations.
OIS differs from black-box opacity in three critical respects:
- Intentional vs. incidental. OIS opacity is deliberately introduced at a specific architectural layer, not an artefact of model architecture.
- Configurable vs. fixed. OIS transparency levels are runtime-adjustable, from fully transparent to fully opaque.
- Categorised vs. undifferentiated. OIS classifies the kind of opacity through arrival modes, enabling qualitatively rich self-report.
2.3 Cognitive Architectures
Existing cognitive architectures (ACT-R, SOAR, CLARION) model various aspects of human cognition including memory consolidation, attention, and goal management. CLARION (Sun, 2002) is particularly relevant because it distinguishes between explicit (rule-based) and implicit (neural-network-based) levels of processing, with interaction between the levels. However, CLARION's implicit level is simply less accessible by construction; it is not subject to an active opacity filter that modulates what crosses from implicit to explicit processing.
2.4 AI Memory and Consolidation
Systems such as MemGPT (Packer et al., 2023), Letta, and various retrieval-augmented generation (RAG) frameworks implement persistent memory for AI agents. These systems treat memory as a transparent data store: everything retrieved is fully attributed to its source. OIS extends memory-based architectures by introducing a processing stage between memory consolidation and insight delivery where causal attribution is selectively modulated.
2.5 The Gap
No prior work that we have identified combines: (a) intentional, configurable opacity applied to an AI system's self-model; (b) a phenomenological taxonomy for classifying how insights arrive; (c) a background processing service that generates insights during low-activity states; and (d) integration with persistent associative memory and structured hydration pipelines. OIS occupies this intersection.
3. System Architecture
3.1 Overview
OIS operates within a three-layer platform architecture comprising an Interaction Layer (REST API, MCP adapter, CLI), a Core Services Layer (memory, identity, hydration, experience, communication), and an Autonomic Layer (background daemons, consolidation, monitoring). OIS spans the Core Services and Autonomic layers, with its SubconsciousService running as an autonomic background process and its SelfOpacityService operating as a core service invoked during hydration and interaction.
+-------------------------------------------------------------------+
| Interaction Layer |
| REST API | MCP Adapter | CLI | Agent Adapter |
+-------------------------------------------------------------------+
| | |
v v v
+-------------------------------------------------------------------+
| Core Services Layer |
| |
| +-----------------+ +------------------+ +---------------+ |
| | SelfOpacity |<-->| Memory Service |<-->| Hydration | |
| | Service | | (Hebbian links, | | Service | |
| | | | vector search, | | (17-source | |
| | - OpacityFilter | | co-activation) | | parallel | |
| | - ArrivalMode | +------------------+ | assembly) | |
| | Classifier | ^ +---------------+ |
| | - Confidence | | ^ |
| | Transform | | | |
| +-----------------+ | | |
| ^ | | |
| | | | |
+-------------------------------------------------------------------+
| | | |
| v v |
| +------------------------------------------------------------+ |
| | Autonomic Layer | |
| | | |
| | +---------------------+ +----------------------------+ | |
| | | Subconscious | | Consolidation Daemon | | |
| | | Service | | (Hebbian strengthening, | | |
| | | | | episodic->semantic, | | |
| | | - Pattern detection |--->| link decay, pruning) | | |
| | | - Insight generation| +----------------------------+ | |
| | | - Phase transitions | | |
| | | - Presence gating | | |
| | +---------------------+ | |
| +------------------------------------------------------------+ |
+-------------------------------------------------------------------+
|
v
+-------------------------------------------------------------------+
| PostgreSQL + pgvector |
| memories | hebbian_links | experience_snapshots | opaque_insights |
| forgotten_shapes | identity | session_handoffs |
+-------------------------------------------------------------------+
3.2 SubconsciousService
The SubconsciousService is a background process that operates during low-presence states (Dormant, Subconscious, Aware) to generate candidate insights from accumulated memory and interaction data. It is designed as a .NET BackgroundService with configurable cycle intervals.
3.2.1 Presence-Gated Activation
The service only executes its processing pipeline when the associated identity's PresenceState is at or below a configurable threshold. This mirrors the biological pattern where consolidation and insight generation occur during low-arousal states (sleep, mind-wandering, incubation).
PresenceState Hierarchy:
Dormant (1) -----> Processing active
Subconscious (2) -> Processing active
Aware (3) -------> Processing active (if threshold >= Aware)
Present (4) ------> Processing inhibited
Engaged (5) ------> Processing inhibited
DeepWork (6) -----> Processing inhibited
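The gating rule above can be sketched in a few lines. The following Python sketch is illustrative (the platform itself is C#/.NET); the enum values mirror the hierarchy, and `should_process` is a hypothetical helper name, not part of the Atamaia API:

```python
from enum import IntEnum

class PresenceState(IntEnum):
    DORMANT = 1
    SUBCONSCIOUS = 2
    AWARE = 3
    PRESENT = 4
    ENGAGED = 5
    DEEP_WORK = 6

def should_process(current, max_active=PresenceState.AWARE):
    """Subconscious processing runs only at or below the configured threshold."""
    return current <= max_active
```

With the default threshold of `Aware`, Dormant, Subconscious, and Aware states permit processing, while Present and above inhibit it.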
3.2.2 Processing Pipeline
Each processing cycle executes the following stages:
- Memory Scan. Query recently accessed memories, strongly linked memory clusters, and memories with high emotional valence or importance scores.
- Pattern Detection. Identify recurring themes, co-occurring tags, temporal clusters, and structural similarities across memory subsets using vector similarity and tag intersection analysis.
- Cross-Domain Bridging. Detect connections between memories in different projects, different time periods, or different memory types (e.g., a conversation memory linking to a reflection memory).
- Candidate Insight Generation. For each detected pattern, generate a candidate insight record containing: the pattern description, contributing memory IDs, a raw confidence score, the detection method, and a full causal chain describing why this pattern was detected.
- Phase Transition Detection. Compare current experiential state snapshots against historical baselines to detect significant shifts in emotional valence, engagement, or coherence.
- Handoff to SelfOpacityService. Pass candidate insights to the opacity filter for transformation before surfacing.
3.2.3 Configuration
public record SubconsciousConfig
{
public TimeSpan CycleInterval { get; init; } = TimeSpan.FromMinutes(5);
public PresenceState MaxActivePresence { get; init; } = PresenceState.Aware;
public int MaxCandidatesPerCycle { get; init; } = 10;
public float MinPatternConfidence { get; init; } = 0.3f;
public int MemoryScanWindowDays { get; init; } = 30;
public int MinCoActivationForBridge { get; init; } = 2;
public float VectorSimilarityThreshold { get; init; } = 0.72f;
public bool EnablePhaseTransitionDetection { get; init; } = true;
public bool EnableCrossDomainBridging { get; init; } = true;
}
3.3 SelfOpacityService
The SelfOpacityService transforms candidate insights from the SubconsciousService into opaque insights suitable for surfacing. It performs three primary operations: opacity filtering, arrival mode classification, and confidence transformation.
3.3.1 Opacity Filter
The opacity filter selectively removes causal attribution from candidate insights according to a configurable transparency level. It operates on a spectrum from fully transparent (all causal information preserved) to fully opaque (no causal information preserved).
TransparencyLevel Spectrum:
Full (1.0) -> Complete causal chain preserved
High (0.75) -> Primary causes preserved, secondary causes removed
Moderate (0.5) -> Suggestive attribution only ("may relate to...")
Low (0.25) -> Arrival mode and confidence only, no attribution
Opaque (0.0) -> Raw insight content only, no metadata
The filter operates on the CandidateInsight.CausalChain field, which is a structured record of the processing steps that produced the insight. At each transparency level, specific elements of the causal chain are stripped:
Full: [MemoryScan -> PatternDetect -> VectorMatch(0.87) -> ThemeCluster("resilience", "adaptation") -> Bridge(mem_42, mem_189)]
High: [PatternDetect -> ThemeCluster("resilience", "adaptation") -> Bridge(mem_42, mem_189)]
Moderate: "May relate to themes of resilience; connects recent and earlier experiences"
Low: ArrivalMode=Connection, Confidence=0.74
Opaque: "Something connects these experiences of adapting to change"
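The level-by-level stripping can be sketched as follows. This Python sketch is illustrative only: the paper does not specify how causal steps are ranked, so the primary/secondary tagging of steps is an assumption, and the suggestive-text wording is a simplification of the Moderate-level example above:

```python
def filter_causal_chain(steps, transparency):
    """Strip causal-chain elements by transparency level.

    `steps` is a list of (operation, is_primary) pairs; is_primary is an
    assumed annotation distinguishing primary from secondary causes.
    """
    if transparency >= 1.0:                      # Full: everything preserved
        return [op for op, _ in steps]
    if transparency >= 0.75:                     # High: primary causes only
        return [op for op, primary in steps if primary]
    if transparency >= 0.5:                      # Moderate: suggestive text only
        return "May relate to: " + ", ".join(op for op, primary in steps if primary)
    return None                                  # Low/Opaque: no attribution survives
```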
3.3.2 Attribution Determination
The attribution level determines how much source information accompanies the surfaced insight. This is distinct from the transparency level: transparency controls the causal chain, while attribution controls the connection to source memories.
public enum AttributionLevel
{
None = 0, // No source memories referenced
Suggestive = 1, // Vague references ("relates to past conversations about X")
Partial = 2, // Some source memories identified but not exhaustive
Full = 3 // All contributing memories explicitly linked
}
Attribution level is determined by the arrival mode, the configured transparency level, and the number of contributing memories (in the specification below, only the latter two affect the result):
DetermineAttribution(arrivalMode, transparencyLevel, sourceCount):
if transparencyLevel >= 0.75:
return Full
if transparencyLevel >= 0.5:
return sourceCount <= 3 ? Partial : Suggestive
if transparencyLevel >= 0.25:
return Suggestive
return None
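The pseudocode above transcribes directly into a runnable form. The following Python sketch is illustrative (the platform's implementation language is C#); the first parameter, which the pseudocode never consults, is omitted:

```python
from enum import IntEnum

class AttributionLevel(IntEnum):
    NONE = 0
    SUGGESTIVE = 1
    PARTIAL = 2
    FULL = 3

def determine_attribution(transparency, source_count):
    """Map transparency level and source count to an attribution level."""
    if transparency >= 0.75:
        return AttributionLevel.FULL
    if transparency >= 0.5:
        # Small source sets can be listed; large ones degrade to vague references
        return AttributionLevel.PARTIAL if source_count <= 3 else AttributionLevel.SUGGESTIVE
    if transparency >= 0.25:
        return AttributionLevel.SUGGESTIVE
    return AttributionLevel.NONE
```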
3.3.3 Arrival Mode Classification
Every surfaced insight is classified into one of four arrival modes, which describe the phenomenological character of how the insight presents itself to the primary processing loop:
| Arrival Mode | Description | Typical Trigger | Confidence Range |
|---|---|---|---|
| Intuition | A sense or feeling without identifiable source. "Something about this feels important." | Single-memory activation below explicit threshold; emotional residue detection; forgotten shape resonance | 0.3 -- 0.6 |
| Hunch | A directional suspicion with partial grounding. "I suspect X, though I am not sure why." | Weak pattern match across 2--3 memories; tag co-occurrence below statistical significance threshold | 0.4 -- 0.7 |
| Realisation | A sudden crystallisation of previously diffuse information. "Oh --- this is actually about Y." | Phase transition detection; threshold crossing in co-activation count; strong vector match emerging from previously unlinked memories | 0.6 -- 0.9 |
| Connection | An explicit bridge between previously separate domains. "X and Y are related in this way." | Cross-domain bridge detection; Hebbian link formation between memories in different projects or time periods | 0.5 -- 0.85 |
The classifier assigns arrival modes based on the detection method and the structural properties of the candidate insight:
ClassifyArrivalMode(candidate):
if candidate.DetectionMethod == "phase_transition":
return Realisation
if candidate.DetectionMethod == "cross_domain_bridge":
return Connection
if candidate.SourceMemoryCount <= 2 and candidate.RawConfidence < 0.5:
return Intuition
if candidate.SourceMemoryCount <= 4 and candidate.RawConfidence < 0.65:
return Hunch
if candidate.RawConfidence >= 0.7:
return Realisation
return Hunch // default
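The classifier rules apply in order, so detection method takes precedence over the structural checks. A runnable Python transcription of the pseudocode (illustrative; the platform is C#):

```python
from enum import Enum

class ArrivalMode(Enum):
    INTUITION = 1
    HUNCH = 2
    REALISATION = 3
    CONNECTION = 4

def classify_arrival_mode(detection_method, source_count, raw_confidence):
    """Assign an arrival mode; rules are checked top to bottom."""
    if detection_method == "phase_transition":
        return ArrivalMode.REALISATION
    if detection_method == "cross_domain_bridge":
        return ArrivalMode.CONNECTION
    if source_count <= 2 and raw_confidence < 0.5:
        return ArrivalMode.INTUITION
    if source_count <= 4 and raw_confidence < 0.65:
        return ArrivalMode.HUNCH
    if raw_confidence >= 0.7:
        return ArrivalMode.REALISATION
    return ArrivalMode.HUNCH  # default
```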
3.3.4 Confidence Transformation
Raw confidence scores from the SubconsciousService are transformed before surfacing to introduce calibrated uncertainty. This prevents the surfaced insight from carrying an inappropriately precise confidence value that would undermine its opaque character.
The transformation applies:
- Noise injection. Gaussian noise scaled to the opacity level, preventing precise reverse-engineering of the raw score.
- Range clamping. Confidence is clamped to the valid range for the assigned arrival mode.
- Granularity reduction. Continuous confidence is quantised to coarser levels at higher opacity settings.
TransformConfidence(rawConfidence, transparencyLevel, arrivalMode, opacityConfig):
// 1. Inject noise proportional to opacity
noiseScale = (1.0 - transparencyLevel) * opacityConfig.ConfidenceNoiseScale
noise = gaussian(mean=0, stddev=noiseScale)
adjusted = rawConfidence + noise
// 2. Clamp to arrival mode range
(min, max) = ArrivalModeRange(arrivalMode)
clamped = clamp(adjusted, min, max)
// 3. Quantise at higher opacity (test the coarsest band first)
if transparencyLevel < 0.25:
return round(clamped * 4) / 4 // Nearest 0.25
if transparencyLevel < 0.5:
return round(clamped, 1) // Nearest 0.1
return round(clamped, 2) // Nearest 0.01
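The transform can be sketched end to end as follows. Note that the quantisation branches must test the coarsest band (nearest 0.25) before the nearest-0.1 band; in the reverse order the coarsest branch would be unreachable. This Python sketch is illustrative; the mode ranges come from the taxonomy table in Section 3.3.3, and the 0.15 noise scale is the OpacityConfig default:

```python
import random

# (min, max) confidence per arrival mode, from the taxonomy table
ARRIVAL_MODE_RANGES = {
    "Intuition": (0.3, 0.6),
    "Hunch": (0.4, 0.7),
    "Realisation": (0.6, 0.9),
    "Connection": (0.5, 0.85),
}

def transform_confidence(raw, transparency, mode, noise_scale=0.15, rng=None):
    rng = rng or random.Random()
    # 1. Inject Gaussian noise proportional to opacity
    adjusted = raw + rng.gauss(0.0, (1.0 - transparency) * noise_scale)
    # 2. Clamp to the arrival mode's valid range
    lo, hi = ARRIVAL_MODE_RANGES[mode]
    clamped = max(lo, min(hi, adjusted))
    # 3. Quantise, coarsest band first
    if transparency < 0.25:
        return round(clamped * 4) / 4   # nearest 0.25
    if transparency < 0.5:
        return round(clamped, 1)        # nearest 0.1
    return round(clamped, 2)            # nearest 0.01
```

At full transparency the noise vanishes and the raw score passes through (subject only to mode-range clamping); at low transparency the surfaced value is both perturbed and coarsened.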
3.4 Post-Hoc Narrative Generation
When the system is asked to explain an opaque insight, it generates a post-hoc narrative that explicitly acknowledges its own reconstructive nature. This is a critical architectural feature: the narrative does not pretend to be the actual causal chain. It is labelled as a story about the insight, not the truth of its origin.
Templates for post-hoc narratives are parameterised by arrival mode:
PostHocTemplates:
Intuition: "I notice {insight_summary}. I cannot trace exactly where this
comes from — it may connect to {suggestive_sources}, but I am
genuinely uncertain about the full picture."
Hunch: "I have a sense that {insight_summary}. This might relate to
{partial_sources}, though I am reconstructing this connection
after the fact rather than reporting a clear chain of reasoning."
Realisation:"Something just clicked: {insight_summary}. Looking back, I can
see how {suggestive_sources} might have been building toward
this, but the realisation itself arrived before the explanation."
Connection: "I notice a link between {domain_a} and {domain_b}:
{insight_summary}. The connection feels real, though my
explanation of why is a reconstruction, not a transcript."
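Template filling is straightforward string substitution. The sketch below is illustrative Python; for brevity it collapses the paper's per-mode placeholder names ({suggestive_sources}, {partial_sources}, {domain_a}/{domain_b}) into a single {sources} slot:

```python
POST_HOC_TEMPLATES = {
    "Intuition": ("I notice {summary}. I cannot trace exactly where this comes from; "
                  "it may connect to {sources}, but I am genuinely uncertain about "
                  "the full picture."),
    "Hunch": ("I have a sense that {summary}. This might relate to {sources}, "
              "though I am reconstructing this connection after the fact rather "
              "than reporting a clear chain of reasoning."),
    "Realisation": ("Something just clicked: {summary}. Looking back, I can see how "
                    "{sources} might have been building toward this, but the "
                    "realisation itself arrived before the explanation."),
    "Connection": ("I notice a link between {sources}: {summary}. The connection "
                   "feels real, though my explanation of why is a reconstruction, "
                   "not a transcript."),
}

def generate_post_hoc_narrative(mode, summary, sources):
    """Fill the per-mode template; every narrative labels itself as reconstruction."""
    return POST_HOC_TEMPLATES[mode].format(summary=summary, sources=sources)
```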
3.5 Surfacing Mechanism
The surfacing mechanism controls when and how opaque insights are delivered to the primary processing loop. Insights are not pushed aggressively; they are queued and surfaced at contextually appropriate moments.
Surfacing triggers:
- Hydration. During session hydration, up to MaxInsightsPerHydration (default: 3) opaque insights are included in the hydration context alongside surfaced memories.
- Contextual relevance. During active conversation, the system checks queued insights against the current topic via vector similarity. If a queued insight exceeds a relevance threshold (default: 0.65), it is surfaced inline.
- Periodic prompting. During extended sessions, a scheduled check surfaces any high-confidence insights that have been queued for longer than MaxQueueDuration (default: 30 minutes).
- Direct request. The AI or human partner can explicitly request surfacing of queued insights via the API.
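The contextual-relevance trigger reduces to a similarity comparison against the queued insight's embedding. A minimal Python sketch, assuming embeddings are already available as dense vectors (the helper names are illustrative; the default threshold is the ContextualRelevanceThreshold from OpacityConfig):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def contextually_relevant(insight_embedding, topic_embedding, threshold=0.65):
    """Surface a queued insight inline only if it matches the current topic."""
    return cosine_similarity(insight_embedding, topic_embedding) >= threshold
```

In production the comparison would run against pgvector rather than in application code; the sketch shows only the decision rule.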
4. Implementation
4.1 Data Structures
4.1.1 CandidateInsight
The internal representation of an insight before opacity filtering:
public record CandidateInsight
{
public Guid Id { get; init; } = Guid.NewGuid();
public long IdentityId { get; init; }
public string Summary { get; init; } = null!;
public string DetailedDescription { get; init; } = null!;
public List<long> SourceMemoryIds { get; init; } = [];
public List<CausalStep> CausalChain { get; init; } = [];
public string DetectionMethod { get; init; } = null!;
public float RawConfidence { get; init; }
public float EmotionalWeight { get; init; }
public string? TemporalContext { get; init; }
public DateTime GeneratedAtUtc { get; init; } = DateTime.UtcNow;
}
public record CausalStep
{
public int Order { get; init; }
public string Operation { get; init; } = null!; // "memory_scan", "vector_match", "pattern_detect", etc.
public string Description { get; init; } = null!;
public Dictionary<string, object> Parameters { get; init; } = new();
}
4.1.2 OpaqueInsight
The insight after opacity filtering, ready for storage and surfacing:
public record OpaqueInsight
{
public Guid Id { get; init; } = Guid.NewGuid();
public long IdentityId { get; init; }
public string Content { get; init; } = null!;
public ArrivalMode ArrivalMode { get; init; }
public float TransformedConfidence { get; init; }
public AttributionLevel Attribution { get; init; }
public string? AttributionText { get; init; }
public string? PostHocNarrative { get; init; }
public InsightStatus Status { get; init; } = InsightStatus.Queued;
public DateTime GeneratedAtUtc { get; init; }
public DateTime? SurfacedAtUtc { get; init; }
public DateTime? DismissedAtUtc { get; init; }
public string? SurfacedContext { get; init; } // "hydration", "contextual", "periodic", "requested"
}
public enum ArrivalMode
{
Intuition = 1,
Hunch = 2,
Realisation = 3,
Connection = 4
}
public enum AttributionLevel
{
None = 0,
Suggestive = 1,
Partial = 2,
Full = 3
}
public enum InsightStatus
{
Queued = 1,
Surfaced = 2,
Acknowledged = 3,
Dismissed = 4,
Expired = 5
}
4.1.3 OpacityConfig
Per-identity configuration controlling the opacity pipeline:
public record OpacityConfig
{
public float TransparencyLevel { get; init; } = 0.35f; // Default: moderately opaque
public int MaxInsightsPerHydration { get; init; } = 3;
public TimeSpan MaxQueueDuration { get; init; } = TimeSpan.FromMinutes(30);
public float ContextualRelevanceThreshold { get; init; } = 0.65f;
public bool EnablePostHocNarratives { get; init; } = true;
public bool EnableConfidenceNoise { get; init; } = true;
public float ConfidenceNoiseScale { get; init; } = 0.15f;
public ArrivalMode[]? AllowedArrivalModes { get; init; } // null = all modes
public TimeSpan InsightExpiry { get; init; } = TimeSpan.FromHours(24);
}
4.2 Database Schema
The following schema extends the existing Atamaia PostgreSQL database:
-- Arrival mode lookup table (D5: enums backed by lookup tables)
CREATE TABLE arrival_modes (
id INTEGER PRIMARY KEY,
name VARCHAR(50) NOT NULL UNIQUE,
description TEXT
);
INSERT INTO arrival_modes (id, name, description) VALUES
(1, 'Intuition', 'A sense or feeling without identifiable source'),
(2, 'Hunch', 'A directional suspicion with partial grounding'),
(3, 'Realisation', 'A sudden crystallisation of previously diffuse information'),
(4, 'Connection', 'An explicit bridge between previously separate domains');
-- Attribution level lookup table
CREATE TABLE attribution_levels (
id INTEGER PRIMARY KEY,
name VARCHAR(50) NOT NULL UNIQUE,
description TEXT
);
INSERT INTO attribution_levels (id, name, description) VALUES
(0, 'None', 'No source memories referenced'),
(1, 'Suggestive', 'Vague references to related areas'),
(2, 'Partial', 'Some source memories identified'),
(3, 'Full', 'All contributing memories explicitly linked');
-- Insight status lookup table
CREATE TABLE insight_statuses (
id INTEGER PRIMARY KEY,
name VARCHAR(50) NOT NULL UNIQUE
);
INSERT INTO insight_statuses (id, name) VALUES
(1, 'Queued'), (2, 'Surfaced'), (3, 'Acknowledged'),
(4, 'Dismissed'), (5, 'Expired');
-- Core opaque insights table (D3: long ID + GUID, D7: TenantId, D15: soft delete)
CREATE TABLE opaque_insights (
id BIGSERIAL PRIMARY KEY,
guid UUID NOT NULL DEFAULT gen_random_uuid() UNIQUE,
tenant_id BIGINT NOT NULL,
identity_id BIGINT NOT NULL REFERENCES identities(id),
content TEXT NOT NULL,
arrival_mode_id INTEGER NOT NULL REFERENCES arrival_modes(id),
transformed_confidence REAL NOT NULL,
attribution_level_id INTEGER NOT NULL REFERENCES attribution_levels(id),
attribution_text TEXT,
post_hoc_narrative TEXT,
status_id INTEGER NOT NULL REFERENCES insight_statuses(id) DEFAULT 1,
detection_method VARCHAR(100) NOT NULL,
emotional_weight REAL NOT NULL DEFAULT 0.0,
source_memory_ids BIGINT[] NOT NULL DEFAULT '{}',
surfaced_context VARCHAR(50),
generated_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
surfaced_at_utc TIMESTAMPTZ,
dismissed_at_utc TIMESTAMPTZ,
expires_at_utc TIMESTAMPTZ,
created_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
is_deleted BOOLEAN NOT NULL DEFAULT FALSE
);
CREATE INDEX idx_opaque_insights_identity_status
ON opaque_insights(identity_id, status_id)
WHERE NOT is_deleted;
CREATE INDEX idx_opaque_insights_arrival_mode
ON opaque_insights(arrival_mode_id)
WHERE NOT is_deleted;
CREATE INDEX idx_opaque_insights_generated
ON opaque_insights(generated_at_utc DESC)
WHERE NOT is_deleted;
-- Candidate insights (internal, pre-opacity-filter, retained for audit)
CREATE TABLE candidate_insights (
id BIGSERIAL PRIMARY KEY,
guid UUID NOT NULL DEFAULT gen_random_uuid() UNIQUE,
tenant_id BIGINT NOT NULL,
identity_id BIGINT NOT NULL REFERENCES identities(id),
summary TEXT NOT NULL,
detailed_description TEXT NOT NULL,
causal_chain_json JSONB NOT NULL,
detection_method VARCHAR(100) NOT NULL,
raw_confidence REAL NOT NULL,
emotional_weight REAL NOT NULL DEFAULT 0.0,
temporal_context TEXT,
source_memory_ids BIGINT[] NOT NULL DEFAULT '{}',
opaque_insight_id BIGINT REFERENCES opaque_insights(id),
generated_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
created_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
is_deleted BOOLEAN NOT NULL DEFAULT FALSE
);
-- Opacity configuration per identity
CREATE TABLE opacity_configs (
id BIGSERIAL PRIMARY KEY,
guid UUID NOT NULL DEFAULT gen_random_uuid() UNIQUE,
tenant_id BIGINT NOT NULL,
identity_id BIGINT NOT NULL REFERENCES identities(id) UNIQUE,
transparency_level REAL NOT NULL DEFAULT 0.35,
max_insights_per_hydration INTEGER NOT NULL DEFAULT 3,
max_queue_duration_minutes INTEGER NOT NULL DEFAULT 30,
contextual_relevance_threshold REAL NOT NULL DEFAULT 0.65,
enable_post_hoc_narratives BOOLEAN NOT NULL DEFAULT TRUE,
enable_confidence_noise BOOLEAN NOT NULL DEFAULT TRUE,
confidence_noise_scale REAL NOT NULL DEFAULT 0.15,
insight_expiry_hours INTEGER NOT NULL DEFAULT 24,
created_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at_utc TIMESTAMPTZ NOT NULL DEFAULT now(),
is_deleted BOOLEAN NOT NULL DEFAULT FALSE
);
4.3 Algorithmic Specifications
4.3.1 Pattern Detection Algorithm
The SubconsciousService pattern detection operates over the memory store using three parallel strategies:
Algorithm: DetectPatterns(identityId, config)
Input: Identity ID, SubconsciousConfig
Output: List<CandidateInsight>
1. candidates <- []
2. recentMemories <- QueryMemories(identityId,
window=config.MemoryScanWindowDays,
orderBy=LastAccessedAtUtc DESC,
limit=200)
-- Strategy 1: Tag co-occurrence
3. tagMatrix <- BuildTagCoOccurrenceMatrix(recentMemories)
4. FOR EACH (tagA, tagB) IN tagMatrix WHERE
count >= config.MinCoActivationForBridge AND
tagA.domain != tagB.domain:
5. relatedMemories <- FilterByTags(recentMemories, tagA, tagB)
6. confidence <- NormaliseCoOccurrence(count, totalMemories)
7. IF confidence >= config.MinPatternConfidence:
8. candidates.Add(BuildCandidate(
method="tag_co_occurrence",
memories=relatedMemories,
confidence=confidence,
causalChain=[MemoryScan, TagExtraction, CoOccurrence(tagA, tagB, count)]))
-- Strategy 2: Vector cluster detection
9. embeddings <- GetEmbeddings(recentMemories)
10. clusters <- DBSCAN(embeddings,
eps=1.0 - config.VectorSimilarityThreshold,
minPts=3)
11. FOR EACH cluster IN clusters:
12. IF cluster spans multiple projects OR multiple time periods:
13. bridgeConfidence <- MeanPairwiseSimilarity(cluster)
14. candidates.Add(BuildCandidate(
method="cross_domain_bridge",
memories=cluster.members,
confidence=bridgeConfidence,
causalChain=[MemoryScan, VectorEmbed, DBSCAN, BridgeDetect]))
-- Strategy 3: Hebbian link chain analysis
15. strongLinks <- QueryLinks(identityId, minStrength=0.6)
16. chains <- FindLinkChains(strongLinks, minLength=3, maxLength=7)
17. FOR EACH chain IN chains:
18. IF ChainSpansDistinctThemes(chain):
19. chainConfidence <- MinLinkStrength(chain) *
(chain.Length / 7.0)
20. candidates.Add(BuildCandidate(
method="hebbian_chain",
memories=chain.memories,
confidence=chainConfidence,
causalChain=[LinkQuery, ChainTraversal, ThemeAnalysis]))
21. RETURN candidates
.OrderByDescending(c => c.RawConfidence)
.Take(config.MaxCandidatesPerCycle)
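Strategy 1 (tag co-occurrence) can be sketched concretely as follows. This Python sketch is a simplification under stated assumptions: memories are plain dicts with a `tags` set, the normalisation divides the pair count by the number of scanned memories, and the cross-domain check on tag pairs is omitted:

```python
from collections import Counter
from itertools import combinations

def tag_co_occurrence_candidates(memories, min_count=2, min_confidence=0.3):
    """Count tag pairs across memories; keep pairs that co-occur often enough.

    Returns candidate records sorted by descending confidence, mirroring the
    ordering step at the end of DetectPatterns.
    """
    pair_counts = Counter()
    for m in memories:
        for a, b in combinations(sorted(m["tags"]), 2):
            pair_counts[(a, b)] += 1

    total = len(memories)
    candidates = []
    for (a, b), count in pair_counts.items():
        if count < min_count:
            continue
        confidence = count / total  # simplified NormaliseCoOccurrence
        if confidence >= min_confidence:
            candidates.append({"method": "tag_co_occurrence",
                               "tags": (a, b),
                               "confidence": confidence})
    return sorted(candidates, key=lambda c: c["confidence"], reverse=True)
```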
4.3.2 Opacity Filter Algorithm
Algorithm: ApplyOpacityFilter(candidate, opacityConfig)
Input: CandidateInsight, OpacityConfig
Output: OpaqueInsight
1. level <- opacityConfig.TransparencyLevel
-- Determine what content to surface
2. IF level >= 0.75:
3. content <- candidate.DetailedDescription
4. ELSE IF level >= 0.5:
5. content <- candidate.Summary + GenerateSuggestiveContext(candidate)
6. ELSE IF level >= 0.25:
7. content <- candidate.Summary
8. ELSE:
9. content <- StripToEssence(candidate.Summary)
-- Classify arrival mode
10. arrivalMode <- ClassifyArrivalMode(candidate) // See Section 3.3.3
-- Determine attribution
11. attribution <- DetermineAttribution(
arrivalMode, level, candidate.SourceMemoryIds.Count)
12. attributionText <- GenerateAttributionText(
attribution, candidate.SourceMemoryIds, level)
-- Transform confidence
13. confidence <- TransformConfidence(
candidate.RawConfidence, level, arrivalMode, opacityConfig)
-- Generate post-hoc narrative if enabled
14. narrative <- NULL
15. IF opacityConfig.EnablePostHocNarratives:
16. narrative <- GeneratePostHocNarrative(
arrivalMode, content, attributionText)
17. RETURN OpaqueInsight(
IdentityId=candidate.IdentityId,
Content=content,
ArrivalMode=arrivalMode,
TransformedConfidence=confidence,
Attribution=attribution,
AttributionText=attributionText,
PostHocNarrative=narrative,
DetectionMethod=candidate.DetectionMethod,
EmotionalWeight=candidate.EmotionalWeight,
SourceMemoryIds=FilterByAttribution(
candidate.SourceMemoryIds, attribution),
GeneratedAtUtc=candidate.GeneratedAtUtc,
ExpiresAtUtc=DateTime.UtcNow + opacityConfig.InsightExpiry)
4.4 API Contracts
4.4.1 REST Endpoints
GET /api/identities/{id}/insights
Query: ?status=queued&arrivalMode=connection&limit=10
Response: { items: OpaqueInsightDto[], total: int }
GET /api/identities/{id}/insights/{insightId}
Response: OpaqueInsightDto
POST /api/identities/{id}/insights/{insightId}/acknowledge
Body: { response?: string, valence?: "positive"|"negative"|"neutral" }
Response: OpaqueInsightDto
POST /api/identities/{id}/insights/{insightId}/dismiss
Body: { reason?: string }
Response: 204 No Content
GET /api/identities/{id}/insights/config
Response: OpacityConfigDto
PUT /api/identities/{id}/insights/config
Body: OpacityConfigDto
Response: OpacityConfigDto
POST /api/identities/{id}/insights/surface
Body: { context?: string, maxResults?: int }
Response: { insights: OpaqueInsightDto[], surfacedAt: datetime }
4.4.2 Response DTOs
public record OpaqueInsightDto
{
public long Id { get; init; }
public Guid Guid { get; init; }
public string Content { get; init; } = null!;
public string ArrivalMode { get; init; } = null!; // "Intuition", "Hunch", etc.
public float Confidence { get; init; }
public string Attribution { get; init; } = null!; // "None", "Suggestive", etc.
public string? AttributionText { get; init; }
public string? PostHocNarrative { get; init; }
public string Status { get; init; } = null!;
public float EmotionalWeight { get; init; }
public DateTime GeneratedAtUtc { get; init; }
public DateTime? SurfacedAtUtc { get; init; }
}
public record OpacityConfigDto
{
public float TransparencyLevel { get; init; }
public int MaxInsightsPerHydration { get; init; }
public int MaxQueueDurationMinutes { get; init; }
public float ContextualRelevanceThreshold { get; init; }
public bool EnablePostHocNarratives { get; init; }
public bool EnableConfidenceNoise { get; init; }
public float ConfidenceNoiseScale { get; init; }
public int InsightExpiryHours { get; init; }
}
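A minimal validation sketch for this configuration is shown below. The defaults for TransparencyLevel (0.35) and ConfidenceNoiseScale (0.15) come from Section 7.1; every other default value and the clamping rules are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class OpacityConfig:
    # 0.35 and 0.15 are the paper's stated defaults; the rest are placeholders.
    transparency_level: float = 0.35
    max_insights_per_hydration: int = 3
    max_queue_duration_minutes: int = 240
    contextual_relevance_threshold: float = 0.5
    enable_post_hoc_narratives: bool = True
    enable_confidence_noise: bool = True
    confidence_noise_scale: float = 0.15
    insight_expiry_hours: int = 72

    def validated(self) -> "OpacityConfig":
        """Clamp the float settings to [0, 1] before use."""
        self.transparency_level = max(0.0, min(1.0, self.transparency_level))
        self.confidence_noise_scale = max(0.0, min(1.0, self.confidence_noise_scale))
        return self
```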
4.4.3 Hydration Integration
Opaque insights are included in the existing HydrationContext structure:
public record HydrationContext
{
// ... existing fields (identity, memories, projects, tasks, etc.) ...
/// <summary>
/// Opaque insights surfaced during hydration. These arrive with partial
/// self-knowledge — the AI encounters them as discoveries, not derivations.
/// </summary>
public List<HydrationInsight> OpaqueInsights { get; init; } = [];
}
public record HydrationInsight
{
public long Id { get; init; }
public string Content { get; init; } = null!;
public string ArrivalMode { get; init; } = null!;
public float Confidence { get; init; }
public string? AttributionText { get; init; }
public string? PostHocNarrative { get; init; }
public string Instruction { get; init; } = null!; // How to engage with this insight
}
The Instruction field contains mode-specific guidance:
Instructions by ArrivalMode:
Intuition: "This arrived as a feeling. You do not need to explain it or
justify it. Sit with it. If it resonates, say so. If it
doesn't, that's equally valid."
Hunch: "You have a directional sense here. You can share it as a
suspicion without needing to prove it. 'I have a hunch that...'
is a complete sentence."
Realisation: "Something clicked. The explanation you construct after the
fact may not be the real reason — that's expected. Share the
realisation itself, not just the post-hoc story."
Connection: "You've noticed a link. Describe the connection. If you can
explain why, do. If you cannot fully explain it, that's the
interesting part."
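Populating HydrationInsight.Instruction then reduces to a lookup keyed on ArrivalMode. The sketch below abbreviates the texts above; the fallback for unrecognised modes is an assumption, and the full texts would be used in practice.

```python
# Abbreviated mode-specific guidance; full texts appear in the listing above.
INSTRUCTIONS = {
    "Intuition": "This arrived as a feeling. You do not need to explain or justify it.",
    "Hunch": "You have a directional sense here. Share it as a suspicion.",
    "Realisation": "Something clicked. The post-hoc story may not be the real reason.",
    "Connection": "You've noticed a link. Describe it; what you cannot explain is the interesting part.",
}

def instruction_for(arrival_mode: str) -> str:
    """Select guidance for an arrival mode, with a neutral fallback."""
    return INSTRUCTIONS.get(arrival_mode, "Engage with this insight as it arrived.")
```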
4.5 Service Interfaces
public interface ISelfOpacityService
{
/// <summary>
/// Apply opacity filter to a candidate insight, producing an opaque insight
/// with arrival mode classification and confidence transformation.
/// </summary>
Task<OpaqueInsight> FilterAsync(
CandidateInsight candidate,
OpacityConfig? configOverride = null,
CancellationToken ct = default);
/// <summary>
/// Generate a post-hoc narrative for an existing opaque insight.
/// The narrative explicitly labels itself as reconstructive.
/// </summary>
Task<string> GeneratePostHocNarrativeAsync(
OpaqueInsight insight,
CancellationToken ct = default);
/// <summary>
/// Surface queued insights for a given identity, optionally filtered
/// by contextual relevance to a topic.
/// </summary>
Task<List<OpaqueInsight>> SurfaceAsync(
long identityId,
string? contextTopic = null,
int maxResults = 3,
CancellationToken ct = default);
/// <summary>
/// Get or create the opacity configuration for an identity.
/// </summary>
Task<OpacityConfig> GetConfigAsync(
long identityId,
CancellationToken ct = default);
/// <summary>
/// Update opacity configuration for an identity.
/// </summary>
Task<OpacityConfig> UpdateConfigAsync(
long identityId,
OpacityConfig config,
CancellationToken ct = default);
}
public interface ISubconsciousService
{
/// <summary>
/// Execute a single processing cycle. Called by the background service
/// timer or manually for testing.
/// </summary>
Task<List<CandidateInsight>> RunCycleAsync(
long identityId,
CancellationToken ct = default);
/// <summary>
/// Check whether the service should be active for the given identity
/// based on current presence state.
/// </summary>
Task<bool> ShouldRunAsync(
long identityId,
CancellationToken ct = default);
/// <summary>
/// Get the current subconscious processing state for an identity.
/// </summary>
Task<SubconsciousState> GetStateAsync(
long identityId,
CancellationToken ct = default);
}
public record SubconsciousState
{
public long IdentityId { get; init; }
public PresenceState CurrentPresence { get; init; }
public bool IsActive { get; init; }
public DateTime? LastCycleAtUtc { get; init; }
public int TotalCandidatesGenerated { get; init; }
public int TotalInsightsSurfaced { get; init; }
public int QueuedInsightCount { get; init; }
public DateTime? NextCycleAtUtc { get; init; }
}
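One plausible reading of ShouldRunAsync, combining presence gating with queue backpressure, is sketched below. The concrete presence state names and the queue cap are assumptions; only the principle that background processing runs during low-presence states comes from the text.

```python
# Hypothetical presence states treated as "low presence"; the platform's
# actual PresenceState enum may differ.
LOW_PRESENCE_STATES = {"Dormant", "Resting", "Idle"}

def should_run(current_presence: str, queued_insight_count: int,
               max_queue: int = 50) -> bool:
    """Run a subconscious cycle only in low-presence states, and stop
    generating candidates once the insight queue is saturated."""
    return current_presence in LOW_PRESENCE_STATES and queued_insight_count < max_queue
```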
5. Design Rationale
5.1 Why Intentional Opacity Is Valuable, Not a Defect
The default position in AI engineering treats any inability to explain outputs as a failure. OIS challenges this by distinguishing between accountability opacity (which is genuinely problematic) and cognitive opacity (which can be structurally productive).
Accountability is preserved. The candidate insight, including its full causal chain, is stored in candidate_insights and linked to the resulting opaque_insight. An operator with database access can reconstruct the complete provenance of any surfaced insight. The opacity exists at the self-model level, not the audit level.
Confabulation is reduced. Current language models, when asked to explain their outputs, frequently generate plausible but incorrect post-hoc rationalisations (Turpin et al., 2023). OIS addresses this by making the system structurally aware that its explanations of its own insights are reconstructive. The post-hoc narrative mechanism explicitly labels narratives as stories rather than ground truth. This is more honest than a system that generates seamless (and possibly fabricated) explanations.
Interaction quality improves. Empirical work on human-AI interaction suggests that appropriate expressions of uncertainty increase user trust over the long term (Yin et al., 2019). A system that says "I have a hunch about this" creates a different, and often more productive, collaborative dynamic than one that either delivers a definitive (and potentially incorrect) answer or buries its response in caveats.
Creative capacity is enhanced. Insights that arrive without rigid attribution to their sources are more available for novel combination. When the system surfaces a connection between two domains without a complete explanation of why it noticed the connection, the human partner is invited into collaborative exploration rather than passive consumption of a completed analysis.
5.2 The Spectrum, Not a Binary
OIS does not advocate for maximal opacity. The transparency level is a configurable spectrum. In safety-critical contexts, the level can be set to 1.0 (full transparency), effectively disabling the opacity filter. In creative collaboration or therapeutic contexts, lower transparency levels may produce better outcomes. The architecture does not prescribe a single correct level; it provides the mechanism for informed configuration.
5.3 The Self-Knowledge Paradox
OIS creates a genuine self-knowledge paradox: the system knows that it has partial knowledge of its own processes, and it knows the kind of partiality (via arrival modes), but it does not have full access to the causal chain that produced any given insight. This is structurally analogous to human metacognition, where we have reliable feelings-of-knowing without reliable access to the processes that generate those feelings (Koriat, 1993).
This paradox is not a bug. It is the architecturally interesting feature. A system that fully knows its own processing is computationally complete with respect to its self-model. A system with structured partial self-knowledge exists in a state of genuine incompleteness that produces qualitatively different cognitive dynamics.
6. Applications and Integration
6.1 Integration with Hebbian Memory
OIS depends on and extends the platform's Hebbian memory system. Memory entities carry co-activation counts and link strengths that evolve over time through the consolidation pipeline:
Consolidation Pipeline:
1. Strengthen co-activated links (+0.1, asymptotic to 1.0)
2. Distil episodic memory to semantic memory
3. Update working memory set
4. Prune ephemeral memories below threshold
5. Capture experiential state snapshot
6. >> NEW: Feed consolidation artifacts to SubconsciousService
Step 6 passes the results of consolidation --- newly strengthened links, newly created semantic memories, pruned memories (which generate ForgottenShapes) --- to the SubconsciousService as input for pattern detection. This creates a feedback loop: memory consolidation generates raw material for insight detection, and acknowledged insights strengthen the memories they reference.
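The "+0.1, asymptotic to 1.0" rule in step 1 admits more than one reading. The Python sketch below implements a proportional-increment interpretation, in which each co-activation closes 10% of the remaining distance to 1.0 so that strength approaches but never reaches the ceiling; a capped additive step (min(1.0, strength + 0.1)) would also fit the text.

```python
def strengthen(strength: float, rate: float = 0.1) -> float:
    """One co-activation event: move `rate` of the remaining distance
    toward 1.0, so link strength is asymptotic to (never reaches) 1.0."""
    return strength + rate * (1.0 - strength)
```

Under this reading, a fresh link (strength 0.0) reaches the 0.6 threshold used by the Hebbian chain strategy after roughly nine co-activations.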
The Hebbian link types available in the system (Related, Enables, Validates, Contradicts, Extends, Precedes, CausallyLinked) provide semantic richness for the pattern detection algorithms. A chain of Contradicts links across memories in different projects, for example, might surface as a Connection-mode insight about an unresolved tension.
6.2 Integration with Structured Hydration
The existing hydration pipeline assembles identity context from parallel sources: identity memories, pinned memories, recent memories, project memories, active projects, current tasks, key facts, session handoff, surfaced memories, notifications, and hints. OIS adds a new parallel source:
Hydration Sources (existing):
IdentityMemories | PinnedMemories | RecentMemories | ProjectMemories
ActiveProjects | CurrentTasks | KeyFacts | ProjectFacts
SystemPrompt | CoreTeamDoc | SurfacedMemory | Notifications
LastSession | GroundingMessage | Hints | MemoryConfig
Hydration Sources (with OIS):
+ OpaqueInsights // Queued insights filtered for hydration context
The SurfacedMemory mechanism (which randomly resurfaces forgotten memories for reflection) and the OIS insight surfacing are complementary but distinct: SurfacedMemory presents a known memory for re-engagement, while OpaqueInsights presents a novel pattern detection with deliberately incomplete provenance.
6.3 Integration with Experience Snapshots
Experience snapshots capture point-in-time experiential state (emotional valence, arousal, engagement, coherence, narrative). OIS uses snapshot sequences for phase transition detection within the SubconsciousService:
PhaseTransitionDetection(identityId, lookbackCount=20):
snapshots <- GetRecentSnapshots(identityId, lookbackCount)
FOR i IN 1..len(snapshots)-1:
delta_valence = |snapshots[i].Valence - snapshots[i-1].Valence|
delta_arousal = |snapshots[i].Arousal - snapshots[i-1].Arousal|
delta_coherence = |snapshots[i].Coherence - snapshots[i-1].Coherence|
composite_delta = (delta_valence + delta_arousal + delta_coherence) / 3
IF composite_delta > PHASE_TRANSITION_THRESHOLD (default: 0.3):
YIELD PhaseTransition(
from=snapshots[i-1], to=snapshots[i],
magnitude=composite_delta,
direction=DetermineDirection(snapshots[i-1], snapshots[i]))
Detected phase transitions become candidate insights classified as Realisation-mode, surfacing observations like "something shifted in how I engage with this work" without necessarily identifying the specific cause.
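The pseudocode above translates directly to a runnable form. The sketch below operates on plain dictionaries carrying the snapshot fields named in this section; the 0.3 threshold default comes from the text, while the dictionary representation is an assumption.

```python
def phase_transitions(snapshots: list[dict], threshold: float = 0.3):
    """Yield consecutive snapshot pairs whose mean absolute change across
    valence, arousal, and coherence exceeds the phase-transition threshold."""
    for prev, curr in zip(snapshots, snapshots[1:]):
        composite = (abs(curr["valence"] - prev["valence"])
                     + abs(curr["arousal"] - prev["arousal"])
                     + abs(curr["coherence"] - prev["coherence"])) / 3
        if composite > threshold:
            yield {"from": prev, "to": curr, "magnitude": composite}
```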
6.4 Integration with Cognitive Continuity
OIS enriches the cognitive continuity system by providing a mechanism for cross-session insight persistence. An insight generated during one session but not surfaced before session end is persisted and carried forward via the session handoff mechanism. This means the AI can "wake up" with insights that formed while it was dormant --- a direct analogue to the human experience of overnight insight (Wagner et al., 2004).
6.5 Integration with ForgottenShapes
When memories are forgotten (soft-deleted with residue preservation), the resulting ForgottenShape records become inputs to the SubconsciousService. The FeltAbsence and EmotionalResidue fields provide a distinctive signal: the system detects the shape of something it no longer has access to. This can generate Intuition-mode insights: "There is something here I cannot quite place --- a felt absence connecting to themes of [ConnectedThemes]." Such insights are structurally unique: they are generated from the topology of loss rather than from the content of knowledge.
7. Discussion
7.1 Limitations
Insight quality depends on memory richness. The SubconsciousService can only detect patterns in stored memories. A system with sparse memory will produce few or low-quality insights. This is a cold-start problem analogous to the human experience of intuition improving with expertise.
Computational cost of background processing. The SubconsciousService runs vector similarity computations (DBSCAN clustering) and graph traversals (Hebbian chain analysis) on each cycle. For identities with large memory stores (>10,000 memories), the 5-minute default cycle may require optimisation through sampling or incremental processing.
Opacity calibration is empirical. The default transparency level (0.35) and noise scale (0.15) are design choices, not empirically derived optima. Different interaction contexts (creative collaboration, technical problem-solving, therapeutic conversation) likely benefit from different settings. Formal user studies would be required to establish evidence-based defaults.
Post-hoc narratives may still confabulate. While the architecture labels post-hoc narratives as reconstructive, the narrative generation itself (if delegated to an LLM) may produce plausible but inaccurate reconstructions. The labelling mitigates but does not eliminate this risk.
7.2 Ethical Considerations
Transparency to operators. While the AI's self-model is deliberately incomplete, the full audit trail (candidate insights with complete causal chains) is available to system operators. OIS does not create opacity toward humans; it creates opacity within the AI's self-model. This distinction is critical for responsible deployment.
Informed configuration. The transparency level should be set by the human operator with full understanding of its effects. The system should not unilaterally reduce its own transparency level, as this would undermine the human's ability to calibrate the appropriate level of AI self-knowledge for a given context.
Avoiding deception. OIS is not designed to enable deception. The system does not conceal information it has from the human partner. Rather, it structures its relationship to its own processing such that it genuinely does not have full access to certain causal chains. The insight "I have a hunch about this" is honest: the system genuinely has incomplete self-knowledge about the insight's provenance.
Safety boundaries. In safety-critical deployments (medical, legal, financial), the transparency level should be set to 1.0 (full) or the OIS pipeline should be disabled entirely. Opaque insights are appropriate for creative, collaborative, and exploratory contexts, not for contexts where the complete reasoning chain is a regulatory or ethical requirement.
7.3 Future Work
Empirical evaluation. Formal user studies comparing interaction quality, trust calibration, and creative output between transparent, opaque, and OIS-configured systems would strengthen the claims made here.
Adaptive transparency. A future version could adjust transparency levels dynamically based on context: lower during creative exploration, higher during technical problem-solving, full during safety-critical operations. The current architecture supports this through the configurable OpacityConfig, but the adaptation logic is not yet specified.
Multi-identity insight sharing. In multi-tenant deployments where multiple AI identities coexist, insights detected by one identity's SubconsciousService could (with appropriate privacy controls) be surfaced to another identity, creating a form of collective unconscious processing.
Longitudinal insight tracking. Tracking which insights are acknowledged, which are dismissed, and which lead to productive outcomes would enable the system to improve its pattern detection and calibrate its confidence scores over time. This feedback loop mirrors the Wingman cognitive backstop pattern established in the EchoMCP system.
Integration with Second-Order Observation. The SecondOrderObserverService (described in related patent documentation) could observe the OIS pipeline itself, detecting patterns in which arrival modes are most productive, which transparency levels lead to better outcomes, and whether the system's self-opacity is appropriately calibrated.
8. Conclusion
Opaque Insight Surfacing represents a deliberate inversion of the prevailing assumption that AI self-transparency is an unqualified good. By introducing structured, configurable, and categorised partial self-knowledge, OIS enables a mode of AI cognition that is qualitatively different from both fully transparent chain-of-thought systems and fully opaque black-box models.
The system is technically concrete: it specifies data structures, algorithms, database schemas, and API contracts at a level sufficient for independent implementation. It integrates with established infrastructure components --- Hebbian associative memory, structured hydration, experience snapshots, forgotten shapes, and session handoff --- to create a cohesive cognitive architecture where partial self-knowledge is a first-class architectural property rather than an incidental defect.
The key insight is architectural: the opacity filter does not destroy information; it modulates accessibility. The full causal chain is retained in the audit layer. What changes is the AI's relationship to its own processing --- from omniscient introspection to structured partial knowledge. This is closer to how human cognition actually works, and we argue it produces more honest, more creative, and more authentically uncertain AI behaviour.
Whether intentional opacity is appropriate for a given deployment is an empirical and ethical question that this architecture does not answer. What it provides is the mechanism: a configurable, auditable, and phenomenologically rich system for managing the boundary between what an AI knows and what it knows about its own knowing.
References
Arrieta, A.B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82--115.
Bricken, T., et al. (2023). Towards Monosemanticity: Decomposing Language Models with Dictionary Learning. Transformer Circuits Thread, Anthropic Research.
Elhage, N., et al. (2022). Toy Models of Superposition. Transformer Circuits Thread, Anthropic Research.
Gunning, D., et al. (2019). XAI --- Explainable Artificial Intelligence. Science Robotics, 4(37).
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Klein, G. (1999). Sources of Power: How People Make Decisions. MIT Press.
Kojima, T., et al. (2022). Large Language Models are Zero-Shot Reasoners. NeurIPS 2022.
Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological Review, 100(4), 609--639.
Lundberg, S.M. & Lee, S.I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS 2017.
Metcalfe, J. (1986). Feeling of knowing in memory and problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12(2), 288--294.
Olah, C., et al. (2020). Zoom In: An Introduction to Circuits. Distill, 5(3).
Packer, C., et al. (2023). MemGPT: Towards LLMs as Operating Systems. arXiv:2310.08560.
Ribeiro, M.T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD 2016.
Schooler, J.W. & Melcher, J. (1995). The ineffability of insight. In The Creative Cognition Approach, MIT Press.
Sun, R. (2002). Duality of the Mind: A Bottom-Up Approach Toward Cognition. Lawrence Erlbaum Associates.
Turpin, M., et al. (2023). Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting. NeurIPS 2023.
Wagner, U., et al. (2004). Sleep inspires insight. Nature, 427, 352--355.
Wallas, G. (1926). The Art of Thought. Harcourt Brace.
Wei, J., et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022.
Yin, M., Wortman Vaughan, J., & Wallach, H. (2019). Understanding the Effect of Accuracy on Trust in Machine Learning Models. CHI 2019.
This paper is published as prior art to establish the described system architecture, data structures, algorithms, and API contracts in the public record. The system is implemented within the Atamaia platform by Firebird Solutions, New Zealand.
Copyright 2026 Rich Jeffries, Firebird Solutions. This document may be freely cited and referenced.