AWM · The intent faculty
What the agent is doing now.
Attention. Working memory. Intent. The live cursor over the world.
AWM is the Intent faculty of the Ginnung cognitive runtime. While Engram remembers everything and Parliament debates the next move, AWM holds what matters right now — the active goal, the variables in play, the signals worth attending to. It is the agent's foreground, structured and replayable, separate from the long-term trace.
What is AWM
Memory remembers.
Attention chooses.
Long-term memory is everything you have ever known. Working memory is everything you are holding now. The two are different organs. Conflating them is why agent context windows behave the way they do — overstuffed, contradictory, unsearchable in retrospect.
AWM is the working-memory faculty. It holds the active goal, the signals the agent is monitoring, the variables under manipulation, and the next-step plan. Old contents fade. New contents replace. What survives the working-memory window long enough graduates to Engram as a long-term trace.
The same structure runs the daily signal logger — a market-regime example of AWM in motion: a small, focused workspace that watches inputs, applies a regime model, decides whether the day warrants attention, and records the verdict. Same primitive. Different workspace.
What it does
Capabilities
Goal
One active intent at a time
The current goal is a typed value with success criteria and an expiry. Agents can’t silently drift onto a different task — re-planning is an explicit event.
Slots
Bounded working set
A fixed-capacity workspace for the variables under manipulation. When new content arrives, something has to leave. Forgetting is a feature.
Attention
Signals worth watching
Subscribe to streams — markets, sensors, queues, user inputs. AWM ranks, debounces, and surfaces only what crosses the attention threshold.
Plan
Next-step intent
The plan is part of working memory, not buried in a prompt. The agent declares what it intends to do next. Lattice and ACR see it before it happens.
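One way to picture "the plan is data, declared before it runs": a typed step serialized onto an event stream so that observers see the intent first. The `PlanStep` record and the `plan.declared` event name are assumptions for illustration, not the SonderEvent schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PlanStep:
    action: str   # what the agent intends to do next
    target: str   # what it will act on
    reason: str   # why, in terms of the active goal

step = PlanStep(
    action="fetch",
    target="spx_close",
    reason="regime check needs today's close",
)

# Declaring the step emits it before execution, so Lattice and ACR
# (or any observer) can see the intent before it happens.
event = json.dumps({"type": "plan.declared", "step": asdict(step)})
```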
Regimes
Context-aware default
Same primitive runs the AWM daily signal logger — a small workspace that tags each day’s regime and decides whether action is warranted. Bring your own model.
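A toy version of that workspace, with a deliberately naive stand-in for the regime model ("bring your own"): tag the day by realized volatility, and let the tag decide whether the day warrants attention. The two-regime threshold model is a placeholder, not the logger's actual model.

```python
def tag_regime(returns: list[float], calm_vol: float = 0.01) -> tuple[str, bool]:
    """Tag the day's regime and decide whether it warrants attention.

    Placeholder model: realized volatility against a fixed threshold.
    Swap in your own regime model; the workspace shape stays the same.
    """
    mean = sum(returns) / len(returns)
    vol = (sum((r - mean) ** 2 for r in returns) / len(returns)) ** 0.5
    regime = "calm" if vol < calm_vol else "stressed"
    return regime, regime == "stressed"  # only stressed days warrant action

# A quiet day of intraday returns: tagged calm, no action warranted.
regime, act = tag_regime([0.001, -0.002, 0.0005, 0.001])
```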
Graduate
What survives becomes memory
Contents that persist long enough — or matter enough — get promoted to Engram with their full context. Working memory is a filter, not a destination.
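The graduation rule ("persist long enough, or matter enough") reduces to a filter over the working set. The `Item` fields and thresholds below are invented for illustration; how AWM actually scores persistence and salience is up to the runtime.

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: str
    value: object
    ticks_held: int = 0     # how long the item survived the window
    salience: float = 0.0   # how much it mattered while it was there

def graduate(items: list[Item], min_ticks: int = 5,
             min_salience: float = 0.8) -> list[Item]:
    """Promote items that survived long enough, or mattered enough."""
    return [i for i in items if i.ticks_held >= min_ticks
            or i.salience >= min_salience]

survivors = graduate([
    Item("regime", "stressed", ticks_held=7),            # survived the window
    Item("noise", 0.1, ticks_held=1, salience=0.2),      # fades
    Item("alert", "halt", ticks_held=1, salience=0.95),  # matters enough
])
```

Everything the filter passes would be handed to Engram with its full context; everything else is gone, by design.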
Why a separate intent faculty
The infinite-context fallacy.
Frontier models keep advertising larger context windows as if the problem were solved. It is not. A million tokens of context is not working memory; it is a haystack. Models retrieve poorly from the middle, weight recency over relevance, and lose track of what they were doing five paragraphs back.
Biology figured this out a long time ago. Working memory is small, fast, and adversarial about what it keeps. Long-term memory is vast and slow. They are different systems with different rules. An agent architecture that treats them as the same field will inherit every failure mode of both.
AWM gives the agent a focused workspace where intent is structured, attention is bounded, and forgetting is deliberate. The context window stays a window. The trail stays in Engram.
Get started
Two ways in.
Self-host
github.com/heybeaux/awm
TypeScript + Python clients. Daily signal logger included as a reference workspace. MIT-licensed.
Compose
As a Ginnung faculty
Pairs with Engram (what graduates), Parliament (what to do next), and ACR (what to invoke). One SonderEvent stream.