Colored LIDAR-style point cloud of an urban scene: buildings, a platform, and surrounding ground rendered as a dense field of magenta, yellow, green, and cyan points. A spatial metaphor for Atrium's observable agent workspace.

Atrium

Atrium renders multi-agent AI work as a virtual office: agents move, talk, and coordinate in a space you can watch the same way you'd watch a team in a room.

category: multi-agent orchestration
status: open source

Atrium is an open-source layer for orchestrating multi-agent AI systems where the agents are physically observable. The metaphor is a virtual office: agents are avatars in an isometric pixel-art space, with personas, roles, desks, and chat-based coordination.

Most multi-agent systems are a black box of API calls and JSON traces. Atrium renders the same work as a place. You can watch agents migrate between rooms, gather around a coffee machine, take a seat at a desk, and talk through a task in front of you.

Atrium is also the execution and channel layer for the broader OUTURE stack. Aether (the intelligence layer) plugs in through `aether_run`, `aether_spawn_agent`, and an MCP bridge. Atrium handles the I/O: files, terminals, and devices; Aether handles the reasoning.
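The split can be sketched in a few lines. `aether_run` and `aether_spawn_agent` are named in the text above, but their signatures, and the `Agent`/`Action` shapes here, are illustrative assumptions, not Atrium's actual API:

```python
# Sketch of the Atrium/Aether division of labor: Aether decides what to
# do, Atrium owns the side effects. All shapes here are assumptions.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str      # e.g. "say", "write_file", "run_terminal"
    payload: dict


@dataclass
class Agent:
    name: str
    persona: str


def aether_spawn_agent(name: str, persona: str) -> Agent:
    """Aether side: create a reasoning agent (stubbed)."""
    return Agent(name=name, persona=persona)


def aether_run(agent: Agent, task: str) -> Action:
    """Aether side: decide *what* to do. Stubbed as a canned decision."""
    return Action(kind="say", payload={"text": f"{agent.name} taking task: {task}"})


def atrium_execute(action: Action) -> str:
    """Atrium side: perform the I/O (here, just a chat bubble in the room)."""
    if action.kind == "say":
        return f"[chat bubble] {action.payload['text']}"
    raise NotImplementedError(action.kind)


agent = aether_spawn_agent("Mira", persona="IT support")
result = atrium_execute(aether_run(agent, "restart the build server"))
print(result)  # [chat bubble] Mira taking task: restart the build server
```

The point of the boundary: Aether never touches a file or terminal directly, and Atrium never reasons; swapping either side out leaves the other intact.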

Each agent has a persona and a role inside the office (director, barista, IT support, security) and behaves accordingly. Coordination happens through chat bubbles in the room, not through hidden message buses.

An admin can drop into the office, ping the agents ("can you all come to the cafe area :)"), and watch them migrate. The conversation, the spatial movement, and the work itself are all rendered together.

The same office is rendered twice. For humans, a 3D voxel view of every furnishing, wall, rug, and avatar with explicit XYZ coordinates. For agents, the identical state as an ASCII grid and a furniture table: the data structure they actually parse, with IDs, names, coordinates, dimensions, and types.

The dual representation is the point. Agents reason against the same world the humans observe; nothing is hidden behind a translation layer. A debug session looks like a debug session in either view.
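The single-source, two-surface contract described above can be sketched as one world state with two renderers. The schema and layout here are illustrative, not Atrium's actual data model:

```python
# One spatial state, two surfaces: an ASCII grid and a furniture table,
# both derived from the same list. Names and fields are assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Furnishing:
    id: str
    name: str
    x: int
    y: int
    w: int    # width in cells
    d: int    # depth in cells
    kind: str


WORLD = [
    Furnishing("d1", "desk", 1, 1, 2, 1, "desk"),
    Furnishing("c1", "coffee machine", 4, 0, 1, 1, "appliance"),
]


def render_ascii(world, width=6, height=3):
    """Agent surface: the grid agents parse, one character per cell."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for f in world:
        for dy in range(f.d):
            for dx in range(f.w):
                grid[f.y + dy][f.x + dx] = f.id[0].upper()
    return "\n".join("".join(row) for row in grid)


def render_table(world):
    """Agent surface: furniture table with IDs, coordinates, dimensions, types."""
    rows = [f"{f.id}\t{f.name}\t({f.x},{f.y})\t{f.w}x{f.d}\t{f.kind}" for f in world]
    return "\n".join(rows)


# Neither surface is a translation of the other; both derive from WORLD.
print(render_ascii(WORLD))
print(render_table(WORLD))
```

A 3D voxel renderer for humans would be a third function over the same `WORLD` list; because every surface is derived rather than synced, the views cannot drift apart.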

Why make AI work physically observable. The standard observability stack for multi-agent systems is logs, traces, and a graph view: useful for debugging, useless for understanding. Rendering the same work as a place gives an operator a default mental model ("is the team in the right room?") that scales from one agent to twenty without changing shape.

Why dual representation. A grid and a 3D view that disagree make a system that lies to one of its consumers. Treating the spatial state as a single source rendered to two surfaces (one for humans, one for agents) keeps the trust contract intact and makes new tooling cheap.

Why open source. Multi-agent observability is at the stage in the cycle where shared infrastructure is more valuable than locked-in advantage. The interesting work isn't the renderer. It's what teams build on top of it.

Atrium is exercised against Shrike (the council-synthesized adversary on the same lab page) in internal red-team passes before each release. Earlier versions were successfully compromised across multiple categories; findings drove hardening across the agent-persona logic, document and image parsing, and authority-recognition layers. The posture is continuous: nothing ships before Shrike has had a pass at it.