Most AI infrastructure asks you to believe in ghosts. It wraps an LLM call in a persona, gives it a name and a memory, calls it an agent, and charges you for the privilege of sending the same context tokens to every ghost in the room.
Strip the metaphor and look at the computation: it's a stateless function with a large, loosely-typed output space, high latency, and non-trivial cost per invocation. The interesting engineering problems — scheduling, state management, failure recovery, context budgeting — are distributed systems problems. They have been solved before. The ghost adds nothing.
An agent is a process with a system prompt, tool access, and shared state. There is no further entity. Ockham gives you the process and the state so you can exorcise your agents.
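That claim fits in a handful of lines of plain code. A minimal sketch, in Python, of an "agent" reduced to data plus a stateless call — the names (`AgentProcess`, `run_step`) are illustrative, not Ockham's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentProcess:
    """An 'agent' is just this: a prompt, tool access, and explicit state."""
    system_prompt: str
    tools: dict[str, Callable[[str], str]]               # tool access by name
    state: dict[str, str] = field(default_factory=dict)  # shared, external state

def run_step(proc: AgentProcess, llm: Callable[[str], str], user_input: str) -> str:
    """One invocation: context in, text out. No hidden entity, no memory
    beyond the explicit state dict assembled into the context."""
    context = f"{proc.system_prompt}\n{proc.state}\n{user_input}"
    return llm(context)
```

Everything the persona metaphor attributes to the "agent" lives in `state` and gets serialized into the context on each call; the LLM itself stays a pure function.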
Seven components. Each replaces a ghost with a machine.
We don't need the industry to agree that agents aren't real. We just need infrastructure that doesn't require them to be.
MCP endpoints are tool nodes. A2A services are external nodes with capability-based routing. Agent Cards are service descriptors. Tasks are RPC calls with a lifecycle.
Ockham consumes the protocols. It doesn't buy the ontology.
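The de-mystified mapping is ordinary scheduler vocabulary. A hypothetical sketch, with illustrative names rather than Ockham's real types:

```python
from dataclasses import dataclass
from enum import Enum, auto

class NodeKind(Enum):
    TOOL = auto()      # an MCP endpoint
    EXTERNAL = auto()  # an A2A service, routed by declared capability

class TaskState(Enum):
    """A Task is an RPC call with a lifecycle, nothing more."""
    PENDING = auto()
    RUNNING = auto()
    SUCCEEDED = auto()
    FAILED = auto()

@dataclass(frozen=True)
class ServiceDescriptor:
    """An 'Agent Card', stripped of the persona: name, kind, capabilities."""
    name: str
    kind: NodeKind
    capabilities: frozenset[str]

def route(services: list[ServiceDescriptor], capability: str) -> ServiceDescriptor:
    """Capability-based routing: pick the first service declaring the capability."""
    for svc in services:
        if capability in svc.capabilities:
            return svc
    raise LookupError(capability)
```

Routing on declared capabilities instead of identities is what lets the scheduler treat an "agent" and a plain RPC service as the same kind of node.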
These are not new problems. We inherit from the infrastructure that already solves them and do not pretend otherwise.
The novel surface is narrow and honest: declarative context projection for bounded LLM windows, schema validation for loosely-typed outputs, and cost-aware scheduling for pay-per-token computation. Everything else is borrowed. That's the point.
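Declarative context projection, the first item on that list, can be sketched in a few lines: each piece of state declares a priority and an estimated token cost, and the projector packs the bounded window in priority order. This is an illustrative sketch under assumed names (`ContextItem`, `project`), not Ockham's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    key: str
    text: str
    priority: int  # lower = more important

def estimate_tokens(text: str) -> int:
    # Crude whitespace estimate; a real system would use the model's tokenizer.
    return len(text.split())

def project(items: list[ContextItem], budget: int) -> str:
    """Fill the bounded window in declared priority order,
    skipping items whose estimated cost no longer fits."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i.priority):
        cost = estimate_tokens(item.text)
        if used + cost <= budget:
            chosen.append(item.text)
            used += cost
    return "\n".join(chosen)
```

The point of making projection declarative is that the window's contents become a pure function of state and budget: no hidden "memory" decides what the model sees.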
Because the orchestration layer shouldn't be the slow part. Because strict type systems prevent the category errors we're trying to eliminate. Because infrastructure that outlasts hype cycles should be built in a language designed to outlast hype cycles.