Private AI
in your building

Public AI was not built for regulated work

Sensitive documents are already being pasted into ChatGPT and Copilot. The traffic leaves the network, lands on infrastructure no one can inspect, and trains models no one in the building controls. There is no sanctioned alternative, and building one in-house is a multi-year project most teams cannot afford to start.

Your rack
Your data
Your law

Ronin cabinet appliance, industrial floor-standing enclosure on a wireframe ground plane.

For the secured room

A floor-standing appliance, sized for one to one hundred users. Standard and high-density compute on the same stack. Inference runs inside the cabinet. The cabinet runs inside your network. Nothing leaves either.

Talk to us about Cabinet
Ronin rack appliance, 19-inch rack-mount frame on a wireframe ground plane.

For the server room

A 19-inch rack-mount appliance. It slots into the rack you already operate. No AI traffic touches a public model. Same stack as Cabinet. Same service model. For organisations that already run their own infrastructure.

Talk to us about Rack

Architecture

How the architecture works

Sovereignty isn’t a clause in a contract. It’s a property of the architecture, and an auditable one. Ronin is built on three non-negotiables, and they’re the reason a defence supplier, a hospital, or a law firm can put it into production without rewriting their security posture.

The first is sovereignty by default. There is no content telemetry. No always-on vendor tunnels. Remote support is opt-in, session-bound, and auditable on your side. The appliance needs no outside connection to function. It runs against your data, in your network, on your schedule.

The second is open internals. Compute is the NVIDIA DGX Spark. The serving stack runs on PyTorch and vLLM, both open and inspectable. There is no vendor black box you can't see into, and nothing requires us to be present for the appliance to keep working. If you ever had to support it yourself, you could. That's the test.
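Because the stack is standard vLLM, the appliance is queried the way any vLLM deployment is: over its OpenAI-compatible HTTP API, from inside your own network. A minimal sketch, assuming a hypothetical internal address and model name (neither is a Ronin specific):

```python
import json

# Assumption for illustration: a vLLM server on your internal network.
# vLLM exposes an OpenAI-compatible chat completions endpoint.
VLLM_URL = "http://10.0.0.5:8000/v1/chat/completions"  # hypothetical address

def build_request(prompt: str, model: str = "local-model") -> dict:
    """Build an OpenAI-compatible chat-completion payload for a vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_request("Summarise this contract clause.")
print(json.dumps(payload, indent=2))

# The request never leaves the network; sending it would look like:
#   requests.post(VLLM_URL, json=payload, timeout=30)
```

Nothing in that exchange is proprietary: any OpenAI-compatible client, open or in-house, can drive it, which is what makes the "support it yourself" test realistic.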

The third is that service is the moat. The box itself isn't the product. The product is deployment expertise, vertical tuning, and operational reliability, priced into setup and recurring service. Refresh cycles and hardware contingencies are folded in, so the appliance gets better over time instead of silently ageing out.

Request a technical brief

Tell us where you are with AI and what is blocking deployment. Every response is read by the founders.

Where are you with AI today?