# Kenaz Harness — overview
The Kenaz Harness is the host-side application that turns a sandbox VM into a usable Kenaz session. Where the Sandbox VM is the user's window, the harness is what makes it safe, auditable, and useful:
- Session lifecycle. Spawns and tears down VMs via the launcher, one per session, with per-session ephemeral disks.
- Inference brokering. All LLM calls from the workbench go through the harness; the harness picks the configured provider (Bedrock today; OpenAI, Anthropic, OpenRouter, Bento, or self-hosted for enterprise), shapes the request, and streams the response back over vsock.
- Policy + audit log. Every tool invocation, prompt, and provider response gets a structured event in the audit log before it leaves the host. The audit log is the load-bearing artifact for the security pitch.
- Local secret material. API keys for whichever inference provider the user has configured live on the host (not in the VM), encrypted at rest in the OS keychain.
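The brokering-plus-audit flow above can be sketched in a few lines. This is illustrative only: the function names, event fields, and `fake_provider` stand-in are assumptions, not the harness's real API (the real event schema is still TODO below).

```python
import time
import uuid


def audit_event(log, kind, payload):
    """Append a structured event to the audit log BEFORE anything
    leaves the host. Field names here are assumptions, not the real schema."""
    event = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "kind": kind,      # e.g. "prompt", "tool_call", "provider_response"
        "payload": payload,
    }
    log.append(event)
    return event


def broker_request(log, provider, prompt):
    """Audit the prompt, call the configured provider, audit the response."""
    audit_event(log, "prompt", {"provider": provider.__name__, "prompt": prompt})
    response = provider(prompt)  # host-side provider call (Bedrock etc.)
    audit_event(log, "provider_response", {"text": response})
    return response


def fake_provider(prompt):
    """Stand-in for a real provider backend."""
    return f"echo: {prompt}"


log = []
print(broker_request(log, fake_provider, "hello"))  # -> echo: hello
print(len(log))                                     # -> 2 (prompt + response)
```

The key property the real harness wants is the same as here: the audit event for the prompt is written before the provider call is made, so even a crashed or interrupted request leaves a trace.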
🚧 This page is a stub. Replace the placeholder sections below as the harness's design firms up.
## Source

- Repo: TBD — harness lives in its own repo, separate from `kenaz-sandbox`. Update this link once the repo is created.
## Architecture sketch

```
+----------------------------------+
|       Kenaz Harness (host)       |
|                                  |
|   +--------------------------+   |
|   | UI (system tray + main   |   |
|   | window for settings,     |   |
|   | audit log viewer)        |   |
|   +--------------------------+   |
|                |                 |
|   +--------------------------+   |
|   | Session manager          |   |
|   | - spawn / supervise VMs  |   |
|   | - per-session vsock mux  |   |
|   +--------------------------+   |
|                |                 |
|   +--------------------------+   |
|   | Inference broker         |   |
|   | - Bedrock provider       |   |
|   | - OpenAI provider        |   |
|   | - Anthropic provider     |   |
|   | - OpenRouter / Bento     |   |
|   | - BYO-endpoint shim      |   |
|   +--------------------------+   |
|                |                 |
|   +--------------------------+   |
|   | Policy engine            |   |
|   | - which providers        |   |
|   | - which tools            |   |
|   | - data-egress rules      |   |
|   +--------------------------+   |
|                |                 |
|   +--------------------------+   |
|   | Audit log                |   |
|   | - signed event chain     |   |
|   | - local SQLite + S3 sync |   |
|   |   for enterprise tier    |   |
|   +--------------------------+   |
|                                  |
+----------------------------------+
```
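To make the policy engine's three rule families concrete, here is a minimal sketch of what a policy check could look like. The `Policy` shape, rule names, and example hostname are assumptions for illustration, not the real rule language.

```python
from dataclasses import dataclass, field


@dataclass
class Policy:
    """Illustrative policy shape; the real rule language is TBD."""
    allowed_providers: set = field(default_factory=set)
    allowed_tools: set = field(default_factory=set)
    egress_allowlist: set = field(default_factory=set)  # hosts data may leave to

    def check_provider(self, provider: str) -> bool:
        return provider in self.allowed_providers

    def check_tool(self, tool: str) -> bool:
        return tool in self.allowed_tools

    def check_egress(self, host: str) -> bool:
        return host in self.egress_allowlist


# Hypothetical enterprise policy: Bedrock only, two tools, one egress host.
policy = Policy(
    allowed_providers={"bedrock"},
    allowed_tools={"read_file", "run_tests"},
    egress_allowlist={"bedrock-runtime.us-east-1.amazonaws.com"},
)

assert policy.check_tool("read_file")
assert not policy.check_provider("openai")
assert not policy.check_egress("example.com")
```

A deny-by-default allowlist like this is the natural fit for the harness's position as the single choke point between the VM and the network: anything not explicitly permitted never leaves the host.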
## Provider model

The headline pitch for enterprise customers is "bring your own inference provider." Today the development surface uses Amazon Bedrock via long-term API keys minted from the `kameas-ai-sandbox` AWS account. Production / enterprise installations swap that out:
| Provider | Status | Notes |
|---|---|---|
| AWS Bedrock | ✅ in development use | Default for the team's harness work; uses long-term API keys minted in the sandbox AWS account. |
| Anthropic | 🚧 planned | Direct API. |
| OpenAI | 🚧 planned | Direct API. |
| OpenRouter | 🚧 planned | Multi-model gateway. |
| Bento | 🚧 planned | Self-hosted inference for compliance-heavy customers. |
| BYO endpoint | 🚧 planned | Generic OpenAI-compatible endpoint shim for whatever the customer is hosting internally. |
📝 TODO: document the provider plugin interface once it lands.
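Until that interface lands, here is one hedged guess at its shape: a streaming-first protocol that each provider (Bedrock, Anthropic, the BYO shim, …) implements. Every name below (`InferenceProvider`, `ByoEndpointProvider`, the `stream` signature) is hypothetical.

```python
from typing import Iterator, Protocol


class InferenceProvider(Protocol):
    """Hypothetical plugin surface — the real interface hasn't landed yet.
    Providers take chat messages and yield response text chunks, which the
    harness forwards to the VM over vsock."""
    name: str

    def stream(self, messages: list[dict]) -> Iterator[str]: ...


class ByoEndpointProvider:
    """Sketch of the generic OpenAI-compatible shim: it would POST
    {"model": ..., "messages": ..., "stream": true} to
    <base_url>/v1/chat/completions and yield the streamed chunks."""
    name = "byo-endpoint"

    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.api_key = api_key

    def stream(self, messages: list[dict]) -> Iterator[str]:
        # Stubbed out: a real implementation would make the HTTP call here.
        yield "stub"


provider = ByoEndpointProvider("https://inference.internal", api_key="…")
assert "".join(provider.stream([{"role": "user", "content": "hi"}])) == "stub"
```

Keeping the interface to "messages in, text chunks out" is what makes the BYO-endpoint story cheap: any OpenAI-compatible server slots in behind the same shim.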
## Audit log
🚧 TODO. Cover: the event schema (fields and types), the signing chain (we hash-link events so tampering is detectable), local SQLite layout, and the optional sync-to-S3 path for enterprise deployments where compliance wants the log off-device.
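The hash-linking idea mentioned above can be shown in miniature: each event stores the hash of its predecessor, so editing any event invalidates every hash after it. This is a sketch only — the real event schema, signing keys, and SQLite layout are all still TODO.

```python
import hashlib
import json


GENESIS = "0" * 64  # placeholder hash for the first event's predecessor


def append_event(chain, payload):
    """Append an event whose hash covers both the payload and the
    previous event's hash, forming a tamper-evident chain."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    chain.append({"prev": prev, "payload": payload, "hash": digest})


def verify(chain):
    """Recompute every hash; any edit to an earlier event breaks the links."""
    prev = GENESIS
    for event in chain:
        body = json.dumps({"prev": prev, "payload": event["payload"]}, sort_keys=True)
        if event["prev"] != prev:
            return False
        if event["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = event["hash"]
    return True


chain = []
append_event(chain, {"kind": "prompt", "text": "hi"})
append_event(chain, {"kind": "tool_call", "tool": "read_file"})
assert verify(chain)

chain[0]["payload"]["text"] = "tampered"  # rewrite history...
assert not verify(chain)                  # ...and the chain no longer verifies
```

Note that hash-linking alone only makes tampering *detectable* given a trusted copy of the head hash; the "signed" part of the chain (keys, where they live) is what the TODO above still needs to cover.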
## Build + run

📝 TODO once the harness repo exists: `make build`, `make run`, wiring against a local-mock VM, etc.