Launchers — bringing up a session

Launchers are the host-side glue between the Kenaz Harness and a freshly booted Sandbox VM. They handle hypervisor bring-up, vsock plumbing, image fetch, and clean teardown when the session ends.

🚧 This page is a stub. Fill in once the launcher binaries stabilize and we've made platform-specific decisions for macOS, Linux, and Windows.

Today's launcher (dev loop)

For local development on the kenaz-sandbox repo, launchers live as make targets:

```shell
# Build the image (Linux aarch64 host)
make image-aarch64

# Or, on macOS: download the latest CI-built image
make download-image

# Boot the VM
make run-vm-aarch64
```

These targets are defined in the kenaz-sandbox repo's Makefile. They're suitable for engineers working on the sandbox itself, but not for end users.

Production launchers (planned)

| Platform | Hypervisor | Status |
| --- | --- | --- |
| macOS (Apple Silicon) | Virtualization.framework | 🚧 planned |
| macOS (Intel) | Virtualization.framework | 🚧 planned |
| Linux (x86_64 + aarch64) | KVM via libvirt, or QEMU directly | 🚧 planned |
| Windows | Hyper-V or WHPX-backed QEMU | 🚧 planned |

📝 TODO: document the design once the macOS launcher is decided. Open question: ship a Go binary that calls Virtualization.framework via CGo, or ship a small Swift binary and have the Go harness exec it.

VM lifecycle

A session start looks roughly like this:

  1. Harness decides a new session is needed (user clicked "New session" in the host UI).
  2. Harness invokes the launcher with a session ID and any per-session config (which workbench build to load, what tools are allowed, etc.).
  3. Launcher ensures the SigilOS image is present locally (downloads it if not), allocates a fresh ephemeral disk image, and boots the VM with vsock attached.
  4. SigilOS boots; greetd logs the session user in automatically; sway starts; the workbench-app systemd service starts and dials the harness over vsock.
  5. The user does their work. All workbench ↔ harness traffic flows through that vsock socket.
  6. Harness signals end-of-session; the launcher gracefully shuts down the VM and unlinks the ephemeral disk.
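The lifecycle above can be sketched in Go (the harness's language). Everything named here is hypothetical — `Launcher`, `VM`, `SessionConfig`, and `memLauncher` are illustrative stand-ins, not the real launcher API, which is still undecided:

```go
package main

import "fmt"

// SessionConfig carries per-session options (step 2). Field names are
// hypothetical; the real config shape is not yet decided.
type SessionConfig struct {
	WorkbenchBuild string   // which workbench build to load
	AllowedTools   []string // tools the session may invoke
}

// VM and Launcher sketch the host-side contract; both are hypothetical.
type VM interface {
	Shutdown() error // step 6: graceful shutdown + ephemeral-disk cleanup
}

type Launcher interface {
	Boot(sessionID string, cfg SessionConfig) (VM, error) // steps 2-3
}

// RunSession drives one session end to end: boot the VM, run the
// session work (steps 4-5 happen inside the guest), then tear down.
func RunSession(l Launcher, id string, cfg SessionConfig, work func(VM) error) error {
	vm, err := l.Boot(id, cfg)
	if err != nil {
		return fmt.Errorf("boot session %s: %w", id, err)
	}
	defer vm.Shutdown() // step 6 runs even if the session work errors
	return work(vm)
}

// memLauncher is an in-memory stand-in used for illustration; a real
// launcher would talk to Virtualization.framework, KVM, or Hyper-V.
type memLauncher struct{}

type memVM struct{ ShutDown bool }

func (v *memVM) Shutdown() error { v.ShutDown = true; return nil }

func (memLauncher) Boot(id string, cfg SessionConfig) (VM, error) {
	return &memVM{}, nil
}
```

The `defer` ensures teardown mirrors the real contract: the launcher owns cleanup regardless of how the session ends.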

vsock protocol

The host-VM channel is documented in docs/vsock-protocol.md in the kenaz-sandbox repo. Summary: length-prefixed JSON messages, one TCP-like stream per session, with framed events for tool invocations and prompts.
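A minimal sketch of length-prefixed JSON framing in Go, assuming a 4-byte big-endian length header — the actual header size, byte order, and message schema are defined in docs/vsock-protocol.md, not here:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"encoding/json"
	"fmt"
	"io"
)

// WriteFrame marshals v to JSON and writes it with a 4-byte big-endian
// length prefix. The prefix layout is an assumption for illustration.
func WriteFrame(w io.Writer, v any) error {
	payload, err := json.Marshal(v)
	if err != nil {
		return err
	}
	var hdr [4]byte
	binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
	if _, err := w.Write(hdr[:]); err != nil {
		return err
	}
	_, err = w.Write(payload)
	return err
}

// ReadFrame reads one length-prefixed JSON frame and unmarshals it into v.
func ReadFrame(r io.Reader, v any) error {
	var hdr [4]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return err
	}
	n := binary.BigEndian.Uint32(hdr[:])
	if n > 1<<20 { // refuse absurd frames; the limit here is illustrative
		return fmt.Errorf("frame too large: %d bytes", n)
	}
	payload := make([]byte, n)
	if _, err := io.ReadFull(r, payload); err != nil {
		return err
	}
	return json.Unmarshal(payload, v)
}

// roundTrip pushes one message through an in-memory buffer, standing in
// for the vsock stream.
func roundTrip(msg map[string]string) (map[string]string, error) {
	var buf bytes.Buffer
	if err := WriteFrame(&buf, msg); err != nil {
		return nil, err
	}
	var out map[string]string
	err := ReadFrame(&buf, &out)
	return out, err
}
```

Because vsock presents a byte stream, `io.ReadFull` is what prevents short reads from splitting a frame.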

Image fetch + caching

📝 TODO: document the on-host image cache (where it lives, how it validates checksum, how it knows when to refresh). Today this is implicit in make download-image — needs to be productionized.