Driver DSN (Data Source Name) Format

The CI system uses a DSN-style configuration format for specifying orchestration drivers and their parameters.

Format Options

1. Simple Driver Name

```bash
--driver=native
--driver=docker
--driver=k8s
```

Uses default configuration for the specified driver.

2. URL-Style Format

```bash
--driver=<driver>://<namespace>?<param1>=<value1>&<param2>=<value2>
```

Examples:

```bash
# K8s with custom namespace
--driver=k8s://production

# K8s with namespace and parameters (quoted so the shell
# does not interpret ? and &)
--driver="k8s://staging?region=us-east&timeout=300"
```

Components:

  • driver: The driver name (e.g., k8s, docker, native)
  • namespace: Orchestra namespace for resource labeling/grouping
  • param=value: Driver-specific configuration parameters

3. Colon-Separated Format

```bash
--driver=<driver>:<param1>=<value1>,<param2>=<value2>
```

Examples:

```bash
# K8s with namespace parameter
--driver=k8s:namespace=production

# Multiple parameters
--driver=k8s:namespace=staging,region=us-west,timeout=600
```

How It Works

  1. DSN Parsing: The system parses the driver string to extract:

    • Driver name
    • Orchestra namespace (from URL host or generated)
    • Configuration parameters
  2. Driver Initialization: The driver receives configuration directly via parameters:

    • Parameters are passed as a map to the driver initialization function
    • Each driver reads its specific configuration from this parameter map
    • Driver defaults are used for any unspecified parameters
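
The parsing rules above can be sketched in Go. `parseDSN` and `dsnConfig` are illustrative names for this sketch, not the system's actual API:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// dsnConfig is a hypothetical holder for the three parts a DSN encodes.
type dsnConfig struct {
	Driver    string
	Namespace string
	Params    map[string]string
}

// parseDSN accepts all three documented forms: "k8s",
// "k8s://ns?x=1&y=2", and "k8s:x=1,y=2".
// Note: in this sketch, colon-separated values that themselves contain
// commas (e.g. tags=prod,myapp) are not handled; use the URL-style form.
func parseDSN(dsn string) (dsnConfig, error) {
	cfg := dsnConfig{Params: map[string]string{}}
	driver, rest, found := strings.Cut(dsn, ":")
	switch {
	case !found: // 1. simple driver name
		cfg.Driver = dsn
	case strings.HasPrefix(rest, "//"): // 2. URL-style format
		u, err := url.Parse(dsn)
		if err != nil {
			return cfg, err
		}
		cfg.Driver = u.Scheme
		cfg.Namespace = u.Host // namespace comes from the URL host
		for k, vs := range u.Query() {
			cfg.Params[k] = vs[0]
		}
	default: // 3. colon-separated format
		cfg.Driver = driver
		for _, pair := range strings.Split(rest, ",") {
			if k, v, ok := strings.Cut(pair, "="); ok {
				cfg.Params[k] = v
			}
		}
	}
	return cfg, nil
}

func main() {
	for _, dsn := range []string{
		"native",
		"k8s://staging?region=us-east&timeout=300",
		"docker:host=ssh://user@host:22",
	} {
		cfg, _ := parseDSN(dsn)
		fmt.Printf("%-42s driver=%s ns=%q params=%v\n", dsn, cfg.Driver, cfg.Namespace, cfg.Params)
	}
}
```

The resulting parameter map is what gets handed to the driver initialization function in step 2.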

Driver-Specific Parameters

K8s Driver

| Parameter | Description | Default | Example |
|---|---|---|---|
| namespace | Kubernetes namespace for resources | default | k8s:namespace=production |
| kubeconfig | Path to kubeconfig file | ~/.kube/config or env | k8s:kubeconfig=/path/to/config |

Examples:

```bash
--driver=k8s://my-namespace
--driver=k8s:namespace=staging
--driver=k8s:namespace=prod,kubeconfig=/etc/k8s/config
--driver="k8s://production?kubeconfig=/path/to/config"
```

Note: If kubeconfig is not specified, the driver falls back to the KUBECONFIG environment variable, then to the default kubeconfig location.

Docker Driver

| Parameter | Description | Default | Example |
|---|---|---|---|
| host | Docker daemon host | DOCKER_HOST or local | docker:host=ssh://user@remote:22 |

Examples:

```bash
--driver=docker
--driver=docker:host=unix:///var/run/docker.sock
--driver=docker:host=ssh://user@host:22
```

Note: If host is not specified, the driver falls back to the DOCKER_HOST environment variable, then to the local Docker daemon.

Native Driver

Currently takes no specific parameters; tasks run directly as processes on the host.

```bash
--driver=native
```

DigitalOcean Driver

The DigitalOcean driver creates an on-demand droplet running Docker and delegates container operations to it. When the driver is closed, the droplet is automatically deleted.

| Parameter | Description | Default | Example |
|---|---|---|---|
| token | DigitalOcean API token | (required) | digitalocean:token=dop_v1_xxx |
| image | Droplet image slug | docker-20-04 | digitalocean:image=docker-24-04 |
| size | Droplet size slug or auto | s-1vcpu-1gb | digitalocean:size=s-2vcpu-4gb |
| region | Droplet region | nyc3 | digitalocean:region=sfo3 |
| disk_size | Disk size for Docker volumes (GB) | 25 | digitalocean:disk_size=50 |
| tags | Comma-separated custom tags | (none) | digitalocean:tags=prod,myapp |
| max_workers | Maximum concurrent droplets in the pool (≥ 1) | 1 | digitalocean:max_workers=3 |
| reuse_worker | Park droplets on close instead of deleting them | false | digitalocean:reuse_worker=true |
| poll_interval | How often to check for a free worker slot | 10s | digitalocean:poll_interval=5s |
| wait_timeout | Max time to wait for a slot (0 = no limit) | 10m | digitalocean:wait_timeout=30m |

Auto-sizing: When size=auto, the driver automatically selects an appropriate droplet size based on the pipeline's container_limits (CPU and memory):

  • Memory > 8GB or CPU > 4 cores → s-8vcpu-16gb
  • Memory > 4GB or CPU > 2 cores → s-4vcpu-8gb
  • Memory > 2GB or CPU > 1 core → s-2vcpu-4gb
  • Memory > 1GB → s-2vcpu-2gb
  • Memory > 512MB → s-1vcpu-2gb
  • Default → s-1vcpu-1gb
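
The ladder above is first-match-wins, checked from largest to smallest. A minimal sketch (`dropletSizeFor` is a hypothetical helper, not the driver's actual function):

```go
package main

import "fmt"

const gb = 1 << 30

// dropletSizeFor mirrors the auto-sizing ladder: the first rule whose
// CPU or memory threshold is exceeded wins; otherwise the default applies.
func dropletSizeFor(cpus int, memBytes int64) string {
	switch {
	case memBytes > 8*gb || cpus > 4:
		return "s-8vcpu-16gb"
	case memBytes > 4*gb || cpus > 2:
		return "s-4vcpu-8gb"
	case memBytes > 2*gb || cpus > 1:
		return "s-2vcpu-4gb"
	case memBytes > 1*gb:
		return "s-2vcpu-2gb"
	case memBytes > 512*(1<<20):
		return "s-1vcpu-2gb"
	default:
		return "s-1vcpu-1gb"
	}
}

func main() {
	fmt.Println(dropletSizeFor(2, 3*gb)) // 3GB > 2GB -> s-2vcpu-4gb
	fmt.Println(dropletSizeFor(0, 0))    // no limits -> s-1vcpu-1gb
}
```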

Examples:

```bash
# Basic usage with token
--driver=digitalocean:token=dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Using environment variable for token
DIGITALOCEAN_TOKEN=dop_v1_xxx --driver=digitalocean

# Auto-size based on container limits
--driver=digitalocean:size=auto

# Full configuration (quoted so the shell does not interpret ? and &)
--driver="digitalocean://ci-namespace?token=dop_v1_xxx&image=docker-20-04&size=s-2vcpu-4gb&region=sfo3&disk_size=50"

# Colon-separated format
--driver=digitalocean:token=dop_v1_xxx,size=auto,region=nyc1

# Allow up to 3 concurrent droplets
--driver=digitalocean:token=dop_v1_xxx,max_workers=3

# Reuse machines across pipeline runs (reduces rate-limit pressure)
--driver=digitalocean:token=dop_v1_xxx,reuse_worker=true,max_workers=2
```

Environment Variables:

| Variable | Description |
|---|---|
| DIGITALOCEAN_TOKEN | API token (alternative to DSN) |
| DIGITALOCEAN_IMAGE | Default image slug |
| DIGITALOCEAN_SIZE | Default size slug |
| DIGITALOCEAN_REGION | Default region |
| DIGITALOCEAN_DISK_SIZE | Default disk size (GB) |
| DIGITALOCEAN_TAGS | Default custom tags |
| DIGITALOCEAN_MAX_WORKERS | Default max concurrent droplets |
| DIGITALOCEAN_REUSE_WORKER | Default reuse-worker flag (true/false) |
| DIGITALOCEAN_POLL_INTERVAL | Default poll interval (e.g. 10s) |
| DIGITALOCEAN_WAIT_TIMEOUT | Default wait timeout (e.g. 10m, 0 = none) |

Resource Tagging: All droplets are automatically tagged with ci and namespace-<namespace>. Worker pool management adds additional tags:

| Tag | Meaning |
|---|---|
| ci-worker-<namespace> | Machine belongs to this namespace's pool (used for max_workers counting) |
| ci-busy-<namespace> | Machine is currently claimed by a running pipeline |
| ci-idle-<namespace> | Machine is parked and available for reuse (only present when reuse_worker=true) |

Custom tags can be added via the tags parameter alongside the automatic pool tags.

Worker Pool Behaviour:

  • Before creating a new droplet, the driver counts all ci-worker-<namespace> tagged droplets. If the count equals max_workers, it blocks (polling every poll_interval) until a slot becomes free.
  • When reuse_worker=true, Close() transitions the droplet busy → idle instead of deleting it. The next driver instance with the same namespace will claim the idle droplet (reconnecting SSH) rather than spinning up a new one. Idle machines still count toward max_workers.
  • If wait_timeout is exceeded before a slot opens, the call returns an error. Set wait_timeout=0 to block indefinitely.

Note: The driver generates an SSH key pair per namespace for droplet access. With reuse_worker=false (the default) the key is deleted along with the droplet. With reuse_worker=true the key persists across runs so parked droplets can be reconnected.

Hetzner Driver

The Hetzner driver creates an on-demand cloud server running Docker and delegates container operations to it. When the driver is closed, the server is automatically deleted.

| Parameter | Description | Default | Example |
|---|---|---|---|
| token | Hetzner Cloud API token | (required) | hetzner:token=xxx |
| image | Server image name | docker-ce | hetzner:image=ubuntu-22.04 |
| server_type | Server type slug or auto | cx23 | hetzner:server_type=cx33 |
| location | Server location | nbg1 | hetzner:location=fsn1 |
| disk_size | Disk size for Docker volumes (GB) | 10 | hetzner:disk_size=50 |
| ssh_timeout | Timeout for SSH availability | 5m | hetzner:ssh_timeout=10m |
| docker_timeout | Timeout for Docker availability | 5m | hetzner:docker_timeout=10m |
| labels | Comma-separated key=value labels | (none) | hetzner:labels=env=prod,app=x |
| max_workers | Maximum concurrent servers in the pool (≥ 1) | 1 | hetzner:max_workers=3 |
| reuse_worker | Park servers on close instead of deleting them | false | hetzner:reuse_worker=true |
| poll_interval | How often to check for a free worker slot | 10s | hetzner:poll_interval=5s |
| wait_timeout | Max time to wait for a slot (0 = no limit) | 10m | hetzner:wait_timeout=30m |

Auto-sizing: When server_type=auto, the driver automatically selects an appropriate server type based on the pipeline's container_limits (CPU and memory):

  • Memory > 16GB or CPU > 8 cores → cx53 (16 vCPU, 32GB)
  • Memory > 8GB or CPU > 4 cores → cx43 (8 vCPU, 16GB)
  • Memory > 4GB or CPU > 2 cores → cx33 (4 vCPU, 8GB)
  • Default → cx23 (2 vCPU, 4GB)

Examples:

```bash
# Basic usage with token
--driver=hetzner:token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Using environment variable for token
HETZNER_TOKEN=xxx --driver=hetzner

# Auto-size based on container limits
--driver=hetzner:server_type=auto

# Full configuration (quoted so the shell does not interpret ? and &)
--driver="hetzner://ci-namespace?token=xxx&image=docker-ce&server_type=cx33&location=nbg1&disk_size=50"

# Colon-separated format
--driver=hetzner:token=xxx,server_type=auto,location=fsn1

# Allow up to 3 concurrent servers
--driver=hetzner:token=xxx,max_workers=3

# Reuse machines across pipeline runs (reduces rate-limit pressure)
--driver=hetzner:token=xxx,reuse_worker=true,max_workers=2
```

Environment Variables:

| Variable | Description |
|---|---|
| HETZNER_TOKEN | API token (alternative to DSN) |
| HETZNER_IMAGE | Default image name |
| HETZNER_SERVER_TYPE | Default server type slug |
| HETZNER_LOCATION | Default location |
| HETZNER_DISK_SIZE | Default disk size (GB) |
| HETZNER_SSH_TIMEOUT | Default SSH timeout |
| HETZNER_DOCKER_TIMEOUT | Default Docker timeout |
| HETZNER_LABELS | Default custom labels |
| HETZNER_MAX_WORKERS | Default max concurrent servers |
| HETZNER_REUSE_WORKER | Default reuse-worker flag (true/false) |
| HETZNER_POLL_INTERVAL | Default poll interval (e.g. 10s) |
| HETZNER_WAIT_TIMEOUT | Default wait timeout (e.g. 10m, 0 = none) |

Resource Labeling: All servers are automatically labeled with ci=true and namespace=<namespace>. Worker pool management uses additional labels:

| Label | Value | Meaning |
|---|---|---|
| ci-worker | <namespace> | Server belongs to this namespace's pool (used for max_workers counting) |
| ci-worker-status | busy | Server is currently claimed by a running pipeline |
| ci-worker-status | idle | Server is parked and available for reuse (only when reuse_worker=true) |

Custom labels can be added via the labels parameter alongside the automatic pool labels.

Worker Pool Behaviour:

  • Before creating a new server, the driver counts all servers with label ci-worker=<namespace>. If the count equals max_workers, it blocks (polling every poll_interval) until a slot becomes free.
  • When reuse_worker=true, Close() updates the server's ci-worker-status label from busy to idle instead of deleting the server. The next driver instance with the same namespace will claim the idle server (reconnecting SSH) rather than creating a new one. Idle servers still count toward max_workers.
  • If wait_timeout is exceeded before a slot opens, the call returns an error. Set wait_timeout=0 to block indefinitely.

Note: The driver generates an SSH key pair per namespace for server access. With reuse_worker=false (the default) the key is deleted along with the server. With reuse_worker=true the key persists across runs so parked servers can be reconnected.

Available Locations:

| Location | City |
|---|---|
| fsn1 | Falkenstein, DE |
| nbg1 | Nuremberg, DE |
| hel1 | Helsinki, FI |
| ash | Ashburn, VA, US |
| hil | Hillsboro, OR, US |

Fly Driver

The Fly driver runs tasks as Fly Machines (lightweight VMs) on Fly.io. Each container maps to a Fly Machine with auto_destroy: true and restart policy no, making it ideal for ephemeral CI workloads. Volumes are Fly persistent volumes attached to machines.

| Parameter | Description | Default | Env Var | Example |
|---|---|---|---|---|
| token | Fly API token | (required) | FLY_API_TOKEN | fly:token=fo1_xxx |
| app | Existing Fly app name | (auto-created) | FLY_APP | fly:app=my-ci |
| region | Fly region for machines | (Fly default) | FLY_REGION | fly:region=ord |
| org | Fly organization slug | personal | FLY_ORG | fly:org=my-org |
| size | Machine size preset | shared-cpu-1x | — | fly:size=shared-cpu-2x |

App modes:

  • Existing app: Set app to use a pre-existing Fly app. Machines and volumes are created within it and cleaned up on Close(), but the app itself is preserved.
  • Ephemeral app (default): When app is not set, the driver creates a new Fly app named ci-<namespace> and deletes it (along with all resources) on Close().

Examples:

```bash
# Using environment variable for token, ephemeral app
FLY_API_TOKEN=fo1_xxx --driver=fly

# Existing app with region
--driver=fly:app=my-ci-app,region=ord

# URL-style with namespace (quoted so the shell does not interpret ? and &)
--driver="fly://my-namespace?token=fo1_xxx&app=my-ci&region=lax"

# Full configuration
--driver=fly:token=fo1_xxx,app=my-ci,region=ord,org=my-org,size=shared-cpu-2x
```

Environment Variables:

| Variable | Description |
|---|---|
| FLY_API_TOKEN | API token (alternative to DSN) |
| FLY_APP | Default Fly app name |
| FLY_REGION | Default region |
| FLY_ORG | Default organization slug |

Machine sizing: The size parameter maps to Fly Machine presets (see Fly Machine sizing). Task-specific container_limits (CPU count, memory in bytes) override the preset if provided.

Logs: Machine event logs (start, exit, exit code, OOM status) are available. For full stdout/stderr streaming, use flyctl logs or Fly's log shipping integrations.

Note: All machines are launched with auto_destroy: true so they self-cleanup after stopping, and the Close() method also explicitly destroys any tracked machines and volumes.

QEMU Driver

The QEMU driver runs tasks inside a local QEMU virtual machine. Commands are executed inside the guest via the QEMU Guest Agent (QGA), and volumes are shared between host and guest via 9p virtfs. The VM is lazily booted on first use and automatically destroyed when the driver is closed.

| Parameter | Description | Default | Example |
|---|---|---|---|
| memory | VM memory in MB | 2048 | qemu:memory=4096 |
| cpus | Number of vCPUs | 2 | qemu:cpus=4 |
| accel | Acceleration backend | hvf (macOS), kvm (Linux), tcg | qemu:accel=tcg |
| qemu_binary | Path to QEMU binary | qemu-system-x86_64 or -aarch64 | qemu:qemu_binary=/usr/bin/qemu-system-x86_64 |
| cache_dir | Directory for cached cloud images | ~/.cache/ci/qemu | qemu:cache_dir=/tmp/qemu-cache |
| image | Path to a custom qcow2 base image | auto-downloads Ubuntu cloud image | qemu:image=/path/to/image.qcow2 |

Acceleration:

  • macOS: Defaults to hvf (Hypervisor.framework) for near-native performance
  • Linux: Defaults to kvm if /dev/kvm is available, otherwise tcg (software emulation)
  • Other: Defaults to tcg

Architecture: The driver auto-detects the host architecture and selects the appropriate QEMU binary (qemu-system-x86_64 or qemu-system-aarch64) and machine type.

Examples:

```bash
# Basic usage with defaults
--driver=qemu

# Custom memory and CPU
--driver=qemu:memory=4096,cpus=4

# URL-style with namespace (quoted so the shell does not interpret ? and &)
--driver="qemu://my-namespace?memory=4096&cpus=4"

# Custom base image
--driver=qemu:image=/path/to/custom.qcow2

# Software emulation (no hardware acceleration)
--driver=qemu:accel=tcg
```

Environment Variables:

| Variable | Description |
|---|---|
| QEMU_MEMORY | Default VM memory in MB |
| QEMU_CPUS | Default number of vCPUs |
| QEMU_ACCEL | Default acceleration backend |
| QEMU_BINARY | Default QEMU binary path |
| QEMU_CACHE_DIR | Default image cache directory |
| QEMU_IMAGE | Default base image path |

How it works:

  1. Downloads an Ubuntu cloud image (cached locally) or uses a provided image
  2. Creates a copy-on-write overlay so the base image is never modified
  3. Generates a cloud-init seed ISO to configure the guest (SSH keys, QGA install)
  4. Boots the VM with QMP monitor, QGA channel (TCP), and 9p volume sharing
  5. Waits for cloud-init to complete and QGA to become responsive
  6. Executes task commands via QGA guest-exec / guest-exec-status
  7. Volumes are shared via 9p virtfs, mounted at /mnt/volumes/<name> in the guest

Prerequisites:

  • QEMU installed (brew install qemu on macOS, apt install qemu-system on Linux)
  • qemu-img and genisoimage/mkisofs available on PATH
  • For hardware acceleration: KVM support on Linux, or Hypervisor.framework on macOS

Note: The VM and all temporary files (overlay disk, seed ISO, volumes) are cleaned up when the driver is closed.

Apple Virtualization (VZ) Driver

The VZ driver runs tasks inside a local virtual machine using Apple's Virtualization.framework (macOS only). Commands are executed inside the guest via a custom vsock-based agent, and volumes are shared between host and guest via virtiofs. The VM is lazily booted on first use and automatically destroyed when the driver is closed.

| Parameter | Description | Default | Example |
|---|---|---|---|
| memory | VM memory in MB | 2048 | vz:memory=4096 |
| cpus | Number of vCPUs | 2 | vz:cpus=4 |
| cache_dir | Directory for cached cloud images | ~/.cache/ci/vz | vz:cache_dir=/tmp/vz-cache |
| image | Path to a custom raw base image | auto-downloads Ubuntu | vz:image=/path/to/image.raw |

Examples:

```bash
# Basic usage with defaults
--driver=vz

# Custom memory and CPU
--driver=vz:memory=4096,cpus=4

# URL-style with namespace (quoted so the shell does not interpret ? and &)
--driver="vz://my-namespace?memory=4096&cpus=4"

# Custom base image
--driver=vz:image=/path/to/custom.raw
```

Environment Variables:

| Variable | Description |
|---|---|
| VZ_MEMORY | Default VM memory in MB |
| VZ_CPUS | Default number of vCPUs |
| VZ_CACHE_DIR | Default image cache directory |
| VZ_IMAGE | Default base image path |

How it works:

  1. Downloads an Ubuntu cloud image (cached locally) or uses a provided image
  2. Converts the image to raw format (Apple VZ requires raw disk images)
  3. Creates a writable copy so the base image is never modified
  4. Generates a cloud-init seed ISO to configure the guest (vsock agent, virtiofs)
  5. Boots the VM with EFI boot loader, virtiofs sharing, and vsock communication
  6. Waits for cloud-init to complete and the vsock agent to become responsive
  7. Executes task commands via the vsock agent protocol
  8. Volumes are shared via virtiofs, mounted at /mnt/volumes/<name> in the guest

Prerequisites:

  • macOS 13 (Ventura) or later
  • Binary must be codesigned with com.apple.security.virtualization entitlement
  • qemu-img available on PATH (for qcow2 → raw image conversion)
  • Go toolchain available for cross-compiling the guest agent

Entitlements Setup:

Apple's Virtualization.framework requires executables to be codesigned with the com.apple.security.virtualization entitlement before VMs can be created. Without this, the driver will fail with error: "The process doesn't have the 'com.apple.security.virtualization' entitlement."

To enable VZ driver support:

  1. Create an entitlements.plist file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.virtualization</key>
    <true/>
</dict>
</plist>
```
  2. Codesign your binary:

```bash
# After building
go build -o pocketci .
codesign -s - -f --entitlements entitlements.plist ./pocketci

# For test binaries
go test -c ./orchestra/vz
codesign -s - -f --entitlements entitlements.plist ./vz.test
./vz.test -test.v
```

Note: The -s - flag uses ad-hoc signing (no certificate required). For distribution, use a valid Developer ID certificate: -s "Developer ID Application: Your Name (TEAM_ID)".

Architecture: The driver always uses hardware-accelerated virtualization via Apple's Hypervisor.framework. On Apple Silicon Macs, the guest runs arm64 Linux.

Note: The task.Image field (e.g., "busybox") is ignored — commands run directly in the guest OS, similar to the QEMU and native drivers.

Note: The VM and all temporary files (disk copy, seed ISO, volumes) are cleaned up when the driver is closed.

Examples

Development

```bash
# Local development with native driver
go run main.go runner --driver=native examples/both/hello-world.ts

# Local Kubernetes (minikube) with default namespace
go run main.go runner --driver=k8s examples/both/hello-world.ts
```

Staging

```bash
# K8s staging environment
go run main.go runner --driver=k8s://staging examples/both/hello-world.ts

# With additional parameters (quoted so the shell does not interpret ?)
go run main.go runner --driver="k8s://staging?region=us-east" examples/both/hello-world.ts
```

Production

```bash
# K8s production with explicit namespace
go run main.go runner --driver=k8s://production examples/both/hello-world.ts

# With additional parameters (quoted so the shell does not interpret ?)
go run main.go runner --driver="k8s://production?region=us-west" examples/both/hello-world.ts
```

Priority Order

Configuration values are resolved in this order (highest to lowest priority):

  1. DSN parameters (e.g., --driver=k8s:namespace=prod)
  2. Environment variables, where a driver supports them (e.g., DIGITALOCEAN_REGION)
  3. Driver defaults
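
The override rule amounts to a map merge: an explicit DSN parameter wins over whatever fallback value the driver would otherwise use. `resolve` is a hypothetical helper shown for illustration:

```go
package main

import "fmt"

// resolve merges DSN parameters over fallback values (driver defaults,
// however they are seeded). A key present in dsnParams always wins.
func resolve(dsnParams, fallbacks map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range fallbacks {
		out[k] = v
	}
	for k, v := range dsnParams { // DSN parameters override fallbacks
		out[k] = v
	}
	return out
}

func main() {
	fallbacks := map[string]string{"namespace": "default", "region": "nyc3"}
	dsn := map[string]string{"namespace": "prod"} // from --driver=k8s:namespace=prod
	fmt.Println(resolve(dsn, fallbacks))          // namespace overridden, region kept
}
```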

Validation

The system validates:

  • Driver name exists
  • DSN syntax is correct
  • Required parameters are provided (driver-specific)

Invalid configurations will fail early with clear error messages.

Future Enhancements

Potential additions:

  • Storage class selection for K8s PVCs
  • Resource quotas and limits
  • Authentication credentials
  • TLS/SSL configuration
  • Custom labels and annotations