# Volume Caching
The CI system supports transparent volume caching backed by S3-compatible storage. Caches persist data across pipeline runs, making subsequent runs faster by restoring previously computed artifacts, dependencies, or build outputs.
## How It Works
- Volume Creation: When a pipeline creates a volume (directly or via `caches` in YAML), the system checks if a cached version exists in S3.
- Cache Restore: If found, the cached data is downloaded, decompressed, and extracted into the volume before the task runs.
- Cache Persist: When the pipeline completes, all volumes are persisted back to S3 with compression.
This is transparent to the pipeline — volumes behave identically whether caching is enabled or not.
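The restore/persist lifecycle above can be sketched roughly as follows, using an in-memory map as a stand-in for S3. All names here are illustrative, not actual pocketci internals:

```typescript
// In-memory stand-in for the S3 cache; all names are illustrative.
type Store = Map<string, Buffer>;

// On volume creation: restore from cache if a matching key exists.
function createVolume(store: Store, name: string): Buffer {
  const cached = store.get(`${name}.tar.zst`);
  if (cached !== undefined) {
    return Buffer.from(cached); // cache hit: restore prior contents
  }
  return Buffer.alloc(0); // cache miss: start empty
}

// On pipeline completion: persist the volume back under its key.
function persistVolume(store: Store, name: string, data: Buffer): void {
  store.set(`${name}.tar.zst`, data);
}
```

The pipeline code never sees this distinction: it always receives a usable volume, pre-populated when a cache entry existed.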
## Configuration
Caching is configured via the driver DSN using query parameters:
```
--driver=docker://?cache=s3://bucket-name&cache_compression=zstd&cache_prefix=myproject
```

### DSN Parameters
| Parameter | Description | Default | Example |
|---|---|---|---|
| `cache` | S3 URL for cache storage (required) | — | `s3://my-cache-bucket` |
| `cache_compression` | Compression algorithm | `zstd` | `zstd`, `gzip`, `none` |
| `cache_prefix` | Key prefix for all cache entries | `""` | `myproject` → keys become `myproject/volume.tar` |
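As a sketch of how these parameters could be extracted from the driver DSN with the standard WHATWG `URL` API (the function name and return shape are assumptions, not pocketci's actual code):

```typescript
// Hypothetical DSN parsing sketch; not pocketci's actual implementation.
interface CacheConfig {
  url: string;         // the `cache` S3 URL (required)
  compression: string; // defaults to "zstd"
  prefix: string;      // defaults to ""
}

function parseCacheConfig(dsn: string): CacheConfig | null {
  const params = new URL(dsn).searchParams;
  const url = params.get("cache");
  if (url === null) return null; // no `cache` parameter: caching disabled
  return {
    url,
    compression: params.get("cache_compression") ?? "zstd",
    prefix: params.get("cache_prefix") ?? "",
  };
}
```

Note that `URLSearchParams` splits only on `&`, so a `cache` value containing its own `?region=...` query (as in the AWS example below) survives intact.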
### S3 URL Format
```
s3://bucket-name/optional-prefix?region=us-east-1&endpoint=http://localhost:9000&ttl=24h
```

| Parameter | Description | Default | Example |
|---|---|---|---|
| `region` | AWS region | AWS SDK default | `us-east-1` |
| `endpoint` | Custom S3 endpoint (for MinIO, etc.) | AWS S3 | `http://localhost:9000` |
| `ttl` | Cache expiration duration | No expiration | `24h`, `7d`, `168h` |
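A sketch of how this URL form could be decomposed; the names and the hour-only TTL handling are illustrative assumptions:

```typescript
// Illustrative sketch of parsing the cache S3 URL; not pocketci's code.
interface S3Target {
  bucket: string;
  prefix: string;
  region: string | null;
  endpoint: string | null;
  ttlHours: number | null;
}

function parseS3Url(raw: string): S3Target {
  const u = new URL(raw);
  const ttl = u.searchParams.get("ttl");
  return {
    bucket: u.host,                        // s3://<bucket>/...
    prefix: u.pathname.replace(/^\//, ""), // optional key prefix
    region: u.searchParams.get("region"),
    endpoint: u.searchParams.get("endpoint"),
    // Only plain hour durations like "24h" are handled in this sketch.
    ttlHours: ttl !== null && /^\d+h$/.test(ttl) ? parseInt(ttl, 10) : null,
  };
}
```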
## Full Examples
### AWS S3
```sh
pocketci run pipeline.yml \
  --driver='docker://?cache=s3://my-pocketci-cache?region=us-west-2&cache_prefix=project-a'
```

### MinIO (Local S3-Compatible)
```sh
# Start MinIO locally
docker run -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

# Create bucket
aws --endpoint-url http://localhost:9000 s3 mb s3://cache-bucket

# Run with caching
pocketci run pipeline.yml \
  --driver='docker://?cache=s3://cache-bucket?endpoint=http://localhost:9000&region=us-east-1'
```

### With Compression Options
```sh
# Use gzip instead of zstd
pocketci run pipeline.yml \
  --driver='docker://?cache=s3://bucket&cache_compression=gzip'

# Disable compression (faster for already-compressed data)
pocketci run pipeline.yml \
  --driver='docker://?cache=s3://bucket&cache_compression=none'
```

## YAML Pipeline Usage
Use the `caches` field in task configs to define cache directories:
```yaml
jobs:
  - name: build
    plan:
      - task: install-deps
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: node:20
          caches:
            - path: node_modules
            - path: .npm
          run:
            path: sh
            args:
              - -c
              - |
                npm ci
                npm run build
```

### Cache Behavior
- Path: Relative to the task's working directory
- Name: Derived from the path (e.g., `node_modules` → `cache-node_modules`)
- Sharing: Caches with the same name share data across tasks in the same pipeline run
- Persistence: Caches are uploaded to S3 when the pipeline completes
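The path-to-name derivation described above might look like this; only the `cache-` prefix and path-based naming come from this doc, while the separator handling for nested paths is a guess:

```typescript
// Derives a volume name from a cache path. The `cache-` prefix comes
// from the docs; flattening "/" to "-" for nested paths is an assumption.
function cacheNameForPath(path: string): string {
  return "cache-" + path.replace(/\//g, "-");
}
```

Because the name is derived from the path, two tasks that declare the same cache path automatically share the same volume within a pipeline run.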
### Multiple Caches
```yaml
caches:
  - path: .cache/go-build # Go build cache
  - path: .cache/golangci # Linter cache
  - path: vendor # Vendored dependencies
```

## TypeScript/JavaScript Usage
For direct JS/TS pipelines, create named volumes:
```typescript
const pipeline = async () => {
  // Create a cached volume
  const cache = await runtime.createVolume({ name: "build-cache" });

  // Use the volume in a task
  await runtime.run({
    name: "build",
    image: "node:20",
    command: { path: "npm", args: ["run", "build"] },
    mounts: [{ name: cache.name, path: "node_modules" }],
  });
};

export { pipeline };
```

## Supported Drivers
Caching works with drivers that implement `VolumeDataAccessor`:
| Driver | Caching Support | Notes |
|---|---|---|
| `docker` | ✅ Yes | Uses `docker cp` for volume data transfer |
| `native` | ✅ Yes | Uses `tar` directly on the filesystem |
| `k8s` | ✅ Yes | Uses a helper pod for volume data transfer |
## Cache Key Structure
Cache keys are structured as:
```
{cache_prefix}/{volume_name}.tar.{compression}
```

Examples:

- `myproject/cache-node_modules.tar.zst`
- `build-cache.tar.zst` (no prefix)
- `pocketci/main/vendor.tar.gzip`
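Following the key template and examples above, a key builder might look like this; the `zstd` → `zst` extension mapping is inferred from the examples, and dropping the suffix for `none` is a guess:

```typescript
// Builds {cache_prefix}/{volume_name}.tar.{compression} cache keys.
// zstd -> "zst" is inferred from the documented examples; omitting the
// compression suffix entirely for "none" is an assumption.
function cacheKey(prefix: string, volume: string, compression: string): string {
  const ext = compression === "zstd" ? "zst" : compression;
  const base = compression === "none" ? `${volume}.tar` : `${volume}.tar.${ext}`;
  return prefix === "" ? base : `${prefix}/${base}`;
}
```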
## Environment Variables
AWS credentials can be provided via standard AWS SDK environment variables:
```sh
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_REGION=us-east-1

pocketci run pipeline.yml --driver='docker://?cache=s3://bucket'
```

Or use IAM roles, instance profiles, or other AWS SDK credential sources.
## Troubleshooting
### Cache Not Being Restored
- Check cache key: Ensure `cache_prefix` and volume names match between runs
- Verify S3 access: Check AWS credentials and bucket permissions
- Check logs: Look for "cache miss" or "restoring volume from cache" messages
### Cache Not Being Persisted
- Pipeline must complete: Caches are persisted when the pipeline finishes
- Check S3 write permissions: Ensure the credentials allow `PutObject`
- Check logs: Look for "persisting volume to cache" messages
## Performance Tips
- Use `zstd` compression (the default) for the best speed/ratio balance
- Use `none` compression for already-compressed data (tar.gz archives, etc.)
- Set an appropriate `ttl` to automatically expire stale caches
- Use specific cache paths rather than caching entire directories