Services & Databases

Eve services follow Docker Compose conventions with Eve-specific extensions for ingress, roles, managed databases, and storage. This guide covers every aspect of service configuration — from basic container definitions through platform-managed databases and persistent volumes.

Service basics

Each key under services in your manifest defines a named service. At minimum, a service needs either an image to pull or a build context to build from:

services:
  api:
    build:
      context: ./apps/api
    image: acme-api
    ports: [3000]
    environment:
      NODE_ENV: production

When both build and image are present, Eve builds the image from the context and pushes it to the registry using the image value as the tag. When only image is present, Eve pulls the image directly.
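For example, a pull-only service needs nothing beyond the image reference (the redis:7 service here is illustrative, not part of the examples elsewhere in this guide):

```yaml
services:
  cache:
    image: redis:7    # no build block, so Eve pulls this image as-is
    ports: [6379]
```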

Build configuration

The build block defines how to build the container image:

services:
  api:
    build:
      context: ./apps/api          # Build context path (relative to repo root)
      dockerfile: Dockerfile.prod  # Optional: defaults to Dockerfile
      args:                        # Optional: build arguments
        NODE_VERSION: "20"
    image: acme-api

If a service has build but no image field and a usable registry is configured, Eve automatically derives the image name from the service name. Explicitly setting image is still recommended for clarity.

Environment variables

Environment variables are declared as a key-value map. Values support secret interpolation and platform variables:

services:
  api:
    environment:
      NODE_ENV: production
      DATABASE_URL: postgres://app:${secret.DB_PASSWORD}@db:5432/app
      LOG_LEVEL: info

Eve also injects platform variables (EVE_API_URL, EVE_PROJECT_ID, EVE_ENV_NAME, etc.) automatically into every deployed service. You can override any injected variable by declaring it explicitly.
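For instance, declaring an injected variable explicitly replaces the platform-provided value (the override value shown is illustrative):

```yaml
services:
  api:
    environment:
      EVE_ENV_NAME: demo    # takes precedence over the injected value
```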

Ports

Ports follow Docker Compose conventions. You can specify them as numbers or strings:

ports: [3000]           # Container port only
ports: ["3000:3000"]    # host:container mapping
ports: [3000, 8080]     # Multiple ports

When a service exposes ports and the environment has a domain configured, Eve creates ingress routing by default. Control this with the x-eve.ingress block.
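A minimal sketch using the public and port fields that appear in the complete example at the end of this guide:

```yaml
services:
  api:
    ports: [3000]
    x-eve:
      ingress:
        public: true    # expose via the environment's domain
        port: 3000
```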

Health checks

Health checks tell Eve when a service is ready to receive traffic. They use the standard Docker Compose format:

services:
  api:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 10s

| Field | Default | Description |
| --- | --- | --- |
| test | — | Command to run. Array form (["CMD", ...]) or string |
| interval | — | Time between checks (e.g., 10s, 30s) |
| timeout | — | Maximum time for a single check |
| retries | 3 | Consecutive failures before marking unhealthy |
| start_period | — | Grace period before health checks begin |

For database services, use the engine's built-in readiness tool:

services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "app"]
      interval: 5s
      timeout: 3s
      retries: 5

Tip: Always define health checks for services that other services depend on. Without a health check, condition: service_healthy dependencies cannot be satisfied.

Service dependencies and ordering

Use depends_on to declare startup ordering between services:

services:
  api:
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

Two conditions are supported:

| Condition | Also accepted as | Behavior |
| --- | --- | --- |
| service_started | started | Wait until the container starts |
| service_healthy | healthy | Wait until the health check passes |

Dependencies form a directed acyclic graph. Eve resolves the graph and starts services in the correct order. Circular dependencies are rejected at validation time.
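To illustrate the resolution step, here is a hypothetical sketch of a dependency resolver using Kahn-style topological ordering. The function name startupOrder and the graph shape are illustrative; this is not Eve's actual implementation.

```typescript
// Map each service to the services it depends on.
type Graph = Record<string, string[]>;

function startupOrder(deps: Graph): string[] {
  const remaining = new Set(Object.keys(deps));
  const order: string[] = [];
  while (remaining.size > 0) {
    // A service is ready once none of its dependencies are still pending.
    const ready = Array.from(remaining).filter((s) =>
      deps[s].every((d) => !remaining.has(d))
    );
    // No progress is possible: the graph contains a cycle.
    if (ready.length === 0) {
      throw new Error("circular dependency detected");
    }
    for (const s of ready) {
      remaining.delete(s);
      order.push(s);
    }
  }
  return order;
}

// db has no deps; api waits on db; web waits on api.
const order = startupOrder({ web: ["api"], api: ["db"], db: [] });
```

Validation-time cycle rejection falls out naturally: if no service is ready in a pass, the resolver stops with an error instead of looping forever.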

The following diagram shows how dependencies control startup ordering for a typical web application:

Eve service roles

The x-eve.role field determines how Eve treats a service at deploy time:

| Role | Description |
| --- | --- |
| component | Default. Long-running service deployed to Kubernetes |
| worker | Worker pool service (uses worker_type for routing) |
| job | One-off task — migrations, seed scripts, cleanup. Not deployed as a long-running service |
| managed_db | Platform-provisioned database. Not deployed to Kubernetes |

Component (default)

The default role. Services without an explicit role are treated as long-running components:

services:
  api:
    build:
      context: ./apps/api
    ports: [3000]
    x-eve:
      role: component  # Optional — this is the default

Worker

Worker services participate in a worker pool. Specify worker_type to route work appropriately:

services:
  processor:
    build:
      context: ./apps/processor
    x-eve:
      role: worker
      worker_type: default

Environments can configure worker pools with replica counts:

environments:
  staging:
    workers:
      - type: default
        service: processor
        replicas: 2

Job

Job services run as one-off tasks. They are not deployed as persistent containers — instead, they are executed on demand by pipeline steps or CLI commands:

services:
  migrate:
    image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
    environment:
      DATABASE_URL: ${managed.db.url}
      MIGRATIONS_DIR: /migrations
    depends_on:
      db:
        condition: service_healthy
    x-eve:
      role: job
      files:
        - source: db/migrations
          target: /migrations

Reference job services from pipeline steps using the job action type:

pipelines:
  deploy:
    steps:
      - name: migrate
        action: { type: job, service: migrate }
      - name: deploy
        depends_on: [migrate]
        action: { type: deploy }

For the simplest setup, use public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest as a role: job service:

services:
  migrate:
    image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
    environment:
      DATABASE_URL: ${managed.db.url}
      MIGRATIONS_DIR: /migrations
    x-eve:
      role: job
      files:
        - source: db/migrations
          target: /migrations

Migration file conventions:

  • Keep SQL files in db/migrations/
  • Use YYYYMMDDHHmmss_description.sql naming
  • Regex pattern: ^(\d{14})_([a-z0-9_]+)\.sql$
  • One migration file per logical migration (may contain multiple SQL statements)
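The naming convention above can be checked mechanically. This small sketch applies the documented regex; the helper name is illustrative:

```typescript
// Regex taken verbatim from the migration file conventions above.
const MIGRATION_RE = /^(\d{14})_([a-z0-9_]+)\.sql$/;

// Returns true for names like 20240501120000_create_users.sql.
function isValidMigrationName(filename: string): boolean {
  return MIGRATION_RE.test(filename);
}
```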

Eve migration behavior:

  • Tracks applied migrations in schema_migrations (name, checksum, applied_at)
  • Re-runs with ROLLBACK safety for failed scripts
  • Auto-creates pgcrypto and uuid-ossp
  • Auto-baseline when the database already contains existing schema objects

Local development example

For docker-compose workflows:

services:
  migrate:
    image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
    environment:
      DATABASE_URL: postgres://app:app@db:5432/myapp
    volumes:
      - ./db/migrations:/migrations:ro
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "app"]
      interval: 5s
      timeout: 3s
      retries: 5

docker compose run --rm migrate        # Apply migrations
docker compose down -v && docker compose up -d db && docker compose run --rm migrate   # Reset and re-apply

Note: You can still run a BYO migration tool if your org requires it, but the Eve-migrate container is the recommended default for documentation parity.

Managed databases

For production workloads, use Eve's managed database provisioning instead of running your own Postgres container. Managed databases are declared as services with x-eve.role: managed_db:

services:
  db:
    x-eve:
      role: managed_db
      managed:
        class: db.p1
        engine: postgres
        engine_version: "16"

Configuration fields

| Field | Description |
| --- | --- |
| class | Database tier — db.p1 (small), db.p2 (medium), db.p3 (large) |
| engine | Database engine — currently only postgres is supported |
| engine_version | Engine version string (e.g., "16") |

Provisioning lifecycle

Managed databases are provisioned when you deploy an environment for the first time. They are not rendered into Kubernetes manifests — the platform handles provisioning, credentials, and networking separately.

The lifecycle flow looks like this:

Credentials and connection

The platform manages credentials automatically. Other services can reference managed database values using interpolation placeholders:

services:
  api:
    environment:
      DATABASE_URL: ${managed.db.url}

These placeholders are resolved at deploy time when the managed database is available.
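Conceptually, resolution is a string substitution over the placeholder syntax. This sketch is illustrative only; the real resolver runs inside the platform at deploy time:

```typescript
// Substitute ${...} placeholders from a value map, leaving unknown
// placeholders untouched so a later resolution pass can handle them.
function resolvePlaceholders(
  value: string,
  vars: Record<string, string>
): string {
  return value.replace(/\$\{([A-Za-z0-9._]+)\}/g, (match: string, key: string) =>
    key in vars ? vars[key] : match
  );
}
```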

Managing your database

Use the eve db CLI commands to interact with managed databases:

# Check provisioning status
eve db status --env staging

# View the current schema
eve db schema --env staging

# Run a read-only query
eve db sql --env staging --sql "SELECT count(*) FROM users"

# Run a write query (requires --write flag)
eve db sql --env staging --sql "UPDATE settings SET value='v2'" --write

# Run SQL from a file
eve db sql --env staging --file ./scripts/seed.sql

# Run migrations
eve db migrate --env staging --path db/migrations

# List applied migrations
eve db migrations --env staging

# Create a new migration file
eve db new create_users_table --path db/migrations

Migration files follow the naming convention YYYYMMDDHHmmss_description.sql and live under db/migrations/ by default.
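As an illustration of that convention, this hypothetical helper builds a filename with a 14-digit UTC timestamp prefix, similar in spirit to what eve db new produces:

```typescript
// Hypothetical helper mirroring the documented convention:
// a 14-digit UTC timestamp followed by a snake_case description.
function migrationFilename(description: string, now: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const ts = [
    now.getUTCFullYear(),
    pad(now.getUTCMonth() + 1),
    pad(now.getUTCDate()),
    pad(now.getUTCHours()),
    pad(now.getUTCMinutes()),
    pad(now.getUTCSeconds()),
  ].join("");
  return `${ts}_${description}.sql`;
}

// e.g. migrationFilename("create_users_table") at May 1 2024 12:00:00 UTC
// yields 20240501120000_create_users_table.sql
```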

Scaling and maintenance

# Scale to a larger tier
eve db scale --env staging --class db.p2

# Rotate database credentials
eve db rotate-credentials --env staging

# Destroy the managed database (irreversible)
eve db destroy --env staging --force

Warning: eve db destroy permanently deletes the managed database and all its data. This action cannot be undone. Always ensure you have a backup before running this command.

Persistent storage

For services that need to persist data across container restarts (without using a managed database), attach a persistent volume with x-eve.storage:

services:
  minio:
    image: minio/minio:latest
    ports: [9000]
    x-eve:
      storage:
        mount_path: /data
        size: 50Gi
        access_mode: ReadWriteOnce
        storage_class: standard

| Field | Description |
| --- | --- |
| mount_path | Absolute path inside the container |
| size | Volume size (e.g., 10Gi, 50Gi) |
| access_mode | ReadWriteOnce, ReadWriteMany, or ReadOnlyMany |
| storage_class | Kubernetes storage class name |

Info: Persistent volumes survive container restarts but are scoped to the environment. Deleting an environment removes its volumes.

File mounts

Mount files from your repository directly into a container using x-eve.files:

services:
  nginx:
    image: nginx:alpine
    x-eve:
      files:
        - source: ./config/nginx.conf
          target: /etc/nginx/nginx.conf
        - source: ./config/certs/
          target: /etc/nginx/certs/

| Field | Description |
| --- | --- |
| source | Relative path in the repository |
| target | Absolute path in the container |

File mounts are read from the repository at the deployed git SHA, ensuring the container always receives the version of the file that matches the deployed code.

Private endpoints

Some services live outside your cluster — a GPU machine on your office network, an LM Studio instance on a Mac Mini, an internal API behind a VPN. Eve's private endpoints make these Tailscale-connected services accessible to every pod in the cluster without sidecars, proxies, or per-pod networking configuration.

How it works

Private endpoints use the Tailscale Kubernetes Operator to create an egress proxy in a dedicated eve-tunnels namespace. Eve manages the K8s ExternalName Service that bridges cluster DNS to the tailnet device. Your apps connect using a stable, predictable in-cluster URL — no Tailscale configuration needed in application code.

Every private endpoint gets a DNS name following this pattern:

http://<orgSlug>-<name>.eve-tunnels.svc.cluster.local:<port>

This URL works from app pods, agent runtime pods, and worker runner pods alike.
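The pattern is mechanical enough to compute in code. This illustrative helper composes the URL from the documented parts:

```typescript
// Build the in-cluster DNS URL from the documented naming pattern.
function endpointUrl(orgSlug: string, name: string, port: number): string {
  return `http://${orgSlug}-${name}.eve-tunnels.svc.cluster.local:${port}`;
}

// Matches the LLM_BASE_URL example later in this section (minus the /v1 path):
const url = endpointUrl("myorg", "lmstudio", 1234);
```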

Registering an endpoint

Use the eve endpoint CLI to register a private endpoint backed by a Tailscale device:

# Register a private endpoint
eve endpoint add \
  --name lmstudio \
  --tailscale-hostname mac-mini.tail12345.ts.net \
  --port 1234

# List registered endpoints
eve endpoint list

# Show endpoint details and connectivity status
eve endpoint show lmstudio

# Run diagnostics
eve endpoint diagnose lmstudio

# Remove an endpoint
eve endpoint remove lmstudio

On success, eve endpoint add prints the in-cluster URL. The endpoint is org-scoped — names must be DNS-safe and are unique per organization.

Connecting your services

Wire the endpoint URL into your services via secrets. This follows the standard BYOK (bring your own key) pattern — Eve provides the connectivity, you configure the environment variables:

# Store the endpoint URL as a secret
eve secrets set LLM_BASE_URL \
  "http://myorg-lmstudio.eve-tunnels.svc.cluster.local:1234/v1" \
  --scope project

# Store any auth keys the service requires
eve secrets set LLM_API_KEY "your-api-key" --scope project

Then reference the secrets in your manifest:

services:
  api:
    environment:
      LLM_BASE_URL: ${secret.LLM_BASE_URL}
      LLM_API_KEY: ${secret.LLM_API_KEY}

Your application code uses these environment variables directly — no Eve-specific SDK or client is needed:

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: process.env.LLM_BASE_URL,
  apiKey: process.env.LLM_API_KEY,
});

Prerequisites

The Tailscale Kubernetes Operator must be installed in the cluster before eve endpoint add can create tunnel services. The command checks for the operator and fails with guidance if it is not found. Operator installation is a one-time infrastructure task — see your cluster administrator or the Tailscale operator docs for setup.


Tip: For local k3d development where the host machine is already on the tailnet, k3d containers can often route to Tailscale IPs directly via Docker bridge networking. In this case you can skip the operator and set the Tailscale IP in your secrets with eve secrets set. Validate connectivity with kubectl run curl --image=curlimages/curl --rm -it -- curl http://100.x.x.x:1234/v1/models before relying on this shortcut.

External services

Mark a service as external when it represents a dependency that Eve should not deploy — such as a third-party API or a database hosted elsewhere:

services:
  stripe:
    x-eve:
      external: true
      url: https://api.stripe.com

  legacy-db:
    x-eve:
      external: true
      url: postgres://user:pass@legacy-host:5432/mydb

External services appear in the dependency graph and can be referenced by other services, but Eve skips them during deployment. This is useful for documenting the full system topology in a single manifest.

API spec registration

Register your service's API specification so Eve can discover and catalog it:

services:
  api:
    x-eve:
      api_spec:
        type: openapi
        spec_url: /openapi.json

Supported spec types:

| Type | Default spec URL | Description |
| --- | --- | --- |
| openapi | /openapi.json | OpenAPI / Swagger specification |
| postgrest | / | PostgREST auto-generated API |
| graphql | /graphql | GraphQL schema introspection |

When on_deploy is true (the default), Eve fetches the spec after each deployment and registers it for agent discovery. Set auth: "none" if the spec endpoint does not require Eve authentication.
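A sketch combining the fields mentioned above; on_deploy: true is stated as the default, and auth: none is only needed when the spec endpoint skips Eve authentication:

```yaml
services:
  api:
    x-eve:
      api_spec:
        type: openapi
        spec_url: /openapi.json
        on_deploy: true   # default: fetch and register after each deploy
        auth: none        # only if the spec endpoint needs no Eve auth
```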

For services with multiple APIs, use api_specs instead:

services:
  api:
    x-eve:
      api_specs:
        - type: openapi
          spec_url: /v1/openapi.json
          name: v1
        - type: openapi
          spec_url: /v2/openapi.json
          name: v2

Environment overrides for services

Environments can override service configuration without changing the base definitions. This lets you keep a single service block while varying behavior per environment:

services:
  api:
    build:
      context: ./apps/api
    image: acme-api
    ports: [3000]
    environment:
      NODE_ENV: production
      LOG_LEVEL: info

environments:
  staging:
    pipeline: deploy
    overrides:
      services:
        api:
          environment:
            NODE_ENV: staging
            LOG_LEVEL: debug
  production:
    pipeline: deploy
    approval: required

Overrides are merged with the base service configuration at deploy time. Only the specified fields are overridden — unmentioned fields retain their base values. This pattern is especially useful for tuning resource limits, log levels, or feature flags per environment without duplicating your entire service definition.
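A hypothetical sketch of the merge semantics described above: a recursive merge where override fields win and unmentioned fields keep their base values. The function name and config shape are illustrative, not Eve's actual merge code.

```typescript
type Cfg = { [key: string]: unknown };

// Recursively merge an override into a base config: nested objects merge
// key by key, while scalars and arrays are replaced wholesale.
function mergeOverrides(base: Cfg, override: Cfg): Cfg {
  const out: Cfg = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const prev = out[key];
    if (
      value !== null && typeof value === "object" && !Array.isArray(value) &&
      prev !== null && typeof prev === "object" && !Array.isArray(prev)
    ) {
      out[key] = mergeOverrides(prev as Cfg, value as Cfg);
    } else {
      out[key] = value;
    }
  }
  return out;
}

// Staging overrides win for the listed environment keys; ports survive.
const stagingApi = mergeOverrides(
  { environment: { NODE_ENV: "production", LOG_LEVEL: "info" }, ports: [3000] },
  { environment: { NODE_ENV: "staging", LOG_LEVEL: "debug" } }
);
```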

Putting it all together

Here is a complete services block for a typical fullstack application with a managed database, an API server, a web frontend, and a migration job:

services:
  db:
    x-eve:
      role: managed_db
      managed:
        class: db.p1
        engine: postgres
        engine_version: "16"

  api:
    build:
      context: ./apps/api
    image: acme-api
    ports: [3000]
    environment:
      DATABASE_URL: ${managed.db.url}
      NODE_ENV: production
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 10s
      timeout: 5s
      retries: 3
    x-eve:
      ingress:
        public: true
        port: 3000
      api_spec:
        type: openapi
        spec_url: /openapi.json

  web:
    build:
      context: ./apps/web
    image: acme-web
    ports: [80]
    depends_on:
      api:
        condition: service_healthy
    x-eve:
      ingress:
        public: true
        port: 80

  migrate:
    image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
    environment:
      DATABASE_URL: ${managed.db.url}
      MIGRATIONS_DIR: /migrations
    depends_on:
      db:
        condition: service_healthy
    x-eve:
      role: job
      files:
        - source: db/migrations
          target: /migrations

This configuration gives you a managed Postgres database provisioned by the platform, an API with public ingress and OpenAPI discovery, a web frontend, and a migration job that runs before deployments via pipeline steps. All secrets and managed connection strings are resolved at deploy time.

What's next?