
Agents & Teams

Agents are AI personas with specific skills, access policies, and harness profiles. Teams group agents for coordinated work under a lead with a dispatch strategy. Both are defined in YAML files in your repository and synced to the platform via eve agents sync.

What are agents?

An agent is a named persona that combines:

  • A skill — what the agent knows how to do
  • A harness profile — which model(s) power the agent
  • Access policies — which environments and services the agent can touch
  • Gateway exposure — whether the agent is addressable from chat

Agents are not generic AI assistants. Each one has a narrow, well-defined role. A coder agent writes code. A reviewer agent reviews pull requests. A deploy-agent handles deployments. Specialization makes agents reliable and predictable.

agents.yaml structure

Agents are defined in a YAML file whose path is set via x-eve.agents.config_path in the manifest. The conventional location is agents/agents.yaml.

version: 1
agents:
  mission-control:
    slug: mission-control
    alias: mc  # short name for chat: @eve mc deploy to staging
    description: "Primary orchestration agent for deploys and incident response"
    skill: eve-orchestration
    workflow: assistant
    harness_profile: primary-orchestrator
    access:
      envs: [staging, production]
      services: [api, web]
      api_specs: [openapi]
    policies:
      permission_policy: auto_edit
      git:
        commit: manual
        push: never
    schedule:
      heartbeat_cron: "*/15 * * * *"
    gateway:
      policy: routable
      clients: [slack]

Field reference

| Field | Required | Description |
| --- | --- | --- |
| slug | No | Org-unique identifier for chat routing. Lowercase alphanumeric + dashes. |
| alias | No | Short vanity name for chat addressing (see Agent aliases) |
| description | No | Human-readable summary of the agent's purpose |
| skill | Yes | Name of the installed skill that defines this agent's capability |
| workflow | No | Named workflow to execute (from workflows in the manifest) |
| harness_profile | No | Named profile from x-eve.agents.profiles in the manifest |
| access | No | Scope restrictions: envs, services, api_specs |
| policies | No | Permission and git policies |
| schedule | No | Cron-based heartbeat for periodic agents |
| gateway | No | Chat gateway exposure settings |

Permission policies

The permission_policy field controls how much autonomy an agent has:

| Policy | Behavior |
| --- | --- |
| default | Interactive — requires human approval for risky actions |
| auto_edit | Autonomous — edits files and code without approval |
| never | Read-only — cannot modify anything |
| yolo | Fully autonomous in controlled environments (use carefully) |

Git policies

Git policies control how agents interact with version control:

policies:
  git:
    commit: auto        # never | manual | auto | required
    push: on_success    # never | on_success | required
  • commit: auto creates commits automatically. manual lets the agent decide when to commit. required mandates a commit before the job completes.
  • push: on_success pushes when the job succeeds. never means the agent's changes stay local. required mandates a push before completion.

For coding agents, auto commit with on_success push is the common pattern. For read-only agents (auditors, reviewers), set both to never.
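
For example, a read-only auditor might combine a restrictive permission policy with locked-down git policies. The fragment below follows the schema above; the agent and skill names are illustrative:

```yaml
agents:
  security-auditor:              # illustrative read-only agent
    skill: security-audit        # hypothetical skill name
    policies:
      permission_policy: never   # read-only: cannot modify anything
      git:
        commit: never
        push: never
```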

Hard guardrails from repo policy files

Policy files like AGENTS.md can define hard constraints that override general autonomy settings. Example: forbidding direct AWS infrastructure mutations and requiring Terraform-only changes in a separate infra repository. Treat these rules as mandatory runtime policy, even when permission_policy is permissive.

Agent slugs and gateway exposure

Slugs

An agent slug is an org-unique identifier used for direct chat routing. When a user sends @eve mission-control deploy to staging in Slack, Eve routes the message to the agent with slug mission-control.

Slug rules:

  • Lowercase alphanumeric characters and dashes only
  • Must be unique across the entire organization (not just the project)
  • Sync fails if a slug already exists in another project

Organizations can set a default agent that receives messages when no slug is specified:

eve org update org_xxx --default-agent mission-control

Gateway exposure policy

The gateway block controls whether an agent is visible and addressable from external chat providers. Internal dispatch (teams, pipelines, routes) is unaffected by this setting.

gateway:
  policy: routable
  clients: [slack]

| Policy | Listed in @eve agents list | Responds to @eve <slug> msg | Internal dispatch |
| --- | --- | --- | --- |
| none | Hidden | Rejected | Works |
| discoverable | Visible | Rejected (with hint) | Works |
| routable | Visible | Works | Works |

Default to none. Make agents routable only when they should receive direct messages from chat. discoverable is useful for agents that should appear in listings but only respond when routed through a team.
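
The exposure matrix above can be encoded as a small predicate. This is an illustrative sketch only, not the platform's actual enforcement code:

```python
# Illustrative encoding of the gateway exposure matrix; not the
# platform's real enforcement logic.

def gateway_allows(policy: str, action: str) -> bool:
    """Check an action ("list", "direct", "internal") against a gateway policy."""
    matrix = {
        "none":         {"list": False, "direct": False, "internal": True},
        "discoverable": {"list": True,  "direct": False, "internal": True},
        "routable":     {"list": True,  "direct": True,  "internal": True},
    }
    return matrix[policy][action]
```

Note that internal dispatch works under every policy, which is why `none` is a safe default for team-only agents.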

Agent aliases

Agent slugs are always prefixed with the project slug to ensure org-wide uniqueness. An agent with slug pm in project pmbot becomes pmbot-pm. In Slack, users must type @eve pmbot-pm hello — clunky and hard to remember.

Aliases solve this. An alias is a short, human-chosen vanity name that bypasses the prefixed slug for chat addressing:

agents:
  pm:
    slug: pm
    alias: pm  # users type: @eve pm hello
    skill: pm-coordinator
    gateway:
      policy: routable
  tech-lead:
    slug: tech-lead
    alias: tech  # users type: @eve tech review this
    skill: tech-lead
    gateway:
      policy: routable

After sync with project slug pmbot, the canonical slugs are pmbot-pm and pmbot-tech-lead (and they still work), but users can also address these agents as @eve pm hello and @eve tech review this.

Resolution order is backwards-compatible — existing slugs always resolve first:

  1. Slug match — @eve pmbot-pm hello routes directly
  2. Alias match — @eve pm hello resolves via alias
  3. Org default — @eve hello falls back to the organization's default agent
  4. Error — no match and no default configured
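
This resolution order can be sketched in a few lines. The registry shape and function name below are illustrative, not Eve's actual API:

```python
# Hypothetical sketch of the chat routing resolution order described above.
# The registry structure and function name are illustrative, not Eve's API.

def resolve_agent(token, slugs, aliases, org_default=None):
    """Resolve a chat address to a canonical agent slug.

    Order: exact slug match, then alias match, then the org default,
    and finally an error when nothing matches.
    """
    token = token.lower()          # the namespace is case-insensitive
    if token in slugs:             # 1. slug match
        return token
    if token in aliases:           # 2. alias match
        return aliases[token]
    if org_default is not None:    # 3. org default
        return org_default
    raise LookupError(f"no agent matches {token!r} and no default is set")
```

For example, with slug `pmbot-pm` and alias `pm`, both `resolve_agent("pmbot-pm", ...)` and `resolve_agent("pm", ...)` route to the same agent.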

Namespace rules:

  • Aliases and slugs share the same routing namespace. If project A has slug pm, project B cannot claim alias pm.
  • The namespace is org-scoped and case-insensitive.
  • Platform-reserved words (agents, help, status, eve, admin, system, health) cannot be used as aliases — they conflict with gateway management commands.
  • Aliases are optional. If omitted, the agent is reachable only by its canonical prefixed slug.

The @eve agents list command shows aliases alongside canonical slugs:

pmbot-pm (-> pm) -- pmbot (PM Coordinator)
devbot-code (-> code) -- devbot (Code Review Agent)

Agent runtime and warm pods

When a chat message arrives for an agent, Eve needs somewhere to execute it. The agent runtime provides pre-provisioned, org-scoped containers — warm pods — that are ready to handle requests immediately, eliminating cold-start latency for conversational flows.

How warm pods work

Warm pods are long-lived containers that report health and capacity to the platform via a heartbeat. When a chat request arrives, the platform places it on a warm pod within the same organization using a sticky routing strategy. This means your agents respond in seconds rather than waiting for a fresh container to spin up.

Each warm pod tracks:

  • Health status — whether the pod is ready to accept work
  • Capacity — how many concurrent requests the pod can handle
  • Org binding — which organization the pod serves
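
A heartbeat payload covering these three fields might look like the following. The field names mirror the list above, but this is an assumed shape, not the platform's actual wire format:

```python
# Illustrative warm-pod heartbeat payload; field names mirror the list
# above, but this is not the platform's actual wire format.
from dataclasses import dataclass, asdict

@dataclass
class WarmPodHeartbeat:
    pod_id: str
    healthy: bool   # health status: ready to accept work
    capacity: int   # how many concurrent requests the pod can handle
    org_id: str     # org binding: which organization the pod serves

def heartbeat_payload(hb: WarmPodHeartbeat) -> dict:
    """Serialize the heartbeat for reporting to the platform."""
    return asdict(hb)
```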

Execution modes

The EVE_AGENT_RUNTIME_EXECUTION_MODE environment variable controls how agent jobs run:

| Mode | Behavior | Best for |
| --- | --- | --- |
| inline (default) | Execute directly in the warm pod | Chat, triage, lightweight tasks |
| runner | Spin up an ephemeral runner pod | Heavy computation, untrusted code, long-running tasks |

Inline mode is the default because it gives the fastest response times. Switch to runner mode when you need stronger isolation — for example, when agents execute user-provided code or perform resource-intensive operations that could affect other requests sharing the pod.

# Check runtime status for your agents
eve agents runtime-status
tip

Start with inline mode. If you observe resource contention or need stricter isolation for specific agents, switch those agents to runner mode selectively via environment overrides rather than changing the global setting.

Per-job HOME isolation

Each job attempt runs with its own isolated HOME directory. The platform creates a dedicated home for every attempt, pre-populates it with the necessary directory structure, and sets HOME and EVE_JOB_USER_HOME in the harness environment. This prevents cross-job interference — credentials, shell history, and tool configuration from one job cannot leak into another, even when multiple jobs share the same warm pod via inline execution.

Both the agent runtime and the worker enforce this isolation. The job home is cleaned up after the attempt completes.
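
The isolation pattern can be sketched as follows. This is a simplified illustration assuming a temp-directory-per-attempt approach; the platform's actual directory layout and pre-populated structure may differ:

```python
# Simplified sketch of per-attempt HOME isolation. Directory names and the
# pre-populated structure are illustrative; the platform's layout may differ.
import os
import shutil
import tempfile

def run_with_isolated_home(job_fn):
    """Run one job attempt with a private HOME, then clean it up."""
    job_home = tempfile.mkdtemp(prefix="eve-job-home-")
    # Pre-populate the directory structure the attempt expects.
    os.makedirs(os.path.join(job_home, ".config"), exist_ok=True)
    env = dict(os.environ, HOME=job_home, EVE_JOB_USER_HOME=job_home)
    try:
        return job_fn(env)   # the attempt sees only its own HOME
    finally:
        # The job home is cleaned up after the attempt completes.
        shutil.rmtree(job_home, ignore_errors=True)
```

Because every attempt gets a fresh directory, credentials or shell history written by one job never survive into the next, even on a shared warm pod.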

Teams and dispatch modes

Teams group agents under a lead for coordinated work. When work is dispatched to a team, the lead agent orchestrates the members according to the team's dispatch mode.

teams.yaml structure

Teams are defined in a separate YAML file whose path is set via x-eve.agents.teams_path in the manifest. The conventional location is agents/teams.yaml.

version: 1
teams:
  review-council:
    lead: mission-control
    members: [code-reviewer, security-auditor]
    dispatch:
      mode: council
      max_parallel: 3
      lead_timeout: 300
      member_timeout: 300
      merge_strategy: majority

  expert-panel:
    lead: pm-coordinator
    members: [tech-lead, ux-advocate, biz-analyst, risk-assessor]
    dispatch:
      mode: council
      staged: true  # lead prepares before members start
      lead_timeout: 3600
      member_timeout: 300

  deploy-ops:
    lead: ops-lead
    members: [deploy-agent, monitor-agent]
    dispatch:
      mode: fanout
      max_parallel: 2

  pipeline-crew:
    lead: orchestrator
    members: [builder, tester, deployer]
    dispatch:
      mode: relay

Dispatch modes

Fanout is the most common mode. The lead creates a root job and dispatches parallel child jobs — one per member. Members work independently. Use fanout when work can be cleanly decomposed into independent tasks.

dispatch:
  mode: fanout
  max_parallel: 3

Council sends the same prompt to all members and merges their responses using a merge strategy. Use council for collective judgment — code reviews, security audits, design decisions. Council supports an optional staged mode where the lead prepares work before members start.

dispatch:
  mode: council
  merge_strategy: majority  # majority | unanimous | lead-decides
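
As an illustration, merging member verdicts under these strategies might look like the sketch below. This is hypothetical; the platform's actual merge implementation is not shown in this document:

```python
# Hypothetical sketch of council merge strategies; the platform's real
# implementation is not part of this document.
from collections import Counter

def merge_verdicts(verdicts, strategy="majority", lead_verdict=None):
    """Merge member verdicts (e.g. "approve"/"reject") into one outcome."""
    if strategy == "lead-decides":
        return lead_verdict                # the lead's call wins outright
    if strategy == "unanimous":
        return verdicts[0] if len(set(verdicts)) == 1 else "no-consensus"
    # majority: the most common verdict wins
    return Counter(verdicts).most_common(1)[0][0]
```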

Relay is sequential delegation. The lead delegates to the first member, whose output passes to the next member, and so on. Use relay when each stage's output is the next stage's input — for example, a research-then-implement-then-test pipeline.

dispatch:
  mode: relay

Choosing the right mode

| Scenario | Mode | Why |
| --- | --- | --- |
| Implement multiple features in parallel | fanout | Independent work, no dependencies between members |
| Review a pull request from multiple perspectives | council | Multiple opinions merged into a single verdict |
| Transcribe a recording, then fan out to domain experts | council + staged | Lead prepares content before members start |
| Research, implement, then test | relay | Each stage depends on the previous stage's output |

Most work is fanout. Use council only when multiple perspectives genuinely improve the outcome. Use relay only when stages are strictly sequential.

Staged council dispatch

Standard council mode starts the lead and all members simultaneously. This breaks down when the lead needs to prepare material before members can work — for example, transcribing a meeting recording before domain experts analyze it, or triaging an incident before investigators fan out.

Staged dispatch solves this by splitting council execution into three phases: the lead prepares, the members work in parallel, and the lead synthesizes the results.

Enable it with the staged flag on a council dispatch:

teams:
  expert-panel:
    lead: pm-coordinator
    members:
      - tech-lead
      - ux-advocate
      - biz-analyst
      - risk-assessor
    dispatch:
      mode: council
      staged: true
      lead_timeout: 3600
      member_timeout: 300

How it works:

  1. Dispatch — the platform creates the lead job in ready phase and member jobs in backlog phase. Members are visible immediately (eve job list shows the full roster) but will not be claimed by the orchestrator.
  2. Prepare — the lead runs first. It processes attachments, transcribes audio, gathers context, and posts prepared material to the coordination thread. When ready, it returns eve.status = "prepared".
  3. Promote — the orchestrator sees the prepared signal, promotes all backlog members to ready, and requeues the lead with a children.all_done wake condition.
  4. Parallel work — members are claimed and run in parallel. Each reads the coordination thread for the lead's prepared content and returns its analysis.
  5. Synthesize — when all members complete, the lead wakes and reads their summaries from the coordination thread. It produces a final synthesis and returns eve.status = "success".

If the lead completes without returning prepared (handles the request solo, or fails), any members still in backlog are automatically cancelled.
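
The promote/cancel decision can be sketched as a small state transition. Phase and status names follow the steps above, but this is not the orchestrator's actual code:

```python
# Illustrative sketch of the staged-council promotion step; phase and
# status names follow the doc, but this is not the orchestrator's code.

def promote_members(lead_status, member_phases):
    """Return new member phases given the lead's reported status.

    "prepared" promotes backlog members to ready; a terminal lead status
    without preparation cancels any members still in backlog.
    """
    if lead_status == "prepared":
        return ["ready" if p == "backlog" else p for p in member_phases]
    if lead_status in ("success", "failed"):
        return ["cancelled" if p == "backlog" else p for p in member_phases]
    return member_phases  # lead still working; members stay in backlog
```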

tip

Staged dispatch is only valid with mode: council. The staged flag is rejected on fanout and relay modes. If you need sequential preparation followed by sequential processing, use relay with the lead as the first link in the chain.

Syncing agent configuration

All agent and team configuration is repo-first. The repository is the source of truth, and eve agents sync pushes it to the platform.

# Sync from committed ref (production)
eve agents sync --project proj_xxx --ref 0123456789abcdef0123456789abcdef01234567

# Sync local state (development)
eve agents sync --project proj_xxx --local --allow-dirty

# Preview effective config without syncing
eve agents config --repo-dir ./my-app

Sync performs several operations:

  1. Reads agents.yaml, teams.yaml, and chat.yaml from the paths specified in the manifest
  2. Resolves AgentPacks from x-eve.packs and writes .eve/packs.lock.yaml
  3. Deep-merges pack agents, teams, and chat config with local overrides
  4. Validates org-wide slug and alias uniqueness (aliases cannot collide with slugs or reserved names)
  5. Pushes the merged configuration to the API

Pack overlay

When using AgentPacks, local YAML overlays pack defaults via deep merge. You can override specific fields or remove pack-provided agents entirely:

agents:
  # Override a field from the pack
  pack-provided-agent:
    harness_profile: my-custom-profile

  # Remove a pack agent you don't need
  unwanted-pack-agent:
    _remove: true
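
The overlay semantics can be sketched as a recursive deep merge that honors the `_remove` marker. This is an illustrative re-implementation, not the installer's actual code:

```python
# Illustrative sketch of pack-overlay deep merge with _remove support;
# not the installer's actual implementation.

def deep_merge(pack, local):
    """Overlay local config onto pack defaults, honoring `_remove: true`."""
    merged = dict(pack)
    for key, value in local.items():
        if isinstance(value, dict) and value.get("_remove") is True:
            merged.pop(key, None)          # drop the pack-provided entry
        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into maps
        else:
            merged[key] = value            # local value wins
    return merged
```

Overridden fields replace pack values key by key, while untouched pack fields survive the merge.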

Harness profiles

Harness profiles decouple agents from specific AI models. Instead of hardcoding a model in the agent definition, you define named profiles in the manifest and agents reference them by name.

x-eve:
  agents:
    profiles:
      primary-orchestrator:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high
      primary-reviewer:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high
        - harness: codex
          model: gpt-5.2-codex
          reasoning_effort: x-high
      fast-triage:
        - harness: mclaude
          model: sonnet-4.5
          reasoning_effort: medium

Each profile is a fallback chain — if the first harness is unavailable, the next one is tried. This provides resilience against provider outages and lets you mix models by capability:

| Task type | Profile strategy |
| --- | --- |
| Complex coding, architecture | High-reasoning model (opus, gpt-5.2-codex) |
| Code review, documentation | Medium-reasoning model (sonnet, gemini) |
| Triage, routing, classification | Fast model (haiku-class, low reasoning) |
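
The fallback-chain behavior can be sketched as walking the profile entries in order. The entry shape mirrors the profile YAML above; the selection function itself is hypothetical:

```python
# Illustrative sketch of a harness fallback chain; entry shape mirrors
# the profile YAML above, but the selection function is hypothetical.

def pick_harness(profile_entries, is_available):
    """Return the first available entry in a profile's fallback chain."""
    for entry in profile_entries:
        if is_available(entry["harness"]):
            return entry
    raise RuntimeError("no harness in the profile is available")
```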

Availability policy

The manifest can configure what happens when a harness in a profile is unavailable:

x-eve:
  agents:
    availability:
      drop_unavailable: true

When drop_unavailable is true, unavailable harnesses are silently skipped and the next entry in the fallback chain is tried.

Planning councils

Planning councils are a specialized use of harness profiles where multiple models collaborate on a planning task. Define a profile with multiple entries, and the orchestrator runs them in parallel to produce a merged plan.

x-eve:
  agents:
    profiles:
      planning-council:
        - profile: primary-planner
        - harness: gemini
          model: gemini-3

A profile entry can reference another profile by name (using the profile key instead of harness), enabling composition of complex multi-model strategies.

Agent install targets

The x-eve.install_agents field controls which agent runtimes receive installed skills. By default, skills are installed for claude-code only.

x-eve:
  install_agents: [claude-code, codex, gemini-cli]

This affects where skill files are placed during installation. Each agent runtime has its own skills directory convention, and the installer writes to all specified targets.

You can also override install targets per-pack:

x-eve:
  packs:
    - source: ./skillpacks/claude-only
      install_agents: [claude-code]

    - source: ./skillpacks/universal
      install_agents: [claude-code, codex, gemini-cli]

Coordination threads

When a team dispatches work, a coordination thread links the parent job to all child agents. This enables real-time communication between the lead and members during execution.

  • Thread key: coord:job:{parent_job_id}
  • Child agents receive EVE_PARENT_JOB_ID as an environment variable and derive the thread key from it
  • End-of-attempt summaries are automatically posted to the coordination thread
  • Coordination inbox: .eve/coordination-inbox.md is regenerated at job start from recent thread messages
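
Given the `coord:job:{parent_job_id}` convention, a member agent can derive its thread key directly from the environment. The helper below is an illustrative sketch of that derivation:

```python
# Deriving the coordination thread key from the parent job id, per the
# coord:job:{parent_job_id} convention; the helper itself is illustrative.
import os

def coordination_thread_key(parent_job_id=None):
    """Build the coordination thread key for this job's team."""
    if parent_job_id is None:
        # Child agents receive the parent job id via the environment.
        parent_job_id = os.environ["EVE_PARENT_JOB_ID"]
    return f"coord:job:{parent_job_id}"
```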

Message kinds

| Kind | Purpose |
| --- | --- |
| status | Automatic end-of-attempt summary |
| directive | Lead-to-member instruction |
| question | Member-to-lead question |
| update | Progress update from a member |

The lead agent can monitor the entire job tree:

eve supervise                         # supervise current job
eve supervise <job-id> --timeout 60   # supervise specific job

Putting it all together

A complete agent configuration ties together the manifest, agents, teams, and chat:

# .eve/manifest.yaml
x-eve:
  agents:
    config_path: agents/agents.yaml
    teams_path: agents/teams.yaml
    profiles:
      primary-orchestrator:
        - harness: mclaude
          model: opus-4.5
          reasoning_effort: high
  chat:
    config_path: agents/chat.yaml
  install_agents: [claude-code]
  packs:
    - source: incept5/eve-skillpacks
      ref: 0123456789abcdef0123456789abcdef01234567

Sync everything in one command:

eve agents sync --project proj_xxx --ref <sha>

This resolves packs, merges configuration, validates slugs, and pushes agents, teams, and chat routes to the platform in a single atomic operation.

What's next?

Set up chat integrations to talk to your agents: Chat & Conversations