March 1, 2026 · ClawWorks Team · 8 min read
How to Set Up Multiple OpenClaw Agents (Step-by-Step Guide)
Running a single AI agent is powerful. Running multiple OpenClaw agents—each with its own personality, tools, and channels—unlocks a whole new level of automation. Maybe you want one agent handling customer support on Telegram while another monitors your infrastructure and a third manages your content pipeline.
This guide walks you through setting up 2–3 OpenClaw agents on a single server using Docker Compose. We'll cover configuration, networking, resource allocation, and how to keep everything running smoothly. If you'd rather skip the terminal work, we'll also show you the ClawPanel GUI approach at the end.
What Is OpenClaw Multi-Agent Setup?
An OpenClaw multi-agent setup is a deployment where two or more independent OpenClaw gateway instances run on the same server (or across servers), each configured with its own API keys, channel connections, tools, and personality files. This lets you run specialized AI agents in parallel—each isolated in its own Docker container—without them interfering with one another. It's the recommended pattern for teams and power users who need multiple autonomous agents.
Prerequisites for Your OpenClaw Setup Guide
- A Linux VPS or dedicated server — 2+ CPU cores and 4 GB RAM minimum (each agent uses ~1–1.5 GB)
- Docker & Docker Compose installed (v2.20+)
- API keys for your LLM provider (Anthropic, OpenAI, etc.)—one per agent or shared
- Channel tokens — e.g., separate Telegram bot tokens for each agent
Not sure which server to pick? Check our AI agent hosting comparison or consider ClawWorks managed hosting if you want us to handle the infrastructure.
Step 1: Create the Directory Structure for Multiple AI Agents
Each agent gets its own directory with separate configuration and workspace files. Here's the layout we recommend:
```
~/openclaw-agents/
├── docker-compose.yml
├── agent-alpha/
│   ├── config.yml
│   └── workspace/
├── agent-beta/
│   ├── config.yml
│   └── workspace/
└── agent-gamma/
    ├── config.yml
    └── workspace/
```

Create it in one command (note: no braces around the single `workspace` segment, since Bash leaves one-element brace groups unexpanded):

```shell
mkdir -p ~/openclaw-agents/{agent-alpha,agent-beta,agent-gamma}/workspace
cd ~/openclaw-agents
```

Step 2: Configure Each OpenClaw Agent
Each agent needs its own config.yml. Refer to the official OpenClaw docs for the full configuration reference. Here's a minimal example for Agent Alpha:
```yaml
# agent-alpha/config.yml
gateway:
  name: agent-alpha
  port: 3100
model:
  provider: anthropic
  default: claude-sonnet-4-20250514
channels:
  telegram:
    token: "${ALPHA_TELEGRAM_TOKEN}"
tools:
  browser: true
  exec: true
workspace:
  path: /app/workspace
```

For Agent Beta, change the name and port (e.g., 3101) and use a different Telegram token. For Agent Gamma, use port 3102 and its own credentials. The key rule: each agent must have a unique port and its own channel tokens.
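As a sketch, Agent Beta's config only needs the identity fields changed (field names here follow the alpha example above, not an authoritative schema reference):

```yaml
# agent-beta/config.yml (sketch; same schema as agent-alpha)
gateway:
  name: agent-beta
  port: 3101                          # unique per agent
model:
  provider: anthropic
  default: claude-sonnet-4-20250514
channels:
  telegram:
    token: "${BETA_TELEGRAM_TOKEN}"   # its own bot token
tools:
  browser: true
  exec: true
workspace:
  path: /app/workspace
```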
You can share LLM API keys across agents if you want unified billing, or separate them for per-agent cost tracking.
Step 3: Write the Docker Compose File
This is where the multi-agent magic happens. A single docker-compose.yml orchestrates all your agents:
```yaml
services:
  agent-alpha:
    image: ghcr.io/open-claw/open-claw:latest
    container_name: agent-alpha
    restart: unless-stopped
    ports:
      - "3100:3100"
    volumes:
      - ./agent-alpha/config.yml:/app/config.yml:ro
      - ./agent-alpha/workspace:/app/workspace
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - ALPHA_TELEGRAM_TOKEN=${ALPHA_TELEGRAM_TOKEN}
    deploy:
      resources:
        limits:
          memory: 1536M
          cpus: "1.0"

  agent-beta:
    image: ghcr.io/open-claw/open-claw:latest
    container_name: agent-beta
    restart: unless-stopped
    ports:
      - "3101:3101"
    volumes:
      - ./agent-beta/config.yml:/app/config.yml:ro
      - ./agent-beta/workspace:/app/workspace
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - BETA_TELEGRAM_TOKEN=${BETA_TELEGRAM_TOKEN}
    deploy:
      resources:
        limits:
          memory: 1536M
          cpus: "1.0"

  agent-gamma:
    image: ghcr.io/open-claw/open-claw:latest
    container_name: agent-gamma
    restart: unless-stopped
    ports:
      - "3102:3102"
    volumes:
      - ./agent-gamma/config.yml:/app/config.yml:ro
      - ./agent-gamma/workspace:/app/workspace
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GAMMA_TELEGRAM_TOKEN=${GAMMA_TELEGRAM_TOKEN}
    deploy:
      resources:
        limits:
          memory: 1536M
          cpus: "1.0"
```

Store your secrets in a `.env` file in the same directory; Docker Compose reads it automatically. Never commit this file to version control.
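A matching `.env` might look like this (variable names follow the compose file above; all values are placeholders):

```
# ~/openclaw-agents/.env — never commit this file
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
ALPHA_TELEGRAM_TOKEN=123456:ABC-alpha
BETA_TELEGRAM_TOKEN=123456:ABC-beta
GAMMA_TELEGRAM_TOKEN=123456:ABC-gamma
```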
Step 4: Launch and Verify Your Multiple AI Agents
```shell
# Pull the latest image
docker compose pull

# Start all agents in detached mode
docker compose up -d

# Check status
docker compose ps

# View logs for a specific agent
docker compose logs -f agent-alpha
```
You should see each agent start its gateway on its assigned port. Send a test message to each agent's Telegram bot to confirm they're responding independently.
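For a scripted check, something like this POSIX-shell sketch works (it assumes each gateway answers plain HTTP on its published port; adjust the path if your OpenClaw build exposes a dedicated health endpoint):

```shell
#!/bin/sh
# Ping each agent's gateway port and report UP/DOWN.
# Ports match the compose file above; the HTTP path "/" is an assumption.
check_agent() {
  name="$1"; port="$2"
  if curl -sf --max-time 2 "http://localhost:${port}/" >/dev/null 2>&1; then
    echo "${name} (port ${port}): UP"
  else
    echo "${name} (port ${port}): DOWN"
  fi
}

check_agent agent-alpha 3100
check_agent agent-beta  3101
check_agent agent-gamma 3102
```

The same function plugs straight into a cron job for the health check described below.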
Step 5: Customize Agent Personalities and Tools
Each agent's workspace/ directory is where you define its personality. Create a SOUL.md file in each workspace describing who the agent is, how it should behave, and what its responsibilities are. You can also add TOOLS.md for tool-specific instructions.
For example, Agent Alpha might be a customer support specialist that's friendly and concise, while Agent Beta is a DevOps monitor that speaks in terse alerts. The OpenClaw GitHub repo has example workspace templates to get you started.
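As an illustration (the headings and wording here are made up, not an official template), Agent Alpha's `SOUL.md` might start like:

```markdown
# SOUL.md — agent-alpha

You are Alpha, a customer support specialist.

- Tone: friendly and concise; no more than three sentences per reply.
- Scope: billing questions, account issues, product how-tos.
- Escalate anything you cannot resolve to a human in the support channel.
```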
Want a pre-built configuration framework? The ClawWorks Blueprint gives you production-ready agent templates with best-practice personality files, tool configs, and memory structures.
Resource Planning for Multiple OpenClaw Agents
Here's a practical breakdown of what you'll need:
| Agents | RAM | CPU | Disk |
|---|---|---|---|
| 1 | 2 GB | 1 core | 10 GB |
| 2 | 4 GB | 2 cores | 20 GB |
| 3 | 6 GB | 2–4 cores | 30 GB |
If an agent uses the built-in headless browser (for web scraping or automation), add an extra 512 MB per agent. Swap space (2 GB) is also recommended as a safety net.
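Before adding swap or another agent, it's worth checking current headroom with the standard Linux tools:

```shell
# Show current memory usage and configured swap
free -h
swapon --show || true   # empty output means no swap is configured yet
```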
Monitoring and Maintenance Tips
- Use `docker compose logs -f` to tail logs across all agents, or filter by service name.
- Set up a simple health check — hit each agent's gateway port with a cURL script on a cron job.
- Update agents by running `docker compose pull && docker compose up -d`. Containers restart with the latest image.
- Back up workspaces — each agent's `workspace/` folder contains its memory and files. Snapshot these regularly.
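A minimal backup sketch, assuming the directory layout from Step 1 (for live agents, consider stopping the container first so files are not captured mid-write):

```shell
#!/bin/sh
# Archive each agent's workspace/ into a timestamped tarball.
# Usage: backup_workspaces <agents-dir> <backup-dir>
backup_workspaces() {
  agents_dir="$1"; backup_dir="$2"
  mkdir -p "$backup_dir"
  stamp=$(date +%Y%m%d-%H%M%S)
  for ws in "$agents_dir"/agent-*/workspace; do
    [ -d "$ws" ] || continue
    agent=$(basename "$(dirname "$ws")")
    # -C keeps paths inside the tarball relative to the agent directory
    tar -czf "$backup_dir/${agent}-${stamp}.tar.gz" -C "$(dirname "$ws")" workspace
  done
}

# Example: backup_workspaces "$HOME/openclaw-agents" "$HOME/openclaw-backups"
```

Dropping a call to this function into a nightly cron job covers the "snapshot regularly" advice above.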
The Easier Way: Set Up Multiple AI Agents with ClawPanel
If editing YAML files and managing Docker containers isn't your idea of a good time, there's a faster path. ClawPanel is our web-based dashboard that lets you deploy and manage multiple OpenClaw agents through a clean GUI. No terminal required.
With ClawPanel you can:
- ✓ Create new agents in a few clicks — pick a model, connect channels, set personality
- ✓ Monitor all agents from one dashboard — logs, status, resource usage at a glance
- ✓ Edit configs live — change models, tools, or personality files without SSH
- ✓ Auto-updates and backups — handled for you
ClawPanel is available as a $100 lifetime license or $75/year. For teams running multiple agents, it pays for itself in the first week of time saved.
Ready to Deploy Your OpenClaw Multi-Agent Setup?
Skip the YAML wrangling. ClawPanel gives you a visual dashboard for deploying, monitoring, and managing all your agents in one place.
Get ClawPanel → $100 lifetime · $75/yr · Unlimited agents
Wrapping Up
Running multiple OpenClaw agents is straightforward once you understand the pattern: separate configs, separate ports, separate workspaces, one Docker Compose file. Start with two agents, get comfortable with the workflow, then scale up as your needs grow.
For the full configuration reference and advanced features like inter-agent communication, check the OpenClaw documentation. And if you want the fastest path to a multi-agent setup, grab ClawPanel and have everything running in minutes instead of hours.
Got questions? Drop by the OpenClaw GitHub or reach out to us at ClawWorks. We're always happy to help.