Everyone's running Clawdbot
Almost no one is backing it up. Here's the fix.

Hey everyone, welcome to the very first (and very delayed) email from AI Automation News (formerly Workflow Automation Tools Newsletter)!
Since Clawdbot is very popular these days, I wanted to kickstart the newsletter with a post about it.
Happy reading!
Running a self-hosted AI agent means you're responsible for everything: code, configs, credentials, and the entire server. Lose any of these, and you're starting from scratch.
After setting up Clawdbot on a $5/month RackNerd VPS, I built a three-layer backup system that costs under $1/month and can restore a complete server in about 30 minutes.
The Problem
A typical self-hosted setup has three types of data:
| Type | Examples | Risk if Lost |
|---|---|---|
| Code & Config | Scripts, agent configs, memory files | Hours of work |
| Secrets | API tokens, OAuth credentials, keystores | Auth headaches |
| System State | Docker volumes, OS configs, packages | Full rebuild |
Each needs different handling. You can't push secrets to GitHub. You shouldn't pay for full-system backups of recoverable OS files.
The Solution: 3 Layers
```
┌─────────────────────────────────────────────────────┐
│ BACKUP LAYERS │
├──────────────┬──────────────┬───────────────────────┤
│ GitHub │ Cloudflare │ Backblaze B2 │
│ (Free) │ R2 (Free) │ (~$0.25/mo) │
├──────────────┼──────────────┼───────────────────────┤
│ • Scripts │ • API tokens │ • Docker volumes │
│ • Configs │ • OAuth keys │ • /etc configs │
│ • Memory │ • Keystores │ • Home directories │
│ • Skills │ • .env files │ • Package lists │
├──────────────┼──────────────┼───────────────────────┤
│ Versioned │ Encrypted │ GFS Retention │
│ Daily push │ Daily sync │ 7 daily/4 wk/12 mo │
└──────────────┴──────────────┴───────────────────────┘
```
Layer 1: GitHub (Code & Config)
What: Version-controlled workspace with scripts, agent configs, and memory files.
Why GitHub: Free, versioned, searchable. You can diff changes and roll back mistakes.
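To make the rollback claim concrete, here's a self-contained throwaway demo (file name and commit messages are invented for the example; it runs entirely in a temp directory):

```shell
# Throwaway demo: two "daily backup" commits, then restore yesterday's file.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > notes.md
git add -A && git commit -qm "Daily backup day 1"
echo "v2" > notes.md
git add -A && git commit -qm "Daily backup day 2"
git checkout HEAD~1 -- notes.md   # pull back yesterday's version
cat notes.md                      # prints: v1
```

The same `git checkout <commit> -- <path>` pattern works against any commit in your backup history.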
Setup:
```bash
# Initialize repo
cd ~/workspace
git init
git remote add origin git@github.com:username/my-agent-workspace.git

# Create .gitignore for secrets
cat > .gitignore << 'EOF'
.env
*.env
**/credentials.json
**/token*.json
**/*.pem
**/*.key
EOF

# Initial commit
git add -A
git commit -m "Initial backup"
git push -u origin main
```
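Before the first push, it's worth confirming the ignore rules actually catch your secret files. `git check-ignore` lists the paths that match; a throwaway sketch (the file names here are hypothetical):

```shell
# Throwaway check that ignore rules actually catch secret files.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
printf '%s\n' '.env' '**/token*.json' > .gitignore
mkdir -p creds
git check-ignore .env creds/token-github.json   # lists both paths as ignored
```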
Automate daily backups (cron or your agent's scheduler):
```bash
#!/bin/bash
cd ~/workspace
git add -A
if ! git diff --cached --quiet; then
    git commit -m "Daily backup $(date +%Y-%m-%d)"
    git push
fi
```
Layer 2: Cloudflare R2 (Secrets)
What: Backup of credentials, tokens, and sensitive configs. R2 encrypts objects at rest; add an rclone crypt remote in front of the bucket if you also want client-side encryption.
Why R2: S3-compatible, 10GB free tier, zero egress fees (critical for restores).
Setup:
1. Create an R2 bucket at the Cloudflare Dashboard
2. Generate an S3-compatible API token
3. Configure rclone:

```bash
cat > ~/.config/rclone/rclone.conf << 'EOF'
[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
no_check_bucket = true
EOF
```
Backup script:
```bash
#!/bin/bash
# backup-secrets.sh
set -e
BACKUP_DIR="/tmp/secrets-backup-$$"
ARCHIVE="/tmp/secrets-$(date +%Y-%m-%d).tar.gz"
mkdir -p "$BACKUP_DIR"

# Collect secrets (tolerate dirs that don't exist on this box)
cp -r ~/.config/myapp "$BACKUP_DIR/" 2>/dev/null || true
cp -r ~/.clawdbot "$BACKUP_DIR/" 2>/dev/null || true
find ~/workspace -name ".env" -exec cp --parents {} "$BACKUP_DIR/" \;

# Create archive
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" .

# Upload to R2 (explicit path: a glob here would break copyto
# if older archives are still lying around in /tmp)
rclone copy "$ARCHIVE" r2:my-bucket/secrets/
rclone copyto "$ARCHIVE" r2:my-bucket/secrets/latest.tar.gz

# Cleanup
rm -rf "$BACKUP_DIR" "$ARCHIVE"
```
Layer 3: Backblaze B2 (Full System)
What: Complete system backup including Docker volumes, OS configs, and home directories.
Why B2: Cheapest storage at $0.005/GB/month. GFS retention keeps costs predictable.
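The GFS (grandfather-father-son) rotation used in the script below boils down to a date check. A minimal sketch (the helper name is mine; assumes GNU `date`):

```shell
# Which GFS tiers does a backup dated $1 (YYYY-MM-DD) belong to?
gfs_tiers() {
    local tiers="daily"
    [ "$(date -d "$1" +%u)" = "7" ] && tiers="$tiers weekly"    # Sundays
    [ "$(date -d "$1" +%d)" = "01" ] && tiers="$tiers monthly"  # 1st of month
    echo "$tiers"
}
gfs_tiers "2026-02-01"   # a Sunday that is also the 1st: daily weekly monthly
```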
Setup:
1. Create a B2 bucket at the Backblaze Dashboard
2. Create an application key with the capabilities: listFiles, readFiles, writeFiles, deleteFiles
3. Configure rclone:

```bash
cat >> ~/.config/rclone/rclone.conf << 'EOF'
[b2]
type = b2
account = YOUR_KEY_ID
key = YOUR_APPLICATION_KEY
EOF
```
Full system backup script:
```bash
#!/bin/bash
# host-backup.sh - Run on HOST, not in Docker
set -e

B2_BUCKET="my-system-backup"
DATE=$(date +%Y-%m-%d)
DAY_OF_WEEK=$(date +%u)
DAY_OF_MONTH=$(date +%d)
BACKUP_DIR="/tmp/backup-$$"

# Retention policy
KEEP_DAILY=7
KEEP_WEEKLY=4
KEEP_MONTHLY=12

mkdir -p "$BACKUP_DIR"

# 1. Docker volumes
mkdir -p "$BACKUP_DIR/docker/volumes"
for vol in $(docker volume ls -q); do
    docker run --rm -v "$vol:/data:ro" -v "$BACKUP_DIR/docker/volumes:/backup" \
        alpine tar -czf "/backup/${vol}.tar.gz" -C /data .
done

# 2. Docker compose files
find /home -name "docker-compose*.yml" -exec cp --parents {} "$BACKUP_DIR/docker/" \;

# 3. System configs (exclude pattern must be relative: -C / strips the leading /)
tar -czf "$BACKUP_DIR/etc.tar.gz" --exclude='etc/ssl/certs' -C / etc
dpkg --get-selections > "$BACKUP_DIR/packages.txt"
crontab -l > "$BACKUP_DIR/crontab.txt" 2>/dev/null || true

# 4. Home directories (excluding cache)
for user in /home/*; do
    username=$(basename "$user")
    tar -czf "$BACKUP_DIR/${username}.tar.gz" \
        --exclude='node_modules' --exclude='.cache' --exclude='.npm' \
        -C /home "$username"
done

# Create final archive
ARCHIVE="/tmp/backup-${DATE}.tar.gz"
tar -czf "$ARCHIVE" -C "$BACKUP_DIR" .

# Upload with GFS rotation
rclone copy "$ARCHIVE" "b2:${B2_BUCKET}/daily/"
[ "$DAY_OF_WEEK" = "7" ] && rclone copy "$ARCHIVE" "b2:${B2_BUCKET}/weekly/"
[ "$DAY_OF_MONTH" = "01" ] && rclone copy "$ARCHIVE" "b2:${B2_BUCKET}/monthly/"
rclone copyto "$ARCHIVE" "b2:${B2_BUCKET}/latest.tar.gz"

# Apply retention
rclone delete "b2:${B2_BUCKET}/daily/" --min-age "${KEEP_DAILY}d"
rclone delete "b2:${B2_BUCKET}/weekly/" --min-age "$((KEEP_WEEKLY * 7))d"
rclone delete "b2:${B2_BUCKET}/monthly/" --min-age "$((KEEP_MONTHLY * 31))d"

# Cleanup
rm -rf "$BACKUP_DIR" "$ARCHIVE"
```
The Schedule
All backups run automatically:
| Time (UTC) | What | Where |
|---|---|---|
| 04:00 | Full system | Backblaze B2 |
| 06:00 | Workspace code | GitHub |
| 06:30 | Secrets | Cloudflare R2 |
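Wiring that schedule into cron might look like this (the script paths are placeholders for wherever you keep the three scripts; times assume the server clock is UTC):

```
# crontab -e
0 4  * * * /usr/local/bin/host-backup.sh     >> /var/log/backup.log 2>&1
0 6  * * * /usr/local/bin/git-backup.sh      >> /var/log/backup.log 2>&1
30 6 * * * /usr/local/bin/backup-secrets.sh  >> /var/log/backup.log 2>&1
```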
Disaster Recovery
When things go wrong, here's the restore order:
Scenario 1: Lost a file
```bash
# From GitHub
git checkout HEAD~1 -- path/to/file
```
Scenario 2: Corrupted secrets
```bash
# From R2
rclone copy r2:my-bucket/secrets/latest.tar.gz /tmp/
mkdir -p /tmp/secrets-restore
tar -xzf /tmp/latest.tar.gz -C /tmp/secrets-restore
# The archive root mirrors what backup-secrets.sh collected
# (e.g. myapp/, .clawdbot/, home/<user>/workspace/.env) —
# copy each item back into place rather than extracting into /.
```
Scenario 3: Server died, starting fresh
```bash
# On new VPS
curl https://rclone.org/install.sh | sudo bash
# Configure rclone with B2 credentials, then:
rclone copy b2:my-system-backup/latest.tar.gz /tmp/
cd /tmp && tar -xzf latest.tar.gz

# Restore packages
sudo apt-get update
sudo dpkg --set-selections < packages.txt
sudo apt-get dselect-upgrade -y

# Restore Docker volumes (install Docker first if the package restore didn't bring it back)
for vol in docker/volumes/*.tar.gz; do
    name=$(basename "$vol" .tar.gz)
    docker volume create "$name"
    docker run --rm -v "$name:/data" -v "$(pwd)/docker/volumes:/backup" \
        alpine tar -xzf "/backup/${name}.tar.gz" -C /data
done

# Restore home dirs
sudo tar -xzf username.tar.gz -C /home/

# Start containers (from the directory holding your docker-compose.yml)
docker compose up -d
```
Cost Breakdown
| Service | Free Tier | Monthly Cost |
|---|---|---|
| GitHub | Unlimited private repos | $0 |
| Cloudflare R2 | 10GB storage | $0 |
| Backblaze B2 | 10GB storage | ~$0.25 (50GB) |
| **Total** | | **~$0.25/month** |
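As a sanity check on the B2 line: GFS retention keeps at most 7 + 4 + 12 archives plus latest.tar.gz. Assuming roughly 2 GB per archive (an assumption for illustration; measure your own), the math works out:

```shell
# 24 retained archives (7 daily + 4 weekly + 12 monthly + latest) x 2 GB x $0.005/GB-month
awk 'BEGIN { printf "$%.2f/month\n", (7 + 4 + 12 + 1) * 2 * 0.005 }'
# prints: $0.24/month
```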
Key Takeaways
- Separate concerns: Code → GitHub, Secrets → R2, System → B2
- Zero egress fees matter: R2 for secrets means free restores
- GFS retention: Keep 7 daily + 4 weekly + 12 monthly without runaway costs
- Automate everything: If it's not automated, it won't happen
- Test your restores: A backup you've never restored is not a backup
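A restore drill doesn't have to touch production: even a local round-trip catches broken archives. A minimal sketch using throwaway temp dirs (the file names are invented for the example):

```shell
# Round-trip a directory through tar and verify it comes back identical.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
echo "agent config" > "$src/config.json"
mkdir -p "$src/memory" && echo "notes" > "$src/memory/log.md"
tar -czf "$src.tar.gz" -C "$src" .
tar -xzf "$src.tar.gz" -C "$dst"
diff -r "$src" "$dst" && echo "restore OK"
```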
The best time to set up backups is before you need them. The second best time is now.
This setup protects a self-hosted Clawdbot instance, but the pattern works for any Docker-based deployment.
Thanks for reading,
Cagri Sarigoz
AI Automation News Founder