Chapter 42

10 Real-World Cases: Personal Assistant, Code Review, SEO Pipeline, IoT Edge Automation and More

Introduction: From Configuration to Production

The preceding chapters covered OpenClaw's architecture principles and configuration details. This chapter shifts perspective: it walks through ten real deployment scenarios, each presented as a complete picture of pain points, architecture, key code, and outcome data. The cases are drawn from high-star ClawHub community projects and the curated awesome-openclaw-skills list, and all were personally verified by the author.


Case 1 Daily Briefing Bot

1.1 Scenario Description

Pain point: Every morning requires browsing tech news, checking cryptocurrency prices, and reviewing the day's calendar. The entire process takes 30–40 minutes and important information is often missed.

Goal: Automatically aggregate everything at 7:00 AM each day and deliver a briefing to Telegram, so that the day's key information can be read in 5 minutes.

1.2 Architecture Design

Cron trigger (07:00 AM)
    │
    ▼
OpenClaw Agent (daily-briefing)
    ├── web_fetch → HackerNews Top 10
    ├── web_fetch → CoinGecko API (BTC/ETH prices)
    ├── bash → gcalcli (Google Calendar events for today)
    └── bash → curl Telegram Bot API (send message)

Components used: Cron Channel, web_fetch tool, bash tool, Telegram Bot API

1.3 Key Configuration

openclaw.json scheduling configuration:

{
  "agents": [
    {
      "id": "daily-briefing",
      "skillFile": "~/.openclaw/skills/daily-briefing/SKILL.md",
      "model": "claude-haiku-4-5",
      "schedule": {
        "cron": "0 7 * * *",
        "timezone": "Asia/Shanghai"
      },
      "tools": ["web_fetch", "bash"],
      "env": {
        "TELEGRAM_BOT_TOKEN": "${TELEGRAM_BOT_TOKEN}",
        "TELEGRAM_CHAT_ID": "${TELEGRAM_CHAT_ID}"
      }
    }
  ]
}

SKILL.md core content:

# Daily Briefing Bot

## Purpose
Generate and push a daily briefing to Telegram at 7:00 AM every morning.

## Steps

### 1. Fetch HackerNews Top Stories
Use web_fetch to access https://hacker-news.firebaseio.com/v0/topstories.json,
take the first 10 IDs, then fetch the title and link for each.

### 2. Fetch Cryptocurrency Prices
Access https://api.coingecko.com/api/v3/simple/price?ids=bitcoin,ethereum&vs_currencies=usd,cny
Parse the current price and 24h change for BTC and ETH.

### 3. Fetch Today's Calendar
Run the bash command: gcalcli --calendar "My Calendar" agenda today tomorrow --nocolor

### 4. Generate and Send Briefing
Format the above information as Markdown and send to the specified group via Telegram Bot API:
bash: curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendMessage" \
  -d chat_id=${TELEGRAM_CHAT_ID} \
  -d parse_mode=Markdown \
  -d text="<generated briefing content>"

## Output Format
📰 *Daily Tech Briefing - {date}*

🔥 HackerNews Top Stories
1. {title} - {link}
...

💰 Market Snapshot
BTC: ${price} ({change}%)
ETH: ${price} ({change}%)

📅 Today's Schedule
{calendar events}
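
The HackerNews step is easy to sanity-check outside the agent. Below is a minimal bash sketch of what the web_fetch calls boil down to (assumes curl and jq are installed; Ask HN items without a url fall back to their discussion link):

# Fetch the top-10 story IDs, then the title and link for each
for id in $(curl -s https://hacker-news.firebaseio.com/v0/topstories.json | jq '.[:10][]'); do
  curl -s "https://hacker-news.firebaseio.com/v0/item/${id}.json" \
    | jq -r '"\(.title) - \(.url // "https://news.ycombinator.com/item?id=\(.id)")"'
done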

1.4 Implementation Steps

# 1. Install dependencies
pip install gcalcli
gcalcli init  # Authorize Google Calendar

# 2. Create a Telegram Bot (get token from @BotFather)

# 3. Configure environment variables
echo 'TELEGRAM_BOT_TOKEN=your_token' >> ~/.openclaw/.env
echo 'TELEGRAM_CHAT_ID=your_chat_id' >> ~/.openclaw/.env

# 4. Register the Skill
openclaw agent --message "Initialize daily-briefing Skill" \
  --skill ~/.openclaw/skills/daily-briefing/SKILL.md

# 5. Test run (trigger immediately, don't wait for cron)
openclaw agent --id daily-briefing --message "Generate today's briefing now" --run-now

# 6. Verify cron registration
openclaw nodes status | grep daily-briefing

1.5 Outcome Data

| Metric | Before | After |
|--------|--------|-------|
| Daily information gathering time | 35 minutes | 0 minutes (fully automated) |
| Briefing reading time | – | 4 minutes |
| Information omission rate | ~30% (from memory) | < 2% (structured) |
| Monthly operating cost | – | ~$0.80 (Haiku model) |

1.6 Important Notes


Case 2 Intelligent Email Manager

2.1 Scenario Description

Pain point: Receiving 80–150 emails per day, 90% of which are notifications, subscriptions, and spam. Only 10–20 actually need attention, but the filtering process itself consumes 1 hour each day.

Goal: Automatically classify emails, generate reply drafts for actionable messages, and automatically add calendar entries for scheduling emails.

2.2 Architecture Design

Gmail Webhook (Push Notification)
    │
    ▼
OpenClaw Agent (email-manager)
    ├── LLM-based classification (semantic understanding)
    │   ├── Category: Newsletter → auto-archive
    │   ├── Category: Action Required → generate reply draft
    │   └── Category: Meeting → extract time + create calendar event
    ├── bash → Gmail API (archive / label / write draft)
    └── bash → Google Calendar API (add event)

2.3 Key Configuration

{
  "agents": [
    {
      "id": "email-manager",
      "skillFile": "~/.openclaw/skills/email-manager/SKILL.md",
      "model": "claude-sonnet-4-5",
      "triggers": {
        "webhook": {
          "path": "/hooks/gmail",
          "port": 8765,
          "secret": "${GMAIL_WEBHOOK_SECRET}"
        }
      },
      "tools": ["bash"],
      "permissions": {
        "allowedDomains": ["gmail.googleapis.com", "calendar.googleapis.com"]
      },
      "maxSessionTokens": 8000
    }
  ]
}

SKILL.md classification rules section:

## Classification Rules

For each new email, classify in the following priority order:

1. **Spam / Newsletters**: Sender domain is on the blocklist, or Subject contains
   "unsubscribe" / "newsletter" keywords
   → Action: Call Gmail API via bash to archive and label "auto-archived"

2. **Meeting Invites**: Email contains date, time, location, and attendee information
   → Action: Extract {date}/{time}/{location}/{attendees}, call Google Calendar API to create event

3. **Action Required**: Sender is a known contact, and the email contains a question or request
   → Action: Generate a reply draft (polite, concise, no more than 3 paragraphs),
              save as draft via Gmail API

4. **FYI**: Internal company notifications, no reply needed
   → Action: Apply label "fyi", do not archive

## Privacy Note
Do not record email body content in logs. Only log the messageId and classification result.
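
Note what the webhook actually carries: Gmail push notifications contain no message content, only a base64-encoded pointer, so the Skill's first real step is a Gmail API history lookup. A minimal decode sketch (the $body variable is a hypothetical stand-in for the raw request body):

# The Pub/Sub envelope's data field decodes to {emailAddress, historyId};
# subject, sender, and body must then be fetched via users.history.list
echo "$body" | jq -r '.message.data' | base64 -d
# → {"emailAddress":"user@example.com","historyId":1234567}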

2.4 Implementation Steps

# 1. Enable Gmail Push Notification (requires Google Cloud Pub/Sub)
gcloud pubsub topics create gmail-notifications
gcloud pubsub subscriptions create gmail-sub \
  --topic=gmail-notifications \
  --push-endpoint=http://your-server:8765/hooks/gmail

# 2. Authorize Gmail API and Calendar API (OAuth2 via Google Cloud Console)

# 3. Start the Agent (persistent listener)
openclaw gateway start
openclaw agent --id email-manager --message "Start listening for Gmail notifications"

# 4. Test the webhook
curl -X POST http://localhost:8765/hooks/gmail \
  -H "Content-Type: application/json" \
  -d '{"message": {"data": "base64_encoded_notification"}}'

2.5 Outcome Data

| Metric | Before | After |
|--------|--------|-------|
| Daily email processing time | 65 minutes | 12 minutes (review drafts only) |
| Meeting invite missed rate | 15% | < 1% |
| Draft acceptance rate | – | 78% (sent as-is or with minor edits) |
| Monthly emails processed | 2,800 | 2,800 (processing time reduced by 82%) |

2.6 Important Notes


Case 3 GitHub PR Auto-Reviewer

3.1 Scenario Description

Pain point: PRs pile up faster than the team can review them, and junior developers' PRs frequently contain the same classes of issues (naming conventions, missing error handling, insufficient test coverage), consuming senior engineers' time on repetitive feedback.

Goal: Automatically perform an initial review after a PR is submitted, providing specific improvement suggestions. Human reviewers only need to focus on architectural decisions.

3.2 Architecture Design

GitHub Webhook (pull_request event)
    │
    ▼
OpenClaw Agent (pr-reviewer)
    ├── bash → GitHub API (fetch PR diff)
    ├── ACP → Claude Code (code quality analysis)
    │   ├── Naming convention checks
    │   ├── Error handling completeness
    │   ├── Test coverage analysis
    │   └── Security vulnerability initial scan
    └── bash → GitHub API (write review comment)

3.3 Key Configuration

{
  "agents": [
    {
      "id": "pr-reviewer",
      "skillFile": "~/.openclaw/skills/pr-reviewer/SKILL.md",
      "model": "claude-opus-4-5",
      "thinking": "high",
      "triggers": {
        "webhook": {
          "path": "/hooks/github",
          "port": 8766,
          "secret": "${GITHUB_WEBHOOK_SECRET}"
        }
      },
      "tools": ["bash"],
      "acp": {
        "enabled": true,
        "allowedAgents": ["claude-code"]
      },
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    }
  ]
}

SKILL.md review rules:

# PR Auto-Reviewer

## Trigger
Fires when a GitHub webhook arrives whose payload `action` field is "opened" or "synchronize".

## Steps

### 1. Fetch PR Information
Extract from the Webhook payload:
- PR number, repository name, author, target branch
- Use GitHub API to fetch the full diff:
  bash: curl -H "Authorization: token ${GITHUB_TOKEN}" \
    https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/files

### 2. Analyze via ACP with Claude Code
Pass the diff content to Claude Code for the following checks:
- Whether function and variable naming follows project conventions
- Whether there are unhandled exception paths
- Whether new code has corresponding unit tests
- Whether there are obvious SQL injection, XSS, or other security issues
- Whether complex logic has explanatory comments

### 3. Generate Review Comments
Format the analysis results as GitHub Review format, distinguishing:
- Must fix (blocking): security vulnerabilities, critical logic errors
- Suggested fix (non-blocking): code style, readability improvements
- Praise (positive): excellent design decisions worth acknowledging

### 4. Submit Review
bash: curl -X POST \
  -H "Authorization: token ${GITHUB_TOKEN}" \
  https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews \
  -d '{"event": "COMMENT", "body": "<review content>", "comments": [...]}'

## Review Comment Header
Begin every review with:
"🤖 This review was auto-generated by OpenClaw PR-Reviewer, for reference only.
  Human review will focus on architecture and business logic."
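
One hardening detail worth making explicit: the X-Hub-Signature-256 header should be verified against an HMAC-SHA256 of the raw request body before any payload is trusted. A hedged sketch in case your gateway version does not do this for you ($raw_body and $header_sig are hypothetical stand-ins for the request body and the header value):

# GitHub signs the exact raw body with the webhook secret ("sha256=" prefix)
expected="sha256=$(printf '%s' "$raw_body" \
  | openssl dgst -sha256 -hmac "$GITHUB_WEBHOOK_SECRET" | sed 's/^.* //')"
[ "$expected" = "$header_sig" ] || { echo "signature mismatch"; exit 1; }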

3.4 Implementation Steps

# 1. Set up a Webhook in the GitHub repository
# URL: http://your-server:8766/hooks/github
# Content type: application/json
# Events: Pull requests

# 2. Generate a GitHub Personal Access Token (requires 'repo' scope)

# 3. Start the Agent
openclaw gateway start
echo "PR Reviewer is running, waiting for webhook events..."

# 4. Test by simulating a PR event
curl -X POST http://localhost:8766/hooks/github \
  -H "X-GitHub-Event: pull_request" \
  -H "X-Hub-Signature-256: sha256=..." \
  -d '{"action": "opened", "pull_request": {"number": 42, ...}}'

3.5 Outcome Data

| Metric | Before | After |
|--------|--------|-------|
| Average PR initial review wait time | 18 hours | 3 minutes |
| Human reviewer focus | Full review | Architecture layer only |
| Junior developer repetitive errors | 15–20 per week | 3–5 per week |
| Reviewer time saved | – | ~8 hours/week |

3.6 Important Notes


Case 4 Large-Scale Multi-File Refactoring

4.1 Scenario Description

Pain point: Migrating an 80,000-line Python 2 project to Python 3, involving syntax changes, dependency replacements, and test updates. Manual file-by-file modification would take an estimated 3 months, and edge cases are easily missed.

Goal: Use ACP to orchestrate Claude Code Sessions in parallel to process multiple modules, completing the migration within 1 week while maintaining test coverage.

4.2 Architecture Design

OpenClaw Coordinator Agent (refactor-coordinator)
    │
    ├── bash → Scan the file tree, generate migration task list
    │
    ├── ACP → Claude Code Session 1 (handle core/ module)
    ├── ACP → Claude Code Session 2 (handle api/ module)
    ├── ACP → Claude Code Session 3 (handle utils/ module)
    │   (sessions_spawn parallel mode)
    │
    ├── bash → pytest to run tests
    └── bash → git commit (atomic commit for each module)

4.3 Key Configuration

{
  "agents": [
    {
      "id": "refactor-coordinator",
      "skillFile": "~/.openclaw/skills/py3-migration/SKILL.md",
      "model": "claude-opus-4-5",
      "thinking": "high",
      "acp": {
        "enabled": true,
        "maxParallelSessions": 3,
        "sessionTimeout": 3600
      },
      "tools": ["bash"],
      "maxSessionTokens": 200000,
      "permissions": {
        "fileSystem": {
          "write": ["./src/**", "./tests/**"],
          "read": ["./src/**", "./tests/**", "./requirements*.txt"]
        }
      }
    }
  ]
}

SKILL.md refactoring coordination logic:

# Python 3 Migration Coordinator

## Phase 1: Scan and Plan (no file modifications)
Use bash commands to scan all .py files:
- Identify print statements (non-function calls)
- Identify Python 2-specific types: unicode/basestring/long
- Identify import changes (urllib2 → urllib.request, etc.)
- Estimate workload per directory

Output the migration plan to migration-plan.json:
{"module": "core/", "files": [...], "complexity": "high", "estimatedTokens": 45000}

## Phase 2: Parallel Refactoring (distribute subtasks via ACP)
For each module with complexity >= medium, create an independent Claude Code Session:
- Pass the module directory path and migration rule list
- Require the sub-Agent to run `python3 -m py_compile` after each file modification
- Sub-Agent returns a modification summary upon completion

## Phase 3: Integration Testing
After all sub-Sessions complete:
- Run pytest --tb=short 2>&1
- If tests fail, extract error messages and re-dispatch fix tasks
- On test success, generate the migration report

## Constraints
- Each sub-Session may only modify its own assigned directory
- Must git stash before modifications; sub-Agent can git stash pop to rollback on failure
- requirements.txt must not be modified (handled in the main Session)
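
As a concrete illustration, the Phase 1 scan reduces to a handful of greps. The patterns below are illustrative rather than exhaustive; in practice tools like 2to3 or pyupgrade do the heavy lifting, and the agent fills in what they miss:

# Rough per-pattern file lists for the migration plan
grep -rlE 'print "' src/ --include='*.py'                        # py2 print statements
grep -rlE '\b(unicode|basestring|long)\(' src/ --include='*.py'  # py2-only builtins
grep -rl 'urllib2' src/ --include='*.py'                         # moved to urllib.request
# Per-directory counts feed the "estimatedTokens" field in migration-plan.json
grep -rl 'urllib2' src/ --include='*.py' | xargs -n1 dirname | sort | uniq -c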

4.4 Implementation Steps

# 1. Ensure the project is under git version control
git init && git add . && git commit -m "pre-migration snapshot"

# 2. Run the coordinator Agent
openclaw agent \
  --id refactor-coordinator \
  --message "Begin Python 3 migration, project path $(pwd)" \
  --thinking high

# 3. Monitor sub-Session progress
watch -n 10 'openclaw nodes status | grep claude-code'

# 4. View migration progress report
cat migration-plan.json | jq '.[] | select(.status == "completed")'

4.5 Outcome Data

| Metric | Human Estimate | Actual Result |
|--------|----------------|---------------|
| Completion time | 3 months | 6 days |
| Files processed | – | 847 .py files |
| Test pass rate | – | 97.3% (remaining 2.7% have known compatibility issues) |
| Human interventions required | – | 12 (handling special edge cases) |

4.6 Important Notes


Case 5 Competitor Intelligence Monitoring System

5.1 Scenario Description

Pain point: Tracking 15 competitor websites' pricing pages, product updates, and blog activity. Weekly manual checks frequently miss timely price adjustments, leaving the team in a reactive position during business negotiations.

Goal: Automatically collect competitor data every 4 hours, detect changes, and push structured intelligence reports to Slack.

5.2 Architecture Design

Cron (every 4 hours)
    │
    ▼
OpenClaw Agent (competitor-monitor)
    ├── web_fetch × 15 (concurrent competitor page collection)
    ├── bash → Compare with last snapshot (diff)
    ├── LLM analysis (identify pricing changes / new features / marketing shifts)
    └── bash → Slack API (push structured report)
         ├── Changes detected → push immediately
         └── No changes → daily summary push

5.3 Key Configuration

{
  "agents": [
    {
      "id": "competitor-monitor",
      "skillFile": "~/.openclaw/skills/competitor-monitor/SKILL.md",
      "model": "claude-sonnet-4-5",
      "schedule": {
        "cron": "0 */4 * * *",
        "timezone": "UTC"
      },
      "tools": ["web_fetch", "bash"],
      "persistence": {
        "snapshotDir": "~/.openclaw/data/competitor-snapshots",
        "retentionDays": 90
      },
      "env": {
        "SLACK_WEBHOOK_URL": "${SLACK_WEBHOOK_URL}",
        "COMPETITOR_LIST": "competitor-list.json"
      }
    }
  ]
}

Competitor list configuration file (competitor-list.json):

{
  "competitors": [
    {
      "name": "CompetitorA",
      "urls": {
        "pricing": "https://competitora.com/pricing",
        "changelog": "https://competitora.com/changelog",
        "blog": "https://competitora.com/blog"
      },
      "priority": "high",
      "alertOnChange": ["pricing", "changelog"]
    }
  ]
}

SKILL.md monitoring logic:

## Monitoring Logic

### Collection Phase
For each competitor in competitor-list.json:
1. web_fetch the main content area of the pricing page
2. web_fetch the latest changelog/blog entries
3. Save content to a snapshot file:
   bash: cat > ~/.openclaw/data/competitor-snapshots/{name}-{url_type}-{timestamp}.txt

### Comparison Phase
Read the previous snapshot and compare with the current content:
bash: diff ~/.openclaw/data/competitor-snapshots/{name}-pricing-latest.txt \
  ~/.openclaw/data/competitor-snapshots/{name}-pricing-current.txt

If there are differences, analyze the nature of the change:
- Price number changes (extract specific amounts)
- Plan structure adjustments (feature additions/removals)
- Promotional activities (discount codes / limited-time offers)

### Report Generation
Format the Slack message as follows:
🚨 *Competitor Update Alert* - {time}

*{Competitor name}* has the following changes:
• Pricing update: {specific description}
• Impact assessment: {potential impact on our business}
• Recommended action: {1-2 specific suggestions}
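
The snapshot naming in the Skill implies a simple rotation: the freshly fetched page is written as -current, diffed against -latest, and then promoted. A minimal sketch (SNAP_DIR and name are hypothetical variables; the filenames follow this chapter's convention, not an OpenClaw built-in):

new="$SNAP_DIR/${name}-pricing-current.txt"
old="$SNAP_DIR/${name}-pricing-latest.txt"
if ! diff -q "$old" "$new" >/dev/null 2>&1; then
  # Keep the diff for the LLM's change analysis before promoting the snapshot
  diff -u "$old" "$new" > "$SNAP_DIR/${name}-pricing-$(date +%Y%m%d%H%M).diff" || true
fi
mv "$new" "$old"   # current becomes the new baseline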

5.4 Implementation Steps

# 1. Create the snapshot directory
mkdir -p ~/.openclaw/data/competitor-snapshots

# 2. Write the competitor list configuration
vim competitor-list.json

# 3. Test a single run
openclaw agent --id competitor-monitor --message "Run competitor monitoring now" --run-now

# 4. Review generated snapshots
ls -la ~/.openclaw/data/competitor-snapshots/

# 5. Register the scheduled task
openclaw config set agents.competitor-monitor.schedule.enabled true

5.5 Outcome Data

Real-world case from an e-commerce SaaS company:

| Metric | Result |
|--------|--------|
| Pricing change detection time | On average 48 hours ahead of competitor announcements |
| Weekly manual intelligence gathering time | Reduced from 6 hours to 20 minutes (review reports) |
| Monitoring coverage | 15 competitors × 3 dimensions = 45 monitoring points |
| Cost per run | ~$0.12 (Sonnet model, every 4 hours) |
| Annualized cost | ~$260 |

5.6 Important Notes


Case 6 Customer Onboarding Full Automation

6.1 Scenario Description

Pain point: After a new customer signs, sales must manually complete 8 steps: send a welcome email, create a project folder, create a Slack channel, invite the customer, schedule a kickoff meeting, update CRM records, notify the internal team, and create a Jira project... Each instance takes about 2 hours, and steps are frequently missed.

Goal: Trigger via CRM webhook and complete all onboarding operations within 2 minutes, with zero omissions.

6.2 Architecture Design

CRM Webhook (deal_won event)
    │
    ▼
OpenClaw Agent (customer-onboarding)
    ├── bash → Gmail API (send welcome email)
    ├── bash → Google Drive API (create project folder, copy template)
    ├── bash → Slack API (create channel + invite customer + internal notification)
    ├── bash → Google Calendar API (create kickoff meeting invitation)
    ├── bash → CRM API (update customer status)
    └── bash → Jira API (create project + initial Epics)

6.3 Key Configuration

{
  "agents": [
    {
      "id": "customer-onboarding",
      "skillFile": "~/.openclaw/skills/customer-onboarding/SKILL.md",
      "model": "claude-sonnet-4-5",
      "triggers": {
        "webhook": {
          "path": "/hooks/crm",
          "port": 8767,
          "secret": "${CRM_WEBHOOK_SECRET}"
        }
      },
      "tools": ["bash"],
      "timeout": 300,
      "onError": {
        "action": "notify_slack",
        "channel": "#ops-alerts",
        "message": "Customer onboarding flow error, manual intervention required: {error}"
      }
    }
  ]
}

SKILL.md onboarding workflow:

# Customer Onboarding Automation

## Input
Extract from CRM Webhook payload:
- customer_name: Customer company name
- contact_email: Primary contact email
- contact_name: Contact person name
- plan: Purchased plan (starter/pro/enterprise)
- sales_rep: Responsible sales rep's email

## Steps (execute in order; log any step failure and continue)

### Step 1: Send Welcome Email
Send templated email via Gmail API (read templates/welcome-{plan}.html)
To: {contact_email}
CC: {sales_rep}

### Step 2: Create Google Drive Folder
- Create "{customer_name}" folder under "Clients" parent directory
- Copy contents of "Client Template" template folder
- Add {contact_email} as an editor

### Step 3: Create Slack Channel
Channel name: client-{customer_name lowercase, spaces replaced with hyphens}
Invite: {sales_rep} and all @customer-success group members
Post welcome message:
  "🎉 Welcome {customer_name}! We provide dedicated support for {plan} plan customers."

### Step 4: Create Kickoff Meeting
Time: Current time + 3 business days, 10:00 AM (local timezone)
Duration: 60 minutes
Attendees: {contact_email}, {sales_rep}
Description: "Project Kickoff - {customer_name}"

### Step 5: Update CRM Record
Update deal status to "onboarding", record onboarding start time

### Step 6: Create Jira Project
Project Key: First 4 uppercase letters of {customer_name}
Template: Client Onboarding Template
Initial Epic: Onboarding Phase (with 5 standard sub-tasks)

### Step 7: Generate Onboarding Summary
Summarize results of all above actions, post to #onboarding-log Slack channel
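
For Step 3, Slack's Web API exposes conversations.create and chat.postMessage. A hedged sketch of the channel setup (the name munging mirrors the Skill's lowercase/hyphen rule; $SLACK_BOT_TOKEN is assumed to be a bot token with the channels:manage and chat:write scopes):

# Derive the channel name: lowercase, spaces to hyphens
chan=$(printf '%s' "client-${customer_name}" | tr '[:upper:] ' '[:lower:]-')
curl -s -X POST https://slack.com/api/conversations.create \
  -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
  -d "name=${chan}"
curl -s -X POST https://slack.com/api/chat.postMessage \
  -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
  --data-urlencode "channel=#${chan}" \
  --data-urlencode "text=🎉 Welcome ${customer_name}! We provide dedicated support for ${plan} plan customers."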

6.4 Implementation Steps

# 1. Prepare API credentials (Gmail OAuth2, Slack App Token, Jira API Token)
cp .env.example .env
vim .env  # fill in API keys

# 2. Test webhook reception
openclaw gateway start
curl -X POST http://localhost:8767/hooks/crm \
  -H "Content-Type: application/json" \
  -d '{"event": "deal_won", "customer_name": "ACME Corp", ...}'

# 3. Verify each step's execution
openclaw agent --id customer-onboarding \
  --message "Check execution log of last onboarding flow" \
  --session-id sess_xxxx

6.5 Outcome Data

| Metric | Before | After |
|--------|--------|-------|
| Onboarding completion time | ~2 hours | < 2 minutes |
| Step omission rate | 23% (at least 1–2 steps missed each time) | < 1% (only when APIs fail) |
| Sales team satisfaction | – | 9.2/10 |
| Customer first-experience score | – | +34% improvement (survey) |

6.6 Important Notes


Case 7 Voice-Driven DevOps Assistant

7.1 Scenario Description

Pain point: Late-night production alerts require urgent response from a phone. Browsing logs, finding config files, and running kubectl commands in a mobile browser is a terrible experience with a high risk of mistakes.

Goal: Complete the full loop of log review → config modification → redeployment → verification through voice commands, with a mobile experience approaching desktop-level capability.

7.2 Architecture Design

iOS Shortcuts (speech recognition)
    │ HTTP POST (text)
    ▼
OpenClaw Mobile Node (iOS client)
    │ ACP
    ▼
OpenClaw Gateway (server side)
    ├── bash → kubectl (view Pod logs)
    ├── bash → kubectl (modify ConfigMap)
    ├── bash → kubectl rollout restart
    └── bash → curl (verify service health)

7.3 Key Configuration

Server-side Agent configuration:

{
  "agents": [
    {
      "id": "devops-assistant",
      "skillFile": "~/.openclaw/skills/devops-assistant/SKILL.md",
      "model": "claude-opus-4-5",
      "thinking": "standard",
      "tools": ["bash"],
      "permissions": {
        "shell": {
          "allowedCommands": ["kubectl", "helm", "curl", "jq"],
          "requireConfirmation": ["kubectl delete", "helm uninstall"]
        }
      },
      "nodes": {
        "ios": {
          "enabled": true,
          "requireAuth": true,
          "tokenExpiry": 3600
        }
      }
    }
  ]
}

iOS Shortcuts invocation example (pseudocode):

Shortcut: "DevOps Assistant"
1. Speech recognition (Ask Each Time) → store in variable $voice_input
2. URL: https://your-server:18789/agent/devops-assistant
   Method: POST
   Body: {"message": $voice_input, "node": "ios"}
   Headers: Authorization: Bearer $TOKEN
3. Display response (large text)
4. Text-to-speech for key information
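
Before wiring up the Shortcut, the same call can be exercised from a desktop with curl (the endpoint shape follows the pseudocode above; adjust the path to whatever your gateway actually exposes):

curl -s -X POST https://your-server:18789/agent/devops-assistant \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "Show the last 100 lines of api-service logs", "node": "ios"}'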

SKILL.md safe operations protocol:

# DevOps Voice Assistant

## Safety Rules (highest priority)
- Before any kubectl apply/delete/scale, describe the operation and wait for confirmation
- Only operate in namespace: production and staging; kube-system is off-limits
- Check the current Git branch before any operation; production changes are only allowed from main
- After every operation, verify service health status

## Voice Command Examples
- "Show the last 100 lines of api-service logs" → kubectl logs -n production deployment/api-service --tail=100
- "How many pods does api-service have?" → kubectl get pods -n production -l app=api-service
- "Restart api-service" → [confirm] → kubectl rollout restart deployment/api-service -n production
- "Show today's error logs" → kubectl logs -n production deployment/api-service --since=8h | grep ERROR

7.4 Implementation Steps

# On the server:

# 1. Configure SSL (HTTPS; required for iPhone connections)
certbot certonly --standalone -d your-server-domain.com

# 2. Generate an iOS access token
openclaw devices approve --device ios-iphone \
  --permissions read,exec-readonly \
  --expiry 365d

# 3. Configure iOS Shortcuts
# (See official documentation: docs.openclaw.dev/mobile)

# 4. Test the voice flow
# Say: "Show API service error logs"
# → Expected: voice readout of recent error summary

7.5 Outcome Data

| Scenario | Traditional Mobile | Voice Agent |
|----------|--------------------|-------------|
| View Pod logs | 5–8 minutes | 30 seconds |
| Locate error line | Requires manual search | AI-generated summary |
| Restart a service | Requires VPN + terminal | One-sentence confirmation |
| Late-night alert response time | Average 12 minutes | Average 3 minutes |

7.6 Important Notes


Case 8 SEO Content Factory

8.1 Scenario Description

Pain point: An international e-commerce site needs a continuous stream of localized SEO content in English, Japanese, and German. Each article requires keyword research → content generation → SEO optimization → WordPress publishing, taking about 5 hours of human effort per article — impossible to scale.

Goal: Automatically produce 2–3 high-quality localized articles every 4 hours and publish directly to WordPress, saving 50+ hours per week.

8.2 Architecture Design

Cron (every 4 hours)
    │
    ▼
OpenClaw Agent (seo-content-factory)
    ├── web_fetch → SEMrush/Ahrefs API (keyword data)
    ├── web_fetch × N (competitor article collection and analysis)
    ├── LLM (content generation based on keywords and competitor analysis)
    ├── LLM (SEO optimization: meta title/description/internal link suggestions)
    └── bash → WordPress REST API (publish as draft)

8.3 Key Configuration

{
  "agents": [
    {
      "id": "seo-content-factory",
      "skillFile": "~/.openclaw/skills/seo-content-factory/SKILL.md",
      "model": "claude-opus-4-5",
      "schedule": {
        "cron": "0 */4 * * *"
      },
      "tools": ["web_fetch", "bash"],
      "env": {
        "WP_API_URL": "https://your-site.com/wp-json/wp/v2",
        "WP_APP_PASSWORD": "${WP_APP_PASSWORD}",
        "SEMRUSH_API_KEY": "${SEMRUSH_API_KEY}",
        "TARGET_LANGUAGE": "en,ja,de"
      }
    }
  ]
}

SKILL.md SEO writing standards:

# SEO Content Factory

## Keyword Research
1. Read this week's target keywords from keywords-seed.txt
2. Use SEMrush API to retrieve for each keyword:
   - Monthly average search volume (filter out < 500)
   - Keyword Difficulty (prioritize KD < 40)
   - Related long-tail keyword list

## Competitor Analysis
For each target keyword, web_fetch the top 5 Google results for:
- Article structure (H2/H3 heading hierarchy)
- Word count range
- Keyword density
- Unique content angles

## Content Generation Requirements
When generating articles, strictly follow these rules:
- Length: 1,500–2,500 words
- Keyword density: target keyword 1–2%, semantic variants distributed naturally
- Structure: Introduction → 3–5 H2 sections → Conclusion
- Originality: no copying from competitors; provide unique perspectives or data points
- Language: if TARGET_LANGUAGE includes "ja", generate a Japanese version

## SEO Metadata
Generate for each article:
- Meta title (< 60 characters, includes primary keyword)
- Meta description (< 160 characters, includes CTA)
- Recommended internal links (pick 3 most relevant from existing article URL list)

## WordPress Publishing
bash: curl -X POST "${WP_API_URL}/posts" \
  -u "admin:${WP_APP_PASSWORD}" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "{article title}",
    "content": "{article content in HTML}",
    "status": "draft",
    "meta": {
      "_yoast_wpseo_title": "{meta title}",
      "_yoast_wpseo_metadesc": "{meta description}"
    }
  }'
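
A cheap guard worth running before the publish call: check the generated metadata against the length rules above, since search engines truncate titles and descriptions that overflow their display limits. A minimal sketch (meta_title and meta_desc are hypothetical variables holding the generated strings):

# Enforce the Skill's own metadata length rules before publishing
[ "${#meta_title}" -le 60 ]  || echo "meta title too long: ${#meta_title} chars"
[ "${#meta_desc}"  -le 160 ] || echo "meta description too long: ${#meta_desc} chars"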

8.4 Implementation Steps

# 1. Prepare the keyword seed file
echo "best ergonomic chair\nstanding desk benefits\nhome office setup" > keywords-seed.txt

# 2. Get the existing article URL list (for internal link recommendations)
curl "${WP_API_URL}/posts?per_page=100" | jq '.[].link' > existing-posts.txt

# 3. Test a single content generation
openclaw agent --id seo-content-factory \
  --message "Generate an SEO article about 'ergonomic chair' (English)" \
  --run-now

# 4. Review the draft in WordPress admin before publishing

8.5 Outcome Data

Real-world data from an international furniture brand:

| Metric | Before | After |
|--------|--------|-------|
| Weekly article output | 3–5 (fully manual) | 42 (AI-generated + human review) |
| Human review time per article | – | 20 minutes |
| Time saved per week | – | ~52 hours |
| Organic search traffic after 3 months | Baseline | +187% |
| Content quality (user dwell time) | 2:32 | 3:15 |

8.6 Important Notes


Case 9 Raspberry Pi Home Security Guard

9.1 Scenario Description

Pain point: Home security cameras are installed, but traditional monitoring software only records video without intelligent analysis. Alerts are needed for anomalies (strangers, unattended packages, open doors), but high false-positive rates (pets, tree shadows) cause alert fatigue.

Goal: An OpenClaw Headless Node deployed on a Raspberry Pi captures camera images every 5 minutes, uses AI to judge anomalies, and achieves a false positive rate below 5%.

9.2 Architecture Design

Raspberry Pi 4 (Headless Node)
    │
    ├── Cron (every 5 minutes)
    │       │
    │       ▼
    │   bash → camera.snap (capture image)
    │       │
    │       ▼
    │   OpenClaw Agent (home-guard)
    │       ├── LLM visual analysis (Claude multimodal)
    │       │   ├── Detect: person / pet / vehicle / package
    │       │   ├── Judge: is this anomalous (rule-based)
    │       │   └── Generate: anomaly description
    │       └── bash → Telegram API (send image + description)
    │
    └── OpenClaw Gateway (running locally on Raspberry Pi)

9.3 Key Configuration

Raspberry Pi openclaw.json:

{
  "gateway": {
    "port": 18789,
    "host": "127.0.0.1",
    "lowMemoryMode": true
  },
  "agents": [
    {
      "id": "home-guard",
      "skillFile": "/home/pi/.openclaw/skills/home-guard/SKILL.md",
      "model": "claude-haiku-4-5",
      "schedule": {
        "cron": "*/5 * * * *"
      },
      "tools": ["bash"],
      "env": {
        "CAMERA_DEVICE": "/dev/video0",
        "TELEGRAM_BOT_TOKEN": "${TELEGRAM_BOT_TOKEN}",
        "TELEGRAM_CHAT_ID": "${TELEGRAM_CHAT_ID}",
        "OWNER_HOME_HOURS": "07:00-23:00"
      },
      "resources": {
        "maxMemoryMB": 512,
        "cpuPriority": "low"
      }
    }
  ]
}

SKILL.md anomaly detection rules:

# Home Security Guard

## Image Capture
bash: fswebcam -d ${CAMERA_DEVICE} -r 1280x720 --no-banner /tmp/snapshot.jpg

## Visual Analysis
Convert image to base64 and analyze via multimodal API:
bash: base64 /tmp/snapshot.jpg

Analysis output (return as JSON):
- detected_objects: list of detected objects (person/pet/vehicle/package/door/window)
- anomalies: list of anomalies (see rules table below)
- confidence: confidence score (0–1)
- description: one-sentence summary

## Anomaly Rules (when to send an alert)

| Condition | Time Range | Confidence Threshold |
|-----------|------------|---------------------|
| Unrecognized person detected | Any time | > 0.85 |
| Front door left open > 10 minutes | Any time | > 0.80 |
| Package left unattended at door > 30 min | 08:00–20:00 | > 0.75 |
| Vehicle parked in private driveway | 00:00–06:00 | > 0.90 |

## False Positive Reduction
- Pet (cat/dog) alone → no alert
- Same anomaly sends at most one alert within 30 minutes
- Wind/tree-shadow motion → relies on object detection, not pixel diff

## Alert Format
bash: curl -X POST "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/sendPhoto" \
  -F chat_id="${TELEGRAM_CHAT_ID}" \
  -F photo=@/tmp/snapshot.jpg \
  -F caption="⚠️ Security Alert {time}: {description}"
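
For the curious, the visual-analysis step reduces to a single multimodal API call. A hedged sketch against Anthropic's Messages API, shown only to make the mechanics concrete; in normal operation the agent runtime makes this call itself:

# Encode the snapshot and ask the model for structured JSON
IMG=$(base64 -w0 /tmp/snapshot.jpg)
curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d @- <<EOF
{
  "model": "claude-haiku-4-5",
  "max_tokens": 300,
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": "$IMG"}},
      {"type": "text", "text": "Return JSON with detected_objects, anomalies, confidence, description"}
    ]
  }]
}
EOF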

9.4 Implementation Steps

# On the Raspberry Pi:

# 1. Install OpenClaw (ARM build)
curl -fsSL https://install.openclaw.dev/arm64 | bash

# 2. Install camera tools
sudo apt-get install fswebcam

# 3. Test the camera
fswebcam -d /dev/video0 -r 1280x720 /tmp/test.jpg
ls -la /tmp/test.jpg

# 4. Configure low-memory mode (a Pi 4 with at least 4 GB RAM is recommended)
openclaw config set gateway.lowMemoryMode true
openclaw config set gateway.maxMemoryMB 512

# 5. Start Gateway (enable boot autostart)
sudo systemctl enable openclaw-gateway
sudo systemctl start openclaw-gateway

# 6. Test the alert flow
openclaw agent --id home-guard --message "Capture and analyze immediately" --run-now

9.5 Outcome Data

| Metric | Traditional Motion Detection | OpenClaw Visual AI |
|--------|------------------------------|--------------------|
| True anomaly detection rate | 95% (but many false positives) | 91% |
| False positive rate | 60% (shadows, pets) | 4.3% |
| Daily alert volume | Average 47 | Average 2.1 |
| Raspberry Pi CPU usage | 3% (local detection) | 8% (cloud API calls) |

9.6 Important Notes


Case 10 Slack Engineering Operations Bot

10.1 Scenario Description

Pain point: The engineering team's alert channel receives hundreds of alerts per day. On-call engineers must manually review logs, correlate events, and determine root causes — often taking 20–30 minutes from alert receipt to root cause identification.

Goal: After an alert fires, the Agent automatically analyzes logs, identifies the root cause, and provides remediation recommendations — reducing response time to under 3 minutes.

10.2 Architecture Design

Slack Events API (alert message)
    │
    ▼
OpenClaw Agent (slack-ops-bot)
    ├── bash → Cloud logging platform API (collect relevant logs)
    ├── LLM → Log pattern analysis (error clustering / time correlation)
    ├── bash → Monitoring API (Datadog/Prometheus, fetch metrics)
    ├── LLM → Root cause inference (based on logs + metrics)
    └── bash → Slack API (reply in thread, @mention relevant engineers)
         ├── Automated remediation options (executable commands)
         └── Escalation path suggestions

10.3 Key Configuration

{
  "agents": [
    {
      "id": "slack-ops-bot",
      "skillFile": "~/.openclaw/skills/slack-ops-bot/SKILL.md",
      "model": "claude-opus-4-5",
      "thinking": "high",
      "triggers": {
        "webhook": {
          "path": "/hooks/slack",
          "port": 8768,
          "secret": "${SLACK_SIGNING_SECRET}"
        }
      },
      "tools": ["bash", "web_fetch"],
      "env": {
        "SLACK_BOT_TOKEN": "${SLACK_BOT_TOKEN}",
        "DATADOG_API_KEY": "${DATADOG_API_KEY}",
        "LOG_PLATFORM": "datadog",
        "MAX_LOG_LINES": "500"
      }
    }
  ]
}

SKILL.md incident response logic:

# Slack Engineering Operations Bot

## Trigger Conditions
Listen for Slack events. Respond when:
- Message is from #alerts or #incidents channel
- Message contains keywords: ERROR / CRITICAL / DOWN / timeout / OOM / 5xx

## Response Workflow

### Step 1: Acknowledge the alert (within 30 seconds)
Immediately reply in the message thread:
"🔍 Analyzing this alert. Root cause analysis expected within 2 minutes..."

Parse the alert message and extract:
- Service name
- Error type
- Time of occurrence
- Affected scope

### Step 2: Collect diagnostic data (within 60 seconds)
Execute in parallel:
- Query Datadog logs (last 500 lines, filtered to ERROR level)
- Query Datadog metrics (CPU/memory/error rate, last 30 minutes)
- Query health check endpoints for related services

### Step 3: Root cause analysis
Based on collected data, analyze:
1. When the error first appeared
2. Whether it correlates with a recent deployment or config change
3. Whether it matches a known failure pattern
4. Status of upstream and downstream services

### Step 4: Reply in Slack thread
Format:
🚨 *Alert Analysis Report*

*Root Cause (confidence: {%})*
{Root cause description, 1–2 sentences}

*Evidence*
• {Key log line 1}
• {Metric anomaly: error rate jumped from 0.2% to 12.4%}

*Recommended Remediation Steps*
1. `kubectl rollout undo deployment/api-service` (if caused by recent deployment)
2. `kubectl scale deployment/api-service --replicas=5` (if caused by traffic spike)

*Need to escalate?*
If not resolved within 5 minutes, contact @on-call-lead

*Related Runbook*: <https://wiki.company.com/runbooks/api-service|API Service Runbook>
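
The SLACK_SIGNING_SECRET in the trigger config is used for Slack's v0 request signing: an HMAC over "v0:{timestamp}:{body}". In case your gateway does not verify it for you, a hedged sketch ($ts, $body, and $sig are hypothetical stand-ins for the X-Slack-Request-Timestamp header, raw body, and X-Slack-Signature header):

base="v0:${ts}:${body}"
expected="v0=$(printf '%s' "$base" \
  | openssl dgst -sha256 -hmac "$SLACK_SIGNING_SECRET" | sed 's/^.* //')"
[ "$expected" = "$sig" ] || { echo "bad Slack signature"; exit 1; }
# Also reject stale requests (replay protection): |now - ts| should be under 5 minutes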

10.4 Implementation Steps

# 1. Create a Slack App (Slack API console)
# Required scopes: channels:history, chat:write, reactions:write
# Enable Event Subscriptions, URL: https://your-server:8768/hooks/slack

# 2. Configure Slack Signing Secret
echo "SLACK_SIGNING_SECRET=xxx" >> ~/.openclaw/.env

# 3. Test event reception
openclaw gateway start

# Simulate an alert message
curl -X POST http://localhost:8768/hooks/slack \
  -H "Content-Type: application/json" \
  -d '{"event": {"type": "message", "channel": "C_ALERTS", "text": "CRITICAL: API service 5xx rate 45%"}}'

# 4. Verify bot reply appears in the #alerts channel

10.5 Outcome Data

100-person engineering team (6-month data):

| Metric | Before | After |
|--------|--------|-------|
| Mean Time to Acknowledge (MTTA) | 23 minutes | 2.8 minutes |
| Root cause identification accuracy | – | 76% (automated) |
| On-call weekly alert handling time | 9 hours | 2.5 hours |
| Late-night pages requiring human wake-up | Average 4.2/month | Average 1.1/month |
| P1 incident MTTR (Mean Time to Repair) | 47 minutes | 31 minutes |

10.6 Important Notes


Chapter Summary

The 10 cases span personal productivity, team collaboration, software engineering, IoT edge computing, and enterprise operations. Key patterns emerge:

  1. Model selection determines cost: Use Haiku for simple scheduled tasks (daily briefing, email triage); use Opus for complex reasoning (PR review, root cause analysis)
  2. Webhooks are the primary entry point for enterprise automation: Almost all enterprise scenarios are triggered by external events rather than pure polling
  3. Human review steps are non-negotiable: Content publishing, email sending, and production operations should all retain human confirmation steps
  4. Error handling design comes before feature design: Explicitly specifying "what happens on failure" in SKILL.md is more important than implementing the feature itself
  5. Costs are controllable: Most cases run in the $10–$100/month range, with significant ROI

The next chapter explores the OpenClaw ecosystem's trajectory, including foundation governance, ACP standardization, and the current state of the open-source community.
