# Chapter 18: Writing Skills from Scratch — Five Patterns with Complete Examples and Best Practices
## Chapter Overview
Enough theory — this chapter is pure hands-on practice. We present all five typical SKILL.md writing patterns with complete, production-ready code, usage-scenario analysis, and explanations of key design decisions. We close with the Degree of Freedom matching principle, the "Concise is Key" philosophy, and the most common pitfalls to avoid.
## 18.1 Pattern 1: Minimal Tool (Hello World Level)
### When to Use
- Wrapping a simple single-step operation
- Teaching the model a specific way to approach a task
- Rapidly validating a Skill concept (MVP stage)
### Design Principle
The core of a minimal tool Skill is fewest fields, clearest guidance. Don't add unnecessary complexity just to "look professional."
### Complete SKILL.md
```yaml
---
name: word-count
description: |
  Use when the user needs to count words, lines, or characters
  in a file or block of text. Provides quick wc command guidance.
metadata:
  openclaw:
    emoji: "🔢"
    requires:
      bins:
        - wc
---
```
# Word Count Helper
Quick tool for counting file content statistics.
## Basic Usage
```bash
# Count lines, words, and characters
wc -lwc filename.txt

# Count lines only
wc -l filename.txt

# Count across multiple files
wc -l *.md
```
## Output Format
`wc` outputs three columns: lines / words / bytes.
If the user just says "count it", default to `wc -lwc`.
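A quick sanity check of the column order described above, using a throwaway file (the path is illustrative):

```shell
# Two lines, four words, 24 bytes
printf 'hello world\nsecond line\n' > /tmp/demo.txt
wc -lwc /tmp/demo.txt
# prints: 2 4 24 /tmp/demo.txt (column padding varies by platform)
```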
### Why This Works
- `description` precisely identifies the trigger scenario (word/line/character counting)
- `requires.bins: [wc]` ensures the Skill won't load in environments without `wc` (e.g., Windows without WSL)
- The body is concise: only essential information, no filler
- Includes an edge case ("if user just says count it")
---
## 18.2 Pattern 2: Informational (Multi-Step GitHub PR Workflow)
### When to Use
- Guiding the model through multiple ordered steps
- Workflow contains branching decisions
- Enforcing team conventions
### Design Principle
An informational Skill's value lies in **making tacit team knowledge explicit**. It turns "what senior engineers know but newcomers don't" into repeatable, auditable steps.
### Complete SKILL.md
```yaml
---
name: github-pr
description: |
  Use when the user needs to create a Pull Request, conduct PR reviews,
  manage PR labels and reviewers, or ask about team PR conventions.
  Covers the complete workflow from branch creation to merge.
user-invocable: true
metadata:
  openclaw:
    emoji: "🔀"
    requires:
      bins:
        - git
        - gh
      env:
        - GITHUB_TOKEN
    install:
      - type: brew
        pkg: gh
---
```
# GitHub PR Complete Workflow
## Step 1: Prepare the Branch
```bash
# Verify current branch state
git status
git log --oneline -5

# If there are uncommitted changes, commit first
git add -p   # Interactive staging — avoids committing unintended files
git commit -m "feat: <short description>"
```
Commit message conventions (enforce strictly):
| Prefix | Scenario |
|---|---|
| `feat:` | New feature |
| `fix:` | Bug fix |
| `refactor:` | Code restructuring (no new features) |
| `docs:` | Documentation changes |
| `test:` | Test-related changes |
| `chore:` | Build / toolchain changes |
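To see how such a prefix rule can be checked mechanically, here is a sketch of a pre-commit-style validation (the function name and regex are illustrative, not part of the Skill itself):

```shell
# Accept only commit messages that start with an approved prefix
check_prefix() {
  printf '%s' "$1" | grep -qE '^(feat|fix|refactor|docs|test|chore): '
}

check_prefix "feat: add login endpoint" && echo "accepted"   # prints: accepted
check_prefix "misc tweaks" || echo "rejected"                # prints: rejected
```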
## Step 2: Push and Create PR
```bash
# Push the current branch
git push -u origin HEAD

# Create PR with GitHub CLI
gh pr create \
  --title "feat: Add user authentication module" \
  --body "$(cat <<'EOF'
## Changes
- Implement JWT authentication
- Add refresh token mechanism

## Test Plan
- [ ] Unit tests pass
- [ ] Integration tests pass
- [ ] Manual authentication flow tested
EOF
)"
```
## Step 3: Assign Reviewers and Labels
```bash
# Add reviewers
gh pr edit --add-reviewer username1,username2

# Add labels
gh pr edit --add-label "needs-review,feature"

# Set milestone
gh pr edit --milestone "v2.0"
```
## Step 4: Handle Review Feedback
When reviewers leave comments:
1. Check out the branch locally
2. Modify and commit (use a `fix:` or `refactor:` prefix)
3. `git push` (the PR updates automatically)
4. Reply in the PR comment: "Addressed in commit xyz"
## Step 5: Merge
```bash
# Check if merging is allowed (CI green? Reviews approved?)
gh pr checks
gh pr view --json reviewDecision

# Merge (squash keeps main branch history clean)
gh pr merge --squash --delete-branch
```
## Common Issues
**Merge conflicts:**
```bash
git fetch origin main
git rebase origin/main
# Resolve conflicts, then:
git add .
git rebase --continue
git push --force-with-lease
```
**PR stuck on CI:**
```bash
gh pr checks --watch    # Monitor CI status in real time
gh run view <run-id>    # Inspect specific failure reasons
```
---
## 18.3 Pattern 3: Tool Integration (image-lab with requires.bins + install)
### When to Use
- Depends on specific external tools (ffmpeg, imagemagick, etc.)
- Needs auto-installation of dependencies
- Coordinates multiple tools
### Complete SKILL.md
```yaml
---
name: image-lab
description: |
  Use when the user needs to process images or video: format conversion,
  cropping, resizing, batch processing, watermarking, video screenshots,
  GIF creation, image compression and optimization.
  Supports PNG/JPG/WebP/AVIF/GIF/MP4 and more.
metadata:
  openclaw:
    emoji: "🖼️"
    os: darwin
    requires:
      bins:
        - ffmpeg
        - convert   # ImageMagick
      anyBins:
        - cwebp
        - avifenc
    primaryEnv: shell
    homepage: https://imagemagick.org
    install:
      - type: brew
        pkg: ffmpeg
      - type: brew
        pkg: imagemagick
      - type: brew
        pkg: webp
---
```
# Image Lab — Image and Video Processing Toolkit
## Image Format Conversion
```bash
# JPG → WebP (quality 85)
cwebp -q 85 input.jpg -o output.webp

# PNG → AVIF (high compression)
avifenc --min 0 --max 63 --speed 6 input.png output.avif

# Batch convert all JPGs in directory to WebP
for f in *.jpg; do cwebp -q 85 "$f" -o "${f%.jpg}.webp"; done
```
## Resize Images
```bash
# Resize to specified width (height scales proportionally)
convert input.jpg -resize 800x output.jpg

# Resize to exact dimensions (may distort)
convert input.jpg -resize 800x600! output.jpg

# Fit within bounds, pad with white
convert input.jpg -resize 800x600 -background white -gravity center -extent 800x600 output.jpg
```
## Crop Images
```bash
# Crop a 400x300 region starting at coordinate (x=100, y=50)
convert input.jpg -crop 400x300+100+50 output.jpg

# Center crop
convert input.jpg -gravity center -crop 400x400+0+0 output.jpg
```
## Video Operations (FFmpeg)
```bash
# Extract frame at 10 seconds as PNG
ffmpeg -i video.mp4 -ss 00:00:10 -vframes 1 screenshot.png

# Convert video to optimized GIF
ffmpeg -i video.mp4 -vf "fps=15,scale=480:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" output.gif

# Compress video (H.264, CRF 23)
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium output.mp4
```
## Batch Watermarking
```bash
# Add text watermark to bottom-right of all images
for f in *.jpg; do
  convert "$f" \
    -gravity southeast \
    -fill "rgba(255,255,255,0.7)" \
    -pointsize 24 \
    -annotate +10+10 "© MyBrand" \
    "watermarked_$f"
done
```
## Tool Selection Priority
- WebP output → prefer `cwebp` (better quality)
- AVIF output → use `avifenc`
- Complex compositing / filters → use `convert` (ImageMagick)
- Anything video-related → always use `ffmpeg`
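The priority list above can be sketched as a small dispatch helper (a hypothetical function, shown only to make the selection rules concrete):

```shell
# Pick the preferred tool from the target file's extension
pick_tool() {
  case "$1" in
    *.webp)            echo cwebp ;;     # better WebP quality
    *.avif)            echo avifenc ;;
    *.mp4|*.mov|*.gif) echo ffmpeg ;;    # anything video-related
    *)                 echo convert ;;   # ImageMagick for everything else
  esac
}

pick_tool output.webp   # → cwebp
pick_tool clip.mp4      # → ffmpeg
```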
---
## 18.4 Pattern 4: command-dispatch (Zero-Reasoning Browser Skill)
### When to Use
- A slash command should immediately trigger a specific tool without model reasoning
- Ultra-low latency is required (millisecond response)
- Command behavior is completely deterministic
### Key Design
`command-dispatch: tool` is the heart of this pattern. It tells OpenClaw: when the user types `/browser`, pass the arguments directly to the `browser_action` tool — **no model tokens consumed**.
### Complete SKILL.md
```yaml
---
name: browser
description: |
  Use when the user needs to control a browser, visit a webpage,
  click elements, fill forms, take screenshots, or perform any
  web automation task. Supports /browser <url> to open a page directly.
user-invocable: true
command-dispatch: tool
command-tool: "browser_action"
command-arg-mode: raw
metadata:
  openclaw:
    emoji: "🌐"
    requires:
      anyBins:
        - chromium
        - google-chrome
        - firefox
    install:
      - type: brew
        pkg: chromium
---
```
# Browser Automation Skill
## Quick Start
```
/browser https://example.com
```
Equivalent to calling `browser_action` directly — no model reasoning involved.
## Common Operations
### Navigate and Screenshot
Navigate to a URL and take a screenshot after the page loads:
```
browser_action("navigate", url="https://target.com")
browser_action("screenshot")
```
### Fill a Form
Find form fields and fill them in:
```
browser_action("find", selector="#email-input")
browser_action("type", text="user@example.com")
browser_action("find", selector="#submit-btn")
browser_action("click")
```
### Extract Page Content
Get page text:
```
browser_action("extract_text", selector=".article-body")
```
Get full HTML:
```
browser_action("get_html")
```
## How command-dispatch Works
When the user types `/browser https://news.ycombinator.com`:
```
User input → OpenClaw routing → browser_action(url="https://news.ycombinator.com")
(no model involved, 0 tokens consumed)
```
Compare to a normal Skill call:
```
User input → model reads description → model decision → model calls tool
(~500–2000 tokens consumed)
```
## Notes
- For pages with heavy dynamic content, wait for `networkIdle` state
- Authenticated pages require cookie handling first
- Use `--profile` for persistent browser state across sessions
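The zero-token routing idea can be illustrated with a toy dispatcher (this is not OpenClaw's actual implementation; the function and its output format are invented for the sketch):

```shell
# Route a slash command straight to a tool call, bypassing the model
dispatch() {
  cmd="$1"; shift
  case "$cmd" in
    /browser) echo "browser_action(url=\"$1\")" ;;   # direct dispatch, 0 tokens
    *)        echo "model decides: $cmd $*" ;;       # falls back to model reasoning
  esac
}

dispatch /browser https://news.ycombinator.com
# → browser_action(url="https://news.ycombinator.com")
```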
## 18.5 Pattern 5: Professional TDD (Full Test-Driven Development Skill)
### When to Use
- Standardizing a team's TDD practice
- Complex multi-phase workflows with explicit loops
- Anti-pattern detection is required
### Complete SKILL.md
```yaml
---
name: tdd-pro
description: |
  Use when the user needs to write code using the Test-Driven Development
  (TDD) methodology: write tests first, then implement, then refactor.
  Applicable to new feature development, bug fixes, and legacy code
  modernization. Includes the Red-Green-Refactor loop and anti-pattern
  detection.
user-invocable: true
metadata:
  openclaw:
    emoji: "🧪"
    requires:
      anyBins:
        - jest
        - vitest
        - pytest
        - go
    primaryEnv: node
    homepage: https://martinfowler.com/bliki/TestDrivenDevelopment.html
---
```
# TDD Pro — Professional Test-Driven Development Guide
## Core Loop: Red → Green → Refactor
```
┌─────────────────────────────────────────┐
│ 1. RED: Write a failing test            │
│    → Run the test; confirm it fails     │
│    → Confirm failure reason is          │
│      "feature not implemented"          │
├─────────────────────────────────────────┤
│ 2. GREEN: Write the minimum code        │
│    to make the test pass                │
│    → Don't over-engineer                │
│    → Get green first, elegant later     │
├─────────────────────────────────────────┤
│ 3. REFACTOR: Improve code while         │
│    keeping tests green                  │
│    → Eliminate duplication              │
│    → Improve naming and structure       │
│    → Run tests after every change       │
└─────────────────────────────────────────┘
```
## Step 1: Pattern Discovery (Required Before Writing Any Code)
Analyze the requirements before touching a keyboard:
```markdown
## Requirements Analysis Template
- **What are the inputs?** (types, formats, boundary values)
- **What are the outputs?** (types, formats, error cases)
- **What is the core behavior?** (business rules, transformation logic)
- **What are the edge cases?** (nulls, overflow, concurrency)
- **What external integration points exist?** (dependencies to mock)
```
## Step 2: Test Case Design
Design tests in order from simplest to most complex:
```js
// Example: designing tests for calculateTax(amount, rate)
describe('calculateTax', () => {
  // 1. Basic happy path
  it('should calculate tax correctly for positive amount', () => {
    expect(calculateTax(100, 0.1)).toBe(10)
  })

  // 2. Boundary: zero amount
  it('should return 0 for zero amount', () => {
    expect(calculateTax(0, 0.1)).toBe(0)
  })

  // 3. Boundary: zero rate
  it('should return 0 for zero rate', () => {
    expect(calculateTax(100, 0)).toBe(0)
  })

  // 4. Error case: negative amount
  it('should throw for negative amount', () => {
    expect(() => calculateTax(-100, 0.1)).toThrow('Amount cannot be negative')
  })

  // 5. Floating-point precision
  it('should handle floating point correctly', () => {
    expect(calculateTax(0.1, 0.1)).toBeCloseTo(0.01, 10)
  })
})
```
## Step 3: RED Phase
```bash
# Run tests; confirm they fail
npm test -- --testNamePattern="calculateTax"

# Expected output: FAIL (function does not exist yet)
# If it unexpectedly passes, the test is wrong
```
## Step 4: GREEN Phase
Key principle: write the minimum code that makes the current failing test pass.
```js
// The "ugly but valid" first implementation
function calculateTax(amount, rate) {
  if (amount < 0) throw new Error('Amount cannot be negative')
  return amount * rate
}
```
This is enough. Don't introduce complexity during the GREEN phase.
## Step 5: REFACTOR Phase
Once all tests are green, improve code quality. Note that the error message keeps the substring the RED-phase test asserts on ("Amount cannot be negative") — refactoring must not break existing tests:
```js
/**
 * Calculate tax amount
 * @param {number} amount - Pre-tax amount (must be >= 0)
 * @param {number} rate - Tax rate (decimal between 0 and 1)
 * @returns {number} Tax amount
 */
function calculateTax(amount, rate) {
  if (amount < 0) {
    throw new RangeError(`Amount cannot be negative, received: ${amount}`)
  }
  if (rate < 0 || rate > 1) {
    throw new RangeError(`Rate must be between 0 and 1, received: ${rate}`)
  }
  return Number((amount * rate).toFixed(10))
}
```
Run tests again after refactoring to confirm they remain green.
## Anti-Pattern Detection
| Anti-pattern | Symptoms | Fix |
|---|---|---|
| Implementation before tests | Tests exactly match implementation logic | Delete the implementation and start over |
| Tests too large | A single `it()` contains multiple assertions | Split into independent tests |
| Over-mocking | 80%+ of dependencies are mocked | Consider redesigning the interface |
| Testing implementation details | Testing internal variables rather than behavior | Test the public API instead |
| Skipping the RED phase | "I know it will fail" | Force a run and read the FAIL message |
## Test Runner Commands by Language
```bash
# JavaScript / TypeScript
npm test -- --watch

# Python
pytest -xvs tests/

# Go
go test ./... -v -run TestCalculateTax

# Rust
cargo test calculate_tax -- --nocapture
```
## Reference Material
- Detailed mocking strategies: `{baseDir}/references/mocking-strategies.md`
- Common assertion patterns: `{baseDir}/references/assertion-patterns.md`
---
## 18.6 The Degree of Freedom Matching Principle
Pattern selection should match the task's "degree of freedom":
| Task Freedom | Description | Recommended Pattern |
|-------------|-------------|---------------------|
| Minimal | Fully deterministic single-step command | command-dispatch |
| Low | Fixed process, no branching | Tool or Informational |
| Medium | Branches present, judgment needed | Informational or Tool Integration |
| High | Complex workflow with loops and anti-patterns | TDD Professional |
| Very High | Highly customized task | Informational + references/ extension |
**Anti-example:** Using the TDD Professional pattern to wrap a simple `wc` command — over-engineering that wastes context.
**Correct approach:** Choose the lightest pattern that is sufficient for the task's complexity.
---
## 18.7 "Concise is Key" — The Context Budget Philosophy
In Skill writing, **conciseness is the most important virtue**:
### The Cost of Overly Long Skills
An 8,000-token Skill, even if only 20% of its content is ever used, still fully consumes that context budget when loaded. (Lazy-loading only protects the "whether to load" decision — not the size once loaded.)
### Practical Writing Rules
1. **One instruction per line**: don't use paragraphs where a list works
2. **Code examples over prose**: `wc -l file.txt` uses 80% fewer tokens than "use the -l flag of the wc command to count the number of lines in the specified file"
3. **Move detailed references to references/**: keep SKILL.md focused on operational core; put depth in references/
4. **Avoid repetition**: don't restate in the body what the description already says
### Target Length Reference
| Pattern | Recommended Token Range |
|---------|------------------------|
| Tool (minimal) | 100–300 tokens |
| Informational | 300–800 tokens |
| Tool Integration | 400–1,000 tokens |
| command-dispatch | 50–200 tokens |
| TDD Professional | 800–2,000 tokens |
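A rough way to check a draft against these ranges, assuming the common heuristic of roughly four characters per token (a heuristic, not an exact tokenizer; the function name and file path are illustrative):

```shell
# Approximate the token cost of a SKILL.md before shipping it
estimate_tokens() {
  chars=$(wc -c < "$1")
  echo $(( chars / 4 ))
}

printf '%s' 'Use when the user needs to count words.' > /tmp/skill-body.md
estimate_tokens /tmp/skill-body.md   # → 9 (39 chars / 4)
```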
---
## 18.8 Common Anti-Patterns
### Anti-Pattern 1: Writing Shell Commands as if They Will Execute
```yaml
# Wrong: treating the Skill body as an executable script
---
name: setup-env
description: Set up the development environment
---
#!/bin/bash
npm install
cp .env.example .env
docker-compose up -d
```
**Problem:** The Skill body is guidance for the model to read, not an executable script. The model will interpret this as "commands to tell the user to run" — not as something that auto-executes.
**Correct approach:**
````yaml
---
name: setup-env
description: Use when the user needs to initialize the project development environment
---
## Initialization Steps
Ask the user to run the following commands in order:
```bash
npm install
cp .env.example .env
docker-compose up -d
```
After completion, run `docker ps` to verify the containers are running.
````
### Anti-Pattern 2: Overly Broad description Causing False Triggers
```yaml
# Dangerous: matches almost every conversation
description: Help with various code and development-related tasks
```
### Anti-Pattern 3: Writing Complex Body Content in a command-dispatch Skill
A command-dispatch Skill's body is never read by the model (because it bypasses the model entirely), so writing detailed instructions in the body is pointless.
### Anti-Pattern 4: Forgetting {baseDir} and Breaking Path References
```
# Wrong: hardcoded path
See documentation at ~/.openclaw/skills/my-skill/references/guide.md

# Correct: use the placeholder
See documentation at {baseDir}/references/guide.md
```
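To see why the placeholder survives relocation, here is a sketch of the substitution a loader might perform at load time (the `sed` expansion is illustrative, not OpenClaw's actual loader):

```shell
# Expand {baseDir} to wherever the skill happens to be installed
baseDir="$HOME/.openclaw/skills/my-skill"
line='See documentation at {baseDir}/references/guide.md'
printf '%s\n' "$line" | sed "s|{baseDir}|$baseDir|"
# prints the line with {baseDir} replaced by the install path (depends on $HOME)
```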
## 18.9 Chapter Summary
- Five patterns cover all Skill needs from simplest to most complex
- Tool (minimal) → Informational (multi-step) → Tool Integration (with dependencies) → command-dispatch (zero reasoning) → TDD Professional (high complexity)
- Degree of Freedom matching: task complexity drives pattern selection
- "Concise is Key": an overly long Skill is more harmful than an overly short one
- Most common mistake: treating the Skill body as executable code
The next chapter explores the ClawHub ecosystem: the platform architecture behind 13,000+ Skills, the Top 10 picks, and how to use vector search to find genuinely valuable Skills.