Chapter 11

12+ Providers: Anthropic, OpenAI, Google, Bedrock and Local Models

Overview

OpenClaw natively supports 12 official LLM Providers, with an additional 30+ third-party channels available through the plugin system. Whether you're working with top-tier cloud models or locally-deployed open-source models, OpenClaw offers a unified configuration interface. This chapter covers authentication methods, configuration examples, and representative model recommendations for each Provider.


11.1 Model ID Format Specification

OpenClaw uses the unified provider/model-id format to reference all models. This design lets you switch seamlessly between different Providers without modifying business logic code.

Format: <provider>/<model-id>
Examples:
  anthropic/claude-opus-4-6
  openai/gpt-5.5
  google/gemini-3.1-pro
  ollama/llama3.2
  deepseek/deepseek-r1

Reference example in configuration file:

{
  "model": "anthropic/claude-sonnet-4-6",
  "fallback_model": "openai/gpt-5.4-mini"
}

Format Rules

  • Provider names are all lowercase, using the officially registered name
  • model-id matches the Provider's official API documentation
  • Local models use the ollama/ or lmstudio/ prefix
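The format rules above can be sketched as a small parser. This is an illustrative helper, not an OpenClaw API: splitting at the first slash is what lets nested IDs (such as OpenRouter's, covered in 11.6) pass through as part of the model-id.

```python
# Illustrative parser for the <provider>/<model-id> convention; OpenClaw's
# internal implementation is not shown in this chapter, so treat this as a sketch.
def parse_model_ref(ref: str) -> tuple[str, str]:
    """Split 'provider/model-id' at the FIRST slash, so nested IDs survive."""
    provider, sep, model_id = ref.partition("/")
    if not sep or not provider or not model_id:
        raise ValueError(f"expected '<provider>/<model-id>', got {ref!r}")
    if provider != provider.lower():
        raise ValueError(f"provider names must be lowercase, got {provider!r}")
    return provider, model_id

# parse_model_ref("anthropic/claude-sonnet-4-6")
#   → ("anthropic", "claude-sonnet-4-6")
# parse_model_ref("openrouter/anthropic/claude-opus-4-6")
#   → ("openrouter", "anthropic/claude-opus-4-6")
```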

11.2 Anthropic

Authentication

Anthropic uses API Key authentication, passed via the ANTHROPIC_API_KEY environment variable.

# Set environment variable
export ANTHROPIC_API_KEY="sk-ant-api03-xxxxxxxxxxxxxxxxxxxxxxxx"

Configuration Example

{
  "providers": {
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "base_url": "https://api.anthropic.com",
      "version": "2023-06-01",
      "default_model": "anthropic/claude-sonnet-4-6"
    }
  }
}
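The ${ANTHROPIC_API_KEY} placeholder above is expanded from the environment when the config is loaded. A minimal sketch of that substitution, assuming ${NAME} syntax and an error on unset variables (OpenClaw's exact semantics may differ):

```python
import os
import re

# Sketch of the ${VAR} substitution used in the config examples; the behaviour
# for unset variables is an assumption here.
_VAR = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_env(value: str, env=os.environ) -> str:
    """Replace each ${NAME} in value with the matching environment variable."""
    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]
    return _VAR.sub(repl, value)

# expand_env("${ANTHROPIC_API_KEY}") returns the key from the environment.
```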

Representative Models

Model ID                    | Characteristics                   | Recommended Scenarios
anthropic/claude-opus-4-6   | Strongest reasoning, highest cost | Complex multi-step tasks, code architecture design
anthropic/claude-sonnet-4-6 | Balanced performance and cost     | Primary daily model, most production scenarios
anthropic/claude-haiku-4-5  | Fastest speed, lowest cost        | High-frequency lightweight tasks, summarization, classification

Advanced Configuration

{
  "providers": {
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "max_tokens": 8192,
      "timeout_seconds": 120,
      "extra_headers": {
        "anthropic-beta": "interleaved-thinking-2025-05-14"
      }
    }
  }
}

11.3 OpenAI

Authentication

export OPENAI_API_KEY="sk-proj-xxxxxxxxxxxxxxxxxxxxxxxx"
# Optional: specify organization ID
export OPENAI_ORG_ID="org-xxxxxxxxxxxxxxxx"

Configuration Example

{
  "providers": {
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "organization": "${OPENAI_ORG_ID}",
      "base_url": "https://api.openai.com/v1",
      "default_model": "openai/gpt-5.5"
    }
  }
}

Representative Models

Model ID            | Characteristics          | Recommended Scenarios
openai/gpt-5.5      | Flagship, multimodal     | High-complexity reasoning, visual analysis
openai/gpt-5.4-mini | Efficient and economical | Rapid iteration, prototype development
openai/o3           | Deep reasoning chains    | Math/logic/science problems
openai/o1           | Reasoning-enhanced       | Programming contests, complex derivations

11.4 Google Gemini

Authentication

export GEMINI_API_KEY="AIzaSy-xxxxxxxxxxxxxxxxxxxxxxxx"

Configuration Example

{
  "providers": {
    "google": {
      "api_key": "${GEMINI_API_KEY}",
      "base_url": "https://generativelanguage.googleapis.com/v1beta",
      "default_model": "google/gemini-3.1-pro"
    }
  }
}

Representative Models

Model ID              | Characteristics          | Recommended Scenarios
google/gemini-3.1-pro | Long context, multimodal | Large document analysis, video understanding
google/gemini-3-flash | Ultra-fast response      | Real-time interaction, streaming output

11.5 Amazon Bedrock

Bedrock is AWS's managed model service. Its authentication differs from the typical API-Key approach: it uses the standard AWS credential system.

Authentication Methods

Method A: Environment Variables (recommended for development)

export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"

Method B: AWS Profile (recommended for multi-account management)

# ~/.aws/credentials
[openclaw-prod]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[profile openclaw-prod]
region = us-east-1

Method C: IAM Role (recommended for production)

# EC2/ECS/Lambda automatically uses instance role — no explicit credential config needed

Configuration Example

{
  "providers": {
    "bedrock": {
      "region": "us-east-1",
      "profile": "openclaw-prod",
      "default_model": "bedrock/anthropic.claude-sonnet-4-6-v1:0"
    }
  }
}
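The three authentication methods follow the standard AWS credential-resolution order: explicit environment variables win, then a named profile, then the instance role. A simplified sketch of that precedence (real AWS SDKs check additional sources, such as web identity tokens):

```python
import os

# Illustrative model of the AWS default credential-resolution order that
# Bedrock clients follow; this is a teaching sketch, not an SDK replacement.
def credential_source(env=os.environ, profile=None, has_instance_role=False):
    """Return which credential source (Method A/B/C) would be used."""
    if env.get("AWS_ACCESS_KEY_ID") and env.get("AWS_SECRET_ACCESS_KEY"):
        return "environment"          # Method A: environment variables
    if profile:
        return f"profile:{profile}"   # Method B: ~/.aws/credentials profile
    if has_instance_role:
        return "iam-role"             # Method C: EC2/ECS/Lambda instance role
    raise RuntimeError("no AWS credentials found")
```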

Bedrock vs. Direct Anthropic Comparison

Dimension      | Bedrock                  | Direct Anthropic
Authentication | AWS IAM                  | API Key
Compliance     | AWS compliance framework | Anthropic compliance
Network        | AWS VPC internal         | Public internet
Billing        | AWS bill                 | Anthropic bill
Latency        | Low within same region   | Depends on network

11.6 OpenRouter

OpenRouter acts as a multi-Provider proxy, letting you route to 100+ models with a single API Key.

Use Cases

  • Access 100+ models from many Providers with a single account and API Key
  • Compare and switch models across Providers without maintaining separate keys
  • Fall back to alternative models when one Provider is unavailable

Configuration Example

{
  "providers": {
    "openrouter": {
      "api_key": "${OPENROUTER_API_KEY}",
      "base_url": "https://openrouter.ai/api/v1",
      "default_model": "openrouter/anthropic/claude-opus-4-6",
      "extra_headers": {
        "HTTP-Referer": "https://yourapp.com",
        "X-Title": "OpenClaw Agent"
      }
    }
  }
}

Model ID Format

OpenRouter uses a double-nested format:

openrouter/<original-provider>/<model-id>
Examples:
  openrouter/anthropic/claude-opus-4-6
  openrouter/openai/gpt-5.5
  openrouter/google/gemini-3.1-pro
  openrouter/meta-llama/llama-3.1-70b-instruct
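Unwrapping a routed reference back to its original provider is a matter of splitting twice at the first slash. A hypothetical helper (not an OpenClaw or OpenRouter API):

```python
# Hypothetical helper: recover the original provider and model from a
# double-nested OpenRouter reference, assuming the format shown above.
def unwrap_openrouter(ref: str) -> tuple[str, str]:
    gateway, rest = ref.split("/", 1)
    if gateway != "openrouter" or "/" not in rest:
        raise ValueError(f"not an OpenRouter reference: {ref!r}")
    original_provider, model_id = rest.split("/", 1)
    return original_provider, model_id

# unwrap_openrouter("openrouter/anthropic/claude-opus-4-6")
#   → ("anthropic", "claude-opus-4-6")
```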

11.7 Ollama (Local)

export OLLAMA_BASE_URL="http://localhost:11434"
{
  "providers": {
    "ollama": {
      "base_url": "${OLLAMA_BASE_URL}",
      "default_model": "ollama/llama3.2"
    }
  }
}

See Chapter 13 for detailed integration instructions.


11.8 DeepSeek

export DEEPSEEK_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxx"
{
  "providers": {
    "deepseek": {
      "api_key": "${DEEPSEEK_API_KEY}",
      "base_url": "https://api.deepseek.com/v1",
      "default_model": "deepseek/deepseek-r1"
    }
  }
}

Representative Models

Model ID               | Characteristics
deepseek/deepseek-r1   | Deep reasoning, comparable to o1
deepseek/deepseek-v3.x | General-purpose dialogue, extremely cost-effective

11.9 MiniMax

export MINIMAX_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
export MINIMAX_GROUP_ID="xxxxxxxxxxxxxxxx"
{
  "providers": {
    "minimax": {
      "api_key": "${MINIMAX_API_KEY}",
      "group_id": "${MINIMAX_GROUP_ID}",
      "base_url": "https://api.minimax.chat/v1",
      "default_model": "minimax/abab6.5s-chat"
    }
  }
}

Representative Models

Model ID     | Description
minimax/M2.5 | Latest flagship
minimax/M2.1 | Balanced version
minimax/M2   | Lightweight version

11.10 xAI (Grok)

export XAI_API_KEY="xai-xxxxxxxxxxxxxxxxxxxxxxxx"
{
  "providers": {
    "xai": {
      "api_key": "${XAI_API_KEY}",
      "base_url": "https://api.x.ai/v1",
      "default_model": "xai/grok-3"
    }
  }
}

11.11 Moonshot AI (Kimi)

export MOONSHOT_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxx"
{
  "providers": {
    "moonshot": {
      "api_key": "${MOONSHOT_API_KEY}",
      "base_url": "https://api.moonshot.cn/v1",
      "default_model": "moonshot/moonshot-v1-128k"
    }
  }
}

11.12 Vercel AI Gateway

Vercel AI Gateway is a proxy gateway that provides unified rate limiting and log tracing.

{
  "providers": {
    "vercel": {
      "api_key": "${VERCEL_AI_GATEWAY_TOKEN}",
      "base_url": "https://ai-gateway.vercel.sh/v1",
      "default_model": "vercel/anthropic/claude-sonnet-4-6"
    }
  }
}

11.13 GLM / Zhipu AI

export ZHIPU_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxx"
{
  "providers": {
    "zhipu": {
      "api_key": "${ZHIPU_API_KEY}",
      "base_url": "https://open.bigmodel.cn/api/paas/v4",
      "default_model": "zhipu/glm-4-plus"
    }
  }
}

11.14 30+ Additional Provider Integrations

Through the plugin system, OpenClaw supports many more Providers. Here are several commonly used ones:

Groq (Ultra-Low Latency Inference)

{
  "providers": {
    "groq": {
      "api_key": "${GROQ_API_KEY}",
      "base_url": "https://api.groq.com/openai/v1",
      "default_model": "groq/llama-3.1-70b-versatile"
    }
  }
}

Mistral AI

{
  "providers": {
    "mistral": {
      "api_key": "${MISTRAL_API_KEY}",
      "base_url": "https://api.mistral.ai/v1",
      "default_model": "mistral/mistral-large-latest"
    }
  }
}

Qwen (Tongyi Qianwen)

{
  "providers": {
    "qwen": {
      "api_key": "${DASHSCOPE_API_KEY}",
      "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "default_model": "qwen/qwen-max"
    }
  }
}

Doubao (Volcano Engine)

{
  "providers": {
    "doubao": {
      "api_key": "${ARK_API_KEY}",
      "base_url": "https://ark.cn-beijing.volces.com/api/v3",
      "default_model": "doubao/doubao-pro-128k"
    }
  }
}

Complete Provider Support List

Provider       | Environment Variable  | Typical Model
Groq           | GROQ_API_KEY          | llama-3.1-70b-versatile
Mistral        | MISTRAL_API_KEY       | mistral-large-latest
Qwen/DashScope | DASHSCOPE_API_KEY     | qwen-max
Doubao         | ARK_API_KEY           | doubao-pro-128k
Cohere         | COHERE_API_KEY        | command-r-plus
Together AI    | TOGETHER_API_KEY      | meta-llama/Llama-3-70b
Fireworks AI   | FIREWORKS_API_KEY     | accounts/fireworks/models/llama-v3-70b
Perplexity     | PERPLEXITY_API_KEY    | llama-3.1-sonar-large-128k-online
AI21           | AI21_API_KEY          | jamba-1.5-large
Replicate      | REPLICATE_API_TOKEN   | meta/llama-3-70b-instruct
HuggingFace    | HF_API_TOKEN          | Any HF-hosted model
Azure OpenAI   | AZURE_OPENAI_API_KEY  | Requires deployment spec
LM Studio      | None (local endpoint) | Any GGUF model
vLLM           | None (local endpoint) | Any vLLM-compatible model
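A quick way to see which of these plugin Providers are usable in the current shell is to check their environment variables. The registry below transcribes a subset of the table; the helper itself is illustrative, not an OpenClaw API:

```python
import os

# Partial registry transcribed from the support list above (hypothetical
# helper): map each plugin Provider to its expected environment variable.
PROVIDER_ENV_VARS = {
    "groq": "GROQ_API_KEY",
    "mistral": "MISTRAL_API_KEY",
    "qwen": "DASHSCOPE_API_KEY",
    "doubao": "ARK_API_KEY",
    "cohere": "COHERE_API_KEY",
    "together": "TOGETHER_API_KEY",
}

def configured_providers(env=os.environ):
    """Return the Providers whose API-key variable is set and non-empty."""
    return [name for name, var in PROVIDER_ENV_VARS.items() if env.get(var)]
```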

11.15 Multi-Provider Coexistence Configuration

Production environments typically configure multiple Providers simultaneously to enable failover and cost optimization.

{
  "providers": {
    "anthropic": {
      "api_key": "${ANTHROPIC_API_KEY}",
      "default_model": "anthropic/claude-sonnet-4-6"
    },
    "openai": {
      "api_key": "${OPENAI_API_KEY}",
      "default_model": "openai/gpt-5.4-mini"
    },
    "ollama": {
      "base_url": "http://localhost:11434",
      "default_model": "ollama/llama3.2"
    }
  },
  "default_provider": "anthropic",
  "fallback_chain": [
    "anthropic/claude-sonnet-4-6",
    "openai/gpt-5.4-mini",
    "ollama/llama3.2"
  ]
}
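The fallback_chain is walked in order: each model is tried until one succeeds. A minimal sketch of that loop, where call_model is a stand-in for the actual completion call rather than an OpenClaw API:

```python
# Minimal sketch of walking a fallback chain like the one configured above.
# call_model(model_ref, prompt) is assumed to raise on failure.
def complete_with_fallback(prompt, chain, call_model):
    """Try each model in order; return (model_ref, result) from the first success."""
    errors = []
    for model_ref in chain:
        try:
            return model_ref, call_model(model_ref, prompt)
        except Exception as exc:  # in practice: catch provider errors only
            errors.append((model_ref, exc))
    raise RuntimeError(f"all models in the chain failed: {errors}")
```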

11.16 Provider Capability Comparison Overview

Provider      | Context Window  | Function Calling | Vision  | Streaming | Local Deploy
Anthropic     | 200K            | Yes              | Yes     | Yes       | No
OpenAI        | 128K            | Yes              | Yes     | Yes       | No
Google Gemini | 2M              | Yes              | Yes     | Yes       | No
Bedrock       | 200K            | Yes              | Yes     | Yes       | No
DeepSeek      | 64K             | Yes              | No      | Yes       | Yes (open source)
Ollama        | Model-dependent | Partial          | Partial | Yes       | Yes
Groq          | 128K            | Yes              | No      | Yes       | No
Mistral       | 128K            | Yes              | No      | Yes       | Yes (open source)
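The comparison table can drive simple capability-based routing: filter Providers by a task's requirements before choosing a model. A sketch using data transcribed from the table (the field names and the helper are assumptions, not an OpenClaw API):

```python
# Capability data transcribed from the comparison table above (subset).
PROVIDERS = {
    "anthropic": {"context": 200_000,   "vision": True,  "local": False},
    "openai":    {"context": 128_000,   "vision": True,  "local": False},
    "google":    {"context": 2_000_000, "vision": True,  "local": False},
    "deepseek":  {"context": 64_000,    "vision": False, "local": True},
    "groq":      {"context": 128_000,   "vision": False, "local": False},
}

def pick_providers(min_context=0, need_vision=False, need_local=False):
    """Return the Providers from the table that satisfy every requirement."""
    return [
        name for name, caps in PROVIDERS.items()
        if caps["context"] >= min_context
        and (not need_vision or caps["vision"])
        and (not need_local or caps["local"])
    ]

# pick_providers(min_context=1_000_000) → ["google"]
```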

Chapter Summary

This chapter introduced the unified provider/model-id reference format and walked through authentication and configuration for the 12 natively supported Providers, from Anthropic, OpenAI, and Google to Bedrock, aggregators like OpenRouter, and local deployment via Ollama, along with the 30+ plugin-based integrations and multi-Provider coexistence for failover. The next chapter dives into advanced model configuration techniques: Key Rotation, Failover mechanisms, and inference depth control.
