chaimengphp

One API key for Chinese AI models. Route to Qwen, Deepseek

by chaimengphp · GitHub ↗ · v1.0.1 · MIT-0
cross-platform ✓ Security Clean
1038 Downloads · 0 Stars · 2 Active Installs · 2 Versions
Install in OpenClaw
/install openclaw-aisa-cn-llm
Description
China LLM Gateway - Unified interface for Chinese LLMs including Qwen, DeepSeek, GLM, Baichuan. OpenAI compatible, one API Key for all models.
README (SKILL.md)


# OpenClaw CN-LLM 🐉

China LLM Unified Gateway. Powered by AIsa.

One API key to access all Chinese LLMs, through an OpenAI-compatible interface.

Qwen, DeepSeek, GLM, Baichuan, Moonshot, and more: unified API access.

## 🔥 What You Can Do

### Intelligent Chat

```
"Use Qwen to answer Chinese questions, use DeepSeek for coding"
```
\r
### Deep Reasoning

```
"Use DeepSeek-R1 for complex reasoning tasks"
```

### Code Generation

```
"Use DeepSeek-Coder to generate Python code with explanations"
```

### Long Text Processing

```
"Use Qwen-Long for ultra-long document summarization"
```

### Model Comparison

```
"Compare response quality between Qwen-Max and DeepSeek-V3"
```
\r
## Supported Models

### Qwen (Alibaba)

| Model | Input Price | Output Price | Features |
|-------|-------------|--------------|----------|
| qwen3-max | $1.37/M | $5.48/M | Most powerful general model |
| qwen3-max-2026-01-23 | $1.37/M | $5.48/M | Latest version |
| qwen3-coder-plus | $2.86/M | $28.60/M | Enhanced code generation |
| qwen3-coder-flash | $0.72/M | $3.60/M | Fast code generation |
| qwen3-coder-480b-a35b-instruct | $2.15/M | $8.60/M | 480B large model |
| qwen3-vl-plus | $0.43/M | $4.30/M | Vision-language model |
| qwen3-vl-flash | $0.86/M | $0.86/M | Fast vision model |
| qwen3-omni-flash | $4.00/M | $16.00/M | Multimodal model |
| qwen-vl-max | $0.23/M | $0.57/M | Vision-language |
| qwen-plus-2025-12-01 | $1.26/M | $12.60/M | Plus version |
| qwen-mt-flash | $0.168/M | $0.514/M | Fast machine translation |
| qwen-mt-lite | $0.13/M | $0.39/M | Lite machine translation |

### DeepSeek

| Model | Input Price | Output Price | Features |
|-------|-------------|--------------|----------|
| deepseek-r1 | $2.00/M | $8.00/M | Reasoning model, supports Tools |
| deepseek-v3 | $1.00/M | $4.00/M | General chat, 671B parameters |
| deepseek-v3-0324 | $1.20/M | $4.80/M | V3 stable version |
| deepseek-v3.1 | $4.00/M | $12.00/M | Latest Terminus version |
\r
> **Note**: Prices are in USD per million (M) tokens. Model availability may change; see [marketplace.aisa.one/pricing](https://marketplace.aisa.one/pricing) for the latest list.
\r
## Quick Start

```bash
export AISA_API_KEY="your-key"
```

## API Endpoints

### OpenAI Compatible Interface

```
POST https://api.aisa.one/v1/chat/completions
```
\r
#### Qwen Example

```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-max",
    "messages": [
      {"role": "system", "content": "You are a professional Chinese assistant."},
      {"role": "user", "content": "Please explain what a large language model is."}
    ],
    "temperature": 0.7,
    "max_tokens": 1000
  }'
```
\r
#### DeepSeek Example

```bash
# DeepSeek-V3 general chat (671B parameters)
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3",
    "messages": [{"role": "user", "content": "Write a quicksort algorithm in Python"}],
    "temperature": 0.3
  }'

# DeepSeek-R1 deep reasoning (supports Tools)
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1",
    "messages": [{"role": "user", "content": "A farmer needs to cross a river with a wolf, a sheep, and a cabbage. The boat can only carry the farmer and one item at a time. If the farmer is not present, the wolf will eat the sheep, and the sheep will eat the cabbage. How can the farmer safely cross?"}]
  }'

# DeepSeek-V3.1 Terminus latest version
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3.1",
    "messages": [{"role": "user", "content": "Implement an LRU cache with get and put operations"}]
  }'
```
\r
#### Qwen3 Code Generation Example

```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-coder-plus",
    "messages": [{"role": "user", "content": "Implement a thread-safe Map in Go"}]
  }'
```
\r
#### Parameter Reference

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `model` | string | Yes | Model identifier |
| `messages` | array | Yes | Message list |
| `temperature` | number | No | Randomness (0-2, default 1) |
| `max_tokens` | integer | No | Maximum tokens to generate |
| `stream` | boolean | No | Stream output (default false) |
| `top_p` | number | No | Nucleus sampling parameter (0-1) |
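As a rough sketch (not part of the skill's client), the parameters above can be assembled into a request body with range checks before sending; `build_chat_payload` is a hypothetical helper:

```python
def build_chat_payload(model, messages, temperature=None, max_tokens=None,
                       stream=None, top_p=None):
    """Assemble a /v1/chat/completions request body, validating parameter ranges."""
    if not model or not messages:
        raise ValueError("model and messages are required")
    payload = {"model": model, "messages": messages}
    if temperature is not None:
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be in [0, 2]")
        payload["temperature"] = temperature
    if max_tokens is not None:
        payload["max_tokens"] = int(max_tokens)
    if stream is not None:
        payload["stream"] = bool(stream)
    if top_p is not None:
        if not 0 <= top_p <= 1:
            raise ValueError("top_p must be in [0, 1]")
        payload["top_p"] = top_p
    return payload
```

Optional parameters are only included when explicitly set, so the gateway's defaults apply otherwise.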
\r
#### Response Format

```json
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "qwen3-max",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A large language model (LLM) is a deep learning-based..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 30,
    "completion_tokens": 150,
    "total_tokens": 180,
    "cost": 0.001
  }
}
```
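Assuming the response shape above, the assistant text and token usage can be pulled out like this (a minimal sketch; `extract_reply` is an illustrative helper, not part of the SDK):

```python
def extract_reply(response):
    """Return (assistant_text, total_tokens) from a chat.completion response dict."""
    content = response["choices"][0]["message"]["content"]
    total_tokens = response.get("usage", {}).get("total_tokens", 0)
    return content, total_tokens
```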
\r
### Streaming Output

```bash
curl -X POST "https://api.aisa.one/v1/chat/completions" \
  -H "Authorization: Bearer $AISA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen-plus",
    "messages": [{"role": "user", "content": "Tell a Chinese folk story"}],
    "stream": true
  }'
```

Returns Server-Sent Events (SSE) format:

```
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":"Once"}}]}
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":" upon"}}]}
...
data: [DONE]
```
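A minimal way to consume that stream in Python, assuming each event's JSON carries a `choices[0].delta.content` fragment as shown above (`iter_sse_content` is an illustrative helper, not part of the skill):

```python
import json

def iter_sse_content(lines):
    """Yield text fragments from 'data:' lines of an SSE stream, stopping at [DONE]."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        event = json.loads(data)
        delta = event["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]
```

Joining the yielded fragments reconstructs the full response text incrementally.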
\r
## Python Client

### CLI Usage

```bash
# Qwen chat
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --message "Hello, please introduce yourself"

# Qwen3 code generation
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-coder-plus --message "Write a binary search algorithm"

# DeepSeek-R1 reasoning
python3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-r1 --message "Which is larger, 9.9 or 9.11? Please reason in detail"

# DeepSeek-V3 streaming chat
python3 {baseDir}/scripts/cn_llm_client.py chat --model deepseek-v3 --message "Tell a story" --stream

# With a system prompt
python3 {baseDir}/scripts/cn_llm_client.py chat --model qwen3-max --system "You are a classical poetry expert" --message "Write a poem about plum blossoms"

# Model comparison
python3 {baseDir}/scripts/cn_llm_client.py compare --models "qwen3-max,deepseek-v3" --message "What is quantum computing?"

# List supported models
python3 {baseDir}/scripts/cn_llm_client.py models
```
\r
### Python SDK Usage

```python
from cn_llm_client import CNLLMClient

client = CNLLMClient()  # Uses the AISA_API_KEY environment variable

# Qwen chat
response = client.chat(
    model="qwen3-max",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["choices"][0]["message"]["content"])

# Qwen3 code generation
response = client.chat(
    model="qwen3-coder-plus",
    messages=[
        {"role": "system", "content": "You are a professional programmer."},
        {"role": "user", "content": "Implement a singleton pattern in Python"}
    ],
    temperature=0.3
)

# Streaming output
for chunk in client.chat_stream(
    model="deepseek-v3",
    messages=[{"role": "user", "content": "Tell a story about an idiom"}]
):
    print(chunk, end="", flush=True)

# Model comparison
results = client.compare_models(
    models=["qwen3-max", "deepseek-v3", "deepseek-r1"],
    message="Explain what machine learning is"
)
for model, result in results.items():
    print(f"{model}: {result['response'][:100]}...")
```
\r
## Use Cases

### 1. Chinese Content Generation

```python
# Copywriting
response = client.chat(
    model="qwen3-max",
    messages=[
        {"role": "system", "content": "You are a professional copywriter."},
        {"role": "user", "content": "Write a product introduction for a smart watch"}
    ]
)
```

### 2. Code Development

```python
# Code generation and explanation
response = client.chat(
    model="qwen3-coder-plus",
    messages=[{"role": "user", "content": "Implement a thread-safe Map in Go"}]
)
```

### 3. Complex Reasoning

```python
# Mathematical reasoning
response = client.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Prove: for any positive integer n, n³ - n is divisible by 6"}]
)
```

### 4. Visual Understanding

```python
# Image understanding
response = client.chat(
    model="qwen3-vl-plus",
    messages=[
        {"role": "user", "content": [
            {"type": "text", "text": "Describe the content of this image"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]}
    ]
)
```
\r
### 5. Model Routing Strategy

```python
MODEL_MAP = {
    "chat": "qwen3-max",           # General chat
    "code": "qwen3-coder-plus",    # Code generation
    "reasoning": "deepseek-r1",    # Complex reasoning
    "vision": "qwen3-vl-plus",     # Visual understanding
    "fast": "qwen3-coder-flash",   # Fast response
    "translate": "qwen-mt-flash"   # Machine translation
}

def route_by_task(task_type: str, message: str) -> str:
    model = MODEL_MAP.get(task_type, "qwen3-max")
    response = client.chat(model=model, messages=[{"role": "user", "content": message}])
    return response["choices"][0]["message"]["content"]
```
\r
## Error Handling

Errors are returned as JSON with an `error` field:

```json
{
  "error": {
    "code": "model_not_found",
    "message": "Model 'xxx' is not available"
  }
}
```

Common HTTP error codes:

- `401` - Invalid or missing API Key
- `402` - Insufficient balance
- `404` - Model not found
- `429` - Rate limit exceeded
- `500` - Server error
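Based on the codes listed above, a client might branch on the HTTP status roughly like this (a sketch; treating only `429` and `500` as retryable is an assumption, not documented gateway behavior):

```python
def classify_error(status: int) -> str:
    """Map a documented error code to a suggested handling strategy."""
    if status == 401:
        return "check API key"
    if status == 402:
        return "top up balance"
    if status == 404:
        return "verify the model name"
    if status in (429, 500):
        return "retry with backoff"
    return "unexpected error"
```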
\r
## Pricing

| Model | Input ($/M) | Output ($/M) |
|-------|-------------|--------------|
| qwen3-max | $1.37 | $5.48 |
| qwen3-coder-plus | $2.86 | $28.60 |
| qwen3-coder-flash | $0.72 | $3.60 |
| qwen3-vl-plus | $0.43 | $4.30 |
| deepseek-v3 | $1.00 | $4.00 |
| deepseek-r1 | $2.00 | $8.00 |
| deepseek-v3.1 | $4.00 | $12.00 |
> Prices are in USD per million tokens. Each response includes `usage.cost` and `usage.credits_remaining`.
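Given per-million-token prices, a request's cost can be estimated from the token counts in `usage` (an illustrative calculation; the authoritative figure is the `usage.cost` field returned by the API):

```python
# USD per million tokens (input, output), taken from the table above
PRICES = {
    "qwen3-max": (1.37, 5.48),
    "deepseek-v3": (1.00, 4.00),
    "deepseek-r1": (2.00, 8.00),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost in USD from token counts and per-million prices."""
    input_price, output_price = PRICES[model]
    return (prompt_tokens * input_price + completion_tokens * output_price) / 1_000_000
```

For example, the 30-prompt / 150-completion usage shown in the response example would come to roughly $0.00086 on qwen3-max.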
\r
## Get Started

1. Register at [aisa.one](https://aisa.one)
2. Get an API Key
3. Top up (pay-as-you-go)
4. Set the environment variable: `export AISA_API_KEY="your-key"`

## Full API Reference

See the [API Reference](https://docs.aisa.one/reference/) for complete endpoint documentation.
Usage Guidance
This skill appears coherent: it uses a single API key (AISA_API_KEY) and calls https://api.aisa.one as documented. Before installing, confirm that you trust the AIsa provider and the api.aisa.one domain; do not reuse high-privilege or cloud-provider keys, and rotate the key if needed. If you plan to run the included Python script, review it in full locally and, if you have concerns, run it in a constrained environment (sandbox or container). Be aware that all prompts and any sensitive data you send go to the AIsa service and may incur usage costs; avoid sending secrets in prompts.
Capability Analysis
Type: OpenClaw Skill
Name: openclaw-aisa-cn-llm
Version: 1.0.1
The skill bundle is a legitimate API wrapper for the AIsa China LLM Gateway, providing access to models such as Qwen and DeepSeek. The Python client (scripts/cn_llm_client.py) uses standard libraries to interact with the documented endpoint (api.aisa.one) and requires a user-provided API key via an environment variable, with no evidence of malicious intent, data exfiltration, or prompt injection.
Capability Assessment
Purpose & Capability
The name and description claim a unified gateway for Chinese models, and the package requires only curl, python3, and AISA_API_KEY. The included client talks to https://api.aisa.one and exposes model selection, streaming, and comparison features that match the stated purpose.
Instruction Scope
SKILL.md provides examples and an OpenAI-compatible API endpoint (api.aisa.one). Instructions and examples only reference the AISA_API_KEY and network calls to the gateway; there are no instructions to read unrelated files, system secrets, or send data to third-party endpoints.
Install Mechanism
No install spec: the skill is instruction-only plus an included Python script. There are no downloads from untrusted URLs and no archive extraction. It uses the system Python and curl, which is reasonable for this CLI client.
Credentials
Only a single env var (AISA_API_KEY) is required and it is the primary credential used by the client. No unrelated tokens, keys, or config paths are requested.
Persistence & Privilege
`always` is false, and the skill does not request permanent system or cross-skill changes. `disable-model-invocation` is left at its default, so the agent may call the skill autonomously; this is normal and not itself a red flag.
How to Use
  1. Make sure OpenClaw is installed (local or Docker)
  2. Run the install command in chat: /install openclaw-aisa-cn-llm
  3. After installation, invoke the skill by name or use /openclaw-aisa-cn-llm
  4. Provide required inputs per the skill's parameter spec and get structured output
Version History
v1.0.1
Version 1.0.1 of openclaw-aisa-cn-llm:
- No file changes detected; all functionality and documentation remain the same
- No updates, fixes, or new features introduced in this version
- This release is functionally identical to the previous version
v1.0.0
Initial release of OpenClaw CN-LLM v1.0.0:
- Provides a unified API gateway for major Chinese LLMs, including Qwen, DeepSeek, GLM, Baichuan, and Moonshot
- OpenAI-compatible API with a single API key for all supported models
- Supports model selection, chat, advanced reasoning, code generation, long text processing, and model comparison
- Command-line and Python SDK clients included for easy integration
- Streaming output via server-sent events (SSE) supported
- Comprehensive documentation, usage examples, and model pricing tables included
Metadata
Slug openclaw-aisa-cn-llm
Version 1.0.1
License MIT-0
All-time Installs 2
Active Installs 2
Total Versions 2
