
Advanced Configuration

Deep dive into OpenClaw configuration

💡 Chapter Goal: Master OpenClaw's advanced configuration techniques, including Antigravity Manager setup, multi-model switching, cost optimization, and performance tuning.

⚙️ Chapter Contents

  • 11.1 Antigravity Manager Complete Configuration Guide
  • 11.2 Multi-Model Switching Strategy
  • 11.3 Memory Search Configuration
  • 11.4 Cost Optimization Solutions
  • 11.5 Performance Tuning Tips
  • 11.6 Detailed Model Provider Configuration
  • 11.7 Detailed Tool System
  • 11.8 CLI Command Full Reference

11.1 Antigravity Manager Complete Configuration Guide

11.1.1 What is Antigravity Manager?

Definition:

Antigravity Manager is an AI API proxy tool that allows you to access multiple AI models (Claude, Gemini, GPT, etc.) through a local service, unifying API key and request management.

Project Address: https://github.com/lbjlaq/Antigravity-Manager

Why use Antigravity Manager?

By combining OpenClaw with Antigravity Manager, you can:

  • Local Deployment: All data is processed locally, protecting privacy
  • Unified Management: One tool to manage all AI models
  • Cost Control: Use your own API keys, avoiding intermediary markups
  • Flexible Switching: Switch between different models at any time without modifying code
  • Skill Expansion: Install various practical Skills via ClawHub

Antigravity Manager Architecture Diagram - Unified Management of Multiple AI Services

11.1.2 System Requirements and Prerequisites

System Requirements:

  • macOS 10.15+, Windows 10+, or Linux
  • At least 4GB RAM
  • Stable network connection

What you need to prepare:

  1. Antigravity Manager installation package
  2. API Key for AI models (or an exclusive account)
  3. Basic command-line operation skills

11.1.3 Install Antigravity Manager

macOS Users

  1. Visit Antigravity Manager Releases
  2. Download the latest .dmg file
  3. Double-click the .dmg file and drag the application to the Applications folder
  4. Open the application (for the first time, you might need to allow it in "System Preferences → Security & Privacy")

Windows Users

  1. Visit Antigravity Manager Releases
  2. Download the latest .exe installer package
  3. Run the installer and follow the prompts to complete the installation
  4. Launch Antigravity Manager

Linux Users

  1. Visit Antigravity Manager Releases
  2. Download the latest .AppImage or .deb file
  3. Grant execution permissions and run:
chmod +x Antigravity-Manager-*.AppImage
./Antigravity-Manager-*.AppImage

Verify Installation

After launching, the application runs a local API service at the default address http://127.0.0.1:8045

Visit this address in your browser. If you can see the management interface, the installation is successful.

11.1.4 Configure AI Model Accounts

Antigravity Manager requires you to provide AI model API keys to function.

Solution 1: Use Official API

Claude API

  1. Visit Anthropic Console
  2. Register an account and link a credit card
  3. Create an API Key
  4. Copy and save it

Gemini API

  1. Visit Google AI Studio
  2. Log in with your Google account
  3. Create an API Key
  4. Copy and save it

OpenAI API

  1. Visit OpenAI Platform
  2. Register an account and link a credit card
  3. Create an API Key
  4. Copy and save it

Solution 2: Purchase an Exclusive Account

If you don't want to apply for an API key yourself, you can purchase an exclusive account:

Recommended: 12-month Gemini 3 Pro student exclusive account (supports Antigravity)

Advantages:

  • ✅ Exclusive account, no need to worry about rate limits
  • ✅ Supports Antigravity Manager
  • ✅ 12-month validity
  • ✅ High cost-effectiveness
  • ✅ Ready to use immediately

Configure API Key in Antigravity Manager

  1. Open the Antigravity Manager management interface
  2. Click "API Keys"
  3. Select the corresponding AI service provider (Claude, Gemini, OpenAI)
  4. Enter the API Key
  5. Click "Save"

11.1.5 Generate User Token

The User Token is the credential for OpenClaw to access Antigravity Manager.

  1. In the Antigravity Manager interface, click "User Tokens" in the top right corner
  2. Click "Create New Token"
  3. Copy the generated Token (e.g., sk-82bc103b51f24af888af525a7835e87c)
  4. ⚠️ Important: Save this Token securely; it is displayed only once!

11.1.6 Configure OpenClaw

Configure Claude Sonnet 4.5 (Default Model)

This is the most commonly used model, suitable for daily conversations and code generation.

# Add local-anthropic provider
cat ~/.openclaw/openclaw.json | jq '.models.providers["local-anthropic"] = {
  "baseUrl": "http://127.0.0.1:8045",
  "apiKey": "YOUR_USER_TOKEN",
  "auth": "api-key",
  "api": "anthropic-messages",
  "models": [
    {
      "id": "claude-sonnet-4-5-20250929",
      "name": "Local Claude Sonnet 4.5",
      "reasoning": false,
      "input": ["text"],
      "cost": {
        "input": 0,
        "output": 0,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 200000,
      "maxTokens": 8192
    }
  ]
}' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

# Set as default model
openclaw config set agents.defaults.model.primary "local-anthropic/claude-sonnet-4-5-20250929"

Note: Replace YOUR_USER_TOKEN with the User Token generated in section 11.1.5.
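For readers without jq, the same provider merge can be sketched in Python. This is a minimal in-memory sketch; reading and writing ~/.openclaw/openclaw.json with json.load/json.dump is left out, and the token value is a placeholder:

```python
import json

def add_provider(config: dict, name: str, provider: dict) -> dict:
    """Merge a provider entry into the config, mirroring the jq assignment above."""
    config.setdefault("models", {}).setdefault("providers", {})[name] = provider
    return config

config = {"models": {"providers": {}}}
config = add_provider(config, "local-anthropic", {
    "baseUrl": "http://127.0.0.1:8045",
    "apiKey": "YOUR_USER_TOKEN",          # placeholder, not a real token
    "auth": "api-key",
    "api": "anthropic-messages",
    "models": [{"id": "claude-sonnet-4-5-20250929", "contextWindow": 200000}],
})
print(json.dumps(config, indent=2))
```

Writing the result back with `json.dump` to a temp file and then renaming it mimics the `> /tmp/... && mv` pattern used by the jq commands.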

Configure Claude Opus 4.5 Thinking (Reasoning Model)

This is Claude's reasoning model, suitable for complex problems and deep thinking.

cat ~/.openclaw/openclaw.json | jq '.models.providers["local-anthropic-opus"] = {
  "baseUrl": "http://127.0.0.1:8045",
  "apiKey": "你的User_Token",
  "auth": "api-key",
  "api": "anthropic-messages",
  "models": [
    {
      "id": "claude-opus-4-5-thinking",
      "name": "Local Claude Opus 4.5 Thinking",
      "reasoning": true,
      "input": ["text"],
      "cost": {
        "input": 0,
        "output": 0,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 200000,
      "maxTokens": 8192
    }
  ]
}' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

Configure Gemini 3 Pro Image (Multimodal Model)

This is Google's multimodal model, supporting image recognition and analysis.

cat ~/.openclaw/openclaw.json | jq '.models.providers["local-google"] = {
  "baseUrl": "http://127.0.0.1:8045/v1beta",
  "apiKey": "YOUR_USER_TOKEN",
  "auth": "api-key",
  "api": "google-generative-ai",
  "models": [
    {
      "id": "gemini-3-pro-image",
      "name": "Local Gemini 3 Pro Image",
      "reasoning": false,
      "input": ["text", "image"],
      "cost": {
        "input": 0,
        "output": 0,
        "cacheRead": 0,
        "cacheWrite": 0
      },
      "contextWindow": 2000000,
      "maxTokens": 8192
    }
  ]
}' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

11.1.7 Verify Configuration

Check Model List

openclaw models list

You should see:

Model                                      Input      Ctx      Local Auth  Tags
local-anthropic/claude-sonnet-4-5-20250929 text       195k     yes   yes   default
local-anthropic-opus/claude-opus-4-5-thinking text    195k     yes   yes   configured
local-google/gemini-3-pro-image            text,image 1953k    yes   yes   configured

Restart Gateway

openclaw gateway restart

Test Connection

openclaw message send "Hello, please introduce yourself"

If a reply is returned normally, the configuration is successful.

11.1.8 Usage

Use Default Model (Claude Sonnet 4.5)

Just send a message:

openclaw message send "Write a Python script that prints Hello World"

Switch to Opus Thinking Model

Suitable for complex problems requiring deep thought:

openclaw config set agents.defaults.model.primary "local-anthropic-opus/claude-opus-4-5-thinking"
openclaw gateway restart

Switch to Gemini Image Model

Suitable for scenarios requiring image recognition:

openclaw config set agents.defaults.model.primary "local-google/gemini-3-pro-image"
openclaw gateway restart

Temporarily Use a Specific Model

Use a specific model temporarily without modifying the default configuration:

# Use Opus Thinking
openclaw agent --model "local-anthropic-opus/claude-opus-4-5-thinking" --message "Explain the principles of quantum computing"

# Use Gemini Image
openclaw agent --model "local-google/gemini-3-pro-image" --message "Analyze this image" --image ./photo.jpg

11.1.9 Model Selection Guide

Claude Sonnet 4.5

Applicable Scenarios:

  • Daily conversations
  • Code generation
  • Document writing
  • Quick Q&A

Features:

  • Fast speed
  • Low cost
  • High quality
  • Context window: 200k tokens

Claude Opus 4.5 Thinking

Applicable Scenarios:

  • Complex reasoning
  • Mathematical problems
  • Algorithm optimization
  • Deep analysis

Features:

  • Strong reasoning ability
  • Visible thought process
  • Suitable for complex problems
  • Context window: 200k tokens

Gemini 3 Pro Image

Applicable Scenarios:

  • Image recognition
  • Multimodal tasks
  • Document analysis
  • Design review

Features:

  • Supports image input
  • Ultra-large context window
  • Accurate recognition
  • Context window: 2000k tokens

11.1.10 Advanced Configuration

Configure Model Alias

Give the model an easy-to-remember name:

openclaw config set agents.defaults.models."local-anthropic/claude-sonnet-4-5-20250929".alias "My Claude"

Add Multiple API Keys

If you have multiple Antigravity accounts, you can configure multiple providers:

cat ~/.openclaw/openclaw.json | jq '.models.providers["local-anthropic-2"] = {
  "baseUrl": "http://127.0.0.1:8045",
  "apiKey": "ANOTHER_USER_TOKEN",
  "auth": "api-key",
  "api": "anthropic-messages",
  "models": [...]
}' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

Configure Cost Tracking

Although local API cost is 0, you can set virtual costs to track usage:

{
  "cost": {
    "input": 0.003,
    "output": 0.015,
    "cacheRead": 0.0003,
    "cacheWrite": 0.00375
  }
}
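With virtual costs set, tracked spend reduces to a weighted sum over token counts. A small Python sketch, assuming the cost fields are USD per 1,000 tokens (the values above roughly match Claude Sonnet's public per-1K pricing):

```python
def request_cost(usage: dict, rates: dict) -> float:
    """Cost in USD, assuming rates are expressed per 1,000 tokens."""
    return sum(usage.get(kind, 0) / 1000 * rates.get(kind, 0.0)
               for kind in ("input", "output", "cacheRead", "cacheWrite"))

rates = {"input": 0.003, "output": 0.015, "cacheRead": 0.0003, "cacheWrite": 0.00375}
# 2K input + 500 output tokens:
print(round(request_cost({"input": 2000, "output": 500}, rates), 4))  # → 0.0135
```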

Backup Configuration

cp ~/.openclaw/openclaw.json ~/.openclaw/openclaw.json.backup

Restore Configuration

cp ~/.openclaw/openclaw.json.backup ~/.openclaw/openclaw.json
openclaw gateway restart

11.1.11 Quick Command Reference

# View model list
openclaw models list

# View current default model
openclaw config get agents.defaults.model.primary

# Switch default model
openclaw config set agents.defaults.model.primary "MODEL_ID"

# Restart Gateway
openclaw gateway restart

# View configuration file
cat ~/.openclaw/openclaw.json | jq '.models.providers'

# Send message
openclaw message send "YOUR_MESSAGE"

# Temporarily use a specific model
openclaw agent --model "MODEL_ID" --message "YOUR_MESSAGE"

11.1.12 Model ID Quick Reference

local-anthropic/claude-sonnet-4-5-20250929
local-anthropic-opus/claude-opus-4-5-thinking
local-google/gemini-3-pro-image

11.1.13 Troubleshooting

Problem 1: Model list is empty

Cause: Configuration file format error or incorrect path

Solution:

# Check configuration file
cat ~/.openclaw/openclaw.json | jq '.models.providers'

# If an error is returned, restore backup
cp ~/.openclaw/openclaw.json.backup ~/.openclaw/openclaw.json

Problem 2: API connection failed

Cause: Antigravity Manager not started or port occupied

Solution:

# Check if API is normal
curl http://127.0.0.1:8045/v1/models

# Check port occupation (macOS/Linux)
lsof -i :8045

# Restart Antigravity Manager

Problem 3: Model not effective after configuration

Cause: Forgot to restart Gateway

Solution:

openclaw gateway restart

Problem 4: User Token invalid

Cause: Token expired or entered incorrectly

Solution:

  1. Regenerate Token in Antigravity Manager
  2. Update apiKey in the configuration file
  3. Restart Gateway

  4. Test the connection:

openclaw test api


11.1.14 Practical Cases

Case 1: Configure Claude Sonnet

Steps:

  1. Get Claude API Key
  2. Add in Antigravity Manager
  3. Configure OpenClaw
  4. Test usage

Result:

You: Hello
OpenClaw (Claude Sonnet): Hello! I am Claude...


Case 2: Multi-Account Management

Scenario: Manage multiple Claude accounts

Configuration:

  • Claude Account 1: Daily use
  • Claude Account 2: Backup
  • Claude Account 3: Peak usage

Advantages:

  • Distribute load
  • Avoid rate limits
  • Improve availability

11.2 Multi-Model Switching Strategy

11.2.1 Model Feature Comparison

| Model | Advantages | Disadvantages | Applicable Scenarios |
|------|------|------|----------|
| Claude Sonnet | Good balance | Medium price | Daily conversation |
| Claude Opus | Strongest capability | Most expensive | Complex tasks |
| GPT-5.2 | Rich features | Slower response | Creative work |
| Gemini 3 Pro | Large free tier | Average capability | Simple tasks |
| DeepSeek-V3 | High cost-effectiveness | Optimized mainly for Chinese | Programming tasks |

11.2.2 Scenario-Based Selection Strategy

Daily Conversation:

Recommendation: Claude Sonnet 4.5

Reasons:

  • Fast response speed
  • Stable quality
  • Moderate price

Complex Reasoning:

Recommendation: Claude Opus 4.6

Reasons:

  • Strongest reasoning ability
  • Highest accuracy
  • Suitable for difficult problems

Image Recognition:

Recommendation: Gemini 3 Pro

Reasons:

  • Strong multimodal capability
  • Large free tier
  • Accurate recognition

Programming Tasks:

Recommendation: DeepSeek-V3

Reasons:

  • Strong coding ability
  • Low price
  • Chinese friendly
11.2.3 Model Disaster Recovery Mechanism (Fallback)

🛡️ High Availability Guarantee: Ensure uninterrupted service by configuring primary and fallback models.

What is Model Disaster Recovery?

When the primary model encounters any of the following situations, the system automatically switches to a fallback model:

  • API call failure
  • Request timeout
  • Rate limiting
  • Service unavailability

Service Disaster Recovery Configuration Example

Basic Disaster Recovery Configuration

Configuration file path: ~/.openclaw/openclaw.json

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-6",
        "fallbacks": [
          "openai-codex/gpt-5.3-codex",
          "google-antigravity/claude-opus-4-6-thinking"
        ]
      }
    },
    "list": [
      {
        "id": "main",
        "default": true,
        "model": {
          "primary": "anthropic/claude-opus-4-6",
          "fallbacks": [
            "openai-codex/gpt-5.3-codex",
            "google-antigravity/claude-opus-4-6-thinking"
          ]
        }
      }
    ]
  }
}

Workflow:

1. Attempt to use primary model: anthropic/claude-opus-4-6
   ↓ Failure
2. Switch to fallback model 1: openai-codex/gpt-5.3-codex
   ↓ Failure
3. Switch to fallback model 2: google-antigravity/claude-opus-4-6-thinking
   ↓ Failure
4. Return error message
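The workflow above is just "try each model in order until one succeeds." A minimal Python sketch; the call function, exception type, and the flaky behavior below are illustrative placeholders, not OpenClaw internals:

```python
def send_with_fallback(prompt, models, call):
    """Try the primary, then each fallback; raise only if every model fails."""
    errors = []
    for model in models:
        try:
            return model, call(model, prompt)
        except RuntimeError as exc:  # stand-in for API failure / timeout / rate limit
            errors.append((model, str(exc)))
    raise RuntimeError(f"all models failed: {errors}")

chain = ["anthropic/claude-opus-4-6",
         "openai-codex/gpt-5.3-codex",
         "google-antigravity/claude-opus-4-6-thinking"]

def flaky(model, prompt):
    # Simulate the primary provider being rate-limited.
    if model.startswith("anthropic/"):
        raise RuntimeError("rate limited")
    return f"reply from {model}"

print(send_with_fallback("hi", chain, flaky)[0])  # → openai-codex/gpt-5.3-codex
```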

Practical Case 1: Cost-Optimized Disaster Recovery

Scenario: Prioritize cheaper models, use high-quality models if they fail

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "deepseek/deepseek-chat",
        "fallbacks": [
          "anthropic/claude-sonnet-4-5",
          "anthropic/claude-opus-4-6"
        ]
      }
    }
  }
}

Advantages:

  • ✅ Daily use of DeepSeek (extremely low cost)
  • ✅ Automatically switches to Claude Sonnet when DeepSeek is rate-limited
  • ✅ Uses Claude Opus as a fallback for critical task failures
  • ✅ Cost savings of 80%+

Practical Case 2: Performance-First Disaster Recovery

Scenario: Prioritize the strongest model, degrade if it fails

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-6",
        "fallbacks": [
          "anthropic/claude-sonnet-4-5",
          "deepseek/deepseek-chat"
        ]
      }
    }
  }
}

Advantages:

  • ✅ Ensures best quality
  • ✅ Automatic degradation during peak hours
  • ✅ Guarantees uninterrupted service

Practical Case 3: Multi-Provider Disaster Recovery

Scenario: Cross-provider disaster recovery to avoid single points of failure

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4-5",
        "fallbacks": [
          "openai/gpt-4o",
          "google/gemini-2.0-flash-exp",
          "deepseek/deepseek-chat"
        ]
      }
    }
  }
}

Advantages:

  • ✅ Switches to OpenAI if Anthropic fails
  • ✅ Switches to Google if OpenAI fails
  • ✅ Finally uses DeepSeek as a fallback
  • ✅ Maximizes service availability

Command Line Configuration

# Set primary model
openclaw config set agents.defaults.model.primary "anthropic/claude-opus-4-6"

# Set fallback models (requires manual JSON editing)
# Or use jq command
cat ~/.openclaw/openclaw.json | jq '.agents.defaults.model.fallbacks = [
  "openai-codex/gpt-5.3-codex",
  "google-antigravity/claude-opus-4-6-thinking"
]' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

# Restart Gateway for configuration to take effect
openclaw gateway restart

Verify Disaster Recovery Configuration

# View current configuration
openclaw config get agents.defaults.model

# Output example:
{
  "primary": "anthropic/claude-opus-4-6",
  "fallbacks": [
    "openai-codex/gpt-5.3-codex",
    "google-antigravity/claude-opus-4-6-thinking"
  ]
}

Disaster Recovery Best Practices

1. Choose different providers:

✅ Recommended: Anthropic → OpenAI → Google
❌ Not recommended: Claude Opus → Claude Sonnet (same provider)

2. Configure by capability gradient:

✅ Recommended: High capability → Medium capability → Low capability
❌ Not recommended: Low capability → High capability (wastes resources)

3. Consider cost factors:

✅ Recommended: Cheap → Medium → Expensive (cost optimization)
✅ Recommended: Expensive → Medium → Cheap (quality priority)

4. Limit fallback quantity:

✅ Recommended: 2-3 fallback models
❌ Not recommended: 5+ fallback models (overly complex)

11.2.4 Multiple Authentication Profiles + Token Rotation

🔐 Account Management: Configure multiple authentication profiles to achieve account rotation and load balancing.

What is an Authentication Profile?

An authentication profile allows you to configure multiple accounts for the same provider. The system will rotate through them in a specified order to avoid single-account rate limits.

Basic Configuration

Configuration file path: ~/.openclaw/openclaw.json

{
  "auth": {
    "profiles": {
      "openai-codex:default": {
        "provider": "openai-codex",
        "mode": "oauth"
      },
      "anthropic:default": {
        "provider": "anthropic",
        "mode": "token"
      },
      "anthropic:manual": {
        "provider": "anthropic",
        "mode": "token"
      },
      "google-antigravity:mail1@gmail.com": {
        "provider": "google-antigravity",
        "mode": "oauth",
        "email": "mail1@gmail.com"
      },
      "google-antigravity:mail2@gmail.com": {
        "provider": "google-antigravity",
        "mode": "oauth"
      }
    },
    "order": {
      "anthropic": [
        "anthropic:default",
        "anthropic:manual"
      ],
      "google-antigravity": [
        "google-antigravity:mail1@gmail.com",
        "google-antigravity:mail2@gmail.com"
      ]
    }
  }
}

Configuration Description

profiles field:

  • Defines all available authentication configurations
  • Format: "Provider:Identifier"
  • mode: Authentication method (oauth or token)
  • email: OAuth account email (optional)

order field:

  • Defines the usage order of accounts for each provider
  • The system will rotate through them in order
  • Automatically switches to the next account if the current one hits a rate limit

Practical Case 1: Anthropic Dual Account Rotation

Scenario: Configure 2 Claude API Keys to avoid rate limits

{
  "auth": {
    "profiles": {
      "anthropic:account1": {
        "provider": "anthropic",
        "mode": "token"
      },
      "anthropic:account2": {
        "provider": "anthropic",
        "mode": "token"
      }
    },
    "order": {
      "anthropic": [
        "anthropic:account1",
        "anthropic:account2"
      ]
    }
  }
}

Configure API Key:

# Configure two API Keys separately in Antigravity Manager
# Or add in OpenClaw configuration:
{
  "models": {
    "providers": {
      "anthropic": {
        "apiKey": "sk-ant-api-key-1",
        ...
      },
      "anthropic-2": {
        "apiKey": "sk-ant-api-key-2",
        ...
      }
    }
  }
}

Workflow:

1. Use account1 to send request
2. account1 reaches rate limit → automatically switches to account2
3. account2 reaches rate limit → waits for account1 to recover
4. Repeats
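The rotation described above can be pictured as a cursor over the `order` list that skips rate-limited profiles. A toy Python sketch, not OpenClaw's actual scheduler:

```python
class ProfileRotator:
    """Rotate through auth profiles, skipping ones marked rate-limited."""
    def __init__(self, order):
        self.order = order      # e.g. the auth.order list for one provider
        self.limited = set()

    def pick(self):
        for profile in self.order:
            if profile not in self.limited:
                return profile
        # Every account is limited: reset and retry from the top (i.e. wait for recovery).
        self.limited.clear()
        return self.order[0]

    def mark_limited(self, profile):
        self.limited.add(profile)

rot = ProfileRotator(["anthropic:account1", "anthropic:account2"])
print(rot.pick())                        # → anthropic:account1
rot.mark_limited("anthropic:account1")
print(rot.pick())                        # → anthropic:account2
```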

Practical Case 2: Google Multi-Email Rotation

Scenario: Use multiple Google accounts to access Gemini

{
  "auth": {
    "profiles": {
      "google-antigravity:work@gmail.com": {
        "provider": "google-antigravity",
        "mode": "oauth",
        "email": "work@gmail.com"
      },
      "google-antigravity:personal@gmail.com": {
        "provider": "google-antigravity",
        "mode": "oauth",
        "email": "personal@gmail.com"
      },
      "google-antigravity:backup@gmail.com": {
        "provider": "google-antigravity",
        "mode": "oauth",
        "email": "backup@gmail.com"
      }
    },
    "order": {
      "google-antigravity": [
        "google-antigravity:work@gmail.com",
        "google-antigravity:personal@gmail.com",
        "google-antigravity:backup@gmail.com"
      ]
    }
  }
}

Advantages:

  • ✅ 3 accounts rotate, reducing rate limit probability by 66%
  • ✅ Free tiers stack (3x free tier)
  • ✅ Automatic load balancing during peak hours

Practical Case 3: Mixed Authentication Mode

Scenario: Use both OAuth and API Token simultaneously

{
  "auth": {
    "profiles": {
      "anthropic:oauth-account": {
        "provider": "anthropic",
        "mode": "oauth"
      },
      "anthropic:token-account": {
        "provider": "anthropic",
        "mode": "token"
      }
    },
    "order": {
      "anthropic": [
        "anthropic:oauth-account",
        "anthropic:token-account"
      ]
    }
  }
}

Use Cases:

  • OAuth account: Daily use (more secure)
  • Token account: Backup (more stable)

Configuration Best Practices

1. Recommended account quantity:

✅ Recommended: 2-3 accounts
❌ Not recommended: 5+ accounts (complex to manage)

2. Choice of authentication method:

OAuth: More secure, suitable for personal accounts
Token: More stable, suitable for API keys

3. Rotation strategy:

✅ Sort by usage frequency (high frequency → low frequency)
✅ Sort by account level (paid → free)

4. Monitoring and maintenance:

# View currently used authentication configuration
openclaw config get auth.profiles

# Test if authentication is valid
openclaw test api

11.2.5 Automatic Switching Configuration

Switching based on task type:

{
  "rules": [
    {
      "condition": "task.type === 'code'",
      "model": "deepseek-v3"
    },
    {
      "condition": "task.type === 'image'",
      "model": "gemini-3-pro"
    },
    {
      "condition": "task.complexity === 'high'",
      "model": "claude-opus-4.6"
    },
    {
      "condition": "default",
      "model": "claude-sonnet-4.5"
    }
  ]
}

Switching based on cost:

{
  "rules": [
    {
      "condition": "cost.daily < 10",
      "model": "claude-opus-4.6"
    },
    {
      "condition": "cost.daily >= 10",
      "model": "claude-sonnet-4.5"
    }
  ]
}
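Rule lists like these are naturally evaluated top-down, first match wins. A Python sketch of that dispatch, with the condition strings simplified into callables (the real condition DSL is OpenClaw-specific and not modeled here):

```python
def pick_model(task, rules):
    """Return the model of the first matching rule, else the 'default' rule's model."""
    default = None
    for rule in rules:
        cond, model = rule["condition"], rule["model"]
        if cond == "default":
            default = model          # remember the fallback, keep scanning
        elif cond(task):
            return model
    return default

rules = [
    {"condition": lambda t: t.get("type") == "code", "model": "deepseek-v3"},
    {"condition": lambda t: t.get("complexity") == "high", "model": "claude-opus-4.6"},
    {"condition": "default", "model": "claude-sonnet-4.5"},
]
print(pick_model({"type": "code"}, rules))  # → deepseek-v3
print(pick_model({"type": "chat"}, rules))  # → claude-sonnet-4.5
```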

11.3 Memory Search Configuration

🧠 Intelligent Memory: Configure Memory Search to allow OpenClaw to remember historical conversations, providing more intelligent context awareness.

11.3.1 What is Memory Search?

Memory Search is OpenClaw's memory system, which can:

  • Remember historical conversation content
  • Search relevant session records
  • Provide context awareness
  • Support hybrid retrieval (vector + text)

11.3.2 Basic Configuration

Configuration file path: ~/.openclaw/openclaw.json

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "sources": ["memory", "sessions"],
        "experimental": {
          "sessionMemory": true
        },
        "provider": "gemini",
        "remote": {
          "apiKey": "AIzaSy**************************"
        },
        "fallback": "gemini",
        "model": "gemini-embedding-001",
        "query": {
          "hybrid": {
            "enabled": true,
            "vectorWeight": 0.7,
            "textWeight": 0.3
          }
        }
      }
    }
  }
}

11.3.3 Configuration Item Details

sources (Data Sources)

{
  "sources": ["memory", "sessions"]
}

Optional values:

  • memory: Long-term memory (cross-session)
  • sessions: Session records (current session)

Recommended configuration:

// Use only long-term memory
"sources": ["memory"]

// Use both long-term memory and session records
"sources": ["memory", "sessions"]

experimental (Experimental Features)

{
  "experimental": {
    "sessionMemory": true
  }
}

sessionMemory:

  • true: Enable session memory (recommended)
  • false: Disable session memory

provider (Embedding Model Provider)

{
  "provider": "gemini"
}

Supported providers:

  • gemini: Google Gemini (recommended, free)
  • openai: OpenAI Embeddings
  • local: Local embedding models

Recommendation: Use Gemini (free and effective)

remote (Remote API Configuration)

{
  "remote": {
    "apiKey": "AIzaSy**************************"
  }
}

Get Gemini API Key:

  1. Visit Google AI Studio
  2. Log in with your Google account
  3. Create an API Key
  4. Copy and paste it into the configuration

fallback (Fallback Provider)

{
  "fallback": "gemini"
}

When the primary provider fails, use the fallback provider.

model (Embedding Model)

{
  "model": "gemini-embedding-001"
}

Gemini Embedding Models:

  • gemini-embedding-001: Standard model (recommended)
  • text-embedding-004: Advanced model

OpenAI Embedding Models:

  • text-embedding-3-small: Small model (cheaper)
  • text-embedding-3-large: Large model (better performance)

query (Query Configuration)

{
  "query": {
    "hybrid": {
      "enabled": true,
      "vectorWeight": 0.7,
      "textWeight": 0.3
    }
  }
}

hybrid (Hybrid Retrieval):

  • enabled: Whether to enable hybrid retrieval
  • vectorWeight: Vector search weight (0-1)
  • textWeight: Text search weight (0-1)

Weighting suggestions:

Semantic search priority: vectorWeight: 0.7, textWeight: 0.3
Keyword search priority: vectorWeight: 0.3, textWeight: 0.7
Balanced mode: vectorWeight: 0.5, textWeight: 0.5
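Hybrid retrieval blends the two relevance signals linearly, so the weights decide which one dominates. A Python sketch, assuming both scores are normalized to the 0-1 range:

```python
def hybrid_score(vector_score, text_score, vector_weight=0.7, text_weight=0.3):
    """Blend vector (semantic) and text (keyword) relevance; weights should sum to 1."""
    return vector_weight * vector_score + text_weight * text_score

# A document with a strong semantic match but weak keyword overlap:
print(round(hybrid_score(0.9, 0.2), 2))            # → 0.69
# The same document under keyword-priority weights:
print(round(hybrid_score(0.9, 0.2, 0.3, 0.7), 2))  # → 0.41
```

Results are ranked by this blended score, which is why shifting weight toward `textWeight` demotes documents that match only in meaning.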

11.3.4 Practical Case 1: Basic Configuration (Gemini)

Scenario: Use the free Gemini embedding model

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "sources": ["memory", "sessions"],
        "experimental": {
          "sessionMemory": true
        },
        "provider": "gemini",
        "remote": {
          "apiKey": "YOUR_GEMINI_API_KEY"
        },
        "model": "gemini-embedding-001",
        "query": {
          "hybrid": {
            "enabled": true,
            "vectorWeight": 0.7,
            "textWeight": 0.3
          }
        }
      }
    }
  }
}

Advantages:

  • ✅ Completely free
  • ✅ Excellent performance
  • ✅ Simple configuration

11.3.5 Practical Case 2: Advanced Configuration (OpenAI)

Scenario: Use OpenAI embedding model (higher precision)

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "sources": ["memory", "sessions"],
        "experimental": {
          "sessionMemory": true
        },
        "provider": "openai",
        "remote": {
          "apiKey": "sk-your-openai-api-key"
        },
        "fallback": "gemini",
        "model": "text-embedding-3-large",
        "query": {
          "hybrid": {
            "enabled": true,
            "vectorWeight": 0.8,
            "textWeight": 0.2
          }
        }
      }
    }
  }
}

Advantages:

  • ✅ Higher precision
  • ✅ Supports more languages
  • ✅ Has a fallback option

Cost:

  • text-embedding-3-small: $0.02/million tokens
  • text-embedding-3-large: $0.13/million tokens

11.3.6 Practical Case 3: Local Deployment (Privacy First)

Scenario: Use a local embedding model to protect privacy

{
  "agents": {
    "defaults": {
      "memorySearch": {
        "sources": ["memory", "sessions"],
        "experimental": {
          "sessionMemory": true
        },
        "provider": "local",
        "model": "all-MiniLM-L6-v2",
        "query": {
          "hybrid": {
            "enabled": true,
            "vectorWeight": 0.6,
            "textWeight": 0.4
          }
        }
      }
    }
  }
}

Advantages:

  • ✅ Completely local, protects privacy
  • ✅ No API Key required
  • ✅ No usage restrictions

Disadvantages:

  • ❌ Requires local computing resources
  • ❌ Slightly lower precision than cloud models

11.3.7 Command Line Configuration

# Enable Memory Search
openclaw config set agents.defaults.memorySearch.experimental.sessionMemory true

# Set provider
openclaw config set agents.defaults.memorySearch.provider "gemini"

# Set API Key (requires manual JSON editing)
# Or use jq command
cat ~/.openclaw/openclaw.json | jq '.agents.defaults.memorySearch.remote.apiKey = "YOUR_API_KEY"' > /tmp/openclaw-temp.json && mv /tmp/openclaw-temp.json ~/.openclaw/openclaw.json

# Restart Gateway
openclaw gateway restart

11.3.8 Verify Configuration

# View current configuration
openclaw config get agents.defaults.memorySearch

# Test memory search
openclaw message send "Remember: I like drinking coffee"
openclaw message send "What do I like to drink?"

# Should return: According to my memory, you like to drink coffee.

11.3.9 Use Cases

Scenario 1: Personal Assistant

You: Remember my birthday is January 1, 1990
OpenClaw: Okay, remembered.

(A few days later)
You: When is my birthday?
OpenClaw: According to my memory, your birthday is January 1, 1990.

Scenario 2: Project Management

You: Project A's deadline is March 1, 2026
OpenClaw: Noted.

(A week later)
You: When is Project A due?
OpenClaw: Project A's deadline is March 1, 2026.

Scenario 3: Knowledge Accumulation

You: DeepSeek API price is $0.001/thousand tokens
OpenClaw: Remembered.

(Next conversation)
You: Which model is the cheapest?
OpenClaw: According to my memory, DeepSeek is the cheapest, priced at $0.001/thousand tokens.
11.3.10 Best Practices

1. Choose the Right Provider:

Free users: Gemini (free and effective)
Paid users: OpenAI (higher precision)
Privacy first: Local (fully local)

2. Adjust Hybrid Retrieval Weights:

Semantic understanding dominant: vectorWeight: 0.7-0.8
Keyword matching dominant: textWeight: 0.6-0.7
Balanced mode: 0.5 each

3. Regularly Clean Memory:

# Clean expired memory
openclaw memory clean --older-than 30d

# View memory usage
openclaw memory stats

4. Back up Important Memories:

# Export memory
openclaw memory export --output memory-backup.json

# Import memory
openclaw memory import memory-backup.json

11.3.11 Troubleshooting

Problem 1: Memory search not working

Cause: API Key invalid or not configured

Solution:

# Check configuration
openclaw config get agents.defaults.memorySearch

# Test the API Key
curl -H "Content-Type: application/json" \
  -d '{"contents":[{"parts":[{"text":"test"}]}]}' \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent?key=YOUR_API_KEY"

Problem 2: Search results inaccurate

Cause: Inappropriate hybrid retrieval weights

Solution:

// Adjust weights
{
  "query": {
    "hybrid": {
      "vectorWeight": 0.8,  // Increase semantic search weight
      "textWeight": 0.2
    }
  }
}

**Problem 3: Memory occupying too much space**

Cause: Long-term accumulation without cleanup

Solution:

```bash
# View memory size
openclaw memory stats

# Clean old memory
openclaw memory clean --older-than 60d

# Compact the memory database
openclaw memory compact
```

## 11.4 Cost Optimization Solutions

### 11.4.1 Token Consumption Analysis

**View Consumption Statistics**:

```bash
# View today's consumption
openclaw stats today

# Output example:
Today's Token Consumption:
- Claude Sonnet: 150K tokens ($0.75)
- Gemini Pro: 50K tokens ($0.00)
- Total: 200K tokens ($0.75)

Task Distribution:
- File Search: 30%
- Schedule Management: 20%
- Knowledge Management: 25%
- Other: 25%
```

**Consumption Optimization Suggestions**:

```
⚠️ High-consumption tasks:
- File search: 10K tokens per search
- Suggestion: narrow the search scope

✅ Optimizations:
- Use caching
- Reduce context
- Optimize prompts
```

### 11.4.2 Caching Strategy

**Enable Caching**:

```bash
# Enable response caching
openclaw config set cache.enabled true

# Set the cache TTL (hours)
openclaw config set cache.ttl 24

# Set the cache size (MB)
openclaw config set cache.maxSize 1000
```

**Caching Effect**:

```
Without caching:
- The same question calls the API every time
- Token consumption: 10K per call
- Cost: $0.05 per call

With caching enabled:
- The same question returns the cached response directly
- Token consumption: 0
- Cost: $0
- Savings: 100%
```
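The saving comes from answering repeated questions out of a local cache instead of re-calling the API. A minimal TTL-cache sketch (illustrative; OpenClaw's real cache is driven by the `cache.enabled`, `cache.ttl`, and `cache.maxSize` settings above, and its internals may differ):

```python
import time

class ResponseCache:
    """Tiny TTL cache keyed by the prompt string."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries = {}  # prompt -> (response, stored_at)

    def get(self, prompt):
        hit = self.entries.get(prompt)
        if hit is None:
            return None
        response, stored_at = hit
        if time.monotonic() - stored_at > self.ttl:
            del self.entries[prompt]  # entry expired, drop it
            return None
        return response

    def put(self, prompt, response):
        self.entries[prompt] = (response, time.monotonic())

cache = ResponseCache(ttl_seconds=24 * 3600)  # mirrors cache.ttl = 24 hours
cache.put("What is OpenClaw?", "An AI assistant gateway.")
```

A production cache would also enforce a size bound (the `cache.maxSize` analogue), e.g. by evicting the least recently used entry.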

### 11.4.3 Model Downgrade Solution

**Downgrade Strategy**:

1. Use cheaper models for simple tasks
2. Use more expensive models for complex tasks
3. Downgrade and retry after a failure

**Configuration Example**:

```json
{
  "fallback": [
    "claude-opus-4.6",    // Primary
    "claude-sonnet-4.5",  // Fallback 1
    "gemini-3-pro"        // Fallback 2
  ]
}
```
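The fallback list behaves like a retry chain: try the primary model and, on failure, move down the list. A sketch of that logic (illustrative; `call_model` is a stand-in callable, not a real OpenClaw API):

```python
def call_with_fallback(prompt, models, call_model):
    """Try each model in order; return (model, answer) for the first success."""
    errors = []
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:  # a real client would catch specific error types
            errors.append((model, exc))
    raise RuntimeError(f"all models failed: {errors}")

# Example: the first model "fails", so the chain falls through to the second.
def fake_call(model, prompt):
    if model == "claude-opus-4.6":
        raise TimeoutError("rate limited")
    return f"{model} says hi"

chain = ["claude-opus-4.6", "claude-sonnet-4.5", "gemini-3-pro"]
used, answer = call_with_fallback("hello", chain, fake_call)
```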

### 11.4.4 Cost Control in Practice

**Case 1: Reduce Cost by 50%**

```
Original plan:
- All tasks use Claude Opus
- Daily consumption: $20

Optimized plan:
- Simple tasks use Sonnet
- Complex tasks use Opus
- Caching enabled

After optimization:
- Daily consumption: $10
- Savings: 50%
```

**Case 2: Maximize Free Quota**

```
Strategy:
1. Prioritize Gemini (large free quota)
2. Switch to DeepSeek once the quota is exceeded (cheap)
3. Use Claude for important tasks

Result:
- Monthly cost: $5
- Savings: 90%
```

## 11.5 Performance Tuning Tips

### 11.5.1 Response Speed Optimization

**Before Optimization**:

```
Average response time: 5 seconds
User experience: average
```

**Optimization Solutions**:

1. Enable caching
2. Reduce context
3. Use streaming output
4. Process requests concurrently

**After Optimization**:

```
Average response time: 2 seconds
User experience: excellent
Improvement: 60%
```

### 11.5.2 Concurrency Processing Optimization

**Configure Concurrency**:

```bash
# Set the maximum concurrency
openclaw config set concurrency.max 5

# Set the queue size
openclaw config set concurrency.queueSize 100
```
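Conceptually, `concurrency.max` bounds how many requests run at once while the rest wait their turn. A sketch using a semaphore over a thread pool (illustrative only, not OpenClaw internals):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENCY = 5          # analogue of concurrency.max
slots = threading.Semaphore(MAX_CONCURRENCY)
active = 0
peak = 0
lock = threading.Lock()

def handle(request_id):
    global active, peak
    with slots:  # at most MAX_CONCURRENCY handlers run simultaneously
        with lock:
            active += 1
            peak = max(peak, active)
        # ... process the request here ...
        with lock:
            active -= 1
    return request_id

# 20 worker threads submit 100 requests, but the semaphore caps real parallelism at 5.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(handle, range(100)))
```

Requests beyond the cap simply block on the semaphore, which plays the role of the waiting queue (`concurrency.queueSize` would bound its length).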

### 11.5.3 Memory Management

**Monitor Memory Usage**:

```bash
# View memory usage
openclaw stats memory

# Output example:
Memory Usage:
- Current: 512MB
- Peak: 800MB
- Average: 600MB
```

**Optimization Suggestions**:

```
⚠️ High memory usage:
- Clear the cache
- Reduce concurrency
- Restart the service
```

## 📝 Chapter Summary

Learned about OpenClaw's advanced configurations:

  1. Antigravity Manager configuration
  2. Multi-model switching strategy
  3. Cost optimization solutions
  4. Performance tuning tips

Mastering these techniques can:

  • Reduce costs by over 50%
  • Improve response speed by 60%
  • Enhance system stability

## 11.6 Model Provider Configuration Details

🤖 Multi-model Support: OpenClaw supports 20+ mainstream AI model providers, offering flexible configuration to meet different needs.

💡 Simplest Way: Use the openclaw onboard command to start the configuration wizard and interactively configure models.

### 11.6.0 Quick Configuration: Using the Command Line Wizard (Recommended for Newbies)

#### Launch Configuration Wizard

```bash
openclaw onboard
```

Executing this launches the interactive command-line configuration wizard.

#### Configuration Process

**Step 1: Select Initialization Mode**

```
◇  Initialization Mode
│  Quick Start
```

**Step 2: Select Model Provider**

```
◆  Model/Auth Provider
│  ○ OpenAI (Codex OAuth + API key)
│  ○ Anthropic
│  ○ MiniMax
│  ○ Moonshot AI
│  ○ Google
│  ○ OpenRouter
│  ○ Qwen
│  ○ Z.AI (GLM 4.7)
│  ○ Copilot
│  ○ Vercel AI Gateway
│  ○ OpenCode Zen
│  ○ Xiaomi
│  ○ Synthetic
│  ○ Venice AI
│  ○ Skip for now
```

Use arrow keys to select, spacebar to confirm.

**Step 3: Enter API Key**

Enter the API Key for the corresponding provider as prompted.

**Step 4: Select Default Model**

Select the default model from the list of available models.

**Step 5: Complete Configuration**

The configuration is saved automatically and the Gateway restarts.

#### Advantages of the Command Line Wizard

- ✅ **Interactive operation**: step-by-step guidance, less prone to errors
- ✅ **Real-time verification**: API Key validity is checked immediately after input
- ✅ **Automatic configuration**: configuration files are generated automatically
- ✅ **One-click save**: the configuration is saved and the service restarted automatically
- ✅ **Error prompts**: clear messages are shown for configuration errors

#### Verify Configuration

After configuration is complete, verify that the model is available:

```bash
# View configured models
openclaw models list

# Test the model connection
openclaw message send "Hello, just testing"
```

#### Modify Configuration

If you need to change the configuration, run the wizard again:

```bash
openclaw onboard
```

You can add, delete, or modify model providers.


### 11.6.1 Supported Model Providers

#### International Models

| Provider | Model | Features | Price |
| --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini | Comprehensive features, mature ecosystem | High |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | Strong reasoning capabilities, high security | Medium-High |
| Google | Gemini 2.0 Flash, Gemini 1.5 Pro | Strong multimodal capabilities, large free quota | Medium |
| xAI | Grok 2 | Real-time information, humorous style | Medium |
| Mistral | Mistral Large, Mistral Small | Open-source friendly, high cost-performance | Medium |
| Cohere | Command R+, Command R | Enterprise-grade, RAG optimized | Medium |

#### Domestic Models

| Provider | Model | Features | Price |
| --- | --- | --- | --- |
| DeepSeek | DeepSeek-V3, DeepSeek-Chat | King of cost-performance, strong programming capabilities | Very Low |
| Moonshot AI | Kimi k2.5 | Ultra-long context (2 million characters) | Low |
| ZhipuAI | GLM-4, GLM-4V | Multimodal, Chinese-optimized | Medium |
| Baichuan Intelligent | Baichuan-4 | Good Chinese understanding | Medium |
| MiniMax | abab6.5 | Speech synthesis, role-playing | Medium |
| Alibaba Cloud | Qwen-Max, Qwen-Plus | Alibaba ecosystem, enterprise-grade | Medium |
| Baidu | ERNIE 4.0 | Baidu ecosystem, knowledge-enhanced | Medium |

#### Local Models

| Provider | Model | Features | Price |
| --- | --- | --- | --- |
| Ollama | Llama 3.1, Qwen2.5 | Fully local, privacy protection | Free |
| LM Studio | Various open-source models | Graphical interface, easy to use | Free |

### 11.6.2 Configure OpenAI

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "openai": {
        "baseUrl": "https://api.openai.com/v1",
        "apiKey": "sk-your-api-key",
        "auth": "api-key",
        "api": "openai-chat",
        "models": [
          {
            "id": "gpt-4o",
            "name": "GPT-4o",
            "contextWindow": 128000,
            "maxTokens": 16384
          },
          {
            "id": "gpt-4o-mini",
            "name": "GPT-4o Mini",
            "contextWindow": 128000,
            "maxTokens": 16384
          }
        ]
      }
    }
  }
}
```

### 11.6.3 Configure Anthropic (Claude)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "anthropic": {
        "baseUrl": "https://api.anthropic.com",
        "apiKey": "sk-ant-your-api-key",
        "auth": "api-key",
        "api": "anthropic",
        "models": [
          {
            "id": "claude-3-5-sonnet-20241022",
            "name": "Claude 3.5 Sonnet",
            "contextWindow": 200000,
            "maxTokens": 8192
          },
          {
            "id": "claude-3-opus-20240229",
            "name": "Claude 3 Opus",
            "contextWindow": 200000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

### 11.6.4 Configure Google Gemini

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "google": {
        "baseUrl": "https://generativelanguage.googleapis.com/v1beta",
        "apiKey": "your-google-api-key",
        "auth": "api-key",
        "api": "google-ai",
        "models": [
          {
            "id": "gemini-2.0-flash-exp",
            "name": "Gemini 2.0 Flash",
            "contextWindow": 1000000,
            "maxTokens": 8192
          },
          {
            "id": "gemini-1.5-pro",
            "name": "Gemini 1.5 Pro",
            "contextWindow": 2000000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```
### 11.6.5 Configure DeepSeek (Recommended)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com",
        "apiKey": "sk-your-api-key",
        "auth": "api-key",
        "api": "openai-chat",
        "models": [
          {
            "id": "deepseek-chat",
            "name": "DeepSeek Chat",
            "contextWindow": 64000,
            "maxTokens": 4096
          },
          {
            "id": "deepseek-coder",
            "name": "DeepSeek Coder",
            "contextWindow": 64000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

### 11.6.6 Configure Kimi (Moonshot AI)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "moonshot": {
        "baseUrl": "https://api.moonshot.cn/v1",
        "apiKey": "sk-your-api-key",
        "auth": "api-key",
        "api": "openai-chat",
        "models": [
          {
            "id": "moonshot-v1-8k",
            "name": "Kimi k2.5 8K",
            "contextWindow": 8000,
            "maxTokens": 4096
          },
          {
            "id": "moonshot-v1-32k",
            "name": "Kimi k2.5 32K",
            "contextWindow": 32000,
            "maxTokens": 4096
          },
          {
            "id": "moonshot-v1-128k",
            "name": "Kimi k2.5 128K",
            "contextWindow": 128000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

### 11.6.7 Configure Ollama (Local Model)

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "auth": "none",
        "api": "ollama",
        "models": [
          {
            "id": "llama3.1:8b",
            "name": "Llama 3.1 8B",
            "contextWindow": 128000,
            "maxTokens": 4096
          },
          {
            "id": "qwen2.5:7b",
            "name": "Qwen 2.5 7B",
            "contextWindow": 32000,
            "maxTokens": 4096
          }
        ]
      }
    }
  }
}
```

### 11.6.8 Multi-Provider Configuration Example

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com",
        "apiKey": "sk-deepseek-key",
        "auth": "api-key",
        "api": "openai-chat",
        "models": [
          {
            "id": "deepseek-chat",
            "name": "DeepSeek Chat",
            "contextWindow": 64000,
            "maxTokens": 4096
          }
        ]
      },
      "anthropic": {
        "baseUrl": "https://api.anthropic.com",
        "apiKey": "sk-ant-key",
        "auth": "api-key",
        "api": "anthropic",
        "models": [
          {
            "id": "claude-3-5-sonnet-20241022",
            "name": "Claude 3.5 Sonnet",
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      },
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "auth": "none",
        "api": "ollama",
        "models": [
          {
            "id": "llama3.1:8b",
            "name": "Llama 3.1 8B",
            "contextWindow": 128000,
            "maxTokens": 4096
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "deepseek/deepseek-chat",
        "fallback": [
          "anthropic/claude-3-5-sonnet-20241022",
          "ollama/llama3.1:8b"
        ]
      }
    }
  }
}
```

### 11.6.9 Model Selection Strategy

**Select by Task Type**:

```
// Programming tasks
"deepseek/deepseek-coder"

// Long document processing
"moonshot/moonshot-v1-128k"

// Complex reasoning
"anthropic/claude-3-opus-20240229"

// Daily conversation
"deepseek/deepseek-chat"

// Multimodal (images)
"google/gemini-2.0-flash-exp"

// Local privacy
"ollama/llama3.1:8b"
```
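A routing table like the one above can be expressed as a small lookup. The task names and model IDs come from this section; the function itself is illustrative, not an OpenClaw API:

```python
MODEL_BY_TASK = {
    "programming": "deepseek/deepseek-coder",
    "long_document": "moonshot/moonshot-v1-128k",
    "complex_reasoning": "anthropic/claude-3-opus-20240229",
    "chat": "deepseek/deepseek-chat",
    "multimodal": "google/gemini-2.0-flash-exp",
    "local_privacy": "ollama/llama3.1:8b",
}

def pick_model(task: str, default: str = "deepseek/deepseek-chat") -> str:
    """Return the preferred model for a task type, falling back to a cheap default."""
    return MODEL_BY_TASK.get(task, default)
```

Unknown task types fall back to the inexpensive daily-conversation model, which matches the cost-first spirit of this chapter.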

**Select by Cost**:

```
Very low cost: DeepSeek ($0.001 per thousand tokens)
Low cost:      Kimi, GLM-4 ($0.01 per thousand tokens)
Medium cost:   Gemini, Mistral ($0.05 per thousand tokens)
High cost:     Claude, GPT-4 ($0.15 per thousand tokens)
Free:          Ollama (local)
```
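With per-thousand-token rates like these, an estimated bill is just tokens times rate. A quick estimator (the rates are the ballpark tier values from the list above, not official pricing):

```python
RATE_PER_1K = {  # USD per thousand tokens, approximate tier values
    "deepseek": 0.001,
    "kimi": 0.01,
    "gemini": 0.05,
    "claude": 0.15,
    "ollama": 0.0,  # local, free
}

def estimated_cost(tokens: int, provider: str) -> float:
    """Estimate the USD cost of `tokens` tokens at a provider's per-1K rate."""
    return tokens / 1000 * RATE_PER_1K[provider]

# 10 million tokens per month: roughly $10 on DeepSeek vs roughly $1500 on Claude.
cheap = estimated_cost(10_000_000, "deepseek")
pricey = estimated_cost(10_000_000, "claude")
```

The two orders of magnitude between tiers are why the downgrade strategy in 11.4.3 pays off so quickly.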

## 11.7 Tool System Details

🔧 Extended Capabilities: OpenClaw's tool system enables the AI to perform a wide range of operations, from file management to API calls.

### 11.7.1 Built-in Tool List

#### File System Tools

| Tool | Function | Example |
| --- | --- | --- |
| `read_file` | Read file content | Read a configuration file |
| `write_file` | Write a file | Save notes |
| `list_directory` | List a directory | View a file list |
| `search_files` | Search files | Find all PDFs |
| `move_file` | Move a file | Organize files |
| `delete_file` | Delete a file | Clean up temporary files |

#### Shell Tools

| Tool | Function | Example |
| --- | --- | --- |
| `execute_command` | Execute a command | Run a script |
| `run_script` | Run a script | Batch-processing tasks |

#### Network Tools

| Tool | Function | Example |
| --- | --- | --- |
| `web_search` | Web search | Search for the latest information |
| `fetch_url` | Fetch a URL | Download content |
| `api_call` | API call | Call third-party services |

#### Data Processing Tools

| Tool | Function | Example |
| --- | --- | --- |
| `parse_json` | Parse JSON | Process an API response |
| `parse_csv` | Parse CSV | Process tabular data |
| `extract_text` | Extract text | Extract text from a PDF |

### 11.7.2 Enable and Disable Tools

**View Available Tools**:

```bash
openclaw tools list
```

**Enable Tools**:

```bash
openclaw tools enable read_file write_file
```

**Disable Tools**:

```bash
openclaw tools disable execute_command
```

**Configuration File Method**:

```json
{
  "tools": {
    "enabled": [
      "read_file",
      "write_file",
      "list_directory",
      "web_search"
    ],
    "disabled": [
      "execute_command",
      "delete_file"
    ]
  }
}
```

### 11.7.3 Tool Permission Control

**Set Tool Permissions**:

```json
{
  "tools": {
    "permissions": {
      "read_file": {
        "allowedPaths": [
          "~/Documents",
          "~/Downloads"
        ],
        "deniedPaths": [
          "~/.ssh",
          "~/.openclaw"
        ]
      },
      "execute_command": {
        "allowedCommands": [
          "ls",
          "cat",
          "grep"
        ],
        "deniedCommands": [
          "rm",
          "sudo"
        ]
      }
    }
  }
}
```
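The effect of `allowedPaths`/`deniedPaths` can be sketched as a deny-first path check (illustrative; OpenClaw's real enforcement logic may differ):

```python
import os.path

def is_path_allowed(path, allowed, denied):
    """Deny rules win; otherwise the path must sit under an allowed root."""
    full = os.path.abspath(os.path.expanduser(path))

    def under(root):
        root = os.path.abspath(os.path.expanduser(root))
        return full == root or full.startswith(root + os.sep)

    if any(under(d) for d in denied):
        return False
    return any(under(a) for a in allowed)

# Rules mirroring the read_file permissions above:
allowed = ["~/Documents", "~/Downloads"]
denied = ["~/.ssh", "~/.openclaw"]
```

Checking deny rules first means a path like `~/Documents/../.ssh/id_rsa` is rejected even though it is written relative to an allowed root, because `abspath` normalizes it before comparison.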

### 11.7.4 Custom Tool Development

**Create a Custom Tool**:

```js
// ~/.openclaw/tools/my-tool.js
export default {
  name: "my_custom_tool",
  description: "My custom tool",
  parameters: {
    type: "object",
    properties: {
      input: {
        type: "string",
        description: "Input parameter"
      }
    },
    required: ["input"]
  },
  async execute({ input }) {
    // Tool logic
    return {
      success: true,
      result: `Processing result: ${input}`
    };
  }
};
```

**Register the Tool**:

```bash
openclaw tools register ~/.openclaw/tools/my-tool.js
```

### 11.7.5 Tool Usage Examples

**File Search**:

```
You: Help me find all PDF files containing "invoice"

OpenClaw uses tools:
1. search_files(pattern="*.pdf", content="invoice")
2. Returns: found 3 files
   - Invoice_2024_01.pdf
   - Expense_Invoice.pdf
   - Procurement_Invoice_Q1.pdf
```

**Web Search**:

```
You: What is the latest price for Claude 3.5 Sonnet?

OpenClaw uses tools:
1. web_search(query="Claude 3.5 Sonnet pricing")
2. fetch_url(url="https://www.anthropic.com/pricing")
3. Returns:
   - Input: $3 per million tokens
   - Output: $15 per million tokens
```

**Data Processing**:

```
You: Analyze the sales data in this CSV file

OpenClaw uses tools:
1. read_file(path="sales.csv")
2. parse_csv(content=...)
3. Analyzes the data and generates a report
```

### 11.7.6 Tool Chaining

OpenClaw can automatically combine multiple tools to complete complex tasks:

```
Task: Download a webpage and save it as Markdown

Toolchain:
1. fetch_url(url) → fetch the webpage content
2. extract_text(html) → extract the text
3. convert_to_markdown(text) → convert the format
4. write_file(path, content) → save the file
```
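Chaining is just feeding each tool's output into the next. A sketch of the webpage-to-Markdown chain with stub steps (the step functions are stand-ins for the real tools, not OpenClaw's implementations):

```python
def run_chain(value, steps):
    """Pass `value` through each step in order, like a tool pipeline."""
    for step in steps:
        value = step(value)
    return value

# Stub implementations of the four tools in the chain above:
fetch_url = lambda url: f"<html><body>Docs for {url}</body></html>"
extract_text = lambda html: html.replace("<html><body>", "").replace("</body></html>", "")
convert_to_markdown = lambda text: f"# Saved page\n\n{text}\n"

saved = {}
def write_file(content):
    saved["page.md"] = content  # a real tool would write to disk
    return "page.md"

result = run_chain("https://example.com",
                   [fetch_url, extract_text, convert_to_markdown, write_file])
```

The planner's job is only to pick the step list; the data flow itself is this simple left-to-right composition.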

### 11.7.7 Tool Security Best Practices

**1. Principle of Least Privilege**:

```json
{
  "tools": {
    "enabled": [
      "read_file",  // Only enable necessary tools
      "web_search"
    ]
  }
}
```

**2. Path Restrictions**:

```json
{
  "tools": {
    "permissions": {
      "read_file": {
        "allowedPaths": ["~/Documents"]  // Restrict the access scope
      }
    }
  }
}
```

**3. Command Whitelist**:

```json
{
  "tools": {
    "permissions": {
      "execute_command": {
        "allowedCommands": ["ls", "cat"]  // Only allow safe commands
      }
    }
  }
}
```

## 11.8 CLI Command Full Reference

📟 Command-Line Tools: OpenClaw provides powerful CLI tools for easy management and operation.

### 11.8.1 Core Commands

#### Version and Help

```bash
# View version
openclaw --version
openclaw -v

# View help
openclaw --help
openclaw -h

# View subcommand help
openclaw gateway --help
```

#### Initialization and Configuration

```bash
# Run the configuration wizard
openclaw onboard

# Quick-start wizard
openclaw setup

# View configuration
openclaw config list

# Get a configuration item
openclaw config get models.providers

# Set a configuration item
openclaw config set gateway.port 18790

# Delete a configuration item
openclaw config delete models.providers.test
```

### 11.8.2 Gateway Management

```bash
# Install/start the Gateway
openclaw gateway install

# View status
openclaw gateway status

# Stop the Gateway
openclaw gateway stop

# Restart the Gateway
openclaw gateway restart

# View logs
openclaw logs
openclaw logs --follow
openclaw logs --tail 100

# Clear logs
openclaw logs clear
```

### 11.8.3 Channel Management

```bash
# List all channels
openclaw channels list

# View channel status
openclaw channels status

# Add a channel
openclaw channels add

# Remove a channel
openclaw channels remove feishu

# Test a channel
openclaw channels test feishu
```

### 11.8.4 Pairing Management

```bash
# List pairing requests
openclaw pairing list
openclaw pairing list feishu

# Approve a pairing
openclaw pairing approve feishu <CODE>

# Reject a pairing
openclaw pairing reject feishu <CODE>

# Clean up expired pairings
openclaw pairing cleanup
```

### 11.8.5 Plugin Management

```bash
# List installed plugins
openclaw plugins list

# Search plugins
openclaw plugins search feishu

# Install a plugin
openclaw plugins install @openclaw/feishu

# Uninstall a plugin
openclaw plugins uninstall @openclaw/feishu

# Update a plugin
openclaw plugins update @openclaw/feishu

# Update all plugins
openclaw plugins update --all
```

### 11.8.6 Tool Management

```bash
# List all tools
openclaw tools list

# Enable tools
openclaw tools enable read_file write_file

# Disable tools
openclaw tools disable execute_command

# Register a custom tool
openclaw tools register ~/my-tool.js

# Test a tool
openclaw tools test read_file
```

### 11.8.7 Agent Management

```bash
# List Agents
openclaw agents list

# Create an Agent
openclaw agents create my-agent

# Delete an Agent
openclaw agents delete my-agent

# Switch Agents
openclaw agents switch my-agent

# View an Agent's configuration
openclaw agents config my-agent
```

### 11.8.8 Session Management

```bash
# List sessions
openclaw sessions list

# View session details
openclaw sessions show <session-id>

# Delete a session
openclaw sessions delete <session-id>

# Clear all sessions
openclaw sessions clear

# Export a session
openclaw sessions export <session-id> --output session.json

# Import a session
openclaw sessions import session.json
```

### 11.8.9 Statistics and Monitoring

```bash
# View statistics
openclaw stats

# View today's statistics
openclaw stats today

# View this week's statistics
openclaw stats week

# View API consumption
openclaw stats api

# View memory usage
openclaw stats memory

# View performance metrics
openclaw stats performance
```

### 11.8.10 Testing and Diagnostics

```bash
# Test the API connection
openclaw test api

# Test a channel
openclaw test channel feishu

# Test a tool
openclaw test tool read_file

# Run diagnostics
openclaw diagnose

# Validate the configuration
openclaw validate config

# Check health status
openclaw health check
```

### 11.8.11 Data Management

```bash
# Create a backup
openclaw backup create

# List backups
openclaw backup list

# Restore a backup
openclaw backup restore <backup-id>

# Clear the cache
openclaw cache clear

# Clean up temporary files
openclaw cleanup temp

# Export data
openclaw export --output data.json

# Import data
openclaw import data.json
```

### 11.8.12 Updates and Maintenance

```bash
# Check for updates
openclaw update check

# Update to the latest version
openclaw update

# Update to a specific version
openclaw update --version 2026.3.2

# Roll back a version
openclaw rollback

# Uninstall
openclaw uninstall
```

### 11.8.13 Development and Debugging

```bash
# Start in development mode
openclaw dev

# Debug mode
openclaw --debug

# Verbose logging
openclaw --verbose

# Run tests
openclaw test

# Build the project
openclaw build

# Clean the build
openclaw clean
```

### 11.8.14 Common Command Combinations

**Quick Restart**:

```bash
openclaw gateway stop && openclaw gateway install
```

**View Real-time Logs**:

```bash
openclaw logs --follow | grep ERROR
```

**Back Up and Update**:

```bash
openclaw backup create && openclaw update
```

**Clean and Restart**:

```bash
openclaw cache clear && openclaw gateway restart
```

**Full Diagnostics**:

```bash
openclaw diagnose && openclaw health check && openclaw test api
```

### 11.8.15 Environment Variables

```bash
# Set the log level
export OPENCLAW_LOG_LEVEL=debug

# Set the configuration directory
export OPENCLAW_HOME=~/.openclaw

# Set the Gateway port
export OPENCLAW_PORT=18789

# Set API Keys
export DEEPSEEK_API_KEY=sk-xxx
export MOONSHOT_API_KEY=sk-xxx
```

### 11.8.16 Configuration File Locations

```bash
# Main configuration file
~/.openclaw/openclaw.json

# Log file
~/.openclaw/logs/gateway.log

# Cache directory
~/.openclaw/cache/

# Data directory
~/.openclaw/data/

# Plugin directory
~/.openclaw/plugins/

# Tool directory
~/.openclaw/tools/
```

## 📝 Chapter Summary

Learned about OpenClaw's advanced configurations:

### Core Content

  1. Antigravity Manager Configuration - Unified API management
  2. Multi-model Switching Strategy - Scenario-based selection + model disaster recovery
  3. Memory Search Configuration - Intelligent context awareness
  4. Cost Optimization Solutions - Reduce costs by 50%+
  5. Performance Tuning Tips - Improve response speed by 60%
  6. Model Provider Configuration - Support for 20+ mainstream models
  7. Tool System Details - Extend AI capabilities
  8. CLI Command Full Reference - 100+ commands explained

### Practical Skills

  • ✅ Configure multiple AI model providers
  • ✅ Configure model disaster recovery (primary + fallbacks)
  • ✅ Configure multiple authentication profiles for account rotation
  • ✅ Configure the memory search system
  • ✅ Select the optimal model for each task
  • ✅ Use the tool system to extend functionality
  • ✅ Master CLI commands for efficient management
  • ✅ Optimize costs and performance

### Recommended Configuration

  • Daily Use: DeepSeek (best cost-performance)
  • Long Documents: Kimi (2-million-character context)
  • Complex Tasks: Claude 3.5 Sonnet (strong reasoning capabilities)
  • Local Privacy: Ollama (fully local)
  • Disaster Recovery Plan: DeepSeek → Claude Sonnet → Claude Opus
  • Memory Search: Gemini Embedding (free and effective)

**Next Chapter Preview**: Chapter 12 dives into practical case studies, covering complete workflows for boosting personal productivity.
