Claude Code
1. Claude Code + MaaS
Only supports Claude-series models.
# Install Claude Code (requires Node.js 18 or higher)
npm install -g @anthropic-ai/claude-code
# Environment variables
export ANTHROPIC_AUTH_TOKEN="{YOUR_API_KEY}"
export ANTHROPIC_BASE_URL="https://genaiapi.cloudsway.net/{YOUR_ENDPOINT}"
export ANTHROPIC_MODEL="claude-sonnet-4@20250514"
# Start claude code
claude # Execute claude in any directory
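Before interactive use, a one-shot prompt can confirm that the key and endpoint are picked up. The script below is a minimal sketch; the placeholders {YOUR_API_KEY} and {YOUR_ENDPOINT} are the same as above, and the -p (print) flag runs a single non-interactive prompt.

#!/usr/bin/env bash
# Sketch: smoke-test Claude Code against the MaaS endpoint before interactive use.
# Replace the placeholders with your own credentials and endpoint.
set -euo pipefail

export ANTHROPIC_AUTH_TOKEN="{YOUR_API_KEY}"
export ANTHROPIC_BASE_URL="https://genaiapi.cloudsway.net/{YOUR_ENDPOINT}"
export ANTHROPIC_MODEL="claude-sonnet-4@20250514"

# -p / --print answers a single prompt and exits instead of opening the interactive session
claude -p "Reply with OK if you can read this."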
2. Claude Code + LiteLLM + MaaS
Supports any model exposed by the MaaS platform in OpenAI-compatible format.
1. PostgreSQL database
docker pull postgres
docker run --name my-postgres -e POSTGRES_PASSWORD=123456 -p 5432:5432 -d postgres
docker exec -it my-postgres bash
psql -U postgres
CREATE DATABASE mydatabase;
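To confirm the database was created and the container is reachable, a quick check can be run from the host (illustrative command reusing the container name and credentials from above):

# Sketch: list databases inside the my-postgres container; "mydatabase" should appear
docker exec -it my-postgres psql -U postgres -c "\l"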
2. LiteLLM
2.1 Create config.yaml and place it in a folder of your choice (the path is mounted into the container in step 2.2)
model_list:
  - model_name: claude-3-7-sonnet-20250219   # Model name passed when calling the LLM interface
    litellm_params:
      model: claude-3-7-sonnet-20250219      # Name of the model actually called upstream
      api_base: https://genaiapi.cloudsway.net/{endpoint}/v1/messages
      api_key: xxx                           # Automatically converted to x-api-key: Bearer xxx
  - model_name: gemini-2.5-flash
    litellm_params:
      model: gpt-3.5-turbo
      api_base: https://genaiapi.cloudsway.net/v1/ai/{endpoint}
      api_key: xxx
      headers:
        Authorization: Bearer xxx            # Authentication method expected by the api_base endpoint
general_settings:
  master_key: sk-1234
  database_url: postgresql://postgres:123456@{db_url}:5432/mydatabase  # Database created in step 1
2.2 Start
docker pull ghcr.io/berriai/litellm:main-latest
docker run \
-v $(pwd)/config.yaml:/app/config.yaml \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-latest \
--config /app/config.yaml --detailed_debug
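Once the container is running, the proxy can be checked before wiring up any clients. The commands below are a sketch assuming the port mapping and master_key shown above; /health/liveliness only confirms the process is up, while /health probes the configured model endpoints.

# Sketch: confirm the proxy process is alive
curl http://127.0.0.1:4000/health/liveliness
# Probe the models from config.yaml (requires the master_key)
curl http://127.0.0.1:4000/health -H 'Authorization: Bearer sk-1234'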
2.3 Model Calling
# Check which models are configured in config.yaml (authenticated with the master_key)
curl http://127.0.0.1:4000/v1/models -H 'Authorization: Bearer sk-1234'
# Generate an authentication key for the model gemini-2.5-flash
curl 'http://127.0.0.1:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"models": ["gemini-2.5-flash"], "metadata": {"user": "ishaan@berri.ai"}}'
# Response: {"key": "sk-wnm6JQ-f1U2Sixpa8L-0xg"}
# Call the model interface
curl --location --request POST 'http://127.0.0.1:4000/chat/completions' \
--header 'Authorization: Bearer sk-wnm6JQ-f1U2Sixpa8L-0xg' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gemini-2.5-flash",
    "max_tokens": 2000,
    "stream": false,
    "messages": [
        {
            "role": "user",
            "content": "What is the weather like in San Francisco?"
        }
    ]
}'
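The claude-3-7-sonnet-20250219 entry from config.yaml can be called through the same OpenAI-style route. The sketch below reuses the master_key for brevity; a scoped key generated via /key/generate for that model would work the same way.

# Sketch: call the Anthropic-backed model defined in config.yaml
curl --location --request POST 'http://127.0.0.1:4000/chat/completions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "claude-3-7-sonnet-20250219",
    "max_tokens": 2000,
    "messages": [
        {"role": "user", "content": "Summarize what LiteLLM does in one sentence."}
    ]
}'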
3. Claude Code
3.1 Using Claude Code
# Install Claude Code (requires Node.js 18 or higher)
npm install -g @anthropic-ai/claude-code
# Environment variables
export ANTHROPIC_BASE_URL=http://127.0.0.1:4000 # IP and port of litellm
export ANTHROPIC_MODEL=gemini-2.5-flash # Which model to call
export ANTHROPIC_AUTH_TOKEN=sk-1234 # Authentication key
# Start claude code
claude # Execute claude in any directory
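Since these exports only affect the current shell, a small wrapper script keeps the LiteLLM setup separate from the direct MaaS setup in section 1. The script name and values below are illustrative, taken from the proxy configuration above:

#!/usr/bin/env bash
# Sketch (e.g. claude-via-litellm.sh): launch Claude Code against the LiteLLM proxy
# without exporting the variables in every shell. Adjust host, port, and key as needed.
set -euo pipefail

export ANTHROPIC_BASE_URL="http://127.0.0.1:4000"   # IP and port of the LiteLLM proxy
export ANTHROPIC_MODEL="gemini-2.5-flash"           # Model name from config.yaml
export ANTHROPIC_AUTH_TOKEN="sk-1234"               # master_key or a generated virtual key

exec claude "$@"

Make the script executable with chmod +x and run it from any project directory in place of the manual exports.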
3.2 LiteLLM web UI
The /ui/ path is served by the LiteLLM proxy rather than by Claude Code itself:
http://{litellm ip}:{litellm port}/ui/
Username: admin
Password: sk-1234 (the master_key from config.yaml)