# Use Claude Code with OpenAI Models 🤝

A proxy server that lets you use Claude Code with OpenAI models like GPT-4o, GPT-4.5, and o3-mini. 🌉
## Prerequisites

- OpenAI API key 🔑

## Setup

1. Clone this repository:

   ```bash
   git clone https://github.com/1rgs/claude-code-openai.git
   cd claude-code-openai
   ```

2. Install UV:

   ```bash
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

3. Configure your API keys: create a `.env` file with:

   ```bash
   OPENAI_API_KEY=your-openai-key
   # Optional: customize which models are used
   # BIG_MODEL=gpt-4o
   # SMALL_MODEL=gpt-4o-mini
   ```

4. Start the proxy server:

   ```bash
   uv run uvicorn server:app --host 0.0.0.0 --port 8082
   ```
## Using with Claude Code

1. Install Claude Code (if you haven't already):

   ```bash
   npm install -g @anthropic-ai/claude-code
   ```

2. Connect to your proxy:

   ```bash
   ANTHROPIC_BASE_URL=http://localhost:8082 claude
   ```

3. That's it! Your Claude Code client will now use OpenAI models through the proxy. 🎯
## Model Mapping

The proxy automatically maps Claude models to OpenAI models:

| Claude Model | OpenAI Model |
|---|---|
| haiku | gpt-4o-mini (default) |
| sonnet | gpt-4o (default) |

You can customize which OpenAI models are used via environment variables:

- `BIG_MODEL`: the OpenAI model to use for Claude Sonnet models (default: `gpt-4o`)
- `SMALL_MODEL`: the OpenAI model to use for Claude Haiku models (default: `gpt-4o-mini`)
Add these to your `.env` file to customize:

```bash
OPENAI_API_KEY=your-openai-key
BIG_MODEL=gpt-4o
SMALL_MODEL=gpt-4o-mini
```
Or set them directly when running the server:

```bash
BIG_MODEL=gpt-4o SMALL_MODEL=gpt-4o-mini uv run uvicorn server:app --host 0.0.0.0 --port 8082
```
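Combining the defaults with these environment overrides, the mapping logic can be sketched in a few lines of Python. This is an illustrative sketch, not the proxy's actual source; the function name `map_model` is hypothetical:

```python
import os

# Defaults match the documented mapping; env vars override them.
BIG_MODEL = os.environ.get("BIG_MODEL", "gpt-4o")
SMALL_MODEL = os.environ.get("SMALL_MODEL", "gpt-4o-mini")

def map_model(claude_model: str) -> str:
    """Map a Claude model name to the configured OpenAI model."""
    name = claude_model.lower()
    if "haiku" in name:
        return SMALL_MODEL
    if "sonnet" in name:
        return BIG_MODEL
    # Fall back to the big model for anything unrecognized.
    return BIG_MODEL
```

So a request for `claude-3-5-sonnet-20241022` would be routed to whatever `BIG_MODEL` resolves to.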
## How It Works

This proxy works by:
- Receiving requests in Anthropic's API format 📥
- Translating the requests to OpenAI format via LiteLLM 🔄
- Sending the translated request to OpenAI 📤
- Converting the response back to Anthropic format 🔄
- Returning the formatted response to the client ✅
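In the real proxy the translation step is handled by LiteLLM; the sketch below only illustrates the shape of the conversion for plain text messages. The function name and the hard-coded fallback model are illustrative, not taken from the proxy's source:

```python
def anthropic_to_openai(request: dict) -> dict:
    """Translate an Anthropic Messages API request body into
    OpenAI chat-completions shape (simplified: text content only)."""
    messages = []
    # Anthropic puts the system prompt in a top-level field;
    # OpenAI expects it as the first message.
    if "system" in request:
        messages.append({"role": "system", "content": request["system"]})
    for msg in request.get("messages", []):
        content = msg["content"]
        # Anthropic content may be a list of typed blocks; flatten text blocks.
        if isinstance(content, list):
            content = "".join(
                block.get("text", "")
                for block in content
                if block.get("type") == "text"
            )
        messages.append({"role": msg["role"], "content": content})
    return {
        "model": "gpt-4o",  # the real proxy picks this via its model mapping
        "messages": messages,
        "max_tokens": request.get("max_tokens", 1024),
    }
```

The response conversion runs the same transformation in reverse, wrapping the OpenAI completion text back into Anthropic-style content blocks.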
The proxy handles both streaming and non-streaming responses, maintaining compatibility with all Claude clients. 🌊
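For streaming, each OpenAI chunk has to be re-emitted as an Anthropic-style server-sent event. A minimal sketch of that per-chunk conversion (text deltas only; the surrounding `message_start`/`message_stop` events are omitted, and the function name is hypothetical):

```python
import json

def openai_chunk_to_anthropic_event(chunk: dict) -> str:
    """Re-emit one OpenAI streaming chunk as an Anthropic-style
    content_block_delta server-sent event (text deltas only)."""
    text = chunk["choices"][0].get("delta", {}).get("content") or ""
    event = {
        "type": "content_block_delta",
        "index": 0,
        "delta": {"type": "text_delta", "text": text},
    }
    return f"event: content_block_delta\ndata: {json.dumps(event)}\n\n"
```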
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. 🎁