Bring an OpenAI-style API at low cost.

- Use Gemini models to serve OpenAI API chat and embedding calls.
- Use supertonic to implement the `tts-1` model.
Gemini-OpenAI-Proxy is a proxy that converts OpenAI API protocol calls into the Google Gemini protocol, so software built against the OpenAI protocol can use Gemini models without any changes.

If you're interested in using Google Gemini but don't want to modify your software, Gemini-OpenAI-Proxy is a great option: it lets you integrate the powerful features of Google Gemini without any extra development work.
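For example, many OpenAI clients read the base URL and key from environment variables; a minimal sketch (the variable names below are honored by the official OpenAI SDKs, but other clients may use different settings, and `localhost:8000` assumes a locally running proxy):

```shell
# Point an OpenAI-compatible client at the proxy instead of api.openai.com.
# OPENAI_BASE_URL is read by the official OpenAI SDKs; other clients may
# use a different variable or a config file.
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="$YOUR_GEMINI_API_KEY"  # your Gemini key, passed through as-is
```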
Get an API key from https://makersuite.google.com/app/apikey. For text-to-speech, deploy supertonic on Hugging Face (or see its GitHub repository) and set `TTS_ENDPOINT` in `.dev.vars`.
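`.dev.vars` is Cloudflare Workers' local secrets file, read by `wrangler dev`. A sketch — the endpoint below is a placeholder you must replace with your own supertonic deployment URL:

```shell
# .dev.vars — local secrets for wrangler dev; do not commit to git
TTS_ENDPOINT=https://your-supertonic-instance.example.com
```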
✅ Gemini Pro
```shell
curl -s http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello, Who are you?"}],
    "temperature": 0.7
  }'
```

✅ tts-1

```shell
curl http://localhost:8000/v1/audio/speech -X POST \
  -H "Authorization: Bearer $YOUR_GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello secure world!!", "voice": "alloy", "model": "tts-1"}' \
  --output test.wav
```

✅ Gemini Pro Vision
```shell
curl -s http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-vision-preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What do you see in this picture?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADAAAAAnAgMAAAA0vyM3AAAACVBMVEX/4WwCAgF3aTMcpbzGAAAAa0lEQVR4nGOgAWB1QOYEIHFEcXKmhCBxQqYgcSLEEGymAFEEhzFAFYmTwNoA53A6IDmB1YETidPAiLBVFGgEgrNqJYIzNTQU4Z5QZA6QNQ3hGpAZcNegceBOADFQOQlQDhfQyUwLkPxKVwAABbkRCcDA66QAAAAASUVORK5CYII="
            }
          }
        ]
      }
    ],
    "stream": false
  }'
```

- `/v1/chat/completions`
  - stream
  - complete
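To send your own image, the `url` field accepts a `data:` URL. One way to build it from a local file (the helper name and the `image.png` path are just examples; `base64 -w 0` is the GNU coreutils flag for unwrapped output):

```shell
# Encode a local PNG as a data: URL for the "image_url" field.
to_data_url() {
  printf 'data:image/png;base64,%s' "$(base64 -w 0 "$1")"
}
```

Usage: `to_data_url image.png`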
| Request Model | Target Gemini Model |
|---|---|
| gpt-3.5-turbo | gemini-1.5-flash-8b-latest |
| gpt-4 | gemini-1.5-pro-latest |
| gpt-4o | gemini-1.5-flash-latest |
| gpt-4o-mini | gemini-1.5-flash-8b-latest |
| gpt-4-vision-preview | gemini-1.5-flash-latest |
| gpt-4-turbo | gemini-1.5-pro-latest |
| gpt-4-turbo-preview | gemini-2.0-flash-exp |
| gemini* | gemini* |
| ...(others) | gemini-1.5-flash-latest |
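The table above behaves like a simple lookup with a `gemini*` passthrough and a catch-all default. An illustrative shell sketch — the proxy implements this mapping in its own code, not in shell:

```shell
# Illustrative model-name mapping, mirroring the table above.
map_model() {
  case "$1" in
    gpt-3.5-turbo | gpt-4o-mini)   echo "gemini-1.5-flash-8b-latest" ;;
    gpt-4 | gpt-4-turbo)           echo "gemini-1.5-pro-latest" ;;
    gpt-4-turbo-preview)           echo "gemini-2.0-flash-exp" ;;
    gpt-4o | gpt-4-vision-preview) echo "gemini-1.5-flash-latest" ;;
    gemini*)                       echo "$1" ;;                       # pass real Gemini names through
    *)                             echo "gemini-1.5-flash-latest" ;;  # default for anything else
  esac
}

map_model gpt-4                  # → gemini-1.5-pro-latest
map_model gemini-2.0-flash-exp   # → gemini-2.0-flash-exp
```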
- Copy `main_cloudflare-workers.mjs` to Cloudflare Workers.
- Copy `main_deno.mjs` to Deno Deploy.
- Alternatively, deploy with the CLI: `vercel deploy`
- Serve locally: `vercel dev`
- Note the Vercel Functions limitations (with the Edge runtime).
```shell
# Install Deno
curl -fsSL https://deno.land/install.sh | sh
source ~/.bashrc

# Build and run with Deno
deno task build:deno
deno task start:deno

# Or run the built bundles with Node or Bun
node dist/main_node.mjs
bun dist/main_bun.mjs
```

```shell
# Build and deploy to Cloudflare Workers / Pages
deno task build:deno
mkdir -p cf
cp ./dist/main_cloudflare-workers.mjs ./cf/_worker.js
npx wrangler dev ./cf/_worker.js --name gemini-openai --compatibility-date 2025-10-04 --port 8000
npx wrangler deploy ./cf/_worker.js --name gemini-openai --compatibility-date 2025-10-04
npx wrangler pages deploy ./cf --project-name gemini-openai
```
```shell
docker run -d -p 8000:8000 ghcr.io/zuisong/gemini-openai-proxy:deno
## or
docker run -d -p 8000:8000 ghcr.io/zuisong/gemini-openai-proxy:bun
## or
docker run -d -p 8000:8000 ghcr.io/zuisong/gemini-openai-proxy:node
```
