
OpenAI API Error: 405 Method not allowed. Only POST requests are supported. ??? #166

@SkybuckFlying

Description


What happened?

# Set OpenAI-compatible variables
$env:OPENAI_API_KEY = "key123"
$env:OPENAI_BASE_URL = "http://api.together.xyz/v1"
$env:OPENAI_MODEL = "Qwen/Qwen3-235B-A22B-Instruct-2507-tput"

$env:GEMINI_API_KEY = "key123"  # Must be a non-empty string
$env:GEMINI_BASE_URL = "http://api.together.xyz/v1"
$env:GEMINI_MODEL = "Qwen/Qwen3-235B-A22B-Instruct-2507-tput"

# Set Gemini-compatible variables (for internal redirecting)
# $env:GEMINI_API_KEY = "dummy"  # Must be a non-empty string
# $env:GEMINI_BASE_URL = "http://127.0.0.1:1234/v1"
# $env:GEMINI_MODEL = "google/gemma-3n-e4b"

# Optional: run your main script
qwen.ps1

I tried with and without the Gemini settings.

What is wrong, and how do I do it right?

(I do have a valid key; this is the output):


│ ⚠  Failed to load tiktoken, falling back to character approximation: Error: Missing tiktoken_bg.wasm            │
│        at node_modules/tiktoken/tiktoken.cjs                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154029:30)                   │
│        at __require2 (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:19:51)     │
│        at file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154382:79              │
│        at async OpenAIContentGenerator.countTokens                                                              │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154382:36)                   │
│        at async GeminiClient.tryCompressChat                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247415:49)                   │
│        at async GeminiClient.sendMessageStream                                                                  │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247222:24)                   │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255622:24        │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255715:34        │
│ ⚠  Failed to load tiktoken, falling back to character approximation: TypeError: get_encoding is not a function  │
│        at OpenAIContentGenerator.countTokens                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154383:28)                   │
│        at async GeminiClient.sendMessageStream                                                                  │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247239:51)                   │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255622:24        │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255715:34        │
│ ⚠  Failed to load tiktoken, falling back to character approximation: TypeError: get_encoding is not a function  │
│        at OpenAIContentGenerator.countTokens                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154383:28)                   │
│        at process.processTicksAndRejections (node:internal/process/task_queues:105:5)                           │
│        at async OpenAIContentGenerator.generateContentStream                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154283:38)                   │
│        at async retryWithBackoff                                                                                │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:246355:14)                   │
│        at async GeminiChat.sendMessageStream                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:246735:31)                   │
│        at async Turn.run                                                                                        │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:245696:30)                   │
│        at async GeminiClient.sendMessageStream                                                                  │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247257:22)                   │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255622:24        │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255715:34        │
│ ✖  OpenAI API Streaming Error: 405 Method not allowed. Only POST requests are supported.                        │
│ ✖  Error when talking to Gemini API Full report available at:                                                   │
│    G:\Temp\gemini-client-error-Turn.run-sendMessageStream-2025-07-31T23-14-41-147Z.json                         │
│ ⚠  Failed to load tiktoken, falling back to character approximation: TypeError: get_encoding is not a function  │
│        at OpenAIContentGenerator.countTokens                                                                    │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154383:28)                   │
│        at process.processTicksAndRejections (node:internal/process/task_queues:105:5)                           │
│        at async OpenAIContentGenerator.generateContent                                                          │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:154136:38)                   │
│        at async retryWithBackoff                                                                                │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:246355:14)                   │
│        at async GeminiClient.generateJson                                                                       │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247300:22)                   │
│        at async checkNextSpeaker                                                                                │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:246254:28)                   │
│        at async GeminiClient.sendMessageStream                                                                  │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247269:32)                   │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255622:24        │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255715:34        │
│ ✖  OpenAI API Error: 405 Method not allowed. Only POST requests are supported.                                  │
│ ✖  Error generating JSON content via API. Full report available at:                                             │
│    G:\Temp\gemini-client-error-generateJson-api-2025-07-31T23-14-41-236Z.json                                   │
│ ⚠  Failed to talk to Gemini endpoint when seeing if conversation should continue. Error: Failed to generate     │
│    JSON content: OpenAI API error: 405 Method not allowed. Only POST requests are supported.                    │
│        at GeminiClient.generateJson                                                                             │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247345:13)                   │
│        at async checkNextSpeaker                                                                                │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:246254:28)                   │
│        at async GeminiClient.sendMessageStream                                                                  │
│    (file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:247269:32)                   │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255622:24        │
│        at async file:///G:/Tools/npm-global/node_modules/@qwen-code/qwen-code/bundle/gemini.js:255715:34        │

What did you expect to happen?

It should work: the POST request should reach the endpoint and the model should respond.
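A hypothesis worth checking (untested, and the endpoint path below is assumed from OpenAI compatibility, not taken from the CLI's code): the base URLs above use plain `http://`. Together's documented base URL is `https://api.together.xyz/v1`, and servers commonly answer plain-HTTP traffic with a redirect to HTTPS; many HTTP clients then replay the redirected POST as a GET, which the API rejects with exactly this "405 Method not allowed. Only POST requests are supported." message. A quick probe plus the candidate fix:

```powershell
# Probe: curl.exe ships with Windows 10+; -i prints response status and headers.
# A 30x status with a Location: https://... header here would support the
# redirect hypothesis. $env:OPENAI_API_KEY is the key already set above.
curl.exe -i -s -X POST "http://api.together.xyz/v1/chat/completions" `
  -H "Authorization: Bearer $env:OPENAI_API_KEY" `
  -H "Content-Type: application/json" `
  -d '{"model":"Qwen/Qwen3-235B-A22B-Instruct-2507-tput","messages":[{"role":"user","content":"hi"}]}'

# Candidate fix: use https in both base URLs before launching the CLI.
$env:OPENAI_BASE_URL = "https://api.together.xyz/v1"
$env:GEMINI_BASE_URL = "https://api.together.xyz/v1"
```

The tiktoken warnings in the log look like a separate, non-fatal issue (the CLI says it falls back to character approximation), so they probably aren't the cause of the 405.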

Client information

│ CLI Version 0.0.1-alpha.12 │
│ Git Commit d360b86 (local modifications) │
│ Model Qwen/Qwen3-235B-A22B-Instruct-2507-tput │
│ Sandbox no sandbox │
│ OS win32 │
│ Auth Method openai │

Login information

No response

Anything else we need to know?

No response


Labels

type/bug: Something isn't working as expected
