feat: configurable LLM request timeout via LLM_REQUEST_TIMEOUT_SECS (#615) (#630)
Add LLM_REQUEST_TIMEOUT_SECS env var (default: 120) to configure the
HTTP request timeout for LLM API calls. Primarily useful for local
models (Ollama, vLLM, LM Studio) that need more time for prompt
evaluation on consumer hardware.
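As a rough sketch (an assumed helper, not necessarily the repo's actual function), the value can be read with a plain env lookup that falls back to the 120-second default when the variable is unset or invalid:

```rust
// Sketch only: read LLM_REQUEST_TIMEOUT_SECS and fall back to the
// 120-second default when the variable is missing or unparsable.
fn llm_request_timeout_secs() -> u64 {
    std::env::var("LLM_REQUEST_TIMEOUT_SECS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(120)
}
```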
The timeout is applied to the NearAI provider's HTTP client. Other
providers (Anthropic, OpenAI) use rig-core's default client.
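A minimal sketch of that threading (also summarized in the change list below); the struct fields, signatures, return types, and error handling here are illustrative assumptions, not the repository's exact code:

```rust
use std::time::Duration;

// Illustrative only: field names and signatures are assumptions.
pub struct LlmConfig {
    pub request_timeout_secs: u64,
    // ...other provider settings elided
}

pub struct NearAiChatProvider {
    http: reqwest::Client,
}

impl NearAiChatProvider {
    // Build the provider with an explicit request timeout on its HTTP client.
    pub fn new_with_timeout(timeout_secs: u64) -> reqwest::Result<Self> {
        let http = reqwest::Client::builder()
            .timeout(Duration::from_secs(timeout_secs))
            .build()?;
        Ok(Self { http })
    }
}

// Only the NearAI path receives the configurable timeout; Anthropic and
// OpenAI continue to use rig-core's default client.
pub fn create_llm_provider(config: &LlmConfig) -> reqwest::Result<NearAiChatProvider> {
    NearAiChatProvider::new_with_timeout(config.request_timeout_secs)
}
```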
- Add request_timeout_secs field to LlmConfig
- Thread timeout through create_llm_provider -> NearAiChatProvider
- Add NearAiChatProvider::new_with_timeout constructor
- Add .env.example documentation (an illustrative excerpt follows this list)
- Add 2 regression tests for default and custom timeout values
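For reference, an illustrative .env.example entry; the actual wording in the repo may differ:

```
# HTTP request timeout (in seconds) for LLM API calls.
# Increase for local models (Ollama, vLLM, LM Studio) on slower hardware.
LLM_REQUEST_TIMEOUT_SECS=120
```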
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>