Releases: he-yufeng/CoreCoder
v0.3.0 — LiteLLM backend for non-OpenAI providers
What's new
LiteLLM backend for non-OpenAI providers (#10, contributed by @RheagalFire)
Set CORECODER_PROVIDER=litellm and use any LiteLLM model string —
anthropic/claude-3-haiku, bedrock/anthropic.claude-v2, vertex_ai/gemini-pro, etc.
pip install 'corecoder[litellm]'
export CORECODER_PROVIDER=litellm
export CORECODER_MODEL=anthropic/claude-3-haiku
export ANTHROPIC_API_KEY=sk-ant-...
corecoder

Routes through to 100+ providers (Bedrock, Vertex AI, Cohere, Groq, Replicate, Anyscale, …) via one model-string convention.
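A minimal end-to-end sketch of the new backend. The API key is a placeholder, and the alternative model IDs follow LiteLLM's provider/model naming shown above; which models actually respond depends on your accounts and credentials.

```shell
# Switch providers by changing only the model string.
export CORECODER_PROVIDER=litellm

# Anthropic, called directly:
export CORECODER_MODEL=anthropic/claude-3-haiku
export ANTHROPIC_API_KEY=sk-ant-...   # placeholder key

# The same setup reaches other providers by swapping the model string
# (credentials come from the usual provider env vars / config):
#   export CORECODER_MODEL=bedrock/anthropic.claude-v2
#   export CORECODER_MODEL=vertex_ai/gemini-pro

# then launch as usual:
# corecoder
```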
Compatibility
The default openai backend is unchanged. Existing setups using OpenAI-compatible endpoints (Kimi, DeepSeek, Qwen, Ollama, …) keep working with no migration.
Install
pip install corecoder # default
pip install 'corecoder[litellm]'   # with LiteLLM

v0.2.0 — Renamed to CoreCoder
CoreCoder v0.2.0
Renamed from NanoCoder to CoreCoder to avoid confusion with Nano-Collective/nanocoder.
What changed
- Package name: nanocoderagent → corecoder
- CLI command: nanocoder → corecoder
- Env vars: NANOCODER_* → CORECODER_*
- Config dir: ~/.nanocoder → ~/.corecoder
Install
pip install corecoder

Old links to he-yufeng/NanoCoder redirect here automatically.
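For an existing NanoCoder setup, the rename can be applied with a short shell sketch. This assumes the config format itself is unchanged between the two names (the release notes only mention the directory rename); adjust the paths if you relocated your config.

```shell
# Carry the old config directory over once, without clobbering an
# existing new one. Paths are the defaults from the release notes.
OLD="$HOME/.nanocoder"
NEW="$HOME/.corecoder"

if [ -d "$OLD" ] && [ ! -d "$NEW" ]; then
  cp -r "$OLD" "$NEW"
fi

# Env vars are renamed NANOCODER_* -> CORECODER_*, e.g. in your shell
# profile:
#   export CORECODER_MODEL="$NANOCODER_MODEL"
```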
v0.1.0
NanoCoder v0.1.0 — 512,000 lines of Claude Code → 1,300 lines of Python.
Install: pip install nanocoderagent
- 7 tools (bash, read, write, edit, glob, grep, agent)
- Parallel tool execution, 3-layer context compression, sub-agents
- Any OpenAI-compatible LLM: Kimi K2.5, Claude Opus 4.6, GPT-5, DeepSeek V3, Qwen 3.5, Ollama
- Architecture deep dive (7 articles)