feat(providers): add Liquid AI (LFM2) as OpenAI-compatible provider #543
Liquid AI's LFM2 models are edge-optimized and aligned with ZeptoClaw's IoT/robot-first thesis. Their cloud API at https://labs.liquid.ai/api/v1 is OpenAI-compatible, so this is purely registry plumbing: no new provider code, no new dependencies.

Changes:
- Add `providers.liquid` field to `ProvidersConfig`
- Add `liquid` entry to `PROVIDER_REGISTRY` (keywords: `liquid`, `lfm`, `lfm2`)
- Wire `provider_config_by_name()` match arm
- Add `ZEPTOCLAW_PROVIDERS_LIQUID_{API_KEY,API_BASE,MODEL}` env overrides with `LIQUID_API_KEY` fallback, mirroring Novita
- Keep `RUNTIME_SUPPORTED_PROVIDERS` in sync (required by `test_runtime_supported_constant_stays_in_sync`)
- Tests: registry resolution, env override, model-keyword inference

Closes #541
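The new config field and lookup arm from the change list can be sketched as follows. This is a simplified stand-in, not the actual ZeptoClaw code: the real `ProviderConfig`/`ProvidersConfig` live in `src/config/types.rs` with more fields and `#[serde(default)]` attributes, and the field names here beyond `liquid` are assumptions.

```rust
// Hypothetical, simplified stand-ins for the real config types.
#[derive(Default, Debug, Clone)]
struct ProviderConfig {
    api_key: Option<String>,
    api_base: Option<String>,
    model: Option<String>,
}

#[derive(Default, Debug)]
struct ProvidersConfig {
    novita: Option<ProviderConfig>,
    liquid: Option<ProviderConfig>, // field added by this PR
}

// Name-based lookup; the "liquid" arm is the one this PR wires in.
fn provider_config_by_name<'a>(
    cfg: &'a ProvidersConfig,
    name: &str,
) -> Option<&'a ProviderConfig> {
    match name {
        "novita" => cfg.novita.as_ref(),
        "liquid" => cfg.liquid.as_ref(), // match arm added by this PR
        _ => None,
    }
}

fn main() {
    let cfg = ProvidersConfig {
        liquid: Some(ProviderConfig {
            api_key: Some("sk-test".into()),
            ..Default::default()
        }),
        ..Default::default()
    };
    assert!(provider_config_by_name(&cfg, "liquid").is_some());
    assert!(provider_config_by_name(&cfg, "unknown").is_none());
}
```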
📝 Walkthrough

The pull request adds Liquid AI (LFM) as a runtime-supported provider by extending the configuration infrastructure with environment variable overrides, creating a new provider specification entry in the registry with OpenAI-compatible API routing, and registering it in the supported providers constant.
## Summary
Adds Liquid AI as an OpenAI-compatible provider (default base URL https://labs.liquid.ai/api/v1), aligned with ZeptoClaw's edge/IoT thesis and the LFM2 model family (LFM2-1.2B-Tool, LFM2.5-1.2B-Instruct, LFM2-24B-A2B, etc.). Pure registry plumbing, mirroring the existing novita/kimi/qianfan entries. Keywords (`liquid`, `lfm`, `lfm2`) allow auto-inference from model IDs like `lfm2-24b-a2b`.

## Related Issue
Closes #541
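The keyword-based inference described above can be sketched like this. It is a minimal illustration, assuming a simplified `ProviderSpec`; only the provider name, base URL, and keyword list come from the PR text, everything else is a stand-in.

```rust
// Hypothetical, simplified registry entry; the real ProviderSpec in
// src/providers/registry.rs has more fields.
struct ProviderSpec {
    name: &'static str,
    base_url: &'static str,
    keywords: &'static [&'static str],
}

const PROVIDER_REGISTRY: &[ProviderSpec] = &[ProviderSpec {
    name: "liquid",
    base_url: "https://labs.liquid.ai/api/v1",
    keywords: &["liquid", "lfm", "lfm2"],
}];

// Infer the provider from a model ID by substring-matching its keywords.
fn infer_provider(model_id: &str) -> Option<&'static str> {
    let id = model_id.to_ascii_lowercase();
    PROVIDER_REGISTRY
        .iter()
        .find(|spec| spec.keywords.iter().any(|k| id.contains(k)))
        .map(|spec| spec.name)
}

fn main() {
    assert_eq!(infer_provider("lfm2-24b-a2b"), Some("liquid"));
    assert_eq!(infer_provider("claude-3-5-sonnet"), None);
    println!("default base: {}", PROVIDER_REGISTRY[0].base_url);
}
```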
## Files changed (5 locations, 4 files)
- `src/config/types.rs`: added `pub liquid: Option<ProviderConfig>` to `ProvidersConfig` with `#[serde(default)]`
- `src/providers/registry.rs`: added `ProviderSpec` entry to `PROVIDER_REGISTRY` + match arm in `provider_config_by_name()`
- `src/config/mod.rs`: added env handlers for `ZEPTOCLAW_PROVIDERS_LIQUID_{API_KEY,API_BASE,MODEL}` (with `LIQUID_API_KEY` fallback)
- `src/providers/mod.rs`: added `"liquid"` to `RUNTIME_SUPPORTED_PROVIDERS` (required by the existing `test_runtime_supported_constant_stays_in_sync` sync test; this was the only deviation from the 5-location spec)
- Tests: `test_liquid_resolves_with_default_base_url` and `test_liquid_model_inference` (in `registry.rs`), `test_liquid_env_override_api_key` (in `config/mod.rs`)

## Pre-submit Checklist
- [x] Rebased on `upstream/main`
- [x] `cargo fmt --check` passes
- [x] `cargo clippy -- -D warnings` passes
- [x] `cargo nextest run --lib` passes (3420 passed, 6 skipped)
- [x] `cargo test --doc` passes (128 passed)

## Security Considerations
N/A — adds a config-only provider entry; the Liquid AI API key is handled by the same env/config path as all existing providers, with the existing redaction and env-override precedence rules.
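The env-override precedence mentioned here can be sketched as below. The variable names come from the PR; the helper function itself is a hypothetical stand-in for the handlers in `src/config/mod.rs`, not the actual implementation.

```rust
use std::env;

// Hypothetical helper: the ZeptoClaw-scoped variable wins over the
// vendor-standard LIQUID_API_KEY fallback.
fn liquid_api_key() -> Option<String> {
    env::var("ZEPTOCLAW_PROVIDERS_LIQUID_API_KEY")
        .or_else(|_| env::var("LIQUID_API_KEY"))
        .ok()
}

fn main() {
    // set_var is an unsafe fn in Rust 2024; the unsafe block is merely an
    // unused_unsafe warning on older editions.
    unsafe {
        env::set_var("LIQUID_API_KEY", "fallback-key");
    }
    assert_eq!(liquid_api_key().as_deref(), Some("fallback-key"));

    unsafe {
        env::set_var("ZEPTOCLAW_PROVIDERS_LIQUID_API_KEY", "scoped-key");
    }
    // The scoped override now takes precedence.
    assert_eq!(liquid_api_key().as_deref(), Some("scoped-key"));
}
```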
## Test Plan
Automated:
- `cargo nextest run --lib`: full suite green (3420/3420)
- `cargo test --doc`: 128 passed
- New tests cover keyword inference (`lfm`/`lfm2`/`liquid`), `liquid/lfm2-*` model references, and the `ZEPTOCLAW_PROVIDERS_LIQUID_API_KEY` env override

Smoke test (requires a Liquid Labs API key from https://labs.liquid.ai):
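A sketch of what the smoke test might look like, assuming the standard OpenAI-compatible `/chat/completions` route under the base URL above; the model ID and request shape are assumptions, not confirmed by the PR.

```shell
# Assumes LIQUID_API_KEY is exported; route and model ID are assumptions
# based on the endpoint being OpenAI-compatible.
curl -s https://labs.liquid.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $LIQUID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lfm2-24b-a2b",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
```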
Expected: normal chat completion response; tool calling works for `lfm2-*-tool` and `lfm2-24b-a2b` (native function calling per Liquid AI docs).

## Summary by CodeRabbit
New Features
Tests