feat(providers): add Liquid AI (LFM2) as OpenAI-compatible provider#543

Open
qhkm wants to merge 1 commit into main from
feat/liquid-provider

Conversation


@qhkm qhkm commented Apr 23, 2026

Summary

  • Adds Liquid AI as a first-class OpenAI-compatible provider (base URL https://labs.liquid.ai/api/v1) — aligned with ZeptoClaw's edge/IoT thesis and the LFM2 model family (LFM2-1.2B-Tool, LFM2.5-1.2B-Instruct, LFM2-24B-A2B, etc.)
  • Pure registry plumbing — no new provider code, no new dependencies, reuses existing OpenAI-compat infrastructure exactly like novita / kimi / qianfan
  • Model keywords (liquid, lfm, lfm2) allow auto-inference from model IDs like lfm2-24b-a2b

Related Issue

Closes #541

Files changed (5 locations, 4 files)

  1. src/config/types.rs — added pub liquid: Option<ProviderConfig> to ProvidersConfig with #[serde(default)]
  2. src/providers/registry.rs — added ProviderSpec entry to PROVIDER_REGISTRY + match arm in provider_config_by_name()
  3. src/config/mod.rs — added env handlers for ZEPTOCLAW_PROVIDERS_LIQUID_{API_KEY,API_BASE,MODEL} (with LIQUID_API_KEY fallback)
  4. src/providers/mod.rs — added "liquid" to RUNTIME_SUPPORTED_PROVIDERS (required by the existing test_runtime_supported_constant_stays_in_sync sync test — this was the only deviation from the 5-location spec)
  5. Tests: test_liquid_resolves_with_default_base_url, test_liquid_model_inference (in registry.rs), and test_liquid_env_override_api_key (in config/mod.rs)
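The registry plumbing described above can be sketched as follows. This is a minimal, self-contained illustration: the `ProviderSpec` field names and the keyword-matching logic are assumptions chosen to mirror the description, not ZeptoClaw's actual types.

```rust
// Hypothetical sketch of the "liquid" registry entry; field names are
// assumptions, not ZeptoClaw's real ProviderSpec definition.
struct ProviderSpec {
    name: &'static str,
    default_base_url: &'static str,
    model_keywords: &'static [&'static str],
    requires_api_key: bool,
}

const LIQUID_SPEC: ProviderSpec = ProviderSpec {
    name: "liquid",
    default_base_url: "https://labs.liquid.ai/api/v1",
    model_keywords: &["liquid", "lfm", "lfm2"],
    requires_api_key: true,
};

/// Infer whether a model ID routes to the liquid provider by keyword match,
/// e.g. "lfm2-24b-a2b" or "LFM2-1.2B-Tool" (case-insensitive).
fn infers_liquid(model_id: &str) -> bool {
    let id = model_id.to_ascii_lowercase();
    LIQUID_SPEC.model_keywords.iter().any(|&kw| id.contains(kw))
}

fn main() {
    assert!(infers_liquid("lfm2-24b-a2b"));
    assert!(infers_liquid("LFM2-1.2B-Tool"));
    assert!(!infers_liquid("gpt-4o"));
    assert!(LIQUID_SPEC.requires_api_key);
    println!("{} default base: {}", LIQUID_SPEC.name, LIQUID_SPEC.default_base_url);
}
```

The actual entry would live in `PROVIDER_REGISTRY` alongside the existing `novita`/`kimi`/`qianfan` specs, with `provider_config_by_name()` gaining a matching arm.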

Pre-submit Checklist

  • I branched from upstream/main
  • This PR contains only commits related to this change
  • cargo fmt --check passes
  • cargo clippy -- -D warnings passes
  • cargo nextest run --lib passes (3420 passed, 6 skipped)
  • cargo test --doc passes (128 passed)
  • I added or updated tests for my changes (3 new tests)
  • New constants/limits are shared (not duplicated across files)
  • No new dependencies unless necessary (we target ~6 MB binary)

Security Considerations

N/A — adds a config-only provider entry; the Liquid AI API key is handled by the same env/config path as all existing providers, with the existing redaction and env-override precedence rules.
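For orientation, a config-file fragment for the new provider might look like the following. The field names (`api_key`, `api_base`, `model`) are assumed from the `ProviderConfig` pattern shared by the existing providers and are not verified against ZeptoClaw's actual schema:

```toml
# Hypothetical fragment; field names are assumed from the shared
# ProviderConfig shape, with env overrides taking precedence.
[providers.liquid]
api_key = "sk-..."                         # or set LIQUID_API_KEY in the env
api_base = "https://labs.liquid.ai/api/v1" # default; shown for clarity
model = "lfm2-24b-a2b"
```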

Test Plan

Automated:

  • cargo nextest run --lib — full suite green (3420/3420)
  • cargo test --doc — 128 passed
  • New tests cover: default base URL resolution, model-keyword inference for lfm/lfm2/liquid model IDs (e.g. lfm2-*), and the ZEPTOCLAW_PROVIDERS_LIQUID_API_KEY env override
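The env-override precedence being tested (provider-scoped variable wins over the generic fallback, mirroring the Novita pattern) can be sketched with a pure helper. The function name and signature are illustrative, not ZeptoClaw's actual code:

```rust
// Hypothetical precedence helper: `scoped` stands in for
// ZEPTOCLAW_PROVIDERS_LIQUID_API_KEY and `fallback` for LIQUID_API_KEY.
// The scoped value always wins when both are present.
fn resolve_api_key(scoped: Option<&str>, fallback: Option<&str>) -> Option<String> {
    scoped.or(fallback).map(str::to_owned)
}

fn main() {
    // Both set: the provider-scoped variable takes precedence.
    assert_eq!(
        resolve_api_key(Some("override"), Some("fallback")).as_deref(),
        Some("override")
    );
    // Only the generic variable set: the fallback applies.
    assert_eq!(
        resolve_api_key(None, Some("fallback")).as_deref(),
        Some("fallback")
    );
    // Neither set: no key resolves, and the provider requires one.
    assert_eq!(resolve_api_key(None, None), None);
    println!("precedence ok");
}
```

In the real code this logic would read the two variables via `std::env::var` inside the config loader, which is what `test_liquid_env_override_api_key` exercises.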

Smoke test (requires a Liquid Labs API key from https://labs.liquid.ai):

export LIQUID_API_KEY=<key>
cargo run --release -- agent --provider liquid --model lfm2-24b-a2b -m "ping"
# optional: test the tool-specialized 1.2B edge variant
cargo run --release -- agent --provider liquid --model lfm2-1.2b-tool -m "What is 2+2?"

Expected: normal chat completion response; tool calling works for lfm2-*-tool and lfm2-24b-a2b (native function calling per Liquid AI docs).

Summary by CodeRabbit

  • New Features

    • Liquid AI provider is now available for use in the application
    • Configuration support for API key, base URL, and model selection
    • Environment variables can override Liquid provider settings
  • Tests

    • Added verification tests for Liquid provider configuration and model routing

Liquid AI's LFM2 models are edge-optimized and aligned with ZeptoClaw's
IoT/robot-first thesis. Their cloud API at https://labs.liquid.ai/api/v1
is OpenAI-compatible, so this is purely registry plumbing — no new
provider code, no new dependencies.

Changes:
- Add `providers.liquid` field to `ProvidersConfig`
- Add `liquid` entry to `PROVIDER_REGISTRY` (keywords: liquid, lfm, lfm2)
- Wire `provider_config_by_name()` match arm
- Add `ZEPTOCLAW_PROVIDERS_LIQUID_{API_KEY,API_BASE,MODEL}` env overrides
  with `LIQUID_API_KEY` fallback, mirroring Novita
- Keep `RUNTIME_SUPPORTED_PROVIDERS` in sync (required by
  test_runtime_supported_constant_stays_in_sync)
- Tests: registry resolution, env override, model-keyword inference

Closes #541

coderabbitai Bot commented Apr 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: dbb81a2b-db83-421f-bc5d-858732aa7079

📥 Commits

Reviewing files that changed from the base of the PR and between 1aa1a4e and 4739f8c.

📒 Files selected for processing (4)
  • src/config/mod.rs
  • src/config/types.rs
  • src/providers/mod.rs
  • src/providers/registry.rs

📝 Walkthrough

The pull request adds Liquid AI (LFM) as a runtime-supported provider by extending configuration infrastructure with environment variable overrides, creating a new provider specification entry in the registry with OpenAI-compatible API routing, and registering it in the supported providers constant.

Changes

  • Configuration Infrastructure (src/config/mod.rs, src/config/types.rs): Added environment-variable override logic for the Liquid API key (with fallback), API base, and per-provider model selection; introduced an optional liquid field on ProvidersConfig; includes a unit test validating env-override application.
  • Provider Registration (src/providers/mod.rs, src/providers/registry.rs): Registered "liquid" in the RUNTIME_SUPPORTED_PROVIDERS constant and added a ProviderSpec entry with model-keyword mapping (lfm, lfm2 families), OpenAI-compatible backend routing, the default base URL https://labs.liquid.ai/api/v1, and an API-key requirement; includes unit tests for default base-URL resolution and model-inference routing.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 Through liquid labs the models flow,
Edge-native LFMs all aglow,
A config hop, a test that squeaks,
The smallest models, strongest techniques!
✨ Hop-hop for Liquid's edge-native dreams!

🚥 Pre-merge checks: 5 of 5 passed

  • Description Check: ✅ Passed (check skipped because CodeRabbit's high-level summary is enabled)
  • Title Check: ✅ Passed. The title accurately summarizes the main change: adding Liquid AI as an OpenAI-compatible provider with LFM2 model support.
  • Linked Issues Check: ✅ Passed. The PR fulfills Phase 1 objectives of #541: cloud provider configured, model inference working, tool calling via OpenAI compatibility. Phases 2 and 3 remain as future work, matching the issue scope.
  • Out-of-Scope Changes Check: ✅ Passed. All changes align with Phase 1 scope: config/registry additions for the Liquid provider, env variable handling, and unit tests. No unrelated refactoring or Phase 2/3 work present.
  • Docstring Coverage: ✅ Passed. Coverage is 88.89%, above the required threshold of 80.00%.


Linked issue (may be closed by merging): feat(providers): Liquid AI (LFM) provider integration — edge-native models (#541)

1 participant