
v0.10.1



@advaitjain released this 03 Apr 01:51

🔥 Gemma 4 support

Deploy Gemma 4 across a broad range of hardware with stellar performance (blog).

👉 Try on Linux, macOS, Windows (WSL) or Raspberry Pi with the
LiteRT-LM CLI:

litert-lm run  \
   --from-huggingface-repo=litert-community/gemma-4-E2B-it-litert-lm \
   gemma-4-E2B-it.litertlm \
   --prompt="What is the capital of France?"

Release Notes

  • CLI Enhancements & Migration: Migrated the CLI from fire to click, adding features like --verbose, --version, improved help formatting, and enhanced terminal output styling (#1784, #1733, #1791, #1792).
  • Hugging Face Integration: Added support for importing models directly from Hugging Face and implemented auto-conversion for missing models during "run" commands (#1797, #1735).
  • Core Performance & Features: Introduced a LiteRT-based KV cache implementation, speculative decoding support, and improved context merging for conversation history (#1601, #1793, #1742).
  • Platform & Build Improvements: Refactored CMake for better Android/cross-compilation support, updated the Windows build with a CPU sampler workaround, and transitioned nightly releases to Ubuntu-22.04 (#1741, #1734, #1772).
  • API & Documentation: Expanded the Kotlin API for response channel configuration and launched new Python API resources, including a "Getting Started" guide and a Colab notebook (#1724, #1737, #1757).