v3.9.0 #7713
mudler announced in Announcements
Xmas-release 🎅 LocalAI 3.9.0! 🚀
LocalAI 3.9.0 is focused on stability, resource efficiency, and smarter agent workflows. We've addressed critical issues with model loading, improved system resource management, and introduced a new Agent Jobs panel for scheduling and managing background agentic tasks. Whether you're running models locally or orchestrating complex agent workflows, this release makes it faster, more reliable, and easier to manage.
📌 TL;DR
🚀 New Features
🤖 Agent Jobs Panel: Schedule & Automate Tasks
LocalAI 3.9.0 introduces a new Agent Jobs panel, letting you create, run, and schedule agentic tasks in the background, either programmatically via the API or from the web interface.
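As a rough illustration of what submitting such a job could look like, here is a sketch that builds a job definition and shows where it would be POSTed. The endpoint path, field names, and model name below are assumptions for illustration only, not the documented LocalAI API; check the official docs for the real schema.

```python
import json

# Hypothetical job definition -- field names and values are illustrative
# assumptions, not LocalAI's documented schema.
job = {
    "name": "nightly-summary",
    "schedule": "0 2 * * *",          # cron-style: every night at 02:00
    "prompt": "Summarize yesterday's notes",
    "model": "qwen2.5-7b",            # assumed model name
}

payload = json.dumps(job)

# You would submit this to a running LocalAI instance, e.g. (path is assumed):
#   requests.post("http://localhost:8080/agent/jobs", data=payload,
#                 headers={"Content-Type": "application/json"})
print(payload)
```

The same job can of course be created from the Agent Jobs panel in the web UI without writing any code.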
🧠 Smart Memory Reclaimer: Auto-Optimize GPU Resources
We’ve introduced a new Memory Reclaimer that monitors system memory usage and automatically frees up GPU/VRAM when needed.
This is a foundational step toward adaptive resource management — future versions will expand this with more advanced policies and fleet-wide control.
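The general idea behind a threshold-based reclaimer can be sketched as a simple loop: while free memory is below a floor, evict models until enough has been reclaimed. This is an illustrative sketch only; the function names, the 2 GB threshold, and the simulated numbers are assumptions, not LocalAI internals.

```python
def reclaim_if_needed(get_free_vram_mb, unload_least_recently_used,
                      min_free_mb=2048):
    """Evict loaded models until at least min_free_mb is free (sketch only)."""
    evictions = 0
    while get_free_vram_mb() < min_free_mb:
        if not unload_least_recently_used():
            break  # nothing left to evict
        evictions += 1
    return evictions

# Simulated run: 1 GB free, each eviction returns 512 MB to the pool.
state = {"free": 1024, "loaded": ["model-a", "model-b", "model-c"]}

def fake_free():
    return state["free"]

def fake_unload():
    if not state["loaded"]:
        return False
    state["loaded"].pop(0)   # oldest model first
    state["free"] += 512
    return True

print(reclaim_if_needed(fake_free, fake_unload))  # → 2 evictions
```

After the run, `model-a` and `model-b` have been evicted and 2048 MB is free, so the loop stops with `model-c` still loaded.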
🔁 LRU Model Eviction: Intelligent Model Management
Building on the new reclaimer, LocalAI now supports LRU (Least Recently Used) eviction for loaded models.
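The LRU policy itself fits in a few lines; the sketch below is illustrative only, not LocalAI's actual implementation. Note how a capacity of 1 reproduces the old single-active-backend behavior: loading any model evicts the previous one.

```python
from collections import OrderedDict

class LRUModelCache:
    """Minimal LRU eviction sketch: keeps at most `capacity` models
    loaded, evicting the least recently used one first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.models = OrderedDict()

    def use(self, name):
        if name in self.models:
            self.models.move_to_end(name)      # mark as most recently used
        elif len(self.models) >= self.capacity:
            evicted, _ = self.models.popitem(last=False)
            print(f"evicting {evicted}")       # a real cache would unload here
            self.models[name] = True
        else:
            self.models[name] = True

cache = LRUModelCache(capacity=2)
for name in ["llama", "whisper", "llama", "stablediffusion"]:
    cache.use(name)
print(list(cache.models))  # → ['llama', 'stablediffusion']
```

Because "llama" was touched again before "stablediffusion" arrived, "whisper" is the least recently used model and is the one evicted.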
- The `single_active_backend` mode now defaults to LRU=1 for backward compatibility.

🖥️ UI & UX Polish
- URLs now use `/browse/` (instead of `browse`) for consistency.

📦 Backward Compatibility & Architecture
- Data moved from `/usr/share` to `/var/lib`: follows Linux conventions for mutable data.

🛠️ Fixes & Improvements
- Fixed: the `/readyz` and `/healthz` endpoints required authentication, breaking Docker health checks and monitoring tools.
- Fixed handling of model URIs (e.g. `huggingface://user/model/GGUF/model.gguf`).

🚀 The Complete Local Stack for Privacy-First AI
LocalAI
The free, Open Source OpenAI alternative. Drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.
Link: https://github.com/mudler/LocalAI
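Because the API follows the OpenAI specification, any OpenAI-compatible client works by pointing it at the local server. A minimal sketch of the standard chat-completions request (the model name is an assumption; the default port 8080 and `/v1/chat/completions` path follow the OpenAI-compatible convention):

```python
import json

# Standard OpenAI-style chat-completions payload; the model name is an
# illustrative assumption -- use whatever model you have installed.
payload = {
    "model": "qwen2.5-7b",
    "messages": [{"role": "user", "content": "Hello from LocalAI!"}],
}

# Send it to a running LocalAI instance, e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d "$(python this_script.py)"
print(json.dumps(payload))
```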
LocalAGI
Local AI agent management platform. Drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.
Link: https://github.com/mudler/LocalAGI
LocalRecall
RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Works alongside LocalAI and LocalAGI.
Link: https://github.com/mudler/LocalRecall
❤️ Thank You
LocalAI is a true FOSS movement — built by contributors, powered by community.
If you believe in privacy-first AI, your support keeps this stack alive.
✅ Full Changelog
What's Changed
Breaking Changes 🛠
Bug fixes 🐛
Exciting New Features 🎉
🧠 Models
📖 Documentation and examples
👒 Dependencies
Other Changes
- chore: ⬆️ Update ggml-org/llama.cpp to `eec1e33a9ed71b79422e39cc489719cf4f8e0777` by @localai-bot in #7363
- chore: ⬆️ Update ggml-org/llama.cpp to `4abef75f2cf2eee75eb5083b30a94cf981587394` by @localai-bot in #7382
- chore: ⬆️ Update ggml-org/llama.cpp to `8c32d9d96d9ae345a0150cae8572859e9aafea0b` by @localai-bot in #7395
- chore: ⬆️ Update ggml-org/llama.cpp to `7f8ef50cce40e3e7e4526a3696cb45658190e69a` by @mudler in #7402
- chore: ⬆️ Update ggml-org/llama.cpp to `ec18edfcba94dacb166e6523612fc0129cead67a` by @localai-bot in #7406
- chore: ⬆️ Update ggml-org/llama.cpp to `61bde8e21f4a1f9a98c9205831ca3e55457b4c78` by @localai-bot in #7415
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `5865b5e7034801af1a288a9584631730b25272c6` by @localai-bot in #7422
- chore: ⬆️ Update ggml-org/llama.cpp to `e9f9483464e6f01d843d7f0293bd9c7bc6b2221c` by @localai-bot in #7421
- chore: ⬆️ Update ggml-org/llama.cpp to `8160b38a5fa8a25490ca33ffdd200cda51405688` by @localai-bot in #7438
- chore: ⬆️ Update ggml-org/whisper.cpp to `a88b93f85f08fc6045e5d8a8c3f94b7be0ac8bce` by @localai-bot in #7448
- chore: ⬆️ Update ggml-org/llama.cpp to `db97837385edfbc772230debbd49e5efae843a71` by @localai-bot in #7447
- chore: ⬆️ Update ggml-org/whisper.cpp to `a8f45ab11d6731e591ae3d0230be3fec6c2efc91` by @localai-bot in #7483
- chore: ⬆️ Update ggml-org/llama.cpp to `086a63e3a5d2dbbb7183a74db453459e544eb55a` by @localai-bot in #7496
- chore: ⬆️ Update ggml-org/whisper.cpp to `9f5ed26e43c680bece09df7bdc8c1b7835f0e537` by @localai-bot in #7509
- chore: ⬆️ Update ggml-org/llama.cpp to `4dff236a522bd0ed949331d6cb1ee2a1b3615c35` by @localai-bot in #7508
- chore: ⬆️ Update ggml-org/llama.cpp to `a81a569577cc38b32558958b048228150be63eae` by @localai-bot in #7529
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `11ab095230b2b67210f5da4d901588d56c71fe3a` by @localai-bot in #7539
- chore: ⬆️ Update ggml-org/whisper.cpp to `2551e4ce98db69027d08bd99bcc3f1a4e2ad2cef` by @localai-bot in #7561
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `43a70e819b9254dee0d017305d6992f6bb27f850` by @localai-bot in #7562
- chore: ⬆️ Update ggml-org/llama.cpp to `5266379bcae74214af397f36aa81b2a08b15d545` by @localai-bot in #7563
- chore: ⬆️ Update ggml-org/llama.cpp to `5c8a717128cc98aa9e5b1c44652f5cf458fd426e` by @localai-bot in #7573
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `200cb6f2ca07e40fa83b610a4e595f4da06ec709` by @localai-bot in #7597
- chore: ⬆️ Update ggml-org/llama.cpp to `ef83fb8601229ff650d952985be47e82d644bfaa` by @localai-bot in #7611
- chore: ⬆️ Update ggml-org/whisper.cpp to `3e79e73eee32e924fbd34587f2f2ac5a45a26b61` by @localai-bot in #7630
- chore: ⬆️ Update ggml-org/llama.cpp to `d37fc935059211454e9ad2e2a44e8ed78fd6d1ce` by @localai-bot in #7629
- chore: ⬆️ Update ggml-org/llama.cpp to `f9ec8858edea4a0ecfea149d6815ebfb5ecc3bcd` by @localai-bot in #7642
- chore: ⬆️ Update ggml-org/whisper.cpp to `6c22e792cb0ee155b6587ce71a8410c3aeb06949` by @localai-bot in #7644
- chore: ⬆️ Update ggml-org/llama.cpp to `ce734a8a2f9fb6eb4f0383ab1370a1b0014ab787` by @localai-bot in #7654
- chore: ⬆️ Update ggml-org/llama.cpp to `52ab19df633f3de5d4db171a16f2d9edd2342fec` by @localai-bot in #7665
- docs: Add `langchain-localai` integration package to documentation by @mkhludnev in #7677

New Contributors
Full Changelog: v3.8.0...v3.9.0
This discussion was created from the release v3.9.0.