Releases: letta-ai/letta
v0.11.3
What's Changed
- mv dictconfig out of getlogger by @andrewrfitz in #2759
- chore: bump v0.11.3 by @carenthomas in #2760
Full Changelog: 0.11.2...0.11.3
v0.11.2
What's Changed
- fix: incorrect URL for Ollama embeddings endpoint by @antondevson in #2750
- fix: all model types returned from ollama provider by @antondevson in #2744
- feat: Add max_steps parameter to agent export by @mattzh72 in https://github.com/letta-ai/letta/pull/3828
Full Changelog: 0.11.1...0.11.2
v0.11.1
This release adds support for the latest model releases, and makes improvements to base memory and file tools.
🧠 Improved LLM model support
- Added support for Claude Opus 4.1 and GPT-5 models (#3806)
- Added minimal option for the reasoning_effort parameter in LLMConfig (#3816)
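In config terms, the new option looks roughly like this (an illustrative sketch only: the option name and values follow the release note and OpenAI's reasoning models, but the dict layout here is an assumption, not the SDK's exact LLMConfig schema):

```python
# Hypothetical config fragment showing the new minimal reasoning effort option.
llm_config = {
    "model": "gpt-5",              # GPT-5 support landed in #3806
    "reasoning_effort": "minimal", # new option from #3816
}

print(llm_config["reasoning_effort"])
```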
🔨 Built-in tool improvements
v0.11.0
⚠️ Legacy LocalClient and RestClient fully deprecated
- The legacy clients are fully removed going forward, replaced by the new Letta SDK clients (Python and TypeScript supported)
⛩️ Jinja Templating optimizations
- Jinja template engine is now offloaded to the thread pool to minimize CPU-bound operations blocking the async event loop
- Removed redundant rendering operations in critical paths
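Offloading a synchronous render to the thread pool can be sketched as follows (a minimal illustration of the pattern; Letta's actual rendering code differs, and string.Template stands in for Jinja here so the example is self-contained):

```python
import asyncio
from string import Template  # stand-in for jinja2.Template in this sketch

def render_prompt(template_src: str, context: dict) -> str:
    # CPU-bound rendering work; in Letta this would be a Jinja render call
    return Template(template_src).substitute(context)

async def compile_prompt(template_src: str, context: dict) -> str:
    # asyncio.to_thread runs the blocking render on the default thread pool,
    # so the async event loop stays free to serve other requests
    return await asyncio.to_thread(render_prompt, template_src, context)

result = asyncio.run(compile_prompt("Hello, $name!", {"name": "agent"}))
print(result)
```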
📈 Add Signoz integration for traces exporting
- You can configure exporting OTel traces to Signoz by passing the required environment variables:
docker run \
-v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
-p 8283:8283 \
...
-e SIGNOZ_ENDPOINT=${SIGNOZ_ENDPOINT} \
-e SIGNOZ_INGESTION_KEY=${SIGNOZ_INGESTION_KEY} \
-e LETTA_OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
letta/letta:latest
Other Improvements
- chore: bumped min python version for letta package to 3.11
- fix: incorrect context_window or embedding_dim using ollama by @antondevson in #2743
- feat: add filesystem demo with file upload and streaming by @cpfiffer in #2736
- chore: remove python 3.10 support and testing by @carenthomas in #2754
New Contributors
- @antondevson made their first contribution in #2743
- @cpfiffer made their first contribution in #2736
Full Changelog: 0.10.1...0.11.0
v0.10.0
This release introduces a new LettaPing message type for maintaining long streaming connections, MCP OAuth support, and various bugfixes and performance improvements.
🏓 LettaPing message type for long-running streaming requests
To avoid connection terminations during long-running tool executions, Letta sends a LettaPing message at a fixed interval - currently 90 seconds (#3651, #3652, #3647). You will only see LettaPing messages if you are using the streaming endpoints and a tool execution takes longer than the ping interval.
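Client code that consumes the stream can simply skip these keep-alive messages. A minimal sketch (the exact message_type value is an assumption here; check the SDK's message types for the real discriminator):

```python
from types import SimpleNamespace

def without_pings(stream):
    # Yield only substantive messages, dropping the keep-alive pings the
    # server emits during long-running tool executions.
    for message in stream:
        if getattr(message, "message_type", None) == "ping":
            continue
        yield message

# usage with a mocked stream of message objects
stream = [
    SimpleNamespace(message_type="assistant_message", content="hi"),
    SimpleNamespace(message_type="ping"),
    SimpleNamespace(message_type="tool_return_message", content="ok"),
]
messages = list(without_pings(stream))
print(len(messages))
```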
Moving to memgpt_v2_agent base architecture (#3624)
Default Agent Architecture Change: Letta now defaults to the new memgpt_v2_agent architecture (see documentation). Archival memory tools are also no longer added by default, but can be added explicitly.
🔐 MCP OAuth Support
We now support OAuth-based MCP providers such as Linear and GitHub.
🧠 LLM Provider Patches
We've added patches and improved support for LMStudio and Ollama:
- LMStudio: Added support for Qwen and Llama models with manual token counting for streaming
- Ollama: integration moved to new agent loop architecture (#3615)
Bug Fixes
- Fixed null check in voice endpoint (#3690)
- Fixed builtin web search tests (#3680)
- Fixed flaky test issues (#3629)
Other Improvements
- Removed deprecation warnings and excessive logging (#3560)
- Updated test infrastructure to use Gemini 2.5 (#3548)
- Set default Letta environment to dev for OSS (#3627)
- Added modal sandbox functionality with conditional imports (commit, #3668)
- Added not_indexable property to agents (#3653)
- Improved name generation (#3592)
- Default project configuration for agent listings (#3660)
- Removed OpenTelemetry file exports (#3691)
- API key validation before using token counter for Anthropic (#3628)
- Async system prompt compilation for better performance (#3685)
- Memory blocks viewer for better memory management (#3610)
- Polished open files tools (#3575)
- Desktop application improvements (#3537)
- Web search performance optimizations (#3650, #3649)
- Enhanced error handling for cancellations (#3641)
- Enhanced tool generation with better prompting (#3633, #3635)
- Improved tool renaming based on JSON schema (#3676)
- Multi-function files support with documentation (#3679)
v0.9.1
What's Changed
- Various bug fixes for new filesystem feature
- Asyncify jinja template rendering in core routes
- Add reverse index for agent tags
Full Changelog: 0.9.0...0.9.1
v0.9.0
Letta Filesystem
We are introducing a big revamp in how Letta handles uploaded documents like PDFs! Letta Filesystem allows you to create folders and files (uploaded documents) that contextualize your agent's responses -- think Claude Projects, but transparent and controllable.
import time

# assumes `client` is an initialized Letta client and `folder` was created earlier

# upload a file into the folder
job = client.folders.files.upload(
    folder_id=folder.id,
    file=open("my_file.txt", "rb")
)

# wait until the job is completed
while True:
    job = client.jobs.retrieve(job.id)
    if job.status == "completed":
        break
    elif job.status == "failed":
        raise ValueError(f"Job failed: {job.metadata}")
    print(f"Job status: {job.status}")
    time.sleep(1)

When documents are uploaded to Letta, they are represented in the context window as files in a folder, and can be "open" (in-context) or "closed" (hidden). The agent manages this automatically with its tools, but you can also manually open/close files (add/remove them from the context window).
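The polling loop above can be wrapped into a reusable helper with a timeout. This is an illustrative sketch, not part of the SDK; the client object and the "completed"/"failed" status strings follow the upload snippet above:

```python
import time

def wait_for_job(client, job_id, timeout=300.0, poll_interval=1.0):
    """Poll a Letta job until it completes, fails, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = client.jobs.retrieve(job_id)
        if job.status == "completed":
            return job
        if job.status == "failed":
            raise ValueError(f"Job failed: {job.metadata}")
        time.sleep(poll_interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

Usage is then a single call: `job = wait_for_job(client, job.id)`.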
Options for converting documents to markdown
There are two options for parsing files: the default markitdown package, or Mistral's OCR endpoint. To use Mistral, you must set LETTA_MISTRAL_API_KEY. If you're using Letta Cloud, Mistral OCR is enabled by default.
Memory blocks vs. filesystem vs. archival memory
Curious about when to use memory blocks vs. filesystem vs. archival memory? See our guidelines on how to manage a context hierarchy.