Releases: Mirascope/mirascope

Mirascope v2.0.0-alpha5

31 Dec 04:52
8e8a2ef

Pre-release

Hello and happy holidays! Here are the short-but-sweet release notes for the latest Mirascope v2 alpha release.

  • Improved exception handling: all LLM calls now raise consistent exceptions, such as llm.RateLimitError or llm.AuthenticationError (see the sketch after this list). You can learn more via our API docs, or by reading the source.
  • Improved semantics for Response.usage. Specifically, input_tokens now includes all input tokens (including cache reads and writes) and output_tokens includes all output tokens (including reasoning).
  • Added llm.reset_provider_registry() for resetting all provider configuration to defaults.
  • Removed llm.load_provider() and made it an implementation detail of llm.register_provider() (which both registers and returns the provider).
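
As a quick sketch of catching the new exceptions (the model id here is illustrative; the decorator pattern follows the examples in the alpha.3 notes below):

from mirascope import llm

@llm.call("openai/gpt-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

try:
    response = recommend_book("fantasy")
except llm.RateLimitError:
    ...  # back off and retry
except llm.AuthenticationError:
    ...  # check your API credentials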

What's Changed

  • refactor(cloud): add initial effect/sql-drizzle infra for migration by @willbakst in #1714
  • refactor(cloud): Users (@effect/sql-drizzle) by @willbakst in #1715
  • refactor(cloud): Sessions (@effect/sql-drizzle) by @willbakst in #1716
  • refactor(cloud): OrganizationMemberships (@effect/sql-drizzle) by @willbakst in #1724
  • refactor(cloud): Organizations (@effect/sql-drizzle) by @willbakst in #1725
  • feat(cloud): add desktop header and footer for home and other pages by @sourishkrout in #1721
  • fix(cloud): center app ui properly to avoid it hanging off to the side by @sourishkrout in #1723
  • fix(cloud): use tailwind for homepage buttons background color by @sourishkrout in #1749
  • refactor(cloud): ProjectMemberships (@effect/sql-drizzle) by @willbakst in #1729
  • refactor(cloud): Projects (@effect/sql-drizzle) by @willbakst in #1730
  • tests(cloud): use in-process test container rather than separate postgres (ensures fresh state) by @willbakst in #1731
  • fix(cloud): test warnings on testcontainer setup by @willbakst in #1733
  • refactor(cloud): remove old db services and replace with @effect/drizzle-sql implementation by @willbakst in #1734
  • fix(cloud): use fk violation to enforce org membership requirement on project membership creation by @willbakst in #1735
  • feat(cloud): db/Environments by @willbakst in #1591
  • feat(cloud): add environments API endpoints and tests by @willbakst in #1592
  • feat(cloud): add environments frontend (hooks, context, sidebar update) by @willbakst in #1593
  • feat(cloud): add API keys database schema, service, and tests by @willbakst in #1594
  • feat(cloud): add API keys API endpoints and tests by @willbakst in #1595
  • feat(cloud): add API key-based authentication for v0 routes by @willbakst in #1596
  • feat(cloud): add API keys frontend (hooks and settings section) by @willbakst in #1597
  • feat(cloud): add organization settings page with full management by @willbakst in #1598
  • feat(python-sdk): generate api for environments and API keys by @willbakst in #1599
  • fix(cloud): settings content using hooks outside of their context provider by @willbakst in #1602
  • fix(cloud): org deletion failure and api key copy button styling issues by @willbakst in #1603
  • chore: update AGENTS.md by @willbakst in #1600
  • tests(cloud): ignore lines in db client not worth covering by @willbakst in #1738
  • refactor(cloud): clean up auth/oauth.ts to improve readability and documentation by @willbakst in #1739
  • tests(cloud): auth 100% test coverage (ignoring oauth.ts which we test live) by @willbakst in #1740
  • refactor(cloud): remove Effect from naming now that @effect/sql-drizzle refactor is complete by @willbakst in #1744
  • fix(cloud): update makeReady to use Proxy instead of creating a new object by @willbakst in #1745
  • fix(cloud): enforce uniqueness on organizations, projects, and environments with slugs by @willbakst in #1746
  • chore(python-sdk): regenerate API for recent updates like schemas and slugs etc. by @willbakst in #1747
  • Remove ops.version decorator from closure by @koxudaxi in #1697
  • refactor(cloud): use mirple in place for Mirascope purple by @sourishkrout in #1751
  • test: ci.yml only invokes relevant tests based on changes by @teamdandelion in #1754
  • fix(api): distinguish handler NotFoundError from route-not-found in Effect API by @koxudaxi in #1635
  • feat(db): add traces and spans schema by @koxudaxi in #1604
  • feat(cloud): Effect-native Stripe wrapper by @willbakst in #1757
  • feat(cloud): create stripe customer on organization creation by @willbakst in #1758
  • feat(cloud): automatically add new organization to credits subscription for usage access by @willbakst in #1759
  • feat(cloud): display user's credit grants in their organization dashboard (temp just to prove we can show it, UI will be cleaned up) by @willbakst in #1760
  • feat(cloud): update stripe on organization update by @willbakst in #1761
  • feat(cloud): handle organization deletion with stripe by @willbakst in #1762
  • refactor(cloud): Payments service by @willbakst in #1763
  • test: ensure provider registry is cleared before all llm tests by @teamdandelion in #1782
  • feat: anthropic streams handle RateLimitError by @teamdandelion in #1783
  • feat: handle RateLimitErrors across anthropic by @teamdandelion in #1784
  • chore(vite): upgrade vite to v7 and bump its dependencies by @sourishkrout in #1753
  • feat: wrap rate limit errors in google provider by @teamdandelion in #1785
  • feat: wrap rate limit errors for openai completions by @teamdandelion in #1786
  • feat: wrap rate limit errors for openai responses by @teamdandelion in #1787
  • feat: llm.NotFoundError when models not found by @teamdandelion in #1788
  • feat: wrap llm authentication errors by @teamdandelion in #1789
  • feat: error wrapping for all exported anthropic api errors by @teamdandelion in #1790
  • feat: wrap openai errors by @teamdandelion in #1791
  • feat: wrap google errors by @teamdandelion in #1792
  • feat: wrap unknown provider errors in APIError by @teamdandelion in #1793
  • feat: wrap original exception and status code on errors by @teamdandelion in #1794
  • refactor(python-sdk): unify error handling at the llm.Model level by @willbakst in #1796
  • refactor(python-sdk): move error wrapping from llm.Model to BaseProvider by @teamdandelion in #1797
  • refactor(python-sdk): refactor and cleanup provider registry by @teamdandelion in #1798
  • chore: bump version to alpha.5 for release by @teamdandelion in #1795

Full Changelog: v2.0.0-alpha.4...v2.0.0-alpha.5

Mirascope v2.0.0-alpha.4

19 Dec 19:38
0f3e7e7

Pre-release

Response usage tracking

The llm.Response class now has a .usage property, which records provider token usage. When available, it is set on both regular and streaming responses. The llm.Usage class includes input tokens, output tokens, and (when the chosen provider reports them) cache read/write tokens and reasoning tokens.
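
As a minimal sketch of reading usage (the model id is illustrative; input_tokens and output_tokens are named in these notes, and the remaining llm.Usage fields are left to the API docs):

from mirascope import llm

@llm.call("openai/gpt-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

response = recommend_book("fantasy")
if response.usage is not None:  # usage is populated when the provider reports it
    print(response.usage.input_tokens, response.usage.output_tokens)
    # Cache read/write and reasoning token counts also live on llm.Usage
    # when the provider exposes them; see the API docs for the field names.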

Anthropic strict structured outputs

We now support strict structured response formatting with Anthropic on supported models. Since this is still a beta feature on Anthropic's end, it is only enabled if you explicitly request strict mode via the llm.format(formattable, mode="strict") function; otherwise, the Anthropic provider defaults to tool mode.
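
Here's a minimal sketch of opting in. llm.format(formattable, mode="strict") is the documented entry point; the Book model and the assumption that the resulting format is passed to the call decorator via format= are illustrative:

from pydantic import BaseModel

from mirascope import llm

class Book(BaseModel):
    title: str
    author: str

# Assumes the call decorator accepts the format produced by llm.format.
@llm.call("anthropic/claude-haiku-4-5", format=llm.format(Book, mode="strict"))
def recommend_book(genre: str):
    return f"Recommend a {genre} book."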

Improved Google strict structured outputs

Recent Google models have improved support for using strict mode alongside tool calling, so Mirascope now takes advantage of that and uses strict mode by default in more cases.

Anthropic Cache Control

We now automatically add cache control blocks to Anthropic prompts, bringing Anthropic in line with other providers (which configure caching automatically).

Added support: Together.ai and Ollama providers

We now have built-in support for using Together.ai and Ollama as LLM providers.
If you use a model_id starting with "ollama/", it will automatically invoke the Ollama provider (defaulting to an API endpoint at http://localhost:11434/v1/).
If you'd like to register these providers explicitly, whether to route "ollama/" models yourself or to call any model Together.ai supports, you can use register_provider as follows:

llm.register_provider("ollama", "ollama/")      # e.g. for `ollama/gemma3:4b`
llm.register_provider("together", "together/")  # e.g. for `together/moonshotai/Kimi-K2-Thinking`

What's Changed

  • fix(cloud): add path param aware where clause checks to base service methods by @willbakst in #1674
  • refactor(cloud): change ANNOTATOR role to VIEWER by @willbakst in #1675
  • refactor(cloud): add parent scoped where to findAll in base services + clean up and document base service file by @willbakst in #1676
  • docs(cloud): improve docs for organization service by @willbakst in #1677
  • feat(cloud): OrganizationMembershipService by @willbakst in #1678
  • refactor(cloud): update mocks to use test users in organization service tests where it makes sense by @willbakst in #1660
  • refactor(cloud): rename organization membership path to /members/:memberId by @willbakst in #1679
  • feat(cloud): add organization membership ACL change audit log by @willbakst in #1691
  • feat: use model_features framework to track anthropic model ids by @teamdandelion in #1693
  • feat: bump anthropic to 0.75.0 by @teamdandelion in #1694
  • feat: track strict structured output support for anthropic models by @teamdandelion in #1695
  • refactor: move ensure_additional_properties_false to base utils by @teamdandelion in #1696
  • chore: fixup mlx cassette tests by @teamdandelion in #1705
  • chore: include claude-sonnet-4-0 as a tracked anthropic model by @teamdandelion in #1706
  • chore(cloud): provide prettier default config plus ignore file to avoid vscode/cursor using their own defaults by @sourishkrout in #1708
  • fix(cloud): prettier formatting and exclusion of generated files by @sourishkrout in #1722
  • refactor(cloud): refactor UI code to prepare for incremental port of website features by @sourishkrout in #1633
  • feat(cloud): add shadcn-based ui design system by @sourishkrout in #1692
  • feat(refactor): disable builtin CSS validation to avoid tailwind v4 false positives and skip vendored ui components by @sourishkrout in #1703
  • feat(cloud): add barebones home page with highly local CSS styles by @sourishkrout in #1704
  • feat(cloud): add optimized images vite middleware in development by @sourishkrout in #1720
  • feat: introduce AnthropicBetaProvider for beta API support by @koxudaxi in #1713
  • refactor: reorg the AnthropicBetaProvider by @teamdandelion in #1718
  • feat: support strict structured outputs in anthropic provider by @teamdandelion in #1707
  • feat: google provider uses strict outputs wherever possible by @teamdandelion in #1709
  • feat: add response.usage by @teamdandelion in #1719
  • feat: add usage tracking for streams by @teamdandelion in #1726
  • chore: bump version to 2.0.0a4 by @teamdandelion in #1727
  • refactor(cloud): update organization roles to OWNER, ADMIN, and MEMBER by @willbakst in #1698
  • fix(cloud): add cascade delete so deleting an organization deletes memberships/audits by @willbakst in #1699
  • feat(cloud): add project and project_memberships schemas by @willbakst in #1685
  • fix(cloud): soft delete users and scrub PII by @willbakst in #1700
  • chore(cloud): remove old unused test code that MockDatabase replaced by @willbakst in #1710
  • fix(cloud): allow self removal from an organization by @willbakst in #1711
  • feat(cloud): ProjectMembershipService by @willbakst in #1686
  • refactor: deduplicate openai model tracking by @teamdandelion in #1728
  • feat: add anthropic cache control by @teamdandelion in #1732
  • feat(cloud): ProjectService by @willbakst in #1588
  • refactor(cloud): update api tests to be more effectful using @effect/vitest, deps, and fixtures by @willbakst in #1712
  • feat: add projects API endpoints by @willbakst in #1589
  • fix(cloud): convert _tag -> tag so Fern autogeneration works by @willbakst in #1649
  • feat(python-sdk): Fern autogenerated Projects API by @willbakst in #1650
  • feat: add projects frontend (hooks, contexts, sidebar, dashboard) by @willbakst in #1590
  • feat(providers): add OpenAICompatibleProvider base class by @koxudaxi in #1702
  • Implement support for together provider by @koxudaxi in #1701
  • Implement support for ollama provider by @koxudaxi in #1717
  • Separate OpenAI-compatible provider tests into dedicated test suite by @koxudaxi in #1736
  • refactor(tests): use before_record_request hook for VCR sanitization by @koxudaxi in #1741
  • fix: tighten up semantics on response.Usage by @teamdandelion in #1743

Full Changelog: v2.0.0-alpha.3...v2.0.0-alpha.4

v2.0.0-alpha.3

13 Dec 00:49
519aa68

Pre-release

Mirascope v2.0.0-alpha.3 Release Notes

Hello everyone! We are pleased to announce the latest alpha release of Mirascope: v2.0.0-alpha.3.

This release has an exciting new feature: the MLX provider, which enables local inference on Apple Silicon for MacBook users.

We also have a refactored provider system, where you can use llm.register_provider to configure which provider is used for any given LLM model. This means you can now use Mirascope v2 with any OpenAI-compatible model by registering the OpenAI provider with a custom base URL.

There are also core interface improvements, like the upgraded @llm.prompt decorator, which is now a model-agnostic alternative to @llm.call. Read more in the full notes below.

Generally, we are feeling good about the code quality and interfaces for Mirascope v2, and we're going to be releasing the official beta in the coming weeks. We welcome all feedback on the interfaces, as well as contributions from the community! You can start chatting with us in the #mirascope-v2-alpha channel to get involved.

Changes

New: MLX Support

First, a big thank you to @tnadav for implementing Mirascope MLX support! You can now run local models from the mlx-community on HuggingFace directly on your MacBook. Here's an example:

from mirascope import llm

@llm.call("mlx-community/Qwen3-8B-4bit-DWQ-053125")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

response = recommend_book("fantasy")
print(response.pretty())

Right now, tools and structured output formats are not supported, but @tnadav is actively working on them :)

New: llm.register_provider to customize provider behavior

You can now use llm.register_provider to specify what provider should be used for a given model. By default, Mirascope makes the OpenAI, Anthropic, Google, and MLX providers available (assuming their dependencies are installed). However, you can use register_provider to customize them, or give them additional scope. For example:

from mirascope import llm

llm.register_provider(
    "openai:completions",
    scope="ollama/",
    base_url="http://localhost:11434/v1",
    api_key="ollama",
)

@llm.call("ollama/gpt-oss:20b")
def recommend_book(genre: str):
    return f"Recommend a {genre} book"

response = recommend_book("fantasy")
print(response.pretty())

In this example, a local Ollama model can be called via Mirascope's OpenAIProvider.

The scope is simply a string with prefix matching (so "ollama/" will match any model_id starting with "ollama/").

New llm.Prompt class from the @llm.prompt decorator

The @llm.prompt decorator now accepts tools and an output format, and produces an llm.Prompt that can be called with any model. This makes the prompt decorator a model-agnostic version of the call decorator, so you can use it to fully define a prompt without specifying a default model. The logic is "Prompt + Model = Call".

Here's an example:

from mirascope import llm

@llm.prompt
def recommend_book(genre: str):
  return f"Recommend a {genre} book"

model = llm.model("openai/gpt-5")
response = recommend_book(model, "fantasy")
print(response.pretty())

Note that you can add tools or a response format to the prompt decorator, and that Prompt works just like Call: it supports streaming via recommend_book.stream(model, "fantasy"), async usage, context prompts, and so on.
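
For instance, a minimal streaming sketch reusing the prompt and model defined above (pretty_stream is the same helper used in the ops example further down):

stream_response = recommend_book.stream(model, "fantasy")
for chunk in stream_response.pretty_stream():
    print(chunk, end="", flush=True)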

Unified ModelId type

We now specify models via a unified ModelId that specifies the model developer and the model name.
So, instead of @llm.call(provider="openai", model_id="gpt-5"), we now use @llm.call("openai/gpt-5").

Unified OpenAIProvider

Originally, we had two separate providers for OpenAI, depending on whether you wanted to use the Completions or Responses API. We've now unified them into a single OpenAI provider, with optional API mode selection via a :responses or :completions suffix on the model id string. For example, @llm.call("openai/gpt-5:responses") will use the Responses API, while @llm.call("openai/gpt-5:completions") will hit the Completions API. If you don't specify an API mode, one is selected automatically (usually Responses, unless you are using audio or your chosen model does not support the Responses API).

Use llm.Model as a context manager

You may now use an llm.Model as a context manager directly, in which case it overrides the default model in your calls:

@llm.call("anthropic/claude-4-5-sonnet")
def recommend_book(genre: str):
  return f"Recommend a {genre} book."

model = llm.Model("openai/gpt-5", temperature=0.42)
with model:
  response = recommend_book("fantasy")

Pass llm.Model to @llm.call

You can now pass an llm.Model directly to the call decorator in lieu of a model id. This lets you define a model once with custom params, then pass it into all of your calls.

model = llm.Model("openai/gpt-5", temperature=0.42)

@llm.call(model)
def recommend_book(genre: str):
  return f"Recommend a {genre} book."

New ops module for instrumenting llm.call with OpenTelemetry

Rather than building integrations for every observability tool (which are impossible to keep up with), we've chosen to support OpenTelemetry, a standard that those tools can consume.

You can easily instrument any llm.call-decorated function or llm.model usage with ops.trace to export spans:

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from mirascope import llm, ops

# Configure an SDK tracer provider with a simple console exporter.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Enable GenAI 1.38 instrumentation.
ops.configure(tracer_provider=provider)
ops.instrument_llm()


@ops.trace
@llm.call("anthropic/claude-haiku-4-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book"


# recommend_book.stream("mystery") returns a StreamResponse directly.
stream_response = recommend_book.stream("mystery")
for chunk in stream_response.pretty_stream():
    print(chunk, end="", flush=True)

Get Involved

Let us know your thoughts on Mirascope v2 here in the Discord and share what you're creating with it; we'd love to have you involved!

What's Changed

  • Port the lilypad.span implementation in Mirascope v2 as ops.span by @koxudaxi in #1507
  • Port the lilypad.trace functionality into Mirascope v2 as ops.trace by @koxudaxi in #1508
  • Port context propagation from Lilypad into Mirascope v2 by @koxudaxi in #1509
  • refactor(cloud): adds initial structure for mirascope cloud by @willbakst in #1510
  • docs: add AGENTS.md and update README/STRUCTURE for agent usage by @willbakst in #1511
  • feat: autogenerate Python SDK for Mirascope Cloud using Fern by @willbakst in #1512
  • refactor: move Fern generated SDK + Mirascope/AsyncMirascope clients into an api/ package by @willbakst in #1515
  • refactor: cloud/ git ignores into cloud/.gitignore + update STRUCTURE.md by @willbakst in #1516
  • feat: implement an oRPC API for OpenAPI generation in prep for tanstack start migration by @willbakst in #1517
  • refactor: use TanStack Start by @willbakst in #1518
  • chore: rename api schemas more explicitly and move openapi generation script to be colocated with api by @willbakst in #1519
  • chore: specify ToolOutput[Jsonable] by @teamdandelion in #1521
  • chore: fixup more implicit Unknown types in tests and utils by @teamdandelion in #1522
  • chore: resolve implicit unknown generic types involving tools by @teamdandelion in #1523
  • chore: fix inaccurate return types for context responses in clients by @teamdandelion in #1524
  • chore: fix unknown type issues in client code by @teamdandelion in #1525
  • chore: disallow unknown parameter types by @teamdandelion in #1526
  • chore: disallow missing parameter types by @teamdandelion in #1527
  • chore: pyright disallow missing type arguments by @teamdandelion in #1528
  • chore: enable strictListInference in pyright by @teamdandelion in #1529
  • chore: remove agent and graphs directories by @teamdandelion in #1530
  • Instrument the llm.Model class with OpenTelemetry Gen AI by @koxudaxi in #1496
  • Add code generation for Gen AI types from OpenTelemetry schemas by @koxudaxi in #1551
  • Add OpenTelemetry instrumentation for Model.stream by @koxudaxi in #1503
  • Add OpenTelemetry instrumentation for Model.call_async by @koxudaxi in #1504
  • Add OpenTelemetry instrumentation for Model.context_call by @koxudaxi in #1505
  • Add OpenTelemetry instrumentation for Model.context_call_async by @koxudaxi in https://githu...

v2.0.0-alpha.2

15 Nov 20:46

Pre-release

What's Changed

Full Changelog: v2.0.0-alpha.1...v2.0.0-alpha.2

v1.25.7

08 Nov 00:03

What's Changed

Full Changelog: v1.25.6...v1.25.7

v2.0.0-alpha.1

06 Nov 22:49
617f88d

Pre-release

What's Changed

Full Changelog: v2.0.0-alpha.0...v2.0.0-alpha.1

v2.0.0-alpha.0

06 Nov 01:42
b7a7571

Pre-release

What's Changed


v1.25.6

14 Aug 22:53
48c65d8

What's Changed

  • feat: add Chroma Cloud support by @kylediaz in #1189
  • fix: issue with mistral client when not set at a global level by @willbakst in #1212
  • fix: update fallback to use context so it can work with streaming by @willbakst in #1211

New Contributors

Full Changelog: v1.25.5...v1.25.6

v1.25.5

05 Aug 22:06

What's Changed

New Contributors

Full Changelog: v1.25.4...v1.25.5

v1.25.4

07 Jul 20:46
dea6fea

What's Changed

Full Changelog: v1.25.3...v1.25.4