Releases: Mirascope/mirascope
Mirascope v2.0.0-alpha.5
Hello and happy holidays! Here are the short-but-sweet release notes for the latest Mirascope v2 Alpha Release.
- Improved exception handling: all LLM calls now emit consistent exceptions, such as `llm.RateLimitError` or `llm.AuthenticationError` (see the sketch after this list). You can learn more via our API docs, or by reading the source.
- Improved semantics for `Response.usage`. Specifically, `input_tokens` now includes all input tokens (including cache reads and writes) and `output_tokens` includes all output tokens (including reasoning).
- Added `llm.reset_provider_registry()` for resetting all provider configuration to defaults.
- Removed `llm.load_provider()` and made it an implementation detail of `llm.register_provider()` (which both registers and returns the provider).
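A minimal sketch of catching the new unified exceptions (the model id and handler bodies here are illustrative):

```python
from mirascope import llm

@llm.call("openai/gpt-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

try:
    response = recommend_book("fantasy")
except llm.RateLimitError:
    ...  # e.g. back off and retry
except llm.AuthenticationError:
    ...  # e.g. surface a configuration error
```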
What's Changed
- refactor(cloud): add initial effect/sql-drizzle infra for migration by @willbakst in #1714
- refactor(cloud): Users (@effect/sql-drizzle) by @willbakst in #1715
- refactor(cloud): Sessions (@effect/sql-drizzle) by @willbakst in #1716
- refactor(cloud): OrganizationMemberships (@effect/sql-drizzle) by @willbakst in #1724
- refactor(cloud): Organizations (@effect/sql-drizzle) by @willbakst in #1725
- feat(cloud): add desktop header and footer for home and other pages by @sourishkrout in #1721
- fix(cloud): center app ui properly to avoid it hanging off to the side by @sourishkrout in #1723
- fix(cloud): use tailwind for homepage buttons background color by @sourishkrout in #1749
- refactor(cloud): ProjectMemberships (@effect/sql-drizzle) by @willbakst in #1729
- refactor(cloud): Projects (@effect/sql-drizzle) by @willbakst in #1730
- tests(cloud): use in-process test container rather than separate postgres (ensures fresh state) by @willbakst in #1731
- fix(cloud): test warnings on testcontainer setup by @willbakst in #1733
- refactor(cloud): remove old db services and replace with @effect/drizzle-sql implementation by @willbakst in #1734
- fix(cloud): use fk violation to enforce org membership requirement on project membership creation by @willbakst in #1735
- feat(cloud): db/Environments by @willbakst in #1591
- feat(cloud): add environments API endpoints and tests by @willbakst in #1592
- feat(cloud): add environments frontend (hooks, context, sidebar update) by @willbakst in #1593
- feat(cloud): add API keys database schema, service, and tests by @willbakst in #1594
- feat(cloud): add API keys API endpoints and tests by @willbakst in #1595
- feat(cloud): add API key-based authentication for v0 routes by @willbakst in #1596
- feat(cloud): add API keys frontend (hooks and settings section) by @willbakst in #1597
- feat(cloud): add organization settings page with full management by @willbakst in #1598
- feat(python-sdk): generate `api` for environments and API keys by @willbakst in #1599
- fix(cloud): settings content using hooks outside of their context provider by @willbakst in #1602
- fix(cloud): org deletion failure and api key copy button styling issues by @willbakst in #1603
- chore: update AGENTS.md by @willbakst in #1600
- tests(cloud): ignore lines in db client not worth covering by @willbakst in #1738
- refactor(cloud): clean up auth/oauth.ts to improve readability and documentation by @willbakst in #1739
- tests(cloud): auth 100% test coverage (ignoring oauth.ts which we test live) by @willbakst in #1740
- refactor(cloud): remove Effect from naming now that @effect/sql-drizzle refactor is complete by @willbakst in #1744
- fix(cloud): update makeReady to use Proxy instead of creating a new object by @willbakst in #1745
- fix(cloud): enforce uniqueness on organizations, projects, and environments with slugs by @willbakst in #1746
- chore(python-sdk): regenerate API for recent updates like schemas and slugs etc. by @willbakst in #1747
- Remove ops.version decorator from closure by @koxudaxi in #1697
- refactor(cloud): use mirple in place of Mirascope purple by @sourishkrout in #1751
- test: ci.yml only invokes relevant tests based on changes by @teamdandelion in #1754
- fix(api): distinguish handler NotFoundError from route-not-found in Effect API by @koxudaxi in #1635
- feat(db): add traces and spans schema by @koxudaxi in #1604
- feat(cloud): Effect-native Stripe wrapper by @willbakst in #1757
- feat(cloud): create stripe customer on organization creation by @willbakst in #1758
- feat(cloud): automatically add new organization to credits subscription for usage access by @willbakst in #1759
- feat(cloud): display user's credit grants in their organization dashboard (temp just to prove we can show it, UI will be cleaned up) by @willbakst in #1760
- feat(cloud): update stripe on organization update by @willbakst in #1761
- feat(cloud): handle organization deletion with stripe by @willbakst in #1762
- refactor(cloud): Payments service by @willbakst in #1763
- test: ensure provider registry is cleared before all llm tests by @teamdandelion in #1782
- feat: anthropic streams handle RateLimitError by @teamdandelion in #1783
- feat: handle RateLimitErrors across anthropic by @teamdandelion in #1784
- chore(vite): upgrade vite to v7 and bump its dependencies by @sourishkrout in #1753
- feat: wrap rate limit errors in google provider by @teamdandelion in #1785
- feat: wrap rate limit errors for openai completions by @teamdandelion in #1786
- feat: wrap rate limit errors for openai responses by @teamdandelion in #1787
- feat: llm.NotFoundError when models not found by @teamdandelion in #1788
- feat: wrap llm authentication errors by @teamdandelion in #1789
- feat: error wrapping for all exported anthropic api errors by @teamdandelion in #1790
- feat: wrap openai errors by @teamdandelion in #1791
- feat: wrap google errors by @teamdandelion in #1792
- feat: wrap unknown provider errors in APIError by @teamdandelion in #1793
- feat: wrap original exception and status code on errors by @teamdandelion in #1794
- refactor(python-sdk): unify error handling at the llm.Model level by @willbakst in #1796
- refactor(python-sdk): move error wrapping from llm.Model to BaseProvider by @teamdandelion in #1797
- refactor(python-sdk): refactor and cleanup provider registry by @teamdandelion in #1798
- chore: bump version to alpha.5 for release by @teamdandelion in #1795
Full Changelog: v2.0.0-alpha.4...v2.0.0-alpha.5
Mirascope v2.0.0-alpha.4
Response usage tracking
The `llm.Response` class now has a `.usage` property, which records provider token usage. When available, this is set on both regular responses and stream responses. The `llm.Usage` class includes input tokens, output tokens, and (when available with the chosen provider) cache read/write tokens and reasoning tokens.
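A minimal sketch of reading usage (assuming `usage` is unset when the provider reports nothing; only the field names mentioned in these notes are used):

```python
from mirascope import llm

@llm.call("openai/gpt-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

response = recommend_book("fantasy")
if response.usage is not None:  # populated when the provider reports usage
    print(response.usage.input_tokens, response.usage.output_tokens)
```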
Anthropic strict structured outputs
We now support strict structured response formatting with Anthropic, for supported models. Since this is still a beta feature on Anthropic's end, it will only be enabled if you manually specify strict mode using the `llm.format(formattable, mode="strict")` function. Otherwise, the Anthropic provider defaults to tool mode.
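A hedged sketch of opting in (`Book` is illustrative, and passing the format via a `format=` argument on the decorator is an assumption; consult the docs for the exact parameter):

```python
from pydantic import BaseModel

from mirascope import llm

class Book(BaseModel):
    title: str
    author: str

# llm.format(..., mode="strict") explicitly requests Anthropic's beta strict
# structured outputs; without it, the Anthropic provider defaults to tool mode.
# NOTE: the `format=` decorator argument is assumed for illustration.
@llm.call("anthropic/claude-haiku-4-5", format=llm.format(Book, mode="strict"))
def recommend_book(genre: str):
    return f"Recommend a {genre} book."
```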
Improved Google strict structured outputs
Recent Google models improve support for using strict mode along with tool calling, so Mirascope now takes advantage of that and uses strict mode by default in more cases.
Anthropic Cache Control
We now automatically add cache control blocks to Anthropic prompts, bringing Anthropic in line with other providers (which configure caching automatically).
Added support: Together.ai and Ollama providers
We now have built-in support for using Together.ai and Ollama as LLM providers.
If you use a `model_id` starting with `"ollama/"`, it will automatically invoke the Ollama provider (defaulting to an API endpoint at `http://localhost:11434/v1/`).
If you'd like to register these providers explicitly, whether for an Ollama model or for any model supported by Together.ai, you can use `register_provider`, as follows:

```python
llm.register_provider("ollama", "ollama/")  # e.g. for `ollama/gemma3:4b`
llm.register_provider("together", "together/")  # e.g. for `together/moonshotai/Kimi-K2-Thinking`
```

What's Changed
- fix(cloud): add path param aware where clause checks to base service methods by @willbakst in #1674
- refactor(cloud): change ANNOTATOR role to VIEWER by @willbakst in #1675
- refactor(cloud): add parent scoped where to findAll in base services + clean up and document base service file by @willbakst in #1676
- docs(cloud): improve docs for organization service by @willbakst in #1677
- feat(cloud): OrganizationMembershipService by @willbakst in #1678
- refactor(cloud): update mocks to use test users in organization service tests where it makes sense by @willbakst in #1660
- refactor(cloud): rename organization membership path to /members/:memberId by @willbakst in #1679
- feat(cloud): add organization membership ACL change audit log by @willbakst in #1691
- feat: use model_features framework to track anthropic model ids by @teamdandelion in #1693
- feat: bump anthropic to 0.75.0 by @teamdandelion in #1694
- feat: track strict structured output support for anthropic models by @teamdandelion in #1695
- refactor: move ensure_additional_properties_false to base utils by @teamdandelion in #1696
- chore: fixup mlx cassette tests by @teamdandelion in #1705
- chore: include claude-sonnet-4-0 as a tracked anthropic model by @teamdandelion in #1706
- chore(cloud): provide prettier default config plus ignore file to avoid vscode/cursor using their own defaults by @sourishkrout in #1708
- fix(cloud): prettier formatting and exclusion of generated files by @sourishkrout in #1722
- refactor(cloud): refactor UI code to prepare for incremental port of website features by @sourishkrout in #1633
- feat(cloud): add shadcn-based ui design system by @sourishkrout in #1692
- feat(refactor): disable builtin CSS validation to avoid tailwind v4 false positives and skip vendored ui components by @sourishkrout in #1703
- feat(cloud): add barebones home page with highly local CSS styles by @sourishkrout in #1704
- feat(cloud): add optimized images vite middleware in development by @sourishkrout in #1720
- feat: introduce AnthropicBetaProvider for beta API support by @koxudaxi in #1713
- refactor: reorg the AnthropicBetaProvider by @teamdandelion in #1718
- feat: support strict structured outputs in anthropic provider by @teamdandelion in #1707
- feat: google provider uses strict outputs wherever possible by @teamdandelion in #1709
- feat: add `response.usage` by @teamdandelion in #1719
- feat: add usage tracking for streams by @teamdandelion in #1726
- chore: bump version to 2.0.0a4 by @teamdandelion in #1727
- refactor(cloud): update organization roles to OWNER, ADMIN, and MEMBER by @willbakst in #1698
- fix(cloud): add cascade delete so deleting an organization deletes memberships/audits by @willbakst in #1699
- feat(cloud): add project and project_memberships schemas by @willbakst in #1685
- fix(cloud): soft delete users and scrub PII by @willbakst in #1700
- chore(cloud): remove old unused test code that MockDatabase replaced by @willbakst in #1710
- fix(cloud): allow self removal from an organization by @willbakst in #1711
- feat(cloud): ProjectMembershipService by @willbakst in #1686
- refactor: deduplicate openai model tracking by @teamdandelion in #1728
- feat: add anthropic cache control by @teamdandelion in #1732
- feat(cloud): ProjectService by @willbakst in #1588
- refactor(cloud): update api tests to be more effectful using @effect/vitest, deps, and fixtures by @willbakst in #1712
- feat: add projects API endpoints by @willbakst in #1589
- fix(cloud): convert _tag -> tag so Fern autogeneration works by @willbakst in #1649
- feat(python-sdk): Fern autogenerated Projects API by @willbakst in #1650
- feat: add projects frontend (hooks, contexts, sidebar, dashboard) by @willbakst in #1590
- feat(providers): add OpenAICompatibleProvider base class by @koxudaxi in #1702
- Implement support for together provider by @koxudaxi in #1701
- Implement support for ollama provider by @koxudaxi in #1717
- Separate OpenAI-compatible provider tests into dedicated test suite by @koxudaxi in #1736
- refactor(tests): use before_record_request hook for VCR sanitization by @koxudaxi in #1741
- fix: tighten up semantics on response.Usage by @teamdandelion in #1743
Full Changelog: v2.0.0-alpha.3...v2.0.0-alpha.4
v2.0.0-alpha.3
Mirascope v2.0.0-alpha.3 Release Notes
Hello everyone! We are pleased to announce the latest alpha release of Mirascope: v2.0.0-alpha.3.
This release has an exciting new feature: the MLX Provider, allowing local inference on Apple Silicon for MacBook users.
We also have a refactored provider system, where you can use `llm.register_provider` to configure which provider will be used for any given LLM model. This means you can now use Mirascope v2 with any OpenAI-compatible model by registering the OpenAI provider with a custom base URL.
There are also core interface improvements, like upgrading the `@llm.prompt` decorator into a model-agnostic alternative to `@llm.call`. Read more in the full notes below.
Generally, we are feeling good about the code quality and interfaces for Mirascope v2, and we're going to be releasing the official beta in the coming weeks. We welcome all feedback on the interfaces, as well as contributions from the community! You can start chatting with us in the #mirascope-v2-alpha channel to get involved.
Changes
New: MLX Support
First, a big thank you to @tnadav for implementing Mirascope MLX support! You can now run local models from the mlx-community on HuggingFace directly on your MacBook. Here's an example:

```python
from mirascope import llm

@llm.call("mlx-community/Qwen3-8B-4bit-DWQ-053125")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

response = recommend_book("fantasy")
print(response.pretty())
```

Right now tools and structured formats are not supported, but @tnadav is actively working on them :)
New: `llm.register_provider` to customize provider behavior
You can now use `llm.register_provider` to specify which provider should be used for a given model. By default, Mirascope makes the OpenAI, Anthropic, Google, and MLX providers available (assuming their dependencies are installed). However, you can use `register_provider` to customize them, or give them additional scope. For example:
```python
from mirascope import llm

llm.register_provider(
    "openai:completions",
    scope="ollama/",
    base_url="http://localhost:11434/v1",
    api_key="ollama",
)

@llm.call("ollama/gpt-oss:20b")
def recommend_book(genre: str):
    return f"Recommend a {genre} book"

response = recommend_book("fantasy")
print(response.pretty())
```

In this example, a local Ollama model is called via Mirascope's `OpenAIProvider`.
The scope is simply a string with prefix matching (so `ollama/` will match any `model_id` starting with `ollama/`).
New `llm.Prompt` class from the `@llm.prompt` decorator
The `@llm.prompt` decorator now accepts tools and an output format, and produces an `llm.Prompt` that can be called with any model. This makes the prompt decorator a model-agnostic version of the call decorator, so you can use it to fully define a prompt without specifying a default model. The logic is "Prompt + Model = Call".
Here's an example:
```python
from mirascope import llm

@llm.prompt
def recommend_book(genre: str):
    return f"Recommend a {genre} book"

model = llm.model("openai/gpt-5")
response = recommend_book(model, "fantasy")
print(response.pretty())
```

Note you can add tools or a response format to the prompt decorator, and that `Prompt` works just like `Call` in supporting streaming via `recommend_book.stream(model, "fantasy")`, async prompts, context prompts, etc.
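For instance, a minimal streaming sketch reusing the prompt and model above (`pretty_stream` mirrors the ops example later in these notes):

```python
# Stream the same prompt against the explicitly provided model.
stream_response = recommend_book.stream(model, "fantasy")
for chunk in stream_response.pretty_stream():
    print(chunk, end="", flush=True)
```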
Unified `ModelId` type
We now specify models via a unified `ModelId` that specifies the model developer and the model name.
So, instead of `@llm.call(provider="openai", model_id="gpt-5")`, we now use `@llm.call("openai/gpt-5")`.
Unified OpenAIProvider
Originally, we had two separate providers for OpenAI, depending on whether you wanted to use the Completions or Responses APIs. We've now unified them into a single OpenAI provider, with optional API mode selection by adding a `:responses` or `:completions` suffix to the model id string. For example, `@llm.call("openai/gpt-5:responses")` will use the Responses API, while `@llm.call("openai/gpt-5:completions")` will hit the Completions API. If you don't specify an API mode, one will be selected automatically (usually Responses, unless you are using audio or your chosen model does not support Responses).
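A quick sketch pinning each mode explicitly (function bodies are illustrative):

```python
from mirascope import llm

# Force the Responses API for this call.
@llm.call("openai/gpt-5:responses")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

# Force the Completions API for this one.
@llm.call("openai/gpt-5:completions")
def summarize_book(title: str):
    return f"Summarize the book {title}."
```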
Use `llm.Model` as a context manager
You may now use an `llm.Model` as a context manager directly, in which case it will override the default model in your Calls:

```python
@llm.call("anthropic/claude-4-5-sonnet")
def recommend_book(genre: str):
    return f"Recommend a {genre} book."

model = llm.Model("openai/gpt-5", temperature=0.42)
with model:
    response = recommend_book("fantasy")
```

Pass `llm.Model` to `@llm.call`
You can now pass an `llm.Model` directly to the call decorator, in lieu of a model id. This allows you to define a model once with custom params and then pass it into all of your calls.

```python
model = llm.Model("openai/gpt-5", temperature=0.42)

@llm.call(model)
def recommend_book(genre: str):
    return f"Recommend a {genre} book."
```

New `ops` module for instrumenting `llm.call` with OpenTelemetry
Rather than supporting various observability tools directly (which are impossible to keep up with), we've chosen to support OpenTelemetry as a standard that those tools should support.
You can easily instrument any `llm.call`-decorated function or `llm.model` function with `ops.trace` to export spans:

```python
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from mirascope import llm, ops

# Configure an SDK tracer provider with a simple console exporter.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))

# Enable GenAI 1.38 instrumentation.
ops.configure(tracer_provider=provider)
ops.instrument_llm()

@ops.trace
@llm.call("anthropic/claude-haiku-4-5")
def recommend_book(genre: str):
    return f"Recommend a {genre} book"

print(
    "\n=== TracedCall.stream: recommend_book.stream('mystery') returns StreamResponse directly ==="
)
stream_response = recommend_book.stream("mystery")
for chunk in stream_response.pretty_stream():
    print(chunk, end="", flush=True)
```

Get Involved
Let us know your thoughts on Mirascope v2 in the Discord and share what you're creating with it; we'd love to have you involved!
What's Changed
- Port the lilypad.span implementation in Mirascope v2 as ops.span by @koxudaxi in #1507
- Port the lilypad.trace functionality into Mirascope v2 as ops.trace by @koxudaxi in #1508
- Port context propagation from Lilypad into Mirascope v2 by @koxudaxi in #1509
- refactor(cloud): adds initial structure for mirascope cloud by @willbakst in #1510
- docs: add AGENTS.md and update README/STRUCTURE for agent usage by @willbakst in #1511
- feat: autogenerate Python SDK for Mirascope Cloud using Fern by @willbakst in #1512
- refactor: move Fern generated SDK + Mirascope/AsyncMirascope clients into an api/ package by @willbakst in #1515
- refactor: cloud/ git ignores into cloud/.gitignore + update STRUCTURE.md by @willbakst in #1516
- feat: implement an oRPC API for OpenAPI generation in prep for tanstack start migration by @willbakst in #1517
- refactor: use TanStack Start by @willbakst in #1518
- chore: rename api schemas more explicitly and move openapi generation script to be colocated with api by @willbakst in #1519
- chore: specify `ToolOutput[Jsonable]` by @teamdandelion in #1521
- chore: fixup more implicit Unknown types in tests and utils by @teamdandelion in #1522
- chore: resolve implicit unknown generic types involving tools by @teamdandelion in #1523
- chore: fix inaccurate return types for context responses in clients by @teamdandelion in #1524
- chore: fix unknown type issues in client code by @teamdandelion in #1525
- chore: disallow unknown parameter types by @teamdandelion in #1526
- chore: disallow missing parameter types by @teamdandelion in #1527
- chore: pyright disallow missing type arguments by @teamdandelion in #1528
- chore: enable strictListInference in pyright by @teamdandelion in #1529
- chore: remove agent and graphs directories by @teamdandelion in #1530
- Instrument the llm.Model class with OpenTelemetry Gen AI by @koxudaxi in #1496
- Add code generation for Gen AI types from OpenTelemetry schemas by @koxudaxi in #1551
- Add OpenTelemetry instrumentation for Model.stream by @koxudaxi in #1503
- Add OpenTelemetry instrumentation for Model.call_async by @koxudaxi in #1504
- Add OpenTelemetry instrumentation for Model.context_call by @koxudaxi in #1505
- Add OpenTelemetry instrumentation for Model.context_call_async by @koxudaxi in https://githu...
v2.0.0-alpha.2
What's Changed
- fix: resolve import error with python 3.13.9 by @teamdandelion in #1497
- test: ensure tests are run against python 3.13 by @teamdandelion in #1498
- fix: fix the website build by @teamdandelion in #1501
- fix: add top level pyrightconfig that points to python/ by @teamdandelion in #1499
- test: e2e override tests use google as default provider by @teamdandelion in #1500
Full Changelog: v2.0.0-alpha.1...v2.0.0-alpha.2
v1.25.7
What's Changed
- Fix thinking stream reconstruction for Anthropic by @willbakst in #1495
Full Changelog: v1.25.6...v1.25.7
v2.0.0-alpha.1
What's Changed
- docs: write alpha intro docs by @teamdandelion in #1479
- fix: include pydantic and httpx as required deps by @teamdandelion in #1490
- fix: suppress MCP import errors by @teamdandelion in #1491
- fix: handle missing provider packages gracefully by @teamdandelion in #1492
- fix: upgrade deprecated tool.uv.dev-dependencies usage by @teamdandelion in #1493
- fix: bump version to 2.0.0-alpha.1, add install instructions by @teamdandelion in #1494
Full Changelog: v2.0.0-alpha.0...v2.0.0-alpha.1
v2.0.0-alpha.0
What's Changed
- Setup documentation site for v2 by @teamdandelion in #991
- chore: cleanup v2 branch to only contain v2 code by @teamdandelion in #992
- chore: remove Text content dataclass by @teamdandelion in #1026
- V2 docs cloudflare by @willbakst in #1028
- chore: update comment on robots.txt disallow for v2 by @willbakst in #1032
- Add messages.mdx with examples by @teamdandelion in #1033
- chore: demote shorthand msg constructors by @teamdandelion in #1036
- v2 Prompt cleanup by @teamdandelion in #1071
- feat: initial v2 typescript structure by @willbakst in #1034
- v2: monorepo reorg by @teamdandelion in #1078
- v2: fix docs build by @teamdandelion in #1079
- Mirascope v2 learn/prompts.mdx by @teamdandelion in #1077
- fix: improve typescript typechecking by @teamdandelion in #1081
- v2: Set up consistent CI across the monorepo by @teamdandelion in #1089
- v2: Setup lint-staged precommit hooks by @teamdandelion in #1090
- TypeScript Prompt Templates Interface & Documentation by @willbakst in #1092
- fix(v2): issue with response format interface by @willbakst in #1093
- Add calls.mdx, iterate on Response api by @teamdandelion in #1094
- Rewrite calls and prompt templates guides by @teamdandelion in #1096
- V2 dev improvements by @teamdandelion in #1097
- refactor: wrap all str content in a llm.Text by @teamdandelion in #1103
- feat: document async calls in calls.mdx by @teamdandelion in #1101
- refactor: Remove ResponseContent and ContextResponse/ContextStream by @teamdandelion in #1104
- Stream & ContentChunk refactors by @teamdandelion in #1105
- add streams.mdx by @teamdandelion in #1100
- nit: fix various inconsistencies in the calls guide by @teamdandelion in #1107
- v2: more consistent media representation & llm.ImageUrl by @teamdandelion in #1108
- refactor: separate llm.Content and llm.Message types by @teamdandelion in #1109
- v2: Agent takes context wrapping deps, not deps directly by @teamdandelion in #1112
- v2: add response.to_tool, stream.to_tool, remove ContextTool by @teamdandelion in #1111
- Merge of PRs 1113-1119 by @teamdandelion in #1121
- Rework tools interface, add support for async tools by @teamdandelion in #1122
- refactor: drop `llm.context` ctx manager, use `llm.Context` explicitly instead by @teamdandelion in #1123
- refactor: make ctx optional for agents by @teamdandelion in #1124
- refactor: unify BaseAgent/Agent/AsyncAgent into agent.py by @teamdandelion in #1125
- chore: use `@llm.tool` not `@llm.tool()` by @teamdandelion in #1126
- feat: add unified examples.mdx for LLM consumption by @teamdandelion in #1127
- refactor: rename ResponseFormat -> Format by @teamdandelion in #1128
- feat: exploratory response format interface, with examples by @teamdandelion in #1129
- refactor: remove ctx from response/stream by @teamdandelion in #1130
- nit: enforce usage of ruff format by @teamdandelion in #1131
- feat: add jinja2-based templates for example generation by @teamdandelion in #1132
- feat: redone agent interface + fresh examples by @teamdandelion in #1133
- feat: update script for examples.mdx by @teamdandelion in #1134
- nit: fixup call pattern in basic_usage.j2 by @teamdandelion in #1135
- feat: add llm.Partial for partial base models by @teamdandelion in #1136
- feat: 64 auto-generated "sazed" examples with ts templates by @teamdandelion in #1137
- fix: report unnecessary type ignores by @teamdandelion in #1139
- feat: split call and tool decorators into context/non context by @teamdandelion in #1140
- feat: add mcp client by @teamdandelion in #1142
- chore: refactor example generator by @teamdandelion in #1143
- feat: infer deps_type for context calls and context tools by @teamdandelion in #1145
- chore: consolidate content files by @teamdandelion in #1147
- refactor: create Text content type rather than raw str by @teamdandelion in #1148
- chore: extract prompt types into prompts/types.py by @teamdandelion in #1149
- chore: comment out provider-specific overloads by @teamdandelion in #1150
- feat: verify context tool DepsT matches call/agent by @teamdandelion in #1151
- feat: remove deps_type arg from agent decorator by @teamdandelion in #1152
- chore: nuke all non-sazed examples by @teamdandelion in #1155
- feat: add `tool.__call__`, rename `tool.call` to `tool.execute` by @teamdandelion in #1153
- refactor: cleanup response to remove singleton accessors (except text) by @teamdandelion in #1154
- feat: redone StreamResponse / streams API by @teamdandelion in #1156
- refactor: streamline chunks, accumulate partials on streams by @teamdandelion in #1160
- chore: remove unnecessary overloads on base LLM class by @teamdandelion in #1161
- chore: reorganize clients directory by @teamdandelion in #1162
- test: ensure mirascope package can be imported by @teamdandelion in #1163
- feat: implement messages with testing by @teamdandelion in #1164
- test: start tracking code coverage by @teamdandelion in #1165
- refactor: clients only use generic llm.Message by @teamdandelion in #1168
- feat: implement Response & remove Usage by @teamdandelion in #1169
- feat: implement basic call for anthropic client by @teamdandelion in #1167
- test: remove coverage exclusions (except 2 files) by @teamdandelion in #1170
- feat: split provider and model into separate args by @teamdandelion in #1171
- nit: allow claude to use mkdir by @teamdandelion in #1174
- refactor: replace BaseConverter with utils by @teamdandelion in #1175
- chore: remove raw anthropic response snapshotting by @teamdandelion in #1176
- nit: add linear issue tracking for updating finish_reason by @teamdandelion in #1177
- feat: implement openai call handling by @teamdandelion in #1178
- feat: implement GoogleClient.call for text calls by @teamdandelion in #1179
- refactor: add client field to BaseClient by @teamdandelion in #1180
- chore: address downstack review nits by @teamdandelion in #1182
- feat: implement StreamResponse.chunk_stream() with testing by @teamdandelion in #1183
- feat: StreamingResponse supports finish_reason by @teamdandelion in #1184
- feat: anthropic streaming by @teamdandelion in #1185
- feat: openai streaming by @teamdandelion in #1186
- feat: add google streaming by @teamdandelion in https://github.com/Mirascope/mirascope...
v1.25.6
What's Changed
- feat: add Chroma Cloud support by @kylediaz in #1189
- fix: issue with mistral client when not set at a global level by @willbakst in #1212
- fix: update fallback to use context so it can work with streaming by @willbakst in #1211
New Contributors
- @kylediaz made their first contribution in #1189
Full Changelog: v1.25.5...v1.25.6
v1.25.5
What's Changed
- Fix _load_media fails with HTTP 403 Forbidden due to missing User-Agent #1172 by @emilioramirez in #1173
New Contributors
- @emilioramirez made their first contribution in #1173
Full Changelog: v1.25.4...v1.25.5
v1.25.4
What's Changed
- Fix inconsistent partial structured streaming by @willbakst in #1106
Full Changelog: v1.25.3...v1.25.4