feat (provider): support reasoning tokens, cached input tokens, total token in usage information #6140


Merged

23 commits merged into v5 on May 5, 2025

Conversation

@lgrammel (Collaborator) commented May 4, 2025

Background

Over the past year, many providers have added token usage information such as cached input tokens and reasoning tokens. In addition, the meaning of total tokens is now provider-dependent: some providers include reasoning tokens in the total token count, while others do not.

Summary

Standardize sending reasoning tokens, cached prompt tokens, and total tokens from the providers.
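The standardized fields and the provider-dependent total can be sketched as below. This is an illustrative sketch only: the interface and function names are hypothetical and not necessarily the SDK's actual types or API.

```typescript
// Hypothetical standardized usage record. Field names are illustrative;
// they mirror the categories named in this PR, not a confirmed public type.
interface Usage {
  inputTokens?: number;
  outputTokens?: number;
  reasoningTokens?: number;
  cachedInputTokens?: number;
  totalTokens?: number;
}

// Because totalTokens is provider-dependent (some providers count reasoning
// tokens in it, some omit it entirely), a consumer cannot assume
// totalTokens === inputTokens + outputTokens. One defensive approach:
// prefer the provider-reported total, and only fall back to summing the
// parts the provider did report.
function resolveTotalTokens(usage: Usage): number | undefined {
  if (usage.totalTokens != null) {
    return usage.totalTokens; // trust the provider's own definition of "total"
  }
  if (usage.inputTokens == null && usage.outputTokens == null) {
    return undefined; // nothing reported; don't fabricate a total
  }
  return (usage.inputTokens ?? 0) + (usage.outputTokens ?? 0);
}
```

For example, `resolveTotalTokens({ inputTokens: 10, outputTokens: 5 })` yields 15, while a provider-supplied `totalTokens` is always passed through unchanged, even when it disagrees with the sum of the parts.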

@lgrammel changed the title from "1" to "support reasoning tokens, cached input tokens, total token in usage information" May 4, 2025
@lgrammel changed the title to "feat (provider): support reasoning tokens, cached input tokens, total token in usage information" May 4, 2025
@lgrammel lgrammel marked this pull request as ready for review May 5, 2025 07:45
@lgrammel lgrammel merged commit 7979f7f into v5 May 5, 2025
8 checks passed
@lgrammel lgrammel deleted the lg/ZJBhbtEH branch May 5, 2025 07:58
samdenty added a commit that referenced this pull request May 5, 2025
* origin/v5: (23 commits)
  chore (ai): remove StreamData and mergeStreams (#6150)
  chore (build): fix typecheck setup (#6151)
  chore (ai): remove IdGenerator duplication (#6147)
  chore (ai): remove content from ui messages (#6145)
  Version Packages (canary) (#6115)
  feat (provider): support reasoning tokens, cached input tokens, total token in usage information (#6140)
  test (ai): use `mock-provider` instead of `openai` to match used `MockImageModelV2` (#6132)
  fix(providers): always use optional instead of mix of nullish for providerOptions (#6122)
  chore (ai): remove sendExtraMessageFields (#6120)
  chore(provider-utils): move ToolResultContent to provider-utils (#6121)
  feat(embed-many): respect supportsParallelCalls & concurrency (#6108)
  chore (provider-utils): return IdGenerator interface (#6118)
  chore (provider): improve jsdoc comments (#6117)
  chore (ai): remove outdated typecasts (#6116)
  chore (ai): remove useChat keepLastMessageOnError (#6114)
  fix(providers/openai): logprobs for stream alongside completion model (#6091)
  chore (ai): move maxSteps into UseChatOptions (#6113)
  docs (ai): add notes on schema .optional() (#6111)
  chore: update references to use ai-sdk.dev (#6076) (#6107)
  Version Packages (canary) (#6069)
  ...
iteratetograceness pushed a commit to iteratetograceness/ai that referenced this pull request May 9, 2025
… token in usage information (vercel#6140)
