- patch changeset for #509 fix
- clamp noCache at 0 if cached_tokens > prompt_tokens
- group prompt_tokens_details cases under their own describe block
- add intent comment explaining the OpenAI-style cached_tokens convention
Made-with: Cursor
Map `inputTokens.cacheRead` and `inputTokens.noCache` from Workers AI's `usage.prompt_tokens_details.cached_tokens` instead of always reporting them as `undefined`. This makes prompt-cache hits visible to consumers that compute pricing or telemetry from `LanguageModelV3Usage` (`generateText`/`streamText` `result.usage`).
`cached_tokens` is treated as `cacheRead`; `cacheWrite` remains `undefined` because the OpenAI-style usage shape Workers AI returns does not distinguish cache reads from writes.
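The mapping and the clamp described above can be sketched as follows. This is an illustrative sketch, not the actual provider code: the interfaces and the `mapInputTokens` helper name are assumptions, while the field names (`prompt_tokens`, `prompt_tokens_details.cached_tokens`, `cacheRead`, `noCache`, `cacheWrite`) follow the conventions named in this PR.

```typescript
// Hypothetical shapes mirroring the OpenAI-style usage payload Workers AI
// returns and the input-token split consumers read from the result usage.
interface WorkersAiUsage {
  prompt_tokens: number;
  completion_tokens: number;
  prompt_tokens_details?: { cached_tokens?: number };
}

interface InputTokens {
  total: number;
  cacheRead?: number;
  noCache?: number;
  cacheWrite?: number; // never set: the usage shape doesn't report writes
}

// Illustrative helper name; the real provider code may differ.
function mapInputTokens(usage: WorkersAiUsage): InputTokens {
  const cachedTokens = usage.prompt_tokens_details?.cached_tokens;
  if (cachedTokens === undefined) {
    // No details reported: leave the cache split undefined.
    return { total: usage.prompt_tokens };
  }
  return {
    total: usage.prompt_tokens,
    // cached_tokens is treated as cache reads.
    cacheRead: cachedTokens,
    // Clamp at 0 in case cached_tokens exceeds prompt_tokens.
    noCache: Math.max(usage.prompt_tokens - cachedTokens, 0),
  };
}
```

For example, a response reporting 100 prompt tokens with 60 cached would map to `cacheRead: 60, noCache: 40`; if the API ever reported more cached tokens than prompt tokens, `noCache` would clamp to 0 instead of going negative.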