fix(api): correct some Responses types #1391

Merged
merged 1 commit on Mar 18, 2025

2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
 configured_endpoints: 81
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-c8579861bc21d4d2155a5b9e8e7d54faee8083730673c4d32cbbe573d7fb4116.yml
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-f763c1a35c8b9b02f1e31b9b2e09e21f98bfe8413e5079c86cbb07da2dd7779b.yml
8 changes: 4 additions & 4 deletions src/resources/batches.ts
@@ -220,11 +220,11 @@ export interface BatchCreateParams {
 
   /**
    * The endpoint to be used for all requests in the batch. Currently
-   * `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
-   * Note that `/v1/embeddings` batches are also restricted to a maximum of 50,000
-   * embedding inputs across all requests in the batch.
+   * `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
+   * are supported. Note that `/v1/embeddings` batches are also restricted to a
+   * maximum of 50,000 embedding inputs across all requests in the batch.
    */
-  endpoint: '/v1/chat/completions' | '/v1/embeddings' | '/v1/completions';
+  endpoint: '/v1/responses' | '/v1/chat/completions' | '/v1/embeddings' | '/v1/completions';
 
   /**
    * The ID of an uploaded file that contains requests for the new batch.
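
For context, a minimal sketch of what the widened `endpoint` union enables: submitting a batch of `/v1/responses` requests. The JSONL file name is illustrative; `files.create` and `batches.create` are the SDK's existing methods:

```ts
import fs from 'node:fs';
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  // Upload a JSONL file where each line is a /v1/responses request.
  const inputFile = await client.files.create({
    file: fs.createReadStream('responses_requests.jsonl'), // illustrative file name
    purpose: 'batch',
  });

  // '/v1/responses' is now accepted by the endpoint union above.
  const batch = await client.batches.create({
    completion_window: '24h',
    endpoint: '/v1/responses',
    input_file_id: inputFile.id,
  });

  console.log(batch.id, batch.status);
}

main();
```
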
18 changes: 12 additions & 6 deletions src/resources/chat/completions/completions.ts
@@ -377,10 +377,13 @@ export interface ChatCompletionChunk {
   /**
    * An optional field that will only be present when you set
    * `stream_options: {"include_usage": true}` in your request. When present, it
-   * contains a null value except for the last chunk which contains the token usage
-   * statistics for the entire request.
+   * contains a null value **except for the last chunk** which contains the token
+   * usage statistics for the entire request.
+   *
+   * **NOTE:** If the stream is interrupted or cancelled, you may not receive the
+   * final usage chunk which contains the total token usage for the request.
    */
-  usage?: CompletionsAPI.CompletionUsage | null;
+  usage?: CompletionsAPI.CompletionUsage;
 }
 
 export namespace ChatCompletionChunk {
@@ -551,7 +554,7 @@ export namespace ChatCompletionContentPart {
       /**
        * The name of the file, used when passing the file to the model as a string.
        */
-      file_name?: string;
+      filename?: string;
     }
   }
 }
@@ -930,8 +933,11 @@ export interface ChatCompletionStreamOptions {
   /**
    * If set, an additional chunk will be streamed before the `data: [DONE]` message.
    * The `usage` field on this chunk shows the token usage statistics for the entire
-   * request, and the `choices` field will always be an empty array. All other chunks
-   * will also include a `usage` field, but with a null value.
+   * request, and the `choices` field will always be an empty array.
+   *
+   * All other chunks will also include a `usage` field, but with a null value.
+   * **NOTE:** If the stream is interrupted, you may not receive the final usage
+   * chunk which contains the total token usage for the request.
    */
   include_usage?: boolean;
 }
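
Since `usage` on `ChatCompletionChunk` is now typed `CompletionUsage | undefined` rather than `CompletionUsage | null | undefined`, a plain truthiness check picks out the final usage chunk. A minimal sketch (the model name is illustrative):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const stream = await client.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative model
    messages: [{ role: 'user', content: 'Say hello' }],
    stream: true,
    stream_options: { include_usage: true },
  });

  let usage: OpenAI.CompletionUsage | undefined;
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta.content ?? '');
    if (chunk.usage) usage = chunk.usage; // only the final chunk carries usage
  }

  // Per the NOTE above, this can stay undefined if the stream was interrupted.
  console.log('\ntotal tokens:', usage?.total_tokens ?? 'unknown');
}

main();
```
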
26 changes: 21 additions & 5 deletions src/resources/responses/responses.ts
@@ -1298,11 +1298,6 @@ export interface ResponseFunctionCallArgumentsDoneEvent {
  * for more information.
  */
 export interface ResponseFunctionToolCall {
-  /**
-   * The unique ID of the function tool call.
-   */
-  id: string;
-
   /**
    * A JSON string of the arguments to pass to the function.
    */
@@ -1323,6 +1318,11 @@ export interface ResponseFunctionToolCall {
    */
   type: 'function_call';
 
+  /**
+   * The unique ID of the function tool call.
+   */
+  id?: string;
+
   /**
    * The status of the item. One of `in_progress`, `completed`, or `incomplete`.
    * Populated when items are returned via API.
@@ -2241,6 +2241,11 @@ export interface ResponseUsage {
    */
   input_tokens: number;
 
+  /**
+   * A detailed breakdown of the input tokens.
+   */
+  input_tokens_details: ResponseUsage.InputTokensDetails;
+
   /**
    * The number of output tokens.
    */
@@ -2258,6 +2263,17 @@
 }
 
 export namespace ResponseUsage {
+  /**
+   * A detailed breakdown of the input tokens.
+   */
+  export interface InputTokensDetails {
+    /**
+     * The number of tokens that were retrieved from the cache.
+     * [More on prompt caching](https://platform.openai.com/docs/guides/prompt-caching).
+     */
+    cached_tokens: number;
+  }
+
   /**
    * A detailed breakdown of the output tokens.
    */
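
To see both Responses changes in one place (required `input_tokens_details` on usage, optional `id` on function tool calls), a hedged sketch; the model and prompt are illustrative:

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  const response = await client.responses.create({
    model: 'gpt-4o', // illustrative model
    input: 'What is the capital of France?',
  });

  // `input_tokens_details` is now always present on ResponseUsage, so cached
  // token accounting needs no extra narrowing beyond the usage check itself.
  if (response.usage) {
    console.log('input tokens:', response.usage.input_tokens);
    console.log('cached tokens:', response.usage.input_tokens_details.cached_tokens);
  }

  // `id` is now optional on function tool calls, so guard before relying on it.
  for (const item of response.output) {
    if (item.type === 'function_call') {
      console.log('function call id:', item.id ?? '(not populated)');
    }
  }
}

main();
```
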
4 changes: 2 additions & 2 deletions src/resources/shared.ts
@@ -171,10 +171,10 @@ export interface Reasoning {
    * supported values are `low`, `medium`, and `high`. Reducing reasoning effort can
    * result in faster responses and fewer tokens used on reasoning in a response.
    */
-  effort: ReasoningEffort | null;
+  effort?: ReasoningEffort | null;
 
   /**
-   * **o-series models only**
+   * **computer_use_preview only**
    *
    * A summary of the reasoning performed by the model. This can be useful for
    * debugging and understanding the model's reasoning process. One of `concise` or
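
With `effort` now optional, a `reasoning` object no longer has to spell out an effort level. A sketch assuming a reasoning-capable model such as `o3-mini` (the model choice is an assumption, not part of this change):

```ts
import OpenAI from 'openai';

const client = new OpenAI();

async function main() {
  // Previously `effort` was required on Reasoning; both calls now type-check.
  const explicit = await client.responses.create({
    model: 'o3-mini', // illustrative model
    input: 'Walk through 17 * 24 step by step.',
    reasoning: { effort: 'low' },
  });

  const defaulted = await client.responses.create({
    model: 'o3-mini',
    input: 'Walk through 17 * 24 step by step.',
    reasoning: {}, // effort omitted; the server default applies
  });

  console.log(explicit.output_text);
  console.log(defaulted.output_text);
}

main();
```
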
4 changes: 2 additions & 2 deletions tests/api-resources/batches.test.ts
@@ -12,7 +12,7 @@ describe('resource batches', () => {
   test('create: only required params', async () => {
     const responsePromise = client.batches.create({
       completion_window: '24h',
-      endpoint: '/v1/chat/completions',
+      endpoint: '/v1/responses',
       input_file_id: 'input_file_id',
     });
     const rawResponse = await responsePromise.asResponse();
@@ -27,7 +27,7 @@ describe('resource batches', () => {
   test('create: required and optional params', async () => {
     const response = await client.batches.create({
       completion_window: '24h',
-      endpoint: '/v1/chat/completions',
+      endpoint: '/v1/responses',
       input_file_id: 'input_file_id',
       metadata: { foo: 'string' },
     });