Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Name | Type | Description | Notes |
---|---|---|---|
id | String | A unique identifier for the chat completion. Each chunk has the same ID. | |
choices | List&lt;CreateChatCompletionStreamResponseChoicesInner&gt; | A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`. | |
created | Integer | The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp. | |
model | String | The model used to generate the completion. | |
serviceTier | ServiceTierEnum | The service tier used for processing the request. This field is only included if the `service_tier` parameter is specified in the request. | [optional] |
systemFingerprint | String | This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. | [optional] |
_object | ObjectEnum | The object type, which is always `chat.completion.chunk`. | |
usage | CreateChatCompletionStreamResponseUsage | Usage statistics; only present on the final chunk when `stream_options: {"include_usage": true}` is set. | [optional] |
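The fields above map directly onto the JSON of each streamed chunk. As a rough illustration only, and not the generated client's own API, the sketch below parses a single chunk with Jackson and reads the properties listed in the table; the example JSON values, the class name, and the use of Jackson are assumptions.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ChunkFieldsExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical single chunk from a streamed chat completion (illustrative only).
        String chunkJson = "{"
                + "\"id\":\"chatcmpl-123\","
                + "\"object\":\"chat.completion.chunk\","
                + "\"created\":1700000000,"
                + "\"model\":\"gpt-4o-mini\","
                + "\"choices\":[{\"index\":0,\"delta\":{\"content\":\"Hello\"},\"finish_reason\":null}]"
                + "}";

        JsonNode chunk = new ObjectMapper().readTree(chunkJson);

        // id and created are identical across all chunks of one completion.
        String id = chunk.get("id").asText();
        long created = chunk.get("created").asLong();

        // The object field is always "chat.completion.chunk" for this model.
        String objectType = chunk.get("object").asText();

        // choices may hold several elements when n > 1, and may be empty on the
        // final chunk when stream_options: {"include_usage": true} is set.
        int choiceCount = chunk.get("choices").size();

        // usage is optional; it only appears on the last chunk when requested.
        boolean hasUsage = chunk.hasNonNull("usage");

        System.out.printf("%s %s created=%d choices=%d usage=%b%n",
                id, objectType, created, choiceCount, hasUsage);
    }
}
```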
Enum: ServiceTierEnum (possible values of `serviceTier`)

Name | Value |
---|---|
SCALE | "scale" |
DEFAULT | "default" |
Enum: ObjectEnum (possible values of `_object`)

Name | Value |
---|---|
CHAT_COMPLETION_CHUNK | "chat.completion.chunk" |
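As a sketch of how these two enum values might gate processing of a parsed chunk, the snippet below checks the object type and reads the optional service tier; the field names follow the JSON shown above, while the surrounding code is illustrative rather than the generated client's API.

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ChunkEnumExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical chunk including the optional service_tier field (illustrative only).
        String chunkJson = "{\"id\":\"chatcmpl-123\",\"object\":\"chat.completion.chunk\","
                + "\"created\":1700000000,\"model\":\"gpt-4o-mini\","
                + "\"service_tier\":\"default\",\"choices\":[]}";

        JsonNode chunk = new ObjectMapper().readTree(chunkJson);

        // ObjectEnum: the object field is always "chat.completion.chunk" for this model.
        if (!"chat.completion.chunk".equals(chunk.path("object").asText())) {
            throw new IllegalStateException("Not a chat completion chunk");
        }

        // ServiceTierEnum: "scale" or "default"; only included when the
        // service_tier parameter was specified in the request.
        String serviceTier = chunk.hasNonNull("service_tier")
                ? chunk.get("service_tier").asText()
                : "(not included)";
        System.out.println("service tier: " + serviceTier);
    }
}
```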