Class: OpenAI::Models::Chat::ChatCompletionChunk

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/chat/chat_completion_chunk.rb

Defined Under Namespace

Modules: ServiceTier
Classes: Choice

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ void

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 365

Instance Attribute Details

#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>

A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 19

required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionChunk::Choice] }
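Because each chunk's choices carry only a delta, a client typically concatenates the delta content across chunks to reconstruct the full message. The sketch below is illustrative only: plain hashes stand in for ChatCompletionChunk instances so the accumulation logic can be shown without a live stream.

```ruby
# Illustrative sketch: hashes mimic the shape of streamed chunks.
chunks = [
  { choices: [{ index: 0, delta: { role: "assistant", content: "Hel" } }] },
  { choices: [{ index: 0, delta: { content: "lo!" } }] },
  { choices: [] } # the final chunk may be empty when include_usage is set
]

content = +""
chunks.each do |chunk|
  chunk[:choices].each do |choice|
    # Deltas may omit :content (e.g. the role-only first delta), so guard with to_s.
    content << choice[:delta][:content].to_s
  end
end

content # => "Hello!"
```

The same pattern applies per choice index when n is greater than 1; accumulate into a separate buffer keyed by choice[:index].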

#created ⇒ Integer

The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 26

required :created, Integer

#id ⇒ String

A unique identifier for the chat completion. Each chunk has the same ID.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 11

required :id, String

#model ⇒ String

The model used to generate the completion.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 32

required :model, String

#object ⇒ Symbol, :"chat.completion.chunk"

The object type, which is always chat.completion.chunk.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 38

required :object, const: :"chat.completion.chunk"

#service_tier ⇒ Symbol, ...

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier. Contact sales to learn more about Priority processing.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 60

optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true

#system_fingerprint ⇒ String?

This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 68

optional :system_fingerprint, String

#usage ⇒ OpenAI::Models::CompletionUsage?

An optional field that will only be present when you set stream_options: {"include_usage": true} in your request. When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request.

NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.



# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 80

optional :usage, -> { OpenAI::CompletionUsage }, nil?: true
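Since usage is nil on every chunk except the last, a consumer must filter for the one populated value and tolerate its absence if the stream was interrupted. A minimal sketch, again using hashes in place of chunk objects:

```ruby
# Sketch: only the final chunk carries usage when
# stream_options: { include_usage: true } was set on the request.
chunks = [
  { choices: [{ index: 0, delta: { content: "Hi" } }], usage: nil },
  { choices: [],
    usage: { prompt_tokens: 5, completion_tokens: 1, total_tokens: 6 } }
]

# Collect the single non-nil usage value, if the stream completed.
usage = chunks.map { |c| c[:usage] }.compact.last
total = usage && usage[:total_tokens] # nil if the stream was cut off early
```

Guarding on usage being present (rather than indexing the last chunk directly) keeps the code correct when the final usage chunk never arrives.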