Class: OpenAI::Models::Responses::Response

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/responses/response.rb


Defined Under Namespace

Modules: Instructions, ServiceTier, ToolChoice, Truncation
Classes: IncompleteDetails

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, inspect, #inspect, new_coerce_state, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(id:, created_at:, error:, incomplete_details:, instructions:, metadata:, model:, output:, parallel_tool_calls:, temperature:, tool_choice:, tools:, top_p:, background: nil, max_output_tokens: nil, max_tool_calls: nil, previous_response_id: nil, prompt: nil, reasoning: nil, service_tier: nil, status: nil, text: nil, top_logprobs: nil, truncation: nil, usage: nil, user: nil, object: :response) ⇒ void

Some parameter documentation has been truncated; see OpenAI::Models::Responses::Response for more details.


# File 'lib/openai/models/responses/response.rb', line 298
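
In practice you rarely construct a Response by hand; the SDK returns one from the Responses API and deserializes it into this class. A minimal sketch, assuming the openai gem's client interface, an OPENAI_API_KEY environment variable, and an illustrative model name:

require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# The SDK deserializes the HTTP response into an
# OpenAI::Models::Responses::Response.
response = client.responses.create(model: "gpt-4o", input: "Say hello.")

response.id     # => "resp_..."
response.status # => :completed (usually)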

Instance Attribute Details

#background ⇒ Boolean?

Whether to run the model response in the background. Learn more.

Returns:

  • (Boolean, nil)

# File 'lib/openai/models/responses/response.rb', line 140

optional :background, OpenAI::Internal::Type::Boolean, nil?: true

#created_at ⇒ Float

Unix timestamp (in seconds) of when this Response was created.

Returns:

  • (Float)


# File 'lib/openai/models/responses/response.rb', line 20

required :created_at, Float

#error ⇒ OpenAI::Models::Responses::ResponseError?

An error object returned when the model fails to generate a Response.



# File 'lib/openai/models/responses/response.rb', line 26

required :error, -> { OpenAI::Responses::ResponseError }, nil?: true

#id ⇒ String

Unique identifier for this Response.

Returns:

  • (String)


# File 'lib/openai/models/responses/response.rb', line 14

required :id, String

#incomplete_details ⇒ OpenAI::Models::Responses::Response::IncompleteDetails?

Details about why the response is incomplete.



# File 'lib/openai/models/responses/response.rb', line 32

required :incomplete_details, -> { OpenAI::Responses::Response::IncompleteDetails }, nil?: true

#instructions ⇒ String, ...

A system (or developer) message inserted into the model's context.

When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses, as in the sketch below.



# File 'lib/openai/models/responses/response.rb', line 42

required :instructions, union: -> { OpenAI::Responses::Response::Instructions }, nil?: true
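
A sketch of that swap, reusing the hypothetical client from the constructor example above (instructions are not inherited through previous_response_id, so each turn states its own):

first = client.responses.create(
  model: "gpt-4o",
  instructions: "Talk like a pirate.",
  input: "Are semicolons optional in JavaScript?"
)

# The pirate instruction does not carry over; this turn is terse instead.
second = client.responses.create(
  model: "gpt-4o",
  previous_response_id: first.id,
  instructions: "Answer tersely.",
  input: "Thanks!"
)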

#max_output_tokens ⇒ Integer?

An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/responses/response.rb', line 148

optional :max_output_tokens, Integer, nil?: true

#max_tool_calls ⇒ Integer?

The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/responses/response.rb', line 157

optional :max_tool_calls, Integer, nil?: true

#metadata ⇒ Hash{Symbol=>String}?

Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.

Returns:

  • (Hash{Symbol=>String}, nil)


# File 'lib/openai/models/responses/response.rb', line 53

required :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
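
For instance (illustrative keys; note that the SDK returns metadata keys as symbols, per the type above):

response = client.responses.create(
  model: "gpt-4o",
  input: "Summarize our Q3 goals.",
  metadata: { ticket: "OPS-1204", team: "growth" }
)

response.metadata # => {ticket: "OPS-1204", team: "growth"}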

#model ⇒ String, ...

Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. Refer to the model guide to browse and compare available models.



# File 'lib/openai/models/responses/response.rb', line 63

required :model, union: -> { OpenAI::ResponsesModel }

#object ⇒ Symbol, :response

The object type of this resource - always set to response.

Returns:

  • (Symbol, :response)


# File 'lib/openai/models/responses/response.rb', line 69

required :object, const: :response

#output ⇒ Array<OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseOutputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseOutputItem::LocalShellCall, OpenAI::Models::Responses::ResponseOutputItem::McpCall, OpenAI::Models::Responses::ResponseOutputItem::McpListTools, OpenAI::Models::Responses::ResponseOutputItem::McpApprovalRequest>

An array of content items generated by the model.

  • The length and order of items in the output array is dependent on the model's response.
  • Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs (see the sketch below).


# File 'lib/openai/models/responses/response.rb', line 81

required :output, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::ResponseOutputItem] }
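
A sketch of walking that union safely, assuming a response from the client above (item types follow the union in the signature; the :function_call branch is illustrative):

response.output.each do |item|
  case item.type
  when :message
    # Assistant messages hold content blocks; pick out the text ones.
    item.content.each do |block|
      puts block.text if block.type == :output_text
    end
  when :function_call
    puts "tool call: #{item.name}(#{item.arguments})"
  end
end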

#parallel_tool_calls ⇒ Boolean

Whether to allow the model to run tool calls in parallel.

Returns:

  • (Boolean)

# File 'lib/openai/models/responses/response.rb', line 87

required :parallel_tool_calls, OpenAI::Internal::Type::Boolean

#previous_response_id ⇒ String?

The unique ID of the previous response to the model. Use this to create multi-turn conversations. Learn more about conversation state.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/response.rb', line 165

optional :previous_response_id, String, nil?: true

#prompt ⇒ OpenAI::Models::Responses::ResponsePrompt?

Reference to a prompt template and its variables. Learn more.



# File 'lib/openai/models/responses/response.rb', line 172

optional :prompt, -> { OpenAI::Responses::ResponsePrompt }, nil?: true

#prompt_cache_key ⇒ String?

Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/response.rb', line 180

optional :prompt_cache_key, String

#reasoning ⇒ OpenAI::Models::Reasoning?

o-series models only

Configuration options for reasoning models.

Returns:

  • (OpenAI::Models::Reasoning, nil)

# File 'lib/openai/models/responses/response.rb', line 189

optional :reasoning, -> { OpenAI::Reasoning }, nil?: true

#safety_identifier ⇒ String?

A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The ID should be a string that uniquely identifies each user. We recommend hashing their username or email address to avoid sending us any identifying information. Learn more.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/response.rb', line 199

optional :safety_identifier, String

#service_tier ⇒ Symbol, ...

Specifies the processing type used for serving the request.

  • If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
  • If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
  • If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier. Contact sales to learn more about Priority processing.
  • When not set, the default behavior is 'auto'.

When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.



# File 'lib/openai/models/responses/response.rb', line 221

optional :service_tier, enum: -> { OpenAI::Responses::Response::ServiceTier }, nil?: true

#status ⇒ Symbol, ...

The status of the response generation. One of completed, failed, in_progress, cancelled, queued, or incomplete.



# File 'lib/openai/models/responses/response.rb', line 228

optional :status, enum: -> { OpenAI::Responses::ResponseStatus }
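
A sketch of branching on the status (symbol values as listed above; the incomplete_details.reason and error.message accessors are assumed from the related classes documented for this namespace):

case response.status
when :completed
  puts response.output_text
when :incomplete
  warn "stopped early: #{response.incomplete_details&.reason}"
when :failed
  warn response.error&.message
end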

#temperature ⇒ Float?

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

Returns:

  • (Float, nil)


# File 'lib/openai/models/responses/response.rb', line 96

required :temperature, Float, nil?: true

#text ⇒ OpenAI::Models::Responses::ResponseTextConfig?

Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more.



# File 'lib/openai/models/responses/response.rb', line 238

optional :text, -> { OpenAI::Responses::ResponseTextConfig }

#tool_choice ⇒ Symbol, ...

How the model should select which tool (or tools) to use when generating a response. See the tools parameter to see how to specify which tools the model can call.



# File 'lib/openai/models/responses/response.rb', line 104

required :tool_choice, union: -> { OpenAI::Responses::Response::ToolChoice }

#tools ⇒ Array<OpenAI::Models::Responses::FunctionTool, OpenAI::Models::Responses::FileSearchTool, OpenAI::Models::Responses::ComputerTool, OpenAI::Models::Responses::Tool::Mcp, OpenAI::Models::Responses::Tool::CodeInterpreter, OpenAI::Models::Responses::Tool::ImageGeneration, OpenAI::Models::Responses::Tool::LocalShell, OpenAI::Models::Responses::WebSearchTool>

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

The two categories of tools you can provide the model are:

  • Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
  • Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code. Learn more about function calling.


# File 'lib/openai/models/responses/response.rb', line 123

required :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Responses::Tool] }

#top_logprobs ⇒ Integer?

An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/responses/response.rb', line 245

optional :top_logprobs, Integer, nil?: true

#top_p ⇒ Float?

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

Returns:

  • (Float, nil)


# File 'lib/openai/models/responses/response.rb', line 133

required :top_p, Float, nil?: true

#truncation ⇒ Symbol, ...

The truncation strategy to use for the model response.

  • auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.
  • disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.


# File 'lib/openai/models/responses/response.rb', line 257

optional :truncation, enum: -> { OpenAI::Responses::Response::Truncation }, nil?: true

#usage ⇒ OpenAI::Models::Responses::ResponseUsage?

Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.



# File 'lib/openai/models/responses/response.rb', line 264

optional :usage, -> { OpenAI::Responses::ResponseUsage }

#user ⇒ String?

Deprecated.

This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations. A stable identifier for your end-users. Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.

Returns:

  • (String, nil)


# File 'lib/openai/models/responses/response.rb', line 276

optional :user, String

Class Method Details

.values ⇒ Array<Symbol>

Returns:

  • (Array<Symbol>)


# File 'lib/openai/models/responses/response.rb', line 382

.variants ⇒ Array(Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp)



# File 'lib/openai/models/responses/response.rb', line 441

Instance Method Details

#output_text ⇒ String

Convenience property that aggregates all output_text items from the output list.

If no output_text content blocks exist, then an empty string is returned.

Returns:

  • (String)


# File 'lib/openai/models/responses/response.rb', line 283

def output_text
  texts = []

  output.each do |item|
    # Only assistant message items carry text content blocks.
    next unless item.type == :message

    item.content.each do |content|
      # Collect just the :output_text blocks, skipping refusals etc.
      if content.type == :output_text
        texts << content.text
      end
    end
  end

  texts.join
end
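
Usage, reusing the hypothetical client from the constructor example:

response = client.responses.create(model: "gpt-4o", input: "One word: hello?")
puts response.output_text # every :output_text block joined, or "" if none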