Class: Aws::BedrockRuntime::Types::InferenceConfiguration
- Inherits: Struct (Object → Struct → Aws::BedrockRuntime::Types::InferenceConfiguration)
- Includes: Structure
- Defined in: lib/aws-sdk-bedrockruntime/types.rb
Overview
Base inference parameters to pass to a model in a call to [Converse][1] or [ConverseStream][2]. For more information, see [Inference parameters for foundation models][3].
If you need to pass additional parameters that the model supports, use the `additionalModelRequestFields` request field in the call to `Converse` or `ConverseStream`. For more information, see [Model parameters][3].
[1]: docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html
[2]: docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_ConverseStream.html
[3]: docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html
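As a sketch of how these parameters are typically supplied (the model ID, region, and message below are illustrative assumptions, not taken from this page), an `inference_config` hash whose keys mirror this type's attributes can be passed to `Converse` through the Ruby SDK:

```ruby
# require "aws-sdk-bedrockruntime"  # assumed available when making real calls

# Base inference parameters as a plain hash; each key maps to an
# attribute of Aws::BedrockRuntime::Types::InferenceConfiguration.
inference_config = {
  max_tokens: 512,          # cap on tokens in the generated response
  temperature: 0.5,         # lower values favor higher-probability options
  top_p: 0.9,               # nucleus-sampling cutoff
  stop_sequences: ["END"]   # generation stops at the first match
}

# Hypothetical call sketch (needs AWS credentials and a real model ID):
# client = Aws::BedrockRuntime::Client.new(region: "us-east-1")
# response = client.converse(
#   model_id: "anthropic.claude-3-haiku-20240307-v1:0",
#   messages: [{ role: "user", content: [{ text: "Hello" }] }],
#   inference_config: inference_config
# )
```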
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #max_tokens ⇒ Integer
  The maximum number of tokens to allow in the generated response.
- #stop_sequences ⇒ Array<String>
  A list of stop sequences.
- #temperature ⇒ Float
  The likelihood of the model selecting higher-probability options while generating a response.
- #top_p ⇒ Float
  The percentage of most-likely candidates that the model considers for the next token.
Instance Attribute Details
#max_tokens ⇒ Integer
The maximum number of tokens to allow in the generated response. The default value is the maximum allowed value for the model that you are using. For more information, see [Inference parameters for foundation models][1].
[1]: docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html
# File 'lib/aws-sdk-bedrockruntime/types.rb', line 1469

class InferenceConfiguration < Struct.new(
  :max_tokens,
  :temperature,
  :top_p,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
#stop_sequences ⇒ Array<String>
A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
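The stop-sequence behavior can be pictured with a toy sketch (this is illustrative plain Ruby, not SDK code): the service halts generation at the first stop sequence it produces, which is roughly equivalent to this client-side truncation.

```ruby
# Toy illustration: cut the text at the earliest occurrence of any
# stop sequence, keeping only the text generated before it.
def truncate_at_stop(text, stop_sequences)
  indexes = stop_sequences.map { |seq| text.index(seq) }.compact
  return text if indexes.empty?
  text[0...indexes.min]
end

truncate_at_stop("step one END step two", ["END"]) # => "step one "
```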
#temperature ⇒ Float
The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.
The default value is the default value for the model that you are using. For more information, see [Inference parameters for foundation models][1].
[1]: docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html
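The effect of temperature can be sketched with a toy example (illustrative plain Ruby, not SDK code): temperature rescales logits before the softmax, so a lower value concentrates probability on the highest-scoring option.

```ruby
# Toy illustration: softmax over logits divided by temperature.
# Lower temperature -> sharper distribution over the top option.
def softmax_with_temperature(logits, temperature)
  scaled = logits.map { |l| l / temperature }
  max = scaled.max
  exps = scaled.map { |s| Math.exp(s - max) } # subtract max for stability
  total = exps.sum
  exps.map { |e| e / total }
end

cool = softmax_with_temperature([2.0, 1.0, 0.0], 0.5)
warm = softmax_with_temperature([2.0, 1.0, 0.0], 2.0)
# The top option's probability is larger at the lower temperature.
```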
#top_p ⇒ Float
The percentage of most-likely candidates that the model considers for the next token. For example, if you choose a value of 0.8 for `topP`, the model selects from the top 80% of the probability distribution of tokens that could be next in the sequence.
The default value is the default value for the model that you are using. For more information, see [Inference parameters for foundation models][1].
[1]: docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html