Class: Aws::Bedrock::Types::TextInferenceConfig

- Inherits: Struct
  - Object
  - Struct
  - Aws::Bedrock::Types::TextInferenceConfig
- Includes: Aws::Structure
- Defined in: lib/aws-sdk-bedrock/types.rb
Overview
The configuration details for text generation using a language model via the `RetrieveAndGenerate` function.
Constant Summary

SENSITIVE = []
Instance Attribute Summary

- #max_tokens ⇒ Integer
  The maximum number of tokens to generate in the output text.
- #stop_sequences ⇒ Array<String>
  A list of character sequences that, if generated, cause the model to stop generating further tokens.
- #temperature ⇒ Float
  Controls the randomness of the text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options.
- #top_p ⇒ Float
  A probability distribution threshold that controls which tokens the model considers as candidates for the next token.
Instance Attribute Details
#max_tokens ⇒ Integer
The maximum number of tokens to generate in the output text. The minimum of 0 and maximum of 65536 stated in the API model are arbitrary placeholder limits; do not rely on them. For the actual limits, consult the documentation for your specific model.
```ruby
# File 'lib/aws-sdk-bedrock/types.rb', line 6830

class TextInferenceConfig < Struct.new(
  :temperature,
  :top_p,
  :max_tokens,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
```
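As a sketch of how these fields are typically assembled client-side before being passed to a `RetrieveAndGenerate` request (the helper name and default values below are hypothetical, not part of the SDK), a plain Ruby hash matching this struct's members might be built like this:

```ruby
# Hypothetical helper (not SDK code): build a hash whose keys mirror the
# members of TextInferenceConfig. Defaults here are illustrative only;
# valid ranges depend on your specific model.
def build_text_inference_config(max_tokens: 512, temperature: 0.5,
                                top_p: 0.9, stop_sequences: [])
  raise ArgumentError, "max_tokens must be positive" unless max_tokens.positive?

  {
    max_tokens: max_tokens,
    temperature: temperature,
    top_p: top_p,
    stop_sequences: stop_sequences
  }
end

config = build_text_inference_config(max_tokens: 1024,
                                     stop_sequences: ["\n\nHuman:"])
```

The resulting hash can then be nested wherever the SDK expects a `text_inference_config` in the request parameters.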
#stop_sequences ⇒ Array<String>
A list of character sequences that, if generated, cause the model to stop generating further tokens. The minimum length of 1 and maximum length of 1000 stated in the API model are arbitrary placeholder limits; do not rely on them. For the actual limits, consult the documentation for your specific model.
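To illustrate what a stop sequence does (this is illustrative Ruby, not SDK or model internals), generation halts at the first occurrence of any configured sequence; in this sketch the matched sequence itself is dropped from the output, though whether a model includes it can vary:

```ruby
# Illustrative sketch (not SDK code): truncate generated text at the first
# occurrence of any stop sequence.
def apply_stop_sequences(text, stop_sequences)
  stop_sequences.each do |seq|
    idx = text.index(seq)
    return text[0...idx] if idx
  end
  text
end

output = apply_stop_sequences("Answer: 42\nObservation: done", ["\nObservation:"])
# output is "Answer: 42"
```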
#temperature ⇒ Float
Controls the randomness of the text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options. A lower temperature value (e.g. 0.2 or 0.3) makes model outputs more deterministic or predictable, while a higher temperature (e.g. 0.8 or 0.9) makes the outputs more creative or unpredictable.
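The effect described above can be sketched numerically (illustrative math, not SDK code): temperature divides the logits before the softmax, so a low value sharpens the distribution toward the top token and a high value flattens it:

```ruby
# Illustrative sketch (not SDK code): temperature-scaled softmax over logits.
def softmax_with_temperature(logits, temperature)
  scaled = logits.map { |l| l / temperature }
  max = scaled.max                         # subtract max for numerical stability
  exps = scaled.map { |l| Math.exp(l - max) }
  total = exps.sum
  exps.map { |e| e / total }
end

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.2)  # low temperature: near-deterministic
hot  = softmax_with_temperature(logits, 0.9)  # high temperature: more spread out
# cold[0] > hot[0]: low temperature concentrates probability on the top token
```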
#top_p ⇒ Float
A probability distribution threshold that controls which tokens the model considers as candidates for the next token. When generating the next token, the model considers only the tokens whose cumulative probability falls within the top `top_p` fraction of the distribution.
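A minimal sketch of this filtering, assuming the common "nucleus sampling" definition of keeping the smallest set of tokens whose cumulative probability reaches `top_p` (illustrative Ruby, not SDK or model code):

```ruby
# Illustrative sketch (not SDK code): top-p (nucleus) filtering keeps the
# highest-probability tokens until their cumulative probability reaches top_p.
def top_p_filter(probs, top_p)
  sorted = probs.sort_by { |_token, p| -p }  # most probable first
  kept = []
  cumulative = 0.0
  sorted.each do |token, p|
    kept << token
    cumulative += p
    break if cumulative >= top_p
  end
  kept
end

probs = { "the" => 0.5, "a" => 0.3, "an" => 0.15, "xyzzy" => 0.05 }
candidates = top_p_filter(probs, 0.9)
# candidates are ["the", "a", "an"]: 0.5 + 0.3 + 0.15 >= 0.9, so "xyzzy" is cut
```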