Class: Aws::Bedrock::Types::TextInferenceConfig

Inherits:
  Struct
    • Object
Includes:
  Structure
Defined in:
  lib/aws-sdk-bedrock/types.rb

Overview

The configuration details for text generation using a language model via the `RetrieveAndGenerate` function.
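A minimal sketch of populating this structure; the numeric values and the stop sequence below are illustrative assumptions, not model-specific recommendations. SDK operations that accept this type generally also accept an equivalent hash with the same member names.

require 'aws-sdk-bedrock'

# Illustrative values only; consult your model's documentation for real limits.
params = {
  temperature: 0.2,                # favor more deterministic output
  top_p: 0.9,                      # nucleus-sampling threshold
  max_tokens: 512,                 # cap on generated output tokens
  stop_sequences: ["\n\nHuman:"]   # halt generation when this string appears
}

# The same data expressed as the typed struct documented on this page.
config = Aws::Bedrock::Types::TextInferenceConfig.new(params)
config.max_tokens # => 512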

Constant Summary

SENSITIVE =
[]

Instance Attribute Summary

  • #max_tokens ⇒ Integer
  • #stop_sequences ⇒ Array<String>
  • #temperature ⇒ Float
  • #top_p ⇒ Float

Instance Attribute Details

#max_tokens ⇒ Integer

The maximum number of tokens to generate in the output text. Do not use the minimum of 0 or the maximum of 65536; these limit values are arbitrary placeholders, so consult the limits defined by your specific model for the actual values.

Returns:

  • (Integer)


# File 'lib/aws-sdk-bedrock/types.rb', line 6830

class TextInferenceConfig < Struct.new(
  :temperature,
  :top_p,
  :max_tokens,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
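As a sketch only, one way to keep a requested value inside a model's real limit; MODEL_MAX_OUTPUT_TOKENS is a hypothetical number standing in for whatever your model's documentation specifies.

# Hypothetical per-model cap; the 0 and 65536 bounds above are placeholders.
MODEL_MAX_OUTPUT_TOKENS = 4_096
requested = 10_000
config = { max_tokens: [requested, MODEL_MAX_OUTPUT_TOKENS].min } # clamps to 4096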

#stop_sequences ⇒ Array<String>

A list of character sequences that, if generated, cause the model to stop generating further tokens. Do not use a minimum length of 1 or a maximum length of 1000; these limit values are arbitrary placeholders, so consult the limits defined by your specific model for the actual values.

Returns:

  • (Array<String>)


# File 'lib/aws-sdk-bedrock/types.rb', line 6830

class TextInferenceConfig < Struct.new(
  :temperature,
  :top_p,
  :max_tokens,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
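An illustrative sketch; the marker strings below are arbitrary examples, not sequences required by any model.

# Generation halts as soon as either marker string is produced.
config = { stop_sequences: ["\n\nHuman:", "</answer>"] }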

#temperature ⇒ Float

Controls the randomness of text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options. A lower temperature value (e.g. 0.2 or 0.3) makes model outputs more deterministic and predictable, while a higher temperature (e.g. 0.8 or 0.9) makes the outputs more creative and unpredictable.

Returns:

  • (Float)


# File 'lib/aws-sdk-bedrock/types.rb', line 6830

class TextInferenceConfig < Struct.new(
  :temperature,
  :top_p,
  :max_tokens,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
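A sketch contrasting the two ends of the range described above; the values are illustrative.

# Lower temperature: more predictable, repeatable wording.
deterministic = { temperature: 0.2 }
# Higher temperature: more varied, exploratory wording.
creative = { temperature: 0.9 }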

#top_p ⇒ Float

A probability distribution threshold that controls which tokens the model considers as candidates for the next token. The model considers only the top p% of the probability distribution when generating the next token.

Returns:

  • (Float)


# File 'lib/aws-sdk-bedrock/types.rb', line 6830

class TextInferenceConfig < Struct.new(
  :temperature,
  :top_p,
  :max_tokens,
  :stop_sequences)
  SENSITIVE = []
  include Aws::Structure
end
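A sketch of a typical nucleus-sampling setting; 0.9 is an illustrative value that limits sampling to the most likely tokens covering roughly 90% of the probability mass.

# Only the highest-probability tokens that together reach ~90% of the
# distribution remain candidates for the next token.
config = { top_p: 0.9 }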