Class: Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InferenceParameter

Inherits:
  Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dialogflow_v2beta1/classes.rb,
lib/google/apis/dialogflow_v2beta1/representations.rb

Overview

The parameters of inference.

Instance Attribute Summary

  • #max_output_tokens ⇒ Fixnum
  • #temperature ⇒ Float
  • #top_k ⇒ Fixnum
  • #top_p ⇒ Float

Instance Method Summary

  • #update!(**args) ⇒ Object

Constructor Details

#initialize(**args) ⇒ GoogleCloudDialogflowV2beta1InferenceParameter

Returns a new instance of GoogleCloudDialogflowV2beta1InferenceParameter.



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14821

def initialize(**args)
   update!(**args)
end
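
A minimal construction sketch (the keyword names mirror the attribute readers below; the values shown are arbitrary illustrations, not recommended defaults):

require 'google/apis/dialogflow_v2beta1'

params = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InferenceParameter.new(
  max_output_tokens: 256,   # illustrative value
  temperature: 0.2,         # illustrative value
  top_k: 20,                # illustrative value
  top_p: 0.9                # illustrative value
)

Keywords are passed straight through to #update!, so only the keys you supply are set; omitted attributes remain nil.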

Instance Attribute Details

#max_output_tokens ⇒ Fixnum

Optional. Maximum number of output tokens for the generator. Corresponds to the JSON property maxOutputTokens

Returns:

  • (Fixnum)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14787

def max_output_tokens
  @max_output_tokens
end

#temperature ⇒ Float

Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0. Corresponds to the JSON property temperature

Returns:

  • (Float)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14794

def temperature
  @temperature
end

#top_k ⇒ Fixnum

Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token selection step, the top K tokens with the highest probabilities are sampled; the tokens are then further filtered based on topP, and the final token is selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40. A conceptual sketch of this filtering follows the attribute reader below. Corresponds to the JSON property topK

Returns:

  • (Fixnum)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14807

def top_k
  @top_k
end
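
The filtering step described above can be illustrated in plain Ruby. This is a conceptual sketch of top-k selection only, not the service's actual sampling code:

# Conceptual sketch: keep only the k most probable tokens before
# temperature sampling picks among them.
def top_k_filter(token_probs, k)
  token_probs.sort_by { |_token, prob| -prob }.first(k).to_h
end

vocab = { 'a' => 0.5, 'b' => 0.3, 'c' => 0.15, 'd' => 0.05 }
top_k_filter(vocab, 3) # => {"a"=>0.5, "b"=>0.3, "c"=>0.15}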

#top_p ⇒ Float

Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see also the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and will not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95. A conceptual sketch reproducing this example follows the attribute reader below. Corresponds to the JSON property topP

Returns:

  • (Float)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14819

def top_p
  @top_p
end
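
The worked example in the description (A = 0.3, B = 0.2, C = 0.1 with a top-p of 0.5) can be reproduced with a conceptual sketch of nucleus filtering; again, this is illustrative and not the service implementation:

# Conceptual sketch: walk tokens from most to least probable, keeping
# them until the cumulative probability reaches the top-p value.
def top_p_filter(token_probs, top_p)
  cumulative = 0.0
  token_probs
    .sort_by { |_token, prob| -prob }
    .take_while { |_token, prob| keep = cumulative < top_p; cumulative += prob; keep }
    .to_h
end

top_p_filter({ 'A' => 0.3, 'B' => 0.2, 'C' => 0.1 }, 0.5) # => {"A"=>0.3, "B"=>0.2}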

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 14826

def update!(**args)
  @max_output_tokens = args[:max_output_tokens] if args.key?(:max_output_tokens)
  @temperature = args[:temperature] if args.key?(:temperature)
  @top_k = args[:top_k] if args.key?(:top_k)
  @top_p = args[:top_p] if args.key?(:top_p)
end
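
An illustrative call, continuing the construction sketch shown under the constructor above; only the keys that are passed are changed:

params.update!(temperature: 0.7, top_p: 0.95)
params.temperature       # => 0.7
params.max_output_tokens # => 256 (unchanged)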