Class: Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/dialogflow_v2beta1/classes.rb,
lib/google/apis/dialogflow_v2beta1/representations.rb

Overview

Instructs the speech recognizer on how to process the audio content.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudDialogflowV2beta1InputAudioConfig

Returns a new instance of GoogleCloudDialogflowV2beta1InputAudioConfig.



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12602

def initialize(**args)
   update!(**args)
end
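
A minimal construction sketch. The keyword arguments map one-to-one to the attributes documented below; the specific values shown ('AUDIO_ENCODING_LINEAR_16', 16 kHz, 'en-US') are illustrative choices, not requirements of this class.

require 'google/apis/dialogflow_v2beta1'

# Build an input audio config with the three required fields.
audio_config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding:    'AUDIO_ENCODING_LINEAR_16', # illustrative encoding value
  sample_rate_hertz: 16_000,
  language_code:     'en-US'
)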

Instance Attribute Details

#audio_encoding ⇒ String

Required. Audio encoding of the audio content to process. Corresponds to the JSON property audioEncoding

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12488

def audio_encoding
  @audio_encoding
end

#barge_in_config ⇒ Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1BargeInConfig

Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready to receive the responses for the current request. Barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled as two phases:

  • No barge-in phase: comes first; speech detection should not be carried out during it.
  • Barge-in phase: follows the no barge-in phase; the API starts speech detection during it and may inform the client that an utterance has been detected. A no-speech event is not expected in this phase.

The client provides this configuration as the durations of those two phases, measured from the start of the input audio. On the timeline, the no barge-in period is followed by the barge-in period, which is followed by the normal period. A no-speech event is a response with END_OF_UTTERANCE without any transcript following it. Corresponds to the JSON property bargeInConfig


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12511

def barge_in_config
  @barge_in_config
end
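
A sketch of wiring a barge-in configuration into the audio config. The duration field names on GoogleCloudDialogflowV2beta1BargeInConfig (no_barge_in_duration, total_duration) and the '5s'/'30s' duration strings are assumptions here; check that class's documentation for the exact fields.

# Assumed field names and duration format; verify against the BargeInConfig docs.
barge_in = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1BargeInConfig.new(
  no_barge_in_duration: '5s',  # assumed: no speech detection for the first 5 seconds
  total_duration: '30s'        # assumed: barge-in phase ends 30 seconds into the audio
)

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 16_000,
  language_code: 'en-US',
  barge_in_config: barge_in
)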

#disable_no_speech_recognized_event ⇒ Boolean Also known as: disable_no_speech_recognized_event?

Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If false and recognition doesn't return any result, trigger the NO_SPEECH_RECOGNIZED event to the Dialogflow agent. Corresponds to the JSON property disableNoSpeechRecognizedEvent

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12518

def disable_no_speech_recognized_event
  @disable_no_speech_recognized_event
end

#enable_automatic_punctuation ⇒ Boolean Also known as: enable_automatic_punctuation?

Enable automatic punctuation option at the speech backend. Corresponds to the JSON property enableAutomaticPunctuation

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12524

def enable_automatic_punctuation
  @enable_automatic_punctuation
end

#enable_word_info ⇒ Boolean Also known as: enable_word_info?

If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information. Corresponds to the JSON property enableWordInfo

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12533

def enable_word_info
  @enable_word_info
end
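
A sketch enabling the two recognition options above alongside the required fields; both are plain booleans on this class.

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 16_000,
  language_code: 'en-US',
  enable_automatic_punctuation: true,  # punctuate transcripts at the speech backend
  enable_word_info: true               # include per-word start/end time offsets
)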

#language_code ⇒ String

Required. The language of the supplied audio. Dialogflow does not do translations. See Language Support for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. Corresponds to the JSON property languageCode

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12543

def language_code
  @language_code
end

#model ⇒ String

Which Speech model to select for the given request. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, a model is auto-selected based on the parameters in the InputAudioConfig. If the enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:

  • phone_call (best for Agent Assist and telephony)
  • latest_short (best for Dialogflow non-telephony)
  • command_and_search (best for very short utterances and commands)

Corresponds to the JSON property model

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12558

def model
  @model
end

#model_variant ⇒ String

Which variant of the Speech model to use. Corresponds to the JSON property modelVariant

Returns:

  • (String)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12563

def model_variant
  @model_variant
end
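
A sketch selecting a model and variant for telephony audio. The model name phone_call comes from the list above; 'USE_ENHANCED' is assumed here as the variant value and should be checked against the SpeechModelVariant documentation.

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 8_000,       # telephony audio is commonly 8 kHz
  language_code: 'en-US',
  model: 'phone_call',            # from the model list above
  model_variant: 'USE_ENHANCED'   # assumed enum value; see SpeechModelVariant docs
)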

#phrase_hints ⇒ Array<String>

A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See the Cloud Speech documentation for more details. This field is deprecated. Please use speech_contexts instead. If you specify both phrase_hints and speech_contexts, Dialogflow will treat the phrase_hints as a single additional SpeechContext. Corresponds to the JSON property phraseHints

Returns:

  • (Array<String>)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12573

def phrase_hints
  @phrase_hints
end

#sample_rate_hertz ⇒ Fixnum

Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details. Corresponds to the JSON property sampleRateHertz

Returns:

  • (Fixnum)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12580

def sample_rate_hertz
  @sample_rate_hertz
end

#single_utterance ⇒ Boolean Also known as: single_utterance?

If false (default), recognition does not cease until the client closes the stream. If true, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance. Corresponds to the JSON property singleUtterance

Returns:

  • (Boolean)


# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12592

def single_utterance
  @single_utterance
end
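
A sketch of a config for streaming recognition of a single utterance; the single_utterance? alias shown above reads the same value as a boolean predicate.

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 16_000,
  language_code: 'en-US',
  single_utterance: true  # recognition stops after the first detected utterance
)
config.single_utterance?  # => true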

#speech_contexts ⇒ Array<Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1SpeechContext>

Context information to assist speech recognition. See the Cloud Speech documentation for more details. Corresponds to the JSON property speechContexts



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12600

def speech_contexts
  @speech_contexts
end
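
Because phrase_hints is deprecated in favor of speech_contexts, here is a sketch of biasing recognition with a SpeechContext. The phrases and boost attributes are assumed to exist on GoogleCloudDialogflowV2beta1SpeechContext, and the boost value is illustrative; verify against that class's documentation.

context = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1SpeechContext.new(
  phrases: ['account balance', 'wire transfer'],  # terms to recognize with higher likelihood
  boost: 10.0                                     # illustrative boost value
)

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  sample_rate_hertz: 16_000,
  language_code: 'en-US',
  speech_contexts: [context]
)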

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/dialogflow_v2beta1/classes.rb', line 12607

def update!(**args)
  @audio_encoding = args[:audio_encoding] if args.key?(:audio_encoding)
  @barge_in_config = args[:barge_in_config] if args.key?(:barge_in_config)
  @disable_no_speech_recognized_event = args[:disable_no_speech_recognized_event] if args.key?(:disable_no_speech_recognized_event)
  @enable_automatic_punctuation = args[:enable_automatic_punctuation] if args.key?(:enable_automatic_punctuation)
  @enable_word_info = args[:enable_word_info] if args.key?(:enable_word_info)
  @language_code = args[:language_code] if args.key?(:language_code)
  @model = args[:model] if args.key?(:model)
  @model_variant = args[:model_variant] if args.key?(:model_variant)
  @phrase_hints = args[:phrase_hints] if args.key?(:phrase_hints)
  @sample_rate_hertz = args[:sample_rate_hertz] if args.key?(:sample_rate_hertz)
  @single_utterance = args[:single_utterance] if args.key?(:single_utterance)
  @speech_contexts = args[:speech_contexts] if args.key?(:speech_contexts)
end
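
A small usage sketch: update! mutates an existing instance in place, accepting the same keyword arguments as the constructor and only touching the keys you pass.

config = Google::Apis::DialogflowV2beta1::GoogleCloudDialogflowV2beta1InputAudioConfig.new(
  audio_encoding: 'AUDIO_ENCODING_LINEAR_16',
  language_code: 'en-US'
)

# Later, set the sample rate and turn on word-level timing info.
config.update!(sample_rate_hertz: 16_000, enable_word_info: true)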