Class: Google::Cloud::Dialogflow::V2::StreamingRecognitionResult
- Inherits: Object
- Extended by: Protobuf::MessageExts::ClassMethods
- Includes: Protobuf::MessageExts
- Defined in: proto_docs/google/cloud/dialogflow/v2/session.rb
Overview
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series of results. Each result may contain a transcript value. A transcript represents a portion of the utterance. While the recognizer is processing audio, transcript values may be interim values or finalized values. Once a transcript is finalized, the is_final value is set to true and processing continues for the next transcript.
If `StreamingDetectIntentRequest.query_input.audio_config.single_utterance` was true and the recognizer has completed processing audio, the message_type value is set to `END_OF_SINGLE_UTTERANCE` and the following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the finalized transcript values received for the series of results.
In the following example, single utterance is enabled. In the case where single utterance is not enabled, result 7 would not occur.
Num | transcript              | message_type            | is_final
--- | ----------------------- | ----------------------- | --------
1   | "tube"                  | TRANSCRIPT              | false
2   | "to be a"                | TRANSCRIPT              | false
3   | "to be"                 | TRANSCRIPT              | false
4   | "to be or not to be"    | TRANSCRIPT              | true
5   | "that's"                | TRANSCRIPT              | false
6   | "that is"               | TRANSCRIPT              | false
7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
8   | " that is the question" | TRANSCRIPT              | true
Concatenating the finalized transcripts with is_final set to true, the complete utterance becomes "to be or not to be that is the question".
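The concatenation rule above can be sketched in Ruby. The result objects here are plain Struct stand-ins mirroring the table, not real Dialogflow response messages:

```ruby
# Stand-in for StreamingRecognitionResult: only the three fields from the table.
Result = Struct.new(:transcript, :message_type, :is_final, keyword_init: true)

results = [
  Result.new(transcript: "tube",                   message_type: :TRANSCRIPT, is_final: false),
  Result.new(transcript: "to be a",                message_type: :TRANSCRIPT, is_final: false),
  Result.new(transcript: "to be",                  message_type: :TRANSCRIPT, is_final: false),
  Result.new(transcript: "to be or not to be",     message_type: :TRANSCRIPT, is_final: true),
  Result.new(transcript: "that's",                 message_type: :TRANSCRIPT, is_final: false),
  Result.new(transcript: "that is",                message_type: :TRANSCRIPT, is_final: false),
  Result.new(transcript: nil,                      message_type: :END_OF_SINGLE_UTTERANCE, is_final: nil),
  Result.new(transcript: " that is the question",  message_type: :TRANSCRIPT, is_final: true)
]

# Keep only finalized transcripts, in order, and join them.
utterance = results
  .select { |r| r.message_type == :TRANSCRIPT && r.is_final }
  .map(&:transcript)
  .join
# utterance == "to be or not to be that is the question"
```

Note that interim transcripts (rows 1-3, 5-6) are revisions of the portions finalized in rows 4 and 8, so they are discarded rather than concatenated.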
Defined Under Namespace
Modules: MessageType
Instance Attribute Summary collapse
-
#confidence ⇒ ::Float
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
-
#is_final ⇒ ::Boolean
If false, the StreamingRecognitionResult represents an interim result that may change.
-
#language_code ⇒ ::String
Detected language code for the transcript.
-
#message_type ⇒ ::Google::Cloud::Dialogflow::V2::StreamingRecognitionResult::MessageType
Type of the result message.
-
#speech_end_offset ⇒ ::Google::Protobuf::Duration
Time offset of the end of this Speech recognition result relative to the beginning of the audio.
-
#speech_word_info ⇒ ::Array<::Google::Cloud::Dialogflow::V2::SpeechWordInfo>
Word-specific information for the words recognized by Speech in transcript.
-
#transcript ⇒ ::String
Transcript text representing the words that the user spoke.
Instance Attribute Details
#confidence ⇒ ::Float
Returns The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set.
This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
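Given those caveats, a defensive reader can treat the 0.0 sentinel as "not set" and only consult confidence on final results. This is an illustrative sketch; `result` is any object responding to #is_final and #confidence, not necessarily the real message:

```ruby
# Returns the confidence only when it is meaningful: the result is final and
# the value is not the 0.0 "not set" sentinel. Returns nil otherwise.
def usable_confidence(result)
  return nil unless result.is_final && result.confidence > 0.0
  result.confidence
end
```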
# File 'proto_docs/google/cloud/dialogflow/v2/session.rb', line 620

class StreamingRecognitionResult
  include ::Google::Protobuf::MessageExts
  extend ::Google::Protobuf::MessageExts::ClassMethods

  # Type of the response message.
  module MessageType
    # Not specified. Should never be used.
    MESSAGE_TYPE_UNSPECIFIED = 0

    # Message contains a (possibly partial) transcript.
    TRANSCRIPT = 1

    # This event indicates that the server has detected the end of the user's
    # speech utterance and expects no additional inputs.
    # Therefore, the server will not process additional audio (although it may
    # subsequently return additional results). The client should stop sending
    # additional audio data, half-close the gRPC connection, and wait for any
    # additional results until the server closes the gRPC connection. This
    # message is only sent if `single_utterance` was set to `true`, and is not
    # used otherwise.
    END_OF_SINGLE_UTTERANCE = 2
  end
end
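A client typically dispatches on message_type while draining the stream. The sketch below assumes `stream` is any Enumerable of objects responding to #message_type, #transcript, and #is_final (the real gRPC stream carries other response fields not modeled here, and the actual audio half-close is only indicated by a comment):

```ruby
# Consume a stream of recognition results: collect finalized transcripts and
# note whether the server signaled the end of the single utterance.
def consume_results(stream)
  finals = []
  end_detected = false
  stream.each do |result|
    case result.message_type
    when :TRANSCRIPT
      # Interim transcripts may still change; only keep finalized ones.
      finals << result.transcript if result.is_final
    when :END_OF_SINGLE_UTTERANCE
      # At this point the client should stop sending audio and half-close the
      # request stream, but keep reading until the server closes its side.
      end_detected = true
    end
  end
  [finals.join, end_detected]
end
```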
#is_final ⇒ ::Boolean
Returns If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
#language_code ⇒ ::String
Returns Detected language code for the transcript.
#message_type ⇒ ::Google::Cloud::Dialogflow::V2::StreamingRecognitionResult::MessageType
Returns Type of the result message.
#speech_end_offset ⇒ ::Google::Protobuf::Duration
Returns Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
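The offset is a Google::Protobuf::Duration, which encodes a span as whole seconds plus nanoseconds. A common convenience is converting it to a Float number of seconds; shown here with a plain Struct stand-in for the protobuf type:

```ruby
# Stand-in mirroring Google::Protobuf::Duration's seconds/nanos fields.
Duration = Struct.new(:seconds, :nanos, keyword_init: true)

# Convert a Duration-shaped value into fractional seconds.
def duration_to_seconds(d)
  d.seconds + d.nanos / 1_000_000_000.0
end

offset = Duration.new(seconds: 2, nanos: 500_000_000)
duration_to_seconds(offset)  # => 2.5
```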
#speech_word_info ⇒ ::Array<::Google::Cloud::Dialogflow::V2::SpeechWordInfo>
Returns Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
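When word info is enabled, each entry carries per-word details. The sketch below assumes each entry exposes #word, #start_offset, and #end_offset (with Duration-like offsets); these are Struct stand-ins for SpeechWordInfo, not the real messages:

```ruby
# Stand-ins: a word entry and a Duration-shaped offset (seconds, nanos).
WordInfo = Struct.new(:word, :start_offset, :end_offset, keyword_init: true)
Offset   = Struct.new(:seconds, :nanos)

to_secs = ->(o) { o.seconds + o.nanos / 1_000_000_000.0 }

words = [
  WordInfo.new(word: "to", start_offset: Offset.new(0, 0),
               end_offset: Offset.new(0, 300_000_000)),
  WordInfo.new(word: "be", start_offset: Offset.new(0, 300_000_000),
               end_offset: Offset.new(0, 600_000_000))
]

# Render each word with its time span in seconds.
timeline = words.map do |w|
  format("%s [%.1f-%.1fs]", w.word, to_secs.call(w.start_offset), to_secs.call(w.end_offset))
end
# timeline == ["to [0.0-0.3s]", "be [0.3-0.6s]"]
```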
#transcript ⇒ ::String
Returns Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.