Class: Aws::TranscribeStreamingService::Types::StartCallAnalyticsStreamTranscriptionRequest
- Inherits: Struct
  - Object
  - Struct
  - Aws::TranscribeStreamingService::Types::StartCallAnalyticsStreamTranscriptionRequest
- Includes: Aws::Structure
- Defined in: lib/aws-sdk-transcribestreamingservice/types.rb
Overview
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #audio_stream ⇒ Types::AudioStream
  An encoded stream of audio blobs.
- #content_identification_type ⇒ String
  Labels all personally identifiable information (PII) identified in your transcript.
- #content_redaction_type ⇒ String
  Redacts all personally identifiable information (PII) identified in your transcript.
- #enable_partial_results_stabilization ⇒ Boolean
  Enables partial result stabilization for your transcription.
- #language_code ⇒ String
  Specify the language code that represents the language spoken in your audio.
- #language_model_name ⇒ String
  Specify the name of the custom language model that you want to use when processing your transcription.
- #media_encoding ⇒ String
  Specify the encoding of your input audio.
- #media_sample_rate_hertz ⇒ Integer
  The sample rate of the input audio (in hertz).
- #partial_results_stability ⇒ String
  Specify the level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).
- #pii_entity_types ⇒ String
  Specify which types of personally identifiable information (PII) you want to redact in your transcript.
- #session_id ⇒ String
  Specify a name for your Call Analytics transcription session.
- #vocabulary_filter_method ⇒ String
  Specify how you want your vocabulary filter applied to your transcript.
- #vocabulary_filter_name ⇒ String
  Specify the name of the custom vocabulary filter that you want to use when processing your transcription.
- #vocabulary_name ⇒ String
  Specify the name of the custom vocabulary that you want to use when processing your transcription.
Instance Attribute Details
#audio_stream ⇒ Types::AudioStream
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see [Transcribing streaming audio][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html
# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1101

class StartCallAnalyticsStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :session_id,
  :audio_stream,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types)
  SENSITIVE = []
  include Aws::Structure
end
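In practice you don't assign audio_stream directly; audio blobs are pushed through an input event stream handler while results arrive on an output handler, and this struct is built for you from the params hash passed to the streaming operation. The following is a minimal sketch only: it assumes the AsyncClient, the EventStreams::AudioStream and EventStreams::CallAnalyticsTranscriptResultStream classes, and the signal_*/on_* method names follow the pattern the SDK generates for its other streaming operations, and the region, file name, and 4096-byte chunk size are placeholders. Verify the generated names against your installed SDK version.

require "aws-sdk-transcribestreamingservice"

# Assumed generated classes and method names; check them in your SDK docs.
client = Aws::TranscribeStreamingService::AsyncClient.new(region: "us-east-1")

input_stream  = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
output_stream = Aws::TranscribeStreamingService::EventStreams::CallAnalyticsTranscriptResultStream.new

# Handle transcription results as they arrive (event name assumed).
output_stream.on_utterance_event_event do |event|
  puts event.transcript
end

async_resp = client.start_call_analytics_stream_transcription(
  language_code: "en-US",
  media_sample_rate_hertz: 8_000,
  media_encoding: "pcm",
  input_event_stream_handler: input_stream,
  output_event_stream_handler: output_stream
)

# Real-time Call Analytics also expects a configuration event (channel
# definitions) before any audio; omitted here for brevity.

# Stream the audio as a sequence of encoded blobs, then close the stream.
# "call-audio.raw" is a placeholder for a signed 16-bit little-endian PCM source.
File.open("call-audio.raw", "rb") do |file|
  while (chunk = file.read(4096))
    input_stream.signal_audio_event_event(audio_chunk: chunk)
  end
end
input_stream.signal_end_stream

async_resp.wait # block until the streaming session completes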
#content_identification_type ⇒ String
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in `PiiEntityTypes` is flagged upon complete transcription of an audio segment. If you don't include `PiiEntityTypes` in your request, all PII is identified.
You can't set `ContentIdentificationType` and `ContentRedactionType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see [Redacting or identifying personally identifiable information][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html
#content_redaction_type ⇒ String
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in `PiiEntityTypes` is redacted upon complete transcription of an audio segment. If you don't include `PiiEntityTypes` in your request, all PII is redacted.
You can't set `ContentRedactionType` and `ContentIdentificationType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see [Redacting or identifying personally identifiable information][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html
#enable_partial_results_stabilization ⇒ Boolean
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see [Partial-result stabilization][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization
#language_code ⇒ String
Specify the language code that represents the language spoken in your audio.
For a list of languages supported with real-time Call Analytics, refer to the [Supported languages][1] table.
[1]: docs.aws.amazon.com/transcribe/latest/dg/supported-languages.html
#language_model_name ⇒ String
Specify the name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don’t match, the custom language model isn’t applied. There are no errors or warnings associated with a language mismatch.
For more information, see [Custom language models][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/custom-language-models.html
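As a hedged illustration, the custom language model and `language_code` must agree; the model name below is a placeholder, not a real resource.

# Fragment of the request params hash. "MyCallCenterLM" is hypothetical and,
# in this example, must be an en-US model or it is silently not applied.
params = {
  language_code: "en-US",
  language_model_name: "MyCallCenterLM" # model names are case sensitive
}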
#media_encoding ⇒ String
Specify the encoding of your input audio. Supported formats are:
- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)
For more information, see [Media formats][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/how-input.html#how-input-audio
#media_sample_rate_hertz ⇒ Integer
The sample rate of the input audio (in hertz). Low-quality audio, such as telephone audio, is typically around 8,000 Hz. High-quality audio typically ranges from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
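For example, telephone-quality audio would typically pair 8,000 Hz with PCM; this is a sketch, and the rate you specify must match the audio you actually send.

# Fragment of the request params hash for telephone-quality audio.
params = {
  media_encoding: "pcm",          # signed 16-bit little-endian PCM
  media_sample_rate_hertz: 8_000  # must match the sample rate of the audio stream
}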
#partial_results_stability ⇒ String
Specify the level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see [Partial-result stabilization][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization
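A minimal fragment showing the two stabilization settings together; the stability value only takes effect when the enable flag is set.

# Fragment of the request params hash enabling stabilized partial results.
params = {
  enable_partial_results_stabilization: true,
  partial_results_stability: "high" # trades a little accuracy for lower latency
}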
#pii_entity_types ⇒ String
Specify which types of personally identifiable information (PII) you want to redact in your transcript. You can include as many types as you'd like, or you can select `ALL`.
Values must be comma-separated and can include: `ADDRESS`, `BANK_ACCOUNT_NUMBER`, `BANK_ROUTING`, `CREDIT_DEBIT_CVV`, `CREDIT_DEBIT_EXPIRY`, `CREDIT_DEBIT_NUMBER`, `EMAIL`, `NAME`, `PHONE`, `PIN`, `SSN`, or `ALL`.
Note that if you include `PiiEntityTypes` in your request, you must also include `ContentIdentificationType` or `ContentRedactionType`.
If you include `ContentRedactionType` or `ContentIdentificationType` in your request, but do not include `PiiEntityTypes`, all PII is redacted or identified.
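As a sketch, redaction restricted to a few entity types; remember that `PiiEntityTypes` must be accompanied by one of the two content settings.

# Fragment of the request params hash redacting only selected PII types.
params = {
  content_redaction_type: "PII",                   # cannot be combined with content_identification_type
  pii_entity_types: "NAME,CREDIT_DEBIT_NUMBER,SSN" # comma-separated list, or "ALL"
}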
#session_id ⇒ String
Specify a name for your Call Analytics transcription session. If you don’t include this parameter in your request, Amazon Transcribe generates an ID and returns it in the response.
#vocabulary_filter_method ⇒ String
Specify how you want your vocabulary filter applied to your transcript.
To replace words with `***`, choose `mask`.
To delete words, choose `remove`.
To flag words without changing them, choose `tag`.
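A fragment pairing the method with a filter; the filter name is a placeholder and must refer to a custom vocabulary filter that already exists in your account.

# Fragment of the request params hash masking filtered words with ***.
params = {
  vocabulary_filter_name: "my-unwanted-words", # hypothetical filter name, case sensitive
  vocabulary_filter_method: "mask"             # or "remove" / "tag"
}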
#vocabulary_filter_name ⇒ String
Specify the name of the custom vocabulary filter that you want to use when processing your transcription. Note that vocabulary filter names are case sensitive.
If the language of the specified custom vocabulary filter doesn’t match the language identified in your media, the vocabulary filter is not applied to your transcription.
For more information, see [Using vocabulary filtering with unwanted words][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/vocabulary-filtering.html
#vocabulary_name ⇒ String
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.
If the language of the specified custom vocabulary doesn’t match the language identified in your media, the custom vocabulary is not applied to your transcription.
For more information, see [Custom vocabularies][1].
[1]: docs.aws.amazon.com/transcribe/latest/dg/custom-vocabulary.html