Class: Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeProcessorConfiguration
- Inherits: Struct
  - Object
  - Struct
  - Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeProcessorConfiguration
- Includes: Structure
- Defined in: lib/aws-sdk-chimesdkmediapipelines/types.rb
Overview
A structure that contains the configuration settings for an Amazon Transcribe processor.
<note markdown="1"> Calls to this API must include a `LanguageCode`, `IdentifyLanguage`, or `IdentifyMultipleLanguages` parameter. If you include more than one of those parameters, your transcription job fails.
</note>
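As a sketch only (the configuration name, role ARN, and stream ARN below are placeholders, not values from this page), an `AmazonTranscribeProcessorConfiguration` is typically supplied as a snake_case hash inside an `AmazonTranscribeProcessor` element, for example when calling `Aws::ChimeSDKMediaPipelines::Client#create_media_insights_pipeline_configuration`:

```ruby
require "aws-sdk-chimesdkmediapipelines"

# Sketch: build a media insights pipeline configuration whose first element is
# an Amazon Transcribe processor. All names and ARNs are placeholders.
client = Aws::ChimeSDKMediaPipelines::Client.new(region: "us-east-1")

client.create_media_insights_pipeline_configuration(
  media_insights_pipeline_configuration_name: "example-transcribe-config",        # placeholder
  resource_access_role_arn: "arn:aws:iam::111122223333:role/ExamplePipelineRole", # placeholder
  elements: [
    {
      type: "AmazonTranscribeProcessor",
      amazon_transcribe_processor_configuration: {
        language_code: "en-US",   # exactly one of language_code, identify_language,
                                  # or identify_multiple_languages may be set
        show_speaker_label: true
      }
    },
    {
      type: "KinesisDataStreamSink",
      kinesis_data_stream_sink_configuration: {
        insights_target: "arn:aws:kinesis:us-east-1:111122223333:stream/example-insights" # placeholder
      }
    }
  ]
)
```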
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #content_identification_type ⇒ String
  Labels all personally identifiable information (PII) identified in your transcript.
- #content_redaction_type ⇒ String
  Redacts all personally identifiable information (PII) identified in your transcript.
- #enable_partial_results_stabilization ⇒ Boolean
  Enables partial result stabilization for your transcription.
- #filter_partial_results ⇒ Boolean
  If true, `TranscriptEvents` with `IsPartial: true` are filtered out of the insights target.
- #identify_language ⇒ Boolean
  Turns language identification on or off.
- #identify_multiple_languages ⇒ Boolean
  Turns language identification on or off for multiple languages.
- #language_code ⇒ String
  The language code that represents the language spoken in your audio.
- #language_model_name ⇒ String
  The name of the custom language model that you want to use when processing your transcription.
- #language_options ⇒ String
  The language options for the transcription, such as automatic language detection.
- #partial_results_stability ⇒ String
  The level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).
- #pii_entity_types ⇒ String
  The types of personally identifiable information (PII) to redact from a transcript.
- #preferred_language ⇒ String
  The preferred language for the transcription.
- #show_speaker_label ⇒ Boolean
  Enables speaker partitioning (diarization) in your transcription output.
- #vocabulary_filter_method ⇒ String
  The vocabulary filtering method used in your Call Analytics transcription.
- #vocabulary_filter_name ⇒ String
  The name of the custom vocabulary filter that you specified in your Call Analytics request.
- #vocabulary_filter_names ⇒ String
  The names of the custom vocabulary filter or filters used during transcription.
- #vocabulary_name ⇒ String
  The name of the custom vocabulary that you specified in your Call Analytics request.
- #vocabulary_names ⇒ String
  The names of the custom vocabulary or vocabularies used during transcription.
Instance Attribute Details
#content_identification_type ⇒ String
Labels all personally identifiable information (PII) identified in your transcript.
Content identification is performed at the segment level; PII specified in `PiiEntityTypes` is flagged upon complete transcription of an audio segment.
You can't set `ContentIdentificationType` and `ContentRedactionType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see [Redacting or identifying personally identifiable information][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html
# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 423

class AmazonTranscribeProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :show_speaker_label,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :language_model_name,
  :filter_partial_results,
  :identify_language,
  :identify_multiple_languages,
  :language_options,
  :preferred_language,
  :vocabulary_names,
  :vocabulary_filter_names)
  SENSITIVE = []
  include Aws::Structure
end
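For illustration only (values are examples, not defaults), a minimal processor hash that flags PII rather than redacting it might look like the sketch below; remember that `ContentIdentificationType` and `ContentRedactionType` are mutually exclusive:

```ruby
# Sketch: flag (label) PII in the transcript without removing it.
# Adding content_redaction_type as well would cause a BadRequestException.
transcribe_processor = {
  language_code: "en-US",
  content_identification_type: "PII",
  pii_entity_types: "NAME,EMAIL,PHONE" # comma-separated subset, or "ALL"
}
```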
#content_redaction_type ⇒ String
Redacts all personally identifiable information (PII) identified in your transcript.
Content redaction is performed at the segment level; PII specified in `PiiEntityTypes` is redacted upon complete transcription of an audio segment.
You can't set `ContentRedactionType` and `ContentIdentificationType` in the same request. If you set both, your request returns a `BadRequestException`.
For more information, see [Redacting or identifying personally identifiable information][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html
#enable_partial_results_stabilization ⇒ Boolean
Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy.
For more information, see [Partial-result stabilization][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization
#filter_partial_results ⇒ Boolean
If true, `TranscriptEvents` with `IsPartial: true` are filtered out of the insights target.
#identify_language ⇒ Boolean
Turns language identification on or off.
#identify_multiple_languages ⇒ Boolean
Turns language identification on or off for multiple languages.
<note markdown="1"> Calls to this API must include a `LanguageCode`, `IdentifyLanguage`, or `IdentifyMultipleLanguages` parameter. If you include more than one of those parameters, your transcription job fails.
</note>
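As an illustrative sketch (the language choices are arbitrary), multi-language identification is typically paired with `language_options` and, optionally, `preferred_language`:

```ruby
# Sketch: detect among several candidate languages instead of fixing one.
# Only one of language_code, identify_language, or identify_multiple_languages
# may be supplied in a single configuration.
transcribe_processor = {
  identify_multiple_languages: true,
  language_options: "en-US,es-US",  # comma-separated candidate languages
  preferred_language: "en-US"       # optional hint when detection is ambiguous
}
```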
#language_code ⇒ String
The language code that represents the language spoken in your audio.
If you're unsure of the language spoken in your audio, consider using `IdentifyLanguage` to enable automatic language identification.
For a list of languages that real-time Call Analytics supports, see the [Supported languages table][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/supported-languages.html
#language_model_name ⇒ String
The name of the custom language model that you want to use when processing your transcription. Note that language model names are case sensitive.
The language of the specified language model must match the language code you specify in your transcription request. If the languages don’t match, the custom language model isn’t applied. There are no errors or warnings associated with a language mismatch.
For more information, see [Custom language models][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/custom-language-models.html
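A brief sketch, using a placeholder model name, of pairing a custom language model with a matching language code:

```ruby
# Sketch: apply a custom language model. The name is case sensitive, and the
# model's language must match language_code or the model is silently ignored.
transcribe_processor = {
  language_code: "en-US",
  language_model_name: "MyCustomLanguageModel" # placeholder name
}
```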
#language_options ⇒ String
The language options for the transcription, such as automatic language detection.
#partial_results_stability ⇒ String
The level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).
Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.
For more information, see [Partial-result stabilization][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization
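As a sketch (the stability level shown follows the values documented for Amazon Transcribe streaming and is not a recommendation), enabling stabilization and choosing a level might look like:

```ruby
# Sketch: reduce latency on partial results at some cost to accuracy.
# Higher stability settles partial results sooner; "low" is most accurate.
transcribe_processor = {
  language_code: "en-US",
  enable_partial_results_stabilization: true,
  partial_results_stability: "high",
  filter_partial_results: false # keep IsPartial: true events in the insights target
}
```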
#pii_entity_types ⇒ String
The types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you'd like, or you can select `ALL`.
To include `PiiEntityTypes` in your Call Analytics request, you must also include `ContentIdentificationType` or `ContentRedactionType`, but you can't include both.
Values must be comma-separated and can include: `ADDRESS`, `BANK_ACCOUNT_NUMBER`, `BANK_ROUTING`, `CREDIT_DEBIT_CVV`, `CREDIT_DEBIT_EXPIRY`, `CREDIT_DEBIT_NUMBER`, `EMAIL`, `NAME`, `PHONE`, `PIN`, `SSN`, or `ALL`.
If you leave this parameter empty, the default behavior is equivalent to `ALL`.
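For contrast with the identification example above, a hedged sketch of redaction limited to a few entity types (the list shown is arbitrary):

```ruby
# Sketch: redact, rather than merely flag, a narrowed set of PII entity types.
transcribe_processor = {
  language_code: "en-US",
  content_redaction_type: "PII",
  pii_entity_types: "SSN,CREDIT_DEBIT_NUMBER,BANK_ACCOUNT_NUMBER"
}
```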
#preferred_language ⇒ String
The preferred language for the transcription.
#show_speaker_label ⇒ Boolean
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see [Partitioning speakers (diarization)][1] in the *Amazon Transcribe Developer Guide*.
[1]: docs.aws.amazon.com/transcribe/latest/dg/diarization.html
#vocabulary_filter_method ⇒ String
The vocabulary filtering method used in your Call Analytics transcription.
#vocabulary_filter_name ⇒ String
The name of the custom vocabulary filter that you specified in your Call Analytics request.
Length Constraints: Minimum length of 1. Maximum length of 200.
#vocabulary_filter_names ⇒ String
The names of the custom vocabulary filter or filters used during transcription.
#vocabulary_name ⇒ String
The name of the custom vocabulary that you specified in your Call Analytics request.
Length Constraints: Minimum length of 1. Maximum length of 200.
#vocabulary_names ⇒ String
The names of the custom vocabulary or vocabularies used during transcription.
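Finally, a sketch (vocabulary and filter names are placeholders) of combining language identification with multiple custom vocabularies and vocabulary filters, which is when the plural `vocabulary_names` and `vocabulary_filter_names` fields apply:

```ruby
# Sketch: with language identification enabled, supply custom vocabularies and
# vocabulary filters as comma-separated names, one per candidate language.
transcribe_processor = {
  identify_language: true,
  language_options: "en-US,fr-CA",
  vocabulary_names: "en-vocabulary,fr-vocabulary", # placeholder names
  vocabulary_filter_names: "en-filter,fr-filter",  # placeholder names
  vocabulary_filter_method: "mask"                 # e.g. "remove", "mask", or "tag"
}
```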