Class: Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration

Inherits: Struct
Includes: Aws::Structure
Defined in: lib/aws-sdk-chimesdkmediapipelines/types.rb

Overview

A structure that contains the configuration settings for an Amazon Transcribe call analytics processor.
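As a quick orientation, here is a minimal sketch (not taken from the SDK reference) of building the configuration directly from its members; in client calls an equivalent plain hash is usually accepted in place of the typed struct. All attribute values are illustrative.

require 'aws-sdk-chimesdkmediapipelines'

# Build the processor configuration; each keyword matches an attribute
# documented below. Values are examples only.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  filter_partial_results: true,
  enable_partial_results_stabilization: true,
  partial_results_stability: "high"
)

config.language_code          #=> "en-US"
config.filter_partial_results #=> true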

Constant Summary

SENSITIVE =
[]

Instance Attribute Summary

Instance Attribute Details

#call_analytics_stream_categories ⇒ Array<String>

By default, all `CategoryEvents` are sent to the insights target. If this parameter is specified, only included categories are sent to the insights target.

Returns:

  • (Array<String>)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
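
For illustration only (the category names below are hypothetical), restricting the insights target to specific Call Analytics categories could look like this:

# Only CategoryEvents for these categories reach the insights target;
# omit the member to forward all CategoryEvents (the default).
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  call_analytics_stream_categories: ["escalation-detected", "greeting-compliance"]
)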

#content_identification_type ⇒ String

Labels all personally identifiable information (PII) identified in your transcript.

Content identification is performed at the segment level; PII specified in `PiiEntityTypes` is flagged upon complete transcription of an audio segment.

You can’t set `ContentIdentificationType` and `ContentRedactionType` in the same request. If you do, your request returns a `BadRequestException`.

For more information, see [Redacting or identifying personally identifiable information][1] in the *Amazon Transcribe Developer Guide*.

[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
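
A hedged sketch of enabling identification, assuming the `PII` value used by Amazon Transcribe; `ContentRedactionType` is left unset because the two settings are mutually exclusive:

# Label (rather than redact) PII in the transcript. ContentRedactionType is
# intentionally omitted; setting both causes a BadRequestException.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  content_identification_type: "PII",   # assumed valid value, per Amazon Transcribe
  pii_entity_types: "NAME,EMAIL,SSN"
)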

#content_redaction_type ⇒ String

Redacts all personally identifiable information (PII) identified in your transcript.

Content redaction is performed at the segment level; PII specified in `PiiEntityTypes` is redacted upon complete transcription of an audio segment.

You can’t set `ContentRedactionType` and `ContentIdentificationType` in the same request. If you do, your request returns a `BadRequestException`.

For more information, see [Redacting or identifying personally identifiable information][1] in the *Amazon Transcribe Developer Guide*.

[1]: docs.aws.amazon.com/transcribe/latest/dg/pii-redaction.html

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end

#enable_partial_results_stabilization ⇒ Boolean

Enables partial result stabilization for your transcription. Partial result stabilization can reduce latency in your output, but may impact accuracy. For more information, see [Partial-result stabilization][1] in the *Amazon Transcribe Developer Guide*.

[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization

Returns:

  • (Boolean)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
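
A small sketch pairing this flag with `PartialResultsStability` (described below); the "high" setting is an assumed example value:

# Stabilize partial results sooner at a slight cost in accuracy.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  enable_partial_results_stabilization: true,
  partial_results_stability: "high"   # assumed example; see #partial_results_stability
)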

#filter_partial_results ⇒ Boolean

If true, `UtteranceEvents` with `IsPartial: true` are filtered out of the insights target.

Returns:

  • (Boolean)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end

#language_code ⇒ String

The language code in the configuration.

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end

#language_model_name ⇒ String

Specifies the name of the custom language model to use when processing a transcription. Note that language model names are case sensitive.

The language of the specified language model must match the language code specified in the transcription request. If the languages don’t match, the custom language model isn’t applied. Language mismatches don’t generate errors or warnings.

For more information, see [Custom language models][1] in the *Amazon Transcribe Developer Guide*.

[1]: docs.aws.amazon.com/transcribe/latest/dg/custom-language-models.html

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
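
As a sketch (the model name is hypothetical), attaching a custom language model whose language matches `LanguageCode`:

# If MySupportCallsModel is not an en-US model, it is silently ignored:
# no error or warning is produced.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  language_model_name: "MySupportCallsModel"   # case sensitive, hypothetical name
)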

#partial_results_stability ⇒ String

Specifies the level of stability to use when you enable partial results stabilization (`EnablePartialResultsStabilization`).

Low stability provides the highest accuracy. High stability transcribes faster, but with slightly lower accuracy.

For more information, see [Partial-result stabilization][1] in the *Amazon Transcribe Developer Guide*.

[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html#streaming-partial-result-stabilization

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end

#pii_entity_types ⇒ String

Specifies the types of personally identifiable information (PII) to redact from a transcript. You can include as many types as you’d like, or you can select `ALL`.

To include `PiiEntityTypes` in your Call Analytics request, you must also include `ContentIdentificationType` or `ContentRedactionType`, but you can’t include both.

Values must be comma-separated and can include: `ADDRESS`, `BANK_ACCOUNT_NUMBER`, `BANK_ROUTING`, `CREDIT_DEBIT_CVV`, `CREDIT_DEBIT_EXPIRY`, `CREDIT_DEBIT_NUMBER`, `EMAIL`, `NAME`, `PHONE`, `PIN`, `SSN`, or `ALL`.

Length Constraints: Minimum length of 1. Maximum length of 300.

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
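
For example (a sketch; the chosen entity types are arbitrary), redacting a subset of PII types:

# Redact only card numbers, SSNs, and bank account numbers.
# ContentIdentificationType is omitted; it can't be combined with redaction.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  content_redaction_type: "PII",   # assumed valid value, per Amazon Transcribe
  pii_entity_types: "CREDIT_DEBIT_NUMBER,SSN,BANK_ACCOUNT_NUMBER"
)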

#post_call_analytics_settings ⇒ Types::PostCallAnalyticsSettings

The settings for a post-call analysis task in an analytics configuration.



# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
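
A sketch using the plain-hash form that client operations accept in place of typed structs; it assumes the `output_location` and `data_access_role_arn` members of Types::PostCallAnalyticsSettings, and the bucket and role ARN are placeholders.

processor_config = {
  language_code: "en-US",
  post_call_analytics_settings: {
    output_location: "s3://amzn-s3-demo-bucket/post-call-analytics/",           # placeholder bucket
    data_access_role_arn: "arn:aws:iam::111122223333:role/PostCallAccessRole"   # placeholder role
  }
}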

#vocabulary_filter_method ⇒ String

Specifies how to apply a vocabulary filter to a transcript.

To replace words with *******, choose `mask`.

To delete words, choose `remove`.

To flag words without changing them, choose `tag`.

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
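
For instance (the filter name is hypothetical), masking filtered words rather than removing or tagging them:

# Replace any word in the "profanity-filter" custom vocabulary filter with asterisks.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  vocabulary_filter_name: "profanity-filter",   # case sensitive, hypothetical name
  vocabulary_filter_method: "mask"              # or "remove" / "tag"
)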

#vocabulary_filter_name ⇒ String

Specifies the name of the custom vocabulary filter to use when processing a transcription. Note that vocabulary filter names are case sensitive.

If the language of the specified custom vocabulary filter doesn’t match the language identified in your media, the vocabulary filter is not applied to your transcription.

For more information, see [Using vocabulary filtering with unwanted words][1] in the *Amazon Transcribe Developer Guide*.

Length Constraints: Minimum length of 1. Maximum length of 200.

[1]: docs.aws.amazon.com/transcribe/latest/dg/vocabulary-filtering.html

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end

#vocabulary_name ⇒ String

Specifies the name of the custom vocabulary to use when processing a transcription. Note that vocabulary names are case sensitive.

If the language of the specified custom vocabulary doesn’t match the language identified in your media, the custom vocabulary is not applied to your transcription.

For more information, see [Custom vocabularies][1] in the *Amazon Transcribe Developer Guide*.

Length Constraints: Minimum length of 1. Maximum length of 200.

[1]: docs.aws.amazon.com/transcribe/latest/dg/custom-vocabulary.html

Returns:

  • (String)


# File 'lib/aws-sdk-chimesdkmediapipelines/types.rb', line 202

class AmazonTranscribeCallAnalyticsProcessorConfiguration < Struct.new(
  :language_code,
  :vocabulary_name,
  :vocabulary_filter_name,
  :vocabulary_filter_method,
  :language_model_name,
  :enable_partial_results_stabilization,
  :partial_results_stability,
  :content_identification_type,
  :content_redaction_type,
  :pii_entity_types,
  :filter_partial_results,
  :post_call_analytics_settings,
  :call_analytics_stream_categories)
  SENSITIVE = []
  include Aws::Structure
end
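
Finally, a sketch attaching a custom vocabulary (the vocabulary name is hypothetical):

# If "product-terms" is not an en-US vocabulary, it is silently skipped.
config = Aws::ChimeSDKMediaPipelines::Types::AmazonTranscribeCallAnalyticsProcessorConfiguration.new(
  language_code: "en-US",
  vocabulary_name: "product-terms"   # case sensitive, hypothetical name
)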