Class: Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest
- Inherits: Struct
  - Object
  - Struct
  - Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest
- Includes: Structure
- Defined in: lib/aws-sdk-transcribestreamingservice/types.rb
Overview
Constant Summary
- SENSITIVE = []
Instance Attribute Summary
- #audio_stream ⇒ Types::AudioStream
  An encoded stream of audio blobs.
- #content_identification_type ⇒ String
  Labels all personal health information (PHI) identified in your transcript.
- #enable_channel_identification ⇒ Boolean
  Enables channel identification in multi-channel audio.
- #language_code ⇒ String
  Specify the language code that represents the language spoken in your audio.
- #media_encoding ⇒ String
  Specify the encoding used for the input audio.
- #media_sample_rate_hertz ⇒ Integer
  The sample rate of the input audio (in hertz).
- #number_of_channels ⇒ Integer
  Specify the number of channels in your audio stream.
- #session_id ⇒ String
  Specify a name for your transcription session.
- #show_speaker_label ⇒ Boolean
  Enables speaker partitioning (diarization) in your transcription output.
- #specialty ⇒ String
  Specify the medical specialty contained in your audio.
- #type ⇒ String
  Specify the type of input audio.
- #vocabulary_name ⇒ String
  Specify the name of the custom vocabulary that you want to use when processing your transcription.
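These attributes correspond to the parameters accepted by the streaming client when starting a medical stream transcription. The sketch below is illustrative only: the attribute names come from this page, while the AsyncClient, the EventStreams classes, and the signal_*/on_* handler methods are assumed from the gem's generated client documentation and may vary by SDK version.

require "aws-sdk-transcribestreamingservice"

# Illustrative sketch, not the definitive API. Verify the handler classes and
# method names against the Client/AsyncClient documentation for your version.
client = Aws::TranscribeStreamingService::AsyncClient.new(region: "us-east-1")

input_stream  = Aws::TranscribeStreamingService::EventStreams::AudioStream.new
output_stream = Aws::TranscribeStreamingService::EventStreams::MedicalTranscriptResultStream.new

# Print each transcript segment as results arrive.
output_stream.on_transcript_event_event do |event|
  event.transcript.results.each { |r| puts r.alternatives.first&.transcript }
end

async_resp = client.start_medical_stream_transcription(
  language_code: "en-US",            # US English is the only supported language
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  specialty: "PRIMARYCARE",
  type: "DICTATION",
  input_event_stream_handler: input_stream,
  output_event_stream_handler: output_stream
)

# Send audio through input_stream (see #audio_stream below), then finish:
input_stream.signal_end_stream
async_resp.wait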
Instance Attribute Details
#audio_stream ⇒ Types::AudioStream
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see [Transcribing streaming audio][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html
# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1339

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
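In practice the audio stream is supplied through the SDK's input event stream handler rather than as an inline attribute value. A minimal sketch, assuming the input_stream handler and the generated signal_audio_event_event / signal_end_stream methods used in the request example above:

# Feed raw PCM to the service in small chunks, then close the stream.
# `input_stream` is the EventStreams::AudioStream handler passed to
# start_medical_stream_transcription in the earlier sketch.
File.open("clip.pcm", "rb") do |f|
  while (chunk = f.read(3_200))   # roughly 100 ms of 16 kHz mono 16-bit PCM
    input_stream.signal_audio_event_event(audio_chunk: chunk)
  end
end
input_stream.signal_end_stream    # no more audio will be sent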
#content_identification_type ⇒ String
Labels all personal health information (PHI) identified in your transcript.
Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.
For more information, see [Identifying personal health information (PHI) in a transcription].
#enable_channel_identification ⇒ Boolean
Enables channel identification in multi-channel audio.
Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.
If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.
If you include `EnableChannelIdentification` in your request, you must also include `NumberOfChannels`.
For more information, see [Transcribing multi-channel audio][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/channel-id.html
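As a hedged illustration of the pairing described above (parameter names are from this page; the values are examples only), a two-channel request carries both settings together:

# Example parameter set for two-channel audio. EnableChannelIdentification and
# NumberOfChannels must be supplied together, and 2 is the only valid count.
multichannel_params = {
  language_code: "en-US",
  media_sample_rate_hertz: 16_000,
  media_encoding: "pcm",
  specialty: "PRIMARYCARE",
  type: "CONVERSATION",
  enable_channel_identification: true,
  number_of_channels: 2
}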
#language_code ⇒ String
Specify the language code that represents the language spoken in your audio.
Amazon Transcribe Medical only supports US English (`en-US`).
#media_encoding ⇒ String
Specify the encoding used for the input audio. Supported formats are:
- FLAC
- OPUS-encoded audio in an Ogg container
- PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see [Media formats][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/how-input.html#how-input-audio
#media_sample_rate_hertz ⇒ Integer
The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.
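As an aside not taken from the API reference, the declared sample rate also fixes how many bytes of signed 16-bit PCM make up a given duration, which is useful when sizing the audio chunks you stream:

# Illustrative arithmetic only:
# bytes = sample_rate (samples/sec) * 2 bytes/sample * channels * seconds
def pcm_chunk_bytes(sample_rate_hertz, channels: 1, seconds: 0.1)
  (sample_rate_hertz * 2 * channels * seconds).to_i
end

pcm_chunk_bytes(16_000)              # => 3200 bytes per 100 ms of mono audio
pcm_chunk_bytes(48_000, channels: 2) # => 19200 bytes per 100 ms of stereo audio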
#number_of_channels ⇒ Integer
Specify the number of channels in your audio stream. This value must be `2`, as only two channels are supported. If your audio doesn’t contain multiple channels, do not include this parameter in your request.
If you include `NumberOfChannels` in your request, you must also include `EnableChannelIdentification`.
#session_id ⇒ String
Specify a name for your transcription session. If you don’t include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.
#show_speaker_label ⇒ Boolean
Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.
For more information, see [Partitioning speakers (diarization)][1].

[1]: https://docs.aws.amazon.com/transcribe/latest/dg/diarization.html
#specialty ⇒ String
Specify the medical specialty contained in your audio.
#type ⇒ String
Specify the type of input audio. For example, choose `DICTATION` for a provider dictating patient notes and `CONVERSATION` for a dialogue between a patient and a medical professional.
#vocabulary_name ⇒ String
Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.