Class: Aws::TranscribeStreamingService::Types::StartMedicalStreamTranscriptionRequest

Inherits: Struct
Includes: Aws::Structure
Defined in: lib/aws-sdk-transcribestreamingservice/types.rb

Overview

Constant Summary collapse

SENSITIVE =
[]

Instance Attribute Summary

Instance Attribute Details

#audio_stream ⇒ Types::AudioStream

An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.

For more information, see [Transcribing streaming audio][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/streaming.html

Returns:

  • (Types::AudioStream)

# File 'lib/aws-sdk-transcribestreamingservice/types.rb', line 1339

class StartMedicalStreamTranscriptionRequest < Struct.new(
  :language_code,
  :media_sample_rate_hertz,
  :media_encoding,
  :vocabulary_name,
  :specialty,
  :type,
  :show_speaker_label,
  :session_id,
  :audio_stream,
  :enable_channel_identification,
  :number_of_channels,
  :content_identification_type)
  SENSITIVE = []
  include Aws::Structure
end
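In practice you rarely construct this struct directly: the streaming SDK accepts a plain params hash wherever it accepts a request struct. A minimal sketch of assembling the required parameters and locally sanity-checking the constraints documented on this page (the `params` hash and the checks are illustrative, not part of the SDK):

```ruby
# Illustrative params for StartMedicalStreamTranscriptionRequest. The enum
# strings ('en-US', 'pcm', 'PRIMARYCARE', 'CONVERSATION') follow the values
# documented for this API; adjust them for your audio.
params = {
  language_code: 'en-US',            # only US English is supported
  media_sample_rate_hertz: 16_000,   # must match the actual audio
  media_encoding: 'pcm',             # signed 16-bit little-endian PCM
  specialty: 'PRIMARYCARE',
  type: 'CONVERSATION'
}

# Local sanity checks mirroring the constraints in this reference.
raise 'only en-US is supported' unless params[:language_code] == 'en-US'
raise 'sample rate out of range' unless
  (16_000..48_000).cover?(params[:media_sample_rate_hertz])
```

Failing fast on these constraints locally is cheaper than opening an HTTP/2 or WebSocket stream only to have the service reject the request.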

#content_identification_type ⇒ String

Labels all personal health information (PHI) identified in your transcript.

Content identification is performed at the segment level; PHI is flagged upon complete transcription of an audio segment.

For more information, see [Identifying personal health information (PHI) in a transcription][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/phi-id.html

Returns:

  • (String)



#enable_channel_identification ⇒ Boolean

Enables channel identification in multi-channel audio.

Channel identification transcribes the audio on each channel independently, then appends the output for each channel into one transcript.

If you have multi-channel audio and do not enable channel identification, your audio is transcribed in a continuous manner and your transcript is not separated by channel.

If you include `EnableChannelIdentification` in your request, you must also include `NumberOfChannels`.

For more information, see [Transcribing multi-channel audio][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/channel-id.html

Returns:

  • (Boolean)
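The pairing rule above can be checked locally before opening the stream. A small sketch, where the `valid_channel_params?` helper is hypothetical (not part of the SDK):

```ruby
# Hypothetical check of the pairing rule documented above:
# EnableChannelIdentification and NumberOfChannels must appear together,
# and the only supported channel count is 2.
def valid_channel_params?(params)
  has_enable = params.key?(:enable_channel_identification)
  has_count  = params.key?(:number_of_channels)
  # Single-channel audio: omit both parameters entirely.
  return true unless has_enable || has_count
  has_enable && has_count && params[:number_of_channels] == 2
end
```

For example, `valid_channel_params?(number_of_channels: 2)` is false because `enable_channel_identification` is missing, while omitting both keys is valid.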



#language_code ⇒ String

Specify the language code that represents the language spoken in your audio.

Amazon Transcribe Medical only supports US English (`en-US`).

Returns:

  • (String)



#media_encoding ⇒ String

Specify the encoding used for the input audio. Supported formats are:

  • FLAC

  • OPUS-encoded audio in an Ogg container

  • PCM (only signed 16-bit little-endian audio formats, which does not include WAV)

For more information, see [Media formats][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/how-input.html#how-input-audio

Returns:

  • (String)
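Because WAV containers are not accepted, a common workaround is to strip the RIFF wrapper from a 16-bit little-endian PCM WAV file and stream the raw `data` chunk. A minimal sketch assuming a well-formed RIFF file (`wav_to_pcm` is illustrative, not part of the SDK):

```ruby
# Walk the RIFF chunk list of a WAV file and return the raw PCM payload
# of its 'data' chunk, which is what `media_encoding: 'pcm'` expects.
def wav_to_pcm(bytes)
  raise 'not a RIFF/WAVE file' unless bytes[0, 4] == 'RIFF' && bytes[8, 4] == 'WAVE'
  offset = 12
  while offset < bytes.bytesize
    chunk_id   = bytes[offset, 4]
    chunk_size = bytes[offset + 4, 4].unpack1('V')  # little-endian uint32
    return bytes[offset + 8, chunk_size] if chunk_id == 'data'
    offset += 8 + chunk_size + (chunk_size.odd? ? 1 : 0)  # chunks are word-aligned
  end
  raise "no 'data' chunk found"
end
```

Note this does not resample or transcode: the WAV must already contain signed 16-bit little-endian PCM at a sample rate the service accepts.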



#media_sample_rate_hertz ⇒ Integer

The sample rate of the input audio (in hertz). Amazon Transcribe Medical supports a range from 16,000 Hz to 48,000 Hz. Note that the sample rate you specify must match that of your audio.

Returns:

  • (Integer)



#number_of_channels ⇒ Integer

Specify the number of channels in your audio stream. This value must be `2`, as only two channels are supported. If your audio doesn't contain multiple channels, do not include this parameter in your request.

If you include `NumberOfChannels` in your request, you must also include `EnableChannelIdentification`.

Returns:

  • (Integer)



#session_id ⇒ String

Specify a name for your transcription session. If you don't include this parameter in your request, Amazon Transcribe Medical generates an ID and returns it in the response.

Returns:

  • (String)



#show_speaker_label ⇒ Boolean

Enables speaker partitioning (diarization) in your transcription output. Speaker partitioning labels the speech from individual speakers in your media file.

For more information, see [Partitioning speakers (diarization)][1].

[1]: docs.aws.amazon.com/transcribe/latest/dg/diarization.html

Returns:

  • (Boolean)



#specialty ⇒ String

Specify the medical specialty contained in your audio.

Returns:

  • (String)



#type ⇒ String

Specify the type of input audio. For example, choose `DICTATION` for a provider dictating patient notes and `CONVERSATION` for a dialogue between a patient and a medical professional.

Returns:

  • (String)



#vocabulary_name ⇒ String

Specify the name of the custom vocabulary that you want to use when processing your transcription. Note that vocabulary names are case sensitive.

Returns:

  • (String)

