Class: AssemblyAI::Transcripts::Transcript
- Inherits: Object
- Defined in: lib/assemblyai/transcripts/types/transcript.rb
Overview
A transcript object
Constant Summary
- OMIT = Object.new
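OMIT is a sentinel: a unique object used as a keyword-argument default so the class can tell "argument never passed" apart from an explicitly passed nil (a default of nil could not make that distinction). A minimal standalone sketch of the pattern; the `build_field_set` helper is hypothetical and not part of the gem:

```ruby
# A unique sentinel object distinguishes "not passed" from an explicit nil.
OMIT = Object.new

def build_field_set(text: OMIT, error: OMIT)
  # Keep only fields the caller actually supplied; nil is a legitimate value.
  { "text": text, "error": error }.reject { |_k, v| v == OMIT }
end

build_field_set(text: nil) # => { text: nil }  (nil was explicitly passed)
build_field_set            # => {}             (nothing was passed)
```

This is the same trick the constructor below uses when it builds `@_field_set` and rejects OMIT-valued entries before serialization.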
Instance Attribute Summary
-
#acoustic_model ⇒ String
readonly
The acoustic model that was used for the transcript.
-
#additional_properties ⇒ OpenStruct
readonly
Additional properties unmapped to the current class definition.
-
#audio_channels ⇒ Integer
readonly
The number of audio channels in the audio file.
-
#audio_duration ⇒ Integer
readonly
The duration of this transcript object’s media file, in seconds.
-
#audio_end_at ⇒ Integer
readonly
The point in time, in milliseconds, in the file at which the transcription was terminated.
-
#audio_start_from ⇒ Integer
readonly
The point in time, in milliseconds, in the file at which the transcription was started.
-
#audio_url ⇒ String
readonly
The URL of the media that was transcribed.
-
#auto_chapters ⇒ Boolean
readonly
Whether [Auto Chapters](www.assemblyai.com/docs/models/auto-chapters) is enabled, can be true or false.
-
#auto_highlights ⇒ Boolean
readonly
Whether Key Phrases is enabled, either true or false.
- #auto_highlights_result ⇒ AssemblyAI::Transcripts::AutoHighlightsResult readonly
-
#boost_param ⇒ String
readonly
The word boost parameter value.
-
#chapters ⇒ Array<AssemblyAI::Transcripts::Chapter>
readonly
An array of temporally sequential chapters for the audio file.
-
#confidence ⇒ Float
readonly
The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence).
-
#content_safety ⇒ Boolean
readonly
Whether [Content Moderation](www.assemblyai.com/docs/models/content-moderation) is enabled, can be true or false.
- #content_safety_labels ⇒ AssemblyAI::Transcripts::ContentSafetyLabelsResult readonly
-
#custom_spelling ⇒ Array<AssemblyAI::Transcripts::TranscriptCustomSpelling>
readonly
Customize how words are spelled and formatted using to and from values.
-
#custom_topics ⇒ Boolean
readonly
Whether custom topics is enabled, either true or false.
-
#disfluencies ⇒ Boolean
readonly
Transcribe Filler Words, like “umm”, in your media file; can be true or false.
-
#dual_channel ⇒ Boolean
readonly
Whether [Dual channel transcription](www.assemblyai.com/docs/models/speech-recognition#dual-channel-transcription) was enabled in the transcription request, either true or false.
-
#entities ⇒ Array<AssemblyAI::Transcripts::Entity>
readonly
An array of results for the Entity Detection model, if it is enabled.
-
#entity_detection ⇒ Boolean
readonly
Whether [Entity Detection](www.assemblyai.com/docs/models/entity-detection) is enabled, can be true or false.
-
#error ⇒ String
readonly
Error message of why the transcript failed.
-
#filter_profanity ⇒ Boolean
readonly
Whether [Profanity Filtering](www.assemblyai.com/docs/models/speech-recognition#profanity-filtering) is enabled, either true or false.
-
#format_text ⇒ Boolean
readonly
Whether Text Formatting is enabled, either true or false.
-
#iab_categories ⇒ Boolean
readonly
Whether [Topic Detection](www.assemblyai.com/docs/models/topic-detection) is enabled, can be true or false.
- #iab_categories_result ⇒ AssemblyAI::Transcripts::TopicDetectionModelResult readonly
-
#id ⇒ String
readonly
The unique identifier of your transcript.
-
#language_code ⇒ AssemblyAI::Transcripts::TranscriptLanguageCode
readonly
The language of your audio file.
-
#language_confidence ⇒ Float
readonly
The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence).
-
#language_confidence_threshold ⇒ Float
readonly
The confidence threshold for the automatically detected language.
-
#language_detection ⇒ Boolean
readonly
Whether [Automatic language detection](www.assemblyai.com/docs/models/speech-recognition#automatic-language-detection) is enabled, either true or false.
-
#language_model ⇒ String
readonly
The language model that was used for the transcript.
-
#multichannel ⇒ Boolean
readonly
Whether [Multichannel transcription](www.assemblyai.com/docs/models/speech-recognition#multichannel-transcription) was enabled in the transcription request, either true or false.
-
#punctuate ⇒ Boolean
readonly
Whether Automatic Punctuation is enabled, either true or false.
-
#redact_pii ⇒ Boolean
readonly
Whether [PII Redaction](www.assemblyai.com/docs/models/pii-redaction) is enabled, either true or false.
-
#redact_pii_audio ⇒ Boolean
readonly
Whether a redacted version of the audio file was generated, either true or false.
- #redact_pii_audio_quality ⇒ AssemblyAI::Transcripts::RedactPiiAudioQuality readonly
-
#redact_pii_policies ⇒ Array<AssemblyAI::Transcripts::PiiPolicy>
readonly
The list of PII Redaction policies that were enabled, if PII Redaction is enabled.
-
#redact_pii_sub ⇒ AssemblyAI::Transcripts::SubstitutionPolicy
readonly
The replacement logic for detected PII, can be “entity_type” or “hash”.
-
#sentiment_analysis ⇒ Boolean
readonly
Whether [Sentiment Analysis](www.assemblyai.com/docs/models/sentiment-analysis) is enabled, can be true or false.
-
#sentiment_analysis_results ⇒ Array<AssemblyAI::Transcripts::SentimentAnalysisResult>
readonly
An array of results for the Sentiment Analysis model, if it is enabled.
-
#speaker_labels ⇒ Boolean
readonly
Whether [Speaker diarization](www.assemblyai.com/docs/models/speaker-diarization) is enabled, can be true or false.
-
#speakers_expected ⇒ Integer
readonly
Tell the speaker label model how many speakers it should attempt to identify, up to 10.
- #speech_model ⇒ AssemblyAI::Transcripts::SpeechModel readonly
-
#speech_threshold ⇒ Float
readonly
Defaults to null.
-
#speed_boost ⇒ Boolean
readonly
Whether speed boost is enabled.
-
#status ⇒ AssemblyAI::Transcripts::TranscriptStatus
readonly
The status of your transcript.
-
#summarization ⇒ Boolean
readonly
Whether [Summarization](www.assemblyai.com/docs/models/summarization) is enabled, either true or false.
-
#summary ⇒ String
readonly
The generated summary of the media file, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
-
#summary_model ⇒ String
readonly
The Summarization model used to generate the summary, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
-
#summary_type ⇒ String
readonly
The type of summary generated, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
-
#text ⇒ String
readonly
The textual transcript of your media file.
-
#throttled ⇒ Boolean
readonly
True while a request is throttled and false when a request is no longer throttled.
-
#topics ⇒ Array<String>
readonly
The list of custom topics provided if custom topics is enabled.
-
#utterances ⇒ Array<AssemblyAI::Transcripts::TranscriptUtterance>
readonly
When dual_channel or speaker_labels is enabled, a list of turn-by-turn utterance objects.
-
#webhook_auth ⇒ Boolean
readonly
Whether webhook authentication details were provided.
-
#webhook_auth_header_name ⇒ String
readonly
The header name to be sent with the transcript completed or failed webhook requests.
-
#webhook_status_code ⇒ Integer
readonly
The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided.
-
#webhook_url ⇒ String
readonly
The URL to which we send webhook requests.
-
#word_boost ⇒ Array<String>
readonly
The list of custom vocabulary to boost transcription probability for.
-
#words ⇒ Array<AssemblyAI::Transcripts::TranscriptWord>
readonly
An array of temporally-sequential word objects, one for each word in the transcript.
Class Method Summary
-
.from_json(json_object:) ⇒ AssemblyAI::Transcripts::Transcript
Deserialize a JSON object to an instance of Transcript.
-
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.
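The reason validate_raw exists is union resolution: a deserializer can probe each candidate type and use the first one whose validation passes. A standalone sketch of the idea, with hypothetical CatType/DogType candidates rather than the gem's actual union types:

```ruby
# Hypothetical union resolution: try each candidate's validate_raw and
# deserialize with the first one that accepts the hash.
class CatType
  def self.validate_raw(obj:)
    raise TypeError, "not a cat" unless obj["meow"].is_a?(String)
  end

  def self.from_hash(_obj)
    new
  end
end

class DogType
  def self.validate_raw(obj:)
    raise TypeError, "not a dog" unless obj["bark"].is_a?(String)
  end

  def self.from_hash(_obj)
    new
  end
end

def resolve_union(obj, candidates)
  candidates.each do |type|
    begin
      type.validate_raw(obj: obj)       # raises TypeError on mismatch
      return type.from_hash(obj)        # first match wins
    rescue TypeError
      next
    end
  end
  raise TypeError, "no candidate matched"
end

resolve_union({ "bark" => "woof" }, [CatType, DogType]).class # => DogType
```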
Instance Method Summary
- #initialize(id:, audio_url:, status:, webhook_auth:, auto_highlights:, redact_pii:, summarization:, language_model:, acoustic_model:, language_code: OMIT, language_detection: OMIT, language_confidence_threshold: OMIT, language_confidence: OMIT, speech_model: OMIT, text: OMIT, words: OMIT, utterances: OMIT, confidence: OMIT, audio_duration: OMIT, punctuate: OMIT, format_text: OMIT, disfluencies: OMIT, multichannel: OMIT, audio_channels: OMIT, dual_channel: OMIT, webhook_url: OMIT, webhook_status_code: OMIT, webhook_auth_header_name: OMIT, speed_boost: OMIT, auto_highlights_result: OMIT, audio_start_from: OMIT, audio_end_at: OMIT, word_boost: OMIT, boost_param: OMIT, filter_profanity: OMIT, redact_pii_audio: OMIT, redact_pii_audio_quality: OMIT, redact_pii_policies: OMIT, redact_pii_sub: OMIT, speaker_labels: OMIT, speakers_expected: OMIT, content_safety: OMIT, content_safety_labels: OMIT, iab_categories: OMIT, iab_categories_result: OMIT, custom_spelling: OMIT, auto_chapters: OMIT, chapters: OMIT, summary_type: OMIT, summary_model: OMIT, summary: OMIT, custom_topics: OMIT, topics: OMIT, sentiment_analysis: OMIT, sentiment_analysis_results: OMIT, entity_detection: OMIT, entities: OMIT, speech_threshold: OMIT, throttled: OMIT, error: OMIT, additional_properties: nil) ⇒ AssemblyAI::Transcripts::Transcript constructor
-
#to_json(*_args) ⇒ String
Serialize an instance of Transcript to a JSON object.
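The from_json/to_json pair follows a common SDK pattern: parse the JSON into an OpenStruct (kept as additional_properties for unmapped keys), map the known fields into the constructor, and serialize back from the retained field set. A trimmed-down standalone sketch; MiniTranscript is hypothetical, not the gem's implementation:

```ruby
require "json"
require "ostruct"

# Hypothetical, trimmed-down version of the deserialize/serialize pattern.
class MiniTranscript
  attr_reader :id, :audio_url, :additional_properties

  def initialize(id:, audio_url:, additional_properties: nil)
    @id = id
    @audio_url = audio_url
    @additional_properties = additional_properties
    @_field_set = { "id": id, "audio_url": audio_url }
  end

  # Deserialize a JSON object to an instance, keeping unmapped keys around.
  def self.from_json(json_object:)
    struct = JSON.parse(json_object, object_class: OpenStruct)
    new(id: struct.id, audio_url: struct.audio_url, additional_properties: struct)
  end

  # Serialize an instance back to a JSON object.
  def to_json(*_args)
    @_field_set.to_json
  end
end

t = MiniTranscript.from_json(json_object: '{"id":"abc","audio_url":"https://example.com/a.mp3"}')
t.id # => "abc"
```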
Constructor Details
#initialize(id:, audio_url:, status:, webhook_auth:, auto_highlights:, redact_pii:, summarization:, language_model:, acoustic_model:, language_code: OMIT, language_detection: OMIT, language_confidence_threshold: OMIT, language_confidence: OMIT, speech_model: OMIT, text: OMIT, words: OMIT, utterances: OMIT, confidence: OMIT, audio_duration: OMIT, punctuate: OMIT, format_text: OMIT, disfluencies: OMIT, multichannel: OMIT, audio_channels: OMIT, dual_channel: OMIT, webhook_url: OMIT, webhook_status_code: OMIT, webhook_auth_header_name: OMIT, speed_boost: OMIT, auto_highlights_result: OMIT, audio_start_from: OMIT, audio_end_at: OMIT, word_boost: OMIT, boost_param: OMIT, filter_profanity: OMIT, redact_pii_audio: OMIT, redact_pii_audio_quality: OMIT, redact_pii_policies: OMIT, redact_pii_sub: OMIT, speaker_labels: OMIT, speakers_expected: OMIT, content_safety: OMIT, content_safety_labels: OMIT, iab_categories: OMIT, iab_categories_result: OMIT, custom_spelling: OMIT, auto_chapters: OMIT, chapters: OMIT, summary_type: OMIT, summary_model: OMIT, summary: OMIT, custom_topics: OMIT, topics: OMIT, sentiment_analysis: OMIT, sentiment_analysis_results: OMIT, entity_detection: OMIT, entities: OMIT, speech_threshold: OMIT, throttled: OMIT, error: OMIT, additional_properties: nil) ⇒ AssemblyAI::Transcripts::Transcript
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 349
def initialize(id:, audio_url:, status:, webhook_auth:, auto_highlights:, redact_pii:, summarization:, language_model:, acoustic_model:, language_code: OMIT, language_detection: OMIT, language_confidence_threshold: OMIT, language_confidence: OMIT, speech_model: OMIT, text: OMIT, words: OMIT, utterances: OMIT, confidence: OMIT, audio_duration: OMIT, punctuate: OMIT, format_text: OMIT, disfluencies: OMIT, multichannel: OMIT, audio_channels: OMIT, dual_channel: OMIT, webhook_url: OMIT, webhook_status_code: OMIT, webhook_auth_header_name: OMIT, speed_boost: OMIT, auto_highlights_result: OMIT, audio_start_from: OMIT, audio_end_at: OMIT, word_boost: OMIT, boost_param: OMIT, filter_profanity: OMIT, redact_pii_audio: OMIT, redact_pii_audio_quality: OMIT, redact_pii_policies: OMIT, redact_pii_sub: OMIT, speaker_labels: OMIT, speakers_expected: OMIT, content_safety: OMIT, content_safety_labels: OMIT, iab_categories: OMIT, iab_categories_result: OMIT, custom_spelling: OMIT, auto_chapters: OMIT, chapters: OMIT, summary_type: OMIT, summary_model: OMIT, summary: OMIT, custom_topics: OMIT, topics: OMIT, sentiment_analysis: OMIT, sentiment_analysis_results: OMIT, entity_detection: OMIT, entities: OMIT, speech_threshold: OMIT, throttled: OMIT, error: OMIT, additional_properties: nil)
  @id = id
  @audio_url = audio_url
  @status = status
  @language_code = language_code if language_code != OMIT
  @language_detection = language_detection if language_detection != OMIT
  @language_confidence_threshold = language_confidence_threshold if language_confidence_threshold != OMIT
  @language_confidence = language_confidence if language_confidence != OMIT
  @speech_model = speech_model if speech_model != OMIT
  @text = text if text != OMIT
  @words = words if words != OMIT
  @utterances = utterances if utterances != OMIT
  @confidence = confidence if confidence != OMIT
  @audio_duration = audio_duration if audio_duration != OMIT
  @punctuate = punctuate if punctuate != OMIT
  @format_text = format_text if format_text != OMIT
  @disfluencies = disfluencies if disfluencies != OMIT
  @multichannel = multichannel if multichannel != OMIT
  @audio_channels = audio_channels if audio_channels != OMIT
  @dual_channel = dual_channel if dual_channel != OMIT
  @webhook_url = webhook_url if webhook_url != OMIT
  @webhook_status_code = webhook_status_code if webhook_status_code != OMIT
  @webhook_auth = webhook_auth
  @webhook_auth_header_name = webhook_auth_header_name if webhook_auth_header_name != OMIT
  @speed_boost = speed_boost if speed_boost != OMIT
  @auto_highlights = auto_highlights
  @auto_highlights_result = auto_highlights_result if auto_highlights_result != OMIT
  @audio_start_from = audio_start_from if audio_start_from != OMIT
  @audio_end_at = audio_end_at if audio_end_at != OMIT
  @word_boost = word_boost if word_boost != OMIT
  @boost_param = boost_param if boost_param != OMIT
  @filter_profanity = filter_profanity if filter_profanity != OMIT
  @redact_pii = redact_pii
  @redact_pii_audio = redact_pii_audio if redact_pii_audio != OMIT
  @redact_pii_audio_quality = redact_pii_audio_quality if redact_pii_audio_quality != OMIT
  @redact_pii_policies = redact_pii_policies if redact_pii_policies != OMIT
  @redact_pii_sub = redact_pii_sub if redact_pii_sub != OMIT
  @speaker_labels = speaker_labels if speaker_labels != OMIT
  @speakers_expected = speakers_expected if speakers_expected != OMIT
  @content_safety = content_safety if content_safety != OMIT
  @content_safety_labels = content_safety_labels if content_safety_labels != OMIT
  @iab_categories = iab_categories if iab_categories != OMIT
  @iab_categories_result = iab_categories_result if iab_categories_result != OMIT
  @custom_spelling = custom_spelling if custom_spelling != OMIT
  @auto_chapters = auto_chapters if auto_chapters != OMIT
  @chapters = chapters if chapters != OMIT
  @summarization = summarization
  @summary_type = summary_type if summary_type != OMIT
  @summary_model = summary_model if summary_model != OMIT
  @summary = summary if summary != OMIT
  @custom_topics = custom_topics if custom_topics != OMIT
  @topics = topics if topics != OMIT
  @sentiment_analysis = sentiment_analysis if sentiment_analysis != OMIT
  @sentiment_analysis_results = sentiment_analysis_results if sentiment_analysis_results != OMIT
  @entity_detection = entity_detection if entity_detection != OMIT
  @entities = entities if entities != OMIT
  @speech_threshold = speech_threshold if speech_threshold != OMIT
  @throttled = throttled if throttled != OMIT
  @error = error if error != OMIT
  @language_model = language_model
  @acoustic_model = acoustic_model
  @additional_properties = additional_properties
  @_field_set = {
    "id": id, "audio_url": audio_url, "status": status, "language_code": language_code,
    "language_detection": language_detection, "language_confidence_threshold": language_confidence_threshold,
    "language_confidence": language_confidence, "speech_model": speech_model, "text": text, "words": words,
    "utterances": utterances, "confidence": confidence, "audio_duration": audio_duration, "punctuate": punctuate,
    "format_text": format_text, "disfluencies": disfluencies, "multichannel": multichannel,
    "audio_channels": audio_channels, "dual_channel": dual_channel, "webhook_url": webhook_url,
    "webhook_status_code": webhook_status_code, "webhook_auth": webhook_auth,
    "webhook_auth_header_name": webhook_auth_header_name, "speed_boost": speed_boost,
    "auto_highlights": auto_highlights, "auto_highlights_result": auto_highlights_result,
    "audio_start_from": audio_start_from, "audio_end_at": audio_end_at, "word_boost": word_boost,
    "boost_param": boost_param, "filter_profanity": filter_profanity, "redact_pii": redact_pii,
    "redact_pii_audio": redact_pii_audio, "redact_pii_audio_quality": redact_pii_audio_quality,
    "redact_pii_policies": redact_pii_policies, "redact_pii_sub": redact_pii_sub,
    "speaker_labels": speaker_labels, "speakers_expected": speakers_expected,
    "content_safety": content_safety, "content_safety_labels": content_safety_labels,
    "iab_categories": iab_categories, "iab_categories_result": iab_categories_result,
    "custom_spelling": custom_spelling, "auto_chapters": auto_chapters, "chapters": chapters,
    "summarization": summarization, "summary_type": summary_type, "summary_model": summary_model,
    "summary": summary, "custom_topics": custom_topics, "topics": topics,
    "sentiment_analysis": sentiment_analysis, "sentiment_analysis_results": sentiment_analysis_results,
    "entity_detection": entity_detection, "entities": entities, "speech_threshold": speech_threshold,
    "throttled": throttled, "error": error, "language_model": language_model, "acoustic_model": acoustic_model
  }.reject do |_k, v|
    v == OMIT
  end
end
Instance Attribute Details
#acoustic_model ⇒ String (readonly)
Returns The acoustic model that was used for the transcript.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 211
def acoustic_model
  @acoustic_model
end
#additional_properties ⇒ OpenStruct (readonly)
Returns Additional properties unmapped to the current class definition.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 213
def additional_properties
  @additional_properties
end
#audio_channels ⇒ Integer (readonly)
Returns The number of audio channels in the audio file. This is only present when multichannel is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 80
def audio_channels
  @audio_channels
end
#audio_duration ⇒ Integer (readonly)
Returns The duration of this transcript object’s media file, in seconds.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 67
def audio_duration
  @audio_duration
end
#audio_end_at ⇒ Integer (readonly)
Returns The point in time, in milliseconds, in the file at which the transcription was terminated.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 109
def audio_end_at
  @audio_end_at
end
#audio_start_from ⇒ Integer (readonly)
Returns The point in time, in milliseconds, in the file at which the transcription was started.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 106
def audio_start_from
  @audio_start_from
end
#audio_url ⇒ String (readonly)
Returns The URL of the media that was transcribed.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 28
def audio_url
  @audio_url
end
#auto_chapters ⇒ Boolean (readonly)
Returns Whether [Auto Chapters](www.assemblyai.com/docs/models/auto-chapters) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 162
def auto_chapters
  @auto_chapters
end
#auto_highlights ⇒ Boolean (readonly)
Returns Whether Key Phrases is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 101
def auto_highlights
  @auto_highlights
end
#auto_highlights_result ⇒ AssemblyAI::Transcripts::AutoHighlightsResult (readonly)
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 103
def auto_highlights_result
  @auto_highlights_result
end
#boost_param ⇒ String (readonly)
Returns The word boost parameter value.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 113
def boost_param
  @boost_param
end
#chapters ⇒ Array<AssemblyAI::Transcripts::Chapter> (readonly)
Returns An array of temporally sequential chapters for the audio file.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 164
def chapters
  @chapters
end
#confidence ⇒ Float (readonly)
Returns The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence).
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 65
def confidence
  @confidence
end
#content_safety ⇒ Boolean (readonly)
Returns Whether [Content Moderation](www.assemblyai.com/docs/models/content-moderation) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 149
def content_safety
  @content_safety
end
#content_safety_labels ⇒ AssemblyAI::Transcripts::ContentSafetyLabelsResult (readonly)
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 151
def content_safety_labels
  @content_safety_labels
end
#custom_spelling ⇒ Array<AssemblyAI::Transcripts::TranscriptCustomSpelling> (readonly)
Returns Customize how words are spelled and formatted using to and from values.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 159
def custom_spelling
  @custom_spelling
end
#custom_topics ⇒ Boolean (readonly)
Returns Whether custom topics is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 179
def custom_topics
  @custom_topics
end
#disfluencies ⇒ Boolean (readonly)
Returns Transcribe Filler Words, like “umm”, in your media file; can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 73
def disfluencies
  @disfluencies
end
#dual_channel ⇒ Boolean (readonly)
Returns Whether [Dual channel transcription](www.assemblyai.com/docs/models/speech-recognition#dual-channel-transcription) was enabled in the transcription request, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 84
def dual_channel
  @dual_channel
end
#entities ⇒ Array<AssemblyAI::Transcripts::Entity> (readonly)
Returns An array of results for the Entity Detection model, if it is enabled. See [Entity detection](www.assemblyai.com/docs/models/entity-detection) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 198
def entities
  @entities
end
#entity_detection ⇒ Boolean (readonly)
Returns Whether [Entity Detection](www.assemblyai.com/docs/models/entity-detection) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 194
def entity_detection
  @entity_detection
end
#error ⇒ String (readonly)
Returns Error message of why the transcript failed.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 207
def error
  @error
end
#filter_profanity ⇒ Boolean (readonly)
Returns Whether [Profanity Filtering](www.assemblyai.com/docs/models/speech-recognition#profanity-filtering) is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 117
def filter_profanity
  @filter_profanity
end
#format_text ⇒ Boolean (readonly)
Returns Whether Text Formatting is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 71
def format_text
  @format_text
end
#iab_categories ⇒ Boolean (readonly)
Returns Whether [Topic Detection](www.assemblyai.com/docs/models/topic-detection) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 155
def iab_categories
  @iab_categories
end
#iab_categories_result ⇒ AssemblyAI::Transcripts::TopicDetectionModelResult (readonly)
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 157
def iab_categories_result
  @iab_categories_result
end
#id ⇒ String (readonly)
Returns The unique identifier of your transcript.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 26
def id
  @id
end
#language_code ⇒ AssemblyAI::Transcripts::TranscriptLanguageCode (readonly)
Returns The language of your audio file. Possible values are found in [Supported Languages](www.assemblyai.com/docs/concepts/supported-languages). The default value is ‘en_us’.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 36
def language_code
  @language_code
end
#language_confidence ⇒ Float (readonly)
Returns The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence).
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 46
def language_confidence
  @language_confidence
end
#language_confidence_threshold ⇒ Float (readonly)
Returns The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 43
def language_confidence_threshold
  @language_confidence_threshold
end
#language_detection ⇒ Boolean (readonly)
Returns Whether [Automatic language detection](www.assemblyai.com/docs/models/speech-recognition#automatic-language-detection) is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 40
def language_detection
  @language_detection
end
#language_model ⇒ String (readonly)
Returns The language model that was used for the transcript.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 209
def language_model
  @language_model
end
#multichannel ⇒ Boolean (readonly)
Returns Whether [Multichannel transcription](www.assemblyai.com/docs/models/speech-recognition#multichannel-transcription) was enabled in the transcription request, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 77
def multichannel
  @multichannel
end
#punctuate ⇒ Boolean (readonly)
Returns Whether Automatic Punctuation is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 69
def punctuate
  @punctuate
end
#redact_pii ⇒ Boolean (readonly)
Returns Whether [PII Redaction](www.assemblyai.com/docs/models/pii-redaction) is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 120
def redact_pii
  @redact_pii
end
#redact_pii_audio ⇒ Boolean (readonly)
Returns Whether a redacted version of the audio file was generated, either true or false. See [PII redaction](www.assemblyai.com/docs/models/pii-redaction) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 125
def redact_pii_audio
  @redact_pii_audio
end
#redact_pii_audio_quality ⇒ AssemblyAI::Transcripts::RedactPiiAudioQuality (readonly)
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 127
def redact_pii_audio_quality
  @redact_pii_audio_quality
end
#redact_pii_policies ⇒ Array<AssemblyAI::Transcripts::PiiPolicy> (readonly)
Returns The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See [PII redaction](www.assemblyai.com/docs/models/pii-redaction) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 132
def redact_pii_policies
  @redact_pii_policies
end
#redact_pii_sub ⇒ AssemblyAI::Transcripts::SubstitutionPolicy (readonly)
Returns The replacement logic for detected PII, can be “entity_type” or “hash”. See [PII redaction](www.assemblyai.com/docs/models/pii-redaction) for more details.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 136
def redact_pii_sub
  @redact_pii_sub
end
#sentiment_analysis ⇒ Boolean (readonly)
Returns Whether [Sentiment Analysis](www.assemblyai.com/docs/models/sentiment-analysis) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 185
def sentiment_analysis
  @sentiment_analysis
end
#sentiment_analysis_results ⇒ Array<AssemblyAI::Transcripts::SentimentAnalysisResult> (readonly)
Returns An array of results for the Sentiment Analysis model, if it is enabled. See [Sentiment Analysis](www.assemblyai.com/docs/models/sentiment-analysis) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 190
def sentiment_analysis_results
  @sentiment_analysis_results
end
#speaker_labels ⇒ Boolean (readonly)
Returns Whether [Speaker diarization](www.assemblyai.com/docs/models/speaker-diarization) is enabled, can be true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 140
def speaker_labels
  @speaker_labels
end
#speakers_expected ⇒ Integer (readonly)
Returns Tell the speaker label model how many speakers it should attempt to identify, up to 10. See [Speaker diarization](www.assemblyai.com/docs/models/speaker-diarization) for more details.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 145
def speakers_expected
  @speakers_expected
end
#speech_model ⇒ AssemblyAI::Transcripts::SpeechModel (readonly)
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 48
def speech_model
  @speech_model
end
#speech_threshold ⇒ Float (readonly)
Returns Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 202
def speech_threshold
  @speech_threshold
end
#speed_boost ⇒ Boolean (readonly)
Returns Whether speed boost is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 99
def speed_boost
  @speed_boost
end
#status ⇒ AssemblyAI::Transcripts::TranscriptStatus (readonly)
Returns The status of your transcript. Possible values are queued, processing, completed, or error.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 31
def status
  @status
end
#summarization ⇒ Boolean (readonly)
Returns Whether [Summarization](www.assemblyai.com/docs/models/summarization) is enabled, either true or false.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 167
def summarization
  @summarization
end
#summary ⇒ String (readonly)
Returns The generated summary of the media file, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 177
def summary
  @summary
end
#summary_model ⇒ String (readonly)
Returns The Summarization model used to generate the summary, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 174
def summary_model
  @summary_model
end
#summary_type ⇒ String (readonly)
Returns The type of summary generated, if [Summarization](www.assemblyai.com/docs/models/summarization) is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 170
def summary_type
  @summary_type
end
#text ⇒ String (readonly)
Returns The textual transcript of your media file.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 50
def text
  @text
end
#throttled ⇒ Boolean (readonly)
Returns True while a request is throttled and false when a request is no longer throttled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 205
def throttled
  @throttled
end
#topics ⇒ Array<String> (readonly)
Returns The list of custom topics provided if custom topics is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 181

def topics
  @topics
end
#utterances ⇒ Array<AssemblyAI::Transcripts::TranscriptUtterance> (readonly)
Returns When dual_channel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See [Speaker diarization](www.assemblyai.com/docs/models/speaker-diarization) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 62

def utterances
  @utterances
end
#webhook_auth ⇒ Boolean (readonly)
Returns Whether webhook authentication details were provided.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 94

def webhook_auth
  @webhook_auth
end
#webhook_auth_header_name ⇒ String (readonly)
Returns The header name sent with the transcript completed or failed webhook requests.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 97

def webhook_auth_header_name
  @webhook_auth_header_name
end
#webhook_status_code ⇒ Integer (readonly)
Returns The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 92

def webhook_status_code
  @webhook_status_code
end
#webhook_url ⇒ String (readonly)
Returns The URL to which we send webhook requests. We send two different types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready if redact_pii_audio is enabled.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 89

def webhook_url
  @webhook_url
end
#word_boost ⇒ Array<String> (readonly)
Returns The list of custom vocabulary to boost transcription probability for.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 111

def word_boost
  @word_boost
end
#words ⇒ Array<AssemblyAI::Transcripts::TranscriptWord> (readonly)
Returns An array of temporally-sequential word objects, one for each word in the transcript. See [Speech recognition](www.assemblyai.com/docs/models/speech-recognition) for more information.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 56

def words
  @words
end
Class Method Details
.from_json(json_object:) ⇒ AssemblyAI::Transcripts::Transcript
Deserialize a JSON object to an instance of Transcript
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 482

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  id = struct["id"]
  audio_url = struct["audio_url"]
  status = struct["status"]
  language_code = struct["language_code"]
  language_detection = struct["language_detection"]
  language_confidence_threshold = struct["language_confidence_threshold"]
  language_confidence = struct["language_confidence"]
  speech_model = struct["speech_model"]
  text = struct["text"]
  words = parsed_json["words"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::TranscriptWord.from_json(json_object: v)
  end
  utterances = parsed_json["utterances"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::TranscriptUtterance.from_json(json_object: v)
  end
  confidence = struct["confidence"]
  audio_duration = struct["audio_duration"]
  punctuate = struct["punctuate"]
  format_text = struct["format_text"]
  disfluencies = struct["disfluencies"]
  multichannel = struct["multichannel"]
  audio_channels = struct["audio_channels"]
  dual_channel = struct["dual_channel"]
  webhook_url = struct["webhook_url"]
  webhook_status_code = struct["webhook_status_code"]
  webhook_auth = struct["webhook_auth"]
  webhook_auth_header_name = struct["webhook_auth_header_name"]
  speed_boost = struct["speed_boost"]
  auto_highlights = struct["auto_highlights"]
  if parsed_json["auto_highlights_result"].nil?
    auto_highlights_result = nil
  else
    auto_highlights_result = parsed_json["auto_highlights_result"].to_json
    auto_highlights_result = AssemblyAI::Transcripts::AutoHighlightsResult.from_json(json_object: auto_highlights_result)
  end
  audio_start_from = struct["audio_start_from"]
  audio_end_at = struct["audio_end_at"]
  word_boost = struct["word_boost"]
  boost_param = struct["boost_param"]
  filter_profanity = struct["filter_profanity"]
  redact_pii = struct["redact_pii"]
  redact_pii_audio = struct["redact_pii_audio"]
  redact_pii_audio_quality = struct["redact_pii_audio_quality"]
  redact_pii_policies = struct["redact_pii_policies"]
  redact_pii_sub = struct["redact_pii_sub"]
  speaker_labels = struct["speaker_labels"]
  speakers_expected = struct["speakers_expected"]
  content_safety = struct["content_safety"]
  if parsed_json["content_safety_labels"].nil?
    content_safety_labels = nil
  else
    content_safety_labels = parsed_json["content_safety_labels"].to_json
    content_safety_labels = AssemblyAI::Transcripts::ContentSafetyLabelsResult.from_json(json_object: content_safety_labels)
  end
  iab_categories = struct["iab_categories"]
  if parsed_json["iab_categories_result"].nil?
    iab_categories_result = nil
  else
    iab_categories_result = parsed_json["iab_categories_result"].to_json
    iab_categories_result = AssemblyAI::Transcripts::TopicDetectionModelResult.from_json(json_object: iab_categories_result)
  end
  custom_spelling = parsed_json["custom_spelling"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::TranscriptCustomSpelling.from_json(json_object: v)
  end
  auto_chapters = struct["auto_chapters"]
  chapters = parsed_json["chapters"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::Chapter.from_json(json_object: v)
  end
  summarization = struct["summarization"]
  summary_type = struct["summary_type"]
  summary_model = struct["summary_model"]
  summary = struct["summary"]
  custom_topics = struct["custom_topics"]
  topics = struct["topics"]
  sentiment_analysis = struct["sentiment_analysis"]
  sentiment_analysis_results = parsed_json["sentiment_analysis_results"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::SentimentAnalysisResult.from_json(json_object: v)
  end
  entity_detection = struct["entity_detection"]
  entities = parsed_json["entities"]&.map do |v|
    v = v.to_json
    AssemblyAI::Transcripts::Entity.from_json(json_object: v)
  end
  speech_threshold = struct["speech_threshold"]
  throttled = struct["throttled"]
  error = struct["error"]
  language_model = struct["language_model"]
  acoustic_model = struct["acoustic_model"]
  new(
    id: id, audio_url: audio_url, status: status, language_code: language_code,
    language_detection: language_detection, language_confidence_threshold: language_confidence_threshold,
    language_confidence: language_confidence, speech_model: speech_model, text: text,
    words: words, utterances: utterances, confidence: confidence, audio_duration: audio_duration,
    punctuate: punctuate, format_text: format_text, disfluencies: disfluencies,
    multichannel: multichannel, audio_channels: audio_channels, dual_channel: dual_channel,
    webhook_url: webhook_url, webhook_status_code: webhook_status_code, webhook_auth: webhook_auth,
    webhook_auth_header_name: webhook_auth_header_name, speed_boost: speed_boost,
    auto_highlights: auto_highlights, auto_highlights_result: auto_highlights_result,
    audio_start_from: audio_start_from, audio_end_at: audio_end_at, word_boost: word_boost,
    boost_param: boost_param, filter_profanity: filter_profanity, redact_pii: redact_pii,
    redact_pii_audio: redact_pii_audio, redact_pii_audio_quality: redact_pii_audio_quality,
    redact_pii_policies: redact_pii_policies, redact_pii_sub: redact_pii_sub,
    speaker_labels: speaker_labels, speakers_expected: speakers_expected,
    content_safety: content_safety, content_safety_labels: content_safety_labels,
    iab_categories: iab_categories, iab_categories_result: iab_categories_result,
    custom_spelling: custom_spelling, auto_chapters: auto_chapters, chapters: chapters,
    summarization: summarization, summary_type: summary_type, summary_model: summary_model,
    summary: summary, custom_topics: custom_topics, topics: topics,
    sentiment_analysis: sentiment_analysis, sentiment_analysis_results: sentiment_analysis_results,
    entity_detection: entity_detection, entities: entities, speech_threshold: speech_threshold,
    throttled: throttled, error: error, language_model: language_model,
    acoustic_model: acoustic_model, additional_properties: struct
  )
end
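Note that from_json parses the input twice: once with object_class: OpenStruct for convenient scalar access, and once into plain hashes so each nested object can be re-serialized and handed to the nested type's own from_json. The following standalone sketch illustrates that pattern using only the Ruby standard library (the field names here are taken from the transcript schema; the nested AssemblyAI types are stood in for by plain OpenStructs):

```ruby
require "json"
require "ostruct"

# A minimal sketch of the double-parse pattern used by from_json.
json = '{"id": "t1", "status": "completed", "words": [{"text": "hi", "confidence": 0.99}]}'

struct = JSON.parse(json, object_class: OpenStruct) # scalar access
parsed = JSON.parse(json)                           # plain hashes for nested objects

id = struct["id"]
words = parsed["words"]&.map do |v|
  # In the real method, each nested hash is re-serialized and passed to
  # the nested type's from_json; here we parse it back as an OpenStruct.
  JSON.parse(v.to_json, object_class: OpenStruct)
end

puts id               # prints t1
puts words.first.text # prints hi
```

Because "words" is accessed with the safe-navigation operator, a payload that omits the key simply yields nil rather than raising.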
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given
hash and check each field's type against the current object's property
definitions.
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 656

def self.validate_raw(obj:)
  obj.id.is_a?(String) != false || raise("Passed value for field obj.id is not the expected type, validation failed.")
  obj.audio_url.is_a?(String) != false || raise("Passed value for field obj.audio_url is not the expected type, validation failed.")
  obj.status.is_a?(AssemblyAI::Transcripts::TranscriptStatus) != false || raise("Passed value for field obj.status is not the expected type, validation failed.")
  obj.language_code&.is_a?(AssemblyAI::Transcripts::TranscriptLanguageCode) != false || raise("Passed value for field obj.language_code is not the expected type, validation failed.")
  obj.language_detection&.is_a?(Boolean) != false || raise("Passed value for field obj.language_detection is not the expected type, validation failed.")
  obj.language_confidence_threshold&.is_a?(Float) != false || raise("Passed value for field obj.language_confidence_threshold is not the expected type, validation failed.")
  obj.language_confidence&.is_a?(Float) != false || raise("Passed value for field obj.language_confidence is not the expected type, validation failed.")
  obj.speech_model&.is_a?(AssemblyAI::Transcripts::SpeechModel) != false || raise("Passed value for field obj.speech_model is not the expected type, validation failed.")
  obj.text&.is_a?(String) != false || raise("Passed value for field obj.text is not the expected type, validation failed.")
  obj.words&.is_a?(Array) != false || raise("Passed value for field obj.words is not the expected type, validation failed.")
  obj.utterances&.is_a?(Array) != false || raise("Passed value for field obj.utterances is not the expected type, validation failed.")
  obj.confidence&.is_a?(Float) != false || raise("Passed value for field obj.confidence is not the expected type, validation failed.")
  obj.audio_duration&.is_a?(Integer) != false || raise("Passed value for field obj.audio_duration is not the expected type, validation failed.")
  obj.punctuate&.is_a?(Boolean) != false || raise("Passed value for field obj.punctuate is not the expected type, validation failed.")
  obj.format_text&.is_a?(Boolean) != false || raise("Passed value for field obj.format_text is not the expected type, validation failed.")
  obj.disfluencies&.is_a?(Boolean) != false || raise("Passed value for field obj.disfluencies is not the expected type, validation failed.")
  obj.multichannel&.is_a?(Boolean) != false || raise("Passed value for field obj.multichannel is not the expected type, validation failed.")
  obj.audio_channels&.is_a?(Integer) != false || raise("Passed value for field obj.audio_channels is not the expected type, validation failed.")
  obj.dual_channel&.is_a?(Boolean) != false || raise("Passed value for field obj.dual_channel is not the expected type, validation failed.")
  obj.webhook_url&.is_a?(String) != false || raise("Passed value for field obj.webhook_url is not the expected type, validation failed.")
  obj.webhook_status_code&.is_a?(Integer) != false || raise("Passed value for field obj.webhook_status_code is not the expected type, validation failed.")
  obj.webhook_auth.is_a?(Boolean) != false || raise("Passed value for field obj.webhook_auth is not the expected type, validation failed.")
  obj.webhook_auth_header_name&.is_a?(String) != false || raise("Passed value for field obj.webhook_auth_header_name is not the expected type, validation failed.")
  obj.speed_boost&.is_a?(Boolean) != false || raise("Passed value for field obj.speed_boost is not the expected type, validation failed.")
  obj.auto_highlights.is_a?(Boolean) != false || raise("Passed value for field obj.auto_highlights is not the expected type, validation failed.")
  obj.auto_highlights_result.nil? || AssemblyAI::Transcripts::AutoHighlightsResult.validate_raw(obj: obj.auto_highlights_result)
  obj.audio_start_from&.is_a?(Integer) != false || raise("Passed value for field obj.audio_start_from is not the expected type, validation failed.")
  obj.audio_end_at&.is_a?(Integer) != false || raise("Passed value for field obj.audio_end_at is not the expected type, validation failed.")
  obj.word_boost&.is_a?(Array) != false || raise("Passed value for field obj.word_boost is not the expected type, validation failed.")
  obj.boost_param&.is_a?(String) != false || raise("Passed value for field obj.boost_param is not the expected type, validation failed.")
  obj.filter_profanity&.is_a?(Boolean) != false || raise("Passed value for field obj.filter_profanity is not the expected type, validation failed.")
  obj.redact_pii.is_a?(Boolean) != false || raise("Passed value for field obj.redact_pii is not the expected type, validation failed.")
  obj.redact_pii_audio&.is_a?(Boolean) != false || raise("Passed value for field obj.redact_pii_audio is not the expected type, validation failed.")
  obj.redact_pii_audio_quality&.is_a?(AssemblyAI::Transcripts::RedactPiiAudioQuality) != false || raise("Passed value for field obj.redact_pii_audio_quality is not the expected type, validation failed.")
  obj.redact_pii_policies&.is_a?(Array) != false || raise("Passed value for field obj.redact_pii_policies is not the expected type, validation failed.")
  obj.redact_pii_sub&.is_a?(AssemblyAI::Transcripts::SubstitutionPolicy) != false || raise("Passed value for field obj.redact_pii_sub is not the expected type, validation failed.")
  obj.speaker_labels&.is_a?(Boolean) != false || raise("Passed value for field obj.speaker_labels is not the expected type, validation failed.")
  obj.speakers_expected&.is_a?(Integer) != false || raise("Passed value for field obj.speakers_expected is not the expected type, validation failed.")
  obj.content_safety&.is_a?(Boolean) != false || raise("Passed value for field obj.content_safety is not the expected type, validation failed.")
  obj.content_safety_labels.nil? || AssemblyAI::Transcripts::ContentSafetyLabelsResult.validate_raw(obj: obj.content_safety_labels)
  obj.iab_categories&.is_a?(Boolean) != false || raise("Passed value for field obj.iab_categories is not the expected type, validation failed.")
  obj.iab_categories_result.nil? || AssemblyAI::Transcripts::TopicDetectionModelResult.validate_raw(obj: obj.iab_categories_result)
  obj.custom_spelling&.is_a?(Array) != false || raise("Passed value for field obj.custom_spelling is not the expected type, validation failed.")
  obj.auto_chapters&.is_a?(Boolean) != false || raise("Passed value for field obj.auto_chapters is not the expected type, validation failed.")
  obj.chapters&.is_a?(Array) != false || raise("Passed value for field obj.chapters is not the expected type, validation failed.")
  obj.summarization.is_a?(Boolean) != false || raise("Passed value for field obj.summarization is not the expected type, validation failed.")
  obj.summary_type&.is_a?(String) != false || raise("Passed value for field obj.summary_type is not the expected type, validation failed.")
  obj.summary_model&.is_a?(String) != false || raise("Passed value for field obj.summary_model is not the expected type, validation failed.")
  obj.summary&.is_a?(String) != false || raise("Passed value for field obj.summary is not the expected type, validation failed.")
  obj.custom_topics&.is_a?(Boolean) != false || raise("Passed value for field obj.custom_topics is not the expected type, validation failed.")
  obj.topics&.is_a?(Array) != false || raise("Passed value for field obj.topics is not the expected type, validation failed.")
  obj.sentiment_analysis&.is_a?(Boolean) != false || raise("Passed value for field obj.sentiment_analysis is not the expected type, validation failed.")
  obj.sentiment_analysis_results&.is_a?(Array) != false || raise("Passed value for field obj.sentiment_analysis_results is not the expected type, validation failed.")
  obj.entity_detection&.is_a?(Boolean) != false || raise("Passed value for field obj.entity_detection is not the expected type, validation failed.")
  obj.entities&.is_a?(Array) != false || raise("Passed value for field obj.entities is not the expected type, validation failed.")
  obj.speech_threshold&.is_a?(Float) != false || raise("Passed value for field obj.speech_threshold is not the expected type, validation failed.")
  obj.throttled&.is_a?(Boolean) != false || raise("Passed value for field obj.throttled is not the expected type, validation failed.")
  obj.error&.is_a?(String) != false || raise("Passed value for field obj.error is not the expected type, validation failed.")
  obj.language_model.is_a?(String) != false || raise("Passed value for field obj.language_model is not the expected type, validation failed.")
  obj.acoustic_model.is_a?(String) != false || raise("Passed value for field obj.acoustic_model is not the expected type, validation failed.")
end
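The checks rely on the idiom value&.is_a?(Type) != false || raise(...). For optional fields, the safe-navigation call returns nil when the value is absent, and nil != false is true, so nil passes; for required fields the &. is omitted, so nil fails the is_a? check and raises. A small standalone sketch of the idiom (the helper names here are illustrative, not part of the SDK):

```ruby
# Nil-tolerant validation, as used for optional fields in validate_raw.
def check_optional(value)
  value&.is_a?(Integer) != false || raise("not an Integer")
  true
end

# Strict validation, as used for required fields: nil is rejected.
def check_required(value)
  value.is_a?(Integer) != false || raise("not an Integer")
  true
end

check_optional(nil) # passes: nil&.is_a? evaluates to nil, and nil != false
check_optional(42)  # passes
check_required(42)  # passes
begin
  check_optional("x") # "x".is_a?(Integer) is false, so the raise fires
rescue RuntimeError => e
  puts e.message # prints not an Integer
end
```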
Instance Method Details
#to_json(*_args) ⇒ String
Serialize an instance of Transcript to a JSON object
# File 'lib/assemblyai/transcripts/types/transcript.rb', line 646

def to_json(*_args)
  @_field_set&.to_json
end
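Serialization simply delegates to the internal field set captured at construction time. A minimal stdlib sketch of that behavior, with a plain Hash standing in for the field set:

```ruby
require "json"

# Sketch: to_json serializes the stored field set; a Hash#to_json call
# produces the same JSON object shape the API returned.
field_set = { "id" => "t1", "status" => "completed" }
json = field_set.to_json
puts json # prints {"id":"t1","status":"completed"}

# Parsing the output recovers the original fields.
round_trip = JSON.parse(json)
```

The safe navigation in @_field_set&.to_json means an instance with no captured fields serializes to nil rather than raising.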