Class: Vapi::Assistant
- Inherits: Object
- Defined in: lib/vapi_server_sdk/types/assistant.rb
Constant Summary
- OMIT = Object.new
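OMIT is a sentinel object distinct from nil, so the serializer can tell "field not provided" apart from "field explicitly set to nil". A minimal sketch of the pattern, assuming the constant is reachable as Vapi::Assistant::OMIT (illustrative, not the gem's internals verbatim):

  require "json"

  # Entries left at OMIT are dropped before serialization, so they never
  # appear in the JSON payload, while explicit nils are kept.
  fields = { "name": "Support Assistant", "metadata": Vapi::Assistant::OMIT, "serverUrl": nil }
  fields.reject { |_k, v| v == Vapi::Assistant::OMIT }.to_json
  #=> "{\"name\":\"Support Assistant\",\"serverUrl\":null}"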
Instance Attribute Summary
-
#additional_properties ⇒ OpenStruct
readonly
Additional properties unmapped to the current class definition.
-
#analysis_plan ⇒ Vapi::AnalysisPlan
readonly
This is the plan for analysis of assistant’s calls.
-
#artifact_plan ⇒ Vapi::ArtifactPlan
readonly
This is the plan for artifacts generated during assistant’s calls.
-
#backchanneling_enabled ⇒ Boolean
readonly
This determines whether the model says ‘mhmm’, ‘ahem’ etc.
-
#background_denoising_enabled ⇒ Boolean
readonly
This enables filtering of noise and background speech while the user is talking.
-
#background_sound ⇒ Vapi::AssistantBackgroundSound
readonly
This is the background sound in the call.
-
#client_messages ⇒ Array<Vapi::AssistantClientMessagesItem>
readonly
These are the messages that will be sent to your Client SDKs.
-
#created_at ⇒ DateTime
readonly
This is the ISO 8601 date-time string of when the assistant was created.
-
#credential_ids ⇒ Array<String>
readonly
These are the credentials that will be used for the assistant calls.
-
#end_call_message ⇒ String
readonly
This is the message that the assistant will say if it ends the call.
-
#end_call_phrases ⇒ Array<String>
readonly
This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up.
-
#first_message ⇒ String
readonly
This is the first message that the assistant will say.
-
#first_message_mode ⇒ Vapi::AssistantFirstMessageMode
readonly
This is the mode for the first message.
-
#hipaa_enabled ⇒ Boolean
readonly
When this is enabled, no logs, recordings, or transcriptions will be stored.
-
#id ⇒ String
readonly
This is the unique identifier for the assistant.
-
#max_duration_seconds ⇒ Float
readonly
This is the maximum number of seconds that the call will last.
-
#message_plan ⇒ Vapi::MessagePlan
readonly
This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`.
-
#metadata ⇒ Hash{String => Object}
readonly
This is for metadata you want to store on the assistant.
-
#model ⇒ Vapi::AssistantModel
readonly
These are the options for the assistant’s LLM.
-
#model_output_in_messages_enabled ⇒ Boolean
readonly
This determines whether the model’s output is used in conversation history rather than the transcription of assistant’s speech.
-
#monitor_plan ⇒ Vapi::MonitorPlan
readonly
This is the plan for real-time monitoring of the assistant’s calls.
-
#name ⇒ String
readonly
This is the name of the assistant.
-
#org_id ⇒ String
readonly
This is the unique identifier for the org that this assistant belongs to.
-
#server_messages ⇒ Array<Vapi::AssistantServerMessagesItem>
readonly
These are the messages that will be sent to your Server URL.
-
#server_url ⇒ String
readonly
This is the URL Vapi will communicate with via HTTP GET and POST Requests.
-
#server_url_secret ⇒ String
readonly
This is the secret you can set that Vapi will send with every request to your server.
-
#silence_timeout_seconds ⇒ Float
readonly
How many seconds of silence to wait before ending the call.
-
#start_speaking_plan ⇒ Vapi::StartSpeakingPlan
readonly
This is the plan for when the assistant should start talking.
-
#stop_speaking_plan ⇒ Vapi::StopSpeakingPlan
readonly
This is the plan for when assistant should stop talking on customer interruption.
-
#transcriber ⇒ Vapi::AssistantTranscriber
readonly
These are the options for the assistant’s transcriber.
-
#transport_configurations ⇒ Array<Vapi::TransportConfigurationTwilio>
readonly
These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio.
-
#updated_at ⇒ DateTime
readonly
This is the ISO 8601 date-time string of when the assistant was last updated.
-
#voice ⇒ Vapi::AssistantVoice
readonly
These are the options for the assistant’s voice.
-
#voicemail_detection ⇒ Vapi::TwilioVoicemailDetection
readonly
These are the settings to configure or disable voicemail detection.
-
#voicemail_message ⇒ String
readonly
This is the message that the assistant will say if the call is forwarded to voicemail.
Class Method Summary
-
.from_json(json_object:) ⇒ Vapi::Assistant
Deserialize a JSON object to an instance of Assistant.
-
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.
Instance Method Summary
- #initialize(id:, org_id:, created_at:, updated_at:, transcriber: OMIT, model: OMIT, voice: OMIT, first_message_mode: OMIT, hipaa_enabled: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, backchanneling_enabled: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, name: OMIT, first_message: OMIT, voicemail_detection: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, metadata: OMIT, server_url: OMIT, server_url_secret: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, additional_properties: nil) ⇒ Vapi::Assistant constructor
-
#to_json(*_args) ⇒ String
Serialize an instance of Assistant to a JSON object.
Constructor Details
#initialize(id:, org_id:, created_at:, updated_at:, transcriber: OMIT, model: OMIT, voice: OMIT, first_message_mode: OMIT, hipaa_enabled: OMIT, client_messages: OMIT, server_messages: OMIT, silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT, backchanneling_enabled: OMIT, background_denoising_enabled: OMIT, model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, name: OMIT, first_message: OMIT, voicemail_detection: OMIT, voicemail_message: OMIT, end_call_message: OMIT, end_call_phrases: OMIT, metadata: OMIT, server_url: OMIT, server_url_secret: OMIT, analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT, stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, additional_properties: nil) ⇒ Vapi::Assistant
# File 'lib/vapi_server_sdk/types/assistant.rb', line 302

def initialize(id:, org_id:, created_at:, updated_at:, transcriber: OMIT, model: OMIT, voice: OMIT,
               first_message_mode: OMIT, hipaa_enabled: OMIT, client_messages: OMIT, server_messages: OMIT,
               silence_timeout_seconds: OMIT, max_duration_seconds: OMIT, background_sound: OMIT,
               backchanneling_enabled: OMIT, background_denoising_enabled: OMIT,
               model_output_in_messages_enabled: OMIT, transport_configurations: OMIT, name: OMIT,
               first_message: OMIT, voicemail_detection: OMIT, voicemail_message: OMIT, end_call_message: OMIT,
               end_call_phrases: OMIT, metadata: OMIT, server_url: OMIT, server_url_secret: OMIT,
               analysis_plan: OMIT, artifact_plan: OMIT, message_plan: OMIT, start_speaking_plan: OMIT,
               stop_speaking_plan: OMIT, monitor_plan: OMIT, credential_ids: OMIT, additional_properties: nil)
  @transcriber = transcriber if transcriber != OMIT
  @model = model if model != OMIT
  @voice = voice if voice != OMIT
  @first_message_mode = first_message_mode if first_message_mode != OMIT
  @hipaa_enabled = hipaa_enabled if hipaa_enabled != OMIT
  @client_messages = client_messages if client_messages != OMIT
  @server_messages = server_messages if server_messages != OMIT
  @silence_timeout_seconds = silence_timeout_seconds if silence_timeout_seconds != OMIT
  @max_duration_seconds = max_duration_seconds if max_duration_seconds != OMIT
  @background_sound = background_sound if background_sound != OMIT
  @backchanneling_enabled = backchanneling_enabled if backchanneling_enabled != OMIT
  @background_denoising_enabled = background_denoising_enabled if background_denoising_enabled != OMIT
  @model_output_in_messages_enabled = model_output_in_messages_enabled if model_output_in_messages_enabled != OMIT
  @transport_configurations = transport_configurations if transport_configurations != OMIT
  @name = name if name != OMIT
  @first_message = first_message if first_message != OMIT
  @voicemail_detection = voicemail_detection if voicemail_detection != OMIT
  @voicemail_message = voicemail_message if voicemail_message != OMIT
  @end_call_message = end_call_message if end_call_message != OMIT
  @end_call_phrases = end_call_phrases if end_call_phrases != OMIT
  @metadata = metadata if metadata != OMIT
  @server_url = server_url if server_url != OMIT
  @server_url_secret = server_url_secret if server_url_secret != OMIT
  @analysis_plan = analysis_plan if analysis_plan != OMIT
  @artifact_plan = artifact_plan if artifact_plan != OMIT
  @message_plan = message_plan if message_plan != OMIT
  @start_speaking_plan = start_speaking_plan if start_speaking_plan != OMIT
  @stop_speaking_plan = stop_speaking_plan if stop_speaking_plan != OMIT
  @monitor_plan = monitor_plan if monitor_plan != OMIT
  @credential_ids = credential_ids if credential_ids != OMIT
  @id = id
  @org_id = org_id
  @created_at = created_at
  @updated_at = updated_at
  @additional_properties = additional_properties
  @_field_set = {
    "transcriber": transcriber,
    "model": model,
    "voice": voice,
    "firstMessageMode": first_message_mode,
    "hipaaEnabled": hipaa_enabled,
    "clientMessages": client_messages,
    "serverMessages": server_messages,
    "silenceTimeoutSeconds": silence_timeout_seconds,
    "maxDurationSeconds": max_duration_seconds,
    "backgroundSound": background_sound,
    "backchannelingEnabled": backchanneling_enabled,
    "backgroundDenoisingEnabled": background_denoising_enabled,
    "modelOutputInMessagesEnabled": model_output_in_messages_enabled,
    "transportConfigurations": transport_configurations,
    "name": name,
    "firstMessage": first_message,
    "voicemailDetection": voicemail_detection,
    "voicemailMessage": voicemail_message,
    "endCallMessage": end_call_message,
    "endCallPhrases": end_call_phrases,
    "metadata": metadata,
    "serverUrl": server_url,
    "serverUrlSecret": server_url_secret,
    "analysisPlan": analysis_plan,
    "artifactPlan": artifact_plan,
    "messagePlan": message_plan,
    "startSpeakingPlan": start_speaking_plan,
    "stopSpeakingPlan": stop_speaking_plan,
    "monitorPlan": monitor_plan,
    "credentialIds": credential_ids,
    "id": id,
    "orgId": org_id,
    "createdAt": created_at,
    "updatedAt": updated_at
  }.reject do |_k, v|
    v == OMIT
  end
end
Instance Attribute Details
#additional_properties ⇒ OpenStruct (readonly)
Returns Additional properties unmapped to the current class definition.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 176

def additional_properties
  @additional_properties
end
#analysis_plan ⇒ Vapi::AnalysisPlan (readonly)
Returns This is the plan for analysis of assistant's calls. Stored in `call.analysis`.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 123

def analysis_plan
  @analysis_plan
end
#artifact_plan ⇒ Vapi::ArtifactPlan (readonly)
Returns This is the plan for artifacts generated during assistant's calls. Stored in `call.artifact`. Note: `recordingEnabled` is currently at the root level. It will be moved to `artifactPlan` in the future, but will remain backwards compatible.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 128

def artifact_plan
  @artifact_plan
end
#backchanneling_enabled ⇒ Boolean (readonly)
Returns This determines whether the model says 'mhmm', 'ahem' etc. while user is speaking. Default `false` while in beta. @default false.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 67

def backchanneling_enabled
  @backchanneling_enabled
end
#background_denoising_enabled ⇒ Boolean (readonly)
Returns This enables filtering of noise and background speech while the user is talking. Default `false` while in beta. @default false.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 71

def background_denoising_enabled
  @background_denoising_enabled
end
#background_sound ⇒ Vapi::AssistantBackgroundSound (readonly)
Returns This is the background sound in the call. Default for phone calls is ‘office’ and default for web calls is ‘off’.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 62

def background_sound
  @background_sound
end
#client_messages ⇒ Array<Vapi::AssistantClientMessagesItem> (readonly)
Returns These are the messages that will be sent to your Client SDKs. The default set includes speech-update, status-update, transcript, tool-calls, user-interrupted, and voice-input, among others. You can check the shape of the messages in the ClientMessage schema.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 48

def client_messages
  @client_messages
end
#created_at ⇒ DateTime (readonly)
Returns This is the ISO 8601 date-time string of when the assistant was created.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 172

def created_at
  @created_at
end
#credential_ids ⇒ Array<String> (readonly)
Returns These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 166

def credential_ids
  @credential_ids
end
#end_call_message ⇒ String (readonly)
Returns This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 104

def end_call_message
  @end_call_message
end
#end_call_phrases ⇒ Array<String> (readonly)
Returns This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 107

def end_call_phrases
  @end_call_phrases
end
#first_message ⇒ String (readonly)
Returns This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, assistant will wait for user to speak and use the model to respond once they speak.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 89

def first_message
  @first_message
end
#first_message_mode ⇒ Vapi::AssistantFirstMessageMode (readonly)
Returns This is the mode for the first message. Default is 'assistant-speaks-first'. Use:
- 'assistant-speaks-first' to have the assistant speak first.
- 'assistant-waits-for-user' to have the assistant wait for the user to speak first.
- 'assistant-speaks-first-with-model-generated-message' to have the assistant speak first with a message generated by the model based on the conversation state (`assistant.model.messages` at call start, `call.messages` at squad transfer points).
@default 'assistant-speaks-first'.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 40

def first_message_mode
  @first_message_mode
end
#hipaa_enabled ⇒ Boolean (readonly)
Returns When this is enabled, no logs, recordings, or transcriptions will be stored. At the end of the call, you will still receive an end-of-call-report message to store on your server. Defaults to false.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 44

def hipaa_enabled
  @hipaa_enabled
end
#id ⇒ String (readonly)
Returns This is the unique identifier for the assistant.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 168

def id
  @id
end
#max_duration_seconds ⇒ Float (readonly)
Returns This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes).
# File 'lib/vapi_server_sdk/types/assistant.rb', line 59

def max_duration_seconds
  @max_duration_seconds
end
#message_plan ⇒ Vapi::MessagePlan (readonly)
Returns This is the plan for static predefined messages that can be spoken by the assistant during the call, like `idleMessages`. Note: `firstMessage`, `voicemailMessage`, and `endCallMessage` are currently at the root level. They will be moved to `messagePlan` in the future, but will remain backwards compatible.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 134

def message_plan
  @message_plan
end
#metadata ⇒ Hash{String => Object} (readonly)
Returns This is for metadata you want to store on the assistant.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 109

def metadata
  @metadata
end
#model ⇒ Vapi::AssistantModel (readonly)
Returns These are the options for the assistant’s LLM.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 27

def model
  @model
end
#model_output_in_messages_enabled ⇒ Boolean (readonly)
Returns This determines whether the model's output is used in conversation history rather than the transcription of assistant's speech. Default `false` while in beta. @default false.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 76

def model_output_in_messages_enabled
  @model_output_in_messages_enabled
end
#monitor_plan ⇒ Vapi::MonitorPlan (readonly)
Returns This is the plan for real-time monitoring of the assistant's calls. Usage:
- To enable live listening of the assistant's calls, set `monitorPlan.listenEnabled` to `true`.
- To enable live control of the assistant's calls, set `monitorPlan.controlEnabled` to `true`.
Note: `serverMessages`, `clientMessages`, `serverUrl` and `serverUrlSecret` are currently at the root level but will be moved to `monitorPlan` in the future. Will remain backwards compatible.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 162

def monitor_plan
  @monitor_plan
end
#name ⇒ String (readonly)
Returns This is the name of the assistant. This is required when you want to transfer between assistants in a call.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 84

def name
  @name
end
#org_id ⇒ String (readonly)
Returns This is the unique identifier for the org that this assistant belongs to.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 170

def org_id
  @org_id
end
#server_messages ⇒ Array<Vapi::AssistantServerMessagesItem> (readonly)
Returns These are the messages that will be sent to your Server URL. The default set includes speech-update, status-update, tool-calls, transfer-destination-request, and user-interrupted, among others. You can check the shape of the messages in the ServerMessage schema.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 52

def server_messages
  @server_messages
end
#server_url ⇒ String (readonly)
Returns This is the URL Vapi will communicate with via HTTP GET and POST Requests. This is used for retrieving context, function calling, and end-of-call reports. All requests will be sent with the call object among other things relevant to that message. You can find more details in the Server URL documentation. This overrides the serverUrl set on the org and the phoneNumber. Order of precedence: tool.server.url > assistant.serverUrl > phoneNumber.serverUrl > org.serverUrl.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 117

def server_url
  @server_url
end
#server_url_secret ⇒ String (readonly)
Returns This is the secret you can set that Vapi will send with every request to your server. Will be sent as a header called x-vapi-secret. Same precedence logic as serverUrl.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 121

def server_url_secret
  @server_url_secret
end
#silence_timeout_seconds ⇒ Float (readonly)
Returns How many seconds of silence to wait before ending the call. Defaults to 30. @default 30.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 55

def silence_timeout_seconds
  @silence_timeout_seconds
end
#start_speaking_plan ⇒ Vapi::StartSpeakingPlan (readonly)
Returns This is the plan for when the assistant should start talking. You should configure this if you're running into these issues:
- The assistant is too slow to start talking after the customer is done speaking.
- The assistant is too fast to start talking after the customer is done speaking.
- The assistant is so fast that it's actually interrupting the customer.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 142

def start_speaking_plan
  @start_speaking_plan
end
#stop_speaking_plan ⇒ Vapi::StopSpeakingPlan (readonly)
Returns This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you're running into these issues:
- The assistant is too slow to recognize the customer's interruption.
- The assistant is too fast to recognize the customer's interruption.
- The assistant is getting interrupted by phrases that are just acknowledgments.
- The assistant is getting interrupted by background noises.
- The assistant is not properly stopping; it starts talking right after getting interrupted.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 152

def stop_speaking_plan
  @stop_speaking_plan
end
#transcriber ⇒ Vapi::AssistantTranscriber (readonly)
Returns These are the options for the assistant’s transcriber.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 25

def transcriber
  @transcriber
end
#transport_configurations ⇒ Array<Vapi::TransportConfigurationTwilio> (readonly)
Returns These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 81

def transport_configurations
  @transport_configurations
end
#updated_at ⇒ DateTime (readonly)
Returns This is the ISO 8601 date-time string of when the assistant was last updated.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 174

def updated_at
  @updated_at
end
#voice ⇒ Vapi::AssistantVoice (readonly)
Returns These are the options for the assistant’s voice.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 29

def voice
  @voice
end
#voicemail_detection ⇒ Vapi::TwilioVoicemailDetection (readonly)
Returns These are the settings to configure or disable voicemail detection. Alternatively, voicemail detection can be configured using the model.tools=[VoicemailTool]. This uses Twilio's built-in detection while the VoicemailTool relies on the model to detect if a voicemail was reached. You can use neither of them, one of them, or both of them. By default, Twilio built-in detection is enabled while VoicemailTool is not.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 97

def voicemail_detection
  @voicemail_detection
end
#voicemail_message ⇒ String (readonly)
Returns This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 101

def voicemail_message
  @voicemail_message
end
Class Method Details
.from_json(json_object:) ⇒ Vapi::Assistant
Deserialize a JSON object to an instance of Assistant
# File 'lib/vapi_server_sdk/types/assistant.rb', line 383

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  if parsed_json["transcriber"].nil?
    transcriber = nil
  else
    transcriber = parsed_json["transcriber"].to_json
    transcriber = Vapi::AssistantTranscriber.from_json(json_object: transcriber)
  end
  if parsed_json["model"].nil?
    model = nil
  else
    model = parsed_json["model"].to_json
    model = Vapi::AssistantModel.from_json(json_object: model)
  end
  if parsed_json["voice"].nil?
    voice = nil
  else
    voice = parsed_json["voice"].to_json
    voice = Vapi::AssistantVoice.from_json(json_object: voice)
  end
  first_message_mode = parsed_json["firstMessageMode"]
  hipaa_enabled = parsed_json["hipaaEnabled"]
  client_messages = parsed_json["clientMessages"]
  server_messages = parsed_json["serverMessages"]
  silence_timeout_seconds = parsed_json["silenceTimeoutSeconds"]
  max_duration_seconds = parsed_json["maxDurationSeconds"]
  background_sound = parsed_json["backgroundSound"]
  backchanneling_enabled = parsed_json["backchannelingEnabled"]
  background_denoising_enabled = parsed_json["backgroundDenoisingEnabled"]
  model_output_in_messages_enabled = parsed_json["modelOutputInMessagesEnabled"]
  transport_configurations = parsed_json["transportConfigurations"]&.map do |item|
    item = item.to_json
    Vapi::TransportConfigurationTwilio.from_json(json_object: item)
  end
  name = parsed_json["name"]
  first_message = parsed_json["firstMessage"]
  if parsed_json["voicemailDetection"].nil?
    voicemail_detection = nil
  else
    voicemail_detection = parsed_json["voicemailDetection"].to_json
    voicemail_detection = Vapi::TwilioVoicemailDetection.from_json(json_object: voicemail_detection)
  end
  voicemail_message = parsed_json["voicemailMessage"]
  end_call_message = parsed_json["endCallMessage"]
  end_call_phrases = parsed_json["endCallPhrases"]
  metadata = parsed_json["metadata"]
  server_url = parsed_json["serverUrl"]
  server_url_secret = parsed_json["serverUrlSecret"]
  if parsed_json["analysisPlan"].nil?
    analysis_plan = nil
  else
    analysis_plan = parsed_json["analysisPlan"].to_json
    analysis_plan = Vapi::AnalysisPlan.from_json(json_object: analysis_plan)
  end
  if parsed_json["artifactPlan"].nil?
    artifact_plan = nil
  else
    artifact_plan = parsed_json["artifactPlan"].to_json
    artifact_plan = Vapi::ArtifactPlan.from_json(json_object: artifact_plan)
  end
  if parsed_json["messagePlan"].nil?
    message_plan = nil
  else
    message_plan = parsed_json["messagePlan"].to_json
    message_plan = Vapi::MessagePlan.from_json(json_object: message_plan)
  end
  if parsed_json["startSpeakingPlan"].nil?
    start_speaking_plan = nil
  else
    start_speaking_plan = parsed_json["startSpeakingPlan"].to_json
    start_speaking_plan = Vapi::StartSpeakingPlan.from_json(json_object: start_speaking_plan)
  end
  if parsed_json["stopSpeakingPlan"].nil?
    stop_speaking_plan = nil
  else
    stop_speaking_plan = parsed_json["stopSpeakingPlan"].to_json
    stop_speaking_plan = Vapi::StopSpeakingPlan.from_json(json_object: stop_speaking_plan)
  end
  if parsed_json["monitorPlan"].nil?
    monitor_plan = nil
  else
    monitor_plan = parsed_json["monitorPlan"].to_json
    monitor_plan = Vapi::MonitorPlan.from_json(json_object: monitor_plan)
  end
  credential_ids = parsed_json["credentialIds"]
  id = parsed_json["id"]
  org_id = parsed_json["orgId"]
  created_at = (DateTime.parse(parsed_json["createdAt"]) unless parsed_json["createdAt"].nil?)
  updated_at = (DateTime.parse(parsed_json["updatedAt"]) unless parsed_json["updatedAt"].nil?)
  new(
    transcriber: transcriber,
    model: model,
    voice: voice,
    first_message_mode: first_message_mode,
    hipaa_enabled: hipaa_enabled,
    client_messages: client_messages,
    server_messages: server_messages,
    silence_timeout_seconds: silence_timeout_seconds,
    max_duration_seconds: max_duration_seconds,
    background_sound: background_sound,
    backchanneling_enabled: backchanneling_enabled,
    background_denoising_enabled: background_denoising_enabled,
    model_output_in_messages_enabled: model_output_in_messages_enabled,
    transport_configurations: transport_configurations,
    name: name,
    first_message: first_message,
    voicemail_detection: voicemail_detection,
    voicemail_message: voicemail_message,
    end_call_message: end_call_message,
    end_call_phrases: end_call_phrases,
    metadata: metadata,
    server_url: server_url,
    server_url_secret: server_url_secret,
    analysis_plan: analysis_plan,
    artifact_plan: artifact_plan,
    message_plan: message_plan,
    start_speaking_plan: start_speaking_plan,
    stop_speaking_plan: stop_speaking_plan,
    monitor_plan: monitor_plan,
    credential_ids: credential_ids,
    id: id,
    org_id: org_id,
    created_at: created_at,
    updated_at: updated_at,
    additional_properties: struct
  )
end
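A small deserialization sketch; the payload below is a hypothetical minimum, and real API responses carry many more of the camelCase fields listed above:

  require "json"

  payload = {
    "id" => "assistant_123",
    "orgId" => "org_456",
    "createdAt" => "2024-01-01T00:00:00Z",
    "updatedAt" => "2024-01-02T00:00:00Z",
    "name" => "Support Assistant"
  }.to_json

  assistant = Vapi::Assistant.from_json(json_object: payload)
  assistant.name        #=> "Support Assistant"
  assistant.created_at  #=> DateTime for 2024-01-01T00:00:00+00:00
  # The full parsed payload is also kept as an OpenStruct in additional_properties.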
.validate_raw(obj:) ⇒ Void
Leveraged for Union-type generation, validate_raw attempts to parse the given
hash and check each field's type against the current object's property
definitions.
# File 'lib/vapi_server_sdk/types/assistant.rb', line 525

def self.validate_raw(obj:)
  obj.transcriber.nil? || Vapi::AssistantTranscriber.validate_raw(obj: obj.transcriber)
  obj.model.nil? || Vapi::AssistantModel.validate_raw(obj: obj.model)
  obj.voice.nil? || Vapi::AssistantVoice.validate_raw(obj: obj.voice)
  obj.first_message_mode&.is_a?(Vapi::AssistantFirstMessageMode) != false || raise("Passed value for field obj.first_message_mode is not the expected type, validation failed.")
  obj.hipaa_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.hipaa_enabled is not the expected type, validation failed.")
  obj.client_messages&.is_a?(Array) != false || raise("Passed value for field obj.client_messages is not the expected type, validation failed.")
  obj.server_messages&.is_a?(Array) != false || raise("Passed value for field obj.server_messages is not the expected type, validation failed.")
  obj.silence_timeout_seconds&.is_a?(Float) != false || raise("Passed value for field obj.silence_timeout_seconds is not the expected type, validation failed.")
  obj.max_duration_seconds&.is_a?(Float) != false || raise("Passed value for field obj.max_duration_seconds is not the expected type, validation failed.")
  obj.background_sound&.is_a?(Vapi::AssistantBackgroundSound) != false || raise("Passed value for field obj.background_sound is not the expected type, validation failed.")
  obj.backchanneling_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.backchanneling_enabled is not the expected type, validation failed.")
  obj.background_denoising_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.background_denoising_enabled is not the expected type, validation failed.")
  obj.model_output_in_messages_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.model_output_in_messages_enabled is not the expected type, validation failed.")
  obj.transport_configurations&.is_a?(Array) != false || raise("Passed value for field obj.transport_configurations is not the expected type, validation failed.")
  obj.name&.is_a?(String) != false || raise("Passed value for field obj.name is not the expected type, validation failed.")
  obj.first_message&.is_a?(String) != false || raise("Passed value for field obj.first_message is not the expected type, validation failed.")
  obj.voicemail_detection.nil? || Vapi::TwilioVoicemailDetection.validate_raw(obj: obj.voicemail_detection)
  obj.voicemail_message&.is_a?(String) != false || raise("Passed value for field obj.voicemail_message is not the expected type, validation failed.")
  obj.end_call_message&.is_a?(String) != false || raise("Passed value for field obj.end_call_message is not the expected type, validation failed.")
  obj.end_call_phrases&.is_a?(Array) != false || raise("Passed value for field obj.end_call_phrases is not the expected type, validation failed.")
  obj.metadata&.is_a?(Hash) != false || raise("Passed value for field obj.metadata is not the expected type, validation failed.")
  obj.server_url&.is_a?(String) != false || raise("Passed value for field obj.server_url is not the expected type, validation failed.")
  obj.server_url_secret&.is_a?(String) != false || raise("Passed value for field obj.server_url_secret is not the expected type, validation failed.")
  obj.analysis_plan.nil? || Vapi::AnalysisPlan.validate_raw(obj: obj.analysis_plan)
  obj.artifact_plan.nil? || Vapi::ArtifactPlan.validate_raw(obj: obj.artifact_plan)
  obj.message_plan.nil? || Vapi::MessagePlan.validate_raw(obj: obj.message_plan)
  obj.start_speaking_plan.nil? || Vapi::StartSpeakingPlan.validate_raw(obj: obj.start_speaking_plan)
  obj.stop_speaking_plan.nil? || Vapi::StopSpeakingPlan.validate_raw(obj: obj.stop_speaking_plan)
  obj.monitor_plan.nil? || Vapi::MonitorPlan.validate_raw(obj: obj.monitor_plan)
  obj.credential_ids&.is_a?(Array) != false || raise("Passed value for field obj.credential_ids is not the expected type, validation failed.")
  obj.id.is_a?(String) != false || raise("Passed value for field obj.id is not the expected type, validation failed.")
  obj.org_id.is_a?(String) != false || raise("Passed value for field obj.org_id is not the expected type, validation failed.")
  obj.created_at.is_a?(DateTime) != false || raise("Passed value for field obj.created_at is not the expected type, validation failed.")
  obj.updated_at.is_a?(DateTime) != false || raise("Passed value for field obj.updated_at is not the expected type, validation failed.")
end
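A sketch of how validate_raw behaves, assuming an OpenStruct can stand in for a raw object with Assistant-shaped readers; a present field of the wrong type triggers the raise shown in the source above:

  require "date"
  require "ostruct"

  raw = OpenStruct.new(
    id: "assistant_123",        # illustrative values
    org_id: "org_456",
    created_at: DateTime.now,
    updated_at: DateTime.now,
    name: 42                    # wrong type on purpose
  )

  begin
    Vapi::Assistant.validate_raw(obj: raw)
  rescue StandardError => e
    e.message  #=> "Passed value for field obj.name is not the expected type, validation failed."
  end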
Instance Method Details
#to_json(*_args) ⇒ String
Serialize an instance of Assistant to a JSON object
# File 'lib/vapi_server_sdk/types/assistant.rb', line 515

def to_json(*_args)
  @_field_set&.to_json
end
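A short round-trip sketch with illustrative values: only fields that were actually provided end up in the serialized output, because OMIT-valued entries are rejected from the field set:

  require "date"
  require "json"

  assistant = Vapi::Assistant.new(
    id: "assistant_123",
    org_id: "org_456",
    created_at: DateTime.parse("2024-01-01T00:00:00Z"),
    updated_at: DateTime.parse("2024-01-02T00:00:00Z"),
    name: "Support Assistant"
  )

  parsed = JSON.parse(assistant.to_json)
  parsed["name"]                    #=> "Support Assistant"
  parsed.key?("voicemailMessage")   #=> false, the field was never set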