Class: Vapi::CustomLlmModel

Inherits:
  Object
Defined in:
lib/vapi_server_sdk/types/custom_llm_model.rb

Constant Summary collapse

OMIT =
Object.new

Instance Attribute Summary collapse

Class Method Summary collapse

Instance Method Summary collapse

Constructor Details

#initialize(url:, model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, metadata_send_mode: OMIT, temperature: OMIT, knowledge_base: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::CustomLlmModel

Parameters:

  • messages (Array<Vapi::OpenAiMessage>) (defaults to: OMIT)

    This is the starting state for the conversation.

  • tools (Array<Vapi::CustomLlmModelToolsItem>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

  • tool_ids (Array<String>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

  • metadata_send_mode (Vapi::CustomLlmModelMetadataSendMode) (defaults to: OMIT)

    This determines whether metadata is sent in requests to the custom provider.

    • `off` will not send any metadata. Payload will look like `{ messages }`.

    • `variable` will send `assistant.metadata` as a variable on the payload.
      Payload will look like `{ messages, metadata }`.

    • `destructured` will send `assistant.metadata` fields directly on the payload.
      Payload will look like `{ messages, ...metadata }`.

    Further, `variable` and `destructured` will send `call`, `phoneNumber`, and `customer` objects in the payload. Default is `variable`.

  • url (String)

    This is the URL we’ll use for the OpenAI client’s `baseURL`. Ex. openrouter.ai/api/v1

  • model (String)

    This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b

  • temperature (Float) (defaults to: OMIT)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

  • knowledge_base (Vapi::KnowledgeBase) (defaults to: OMIT)

    These are the options for the knowledge base.

  • max_tokens (Float) (defaults to: OMIT)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

  • emotion_recognition_enabled (Boolean) (defaults to: OMIT)

    This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user’s emotion from text. @default false

  • num_fast_turns (Float) (defaults to: OMIT)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0

  • additional_properties (OpenStruct) (defaults to: nil)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 101

def initialize(url:, model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, metadata_send_mode: OMIT,
               temperature: OMIT, knowledge_base: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil)
  @messages = messages if messages != OMIT
  @tools = tools if tools != OMIT
  @tool_ids = tool_ids if tool_ids != OMIT
  @metadata_send_mode = metadata_send_mode if metadata_send_mode != OMIT
  @url = url
  @model = model
  @temperature = temperature if temperature != OMIT
  @knowledge_base = knowledge_base if knowledge_base != OMIT
  @max_tokens = max_tokens if max_tokens != OMIT
  @emotion_recognition_enabled = emotion_recognition_enabled if emotion_recognition_enabled != OMIT
  @num_fast_turns = num_fast_turns if num_fast_turns != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "messages": messages,
    "tools": tools,
    "toolIds": tool_ids,
    "metadataSendMode": metadata_send_mode,
    "url": url,
    "model": model,
    "temperature": temperature,
    "knowledgeBase": knowledge_base,
    "maxTokens": max_tokens,
    "emotionRecognitionEnabled": emotion_recognition_enabled,
    "numFastTurns": num_fast_turns
  }.reject do |_k, v|
    v == OMIT
  end
end
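The constructor above relies on a sentinel object (`OMIT = Object.new`) rather than `nil` defaults, so it can tell "argument not passed" apart from an explicit `nil` and drop absent fields from the serialized field set entirely. A minimal standalone sketch of that pattern (the class and field names here are illustrative, not the actual SDK class):

```ruby
# Sketch of the OMIT sentinel pattern: a unique Object instance marks
# "argument not passed", so optional fields can be dropped from the
# payload while an explicit nil is still serialized.
class SketchModel
  OMIT = Object.new

  attr_reader :field_set

  def initialize(url:, temperature: OMIT, max_tokens: OMIT)
    @url = url
    # Build the full field set, then drop every entry still holding the sentinel.
    @field_set = {
      "url": url,
      "temperature": temperature,
      "maxTokens": max_tokens
    }.reject { |_k, v| v == OMIT }
  end
end

with_temp = SketchModel.new(url: "https://example.test/v1", temperature: 0.7)
no_opts   = SketchModel.new(url: "https://example.test/v1")
with_nil  = SketchModel.new(url: "https://example.test/v1", temperature: nil)
```

Here `with_temp.field_set` contains `:temperature`, `no_opts.field_set` omits it, and `with_nil.field_set` keeps it with a `nil` value, which is exactly the distinction a plain `nil` default could not express.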

Instance Attribute Details

#additional_properties ⇒ OpenStruct (readonly)

Returns Additional properties unmapped to the current class definition.

Returns:

  • (OpenStruct)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 58

def additional_properties
  @additional_properties
end

#emotion_recognition_enabled ⇒ Boolean (readonly)

Returns This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user’s emotion from text. @default false.

Returns:

  • (Boolean)

    This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Default is `false` because the model is usually good at understanding the user’s emotion from text. @default false



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 50

def emotion_recognition_enabled
  @emotion_recognition_enabled
end

#knowledge_base ⇒ Vapi::KnowledgeBase (readonly)

Returns These are the options for the knowledge base.

Returns:



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 41

def knowledge_base
  @knowledge_base
end

#max_tokens ⇒ Float (readonly)

Returns This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

Returns:

  • (Float)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 44

def max_tokens
  @max_tokens
end

#messages ⇒ Array<Vapi::OpenAiMessage> (readonly)

Returns This is the starting state for the conversation.

Returns:



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 13

def messages
  @messages
end

#metadata_send_mode ⇒ Vapi::CustomLlmModelMetadataSendMode (readonly)

Returns This determines whether metadata is sent in requests to the custom provider.

  • `off` will not send any metadata. Payload will look like `{ messages }`.

  • `variable` will send `assistant.metadata` as a variable on the payload.
    Payload will look like `{ messages, metadata }`.

  • `destructured` will send `assistant.metadata` fields directly on the payload.
    Payload will look like `{ messages, ...metadata }`.

Further, `variable` and `destructured` will send `call`, `phoneNumber`, and `customer` objects in the payload. Default is `variable`.

Returns:

  • (Vapi::CustomLlmModelMetadataSendMode)

    This determines whether metadata is sent in requests to the custom provider.

    • `off` will not send any metadata. Payload will look like `{ messages }`.

    • `variable` will send `assistant.metadata` as a variable on the payload.
      Payload will look like `{ messages, metadata }`.

    • `destructured` will send `assistant.metadata` fields directly on the payload.
      Payload will look like `{ messages, ...metadata }`.

    Further, `variable` and `destructured` will send `call`, `phoneNumber`, and `customer` objects in the payload. Default is `variable`.



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 31

def metadata_send_mode
  @metadata_send_mode
end

#modelString (readonly)

Returns This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b.

Returns:

  • (String)

    This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 36

def model
  @model
end

#num_fast_turns ⇒ Float (readonly)

Returns This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0.

Returns:

  • (Float)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 56

def num_fast_turns
  @num_fast_turns
end

#temperature ⇒ Float (readonly)

Returns This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

Returns:

  • (Float)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 39

def temperature
  @temperature
end

#tool_ids ⇒ Array<String> (readonly)

Returns These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<String>)

    These are the tools that the assistant can use during the call. To use transient tools, use `tools`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 21

def tool_ids
  @tool_ids
end

#tools ⇒ Array<Vapi::CustomLlmModelToolsItem> (readonly)

Returns These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.

Returns:

  • (Array<Vapi::CustomLlmModelToolsItem>)

    These are the tools that the assistant can use during the call. To use existing tools, use `toolIds`. Both `tools` and `toolIds` can be used together.



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 17

def tools
  @tools
end

#url ⇒ String (readonly)

Returns This is the URL we’ll use for the OpenAI client’s `baseURL`. Ex. openrouter.ai/api/v1.

Returns:

  • (String)

    This is the URL we’ll use for the OpenAI client’s `baseURL`. Ex. openrouter.ai/api/v1



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 34

def url
  @url
end

Class Method Details

.from_json(json_object:) ⇒ Vapi::CustomLlmModel

Deserialize a JSON object to an instance of CustomLlmModel

Parameters:

  • json_object (String)

Returns:



# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 136

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  messages = parsed_json["messages"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiMessage.from_json(json_object: item)
  end
  tools = parsed_json["tools"]&.map do |item|
    item = item.to_json
    Vapi::CustomLlmModelToolsItem.from_json(json_object: item)
  end
  tool_ids = parsed_json["toolIds"]
  metadata_send_mode = parsed_json["metadataSendMode"]
  url = parsed_json["url"]
  model = parsed_json["model"]
  temperature = parsed_json["temperature"]
  if parsed_json["knowledgeBase"].nil?
    knowledge_base = nil
  else
    knowledge_base = parsed_json["knowledgeBase"].to_json
    knowledge_base = Vapi::KnowledgeBase.from_json(json_object: knowledge_base)
  end
  max_tokens = parsed_json["maxTokens"]
  emotion_recognition_enabled = parsed_json["emotionRecognitionEnabled"]
  num_fast_turns = parsed_json["numFastTurns"]
  new(
    messages: messages,
    tools: tools,
    tool_ids: tool_ids,
    metadata_send_mode: metadata_send_mode,
    url: url,
    model: model,
    temperature: temperature,
    knowledge_base: knowledge_base,
    max_tokens: max_tokens,
    emotion_recognition_enabled: emotion_recognition_enabled,
    num_fast_turns: num_fast_turns,
    additional_properties: struct
  )
end

.validate_raw(obj:) ⇒ Void

Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.

Parameters:

  • obj (Object)

Returns:

  • (Void)


# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 190

def self.validate_raw(obj:)
  obj.messages&.is_a?(Array) != false || raise("Passed value for field obj.messages is not the expected type, validation failed.")
  obj.tools&.is_a?(Array) != false || raise("Passed value for field obj.tools is not the expected type, validation failed.")
  obj.tool_ids&.is_a?(Array) != false || raise("Passed value for field obj.tool_ids is not the expected type, validation failed.")
  obj.metadata_send_mode&.is_a?(Vapi::CustomLlmModelMetadataSendMode) != false || raise("Passed value for field obj.metadata_send_mode is not the expected type, validation failed.")
  obj.url.is_a?(String) != false || raise("Passed value for field obj.url is not the expected type, validation failed.")
  obj.model.is_a?(String) != false || raise("Passed value for field obj.model is not the expected type, validation failed.")
  obj.temperature&.is_a?(Float) != false || raise("Passed value for field obj.temperature is not the expected type, validation failed.")
  obj.knowledge_base.nil? || Vapi::KnowledgeBase.validate_raw(obj: obj.knowledge_base)
  obj.max_tokens&.is_a?(Float) != false || raise("Passed value for field obj.max_tokens is not the expected type, validation failed.")
  obj.emotion_recognition_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.emotion_recognition_enabled is not the expected type, validation failed.")
  obj.num_fast_turns&.is_a?(Float) != false || raise("Passed value for field obj.num_fast_turns is not the expected type, validation failed.")
end
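The `value&.is_a?(T) != false || raise(...)` lines above are a compact idiom for validating optional fields: with safe navigation, the check yields `nil` when the field is absent and `false` only on an actual type mismatch, so comparing against `false` lets missing fields pass while wrong types raise. A standalone sketch (the helper name is hypothetical, not part of the SDK):

```ruby
# Optional-field validation idiom: nil (absent) passes, wrong type raises.
def check_optional_float(value, name)
  # nil&.is_a?(Float)   => nil,   nil   != false => true  (passes)
  # 0.7.is_a?(Float)    => true,  true  != false => true  (passes)
  # "hot".is_a?(Float)  => false, false != false => false (raises)
  value&.is_a?(Float) != false ||
    raise("Passed value for field #{name} is not the expected type, validation failed.")
end

check_optional_float(nil, "temperature") # absent field: no error
check_optional_float(0.7, "temperature") # correct type: no error
begin
  check_optional_float("hot", "temperature")
  raised = false
rescue RuntimeError
  raised = true
end
```

For required fields (`url`, `model` above) the same comparison is used without safe navigation, so `nil` fails the `is_a?` check and raises like any other wrong type.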

Instance Method Details

#to_json(*_args) ⇒ String

Serialize an instance of CustomLlmModel to a JSON object

Returns:

  • (String)


# File 'lib/vapi_server_sdk/types/custom_llm_model.rb', line 180

def to_json(*_args)
  @_field_set&.to_json
end