Class: Vapi::OpenAiModel

Inherits:
Object
Defined in:
lib/vapi_server_sdk/types/open_ai_model.rb

Constant Summary

OMIT =
Object.new
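`OMIT` is a sentinel value: a fresh `Object.new` compares equal only to itself, so it can never collide with anything a caller might legitimately pass, including `nil`. This lets the constructor distinguish "argument not provided" from "explicitly set to nil". A minimal stand-alone sketch of the pattern (the `Example` class here is hypothetical, not part of the SDK):

```ruby
# Sentinel: Object#== is identity, so OMIT equals nothing but itself.
OMIT = Object.new

# Hypothetical class using the same keyword-default pattern as the SDK.
class Example
  attr_reader :fields

  def initialize(name: OMIT, note: OMIT)
    # Keys whose value is still the sentinel were never passed; drop them.
    @fields = { name: name, note: note }.reject { |_k, v| v == OMIT }
  end
end
```

With this pattern, `Example.new(name: "a")` records only `name`, while `Example.new(name: "a", note: nil)` keeps `note: nil` because the caller passed it explicitly.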

Instance Attribute Summary

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT, fallback_models: OMIT, semantic_caching_enabled: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil) ⇒ Vapi::OpenAiModel

Parameters:

  • messages (Array<Vapi::OpenAiMessage>) (defaults to: OMIT)

    This is the starting state for the conversation.

  • tools (Array<Vapi::OpenAiModelToolsItem>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.

  • tool_ids (Array<String>) (defaults to: OMIT)

    These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.

  • knowledge_base (Vapi::CreateCustomKnowledgeBaseDto) (defaults to: OMIT)

    These are the options for the knowledge base.

  • knowledge_base_id (String) (defaults to: OMIT)

    This is the ID of the knowledge base the model will use.

  • model (Vapi::OpenAiModelModel)

    This is the OpenAI model that will be used.

  • fallback_models (Array<Vapi::OpenAiModelFallbackModelsItem>) (defaults to: OMIT)

    These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.

  • semantic_caching_enabled (Boolean) (defaults to: OMIT)
  • temperature (Float) (defaults to: OMIT)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

  • max_tokens (Float) (defaults to: OMIT)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

  • emotion_recognition_enabled (Boolean) (defaults to: OMIT)

    This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Defaults to false because the model is usually good at understanding the user’s emotion from text.

  • num_fast_turns (Float) (defaults to: OMIT)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model (for example, gpt-3.5-turbo if the provider is openai). Defaults to 0.

  • additional_properties (OpenStruct) (defaults to: nil)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 91

def initialize(model:, messages: OMIT, tools: OMIT, tool_ids: OMIT, knowledge_base: OMIT, knowledge_base_id: OMIT,
               fallback_models: OMIT, semantic_caching_enabled: OMIT, temperature: OMIT, max_tokens: OMIT, emotion_recognition_enabled: OMIT, num_fast_turns: OMIT, additional_properties: nil)
  @messages = messages if messages != OMIT
  @tools = tools if tools != OMIT
  @tool_ids = tool_ids if tool_ids != OMIT
  @knowledge_base = knowledge_base if knowledge_base != OMIT
  @knowledge_base_id = knowledge_base_id if knowledge_base_id != OMIT
  @model = model
  @fallback_models = fallback_models if fallback_models != OMIT
  @semantic_caching_enabled = semantic_caching_enabled if semantic_caching_enabled != OMIT
  @temperature = temperature if temperature != OMIT
  @max_tokens = max_tokens if max_tokens != OMIT
  @emotion_recognition_enabled = emotion_recognition_enabled if emotion_recognition_enabled != OMIT
  @num_fast_turns = num_fast_turns if num_fast_turns != OMIT
  @additional_properties = additional_properties
  @_field_set = {
    "messages": messages,
    "tools": tools,
    "toolIds": tool_ids,
    "knowledgeBase": knowledge_base,
    "knowledgeBaseId": knowledge_base_id,
    "model": model,
    "fallbackModels": fallback_models,
    "semanticCachingEnabled": semantic_caching_enabled,
    "temperature": temperature,
    "maxTokens": max_tokens,
    "emotionRecognitionEnabled": emotion_recognition_enabled,
    "numFastTurns": num_fast_turns
  }.reject do |_k, v|
    v == OMIT
  end
end

Instance Attribute Details

#additional_properties ⇒ OpenStruct (readonly)

Returns Additional properties unmapped to the current class definition.

Returns:

  • (OpenStruct)

    Additional properties unmapped to the current class definition



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 54

def additional_properties
  @additional_properties
end

#emotion_recognition_enabled ⇒ Boolean (readonly)

Returns This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Defaults to false because the model is usually good at understanding the user’s emotion from text.

Returns:

  • (Boolean)

    This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Defaults to false because the model is usually good at understanding the user’s emotion from text.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 46

def emotion_recognition_enabled
  @emotion_recognition_enabled
end

#fallback_models ⇒ Array&lt;Vapi::OpenAiModelFallbackModelsItem&gt; (readonly)

Returns These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.

Returns:

  • (Array<Vapi::OpenAiModelFallbackModelsItem>)

    These are the fallback models that will be used if the primary model fails. This shouldn’t be specified unless you have a specific reason to do so. Vapi will automatically find the fastest fallbacks that make sense.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 32

def fallback_models
  @fallback_models
end

#knowledge_base ⇒ Vapi::CreateCustomKnowledgeBaseDto (readonly)

Returns These are the options for the knowledge base.

Returns:



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 24

def knowledge_base
  @knowledge_base
end

#knowledge_base_id ⇒ String (readonly)

Returns This is the ID of the knowledge base the model will use.

Returns:

  • (String)

    This is the ID of the knowledge base the model will use.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 26

def knowledge_base_id
  @knowledge_base_id
end

#max_tokens ⇒ Float (readonly)

Returns This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.

Returns:

  • (Float)

    This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 40

def max_tokens
  @max_tokens
end

#messages ⇒ Array&lt;Vapi::OpenAiMessage&gt; (readonly)

Returns This is the starting state for the conversation.

Returns:



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 14

def messages
  @messages
end

#model ⇒ Vapi::OpenAiModelModel (readonly)

Returns This is the OpenAI model that will be used.

Returns:



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 28

def model
  @model
end

#num_fast_turns ⇒ Float (readonly)

Returns This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model (for example, gpt-3.5-turbo if the provider is openai). Defaults to 0.

Returns:

  • (Float)

    This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model (for example, gpt-3.5-turbo if the provider is openai). Defaults to 0.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 52

def num_fast_turns
  @num_fast_turns
end

#semantic_caching_enabled ⇒ Boolean (readonly)

Returns:

  • (Boolean)


# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 34

def semantic_caching_enabled
  @semantic_caching_enabled
end

#temperature ⇒ Float (readonly)

Returns This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

Returns:

  • (Float)

    This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 37

def temperature
  @temperature
end

#tool_ids ⇒ Array&lt;String&gt; (readonly)

Returns These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.

Returns:

  • (Array<String>)

    These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 22

def tool_ids
  @tool_ids
end

#tools ⇒ Array&lt;Vapi::OpenAiModelToolsItem&gt; (readonly)

Returns These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.

Returns:

  • (Array<Vapi::OpenAiModelToolsItem>)

    These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 18

def tools
  @tools
end

Class Method Details

.from_json(json_object:) ⇒ Vapi::OpenAiModel

Deserialize a JSON object to an instance of OpenAiModel

Parameters:

  • json_object (String)

Returns:



# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 128

def self.from_json(json_object:)
  struct = JSON.parse(json_object, object_class: OpenStruct)
  parsed_json = JSON.parse(json_object)
  messages = parsed_json["messages"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiMessage.from_json(json_object: item)
  end
  tools = parsed_json["tools"]&.map do |item|
    item = item.to_json
    Vapi::OpenAiModelToolsItem.from_json(json_object: item)
  end
  tool_ids = parsed_json["toolIds"]
  if parsed_json["knowledgeBase"].nil?
    knowledge_base = nil
  else
    knowledge_base = parsed_json["knowledgeBase"].to_json
    knowledge_base = Vapi::CreateCustomKnowledgeBaseDto.from_json(json_object: knowledge_base)
  end
  knowledge_base_id = parsed_json["knowledgeBaseId"]
  model = parsed_json["model"]
  fallback_models = parsed_json["fallbackModels"]
  semantic_caching_enabled = parsed_json["semanticCachingEnabled"]
  temperature = parsed_json["temperature"]
  max_tokens = parsed_json["maxTokens"]
  emotion_recognition_enabled = parsed_json["emotionRecognitionEnabled"]
  num_fast_turns = parsed_json["numFastTurns"]
  new(
    messages: messages,
    tools: tools,
    tool_ids: tool_ids,
    knowledge_base: knowledge_base,
    knowledge_base_id: knowledge_base_id,
    model: model,
    fallback_models: fallback_models,
    semantic_caching_enabled: semantic_caching_enabled,
    temperature: temperature,
    max_tokens: max_tokens,
    emotion_recognition_enabled: emotion_recognition_enabled,
    num_fast_turns: num_fast_turns,
    additional_properties: struct
  )
end

.validate_raw(obj:) ⇒ Void

Leveraged for Union-type generation, validate_raw attempts to parse the given hash and check each field's type against the current object's property definitions.

Parameters:

  • obj (Object)

Returns:

  • (Void)


# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 184

def self.validate_raw(obj:)
  obj.messages&.is_a?(Array) != false || raise("Passed value for field obj.messages is not the expected type, validation failed.")
  obj.tools&.is_a?(Array) != false || raise("Passed value for field obj.tools is not the expected type, validation failed.")
  obj.tool_ids&.is_a?(Array) != false || raise("Passed value for field obj.tool_ids is not the expected type, validation failed.")
  obj.knowledge_base.nil? || Vapi::CreateCustomKnowledgeBaseDto.validate_raw(obj: obj.knowledge_base)
  obj.knowledge_base_id&.is_a?(String) != false || raise("Passed value for field obj.knowledge_base_id is not the expected type, validation failed.")
  obj.model.is_a?(Vapi::OpenAiModelModel) != false || raise("Passed value for field obj.model is not the expected type, validation failed.")
  obj.fallback_models&.is_a?(Array) != false || raise("Passed value for field obj.fallback_models is not the expected type, validation failed.")
  obj.semantic_caching_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.semantic_caching_enabled is not the expected type, validation failed.")
  obj.temperature&.is_a?(Float) != false || raise("Passed value for field obj.temperature is not the expected type, validation failed.")
  obj.max_tokens&.is_a?(Float) != false || raise("Passed value for field obj.max_tokens is not the expected type, validation failed.")
  obj.emotion_recognition_enabled&.is_a?(Boolean) != false || raise("Passed value for field obj.emotion_recognition_enabled is not the expected type, validation failed.")
  obj.num_fast_turns&.is_a?(Float) != false || raise("Passed value for field obj.num_fast_turns is not the expected type, validation failed.")
end
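The recurring idiom `value&.is_a?(T) != false || raise(...)` is a nil-tolerant type check: if the field is `nil`, safe navigation yields `nil`, and `nil != false` is truthy, so optional fields pass; if the field is present but the wrong type, `false != false` is false and the `raise` fires. (The `is_a?(Boolean)` calls above presume the SDK defines a `Boolean` constant; plain Ruby has no built-in `Boolean` class.) A stand-alone sketch of the idiom, with a hypothetical helper name:

```ruby
# Nil passes (the field is optional); a present value of the wrong
# type raises, exactly like the SDK's validate_raw checks.
def check_optional_array(value)
  value&.is_a?(Array) != false || raise("value is not the expected type")
  true
end

check_optional_array(nil)     # nil&.is_a? => nil; nil != false => passes
check_optional_array([1, 2])  # true != false => passes
# check_optional_array("oops") would raise: false != false is false
```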

Instance Method Details

#to_json(*_args) ⇒ String

Serialize an instance of OpenAiModel to a JSON object

Returns:

  • (String)


# File 'lib/vapi_server_sdk/types/open_ai_model.rb', line 174

def to_json(*_args)
  @_field_set&.to_json
end
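Serialization simply delegates to the `@_field_set` hash assembled in the constructor, so fields that were omitted (or rejected as `OMIT`) never appear in the output JSON at all, as opposed to appearing as explicit nulls. A stand-alone sketch of the same pattern (the field values here are illustrative, not SDK defaults):

```ruby
require "json"

# Sentinel marking "never provided", as in the SDK's constructor.
OMIT = Object.new

field_set = {
  "model": "gpt-4o",    # illustrative value
  "temperature": OMIT,  # omitted: rejected before serialization
  "maxTokens": 250
}.reject { |_k, v| v == OMIT }

json = field_set.to_json  # temperature is absent, not null
```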