Class: Vellum::AsyncClient
Inherits: Object
Defined in: lib/vellum_ai.rb
Instance Attribute Summary
- #deployments ⇒ Object (readonly)
  Returns the value of attribute deployments.
- #document_indexes ⇒ Object (readonly)
  Returns the value of attribute document_indexes.
- #documents ⇒ Object (readonly)
  Returns the value of attribute documents.
- #folder_entities ⇒ Object (readonly)
  Returns the value of attribute folder_entities.
- #model_versions ⇒ Object (readonly)
  Returns the value of attribute model_versions.
- #registered_prompts ⇒ Object (readonly)
  Returns the value of attribute registered_prompts.
- #sandboxes ⇒ Object (readonly)
  Returns the value of attribute sandboxes.
- #test_suite_runs ⇒ Object (readonly)
  Returns the value of attribute test_suite_runs.
- #test_suites ⇒ Object (readonly)
  Returns the value of attribute test_suites.
- #workflow_deployments ⇒ Object (readonly)
  Returns the value of attribute workflow_deployments.
Instance Method Summary
- #execute_prompt(inputs:, prompt_deployment_id: nil, prompt_deployment_name: nil, release_tag: nil, external_id: nil, expand_meta: nil, raw_overrides: nil, expand_raw: nil, metadata: nil, request_options: nil) ⇒ ExecutePromptResponse
  Executes a deployed Prompt and returns the result.
- #execute_workflow(inputs:, workflow_deployment_id: nil, workflow_deployment_name: nil, release_tag: nil, external_id: nil, request_options: nil) ⇒ ExecuteWorkflowResponse
  Executes a deployed Workflow and returns its outputs.
- #generate(requests:, deployment_id: nil, deployment_name: nil, options: nil, request_options: nil) ⇒ GenerateResponse
  Generate a completion using a previously defined deployment.
- #initialize(api_key:, environment: Environment::PRODUCTION, max_retries: nil, timeout_in_seconds: nil) ⇒ AsyncClient (constructor)
- #search(query:, index_id: nil, index_name: nil, options: nil, request_options: nil) ⇒ SearchResponse
  Perform a search against a document index.
- #submit_completion_actuals(actuals:, deployment_id: nil, deployment_name: nil, request_options: nil) ⇒ Void
  Used to submit feedback regarding the quality of previously generated completions.
- #submit_workflow_execution_actuals(actuals:, execution_id: nil, external_id: nil, request_options: nil) ⇒ Void
  Used to submit feedback regarding the quality of a previous workflow execution and its outputs.
Constructor Details
#initialize(api_key:, environment: Environment::PRODUCTION, max_retries: nil, timeout_in_seconds: nil) ⇒ AsyncClient
# File 'lib/vellum_ai.rb', line 263

def initialize(api_key:, environment: Environment::PRODUCTION, max_retries: nil, timeout_in_seconds: nil)
  @async_request_client = AsyncRequestClient.new(environment: environment, max_retries: max_retries,
                                                 timeout_in_seconds: timeout_in_seconds, api_key: api_key)
  @deployments = AsyncDeploymentsClient.new(request_client: @async_request_client)
  @document_indexes = AsyncDocumentIndexesClient.new(request_client: @async_request_client)
  @documents = AsyncDocumentsClient.new(request_client: @async_request_client)
  @folder_entities = AsyncFolderEntitiesClient.new(request_client: @async_request_client)
  @model_versions = AsyncModelVersionsClient.new(request_client: @async_request_client)
  @registered_prompts = AsyncRegisteredPromptsClient.new(request_client: @async_request_client)
  @sandboxes = AsyncSandboxesClient.new(request_client: @async_request_client)
  @test_suite_runs = AsyncTestSuiteRunsClient.new(request_client: @async_request_client)
  @test_suites = AsyncTestSuitesClient.new(request_client: @async_request_client)
  @workflow_deployments = AsyncWorkflowDeploymentsClient.new(request_client: @async_request_client)
end
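Example (illustrative): a minimal construction sketch. The environment variable name VELLUM_API_KEY and the timeout value are placeholders, not part of this SDK.

  require "vellum_ai"

  # Build an async client; only api_key is required, the remaining
  # options fall back to the defaults shown in the signature above.
  client = Vellum::AsyncClient.new(
    api_key: ENV["VELLUM_API_KEY"],
    timeout_in_seconds: 30
  )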
Instance Attribute Details
#deployments ⇒ Object (readonly)
Returns the value of attribute deployments.
# File 'lib/vellum_ai.rb', line 255

def deployments
  @deployments
end
#document_indexes ⇒ Object (readonly)
Returns the value of attribute document_indexes.
# File 'lib/vellum_ai.rb', line 255

def document_indexes
  @document_indexes
end
#documents ⇒ Object (readonly)
Returns the value of attribute documents.
# File 'lib/vellum_ai.rb', line 255

def documents
  @documents
end
#folder_entities ⇒ Object (readonly)
Returns the value of attribute folder_entities.
# File 'lib/vellum_ai.rb', line 255

def folder_entities
  @folder_entities
end
#model_versions ⇒ Object (readonly)
Returns the value of attribute model_versions.
# File 'lib/vellum_ai.rb', line 255

def model_versions
  @model_versions
end
#registered_prompts ⇒ Object (readonly)
Returns the value of attribute registered_prompts.
# File 'lib/vellum_ai.rb', line 255

def registered_prompts
  @registered_prompts
end
#sandboxes ⇒ Object (readonly)
Returns the value of attribute sandboxes.
# File 'lib/vellum_ai.rb', line 255

def sandboxes
  @sandboxes
end
#test_suite_runs ⇒ Object (readonly)
Returns the value of attribute test_suite_runs.
# File 'lib/vellum_ai.rb', line 255

def test_suite_runs
  @test_suite_runs
end
#test_suites ⇒ Object (readonly)
Returns the value of attribute test_suites.
# File 'lib/vellum_ai.rb', line 255

def test_suites
  @test_suites
end
#workflow_deployments ⇒ Object (readonly)
Returns the value of attribute workflow_deployments.
# File 'lib/vellum_ai.rb', line 255

def workflow_deployments
  @workflow_deployments
end
Instance Method Details
#execute_prompt(inputs:, prompt_deployment_id: nil, prompt_deployment_name: nil, release_tag: nil, external_id: nil, expand_meta: nil, raw_overrides: nil, expand_raw: nil, metadata: nil, request_options: nil) ⇒ ExecutePromptResponse
Executes a deployed Prompt and returns the result.
# File 'lib/vellum_ai.rb', line 299

def execute_prompt(inputs:, prompt_deployment_id: nil, prompt_deployment_name: nil, release_tag: nil,
                   external_id: nil, expand_meta: nil, raw_overrides: nil, expand_raw: nil, metadata: nil,
                   request_options: nil)
  response = @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      inputs: inputs,
      prompt_deployment_id: prompt_deployment_id,
      prompt_deployment_name: prompt_deployment_name,
      release_tag: release_tag,
      external_id: external_id,
      expand_meta: expand_meta,
      raw_overrides: raw_overrides,
      expand_raw: expand_raw,
      metadata: metadata
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/execute-prompt"
  end
  ExecutePromptResponse.from_json(json_object: response.body)
end
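Example (illustrative): executing a deployed Prompt by name. The deployment name and the shape of each inputs entry (name/type/value hashes mirroring the API's JSON format) are assumptions; the SDK's generated request types may be preferred.

  # Assumes `client` is a Vellum::AsyncClient (see the constructor above).
  response = client.execute_prompt(
    prompt_deployment_name: "my-prompt-deployment",   # hypothetical deployment name
    release_tag: "LATEST",
    inputs: [
      { name: "user_question", type: "STRING", value: "What does Vellum do?" }
    ]
  )
  # Returns an ExecutePromptResponse parsed from the response body.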
#execute_workflow(inputs:, workflow_deployment_id: nil, workflow_deployment_name: nil, release_tag: nil, external_id: nil, request_options: nil) ⇒ ExecuteWorkflowResponse
Executes a deployed Workflow and returns its outputs.
# File 'lib/vellum_ai.rb', line 331

def execute_workflow(inputs:, workflow_deployment_id: nil, workflow_deployment_name: nil, release_tag: nil,
                     external_id: nil, request_options: nil)
  response = @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      inputs: inputs,
      workflow_deployment_id: workflow_deployment_id,
      workflow_deployment_name: workflow_deployment_name,
      release_tag: release_tag,
      external_id: external_id
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/execute-workflow"
  end
  ExecuteWorkflowResponse.from_json(json_object: response.body)
end
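Example (illustrative): executing a deployed Workflow by name. The deployment name and the inputs entry shape are assumptions.

  run = client.execute_workflow(
    workflow_deployment_name: "my-workflow-deployment",  # hypothetical deployment name
    inputs: [
      { name: "topic", type: "STRING", value: "release notes" }
    ]
  )
  # Returns an ExecuteWorkflowResponse describing the workflow's outputs.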
#generate(requests:, deployment_id: nil, deployment_name: nil, options: nil, request_options: nil) ⇒ GenerateResponse
Generate a completion using a previously defined deployment.
Note: Uses a base url of `https://predict.vellum.ai`.
# File 'lib/vellum_ai.rb', line 364

def generate(requests:, deployment_id: nil, deployment_name: nil, options: nil, request_options: nil)
  response = @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      deployment_id: deployment_id,
      deployment_name: deployment_name,
      requests: requests,
      options: options
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/generate"
  end
  GenerateResponse.from_json(json_object: response.body)
end
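Example (illustrative): generating a completion against a named deployment. The request entry shape (input_values keyed by variable name) is an assumption based on the API's JSON format.

  generation = client.generate(
    deployment_name: "my-deployment",                     # hypothetical deployment name
    requests: [
      { input_values: { "question" => "What is Vellum?" } }
    ]
  )
  # Returns a GenerateResponse.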
#search(query:, index_id: nil, index_name: nil, options: nil, request_options: nil) ⇒ SearchResponse
Perform a search against a document index.
Note: Uses a base url of `https://predict.vellum.ai`.
# File 'lib/vellum_ai.rb', line 406

def search(query:, index_id: nil, index_name: nil, options: nil, request_options: nil)
  response = @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      index_id: index_id,
      index_name: index_name,
      query: query,
      options: options
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/search"
  end
  SearchResponse.from_json(json_object: response.body)
end
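Example (illustrative): searching a document index by name. The index name and query are placeholders.

  results = client.search(
    index_name: "company-docs-index",                     # hypothetical index name
    query: "quarterly revenue guidance"
  )
  # Returns a SearchResponse containing the matching results.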
#submit_completion_actuals(actuals:, deployment_id: nil, deployment_name: nil, request_options: nil) ⇒ Void
Used to submit feedback regarding the quality of previously generated completions.
Note: Uses a base url of `https://predict.vellum.ai`.
# File 'lib/vellum_ai.rb', line 437

def submit_completion_actuals(actuals:, deployment_id: nil, deployment_name: nil, request_options: nil)
  @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      deployment_id: deployment_id,
      deployment_name: deployment_name,
      actuals: actuals
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/submit-completion-actuals"
  end
end
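Example (illustrative): submitting feedback on previously generated completions. The actuals entry shape (an identifier plus a numeric quality score) is an assumption.

  client.submit_completion_actuals(
    deployment_name: "my-deployment",                     # hypothetical deployment name
    actuals: [
      { id: "completion-123", quality: 1.0 }              # assumed field names
    ]
  )
  # No return value.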
#submit_workflow_execution_actuals(actuals:, execution_id: nil, external_id: nil, request_options: nil) ⇒ Void
Used to submit feedback regarding the quality of a previous workflow execution and its outputs.
Note: Uses a base url of `https://predict.vellum.ai`.
# File 'lib/vellum_ai.rb', line 461

def submit_workflow_execution_actuals(actuals:, execution_id: nil, external_id: nil, request_options: nil)
  @async_request_client.conn.post do |req|
    # Apply per-request overrides from request_options, when provided.
    req.options.timeout = request_options.timeout_in_seconds unless request_options&.timeout_in_seconds.nil?
    req.headers["X_API_KEY"] = request_options.api_key unless request_options&.api_key.nil?
    req.headers = { **req.headers, **(request_options&.additional_headers || {}) }.compact
    req.body = {
      **(request_options&.additional_body_parameters || {}),
      actuals: actuals,
      execution_id: execution_id,
      external_id: external_id
    }.compact
    req.url "#{@async_request_client.default_environment[:Predict]}/v1/submit-workflow-execution-actuals"
  end
end
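Example (illustrative): submitting feedback for a prior workflow execution identified by its execution id. The execution id and the actuals entry shape are assumptions.

  client.submit_workflow_execution_actuals(
    execution_id: "11111111-2222-3333-4444-555555555555", # hypothetical execution id
    actuals: [
      { output_id: "final-output", quality: 0.9 }         # assumed field names
    ]
  )
  # No return value.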