Module: Elasticsearch::API::Inference::Actions

Defined in:
lib/elasticsearch/api/actions/inference/get.rb,
lib/elasticsearch/api/actions/inference/put.rb,
lib/elasticsearch/api/actions/inference/delete.rb,
lib/elasticsearch/api/actions/inference/rerank.rb,
lib/elasticsearch/api/actions/inference/update.rb,
lib/elasticsearch/api/actions/inference/put_ai21.rb,
lib/elasticsearch/api/actions/inference/put_groq.rb,
lib/elasticsearch/api/actions/inference/inference.rb,
lib/elasticsearch/api/actions/inference/put_elser.rb,
lib/elasticsearch/api/actions/inference/put_llama.rb,
lib/elasticsearch/api/actions/inference/completion.rb,
lib/elasticsearch/api/actions/inference/put_cohere.rb,
lib/elasticsearch/api/actions/inference/put_custom.rb,
lib/elasticsearch/api/actions/inference/put_jinaai.rb,
lib/elasticsearch/api/actions/inference/put_nvidia.rb,
lib/elasticsearch/api/actions/inference/put_openai.rb,
lib/elasticsearch/api/actions/inference/put_mistral.rb,
lib/elasticsearch/api/actions/inference/put_watsonx.rb,
lib/elasticsearch/api/actions/inference/put_deepseek.rb,
lib/elasticsearch/api/actions/inference/put_voyageai.rb,
lib/elasticsearch/api/actions/inference/put_anthropic.rb,
lib/elasticsearch/api/actions/inference/text_embedding.rb,
lib/elasticsearch/api/actions/inference/put_azureopenai.rb,
lib/elasticsearch/api/actions/inference/put_alibabacloud.rb,
lib/elasticsearch/api/actions/inference/put_contextualai.rb,
lib/elasticsearch/api/actions/inference/put_hugging_face.rb,
lib/elasticsearch/api/actions/inference/put_openshift_ai.rb,
lib/elasticsearch/api/actions/inference/sparse_embedding.rb,
lib/elasticsearch/api/actions/inference/put_amazonbedrock.rb,
lib/elasticsearch/api/actions/inference/put_azureaistudio.rb,
lib/elasticsearch/api/actions/inference/put_elasticsearch.rb,
lib/elasticsearch/api/actions/inference/stream_completion.rb,
lib/elasticsearch/api/actions/inference/put_googleaistudio.rb,
lib/elasticsearch/api/actions/inference/put_googlevertexai.rb,
lib/elasticsearch/api/actions/inference/put_amazonsagemaker.rb,
lib/elasticsearch/api/actions/inference/chat_completion_unified.rb

Instance Method Summary

Instance Method Details

#chat_completion_unified(arguments = {}) ⇒ Object

Perform chat completion inference on the service. The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the chat_completion task type. NOTE: The chat_completion task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. To determine whether a given inference service supports this task type, please see the page for that service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The inference ID (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    chat_completion_request

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/chat_completion_unified.rb', line 51

def chat_completion_unified(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.chat_completion_unified' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/chat_completion/#{Utils.listify(_inference_id)}/_stream"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
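
A minimal usage sketch; the `client` object, the 'openai-chat-endpoint' inference ID, and the message text are assumptions for illustration, not values from this library:

# Stream a chat completion from a hypothetical endpoint named 'openai-chat-endpoint'
response = client.inference.chat_completion_unified(
  inference_id: 'openai-chat-endpoint',
  body: {
    messages: [
      { role: 'user', content: 'Say hello in one short sentence.' }
    ]
  }
)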

#completion(arguments = {}) ⇒ Object

Perform completion inference on the service. Get responses for completion tasks. This API works only with the completion task type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege).

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The inference ID (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/completion.rb', line 49

def completion(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.completion' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/completion/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
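
For example, a completion request against a hypothetical endpoint (the endpoint ID and input text are placeholders):

# Run a completion task; 'my-completion-endpoint' is a hypothetical endpoint ID
response = client.inference.completion(
  inference_id: 'my-completion-endpoint',
  body: { input: 'What is Elasticsearch?' }
)
puts response.body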

#delete(arguments = {}) ⇒ Object

Delete an inference endpoint. This API requires the manage_inference cluster privilege (the built-in inference_admin role grants this privilege).

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The task type

  • :inference_id (String)

    The inference identifier. (Required)

  • :dry_run (Boolean)

    When true, checks the semantic_text fields and inference processors that reference the endpoint and returns them in a list, but does not delete the endpoint.

  • :force (Boolean)

    When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/delete.rb', line 47

def delete(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.delete' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = nil

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_DELETE
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
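
For example (endpoint IDs are placeholders):

# Dry run: list the semantic_text fields and inference processors that reference the endpoint
client.inference.delete(inference_id: 'my-endpoint', dry_run: true)

# Force deletion even if the endpoint is still in use
client.inference.delete(task_type: 'text_embedding', inference_id: 'my-endpoint', force: true)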

#get(arguments = {}) ⇒ Object

Get an inference endpoint. This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege).

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The task type

  • :inference_id (String)

    The inference ID

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

See Also:



# File 'lib/elasticsearch/api/actions/inference/get.rb', line 45

def get(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.get' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = nil

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_GET
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           elsif _inference_id
             "_inference/#{Utils.listify(_inference_id)}"
           else
             '_inference'
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
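
For example (the endpoint ID is a placeholder):

# List all inference endpoints in the cluster
client.inference.get

# Get a single endpoint, optionally scoped by task type
client.inference.get(task_type: 'sparse_embedding', inference_id: 'my-elser-endpoint')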

#inference(arguments = {}) ⇒ Object

Perform inference on the service. This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform one specific task that has been defined when the endpoint was created with the create inference API. For details about using this API with a service, such as Amazon Bedrock, Anthropic, or HuggingFace, refer to the service-specific documentation.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of inference task that the model performs.

  • :inference_id (String)

    The unique identifier for the inference endpoint. (Required)

  • :timeout (Time)

    The amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/inference.rb', line 50

def inference(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.inference' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
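
A sketch of a call against a hypothetical text_embedding endpoint; the endpoint ID and body are illustrative, and the accepted body fields depend on the task type the endpoint was created for:

# Perform the task the endpoint was configured for, here assumed to be text_embedding
response = client.inference.inference(
  task_type: 'text_embedding',
  inference_id: 'my-embedding-endpoint',
  body: { input: ['Elasticsearch is a distributed search and analytics engine.'] }
)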

#put(arguments = {}) ⇒ Object

Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name:

  • AI21 (chat_completion, completion)

  • AlibabaCloud AI Search (completion, rerank, sparse_embedding, text_embedding)

  • Amazon Bedrock (completion, text_embedding)

  • Amazon SageMaker (chat_completion, completion, rerank, sparse_embedding, text_embedding)

  • Anthropic (completion)

  • Azure AI Studio (completion, rerank, text_embedding)

  • Azure OpenAI (chat_completion, completion, text_embedding)

  • Cohere (completion, rerank, text_embedding)

  • DeepSeek (chat_completion, completion)

  • Elasticsearch (rerank, sparse_embedding, text_embedding - this service is for built-in models and models uploaded through Eland)

  • ELSER (sparse_embedding)

  • Google AI Studio (completion, text_embedding)

  • Google Vertex AI (chat_completion, completion, rerank, text_embedding)

  • Groq (chat_completion)

  • Hugging Face (chat_completion, completion, rerank, text_embedding)

  • JinaAI (rerank, text_embedding)

  • Llama (chat_completion, completion, text_embedding)

  • Mistral (chat_completion, completion, text_embedding)

  • Nvidia (chat_completion, completion, text_embedding, rerank)

  • OpenAI (chat_completion, completion, text_embedding)

  • OpenShift AI (chat_completion, completion, rerank, text_embedding)

  • VoyageAI (rerank, text_embedding)

  • Watsonx inference integration (text_embedding)

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The task type. Refer to the integration list in the API description for the available task types.

  • :inference_id (String)

    The inference ID (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    inference_config

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put.rb', line 73

def put(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
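
For example, creating a hypothetical OpenAI text_embedding endpoint through the generic put API; the inference ID, API key placeholder, and model name are assumptions for illustration:

client.inference.put(
  task_type: 'text_embedding',
  inference_id: 'my-openai-embeddings',
  body: {
    service: 'openai',
    service_settings: {
      api_key: 'OPENAI_API_KEY',         # placeholder secret
      model_id: 'text-embedding-3-small' # example model name
    }
  }
)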

#put_ai21(arguments = {}) ⇒ Object

Create an AI21 inference endpoint. Create an inference endpoint to perform an inference task with the ai21 service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :ai21_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_ai21.rb', line 47

def put_ai21(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_ai21' }

  defined_params = [:task_type, :ai21_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]
  raise ArgumentError, "Required argument 'ai21_inference_id' missing" unless arguments[:ai21_inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _ai21_inference_id = arguments.delete(:ai21_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_ai21_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_alibabacloud(arguments = {}) ⇒ Object

Create an AlibabaCloud AI Search inference endpoint. Create an inference endpoint to perform an inference task with the alibabacloud-ai-search service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :alibabacloud_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_alibabacloud.rb', line 47

def put_alibabacloud(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_alibabacloud' }

  defined_params = [:task_type, :alibabacloud_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:alibabacloud_inference_id]
    raise ArgumentError,
          "Required argument 'alibabacloud_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _alibabacloud_inference_id = arguments.delete(:alibabacloud_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_alibabacloud_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_amazonbedrock(arguments = {}) ⇒ Object

Create an Amazon Bedrock inference endpoint. Create an inference endpoint to perform an inference task with the amazonbedrock service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :amazonbedrock_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_amazonbedrock.rb', line 47

def put_amazonbedrock(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_amazonbedrock' }

  defined_params = [:task_type, :amazonbedrock_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:amazonbedrock_inference_id]
    raise ArgumentError,
          "Required argument 'amazonbedrock_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _amazonbedrock_inference_id = arguments.delete(:amazonbedrock_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_amazonbedrock_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_amazonsagemaker(arguments = {}) ⇒ Object

Create an Amazon SageMaker inference endpoint. Create an inference endpoint to perform an inference task with the amazon_sagemaker service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :amazonsagemaker_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_amazonsagemaker.rb', line 47

def put_amazonsagemaker(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_amazonsagemaker' }

  defined_params = [:task_type, :amazonsagemaker_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:amazonsagemaker_inference_id]
    raise ArgumentError,
          "Required argument 'amazonsagemaker_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _amazonsagemaker_inference_id = arguments.delete(:amazonsagemaker_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_amazonsagemaker_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_anthropic(arguments = {}) ⇒ Object

Create an Anthropic inference endpoint. Create an inference endpoint to perform an inference task with the anthropic service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The task type. The only valid task type for the model to perform is completion. (Required)

  • :anthropic_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_anthropic.rb', line 48

def put_anthropic(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_anthropic' }

  defined_params = [:task_type, :anthropic_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:anthropic_inference_id]
    raise ArgumentError,
          "Required argument 'anthropic_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _anthropic_inference_id = arguments.delete(:anthropic_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_anthropic_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
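
A sketch assuming a placeholder API key and model name; Anthropic only accepts the completion task type, and max_tokens is part of the task settings:

client.inference.put_anthropic(
  task_type: 'completion',
  anthropic_inference_id: 'my-anthropic-endpoint',
  body: {
    service: 'anthropic',
    service_settings: {
      api_key: 'ANTHROPIC_API_KEY',        # placeholder secret
      model_id: 'claude-3-5-sonnet-latest' # example model name
    },
    task_settings: { max_tokens: 1024 }
  }
)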

#put_azureaistudio(arguments = {}) ⇒ Object

Create an Azure AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the azureaistudio service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :azureaistudio_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_azureaistudio.rb', line 47

def put_azureaistudio(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_azureaistudio' }

  defined_params = [:task_type, :azureaistudio_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:azureaistudio_inference_id]
    raise ArgumentError,
          "Required argument 'azureaistudio_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _azureaistudio_inference_id = arguments.delete(:azureaistudio_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_azureaistudio_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_azureopenai(arguments = {}) ⇒ Object

Create an Azure OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the azureopenai service. The lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment can be found in the Azure models documentation.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API. (Required)

  • :azureopenai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_azureopenai.rb', line 52

def put_azureopenai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_azureopenai' }

  defined_params = [:task_type, :azureopenai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:azureopenai_inference_id]
    raise ArgumentError,
          "Required argument 'azureopenai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _azureopenai_inference_id = arguments.delete(:azureopenai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_azureopenai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
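
A sketch with placeholder Azure values; the resource name, deployment ID, API version, and key are all assumptions for illustration:

client.inference.put_azureopenai(
  task_type: 'text_embedding',
  azureopenai_inference_id: 'my-azure-embeddings',
  body: {
    service: 'azureopenai',
    service_settings: {
      api_key: 'AZURE_API_KEY',       # placeholder secret
      resource_name: 'my-resource',
      deployment_id: 'my-deployment',
      api_version: '2024-02-01'
    }
  }
)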

#put_cohere(arguments = {}) ⇒ Object

Create a Cohere inference endpoint. Create an inference endpoint to perform an inference task with the cohere service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :cohere_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_cohere.rb', line 47

def put_cohere(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_cohere' }

  defined_params = [:task_type, :cohere_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:cohere_inference_id]
    raise ArgumentError,
          "Required argument 'cohere_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _cohere_inference_id = arguments.delete(:cohere_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_cohere_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
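
For example, a Cohere rerank endpoint; the model name and key are placeholders:

client.inference.put_cohere(
  task_type: 'rerank',
  cohere_inference_id: 'my-cohere-rerank',
  body: {
    service: 'cohere',
    service_settings: {
      api_key: 'COHERE_API_KEY',      # placeholder secret
      model_id: 'rerank-english-v3.0' # example model name
    }
  }
)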

#put_contextualai(arguments = {}) ⇒ Object

Create a Contextual AI inference endpoint. Create an inference endpoint to perform an inference task with the contextualai service. To review the available rerank models, refer to <docs.contextual.ai/api-reference/rerank/rerank#body-model>.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :contextualai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_contextualai.rb', line 48

def put_contextualai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_contextualai' }

  defined_params = [:task_type, :contextualai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:contextualai_inference_id]
    raise ArgumentError,
          "Required argument 'contextualai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _contextualai_inference_id = arguments.delete(:contextualai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_contextualai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_custom(arguments = {}) ⇒ Object

Create a custom inference endpoint. The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. The custom service gives you the ability to define the headers, url, query parameters, request body, and secrets. The custom service supports the template replacement functionality, which enables you to define a template that can be replaced with the value associated with that key. Templates are portions of a string that start with `${` and end with `}`. The parameters secret_parameters and task_settings are checked for keys for template replacement. Template replacement is supported in the request, headers, url, and query_parameters. If the definition (key) is not found for a template, an error message is returned. In case of an endpoint definition like the following:

PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<some api key>"
    },
    "url": "...endpoints.huggingface.cloud/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": "{\"input\": ${input}}",
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      }
    }
  }
}

To replace `${api_key}`, the secret_parameters and task_settings are checked for a key named api_key.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :custom_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_custom.rb', line 77

def put_custom(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_custom' }

  defined_params = [:task_type, :custom_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:custom_inference_id]
    raise ArgumentError,
          "Required argument 'custom_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _custom_inference_id = arguments.delete(:custom_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_custom_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
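
The endpoint definition from the description above could be registered through this method roughly as follows; the values are the same placeholders used in that example:

client.inference.put_custom(
  task_type: 'text_embedding',
  custom_inference_id: 'test-text-embedding',
  body: {
    service: 'custom',
    service_settings: {
      secret_parameters: { api_key: '<some api key>' },
      url: '...endpoints.huggingface.cloud/v1/embeddings',
      headers: {
        'Authorization' => 'Bearer ${api_key}',
        'Content-Type' => 'application/json'
      },
      request: '{"input": ${input}}',
      response: {
        json_parser: { text_embeddings: '$.data[*].embedding[*]' }
      }
    }
  }
)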

#put_deepseek(arguments = {}) ⇒ Object

Create a DeepSeek inference endpoint. Create an inference endpoint to perform an inference task with the deepseek service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :deepseek_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_deepseek.rb', line 47

def put_deepseek(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_deepseek' }

  defined_params = [:task_type, :deepseek_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:deepseek_inference_id]
    raise ArgumentError,
          "Required argument 'deepseek_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _deepseek_inference_id = arguments.delete(:deepseek_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_deepseek_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_elasticsearch(arguments = {}) ⇒ Object

Create an Elasticsearch inference endpoint. Create an inference endpoint to perform an inference task with the elasticsearch service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :elasticsearch_inference_id (String)

    The unique identifier of the inference endpoint. It must not match the model_id. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled, the human-readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be pretty-formatted. Use this option only for debugging.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_elasticsearch.rb', line 48

def put_elasticsearch(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_elasticsearch' }

  defined_params = [:task_type, :elasticsearch_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:elasticsearch_inference_id]
    raise ArgumentError,
          "Required argument 'elasticsearch_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _elasticsearch_inference_id = arguments.delete(:elasticsearch_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_elasticsearch_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
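
As a usage sketch (assuming `client` is an initialized Elasticsearch::Client; the endpoint id and service_settings values are placeholders, not prescriptive):

  # Create a text_embedding endpoint backed by a built-in E5 model.
  client.inference.put_elasticsearch(
    task_type: 'text_embedding',
    elasticsearch_inference_id: 'my-e5-endpoint',  # must not match the model_id
    body: {
      service: 'elasticsearch',
      service_settings: {
        model_id: '.multilingual-e5-small',  # model to deploy (placeholder)
        num_allocations: 1,
        num_threads: 1
      }
    }
  )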

#put_elser(arguments = {}) ⇒ Object

Create an ELSER inference endpoint. Create an inference endpoint to perform an inference task with the elser service. You can also deploy ELSER by using the Elasticsearch inference integration.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :elser_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_elser.rb', line 48

def put_elser(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_elser' }

  defined_params = [:task_type, :elser_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]
  raise ArgumentError, "Required argument 'elser_inference_id' missing" unless arguments[:elser_inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _elser_inference_id = arguments.delete(:elser_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_elser_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
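
A minimal sketch of creating an ELSER endpoint (assuming `client` is an initialized Elasticsearch::Client; the id and allocation values are placeholders):

  # Create a sparse_embedding endpoint using the elser service.
  client.inference.put_elser(
    task_type: 'sparse_embedding',
    elser_inference_id: 'my-elser-endpoint',  # placeholder id
    body: {
      service: 'elser',
      service_settings: {
        num_allocations: 1,
        num_threads: 1
      }
    }
  )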

#put_googleaistudio(arguments = {}) ⇒ Object

Create a Google AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the googleaistudio service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :googleaistudio_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_googleaistudio.rb', line 47

def put_googleaistudio(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_googleaistudio' }

  defined_params = [:task_type, :googleaistudio_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:googleaistudio_inference_id]
    raise ArgumentError,
          "Required argument 'googleaistudio_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _googleaistudio_inference_id = arguments.delete(:googleaistudio_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_googleaistudio_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_googlevertexai(arguments = {}) ⇒ Object

Create a Google Vertex AI inference endpoint. Create an inference endpoint to perform an inference task with the googlevertexai service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :googlevertexai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_googlevertexai.rb', line 47

def put_googlevertexai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_googlevertexai' }

  defined_params = [:task_type, :googlevertexai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:googlevertexai_inference_id]
    raise ArgumentError,
          "Required argument 'googlevertexai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _googlevertexai_inference_id = arguments.delete(:googlevertexai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_googlevertexai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_groq(arguments = {}) ⇒ Object

Create a Groq inference endpoint. Create an inference endpoint to perform an inference task with the groq service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :groq_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_groq.rb', line 47

def put_groq(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_groq' }

  defined_params = [:task_type, :groq_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]
  raise ArgumentError, "Required argument 'groq_inference_id' missing" unless arguments[:groq_inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _groq_inference_id = arguments.delete(:groq_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_groq_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_hugging_face(arguments = {}) ⇒ Object

Create a Hugging Face inference endpoint. Create an inference endpoint to perform an inference task with the hugging_face service. Supported tasks include: text_embedding, completion, and chat_completion. To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use. For Elastic’s text_embedding task: the selected model must support the `Sentence Embeddings` task. On the new endpoint creation page, select the `Sentence Embeddings` task under the `Advanced Configuration` section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the text_embedding task:

  • all-MiniLM-L6-v2

  • all-MiniLM-L12-v2

  • all-mpnet-base-v2

  • e5-base-v2

  • e5-small-v2

  • multilingual-e5-base

  • multilingual-e5-small

For Elastic’s chat_completion and completion tasks: the selected model must support the `Text Generation` task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for `Text Generation`. When creating a dedicated endpoint, select the `Text Generation` task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure that it supports the OpenAI API and that its URL includes the /v1/chat/completions path. Then copy the full endpoint URL for use. Recommended models for the chat_completion and completion tasks:

  • Mistral-7B-Instruct-v0.2

  • QwQ-32B

  • Phi-3-mini-128k-instruct

For Elastic’s rerank task: the selected model must support the sentence-ranking task and expose the OpenAI API. Hugging Face currently supports only dedicated (not serverless) endpoints for rerank. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the rerank task:

  • bge-reranker-base

  • jina-reranker-v1-turbo-en-GGUF

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :huggingface_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_hugging_face.rb', line 75

def put_hugging_face(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_hugging_face' }

  defined_params = [:task_type, :huggingface_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:huggingface_inference_id]
    raise ArgumentError,
          "Required argument 'huggingface_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _huggingface_inference_id = arguments.delete(:huggingface_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_huggingface_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
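
A usage sketch for a text_embedding endpoint (assuming `client` is an initialized Elasticsearch::Client; the token and URL are placeholders for the values generated when you create the Hugging Face endpoint):

  # Create a text_embedding endpoint backed by a Hugging Face Inference Endpoint.
  client.inference.put_hugging_face(
    task_type: 'text_embedding',
    huggingface_inference_id: 'my-hf-embeddings',  # placeholder id
    body: {
      service: 'hugging_face',
      service_settings: {
        api_key: 'HF_ACCESS_TOKEN',                                  # placeholder
        url: 'https://<your-endpoint>.endpoints.huggingface.cloud'   # copied endpoint URL
      }
    }
  )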

#put_jinaai(arguments = {}) ⇒ Object

Create a JinaAI inference endpoint. Create an inference endpoint to perform an inference task with the jinaai service. To review the available rerank models, refer to <jina.ai/reranker>. To review the available text_embedding models, refer to <jina.ai/embeddings/>.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :jinaai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_jinaai.rb', line 49

def put_jinaai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_jinaai' }

  defined_params = [:task_type, :jinaai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:jinaai_inference_id]
    raise ArgumentError,
          "Required argument 'jinaai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _jinaai_inference_id = arguments.delete(:jinaai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_jinaai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_llama(arguments = {}) ⇒ Object

Create a Llama inference endpoint. Create an inference endpoint to perform an inference task with the llama service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :llama_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_llama.rb', line 47

def put_llama(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_llama' }

  defined_params = [:task_type, :llama_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]
  raise ArgumentError, "Required argument 'llama_inference_id' missing" unless arguments[:llama_inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _llama_inference_id = arguments.delete(:llama_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_llama_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_mistral(arguments = {}) ⇒ Object

Create a Mistral inference endpoint. Create an inference endpoint to perform an inference task with the mistral service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :mistral_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_mistral.rb', line 47

def put_mistral(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_mistral' }

  defined_params = [:task_type, :mistral_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:mistral_inference_id]
    raise ArgumentError,
          "Required argument 'mistral_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _mistral_inference_id = arguments.delete(:mistral_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_mistral_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_nvidia(arguments = {}) ⇒ Object

Create an Nvidia inference endpoint. Create an inference endpoint to perform an inference task with the nvidia service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API. (Required)

  • :nvidia_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_nvidia.rb', line 48

def put_nvidia(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_nvidia' }

  defined_params = [:task_type, :nvidia_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:nvidia_inference_id]
    raise ArgumentError,
          "Required argument 'nvidia_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _nvidia_inference_id = arguments.delete(:nvidia_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_nvidia_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_openai(arguments = {}) ⇒ Object

Create an OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the openai service or openai compatible APIs.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API. (Required)

  • :openai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_openai.rb', line 48

def put_openai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_openai' }

  defined_params = [:task_type, :openai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:openai_inference_id]
    raise ArgumentError,
          "Required argument 'openai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _openai_inference_id = arguments.delete(:openai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_openai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
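
A usage sketch (assuming `client` is an initialized Elasticsearch::Client; the key and model name are placeholders, not a definitive schema):

  # Create a text_embedding endpoint using the openai service.
  client.inference.put_openai(
    task_type: 'text_embedding',
    openai_inference_id: 'my-openai-embeddings',  # placeholder id
    body: {
      service: 'openai',
      service_settings: {
        api_key: 'OPENAI_API_KEY',           # placeholder
        model_id: 'text-embedding-3-small'   # placeholder model name
      }
    }
  )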

#put_openshift_ai(arguments = {}) ⇒ Object

Create an OpenShift AI inference endpoint. Create an inference endpoint to perform an inference task with the openshift_ai service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API. (Required)

  • :openshiftai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_openshift_ai.rb', line 48

def put_openshift_ai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_openshift_ai' }

  defined_params = [:task_type, :openshiftai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:openshiftai_inference_id]
    raise ArgumentError,
          "Required argument 'openshiftai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _openshiftai_inference_id = arguments.delete(:openshiftai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_openshiftai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_voyageai(arguments = {}) ⇒ Object

Create a VoyageAI inference endpoint. Create an inference endpoint to perform an inference task with the voyageai service. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :voyageai_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_voyageai.rb', line 48

def put_voyageai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_voyageai' }

  defined_params = [:task_type, :voyageai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:voyageai_inference_id]
    raise ArgumentError,
          "Required argument 'voyageai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _voyageai_inference_id = arguments.delete(:voyageai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_voyageai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end

#put_watsonx(arguments = {}) ⇒ Object

Create a Watsonx inference endpoint. Create an inference endpoint to perform an inference task with the watsonxai service. You need an IBM Cloud Databases for Elasticsearch deployment to use the watsonxai inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :task_type (String)

    The type of the inference task that the model will perform. (Required)

  • :watsonx_inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference endpoint to be created. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/put_watsonx.rb', line 49

def put_watsonx(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_watsonx' }

  defined_params = [:task_type, :watsonx_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:watsonx_inference_id]
    raise ArgumentError,
          "Required argument 'watsonx_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _watsonx_inference_id = arguments.delete(:watsonx_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_watsonx_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
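
A sketch only, assuming `client` is an initialized Elasticsearch::Client; every service_settings key below (api_key, url, model_id, project_id, api_version) is an assumed, illustrative name with placeholder values:

  # Hypothetical: create a text_embedding endpoint using the watsonxai service.
  client.inference.put_watsonx(
    task_type: 'text_embedding',
    watsonx_inference_id: 'my-watsonx-embeddings',  # placeholder id
    body: {
      service: 'watsonxai',
      service_settings: {
        api_key: 'WATSONX_API_KEY',        # assumed setting name
        url: 'https://<watsonx-url>',      # placeholder
        model_id: '<model-id>',            # placeholder
        project_id: '<project-id>',        # placeholder
        api_version: '<api-version>'       # placeholder
      }
    }
  )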

#rerank(arguments = {}) ⇒ Object

Perform reranking inference on the service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The unique identifier for the inference endpoint. (Required)

  • :timeout (Time)

    The amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/rerank.rb', line 45

def rerank(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.rerank' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/rerank/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
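
A usage sketch (assuming `client` is an initialized Elasticsearch::Client and a rerank endpoint with the placeholder id already exists; the body shape shown is illustrative):

  # Rerank candidate documents against a query using an existing rerank endpoint.
  response = client.inference.rerank(
    inference_id: 'my-rerank-endpoint',  # placeholder id
    body: {
      query: 'What is Elasticsearch?',
      input: [
        'Elasticsearch is a distributed search and analytics engine.',
        'Kibana is a visualization and management UI.'
      ]
    }
  )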

#sparse_embedding(arguments = {}) ⇒ Object

Perform sparse embedding inference on the service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The inference Id (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/sparse_embedding.rb', line 45

def sparse_embedding(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.sparse_embedding' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/sparse_embedding/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
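
A minimal sketch (assuming `client` is an initialized Elasticsearch::Client and a sparse_embedding endpoint with the placeholder id already exists):

  # Generate sparse embeddings for a piece of text.
  response = client.inference.sparse_embedding(
    inference_id: 'my-elser-endpoint',  # placeholder id
    body: { input: 'The quick brown fox jumps over the lazy dog' }
  )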

#stream_completion(arguments = {}) ⇒ Object

Perform streaming completion inference on the service. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. This API requires the monitor_inference cluster privilege (the built-in inference_admin and inference_user roles grant this privilege). You must use a client that supports streaming.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The unique identifier for the inference endpoint. (Required)

  • :timeout (Time)

    The amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/stream_completion.rb', line 49

def stream_completion(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.stream_completion' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/completion/#{Utils.listify(_inference_id)}/_stream"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
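
A sketch of the call shape (assuming `client` is an initialized Elasticsearch::Client whose transport supports streaming responses, and that a completion endpoint with the placeholder id exists):

  # Stream a completion; the answer is delivered incrementally, so the
  # client/transport in use must be able to consume a streamed response.
  client.inference.stream_completion(
    inference_id: 'my-completion-endpoint',  # placeholder id
    body: { input: 'Write a short haiku about search engines' }
  )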

#text_embedding(arguments = {}) ⇒ Object

Perform text embedding inference on the service.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The inference Id (Required)

  • :timeout (Time)

    Specifies the amount of time to wait for the inference request to complete. Server default: 30s.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    request body

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/text_embedding.rb', line 45

def text_embedding(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.text_embedding' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/text_embedding/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
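
A minimal sketch (assuming `client` is an initialized Elasticsearch::Client and a text_embedding endpoint with the placeholder id already exists):

  # Generate dense text embeddings for a piece of text.
  response = client.inference.text_embedding(
    inference_id: 'my-e5-endpoint',  # placeholder id
    body: { input: 'The quick brown fox jumps over the lazy dog' }
  )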

#update(arguments = {}) ⇒ Object

Update an inference endpoint. Modify task_settings, secrets (within service_settings), or num_allocations for an inference endpoint, depending on the specific endpoint service and task_type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.

Parameters:

  • arguments (Hash) (defaults to: {})

    a customizable set of options

Options Hash (arguments):

  • :inference_id (String)

    The unique identifier of the inference endpoint. (Required)

  • :task_type (String)

    The type of inference task that the model performs.

  • :error_trace (Boolean)

    When set to true Elasticsearch will include the full stack trace of errors when they occur.

  • :filter_path (String, Array<String>)

    Comma-separated list of filters in dot notation which reduce the response returned by Elasticsearch.

  • :human (Boolean)

    When set to true will return statistics in a format suitable for humans. For example `"exists_time": "1h"` for humans and `"exists_time_in_millis": 3600000` for computers. When disabled the human readable values will be omitted. This makes sense for responses being consumed only by machines.

  • :pretty (Boolean)

    If set to true the returned JSON will be "pretty-formatted". Use this option for debugging only.

  • :headers (Hash)

    Custom HTTP headers

  • :body (Hash)

    inference_config

Raises:

  • (ArgumentError)

See Also:



# File 'lib/elasticsearch/api/actions/inference/update.rb', line 49

def update(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.update' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  _task_type = arguments.delete(:task_type)

  method = Elasticsearch::API::HTTP_PUT
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}/_update"
           else
             "_inference/#{Utils.listify(_inference_id)}/_update"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
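
A sketch only, assuming `client` is an initialized Elasticsearch::Client; which fields may be updated depends on the endpoint's service and task_type, and the body below is an illustrative placeholder, not a definitive payload:

  # Hypothetical: rotate the stored API key on an existing endpoint.
  client.inference.update(
    inference_id: 'my-openai-embeddings',  # placeholder id
    task_type: 'text_embedding',           # optional; scopes the request path to the task type
    body: {
      service_settings: { api_key: 'NEW_OPENAI_API_KEY' }  # assumed update payload shape
    }
  )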