Module: Elasticsearch::API::Inference::Actions
Defined in:
lib/elasticsearch/api/actions/inference/get.rb,
lib/elasticsearch/api/actions/inference/put.rb,
lib/elasticsearch/api/actions/inference/delete.rb,
lib/elasticsearch/api/actions/inference/rerank.rb,
lib/elasticsearch/api/actions/inference/update.rb,
lib/elasticsearch/api/actions/inference/inference.rb,
lib/elasticsearch/api/actions/inference/put_elser.rb,
lib/elasticsearch/api/actions/inference/completion.rb,
lib/elasticsearch/api/actions/inference/put_cohere.rb,
lib/elasticsearch/api/actions/inference/put_custom.rb,
lib/elasticsearch/api/actions/inference/put_jinaai.rb,
lib/elasticsearch/api/actions/inference/put_openai.rb,
lib/elasticsearch/api/actions/inference/put_mistral.rb,
lib/elasticsearch/api/actions/inference/put_watsonx.rb,
lib/elasticsearch/api/actions/inference/put_deepseek.rb,
lib/elasticsearch/api/actions/inference/put_voyageai.rb,
lib/elasticsearch/api/actions/inference/put_anthropic.rb,
lib/elasticsearch/api/actions/inference/text_embedding.rb,
lib/elasticsearch/api/actions/inference/put_azureopenai.rb,
lib/elasticsearch/api/actions/inference/put_alibabacloud.rb,
lib/elasticsearch/api/actions/inference/put_hugging_face.rb,
lib/elasticsearch/api/actions/inference/sparse_embedding.rb,
lib/elasticsearch/api/actions/inference/put_amazonbedrock.rb,
lib/elasticsearch/api/actions/inference/put_azureaistudio.rb,
lib/elasticsearch/api/actions/inference/put_elasticsearch.rb,
lib/elasticsearch/api/actions/inference/stream_completion.rb,
lib/elasticsearch/api/actions/inference/put_googleaistudio.rb,
lib/elasticsearch/api/actions/inference/put_googlevertexai.rb,
lib/elasticsearch/api/actions/inference/put_amazonsagemaker.rb,
lib/elasticsearch/api/actions/inference/chat_completion_unified.rb
Instance Method Summary
- #chat_completion_unified(arguments = {}) ⇒ Object
  Perform chat completion inference. The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
- #completion(arguments = {}) ⇒ Object
  Perform completion inference on the service.
- #delete(arguments = {}) ⇒ Object
  Delete an inference endpoint.
- #get(arguments = {}) ⇒ Object
  Get an inference endpoint.
- #inference(arguments = {}) ⇒ Object
  Perform inference on the service.
- #put(arguments = {}) ⇒ Object
  Create an inference endpoint.
- #put_alibabacloud(arguments = {}) ⇒ Object
  Create an AlibabaCloud AI Search inference endpoint.
- #put_amazonbedrock(arguments = {}) ⇒ Object
  Create an Amazon Bedrock inference endpoint.
- #put_amazonsagemaker(arguments = {}) ⇒ Object
  Create an Amazon SageMaker inference endpoint.
- #put_anthropic(arguments = {}) ⇒ Object
  Create an Anthropic inference endpoint.
- #put_azureaistudio(arguments = {}) ⇒ Object
  Create an Azure AI Studio inference endpoint.
- #put_azureopenai(arguments = {}) ⇒ Object
  Create an Azure OpenAI inference endpoint.
- #put_cohere(arguments = {}) ⇒ Object
  Create a Cohere inference endpoint.
- #put_custom(arguments = {}) ⇒ Object
  Create a custom inference endpoint.
- #put_deepseek(arguments = {}) ⇒ Object
  Create a DeepSeek inference endpoint.
- #put_elasticsearch(arguments = {}) ⇒ Object
  Create an Elasticsearch inference endpoint.
- #put_elser(arguments = {}) ⇒ Object
  Create an ELSER inference endpoint.
- #put_googleaistudio(arguments = {}) ⇒ Object
  Create a Google AI Studio inference endpoint.
- #put_googlevertexai(arguments = {}) ⇒ Object
  Create a Google Vertex AI inference endpoint.
- #put_hugging_face(arguments = {}) ⇒ Object
  Create a Hugging Face inference endpoint.
- #put_jinaai(arguments = {}) ⇒ Object
  Create a JinaAI inference endpoint.
- #put_mistral(arguments = {}) ⇒ Object
  Create a Mistral inference endpoint.
- #put_openai(arguments = {}) ⇒ Object
  Create an OpenAI inference endpoint.
- #put_voyageai(arguments = {}) ⇒ Object
  Create a VoyageAI inference endpoint.
- #put_watsonx(arguments = {}) ⇒ Object
  Create a Watsonx inference endpoint.
- #rerank(arguments = {}) ⇒ Object
  Perform reranking inference on the service.
- #sparse_embedding(arguments = {}) ⇒ Object
  Perform sparse embedding inference on the service.
- #stream_completion(arguments = {}) ⇒ Object
  Perform streaming inference.
- #text_embedding(arguments = {}) ⇒ Object
  Perform text embedding inference on the service.
- #update(arguments = {}) ⇒ Object
  Update an inference endpoint.
Instance Method Details
#chat_completion_unified(arguments = {}) ⇒ Object
Perform chat completion inference. The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation. It only works with the `chat_completion` task type for the `openai` and `elastic` inference services. NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming. The Chat completion inference API and the Stream inference API differ in their response structure and capabilities. The Chat completion inference API provides more comprehensive customization options through more fields and function calling support. If you use the `openai`, `hugging_face`, or the `elastic` service, use the Chat completion inference API.
# File 'lib/elasticsearch/api/actions/inference/chat_completion_unified.rb', line 51

def chat_completion_unified(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.chat_completion_unified' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/chat_completion/#{Utils.listify(_inference_id)}/_stream"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
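For illustration only (not part of the original reference): a minimal usage sketch. It assumes a cluster at a hypothetical URL and a previously created `chat_completion` endpoint named `my-chat-endpoint`; the OpenAI-style `messages` body follows the task type described above. Because this task type only streams, the underlying transport must support streaming responses.

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

# Stream an answer from a pre-created `chat_completion` endpoint.
response = client.inference.chat_completion_unified(
  inference_id: 'my-chat-endpoint', # hypothetical endpoint ID
  body: {
    messages: [
      { role: 'user', content: 'Say hello in one short sentence.' }
    ]
  }
)
puts response.body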
#completion(arguments = {}) ⇒ Object
Perform completion inference on the service.
# File 'lib/elasticsearch/api/actions/inference/completion.rb', line 45

def completion(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.completion' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/completion/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
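A hedged usage sketch (assumptions: a local cluster and a hypothetical `completion` endpoint ID; `input` is the standard inference request field):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

response = client.inference.completion(
  inference_id: 'my-completion-endpoint', # hypothetical endpoint ID
  body: { input: 'What is Elastic?' }
)
puts response.body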
#delete(arguments = {}) ⇒ Object
Delete an inference endpoint.
# File 'lib/elasticsearch/api/actions/inference/delete.rb', line 46

def delete(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.delete' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = nil

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_DELETE
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
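A usage sketch mirroring the two path forms in the method above (the endpoint ID is hypothetical):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

# DELETE _inference/my-endpoint
client.inference.delete(inference_id: 'my-endpoint')

# DELETE _inference/text_embedding/my-endpoint (task type included in the path)
client.inference.delete(task_type: 'text_embedding', inference_id: 'my-endpoint')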
#get(arguments = {}) ⇒ Object
Get an inference endpoint.
# File 'lib/elasticsearch/api/actions/inference/get.rb', line 44

def get(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.get' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = nil

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_GET
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           elsif _inference_id
             "_inference/#{Utils.listify(_inference_id)}"
           else
             '_inference'
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
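A usage sketch covering the three path forms in the method above (IDs are hypothetical):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

client.inference.get                                # GET _inference (all endpoints)
client.inference.get(inference_id: 'my-endpoint')   # GET _inference/my-endpoint
client.inference.get(task_type: 'rerank',
                     inference_id: 'my-endpoint')   # GET _inference/rerank/my-endpoint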
#inference(arguments = {}) ⇒ Object
Perform inference on the service. This API enables you to use machine learning models to perform specific tasks on data that you provide as an input. It returns a response with the results of the tasks. The inference endpoint you use can perform the one specific task that was defined when the endpoint was created with the create inference API. For details about using this API with a service, such as Amazon Bedrock, Anthropic, or Hugging Face, refer to the service-specific documentation.
# File 'lib/elasticsearch/api/actions/inference/inference.rb', line 50

def inference(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.inference' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
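A hedged usage sketch (the endpoint ID is hypothetical; `task_type` is optional and, when given, is added to the path as in the method above):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

response = client.inference.inference(
  task_type: 'text_embedding',           # optional
  inference_id: 'my-embedding-endpoint', # hypothetical endpoint ID
  body: { input: 'Embed this sentence.' }
)
puts response.body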
#put(arguments = {}) ⇒ Object
Create an inference endpoint. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Mistral, Azure OpenAI, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. The following integrations are available through the inference API. You can find the available task types next to the integration name:
- AlibabaCloud AI Search (`completion`, `rerank`, `sparse_embedding`, `text_embedding`)
- Amazon Bedrock (`completion`, `text_embedding`)
- Amazon SageMaker (`chat_completion`, `completion`, `rerank`, `sparse_embedding`, `text_embedding`)
- Anthropic (`completion`)
- Azure AI Studio (`completion`, `text_embedding`)
- Azure OpenAI (`completion`, `text_embedding`)
- Cohere (`completion`, `rerank`, `text_embedding`)
- DeepSeek (`completion`, `chat_completion`)
- Elasticsearch (`rerank`, `sparse_embedding`, `text_embedding` - this service is for built-in models and models uploaded through Eland)
- ELSER (`sparse_embedding`)
- Google AI Studio (`completion`, `text_embedding`)
- Google Vertex AI (`rerank`, `text_embedding`)
- Hugging Face (`chat_completion`, `completion`, `rerank`, `text_embedding`)
- Mistral (`chat_completion`, `completion`, `text_embedding`)
- OpenAI (`chat_completion`, `completion`, `text_embedding`)
- VoyageAI (`text_embedding`, `rerank`)
- Watsonx inference integration (`text_embedding`)
- JinaAI (`text_embedding`, `rerank`)
# File 'lib/elasticsearch/api/actions/inference/put.rb', line 68

def put(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}"
           else
             "_inference/#{Utils.listify(_inference_id)}"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
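A minimal creation sketch, assuming a local cluster. The endpoint ID is hypothetical, and the body shown (an `elasticsearch` service with the built-in `.multilingual-e5-small` model, one of the E5 models mentioned above) is illustrative, not the only valid shape:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

# PUT _inference/text_embedding/my-embedding-endpoint
client.inference.put(
  task_type: 'text_embedding',           # optional path segment
  inference_id: 'my-embedding-endpoint', # required; hypothetical ID
  body: {
    service: 'elasticsearch',            # one of the integrations listed above
    service_settings: {
      model_id: '.multilingual-e5-small', # illustrative built-in model
      num_allocations: 1,
      num_threads: 1
    }
  }
)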
#put_alibabacloud(arguments = {}) ⇒ Object
Create an AlibabaCloud AI Search inference endpoint. Create an inference endpoint to perform an inference task with the `alibabacloud-ai-search` service.
# File 'lib/elasticsearch/api/actions/inference/put_alibabacloud.rb', line 47

def put_alibabacloud(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_alibabacloud' }

  defined_params = [:task_type, :alibabacloud_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:alibabacloud_inference_id]
    raise ArgumentError, "Required argument 'alibabacloud_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _alibabacloud_inference_id = arguments.delete(:alibabacloud_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_alibabacloud_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_amazonbedrock(arguments = {}) ⇒ Object
Create an Amazon Bedrock inference endpoint. Create an inference endpoint to perform an inference task with the `amazonbedrock` service.
# File 'lib/elasticsearch/api/actions/inference/put_amazonbedrock.rb', line 47

def put_amazonbedrock(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_amazonbedrock' }

  defined_params = [:task_type, :amazonbedrock_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:amazonbedrock_inference_id]
    raise ArgumentError, "Required argument 'amazonbedrock_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _amazonbedrock_inference_id = arguments.delete(:amazonbedrock_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_amazonbedrock_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_amazonsagemaker(arguments = {}) ⇒ Object
Create an Amazon SageMaker inference endpoint. Create an inference endpoint to perform an inference task with the `amazon_sagemaker` service.
# File 'lib/elasticsearch/api/actions/inference/put_amazonsagemaker.rb', line 47

def put_amazonsagemaker(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_amazonsagemaker' }

  defined_params = [:task_type, :amazonsagemaker_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:amazonsagemaker_inference_id]
    raise ArgumentError, "Required argument 'amazonsagemaker_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _amazonsagemaker_inference_id = arguments.delete(:amazonsagemaker_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_amazonsagemaker_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_anthropic(arguments = {}) ⇒ Object
Create an Anthropic inference endpoint. Create an inference endpoint to perform an inference task with the `anthropic` service.
# File 'lib/elasticsearch/api/actions/inference/put_anthropic.rb', line 48

def put_anthropic(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_anthropic' }

  defined_params = [:task_type, :anthropic_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:anthropic_inference_id]
    raise ArgumentError, "Required argument 'anthropic_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _anthropic_inference_id = arguments.delete(:anthropic_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_anthropic_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_azureaistudio(arguments = {}) ⇒ Object
Create an Azure AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the `azureaistudio` service.
# File 'lib/elasticsearch/api/actions/inference/put_azureaistudio.rb', line 47

def put_azureaistudio(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_azureaistudio' }

  defined_params = [:task_type, :azureaistudio_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:azureaistudio_inference_id]
    raise ArgumentError, "Required argument 'azureaistudio_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _azureaistudio_inference_id = arguments.delete(:azureaistudio_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_azureaistudio_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_azureopenai(arguments = {}) ⇒ Object
Create an Azure OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the `azureopenai` service. For the lists of chat completion and embeddings models that you can choose from in your Azure OpenAI deployment, refer to the Azure models documentation.
# File 'lib/elasticsearch/api/actions/inference/put_azureopenai.rb', line 52

def put_azureopenai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_azureopenai' }

  defined_params = [:task_type, :azureopenai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:azureopenai_inference_id]
    raise ArgumentError, "Required argument 'azureopenai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _azureopenai_inference_id = arguments.delete(:azureopenai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_azureopenai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_cohere(arguments = {}) ⇒ Object
Create a Cohere inference endpoint. Create an inference endpoint to perform an inference task with the `cohere` service.
# File 'lib/elasticsearch/api/actions/inference/put_cohere.rb', line 47

def put_cohere(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_cohere' }

  defined_params = [:task_type, :cohere_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:cohere_inference_id]
    raise ArgumentError, "Required argument 'cohere_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _cohere_inference_id = arguments.delete(:cohere_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_cohere_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_custom(arguments = {}) ⇒ Object
Create a custom inference endpoint. The custom service gives more control over how to interact with external inference services that aren't explicitly supported through dedicated integrations. The custom service gives you the ability to define the headers, URL, query parameters, request body, and secrets. The custom service supports the template replacement functionality, which enables you to define a template that can be replaced with the value associated with that key. Templates are portions of a string that start with `${` and end with `}`. The parameters `secret_parameters` and `task_settings` are checked for keys for template replacement. Template replacement is supported in the `request`, `headers`, `url`, and `query_parameters`. If the definition (key) is not found for a template, an error message is returned. In case of an endpoint definition like the following:

PUT _inference/text_embedding/test-text-embedding
{
  "service": "custom",
  "service_settings": {
    "secret_parameters": {
      "api_key": "<some api key>"
    },
    "url": "...endpoints.huggingface.cloud/v1/embeddings",
    "headers": {
      "Authorization": "Bearer ${api_key}",
      "Content-Type": "application/json"
    },
    "request": "{\"input\": ${input}}",
    "response": {
      "json_parser": {
        "text_embeddings": "$.data[*].embedding[*]"
      }
    }
  }
}

To replace `${api_key}`, the `secret_parameters` and `task_settings` are checked for a key named `api_key`.
# File 'lib/elasticsearch/api/actions/inference/put_custom.rb', line 77

def put_custom(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_custom' }

  defined_params = [:task_type, :custom_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:custom_inference_id]
    raise ArgumentError, "Required argument 'custom_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _custom_inference_id = arguments.delete(:custom_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_custom_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
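As a sketch, the endpoint definition shown in the description above can be expressed through this client as follows; the URL and API key are the placeholders from that example, and the `${...}` templates are deliberately kept in single-quoted strings so Ruby does not interpolate them:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

client.inference.put_custom(
  task_type: 'text_embedding',
  custom_inference_id: 'test-text-embedding',
  body: {
    service: 'custom',
    service_settings: {
      secret_parameters: { api_key: '<some api key>' },
      url: '...endpoints.huggingface.cloud/v1/embeddings',
      headers: {
        'Authorization' => 'Bearer ${api_key}',   # `${api_key}` is a template, not Ruby interpolation
        'Content-Type' => 'application/json'
      },
      request: '{"input": ${input}}',
      response: {
        json_parser: { text_embeddings: '$.data[*].embedding[*]' }
      }
    }
  }
)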
#put_deepseek(arguments = {}) ⇒ Object
Create a DeepSeek inference endpoint. Create an inference endpoint to perform an inference task with the `deepseek` service.
# File 'lib/elasticsearch/api/actions/inference/put_deepseek.rb', line 47

def put_deepseek(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_deepseek' }

  defined_params = [:task_type, :deepseek_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:deepseek_inference_id]
    raise ArgumentError, "Required argument 'deepseek_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _deepseek_inference_id = arguments.delete(:deepseek_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_deepseek_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_elasticsearch(arguments = {}) ⇒ Object
Create an Elasticsearch inference endpoint. Create an inference endpoint to perform an inference task with the `elasticsearch` service.
# File 'lib/elasticsearch/api/actions/inference/put_elasticsearch.rb', line 48

def put_elasticsearch(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_elasticsearch' }

  defined_params = [:task_type, :elasticsearch_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:elasticsearch_inference_id]
    raise ArgumentError, "Required argument 'elasticsearch_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _elasticsearch_inference_id = arguments.delete(:elasticsearch_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_elasticsearch_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_elser(arguments = {}) ⇒ Object
Create an ELSER inference endpoint. Create an inference endpoint to perform an inference task with the `elser` service. You can also deploy ELSER by using the Elasticsearch inference integration.
# File 'lib/elasticsearch/api/actions/inference/put_elser.rb', line 48

def put_elser(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_elser' }

  defined_params = [:task_type, :elser_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]
  raise ArgumentError, "Required argument 'elser_inference_id' missing" unless arguments[:elser_inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _elser_inference_id = arguments.delete(:elser_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_elser_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_googleaistudio(arguments = {}) ⇒ Object
Create a Google AI Studio inference endpoint. Create an inference endpoint to perform an inference task with the `googleaistudio` service.
# File 'lib/elasticsearch/api/actions/inference/put_googleaistudio.rb', line 47

def put_googleaistudio(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_googleaistudio' }

  defined_params = [:task_type, :googleaistudio_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:googleaistudio_inference_id]
    raise ArgumentError, "Required argument 'googleaistudio_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _googleaistudio_inference_id = arguments.delete(:googleaistudio_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_googleaistudio_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_googlevertexai(arguments = {}) ⇒ Object
Create a Google Vertex AI inference endpoint. Create an inference endpoint to perform an inference task with the `googlevertexai` service.
# File 'lib/elasticsearch/api/actions/inference/put_googlevertexai.rb', line 47

def put_googlevertexai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_googlevertexai' }

  defined_params = [:task_type, :googlevertexai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:googlevertexai_inference_id]
    raise ArgumentError, "Required argument 'googlevertexai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _googlevertexai_inference_id = arguments.delete(:googlevertexai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_googlevertexai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_hugging_face(arguments = {}) ⇒ Object
Create a Hugging Face inference endpoint. Create an inference endpoint to perform an inference task with the `hugging_face` service. Supported tasks include: `text_embedding`, `completion`, and `chat_completion`. To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use. For Elastic's `text_embedding` task: the selected model must support the `Sentence Embeddings` task. On the new endpoint creation page, select the `Sentence Embeddings` task under the `Advanced Configuration` section. After the endpoint has initialized, copy the generated endpoint URL. Recommended models for the `text_embedding` task:
- `all-MiniLM-L6-v2`
- `all-MiniLM-L12-v2`
- `all-mpnet-base-v2`
- `e5-base-v2`
- `e5-small-v2`
- `multilingual-e5-base`
- `multilingual-e5-small`
For Elastic's `chat_completion` and `completion` tasks: the selected model must support the `Text Generation` task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for `Text Generation`. When creating a dedicated endpoint, select the `Text Generation` task. After the endpoint is initialized (for dedicated) or ready (for serverless), ensure it supports the OpenAI API and includes the `/v1/chat/completions` part in its URL. Then, copy the full endpoint URL for use. Recommended models for the `chat_completion` and `completion` tasks:
- `Mistral-7B-Instruct-v0.2`
- `QwQ-32B`
- `Phi-3-mini-128k-instruct`
For Elastic's `rerank` task: the selected model must support the `sentence-ranking` task and expose the OpenAI API. Hugging Face supports only dedicated (not serverless) endpoints for `Rerank` so far. After the endpoint is initialized, copy the full endpoint URL for use. Tested models for the `rerank` task:
- `bge-reranker-base`
- `jina-reranker-v1-turbo-en-GGUF`
# File 'lib/elasticsearch/api/actions/inference/put_hugging_face.rb', line 75

def put_hugging_face(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_hugging_face' }

  defined_params = [:task_type, :huggingface_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:huggingface_inference_id]
    raise ArgumentError, "Required argument 'huggingface_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _huggingface_inference_id = arguments.delete(:huggingface_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_huggingface_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
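A hedged sketch, assuming you copied a Hugging Face endpoint URL as described above; the endpoint ID and all concrete values are placeholders, and the `url`/`api_key` service settings follow the workflow in the description rather than an exhaustive schema:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

client.inference.put_hugging_face(
  task_type: 'text_embedding',
  huggingface_inference_id: 'my-hf-endpoint', # hypothetical endpoint ID
  body: {
    service: 'hugging_face',
    service_settings: {
      api_key: '<huggingface access token>', # placeholder
      url: '<your endpoint URL>'             # copied from the HF endpoint page
    }
  }
)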
#put_jinaai(arguments = {}) ⇒ Object
Create a JinaAI inference endpoint. Create an inference endpoint to perform an inference task with the `jinaai` service. To review the available `rerank` models, refer to <jina.ai/reranker>. To review the available `text_embedding` models, refer to <jina.ai/embeddings/>.
# File 'lib/elasticsearch/api/actions/inference/put_jinaai.rb', line 49

def put_jinaai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_jinaai' }

  defined_params = [:task_type, :jinaai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:jinaai_inference_id]
    raise ArgumentError, "Required argument 'jinaai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _jinaai_inference_id = arguments.delete(:jinaai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_jinaai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_mistral(arguments = {}) ⇒ Object
Create a Mistral inference endpoint. Create an inference endpoint to perform an inference task with the `mistral` service.
# File 'lib/elasticsearch/api/actions/inference/put_mistral.rb', line 47

def put_mistral(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_mistral' }

  defined_params = [:task_type, :mistral_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:mistral_inference_id]
    raise ArgumentError, "Required argument 'mistral_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _mistral_inference_id = arguments.delete(:mistral_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_mistral_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_openai(arguments = {}) ⇒ Object
Create an OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the `openai` service or `openai`-compatible APIs.
# File 'lib/elasticsearch/api/actions/inference/put_openai.rb', line 48

def put_openai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_openai' }

  defined_params = [:task_type, :openai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:openai_inference_id]
    raise ArgumentError, "Required argument 'openai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _openai_inference_id = arguments.delete(:openai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_openai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
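A hedged creation sketch for this service, representative of the other provider-specific `put_*` helpers above and below, which differ mainly in the service name and ID argument. The endpoint ID, API key, and model ID are placeholders:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

client.inference.put_openai(
  task_type: 'text_embedding',
  openai_inference_id: 'my-openai-endpoint', # hypothetical endpoint ID
  body: {
    service: 'openai',
    service_settings: {
      api_key: '<openai api key>',        # placeholder
      model_id: 'text-embedding-3-small'  # illustrative OpenAI model
    }
  }
)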
#put_voyageai(arguments = {}) ⇒ Object
Create a VoyageAI inference endpoint. Create an inference endpoint to perform an inference task with the `voyageai` service. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
# File 'lib/elasticsearch/api/actions/inference/put_voyageai.rb', line 48

def put_voyageai(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_voyageai' }

  defined_params = [:task_type, :voyageai_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:voyageai_inference_id]
    raise ArgumentError, "Required argument 'voyageai_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _voyageai_inference_id = arguments.delete(:voyageai_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_voyageai_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#put_watsonx(arguments = {}) ⇒ Object
Create a Watsonx inference endpoint. Create an inference endpoint to perform an inference task with the `watsonxai` service. You need an IBM Cloud Databases for Elasticsearch deployment to use the `watsonxai` inference service. You can provision one through the IBM catalog, the Cloud Databases CLI plug-in, the Cloud Databases API, or Terraform.
# File 'lib/elasticsearch/api/actions/inference/put_watsonx.rb', line 49

def put_watsonx(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.put_watsonx' }

  defined_params = [:task_type, :watsonx_inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'task_type' missing" unless arguments[:task_type]

  unless arguments[:watsonx_inference_id]
    raise ArgumentError, "Required argument 'watsonx_inference_id' missing"
  end

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _task_type = arguments.delete(:task_type)

  _watsonx_inference_id = arguments.delete(:watsonx_inference_id)

  method = Elasticsearch::API::HTTP_PUT
  path   = "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_watsonx_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
#rerank(arguments = {}) ⇒ Object
Perform reranking inference on the service.
# File 'lib/elasticsearch/api/actions/inference/rerank.rb', line 45

def rerank(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.rerank' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/rerank/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
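A hedged usage sketch (the endpoint ID is hypothetical; the `query`/`input` body shape follows the rerank task's request format):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

response = client.inference.rerank(
  inference_id: 'my-rerank-endpoint', # hypothetical endpoint ID
  body: {
    query: 'vector databases',
    input: ['Elasticsearch is a search engine.', 'Bananas are yellow.']
  }
)
puts response.body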
#sparse_embedding(arguments = {}) ⇒ Object
Perform sparse embedding inference on the service.
# File 'lib/elasticsearch/api/actions/inference/sparse_embedding.rb', line 45

def sparse_embedding(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.sparse_embedding' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/sparse_embedding/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
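A hedged usage sketch (the endpoint ID is hypothetical, e.g. an ELSER deployment created with #put_elser above):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

response = client.inference.sparse_embedding(
  inference_id: 'my-elser-endpoint', # hypothetical endpoint ID
  body: { input: 'These are not the droids you are looking for.' }
)
puts response.body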
#stream_completion(arguments = {}) ⇒ Object
Perform streaming inference. Get real-time responses for completion tasks by delivering answers incrementally, reducing response times during computation. This API works only with the completion task type. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs. This API requires the `monitor_inference` cluster privilege (the built-in `inference_admin` and `inference_user` roles grant this privilege). You must use a client that supports streaming.
# File 'lib/elasticsearch/api/actions/inference/stream_completion.rb', line 49

def stream_completion(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.stream_completion' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/completion/#{Utils.listify(_inference_id)}/_stream"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
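A hedged usage sketch; as noted above, the response is streamed, so the underlying transport must support streaming. The endpoint ID is hypothetical:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

# POST _inference/completion/my-completion-endpoint/_stream
client.inference.stream_completion(
  inference_id: 'my-completion-endpoint', # hypothetical endpoint ID
  body: { input: 'Explain inverted indexes briefly.' }
)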
#text_embedding(arguments = {}) ⇒ Object
Perform text embedding inference on the service.
# File 'lib/elasticsearch/api/actions/inference/text_embedding.rb', line 45

def text_embedding(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.text_embedding' }

  defined_params = [:inference_id].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  method = Elasticsearch::API::HTTP_POST
  path   = "_inference/text_embedding/#{Utils.listify(_inference_id)}"
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
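A hedged usage sketch (the endpoint ID is hypothetical; `input` is the standard inference request field):

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

response = client.inference.text_embedding(
  inference_id: 'my-embedding-endpoint', # hypothetical endpoint ID
  body: { input: 'Embed this sentence.' }
)
puts response.body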
#update(arguments = {}) ⇒ Object
Update an inference endpoint. Modify `task_settings`, secrets (within `service_settings`), or `num_allocations` for an inference endpoint, depending on the specific endpoint service and `task_type`. IMPORTANT: The inference APIs enable you to use certain services, such as built-in machine learning models (ELSER, E5), models uploaded through Eland, Cohere, OpenAI, Azure, Google AI Studio, Google Vertex AI, Anthropic, Watsonx.ai, or Hugging Face. For built-in models and models uploaded through Eland, the inference APIs offer an alternative way to use and manage trained models. However, if you do not plan to use the inference APIs to use these models or if you want to use non-NLP models, use the machine learning trained model APIs.
# File 'lib/elasticsearch/api/actions/inference/update.rb', line 49

def update(arguments = {})
  request_opts = { endpoint: arguments[:endpoint] || 'inference.update' }

  defined_params = [:inference_id, :task_type].each_with_object({}) do |variable, set_variables|
    set_variables[variable] = arguments[variable] if arguments.key?(variable)
  end
  request_opts[:defined_params] = defined_params unless defined_params.empty?

  raise ArgumentError, "Required argument 'body' missing" unless arguments[:body]
  raise ArgumentError, "Required argument 'inference_id' missing" unless arguments[:inference_id]

  arguments = arguments.clone
  headers = arguments.delete(:headers) || {}

  body = arguments.delete(:body)

  _inference_id = arguments.delete(:inference_id)

  _task_type = arguments.delete(:task_type)

  method = Elasticsearch::API::HTTP_PUT
  path   = if _task_type && _inference_id
             "_inference/#{Utils.listify(_task_type)}/#{Utils.listify(_inference_id)}/_update"
           else
             "_inference/#{Utils.listify(_inference_id)}/_update"
           end
  params = Utils.process_params(arguments)

  Elasticsearch::API::Response.new(
    perform_request(method, path, params, body, headers, request_opts)
  )
end
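A hedged usage sketch; the endpoint ID is hypothetical and the body shown (scaling `num_allocations` inside `service_settings`) is one illustrative shape of the update payload described above, not the only one:

require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200') # hypothetical cluster URL

# PUT _inference/text_embedding/my-embedding-endpoint/_update
client.inference.update(
  task_type: 'text_embedding',           # optional path segment
  inference_id: 'my-embedding-endpoint', # hypothetical endpoint ID
  body: {
    service_settings: { num_allocations: 2 } # illustrative: scale allocations
  }
)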