Class: LLM::Ollama

Inherits:
Provider
Defined in:
lib/llm/providers/ollama.rb,
lib/llm/providers/ollama/models.rb,
lib/llm/providers/ollama/error_handler.rb,
lib/llm/providers/ollama/stream_parser.rb,
lib/llm/providers/ollama/request_adapter.rb,
lib/llm/providers/ollama/response_adapter.rb

Overview

The Ollama class implements a provider for Ollama, which supports a wide range of models. Ollama is straightforward to run on your own hardware, and a number of its multi-modal models can process both images and text.

Examples:

#!/usr/bin/env ruby
require "llm"

llm = LLM.ollama(key: nil)
bot = LLM::Bot.new(llm, model: "llava")
bot.chat ["Tell me about this image", File.open("/images/parrot.png", "rb")]
bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }

Defined Under Namespace

Classes: Models

Constant Summary

HOST =
"localhost"

Instance Method Summary

Methods inherited from Provider

#audio, #chat, clients, #developer_role, #files, #images, #inspect, #moderations, #respond, #responses, #schema, #server_tool, #server_tools, #system_role, #user_role, #vector_stores, #web_search, #with

Constructor Details

#initialize ⇒ Ollama

Returns a new instance of Ollama.

Parameters:

  • key (String, nil)

    The secret key for authentication. Ollama does not require one, so nil may be passed (as in the example above)



# File 'lib/llm/providers/ollama.rb', line 31

def initialize(**)
  super(host: HOST, port: 11434, ssl: false, **)
end
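
Because the defaults above are passed before the forwarded keywords, caller-supplied host:, port: and ssl: values take precedence. A minimal sketch for pointing the client at a remote Ollama server (the hostname is illustrative):

#!/usr/bin/env ruby
require "llm"

# The host below is an illustrative placeholder, not a real endpoint.
llm = LLM.ollama(key: nil, host: "ollama.internal", port: 11434, ssl: false)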

Instance Method Details

#assistant_role ⇒ String

Returns the role of the assistant in the conversation. Usually "assistant" or "model"

Returns:

  • (String)

    Returns the role of the assistant in the conversation. Usually "assistant" or "model"



# File 'lib/llm/providers/ollama.rb', line 78

def assistant_role
  "assistant"
end
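
For example:

llm = LLM.ollama(key: nil)
llm.assistant_role #=> "assistant"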

#complete(prompt, params = {}) ⇒ LLM::Response

Provides an interface to the chat completions API

Examples:

llm = LLM.ollama(key: nil)
messages = [{role: "system", content: "Your task is to answer all of my questions"}]
res = llm.complete("5 + 2 ?", messages:)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"

Parameters:

  • prompt (String)

    The input prompt to be completed

  • params (Hash) (defaults to: {})

    The parameters to maintain throughout the conversation. Any parameter the provider supports may be included, not only those listed here.

Returns:

  • (LLM::Response)

Raises:

See Also:



# File 'lib/llm/providers/ollama.rb', line 60

def complete(prompt, params = {})
  params, stream, tools, role = normalize_complete_params(params)
  req = build_complete_request(prompt, params, role)
  res = execute(request: req, stream: stream)
  ResponseAdapter.adapt(res, type: :completion)
    .extend(Module.new { define_method(:__tools__) { tools } })
end
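
Per the params description above, provider-specific options can be passed straight through. A short sketch, assuming model: and temperature: are forwarded to Ollama as-is (support for a given option depends on the model and server):

llm = LLM.ollama(key: nil)
# temperature: is assumed to be passed through to Ollama unchanged.
res = llm.complete("Write a haiku about parrots", model: "qwen3:latest", temperature: 0.2)
print "[#{res.choices[0].role}]", res.choices[0].content, "\n"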

#default_model ⇒ String

Returns the default model for chat completions

Returns:

  • (String)

See Also:



# File 'lib/llm/providers/ollama.rb', line 86

def default_model
  "qwen3:latest"
end
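
For example:

llm = LLM.ollama(key: nil)
llm.default_model #=> "qwen3:latest"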

#embed(input, model: default_model, **params) ⇒ LLM::Response

Generates an embedding for the given input via Ollama's OpenAI-compatible embeddings endpoint

Parameters:

  • input (String, Array<String>)

    The input to embed

  • model (String) (defaults to: default_model)

    The embedding model to use

  • params (Hash)

    Other embedding parameters

Returns:

  • (LLM::Response)



# File 'lib/llm/providers/ollama.rb', line 42

def embed(input, model: default_model, **params)
  params   = {model:}.merge!(params)
  req      = Net::HTTP::Post.new("/v1/embeddings", headers)
  req.body = LLM.json.dump({input:}.merge!(params))
  res      = execute(request: req)
  ResponseAdapter.adapt(res, type: :embedding)
end
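
The implementation posts to the OpenAI-compatible /v1/embeddings endpoint, so input may be a single string or an array of strings. A minimal sketch, assuming an embedding model such as "nomic-embed-text" has already been pulled locally (the model name is illustrative):

llm = LLM.ollama(key: nil)
# "nomic-embed-text" is an assumed, locally pulled embedding model.
res = llm.embed(["roses are red", "violets are blue"], model: "nomic-embed-text")
res #=> LLM::Response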

#models ⇒ LLM::Ollama::Models

Provides an interface to Ollama's models API

Returns:

  • (LLM::Ollama::Models)

See Also:



# File 'lib/llm/providers/ollama.rb', line 72

def models
  LLM::Ollama::Models.new(self)
end
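
A sketch of enumerating the models installed on the local server, assuming LLM::Ollama::Models exposes an #all enumerator and that model objects respond to #id, in line with the other provider model APIs (both are assumptions, not confirmed by this page):

llm = LLM.ollama(key: nil)
# #all and #id are assumed accessors; consult the Models class docs.
llm.models.all.each { print _1.id, "\n" }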