Class: Boxcars::Ollama
Overview
An engine that uses a local Ollama API.
Constant Summary collapse
- DEFAULT_PARAMS =
The default parameters to use when asking the engine.
{ model: "llama3", temperature: 0.1, max_tokens: 4096 }.freeze
- DEFAULT_NAME =
The default name of the engine.
"Ollama engine"
- DEFAULT_DESCRIPTION =
The default description of the engine.
"useful for when you need to use local AI to answer questions. You should ask targeted questions"
Instance Attribute Summary collapse
-
#batch_size ⇒ Object
readonly
Returns the value of attribute batch_size.
-
#model_kwargs ⇒ Object
readonly
Returns the value of attribute model_kwargs.
-
#ollama_params ⇒ Object
readonly
Returns the value of attribute ollama_params.
-
#prompts ⇒ Object
readonly
Returns the value of attribute prompts.
Class Method Summary collapse
-
.open_ai_client ⇒ OpenAI::Client
Get the OpenAI API client.
Instance Method Summary collapse
-
#client(prompt:, inputs: {}, **kwargs) ⇒ Object
Get an answer from the engine.
- #conversation_model?(_model) ⇒ Boolean
-
#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 2, **kwargs) ⇒ Ollama
constructor
An engine is a container for a single tool to run.
-
#run(question, **kwargs) ⇒ Object
Get an answer from the engine for a question.
Methods inherited from Engine
#generate, #generation_info, #get_num_tokens
Constructor Details
#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 2, **kwargs) ⇒ Ollama
An engine is a container for a single tool to run.
# File 'lib/boxcars/engine/ollama.rb', line 28

def initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 2, **kwargs)
  @ollama_params = DEFAULT_PARAMS.merge(kwargs)
  @prompts = prompts
  @batch_size = batch_size
  super(description: description, name: name)
end
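As the constructor shows, keyword arguments are merged over DEFAULT_PARAMS, so any Ollama option can be overridden at construction time. A minimal sketch of that merge behavior in plain Ruby (the overriding values here are illustrative):

```ruby
# The default engine parameters, as defined by the class above.
DEFAULT_PARAMS = { model: "llama3", temperature: 0.1, max_tokens: 4096 }.freeze

# Constructor kwargs win over the defaults, exactly as in #initialize.
kwargs = { model: "mistral", temperature: 0.7 }
ollama_params = DEFAULT_PARAMS.merge(kwargs)

puts ollama_params[:model]       # => mistral
puts ollama_params[:temperature] # => 0.7
puts ollama_params[:max_tokens]  # => 4096 (default kept, not overridden)
```

Keys not passed to the constructor keep their default values, so callers only need to specify what they want to change.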
Instance Attribute Details
#batch_size ⇒ Object (readonly)
Returns the value of attribute batch_size.
# File 'lib/boxcars/engine/ollama.rb', line 7

def batch_size
  @batch_size
end
#model_kwargs ⇒ Object (readonly)
Returns the value of attribute model_kwargs.
# File 'lib/boxcars/engine/ollama.rb', line 7

def model_kwargs
  @model_kwargs
end
#ollama_params ⇒ Object (readonly)
Returns the value of attribute ollama_params.
# File 'lib/boxcars/engine/ollama.rb', line 7

def ollama_params
  @ollama_params
end
#prompts ⇒ Object (readonly)
Returns the value of attribute prompts.
# File 'lib/boxcars/engine/ollama.rb', line 7

def prompts
  @prompts
end
Class Method Details
.open_ai_client ⇒ OpenAI::Client
Get the OpenAI API client, pointed at the local Ollama server.
# File 'lib/boxcars/engine/ollama.rb', line 39

def self.open_ai_client
  ::OpenAI::Client.new(uri_base: "http://localhost:11434")
end
Instance Method Details
#client(prompt:, inputs: {}, **kwargs) ⇒ Object
Get an answer from the engine.
# File 'lib/boxcars/engine/ollama.rb', line 52

def client(prompt:, inputs: {}, **kwargs)
  clnt = Ollama.open_ai_client
  params = ollama_params.merge(kwargs)
  prompt = prompt.first if prompt.is_a?(Array)
  params = prompt.as_messages(inputs).merge(params)
  if Boxcars.configuration.log_prompts
    Boxcars.debug(params[:messages].last(2).map { |p| ">>>>>> Role: #{p[:role]} <<<<<<\n#{p[:content]}" }.join("\n"), :cyan)
  end
  ans = clnt.chat(parameters: params)
  ans['choices'].pluck('message').pluck('content').join("\n")
rescue => e
  Boxcars.error(e, :red)
  raise
end
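The last step of #client joins the content of every returned choice. The response follows the OpenAI-compatible chat format; a sketch of that extraction using plain Enumerable#map instead of ActiveSupport's pluck (the sample response hash below is illustrative, not a real API reply):

```ruby
# An OpenAI-compatible chat response, shaped like what clnt.chat returns.
ans = {
  'choices' => [
    { 'message' => { 'role' => 'assistant', 'content' => 'Hello!' } },
    { 'message' => { 'role' => 'assistant', 'content' => 'How can I help?' } }
  ]
}

# Equivalent to ans['choices'].pluck('message').pluck('content').join("\n")
text = ans['choices'].map { |c| c['message']['content'] }.join("\n")
puts text
```

With a single choice (the common case), the join is a no-op and the assistant's message content is returned directly.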
#conversation_model?(_model) ⇒ Boolean
# File 'lib/boxcars/engine/ollama.rb', line 43

def conversation_model?(_model)
  true
end
#run(question, **kwargs) ⇒ Object
Get an answer from the engine for a question.
# File 'lib/boxcars/engine/ollama.rb', line 70

def run(question, **kwargs)
  prompt = Prompt.new(template: question)
  answer = client(prompt: prompt, **kwargs)
  raise Error, "Ollama: No response from API" unless answer
  # raise Error, "Ollama: #{response['error']}" if response["error"]
  Boxcars.debug("Answer: #{answer}", :cyan)
  answer
end
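The flow of #run is: wrap the question in a prompt, ask the client, and raise if no answer comes back. A minimal sketch of that flow with a hypothetical stand-in client (run_sketch and fake_client are illustrative names, not part of the library; a real call needs a running Ollama server):

```ruby
# Simplified version of the #run flow: ask the client, fail on a nil answer.
def run_sketch(question, client:)
  answer = client.call(question)
  raise "Ollama: No response from API" unless answer
  answer
end

# Stand-in for the engine's real chat client, for illustration only.
fake_client = ->(question) { "six times seven is 42" }

puts run_sketch("What is 6 * 7?", client: fake_client)
```

In the real method the client is built from the engine's prompt and ollama_params, and the answer is also echoed through Boxcars.debug when logging is enabled.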