Class: LLM::Bot
- Inherits: Object
- Defined in: lib/llm/bot.rb
Overview
LLM::Bot provides an object that can maintain a conversation. A conversation can use the chat completions API that all LLM providers support or the responses API that currently only OpenAI supports.
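A minimal usage sketch, assuming the llm.rb gem is installed and an OpenAI key is available in the OPENAI_SECRET environment variable. The `LLM.openai(key:)` constructor and `res.choices[-1].content` accessor are taken from the gem's README and should be treated as assumptions; the body is guarded so it only runs when a key is present:

```ruby
# Hypothetical usage sketch for LLM::Bot. Requires the llm.rb gem and
# an OpenAI API key, so the body only runs when OPENAI_SECRET is set.
if ENV["OPENAI_SECRET"]
  require "llm"

  llm = LLM.openai(key: ENV["OPENAI_SECRET"])
  bot = LLM::Bot.new(llm)

  # Chat completions API (supported by all providers)
  res = bot.chat "What is the capital of France?"
  puts res.choices[-1].content

  # Responses API (currently OpenAI only)
  res = bot.respond "And its population?"
  puts res.choices[-1].content
end
```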
Instance Attribute Summary collapse
- #messages ⇒ LLM::Buffer<LLM::Message> (readonly)
  Returns an Enumerable for the messages in a conversation.
Instance Method Summary collapse
- #build_prompt ⇒ Object
  Build a prompt.
- #chat(prompt, params = {}) ⇒ LLM::Response
  Maintain a conversation via the chat completions API.
- #functions ⇒ Array<LLM::Function>
  Returns an array of functions that can be called.
- #image_url(url) ⇒ LLM::Object
  Recognize an object as a URL to an image.
- #initialize(provider, params = {}) ⇒ Bot (constructor)
  A new instance of Bot.
- #inspect ⇒ String
- #local_file(path) ⇒ LLM::Object
  Recognize an object as a local file.
- #remote_file(res) ⇒ LLM::Object
  Recognize an object as a remote file.
- #respond(prompt, params = {}) ⇒ LLM::Response
  Maintain a conversation via the responses API.
- #usage ⇒ LLM::Object
  Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.
Constructor Details
Instance Attribute Details
#messages ⇒ LLM::Buffer<LLM::Message> (readonly)
Returns an Enumerable for the messages in a conversation.

# File 'lib/llm/bot.rb', line 31

def messages
  @messages
end
Instance Method Details
#build_prompt ⇒ Object
Build a prompt.

# File 'lib/llm/bot.rb', line 133

def build_prompt(&)
  LLM::Builder.new(@provider, &).tap(&:call)
end
#chat(prompt, params = {}) ⇒ LLM::Response
Maintain a conversation via the chat completions API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/bot.rb', line 60

def chat(prompt, params = {})
  prompt, params, messages = fetch(prompt, params)
  params = params.merge(messages: [*@messages.to_a, *messages])
  params = @params.merge(params)
  res = @provider.complete(prompt, params)
  @messages.concat [LLM::Message.new(params[:role] || :user, prompt)]
  @messages.concat messages
  @messages.concat [res.choices[-1]]
  res
end
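The merge order in #chat gives per-call params precedence over the bot's stored @params defaults. A plain-Hash sketch of that precedence (the keys shown are illustrative, not taken from the gem):

```ruby
# Stand-ins for the bot's stored defaults (@params) and a per-call override.
defaults    = {model: "gpt-4.1", temperature: 0.0}  # like @params
call_params = {temperature: 0.7}                    # like params passed to #chat

# #chat computes @params.merge(params), so per-call keys win.
merged = defaults.merge(call_params)
# merged => {model: "gpt-4.1", temperature: 0.7}
```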
#functions ⇒ Array<LLM::Function>
Returns an array of functions that can be called.

# File 'lib/llm/bot.rb', line 107

def functions
  @messages
    .select(&:assistant?)
    .flat_map(&:functions)
    .select(&:pending?)
end
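The chain collects pending tool calls from assistant messages only. A runnable sketch of the same enumeration using Struct stand-ins (not the gem's real LLM::Message and LLM::Function classes) that mimic the interface #functions relies on:

```ruby
# Struct stand-ins mimicking the assistant?, functions, and pending? interface.
Fn  = Struct.new(:name, :pending) { def pending? = pending }
Msg = Struct.new(:role, :functions) { def assistant? = role == :assistant }

messages = [
  Msg.new(:user, []),
  Msg.new(:assistant, [Fn.new("weather", true), Fn.new("time", false)]),
  Msg.new(:assistant, [Fn.new("search", true)])
]

# Same chain as #functions: assistant messages -> their functions -> pending only.
pending = messages
  .select(&:assistant?)
  .flat_map(&:functions)
  .select(&:pending?)
pending.map(&:name) # => ["weather", "search"]
```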
#image_url(url) ⇒ LLM::Object
Recognize an object as a URL to an image.

# File 'lib/llm/bot.rb', line 143

def image_url(url)
  LLM::Object.from(value: url, kind: :image_url)
end
#inspect ⇒ String
# File 'lib/llm/bot.rb', line 98

def inspect
  "#<#{self.class.name}:0x#{object_id.to_s(16)} " \
  "@provider=#{@provider.class}, @params=#{@params.inspect}, " \
  "@messages=#{@messages.inspect}>"
end
#local_file(path) ⇒ LLM::Object
Recognize an object as a local file.

# File 'lib/llm/bot.rb', line 153

def local_file(path)
  LLM::Object.from(value: LLM.File(path), kind: :local_file)
end
#remote_file(res) ⇒ LLM::Object
Recognize an object as a remote file.

# File 'lib/llm/bot.rb', line 163

def remote_file(res)
  LLM::Object.from(value: res, kind: :remote_file)
end
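image_url, local_file, and remote_file all follow one pattern: wrap a value together with a :kind tag so the provider can serialize it appropriately later. A stand-in sketch of that tagging (Tagged and the tag_* helpers are illustrative, not gem classes or methods):

```ruby
# Illustrative stand-in for the value-plus-kind wrapper the helpers return.
Tagged = Struct.new(:value, :kind, keyword_init: true)

def tag_image_url(url)   = Tagged.new(value: url,  kind: :image_url)
def tag_local_file(path) = Tagged.new(value: path, kind: :local_file)

obj = tag_image_url("https://example.com/cat.png")
obj.kind  # => :image_url
obj.value # => "https://example.com/cat.png"
```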
#respond(prompt, params = {}) ⇒ LLM::Response
Note: not all LLM providers support this API.
Maintain a conversation via the responses API. This method immediately sends a request to the LLM and returns the response.
# File 'lib/llm/bot.rb', line 84

def respond(prompt, params = {})
  prompt, params, input = fetch(prompt, params)
  res_id = @messages.find(&:assistant?)&.response&.response_id
  params = params.merge(previous_response_id: res_id, input: input).compact
  params = @params.merge(params)
  res = @provider.responses.create(prompt, params)
  @messages.concat [LLM::Message.new(params[:role] || :user, prompt)]
  @messages.concat input
  @messages.concat [res.choices[-1]]
  res
end
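Because the params hash is passed through .compact, previous_response_id is simply dropped on the first turn (when no assistant message exists yet) and threads the conversation on later turns. A plain-Hash sketch of that behavior (the id string is illustrative):

```ruby
# First turn: no prior assistant response, so res_id is nil and
# .compact removes the key entirely.
res_id = nil
first_params = {previous_response_id: res_id, input: ["Hello"]}.compact
first_params.key?(:previous_response_id) # => false

# Later turn: the stored response id is forwarded to thread the conversation.
res_id = "resp_123"  # illustrative id, not a real one
later_params = {previous_response_id: res_id, input: ["Again"]}.compact
later_params[:previous_response_id]      # => "resp_123"
```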
#usage ⇒ LLM::Object
Returns token usage for the conversation. This method returns token usage for the latest assistant message, and it returns an empty object if there are no assistant messages.

# File 'lib/llm/bot.rb', line 121

def usage
  @messages.find(&:assistant?)&.usage || LLM::Object.from({})
end
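The nil-safe lookup plus the || fallback means callers never have to check whether usage data exists before reading it. A runnable sketch of the same expression with Struct stand-ins (not the gem's real message or LLM::Object classes):

```ruby
# Struct stand-ins for messages with and without usage data.
UMsg  = Struct.new(:role, :usage) { def assistant? = role == :assistant }
Usage = Struct.new(:total_tokens, keyword_init: true)

empty  = []
filled = [UMsg.new(:user, nil), UMsg.new(:assistant, Usage.new(total_tokens: 42))]

# Mirrors: @messages.find(&:assistant?)&.usage || LLM::Object.from({})
usage_for = ->(msgs) { msgs.find(&:assistant?)&.usage || Usage.new }

usage_for.call(filled).total_tokens # => 42
usage_for.call(empty).total_tokens  # => nil (empty fallback object)
```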