Class: Langchain::Assistant
- Inherits: Object
- Defined in:
- lib/langchain/assistant.rb,
lib/langchain/assistant/llm/adapter.rb,
lib/langchain/assistant/messages/base.rb,
lib/langchain/assistant/llm/adapters/base.rb,
lib/langchain/assistant/llm/adapters/ollama.rb,
lib/langchain/assistant/llm/adapters/openai.rb,
lib/langchain/assistant/llm/adapters/anthropic.rb,
lib/langchain/assistant/llm/adapters/mistral_ai.rb,
lib/langchain/assistant/messages/ollama_message.rb,
lib/langchain/assistant/messages/openai_message.rb,
lib/langchain/assistant/llm/adapters/google_gemini.rb,
lib/langchain/assistant/messages/anthropic_message.rb,
lib/langchain/assistant/messages/mistral_ai_message.rb,
lib/langchain/assistant/messages/google_gemini_message.rb,
lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic.rb
Overview
Assistants are Agent-like objects that leverage helpful instructions, LLMs, tools, and knowledge to respond to user queries. An Assistant can be configured with an LLM of your choice and any vector search database, and can be easily extended with additional tools.
Usage:
llm = Langchain::LLM::GoogleGemini.new(api_key: ENV["GOOGLE_GEMINI_API_KEY"])
assistant = Langchain::Assistant.new(
  llm: llm,
  instructions: "You're a News Reporter AI",
  tools: [Langchain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])]
)
Defined Under Namespace
Modules: LLM, Messages
Instance Attribute Summary
-
#add_message_callback ⇒ Object
Returns the value of attribute add_message_callback.
-
#instructions ⇒ Object
Returns the value of attribute instructions.
-
#llm ⇒ Object (readonly)
Returns the value of attribute llm.
-
#llm_adapter ⇒ Object (readonly)
Returns the value of attribute llm_adapter.
-
#messages ⇒ Object
Returns the value of attribute messages.
-
#parallel_tool_calls ⇒ Object
Returns the value of attribute parallel_tool_calls.
-
#state ⇒ Object (readonly)
Returns the value of attribute state.
-
#tool_choice ⇒ Object
Returns the value of attribute tool_choice.
-
#tool_execution_callback ⇒ Object
Returns the value of attribute tool_execution_callback.
-
#tools ⇒ Object
Returns the value of attribute tools.
-
#total_completion_tokens ⇒ Object (readonly)
Returns the value of attribute total_completion_tokens.
-
#total_prompt_tokens ⇒ Object (readonly)
Returns the value of attribute total_prompt_tokens.
-
#total_tokens ⇒ Object (readonly)
Returns the value of attribute total_tokens.
Instance Method Summary
-
#add_message(role: "user", content: nil, image_url: nil, tool_calls: [], tool_call_id: nil) ⇒ Array<Langchain::Message>
Add a user message to the messages array.
-
#add_message_and_run(content: nil, image_url: nil, auto_tool_execution: false) ⇒ Array<Langchain::Message>
Add a user message and run the assistant.
-
#add_message_and_run!(content: nil, image_url: nil) ⇒ Array<Langchain::Message>
Add a user message and run the assistant with automatic tool execution.
-
#add_messages(messages:) ⇒ Array<Langchain::Message>
Add multiple messages.
-
#array_of_message_hashes ⇒ Array<Hash>
Convert messages to an LLM API-compatible array of hashes.
-
#clear_messages! ⇒ Array
Delete all messages.
-
#initialize(llm:, tools: [], instructions: nil, tool_choice: "auto", parallel_tool_calls: true, messages: [], add_message_callback: nil, tool_execution_callback: nil, &block) ⇒ Assistant
constructor
Create a new assistant.
-
#prompt_of_concatenated_messages ⇒ Object
Only used by the Assistant when it calls the LLM#complete() method.
-
#run(auto_tool_execution: false) ⇒ Array<Langchain::Message>
Run the assistant.
-
#run! ⇒ Array<Langchain::Message>
Run the assistant with automatic tool execution.
-
#submit_tool_output(tool_call_id:, output:) ⇒ Array<Langchain::Message>
Submit tool output.
Constructor Details
#initialize(llm:, tools: [], instructions: nil, tool_choice: "auto", parallel_tool_calls: true, messages: [], add_message_callback: nil, tool_execution_callback: nil, &block) ⇒ Assistant
Create a new assistant
# File 'lib/langchain/assistant.rb', line 40

def initialize(
  llm:,
  tools: [],
  instructions: nil,
  tool_choice: "auto",
  parallel_tool_calls: true,
  messages: [],
  # Callbacks
  add_message_callback: nil,
  tool_execution_callback: nil,
  &block
)
  unless tools.is_a?(Array) && tools.all? { |tool| tool.class.singleton_class.included_modules.include?(Langchain::ToolDefinition) }
    raise ArgumentError, "Tools must be an array of objects extending Langchain::ToolDefinition"
  end

  @llm = llm
  @llm_adapter = LLM::Adapter.build(llm)

  @add_message_callback = add_message_callback if validate_callback!("add_message_callback", add_message_callback)
  @tool_execution_callback = tool_execution_callback if validate_callback!("tool_execution_callback", tool_execution_callback)

  self.messages = messages
  @tools = tools
  @parallel_tool_calls = parallel_tool_calls
  self.tool_choice = tool_choice
  self.instructions = instructions
  @block = block
  @state = :ready

  @total_prompt_tokens = 0
  @total_completion_tokens = 0
  @total_tokens = 0
end
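The constructor accepts as tools any objects whose class has `extend`-ed `Langchain::ToolDefinition`. A minimal sketch of that predicate, using a hypothetical stand-in `ToolDefinition` module rather than the real one from the gem:

```ruby
# Hypothetical stand-in for Langchain::ToolDefinition.
module ToolDefinition; end

class GoodTool
  extend ToolDefinition # puts the module on the class's singleton class
end

class BadTool; end

# Same shape as the check in Assistant#initialize.
def valid_tools?(tools)
  tools.is_a?(Array) &&
    tools.all? { |tool| tool.class.singleton_class.included_modules.include?(ToolDefinition) }
end

valid_tools?([GoodTool.new])               # => true
valid_tools?([GoodTool.new, BadTool.new])  # => false
```

Because the module is added via `extend` (not `include`), it lands on the class's singleton class, which is why the check inspects `tool.class.singleton_class.included_modules`.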
Instance Attribute Details
#add_message_callback ⇒ Object
Returns the value of attribute add_message_callback.
# File 'lib/langchain/assistant.rb', line 25

def add_message_callback
  @add_message_callback
end
#instructions ⇒ Object
Returns the value of attribute instructions.
# File 'lib/langchain/assistant.rb', line 15

def instructions
  @instructions
end
#llm ⇒ Object (readonly)
Returns the value of attribute llm.
# File 'lib/langchain/assistant.rb', line 15

def llm
  @llm
end
#llm_adapter ⇒ Object (readonly)
Returns the value of attribute llm_adapter.
# File 'lib/langchain/assistant.rb', line 15

def llm_adapter
  @llm_adapter
end
#messages ⇒ Object
Returns the value of attribute messages.
# File 'lib/langchain/assistant.rb', line 15

def messages
  @messages
end
#parallel_tool_calls ⇒ Object
Returns the value of attribute parallel_tool_calls.
# File 'lib/langchain/assistant.rb', line 25

def parallel_tool_calls
  @parallel_tool_calls
end
#state ⇒ Object (readonly)
Returns the value of attribute state.
# File 'lib/langchain/assistant.rb', line 15

def state
  @state
end
#tool_choice ⇒ Object
Returns the value of attribute tool_choice.
# File 'lib/langchain/assistant.rb', line 15

def tool_choice
  @tool_choice
end
#tool_execution_callback ⇒ Object
Returns the value of attribute tool_execution_callback.
# File 'lib/langchain/assistant.rb', line 25

def tool_execution_callback
  @tool_execution_callback
end
#tools ⇒ Object
Returns the value of attribute tools.
# File 'lib/langchain/assistant.rb', line 25

def tools
  @tools
end
#total_completion_tokens ⇒ Object (readonly)
Returns the value of attribute total_completion_tokens.
# File 'lib/langchain/assistant.rb', line 15

def total_completion_tokens
  @total_completion_tokens
end
#total_prompt_tokens ⇒ Object (readonly)
Returns the value of attribute total_prompt_tokens.
# File 'lib/langchain/assistant.rb', line 15

def total_prompt_tokens
  @total_prompt_tokens
end
#total_tokens ⇒ Object (readonly)
Returns the value of attribute total_tokens.
# File 'lib/langchain/assistant.rb', line 15

def total_tokens
  @total_tokens
end
Instance Method Details
#add_message(role: "user", content: nil, image_url: nil, tool_calls: [], tool_call_id: nil) ⇒ Array<Langchain::Message>
Add a user message to the messages array
# File 'lib/langchain/assistant.rb', line 83

def add_message(role: "user", content: nil, image_url: nil, tool_calls: [], tool_call_id: nil)
  message = build_message(role: role, content: content, image_url: image_url, tool_calls: tool_calls, tool_call_id: tool_call_id)

  # Call the callback with the message
  add_message_callback.call(message) if add_message_callback # rubocop:disable Style/SafeNavigation

  # Prepend the message to the messages array
  messages << message

  @state = :ready

  messages
end
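The flow above — build a message, fire the optional callback, append to the history, reset state — can be sketched without the gem. `MiniAssistant` and `Message` here are simplified stand-ins, not Langchain classes:

```ruby
# Simplified stand-in for Langchain's message objects.
Message = Struct.new(:role, :content, keyword_init: true)

class MiniAssistant
  attr_reader :messages, :state

  def initialize(add_message_callback: nil)
    @messages = []
    @add_message_callback = add_message_callback
    @state = :ready
  end

  def add_message(role: "user", content: nil)
    message = Message.new(role: role, content: content)
    @add_message_callback&.call(message) # same effect as the guarded .call in the source
    @messages << message
    @state = :ready
    messages
  end
end

seen = []
assistant = MiniAssistant.new(add_message_callback: ->(msg) { seen << msg.content })
assistant.add_message(content: "Hello")
```

After the call, `seen` holds `["Hello"]` and the message sits at the end of `assistant.messages` — despite the source comment saying "prepend", the real method appends with `<<` as well.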
#add_message_and_run(content: nil, image_url: nil, auto_tool_execution: false) ⇒ Array<Langchain::Message>
Add a user message and run the assistant
# File 'lib/langchain/assistant.rb', line 160

def add_message_and_run(content: nil, image_url: nil, auto_tool_execution: false)
  add_message(content: content, image_url: image_url, role: "user")
  run(auto_tool_execution: auto_tool_execution)
end
#add_message_and_run!(content: nil, image_url: nil) ⇒ Array<Langchain::Message>
Add a user message and run the assistant with automatic tool execution
# File 'lib/langchain/assistant.rb', line 169

def add_message_and_run!(content: nil, image_url: nil)
  add_message_and_run(content: content, image_url: image_url, auto_tool_execution: true)
end
#add_messages(messages:) ⇒ Array<Langchain::Message>
Add multiple messages
# File 'lib/langchain/assistant.rb', line 125

def add_messages(messages:)
  messages.each do |message_hash|
    add_message(**message_hash.slice(:content, :role, :tool_calls, :tool_call_id))
  end
end
#array_of_message_hashes ⇒ Array<Hash>
Convert messages to an LLM API-compatible array of hashes
# File 'lib/langchain/assistant.rb', line 100

def array_of_message_hashes
  messages
    .map(&:to_hash)
    .compact
end
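The trailing `.compact` drops any message that serializes to nil. A self-contained sketch; `Msg`, and the idea that `to_hash` can return nil for an empty message, are assumptions used purely to illustrate the dropped-entry case:

```ruby
# Stand-in for Langchain's message classes, with a to_hash that
# declines (returns nil) when there is no content.
Msg = Struct.new(:role, :content) do
  def to_hash
    content.nil? ? nil : {role: role, content: content}
  end
end

messages = [Msg.new("system", "Be terse."), Msg.new("user", nil), Msg.new("user", "Hi")]
hashes = messages.map(&:to_hash).compact
# hashes keeps only the two serializable messages
```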
#clear_messages! ⇒ Array
Delete all messages
# File 'lib/langchain/assistant.rb', line 186

def clear_messages!
  # TODO: Is this a bug? Should we keep the "system" message?
  @messages = []
end
#prompt_of_concatenated_messages ⇒ Object
Only used by the Assistant when it calls the LLM#complete() method
# File 'lib/langchain/assistant.rb', line 107

def prompt_of_concatenated_messages
  messages.map(&:to_s).join
end
#run(auto_tool_execution: false) ⇒ Array<Langchain::Message>
Run the assistant
# File 'lib/langchain/assistant.rb', line 135

def run(auto_tool_execution: false)
  if messages.empty?
    Langchain.logger.warn("#{self.class} - No messages to process")
    @state = :completed
    return
  end

  @state = :in_progress
  @state = handle_state until run_finished?(auto_tool_execution)

  messages
end
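The core of `run` is the `@state = handle_state until run_finished?(...)` loop. A toy state machine showing how `auto_tool_execution` changes the stopping point; the `:requires_action` state, the transition table, and `MiniRunner` itself are assumptions for illustration, not the gem's internals:

```ruby
class MiniRunner
  # Hypothetical transitions: a pending tool call sits between
  # :in_progress and :completed.
  TRANSITIONS = {in_progress: :requires_action, requires_action: :completed}

  attr_reader :state

  def initialize
    @state = :in_progress
  end

  def run_finished?(auto_tool_execution)
    # Stop at :completed, or stop early at :requires_action when the
    # caller did not ask for automatic tool execution.
    state == :completed || (state == :requires_action && !auto_tool_execution)
  end

  def handle_state
    TRANSITIONS.fetch(state)
  end

  def run(auto_tool_execution: false)
    @state = handle_state until run_finished?(auto_tool_execution)
    state
  end
end

MiniRunner.new.run(auto_tool_execution: true)  # => :completed
MiniRunner.new.run(auto_tool_execution: false) # => :requires_action
```

With automatic tool execution the loop drives through the pending tool call to completion; without it, the loop yields control back to the caller, who is expected to run the tool and call submit_tool_output.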
#run! ⇒ Array<Langchain::Message>
Run the assistant with automatic tool execution
# File 'lib/langchain/assistant.rb', line 151

def run!
  run(auto_tool_execution: true)
end
#submit_tool_output(tool_call_id:, output:) ⇒ Array<Langchain::Message>
Submit tool output
# File 'lib/langchain/assistant.rb', line 178

def submit_tool_output(tool_call_id:, output:)
  # TODO: Validate that `tool_call_id` is valid by scanning messages and checking if this tool call ID was invoked
  add_message(role: @llm_adapter.tool_role, content: output, tool_call_id: tool_call_id)
end