Class: Langchain::LLM::OpenAI
Overview
LLM interface for OpenAI APIs: platform.openai.com/overview
Gem requirements:
gem "ruby-openai", "~> 6.3.0"
Usage:
llm = Langchain::LLM::OpenAI.new(
api_key: ENV["OPENAI_API_KEY"],
llm_options: {}, # Available options: https://github.com/alexrudall/ruby-openai/blob/main/lib/openai/client.rb#L5-L13
default_options: {}
)
Direct Known Subclasses
Langchain::LLM::Azure
Constant Summary
- DEFAULTS =
{ n: 1, chat_model: "gpt-4o-mini", embedding_model: "text-embedding-3-small" }.freeze
- EMBEDDING_SIZES =
{ "text-embedding-ada-002" => 1536, "text-embedding-3-large" => 3072, "text-embedding-3-small" => 1536 }.freeze
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
- #chat(params = {}, &block) ⇒ Object
Generate a chat completion for given messages.
- #complete(prompt:, **params) ⇒ Langchain::LLM::OpenAIResponse
Generate a completion for a given prompt.
- #default_dimensions ⇒ Object
- #embed(text:, model: defaults[:embedding_model], encoding_format: nil, user: nil, dimensions: @defaults[:dimensions]) ⇒ Langchain::LLM::OpenAIResponse
Generate an embedding for a given text.
- #initialize(api_key:, llm_options: {}, default_options: {}) ⇒ OpenAI
constructor
Initialize an OpenAI LLM instance.
- #summarize(text:) ⇒ String
Generate a summary for a given text.
Methods inherited from Base
#chat_parameters, #default_dimension
Methods included from DependencyHelper
#depends_on
Constructor Details
#initialize(api_key:, llm_options: {}, default_options: {}) ⇒ OpenAI
Initialize an OpenAI LLM instance.
# File 'lib/langchain/llm/openai.rb', line 32

def initialize(api_key:, llm_options: {}, default_options: {})
  depends_on "ruby-openai", req: "openai"

  llm_options[:log_errors] = Langchain.logger.debug? unless llm_options.key?(:log_errors)
  @client = ::OpenAI::Client.new(access_token: api_key, **llm_options) do |f|
    f.response :logger, Langchain.logger, {headers: true, bodies: true, errors: true}
  end

  @defaults = DEFAULTS.merge(default_options)
  chat_parameters.update(
    model: {default: @defaults[:chat_model]},
    logprobs: {},
    top_logprobs: {},
    n: {default: @defaults[:n]},
    temperature: {default: @defaults[:temperature]},
    user: {},
    response_format: {default: @defaults[:response_format]}
  )
  chat_parameters.ignore(:top_k)
end
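Because @defaults = DEFAULTS.merge(default_options), per-instance defaults can be set once at construction. A minimal sketch (the model name and option values here are illustrative, not library defaults):

llm = Langchain::LLM::OpenAI.new(
  api_key: ENV["OPENAI_API_KEY"],
  llm_options: {request_timeout: 120}, # forwarded to ::OpenAI::Client
  default_options: {chat_model: "gpt-4o", temperature: 0.2}
)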
Instance Method Details
#chat(params = {}, &block) ⇒ Object
Generate a chat completion for given messages.
# File 'lib/langchain/llm/openai.rb', line 119

def chat(params = {}, &block)
  parameters = chat_parameters.to_params(params)

  raise ArgumentError.new("messages argument is required") if Array(parameters[:messages]).empty?
  raise ArgumentError.new("model argument is required") if parameters[:model].to_s.empty?
  if parameters[:tool_choice] && Array(parameters[:tools]).empty?
    raise ArgumentError.new("'tool_choice' is only allowed when 'tools' are specified.")
  end

  if block
    @response_chunks = []
    parameters[:stream_options] = {include_usage: true}
    parameters[:stream] = proc do |chunk, _bytesize|
      chunk_content = chunk.dig("choices", 0) || {}
      @response_chunks << chunk
      yield chunk_content
    end
  end

  response = with_api_error_handling do
    client.chat(parameters: parameters)
  end

  response = response_from_chunks if block
  reset_response_chunks

  Langchain::LLM::OpenAIResponse.new(response)
end
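A usage sketch (prompt text is illustrative). Without a block the call returns after the completion finishes; with a block, each streamed chunk's first choice is yielded as it arrives:

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

response = llm.chat(messages: [{role: "user", content: "What is Ruby?"}])
response.chat_completion # assistant reply as a String

llm.chat(messages: [{role: "user", content: "What is Ruby?"}]) do |chunk|
  print chunk.dig("delta", "content") # chunk is choices[0] of each streamed event
end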
#complete(prompt:, **params) ⇒ Langchain::LLM::OpenAIResponse
Generate a completion for a given prompt.
# File 'lib/langchain/llm/openai.rb', line 101

def complete(prompt:, **params)
  Langchain.logger.warn "DEPRECATED: `Langchain::LLM::OpenAI#complete` is deprecated, and will be removed in the next major version. Use `Langchain::LLM::OpenAI#chat` instead."

  if params[:stop_sequences]
    params[:stop] = params.delete(:stop_sequences)
  end
  # Should we still accept the `messages: []` parameter here?
  messages = [{role: "user", content: prompt}]
  chat(messages: messages, **params)
end
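A sketch of the deprecated call next to its #chat equivalent (prompt text is illustrative):

llm.complete(prompt: "Name three Ruby web frameworks")
# preferred going forward:
llm.chat(messages: [{role: "user", content: "Name three Ruby web frameworks"}])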
#default_dimensions ⇒ Object
# File 'lib/langchain/llm/openai.rb', line 161

def default_dimensions
  @defaults[:dimensions] || EMBEDDING_SIZES.fetch(defaults[:embedding_model])
end
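For instance, with the stock defaults this resolves through EMBEDDING_SIZES, while a dimensions entry in default_options takes precedence (a sketch, assuming no other options are overridden):

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
llm.default_dimensions # => 1536, from EMBEDDING_SIZES["text-embedding-3-small"]

llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"], default_options: {dimensions: 256})
llm.default_dimensions # => 256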
#embed(text:, model: defaults[:embedding_model], encoding_format: nil, user: nil, dimensions: @defaults[:dimensions]) ⇒ Langchain::LLM::OpenAIResponse
Generate an embedding for a given text.
# File 'lib/langchain/llm/openai.rb', line 61

def embed(
  text:,
  model: defaults[:embedding_model],
  encoding_format: nil,
  user: nil,
  dimensions: @defaults[:dimensions]
)
  raise ArgumentError.new("text argument is required") if text.empty?
  raise ArgumentError.new("model argument is required") if model.empty?
  raise ArgumentError.new("encoding_format must be either float or base64") if encoding_format && !%w[float base64].include?(encoding_format)

  parameters = {
    input: text,
    model: model
  }
  parameters[:encoding_format] = encoding_format if encoding_format
  parameters[:user] = user if user

  if dimensions
    parameters[:dimensions] = dimensions
  elsif EMBEDDING_SIZES.key?(model)
    parameters[:dimensions] = EMBEDDING_SIZES[model]
  end

  # dimensions parameter not supported by text-embedding-ada-002 model
  parameters.delete(:dimensions) if model == "text-embedding-ada-002"

  response = with_api_error_handling do
    client.embeddings(parameters: parameters)
  end

  Langchain::LLM::OpenAIResponse.new(response)
end
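A usage sketch (input text is illustrative). When dimensions is not passed, the method fills it in from EMBEDDING_SIZES for known models; the text-embedding-3-* models also accept an explicit, reduced dimensionality:

response = llm.embed(text: "Hello, world!")
response.embedding # => Array of 1536 Floats for text-embedding-3-small

llm.embed(text: "Hello, world!", model: "text-embedding-3-large", dimensions: 256)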
#summarize(text:) ⇒ String
Generate a summary for a given text.
# File 'lib/langchain/llm/openai.rb', line 152

def summarize(text:)
  prompt_template = Langchain::Prompt.load_from_path(
    file_path: Langchain.root.join("langchain/llm/prompts/summarize_template.yaml")
  )
  prompt = prompt_template.format(text: text)

  complete(prompt: prompt)
end
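A usage sketch; the summary prompt comes from the bundled summarize_template.yaml, so only the text needs to be supplied (the input variable here is illustrative):

llm.summarize(text: article_text)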