Class: BxBuilderChain::Llm::OpenAi
- Defined in:
- lib/bx_builder_chain/llm/open_ai.rb
Overview
LLM interface for OpenAI APIs: platform.openai.com/overview
Gem requirements:
gem "ruby-openai", "~> 4.0.0"
Usage:
openai = BxBuilderChain::Llm::OpenAi.new(api_key:, llm_options: {})
Constant Summary collapse
- DEFAULTS =
{
  temperature: 0.0,
  completion_model_name: "text-davinci-003",
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-ada-002",
  dimension: 1536
}.freeze
- LENGTH_VALIDATOR =
BxBuilderChain::Utils::TokenLength::OpenAiValidator
- ROLE_MAPPING =
{ "ai" => "assistant", "human" => "user" }
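The constructor merges `default_options` over `DEFAULTS`, so any of the keys above can be overridden per instance. A minimal sketch of that merge (the hash below mirrors the `DEFAULTS` constant; the override values are illustrative):

```ruby
# Mirrors the DEFAULTS constant documented above.
DEFAULTS = {
  temperature: 0.0,
  completion_model_name: "text-davinci-003",
  chat_completion_model_name: "gpt-3.5-turbo",
  embeddings_model_name: "text-embedding-ada-002",
  dimension: 1536
}.freeze

# What the constructor does with default_options:
defaults = DEFAULTS.merge(temperature: 0.7, chat_completion_model_name: "gpt-4")

defaults[:temperature]                # => 0.7
defaults[:chat_completion_model_name] # => "gpt-4"
defaults[:embeddings_model_name]      # => "text-embedding-ada-002" (unchanged)
```

Keys absent from `default_options` keep their `DEFAULTS` values, so a partial override is safe.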
Instance Attribute Summary collapse
-
#functions ⇒ Object
Returns the value of attribute functions.
Attributes inherited from Base
Instance Method Summary collapse
-
#chat(prompt: "", messages: [], context: "", examples: [], **options) {|AIMessage| ... } ⇒ AIMessage
Generate a chat completion for a given prompt or messages.
-
#complete(prompt:, **params) ⇒ String
Generate a completion for a given prompt.
-
#embed(text:, **params) ⇒ Array
Generate an embedding for a given text.
-
#initialize(api_key: BxBuilderChain.configuration.openai_api_key, llm_options: {}, default_options: {}) ⇒ OpenAi
constructor
A new instance of OpenAi.
Methods inherited from Base
#count_tokens, #default_dimension, #summarize
Methods included from DependencyHelper
Constructor Details
#initialize(api_key: BxBuilderChain.configuration.openai_api_key, llm_options: {}, default_options: {}) ⇒ OpenAi
Returns a new instance of OpenAi.
# File 'lib/bx_builder_chain/llm/open_ai.rb', line 28

def initialize(api_key: BxBuilderChain.configuration.openai_api_key, llm_options: {}, default_options: {})
  depends_on "ruby-openai"
  require "openai"

  @client = ::OpenAI::Client.new(access_token: api_key, **llm_options)
  @defaults = DEFAULTS.merge(default_options)
end
Instance Attribute Details
#functions ⇒ Object
Returns the value of attribute functions.
# File 'lib/bx_builder_chain/llm/open_ai.rb', line 26

def functions
  @functions
end
Instance Method Details
#chat(prompt: "", messages: [], context: "", examples: [], **options) {|AIMessage| ... } ⇒ AIMessage
Generate a chat completion for a given prompt or messages.
Examples
# simplest case, just give a prompt
openai.chat prompt: "When was Ruby first released?"
# prompt plus some context about how to respond
openai.chat context: "You are RubyGPT, a helpful chat bot for helping people learn Ruby", prompt: "Does Ruby have a REPL like IPython?"
# full control over messages that get sent, equivalent to the above
openai.chat messages: [
  {
    role: "system",
    content: "You are RubyGPT, a helpful chat bot for helping people learn Ruby"
  },
  {
    role: "user",
    content: "Does Ruby have a REPL like IPython?"
  }
]
# few-shot prompting with examples
openai.chat prompt: "When was factory_bot released?",
  examples: [
    {
      role: "user",
      content: "When was Ruby on Rails released?"
    },
    {
      role: "assistant",
      content: "2004"
    }
  ]
# File 'lib/bx_builder_chain/llm/open_ai.rb', line 119

def chat(prompt: "", messages: [], context: "", examples: [], **options)
  raise ArgumentError.new(":prompt or :messages argument is expected") if prompt.empty? && messages.empty?

  parameters = compose_parameters @defaults[:chat_completion_model_name], options
  parameters[:messages] = compose_chat_messages(prompt: prompt, messages: messages, context: context, examples: examples)

  if functions
    parameters[:functions] = functions
  else
    parameters[:max_tokens] = validate_max_tokens(parameters[:messages], parameters[:model])
  end

  response = client.chat(parameters: parameters)
  response.dig("choices", 0, "message", "content")
end
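Before the request is sent, roles expressed in the library's internal vocabulary are translated to OpenAI's via the `ROLE_MAPPING` constant ("ai" becomes "assistant", "human" becomes "user"). A sketch of that normalization, using a hypothetical `normalize_roles` helper (the class's actual private helper may differ):

```ruby
ROLE_MAPPING = { "ai" => "assistant", "human" => "user" }

# Hypothetical helper: rewrite each message's role to the name the
# OpenAI chat endpoint expects, passing unknown roles through untouched.
def normalize_roles(messages)
  messages.map do |m|
    role = m[:role].to_s
    { role: ROLE_MAPPING.fetch(role, role), content: m[:content] }
  end
end

normalize_roles([
  { role: "human", content: "When was Ruby first released?" },
  { role: "ai", content: "1995" }
])
# => [{role: "user", content: "When was Ruby first released?"},
#     {role: "assistant", content: "1995"}]
```

Roles already in OpenAI's vocabulary ("system", "user", "assistant") fall through `fetch`'s default unchanged.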
#complete(prompt:, **params) ⇒ String
Generate a completion for a given prompt
# File 'lib/bx_builder_chain/llm/open_ai.rb', line 65

def complete(prompt:, **params)
  parameters = compose_parameters @defaults[:completion_model_name], params
  parameters[:prompt] = prompt
  parameters[:max_tokens] = validate_max_tokens(prompt, parameters[:model])

  response = client.completions(parameters: parameters)
  response.dig("choices", 0, "text")
end
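The return value is extracted with `Hash#dig`, which walks the nested response and returns `nil` (rather than raising) if any key along the path is missing. A sketch with a truncated, hand-written response hash (the real API payload has more fields, but `#complete` only reads this path):

```ruby
# Truncated shape of a Completions API response.
response = {
  "choices" => [
    { "text" => "Ruby was first released in 1995.", "index" => 0 }
  ]
}

response.dig("choices", 0, "text")
# => "Ruby was first released in 1995."

# dig is nil-safe: a missing index or key yields nil, not a NoMethodError.
response.dig("choices", 1, "text")
# => nil
```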
#embed(text:, **params) ⇒ Array
Generate an embedding for a given text
# File 'lib/bx_builder_chain/llm/open_ai.rb', line 43

def embed(text:, **params)
  parameters = { model: @defaults[:embeddings_model_name], input: text }

  validate_max_tokens(text, parameters[:model])

  response = client.embeddings(parameters: parameters.merge(params))
  embedding = response.dig("data", 0, "embedding")

  return embedding if embedding

  puts response
  raise "Error: #{response.dig("data")}"
end
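The returned embedding is a plain Ruby `Array` of floats (1536 values for `text-embedding-ada-002`, per the `dimension` default), so downstream similarity math needs no extra dependencies. A sketch of cosine similarity over two such arrays (`cosine_similarity` is an illustrative helper, not part of this class):

```ruby
# Cosine similarity between two equal-length embedding vectors:
# dot(a, b) / (|a| * |b|), giving 1.0 for identical directions
# and 0.0 for orthogonal ones.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

cosine_similarity([1.0, 0.0], [1.0, 0.0]) # => 1.0
cosine_similarity([1.0, 0.0], [0.0, 1.0]) # => 0.0
```

In practice both arguments would be `#embed` results of the same dimension; comparing vectors from different embedding models is not meaningful.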