Class: Boxcars::Groq
Overview
An engine that uses Groq's API.
Constant Summary

- DEFAULT_PARAMS =
  The default parameters to use when asking the engine.
  { model: "llama3-70b-8192", temperature: 0.1, max_tokens: 4096 }.freeze
- DEFAULT_NAME =
  The default name of the engine.
  "Groq engine"
- DEFAULT_DESCRIPTION =
  The default description of the engine.
  "useful for when you need to use AI to answer questions. You should ask targeted questions"
Instance Attribute Summary

- #batch_size ⇒ Object (readonly)
  Returns the value of attribute batch_size.
- #groq_parmas ⇒ Object (readonly)
  Returns the value of attribute groq_parmas.
- #model_kwargs ⇒ Object (readonly)
  Returns the value of attribute model_kwargs.
- #prompts ⇒ Object (readonly)
  Returns the value of attribute prompts.
Class Method Summary

- .open_ai_client(groq_api_key: nil) ⇒ OpenAI::Client
  Get the OpenAI API client.
Instance Method Summary

- #check_response(response, must_haves: %w[choices]) ⇒ Object
  Make sure we got a valid response.
- #client(prompt:, inputs: {}, groq_api_key: nil, **kwargs) ⇒ Object
  Get an answer from the engine.
- #conversation_model?(_model) ⇒ Boolean
- #default_params ⇒ Object
  Get the default parameters for the engine.
- #engine_type ⇒ Object
  The engine type.
- #initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs) ⇒ Groq (constructor)
  An engine is a container for a single tool to run.
- #max_tokens_for_prompt(prompt_text) ⇒ Integer
  Calculate the maximum number of tokens possible to generate for a prompt.
- #run(question, **kwargs) ⇒ Object
  Get an answer from the engine for a question.
Methods inherited from Engine
#generate, #generation_info, #get_num_tokens
Constructor Details
#initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs) ⇒ Groq
An engine is a container for a single tool to run.
# File 'lib/boxcars/engine/groq.rb', line 28

def initialize(name: DEFAULT_NAME, description: DEFAULT_DESCRIPTION, prompts: [], batch_size: 20, **kwargs)
  @groq_parmas = DEFAULT_PARAMS.merge(kwargs)
  @prompts = prompts
  @batch_size = batch_size
  super(description: description, name: name)
end
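A minimal usage sketch (not part of the generated docs), assuming the boxcars gem is installed and a Groq API key is configured; any extra keyword arguments are merged into DEFAULT_PARAMS:

require "boxcars"

# temperature and max_tokens flow through **kwargs and override DEFAULT_PARAMS
engine = Boxcars::Groq.new(name: "my groq engine", temperature: 0.2, max_tokens: 1024)
engine.default_params
# => { model: "llama3-70b-8192", temperature: 0.2, max_tokens: 1024 }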
Instance Attribute Details
#batch_size ⇒ Object (readonly)
Returns the value of attribute batch_size.
# File 'lib/boxcars/engine/groq.rb', line 7

def batch_size
  @batch_size
end
#groq_parmas ⇒ Object (readonly)
Returns the value of attribute groq_parmas.
# File 'lib/boxcars/engine/groq.rb', line 7

def groq_parmas
  @groq_parmas
end
#model_kwargs ⇒ Object (readonly)
Returns the value of attribute model_kwargs.
# File 'lib/boxcars/engine/groq.rb', line 7

def model_kwargs
  @model_kwargs
end
#prompts ⇒ Object (readonly)
Returns the value of attribute prompts.
# File 'lib/boxcars/engine/groq.rb', line 7

def prompts
  @prompts
end
Class Method Details
.open_ai_client(groq_api_key: nil) ⇒ OpenAI::Client
Get the OpenAI API client, configured to use Groq's OpenAI-compatible endpoint.
# File 'lib/boxcars/engine/groq.rb', line 39

def self.open_ai_client(groq_api_key: nil)
  access_token = Boxcars.configuration.groq_api_key(groq_api_key: groq_api_key)
  ::OpenAI::Client.new(access_token: access_token, uri_base: "https://api.groq.com/openai")
end
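Since the client is a plain OpenAI::Client pointed at Groq's OpenAI-compatible endpoint, it can be obtained directly. A hedged sketch, assuming a valid key is available through the Boxcars configuration (the key can also be passed explicitly, as the signature shows):

# Uses the configured key when groq_api_key is nil
client = Boxcars::Groq.open_ai_client

# Or pass the key explicitly (value shown is a placeholder)
client = Boxcars::Groq.open_ai_client(groq_api_key: "your-groq-api-key")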
Instance Method Details
#check_response(response, must_haves: %w[choices]) ⇒ Object
Make sure we got a valid response.
# File 'lib/boxcars/engine/groq.rb', line 91

def check_response(response, must_haves: %w[choices])
  if response['error'].is_a?(Hash)
    code = response.dig('error', 'code')
    msg = response.dig('error', 'message') || 'unknown error'
    raise KeyError, "GROQ_API_TOKEN not valid" if code == 'invalid_api_key'

    raise ValueError, "Groq error: #{msg}"
  end

  must_haves.each do |key|
    raise ValueError, "Expecting key #{key} in response" unless response.key?(key)
  end
end
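An illustration of the checks, using hypothetical response hashes shaped like the OpenAI-style payloads Groq returns:

engine = Boxcars::Groq.new
engine.check_response({ "choices" => [] })  # passes: no error hash, "choices" key present
engine.check_response({})                   # raises ValueError: Expecting key choices in response
engine.check_response({ "error" => { "code" => "invalid_api_key", "message" => "bad key" } })
# raises KeyError: GROQ_API_TOKEN not valid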
#client(prompt:, inputs: {}, groq_api_key: nil, **kwargs) ⇒ Object
Get an answer from the engine.
# File 'lib/boxcars/engine/groq.rb', line 53

def client(prompt:, inputs: {}, groq_api_key: nil, **kwargs)
  clnt = Groq.open_ai_client(groq_api_key: groq_api_key)
  params = groq_parmas.merge(kwargs)
  prompt = prompt.first if prompt.is_a?(Array)
  params = prompt.as_messages(inputs).merge(params)
  if Boxcars.configuration.log_prompts
    Boxcars.debug(params[:messages].last(2).map { |p| ">>>>>> Role: #{p[:role]} <<<<<<\n#{p[:content]}" }.join("\n"), :cyan)
  end
  clnt.chat(parameters: params)
rescue => e
  Boxcars.error(e, :red)
  raise
end
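A hedged example of calling #client directly with a Boxcars::Prompt; the %<question>s placeholder syntax and the raw chat-completion shape of the return value are assumptions based on the code above, not taken from the docs:

engine = Boxcars::Groq.new
prompt = Boxcars::Prompt.new(template: "Answer briefly: %<question>s")
response = engine.client(prompt: prompt, inputs: { question: "What is 2 + 2?" })
engine.check_response(response)
puts response.dig("choices", 0, "message", "content")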
#conversation_model?(_model) ⇒ Boolean
# File 'lib/boxcars/engine/groq.rb', line 44

def conversation_model?(_model)
  true
end
#default_params ⇒ Object
Get the default parameters for the engine.
# File 'lib/boxcars/engine/groq.rb', line 82

def default_params
  groq_parmas
end
#engine_type ⇒ Object
The engine type.
# File 'lib/boxcars/engine/groq.rb', line 106

def engine_type
  "groq"
end
#max_tokens_for_prompt(prompt_text) ⇒ Integer
Calculate the maximum number of tokens possible to generate for a prompt.
# File 'lib/boxcars/engine/groq.rb', line 113

def max_tokens_for_prompt(prompt_text)
  num_tokens = get_num_tokens(prompt_text)

  # get max context size for model by name
  max_size = 8096
  max_size - num_tokens
end
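For example, with the hard-coded 8096-token context size, a prompt that tokenizes to 1,000 tokens leaves 8096 - 1000 = 7096 tokens for generation:

engine = Boxcars::Groq.new
engine.max_tokens_for_prompt("some prompt text")
# => 8096 minus the token count reported by get_num_tokens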
#run(question, **kwargs) ⇒ Object
Get an answer from the engine for a question.
# File 'lib/boxcars/engine/groq.rb', line 70

def run(question, **kwargs)
  prompt = Prompt.new(template: question)
  response = client(prompt: prompt, **kwargs)
  raise Error, "Groq: No response from API" unless response

  check_response(response)
  answer = response["choices"].map { |c| c.dig("message", "content") || c["text"] }.join("\n").strip
  puts answer
  answer
end
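A minimal end-to-end sketch, assuming a Groq API key is configured (it can also be passed as groq_api_key: through **kwargs); #run wraps the question in a Prompt, calls #client, validates the response, and returns the joined message content:

require "boxcars"

engine = Boxcars::Groq.new
answer = engine.run("In one sentence, what is the capital of France?")
puts answer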