# OmniAI::Google

A Google implementation of the OmniAI APIs.
## Installation

```sh
gem install omniai-google
```
## Usage

### Client

A client is set up as follows if `ENV['GOOGLE_API_KEY']` exists:

```ruby
client = OmniAI::Google::Client.new
```
A client may also be passed the following options:

- `api_key` (required, default is `ENV['GOOGLE_API_KEY']`)
- `host` (optional)
- `version` (optional, options are `v1` or `v1beta`)
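Putting those options together, a client might be constructed explicitly. This is a sketch; the key and host values below are placeholders, not real credentials:

```ruby
require 'omniai/google'

# Illustrative values only; api_key defaults to ENV['GOOGLE_API_KEY'],
# and host defaults to 'https://generativelanguage.googleapis.com'.
client = OmniAI::Google::Client.new(
  api_key: '...',
  host: 'https://generativelanguage.googleapis.com',
  version: 'v1beta'
)
```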
### Configuration

Global configuration is supported for the following options:

```ruby
OmniAI::Google.configure do |config|
  config.api_key = '...' # default: ENV['GOOGLE_API_KEY']
  config.host = '...' # default: 'https://generativelanguage.googleapis.com'
  config.version = 'v1beta' # default: 'v1'
end
```
### Chat

A chat completion is generated by passing in prompts using any of a variety of formats:

```ruby
completion = client.chat('Tell me a joke!')
completion.choice.content # 'Why did the chicken cross the road? To get to the other side.'
```

```ruby
completion = client.chat({
  role: OmniAI::Chat::Role::USER,
  content: 'Is it wise to jump off a bridge?'
})
completion.choice.content # 'No.'
```

```ruby
completion = client.chat([
  {
    role: OmniAI::Chat::Role::SYSTEM,
    content: 'You are a helpful assistant.'
  },
  'What is the capital of Canada?',
])
completion.choice.content # 'The capital of Canada is Ottawa.'
```
#### Model

`model` takes an optional string (default is `gemini-1.5-pro`):

```ruby
completion = client.chat('How fast is a cheetah?', model: OmniAI::Google::Chat::Model::GEMINI_FLASH)
completion.choice.content # 'A cheetah can reach speeds over 100 km/h.'
```
#### Temperature

`temperature` takes an optional float between `0.0` and `2.0`:

```ruby
completion = client.chat('Pick a number between 1 and 5', temperature: 2.0)
completion.choice.content # '3'
```

*Google API Reference: `temperature`*
#### Stream

`stream` takes an optional proc to stream responses in real-time chunks instead of waiting for a complete response:

```ruby
stream = proc do |chunk|
  print(chunk.choice.delta.content) # 'Better', 'three', 'hours', ...
end
client.chat('Be poetic.', stream:)
```