Class: Aigen::Google::Client

Inherits: Object
Defined in: lib/aigen/google/client.rb
Instance Attribute Summary

- #config ⇒ Object (readonly)
  Returns the value of attribute config.
- #http_client ⇒ Object (readonly)
  Returns the value of attribute http_client.
Instance Method Summary

- #generate_content(prompt: nil, contents: nil, model: nil, temperature: nil, top_p: nil, top_k: nil, max_output_tokens: nil, response_modalities: nil, aspect_ratio: nil, image_size: nil, safety_settings: nil, **options) ⇒ Hash
  Generates content from the Gemini API with support for text, multimodal content, generation parameters, and safety settings.
- #generate_content_stream(prompt:, model: nil, **options) {|chunk| ... } ⇒ nil, Enumerator
  Streams generated content from the Gemini API with progressive chunk delivery.
- #generate_image(prompt, aspect_ratio: nil, size: nil, model: nil, **options) ⇒ Aigen::Google::ImageResponse
  Generates an image from a text prompt using Gemini image generation models.
- #initialize(api_key: nil, model: nil, timeout: nil, **options) ⇒ Client (constructor)
  A new instance of Client.
- #start_chat(history: [], model: nil, **options) ⇒ Object
Constructor Details
#initialize(api_key: nil, model: nil, timeout: nil, **options) ⇒ Client
Returns a new instance of Client.
```ruby
# File 'lib/aigen/google/client.rb', line 8

def initialize(api_key: nil, model: nil, timeout: nil, **options)
  @config = build_configuration(api_key, model, timeout, options)
  @config.validate!
  @http_client = HttpClient.new(
    api_key: @config.api_key,
    timeout: @config.timeout,
    retry_count: @config.retry_count
  )
end
```
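The constructor merges per-call arguments with gem-level defaults before validating. A minimal sketch of that precedence (an explicit argument wins over a configured default), assuming a simple hash-based configuration; the `build_config` helper and the default values here are illustrative, not the gem's actual implementation:

```ruby
# Illustrative defaults; the real gem reads these from its Configuration object.
DEFAULTS = {api_key: nil, model: "gemini-2.0-flash", timeout: 30}.freeze

def build_config(api_key, model, timeout)
  # compact drops nil entries, so omitted arguments fall back to the defaults
  DEFAULTS.merge({api_key: api_key, model: model, timeout: timeout}.compact)
end

config = build_config("sk-test", nil, 60)
# :model keeps its default; :timeout is overridden by the explicit argument
```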
Instance Attribute Details
#config ⇒ Object (readonly)
Returns the value of attribute config.
```ruby
# File 'lib/aigen/google/client.rb', line 6

def config
  @config
end
```
#http_client ⇒ Object (readonly)
Returns the value of attribute http_client.
```ruby
# File 'lib/aigen/google/client.rb', line 6

def http_client
  @http_client
end
```
Instance Method Details
#generate_content(prompt: nil, contents: nil, model: nil, temperature: nil, top_p: nil, top_k: nil, max_output_tokens: nil, response_modalities: nil, aspect_ratio: nil, image_size: nil, safety_settings: nil, **options) ⇒ Hash
Generates content from the Gemini API with support for text, multimodal content, generation parameters, and safety settings.
```ruby
# File 'lib/aigen/google/client.rb', line 60

def generate_content(prompt: nil, contents: nil, model: nil, temperature: nil,
                     top_p: nil, top_k: nil, max_output_tokens: nil,
                     response_modalities: nil, aspect_ratio: nil,
                     image_size: nil, safety_settings: nil, **options)
  model ||= @config.default_model

  # Build generation config if parameters provided (validates before API call)
  gen_config = nil
  if temperature || top_p || top_k || max_output_tokens ||
      response_modalities || aspect_ratio || image_size
    gen_config = GenerationConfig.new(
      temperature: temperature,
      top_p: top_p,
      top_k: top_k,
      max_output_tokens: max_output_tokens,
      response_modalities: response_modalities,
      aspect_ratio: aspect_ratio,
      image_size: image_size
    )
  end

  # Build payload
  payload = {}

  # Handle contents (multimodal) or prompt (simple text)
  if contents
    payload[:contents] = contents
  elsif prompt
    # Backward compatibility: convert simple prompt to contents format
    payload[:contents] = [
      {
        parts: [
          {text: prompt}
        ]
      }
    ]
  end

  # Add generation config if present
  payload[:generationConfig] = gen_config.to_h if gen_config && !gen_config.to_h.empty?

  # Add safety settings if present
  payload[:safetySettings] = safety_settings if safety_settings

  # Merge any additional options
  payload.merge!(options) if options.any?

  endpoint = "models/#{model}:generateContent"
  @http_client.post(endpoint, payload)
end
```
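The backward-compatibility branch above wraps a plain string prompt in the nested `contents` structure the Gemini REST API expects. That conversion can be shown in isolation; `prompt_to_contents` is an illustrative helper, not a method of the gem:

```ruby
# Convert a plain text prompt into the contents format shown above:
# an array of content objects, each holding an array of parts.
def prompt_to_contents(prompt)
  [
    {
      parts: [
        {text: prompt}
      ]
    }
  ]
end

payload = {contents: prompt_to_contents("Hello, Gemini")}
```

Passing `contents:` directly skips this conversion, which is what allows multimodal requests (e.g. mixed text and image parts) through the same method.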
#generate_content_stream(prompt:, model: nil, **options) {|chunk| ... } ⇒ nil, Enumerator
Streams generated content from the Gemini API with progressive chunk delivery. Supports both block-based immediate processing and lazy Enumerator evaluation.
```ruby
# File 'lib/aigen/google/client.rb', line 140

def generate_content_stream(prompt:, model: nil, **options, &block)
  model ||= @config.default_model

  payload = {
    contents: [
      {
        parts: [
          {text: prompt}
        ]
      }
    ]
  }

  # Merge any additional generation config options
  payload.merge!(options) if options.any?

  endpoint = "models/#{model}:streamGenerateContent"

  # If block given, stream with block
  if block_given?
    @http_client.post_stream(endpoint, payload, &block)
  else
    # Return Enumerator for lazy evaluation
    Enumerator.new do |yielder|
      @http_client.post_stream(endpoint, payload) do |chunk|
        yielder << chunk
      end
    end
  end
end
```
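The block-or-Enumerator dual interface above can be demonstrated without any HTTP machinery. In this sketch, `fake_stream` stands in for `@http_client.post_stream`, and `stream_chunks` mirrors the branching logic of `generate_content_stream`:

```ruby
# Stand-in for post_stream: yields successive chunks to the given block.
def fake_stream(&block)
  %w[alpha beta gamma].each(&block)
end

# Same dual interface as generate_content_stream: process chunks
# immediately with a block, or get a lazy Enumerator when no block is given.
def stream_chunks(&block)
  if block_given?
    fake_stream(&block)
  else
    Enumerator.new do |yielder|
      fake_stream { |chunk| yielder << chunk }
    end
  end
end

collected = []
stream_chunks { |c| collected << c }  # block form: chunks handled as they arrive
lazy = stream_chunks                  # Enumerator form: nothing runs yet
lazy.first(2)                         # => ["alpha", "beta"]
```

The Enumerator form is useful for composing with `lazy`, `take`, or `each_slice` without buffering the whole response.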
#generate_image(prompt, aspect_ratio: nil, size: nil, model: nil, **options) ⇒ Aigen::Google::ImageResponse
Generates an image from a text prompt using Gemini image generation models. This is a convenience method that automatically sets response_modalities to ["TEXT", "IMAGE"] and returns an ImageResponse object for easy image extraction.
```ruby
# File 'lib/aigen/google/client.rb', line 202

def generate_image(prompt, aspect_ratio: nil, size: nil, model: nil, **options)
  response = generate_content(
    prompt: prompt,
    model: model,
    response_modalities: ["TEXT", "IMAGE"],
    aspect_ratio: aspect_ratio,
    image_size: size,
    **options
  )

  ImageResponse.new(response)
end
```
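The `ImageResponse` wrapper returned above is not shown in this section. As a rough sketch of the kind of extraction it might perform, the hypothetical helper below pulls the first inline image out of a generateContent-style response hash (the `inlineData`/base64 shape follows the public Gemini REST format; the helper itself is an assumption, not the gem's code):

```ruby
require "base64"

# Hypothetical extraction: find the first part carrying inline image data
# in a generateContent response hash and return the decoded bytes.
def first_image_data(response)
  parts = response.dig("candidates", 0, "content", "parts") || []
  inline = parts.find { |part| part.dig("inlineData", "data") }
  inline && Base64.decode64(inline["inlineData"]["data"])
end

# Fabricated response illustrating the mixed TEXT/IMAGE modalities.
fake_response = {
  "candidates" => [
    {"content" => {"parts" => [
      {"text" => "Here is your image"},
      {"inlineData" => {"mimeType" => "image/png",
                        "data" => Base64.strict_encode64("PNGBYTES")}}
    ]}}
  ]
}

first_image_data(fake_response)  # => "PNGBYTES"
```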