Class: Aigen::Google::Client

Inherits: Object

Defined in: lib/aigen/google/client.rb

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(api_key: nil, model: nil, timeout: nil, **options) ⇒ Client

Returns a new instance of Client.

# File 'lib/aigen/google/client.rb', line 8

def initialize(api_key: nil, model: nil, timeout: nil, **options)
  @config = build_configuration(api_key, model, timeout, options)
  @config.validate!
  @http_client = HttpClient.new(
    api_key: @config.api_key,
    timeout: @config.timeout,
    retry_count: @config.retry_count
  )
end

Instance Attribute Details

#config ⇒ Object (readonly)

Returns the value of attribute config.

# File 'lib/aigen/google/client.rb', line 6

def config
  @config
end

#http_client ⇒ Object (readonly)

Returns the value of attribute http_client.

# File 'lib/aigen/google/client.rb', line 6

def http_client
  @http_client
end

Instance Method Details

#generate_content(prompt: nil, contents: nil, model: nil, temperature: nil, top_p: nil, top_k: nil, max_output_tokens: nil, response_modalities: nil, aspect_ratio: nil, image_size: nil, safety_settings: nil, **options) ⇒ Hash

Generates content from the Gemini API with support for text, multimodal content, generation parameters, and safety settings.

Examples:

Simple text prompt (backward compatible)

response = client.generate_content(prompt: "Hello")

With generation config

response = client.generate_content(
  prompt: "Tell me a story",
  temperature: 0.7,
  max_output_tokens: 1024
)

Multimodal content (text + image)

text = Aigen::Google::Content.text("What is in this image?")
image = Aigen::Google::Content.image(data: base64_data, mime_type: "image/jpeg")
response = client.generate_content(contents: [text.to_h, image.to_h])

Image generation (Nano Banana)

response = client.generate_content(
  prompt: "A serene mountain landscape",
  response_modalities: ["TEXT", "IMAGE"],
  aspect_ratio: "16:9",
  image_size: "2K"
)

Parameters:

  • prompt (String, nil) (defaults to: nil)

    simple text prompt (for backward compatibility)

  • contents (Array<Hash>, nil) (defaults to: nil)

    array of content hashes for multimodal requests

  • model (String, nil) (defaults to: nil)

    the model to use (defaults to client’s default_model)

  • temperature (Float, nil) (defaults to: nil)

    controls randomness (0.0-1.0)

  • top_p (Float, nil) (defaults to: nil)

    nucleus sampling threshold (0.0-1.0)

  • top_k (Integer, nil) (defaults to: nil)

    top-k sampling limit (> 0)

  • max_output_tokens (Integer, nil) (defaults to: nil)

    maximum response tokens (> 0)

  • safety_settings (Array<Hash>, nil) (defaults to: nil)

    safety filtering configuration

  • options (Hash)

    additional options to pass to the API

Returns:

  • (Hash)

    the API response

Raises:

# File 'lib/aigen/google/client.rb', line 60

def generate_content(prompt: nil, contents: nil, model: nil, temperature: nil, top_p: nil, top_k: nil, max_output_tokens: nil, response_modalities: nil, aspect_ratio: nil, image_size: nil, safety_settings: nil, **options)
  model ||= @config.default_model

  # Build generation config if parameters provided (validates before API call)
  gen_config = nil
  if temperature || top_p || top_k || max_output_tokens || response_modalities || aspect_ratio || image_size
    gen_config = GenerationConfig.new(
      temperature: temperature,
      top_p: top_p,
      top_k: top_k,
      max_output_tokens: max_output_tokens,
      response_modalities: response_modalities,
      aspect_ratio: aspect_ratio,
      image_size: image_size
    )
  end

  # Build payload
  payload = {}

  # Handle contents (multimodal) or prompt (simple text)
  if contents
    payload[:contents] = contents
  elsif prompt
    # Backward compatibility: convert simple prompt to contents format
    payload[:contents] = [
      {
        parts: [
          {text: prompt}
        ]
      }
    ]
  end

  # Add generation config if present
  payload[:generationConfig] = gen_config.to_h if gen_config && !gen_config.to_h.empty?

  # Add safety settings if present
  payload[:safetySettings] = safety_settings if safety_settings

  # Merge any additional options
  payload.merge!(options) if options.any?

  endpoint = "models/#{model}:generateContent"
  @http_client.post(endpoint, payload)
end
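To make the request shape above concrete: a dependency-free sketch of how a simple prompt is converted into the `contents` format and merged with an optional generation config. The `build_payload` helper is purely illustrative and is not part of the gem's API.

```ruby
# Illustrative helper mirroring how generate_content assembles its payload:
# a plain prompt becomes a one-element contents array, and generationConfig /
# safetySettings are only included when present.
def build_payload(prompt, generation_config: nil, safety_settings: nil)
  payload = {
    contents: [
      {parts: [{text: prompt}]}
    ]
  }
  payload[:generationConfig] = generation_config if generation_config && !generation_config.empty?
  payload[:safetySettings] = safety_settings if safety_settings
  payload
end

payload = build_payload("Hello", generation_config: {temperature: 0.7})
# payload => {contents: [{parts: [{text: "Hello"}]}], generationConfig: {temperature: 0.7}}
```

Note that an empty generation config is dropped entirely, matching the `!gen_config.to_h.empty?` guard in the method body.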

#generate_content_stream(prompt:, model: nil, **options) {|chunk| ... } ⇒ nil, Enumerator

Streams generated content from the Gemini API with progressive chunk delivery. Supports both block-based immediate processing and lazy Enumerator evaluation.

Examples:

Stream with block (immediate processing)

client.generate_content_stream(prompt: "Tell me a story") do |chunk|
  text = chunk["candidates"][0]["content"]["parts"][0]["text"]
  print text
end

Stream with Enumerator (lazy evaluation)

stream = client.generate_content_stream(prompt: "Tell me a story")
stream.each do |chunk|
  text = chunk["candidates"][0]["content"]["parts"][0]["text"]
  print text
end

Stream with lazy operations

stream = client.generate_content_stream(prompt: "Count to 10")
first_three = stream.lazy.take(3).map { |c| c["candidates"][0]["content"]["parts"][0]["text"] }.to_a

Parameters:

  • prompt (String)

    the prompt text to generate content from

  • model (String, nil) (defaults to: nil)

    the model to use (defaults to client’s default_model)

  • options (Hash)

    additional options to pass to the API (e.g., generationConfig)

Yield Parameters:

  • chunk (Hash)

    parsed JSON chunk from the streaming response (if block given)

Returns:

  • (nil)

    if block is given, returns nil after streaming completes

  • (Enumerator)

    if no block given, returns lazy Enumerator for progressive iteration

Raises:

# File 'lib/aigen/google/client.rb', line 140

def generate_content_stream(prompt:, model: nil, **options, &block)
  model ||= @config.default_model

  payload = {
    contents: [
      {
        parts: [
          {text: prompt}
        ]
      }
    ]
  }

  # Merge any additional generation config options
  payload.merge!(options) if options.any?

  endpoint = "models/#{model}:streamGenerateContent"

  # If block given, stream with block
  if block_given?
    @http_client.post_stream(endpoint, payload, &block)
  else
    # Return Enumerator for lazy evaluation
    Enumerator.new do |yielder|
      @http_client.post_stream(endpoint, payload) do |chunk|
        yielder << chunk
      end
    end
  end
end
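The block-or-Enumerator dual interface above is a common Ruby pattern: yield chunks immediately when a block is given, otherwise wrap the same streaming call in an `Enumerator` for lazy consumption. A self-contained sketch, with `FakeStream` standing in for the HTTP client:

```ruby
# Self-contained sketch of the block-or-Enumerator pattern used by
# generate_content_stream. FakeStream's CHUNKS stand in for parsed JSON
# chunks arriving from the streaming HTTP response.
class FakeStream
  CHUNKS = [{"text" => "Once"}, {"text" => " upon"}, {"text" => " a time"}].freeze

  # With a block: yield each chunk as it "arrives" and return nil.
  # Without a block: return an Enumerator that produces the same chunks lazily.
  def each_chunk(&block)
    if block_given?
      CHUNKS.each(&block)
      nil
    else
      Enumerator.new do |yielder|
        CHUNKS.each { |chunk| yielder << chunk }
      end
    end
  end
end

stream = FakeStream.new

collected = []
stream.each_chunk { |chunk| collected << chunk["text"] }
# collected == ["Once", " upon", " a time"]

first_two = stream.each_chunk.lazy.take(2).map { |c| c["text"] }.to_a
# first_two == ["Once", " upon"]
```

Because the Enumerator only pulls chunks as they are requested, `lazy.take(n)` can stop consuming the stream early, which is the point of the lazy-evaluation example in the docs above.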

#generate_image(prompt, aspect_ratio: nil, size: nil, model: nil, **options) ⇒ Aigen::Google::ImageResponse

Generates an image from a text prompt using Gemini image generation models. This is a convenience method that automatically sets response_modalities to ["TEXT", "IMAGE"] and returns an ImageResponse object for easy image extraction.

Examples:

Basic image generation

client = Aigen::Google::Client.new(model: "gemini-2.5-flash-image")
response = client.generate_image("A serene mountain landscape")
response.save("landscape.png") if response.success?

With size and aspect ratio

response = client.generate_image(
  "A futuristic cityscape",
  aspect_ratio: "16:9",
  size: "2K"
)
if response.success?
  puts response.text
  response.save("city.png")
else
  puts "Failed: #{response.failure_message}"
end

Parameters:

  • prompt (String)

    the text description of the image to generate

  • aspect_ratio (String, nil) (defaults to: nil)

    optional aspect ratio (“1:1”, “16:9”, “9:16”, “4:3”, “3:4”, “5:4”, “4:5”)

  • size (String, nil) (defaults to: nil)

    optional image size (“1K”, “2K”, “4K”)

  • model (String, nil) (defaults to: nil)

    optional model name (defaults to client’s model)

  • options (Hash)

    additional options to pass to generate_content

Returns:

  • (Aigen::Google::ImageResponse)

    the wrapped image response
Raises:

# File 'lib/aigen/google/client.rb', line 202

def generate_image(prompt, aspect_ratio: nil, size: nil, model: nil, **options)
  response = generate_content(
    prompt: prompt,
    model: model,
    response_modalities: ["TEXT", "IMAGE"],
    aspect_ratio: aspect_ratio,
    image_size: size,
    **options
  )

  ImageResponse.new(response)
end
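For context on what ImageResponse wraps: Gemini image responses carry base64-encoded bytes under candidates → content → parts → inlineData. A hedged, self-contained sketch of pulling those bytes out of a raw response hash; `extract_image_bytes` is a hypothetical helper for illustration, not the gem's API.

```ruby
require "base64"

# Hypothetical helper: find the first inlineData part in a generateContent
# response hash and decode its base64 payload, or return nil if absent.
def extract_image_bytes(response)
  parts = response.dig("candidates", 0, "content", "parts") || []
  part = parts.find { |p| p["inlineData"] }
  return nil unless part

  Base64.decode64(part["inlineData"]["data"])
end

fake_response = {
  "candidates" => [
    {"content" => {"parts" => [
      {"text" => "Here is your image."},
      {"inlineData" => {"mimeType" => "image/png",
                        "data" => Base64.strict_encode64("PNGBYTES")}}
    ]}}
  ]
}

extract_image_bytes(fake_response)  # => "PNGBYTES"
```

In practice you would use `response.save(...)` on the ImageResponse as shown in the examples above; this sketch only illustrates the underlying response shape.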

#start_chat(history: [], model: nil, **options) ⇒ Object

# File 'lib/aigen/google/client.rb', line 215

def start_chat(history: [], model: nil, **options)
  Chat.new(
    client: self,
    model: model || @config.default_model,
    history: history
  )
end
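start_chat carries no YARD description; judging from the signature, history: presumably takes an array of turns in the Gemini role/parts conversation format. A hedged, pure-Ruby sketch of building such a history (the history_entry helper is illustrative, not part of the gem):

```ruby
# Illustrative helper: one conversation turn in the Gemini role/parts format
# that start_chat's history: keyword presumably expects.
def history_entry(role, text)
  {role: role, parts: [{text: text}]}
end

history = [
  history_entry("user", "What is Ruby?"),
  history_entry("model", "Ruby is a dynamic, object-oriented language.")
]

# The history could then seed a chat session:
#   chat = client.start_chat(history: history)
```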