Class: GooglePalmApi::Client

Inherits:
Object
Defined in:
lib/google_palm_api/client.rb

Constant Summary

ENDPOINT_URL =
"https://generativelanguage.googleapis.com/"
DEFAULTS =
{
  temperature: 0.0,
  completion_model_name: "text-bison-001",
  chat_completion_model_name: "chat-bison-001",
  embeddings_model_name: "embedding-gecko-001"
}

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(api_key:) ⇒ Client

Returns a new instance of Client.



# File 'lib/google_palm_api/client.rb', line 18

def initialize(api_key:)
  @api_key = api_key
end
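
The client only needs an API key, which it passes as the key query parameter on every request. A minimal usage sketch (the require path and environment variable name are assumptions, not part of this documentation):

require "google_palm_api"

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])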

Instance Attribute Details

#api_key ⇒ Object (readonly)

Returns the value of attribute api_key.



# File 'lib/google_palm_api/client.rb', line 7

def api_key
  @api_key
end

Instance Method Details

#count_message_tokens(prompt:, model: nil) ⇒ Hash

Runs a model’s tokenizer on a string and returns the token count.

Parameters:

  • model (String) (defaults to: nil)
  • prompt (String)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 169

def count_message_tokens(prompt:, model: nil)
  response = connection.post("/v1beta2/models/#{model || DEFAULTS[:chat_completion_model_name]}:countMessageTokens") do |req|
    req.params = {key: api_key}

    req.body = {prompt: {messages: [{content: prompt}]}}
  end
  response.body
end
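
For example, checking how many tokens a chat prompt would consume before sending it. This is a sketch: it assumes the response is parsed into a String-keyed Hash and that the API returns a "tokenCount" field, as the REST countMessageTokens endpoint documents.

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

# The prompt is wrapped into a single chat message by the client.
result = client.count_message_tokens(prompt: "Hello, how are you today?")
result["tokenCount"] # => an Integer, assuming the documented response shape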

#embed(text:, model: nil, client: nil) ⇒ Hash

The embedding service in the PaLM API generates state-of-the-art embeddings for words, phrases, and sentences. The resulting embeddings can be used for NLP tasks such as semantic search, text classification, and clustering, among many others. Method signature: developers.generativeai.google/api/python/google/generativeai/generate_embeddings

Parameters:

  • text (String)
  • model (String) (defaults to: nil)
  • client (String) (defaults to: nil)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 130

def embed(
  text:,
  model: nil,
  client: nil
)
  response = connection.post("/v1beta2/models/#{model || DEFAULTS[:embeddings_model_name]}:embedText") do |req|
    req.params = {key: api_key}

    req.body = {text: text}
    req.body[:model] = model if model
    req.body[:client] = client if client
  end
  response.body
end
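
A sketch of embedding a single sentence. The "embedding"/"value" keys follow the REST embedText response shape and String-keyed parsing is assumed; verify against your API version.

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

result = client.embed(text: "Ruby is a dynamic, open source programming language.")

# The embedding vector, assuming the documented response shape.
vector = result.dig("embedding", "value") # => Array of Floats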

#generate_chat_message(messages:, prompt: nil, context: nil, examples: nil, temperature: nil, candidate_count: nil, top_p: nil, top_k: nil, client: nil, model: nil) ⇒ Hash

The chat service is designed for interactive, multi-turn conversations. The service enables you to create applications that engage users in dynamic and context-aware conversations. You can provide a context for the conversation as well as examples of conversation turns for the model to follow. It’s ideal for applications that require ongoing communication, such as chatbots, interactive tutors, or customer support assistants. Method signature: developers.generativeai.google/api/python/google/generativeai/chat

Parameters:

  • prompt (String) (defaults to: nil)
  • model (String) (defaults to: nil)
  • context (String) (defaults to: nil)
  • examples (Array) (defaults to: nil)
  • messages (Array)
  • temperature (Float) (defaults to: nil)
  • candidate_count (Integer) (defaults to: nil)
  • top_p (Float) (defaults to: nil)
  • top_k (Integer) (defaults to: nil)
  • client (String) (defaults to: nil)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 87

def generate_chat_message(
  messages:,
  prompt: nil,
  context: nil,
  examples: nil,
  temperature: nil,
  candidate_count: nil,
  top_p: nil,
  top_k: nil,
  client: nil,
  model: nil
)
  # Overwrite the default ENDPOINT_URL for this method.
  response = connection.post("/v1beta2/models/#{model || DEFAULTS[:chat_completion_model_name]}:generateMessage") do |req|
    req.params = {key: api_key}

    req.body = {prompt: {}}

    req.body[:prompt][:messages] = messages if messages
    req.body[:prompt][:context] = context if context
    req.body[:prompt][:examples] = examples if examples

    req.body[:temperature] = temperature || DEFAULTS[:temperature]
    req.body[:candidate_count] = candidate_count if candidate_count
    req.body[:top_p] = top_p if top_p
    req.body[:top_k] = top_k if top_k
    req.body[:client] = client if client
  end
  response.body
end
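
A sketch of a single chat turn with a context and one example exchange. The messages/examples shapes and the "candidates" response key follow the REST generateMessage format; treat them as assumptions to verify.

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

response = client.generate_chat_message(
  context: "You are a terse assistant that answers in one sentence.",
  examples: [{input: {content: "Hi"}, output: {content: "Hello! How can I help?"}}],
  messages: [{content: "What is the capital of France?"}],
  temperature: 0.5
)

# First candidate reply, assuming the documented response shape.
reply = response.dig("candidates", 0, "content")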

#generate_text(prompt:, temperature: nil, candidate_count: nil, max_output_tokens: nil, top_p: nil, top_k: nil, safety_settings: nil, stop_sequences: nil, client: nil, model: nil) ⇒ Hash

The text service is designed for single-turn interactions. It’s ideal for tasks that can be completed within one response from the API, without the need for a continuous conversation. The text service allows you to obtain text completions, generate summaries, or perform other NLP tasks that don’t require back-and-forth interactions. Method signature: developers.generativeai.google/api/python/google/generativeai/generate_text

Parameters:

  • prompt (String)
  • model (String) (defaults to: nil)
  • temperature (Float) (defaults to: nil)
  • candidate_count (Integer) (defaults to: nil)
  • max_output_tokens (Integer) (defaults to: nil)
  • top_p (Float) (defaults to: nil)
  • top_k (Integer) (defaults to: nil)
  • safety_settings (String) (defaults to: nil)
  • stop_sequences (Array) (defaults to: nil)
  • client (String) (defaults to: nil)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 40

def generate_text(
  prompt:,
  temperature: nil,
  candidate_count: nil,
  max_output_tokens: nil,
  top_p: nil,
  top_k: nil,
  safety_settings: nil,
  stop_sequences: nil,
  client: nil,
  model: nil
)
  response = connection.post("/v1beta2/models/#{model || DEFAULTS[:completion_model_name]}:generateText") do |req|
    req.params = {key: api_key}

    req.body = {prompt: {text: prompt}}
    req.body[:temperature] = temperature || DEFAULTS[:temperature]
    req.body[:candidate_count] = candidate_count if candidate_count
    req.body[:max_output_tokens] = max_output_tokens if max_output_tokens
    req.body[:top_p] = top_p if top_p
    req.body[:top_k] = top_k if top_k
    req.body[:safety_settings] = safety_settings if safety_settings
    req.body[:stop_sequences] = stop_sequences if stop_sequences
    req.body[:client] = client if client
  end
  response.body
end
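
A minimal completion call might look like the following; the "candidates"/"output" keys follow the REST generateText response format and are assumptions to verify against your API version.

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

response = client.generate_text(
  prompt: "Write a haiku about the ocean.",
  temperature: 0.7,
  max_output_tokens: 256
)

# First candidate completion, assuming the documented response shape.
text = response.dig("candidates", 0, "output")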

#get_model(model:) ⇒ Hash

Gets information about a specific Model.

Parameters:

  • model (String)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 184

def get_model(model:)
  response = connection.get("/v1beta2/models/#{model}") do |req|
    req.params = {key: api_key}
  end
  response.body
end
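
For example, fetching the metadata of the default completion model. The returned fields follow the REST Model resource and are assumptions to verify:

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

info = client.get_model(model: "text-bison-001")
info["displayName"] # => e.g. "Text Bison", assuming the documented Model fields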

#list_models(page_size: nil, page_token: nil) ⇒ Hash

Lists models available through the API.

Parameters:

  • page_size (Integer) (defaults to: nil)
  • page_token (String) (defaults to: nil)

Returns:

  • (Hash)


# File 'lib/google_palm_api/client.rb', line 152

def list_models(page_size: nil, page_token: nil)
  response = connection.get("/v1beta2/models") do |req|
    req.params = {key: api_key}

    req.params[:pageSize] = page_size if page_size
    req.params[:pageToken] = page_token if page_token
  end
  response.body
end
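
Paging through the available models might look like this; the "models" and "nextPageToken" keys follow the REST list response format and are assumptions to verify.

client = GooglePalmApi::Client.new(api_key: ENV["GOOGLE_PALM_API_KEY"])

page = client.list_models(page_size: 10)
page["models"]&.each { |m| puts m["name"] }

# Fetch the next page only if the API returned a continuation token.
if (token = page["nextPageToken"])
  client.list_models(page_size: 10, page_token: token)
end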