Class: Cohere::Client
- Inherits: Object
  - Object
  - Cohere::Client
- Defined in:
- lib/cohere/client.rb
Instance Attribute Summary collapse
-
#api_key ⇒ Object
readonly
Returns the value of attribute api_key.
-
#connection ⇒ Object
readonly
Returns the value of attribute connection.
Instance Method Summary collapse
-
#chat(model:, messages:, stream: false, tools: [], documents: [], citation_options: nil, response_format: nil, safety_mode: nil, max_tokens: nil, stop_sequences: nil, temperature: nil, seed: nil, frequency_penalty: nil, presence_penalty: nil, k: nil, p: nil, logprops: nil, &block) ⇒ Object
Generates a text response to a user message; when stream is true or a block is given, the response is streamed back token by token.
-
#classify(model:, inputs:, examples: nil, preset: nil, truncate: nil) ⇒ Object
This endpoint makes a prediction about which label fits the specified text inputs best.
- #detect_language(texts:) ⇒ Object
-
#detokenize(tokens:, model:) ⇒ Object
This endpoint takes tokens using byte-pair encoding and returns their text representation.
-
#embed(model:, input_type:, embedding_types:, texts: nil, images: nil, truncate: nil) ⇒ Object
This endpoint returns text embeddings.
-
#generate(prompt:, model: nil, num_generations: nil, max_tokens: nil, preset: nil, temperature: nil, k: nil, p: nil, frequency_penalty: nil, presence_penalty: nil, end_sequences: nil, stop_sequences: nil, return_likelihoods: nil, logit_bias: nil, truncate: nil) ⇒ Object
This endpoint generates realistic text conditioned on a given input.
-
#initialize(api_key:, timeout: nil) ⇒ Client
constructor
A new instance of Client.
-
#rerank(model:, query:, documents:, top_n: nil, rank_fields: nil, return_documents: nil, max_chunks_per_doc: nil) ⇒ Object
This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
- #summarize(text:, length: nil, format: nil, model: nil, extractiveness: nil, temperature: nil, additional_command: nil) ⇒ Object
-
#tokenize(text:, model:) ⇒ Object
This endpoint splits input text into smaller units called tokens using byte-pair encoding (BPE).
Constructor Details
#initialize(api_key:, timeout: nil) ⇒ Client
Returns a new instance of Client.
# File 'lib/cohere/client.rb', line 9

def initialize(api_key:, timeout: nil)
  @api_key = api_key
  @timeout = timeout
end
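A minimal construction sketch (reading the key from the `COHERE_API_KEY` environment variable and the 30-second timeout are illustrative choices, not requirements of this class):

```ruby
# Fall back to a placeholder so the snippet runs without a real key.
api_key = ENV.fetch("COHERE_API_KEY", "test-key")

# Guarded so the snippet is a no-op unless the cohere gem is loaded.
if defined?(Cohere::Client)
  client = Cohere::Client.new(api_key: api_key, timeout: 30)
  client.api_key # exposed via the read-only accessor documented above
end
```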
Instance Attribute Details
#api_key ⇒ Object (readonly)
Returns the value of attribute api_key.
# File 'lib/cohere/client.rb', line 7

def api_key
  @api_key
end
#connection ⇒ Object (readonly)
Returns the value of attribute connection.
# File 'lib/cohere/client.rb', line 7

def connection
  @connection
end
Instance Method Details
#chat(model:, messages:, stream: false, tools: [], documents: [], citation_options: nil, response_format: nil, safety_mode: nil, max_tokens: nil, stop_sequences: nil, temperature: nil, seed: nil, frequency_penalty: nil, presence_penalty: nil, k: nil, p: nil, logprops: nil, &block) ⇒ Object
Generates a text response to a user message; when stream is true or a block is given, the response is streamed back token by token.
# File 'lib/cohere/client.rb', line 15

def chat(
  model:,
  messages:,
  stream: false,
  tools: [],
  documents: [],
  citation_options: nil,
  response_format: nil,
  safety_mode: nil,
  max_tokens: nil,
  stop_sequences: nil,
  temperature: nil,
  seed: nil,
  frequency_penalty: nil,
  presence_penalty: nil,
  k: nil,
  p: nil,
  logprops: nil,
  &block
)
  response = v2_connection.post("chat") do |req|
    req.body = {}
    req.body[:model] = model
    req.body[:messages] = messages if messages
    req.body[:tools] = tools if tools.any?
    req.body[:documents] = documents if documents.any?
    req.body[:citation_options] = citation_options if citation_options
    req.body[:response_format] = response_format if response_format
    req.body[:safety_mode] = safety_mode if safety_mode
    req.body[:max_tokens] = max_tokens if max_tokens
    req.body[:stop_sequences] = stop_sequences if stop_sequences
    req.body[:temperature] = temperature if temperature
    req.body[:seed] = seed if seed
    req.body[:frequency_penalty] = frequency_penalty if frequency_penalty
    req.body[:presence_penalty] = presence_penalty if presence_penalty
    req.body[:k] = k if k
    req.body[:p] = p if p
    req.body[:logprops] = logprops if logprops
    if stream || block
      req.body[:stream] = true
      req.options.on_data = block if block
    end
  end
  response.body
end
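A usage sketch for the streaming path: passing a block turns on streaming, and each raw chunk is handed to the block as it arrives. The model name "command-r" is an assumption; the gem must be installed and an API key set for the guarded call to run.

```ruby
messages = [
  {role: "system", content: "Answer in one short sentence."},
  {role: "user", content: "What is byte-pair encoding?"}
]

# The block receives each chunk plus the number of bytes read so far.
buffer = +""
handler = proc { |chunk, _bytes_read| buffer << chunk }

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  client.chat(model: "command-r", messages: messages, &handler)
end
```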
#classify(model:, inputs:, examples: nil, preset: nil, truncate: nil) ⇒ Object
This endpoint makes a prediction about which label fits the specified text inputs best.
# File 'lib/cohere/client.rb', line 148

def classify(
  model:,
  inputs:,
  examples: nil,
  preset: nil,
  truncate: nil
)
  response = v1_connection.post("classify") do |req|
    req.body = {
      model: model,
      inputs: inputs
    }
    req.body[:examples] = examples if examples
    req.body[:preset] = preset if preset
    req.body[:truncate] = truncate if truncate
  end
  response.body
end
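A sketch of a classification call. The example/label shapes and the model name are assumptions for illustration; the guarded call requires the gem and an API key.

```ruby
# A few labeled examples per label, plus the texts to classify.
examples = [
  {text: "I love this!", label: "positive"},
  {text: "Great value for the price.", label: "positive"},
  {text: "Terrible service.", label: "negative"},
  {text: "Would not buy again.", label: "negative"}
]
inputs = ["The checkout was painless.", "It broke after a day."]

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  client.classify(model: "embed-english-v3.0", inputs: inputs, examples: examples)
end
```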
#detect_language(texts:) ⇒ Object
# File 'lib/cohere/client.rb', line 183

def detect_language(texts:)
  response = v1_connection.post("detect-language") do |req|
    req.body = {texts: texts}
  end
  response.body
end
#detokenize(tokens:, model:) ⇒ Object
This endpoint takes tokens using byte-pair encoding and returns their text representation.
# File 'lib/cohere/client.rb', line 176

def detokenize(tokens:, model:)
  response = v1_connection.post("detokenize") do |req|
    req.body = {tokens: tokens, model: model}
  end
  response.body
end
#embed(model:, input_type:, embedding_types:, texts: nil, images: nil, truncate: nil) ⇒ Object
This endpoint returns text embeddings. An embedding is a list of floating point numbers that captures semantic information about the text that it represents.
# File 'lib/cohere/client.rb', line 102

def embed(
  model:,
  input_type:,
  embedding_types:,
  texts: nil,
  images: nil,
  truncate: nil
)
  response = v2_connection.post("embed") do |req|
    req.body = {
      model: model,
      input_type: input_type,
      embedding_types: embedding_types
    }
    req.body[:texts] = texts if texts
    req.body[:images] = images if images
    req.body[:truncate] = truncate if truncate
  end
  response.body
end
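An embedding call sketch. The model name, the "search_document" input type, and the "float" embedding type are assumed values for illustration; the guarded call requires the gem and an API key.

```ruby
texts = [
  "Ruby is a dynamic, open source programming language.",
  "An embedding is a list of floating point numbers."
]

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  client.embed(
    model: "embed-english-v3.0",
    input_type: "search_document",
    embedding_types: ["float"],
    texts: texts
  )
end
```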
#generate(prompt:, model: nil, num_generations: nil, max_tokens: nil, preset: nil, temperature: nil, k: nil, p: nil, frequency_penalty: nil, presence_penalty: nil, end_sequences: nil, stop_sequences: nil, return_likelihoods: nil, logit_bias: nil, truncate: nil) ⇒ Object
This endpoint generates realistic text conditioned on a given input.
# File 'lib/cohere/client.rb', line 64

def generate(
  prompt:,
  model: nil,
  num_generations: nil,
  max_tokens: nil,
  preset: nil,
  temperature: nil,
  k: nil,
  p: nil,
  frequency_penalty: nil,
  presence_penalty: nil,
  end_sequences: nil,
  stop_sequences: nil,
  return_likelihoods: nil,
  logit_bias: nil,
  truncate: nil
)
  response = v1_connection.post("generate") do |req|
    req.body = {prompt: prompt}
    req.body[:model] = model if model
    req.body[:num_generations] = num_generations if num_generations
    req.body[:max_tokens] = max_tokens if max_tokens
    req.body[:preset] = preset if preset
    req.body[:temperature] = temperature if temperature
    req.body[:k] = k if k
    req.body[:p] = p if p
    req.body[:frequency_penalty] = frequency_penalty if frequency_penalty
    req.body[:presence_penalty] = presence_penalty if presence_penalty
    req.body[:end_sequences] = end_sequences if end_sequences
    req.body[:stop_sequences] = stop_sequences if stop_sequences
    req.body[:return_likelihoods] = return_likelihoods if return_likelihoods
    req.body[:logit_bias] = logit_bias if logit_bias
    req.body[:truncate] = truncate if truncate
  end
  response.body
end
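A sketch of a generation call: only prompt is required, and every other parameter is sent only when set. The sampling values shown are illustrative; the guarded call requires the gem and an API key.

```ruby
prompt = "Write a haiku about the sea."

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  # Optional keywords are simply omitted from the request body when nil.
  client.generate(prompt: prompt, max_tokens: 60, temperature: 0.7)
end
```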
#rerank(model:, query:, documents:, top_n: nil, rank_fields: nil, return_documents: nil, max_chunks_per_doc: nil) ⇒ Object
This endpoint takes in a query and a list of texts and produces an ordered array with each text assigned a relevance score.
# File 'lib/cohere/client.rb', line 124

def rerank(
  model:,
  query:,
  documents:,
  top_n: nil,
  rank_fields: nil,
  return_documents: nil,
  max_chunks_per_doc: nil
)
  response = v2_connection.post("rerank") do |req|
    req.body = {
      model: model,
      query: query,
      documents: documents
    }
    req.body[:top_n] = top_n if top_n
    req.body[:rank_fields] = rank_fields if rank_fields
    req.body[:return_documents] = return_documents if return_documents
    req.body[:max_chunks_per_doc] = max_chunks_per_doc if max_chunks_per_doc
  end
  response.body
end
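A rerank sketch: a query plus candidate texts, with top_n trimming the result to the best matches. The model name is an assumption; the guarded call requires the gem and an API key.

```ruby
query = "What is the capital of the United States?"
documents = [
  "Carson City is the capital city of the American state of Nevada.",
  "Washington, D.C. is the capital of the United States.",
  "Capital letters begin a sentence in English."
]

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  # Ask for only the two most relevant documents.
  client.rerank(model: "rerank-english-v3.0", query: query, documents: documents, top_n: 2)
end
```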
#summarize(text:, length: nil, format: nil, model: nil, extractiveness: nil, temperature: nil, additional_command: nil) ⇒ Object
# File 'lib/cohere/client.rb', line 190

def summarize(
  text:,
  length: nil,
  format: nil,
  model: nil,
  extractiveness: nil,
  temperature: nil,
  additional_command: nil
)
  response = v1_connection.post("summarize") do |req|
    req.body = {text: text}
    req.body[:length] = length if length
    req.body[:format] = format if format
    req.body[:model] = model if model
    req.body[:extractiveness] = extractiveness if extractiveness
    req.body[:temperature] = temperature if temperature
    req.body[:additional_command] = additional_command if additional_command
  end
  response.body
end
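A summarize sketch. The length and format values shown are illustrative choices; the guarded call requires the gem and an API key.

```ruby
text = "Ruby is an interpreted, high-level programming language. " \
  "It was designed with an emphasis on programmer happiness, " \
  "and it blends ideas from Perl, Smalltalk, and Lisp. " \
  "Ruby ships with a rich standard library and a large gem ecosystem."

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  client.summarize(text: text, length: "short", format: "paragraph")
end
```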
#tokenize(text:, model:) ⇒ Object
This endpoint splits input text into smaller units called tokens using byte-pair encoding (BPE).
# File 'lib/cohere/client.rb', line 168

def tokenize(text:, model:)
  response = v1_connection.post("tokenize") do |req|
    req.body = {text: text, model: model}
  end
  response.body
end
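A round-trip sketch combining #tokenize and #detokenize. The model name and the "tokens" response key are assumptions about the API's response shape; the guarded call requires the gem and an API key.

```ruby
text = "tokenize me!"

if ENV["COHERE_API_KEY"]
  require "cohere"
  client = Cohere::Client.new(api_key: ENV["COHERE_API_KEY"])
  # Tokenize the text, then feed the BPE token ids back through detokenize.
  tokens = client.tokenize(text: text, model: "command-r")["tokens"]
  client.detokenize(tokens: tokens, model: "command-r")
end
```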