Class: Roseflow::OpenAI::Provider

Inherits: Object
Defined in:
lib/roseflow/openai/provider.rb


Constructor Details

#initialize(config = Roseflow::OpenAI::Config.new) ⇒ Provider

Returns a new instance of Provider.



# File 'lib/roseflow/openai/provider.rb', line 9

def initialize(config = Roseflow::OpenAI::Config.new)
  @config = config
end
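
A minimal usage sketch; the default config is assumed to resolve credentials (for example, the OpenAI API key) on its own, since its fields are not documented here.

provider = Roseflow::OpenAI::Provider.new
# Equivalent to passing the default config explicitly:
provider = Roseflow::OpenAI::Provider.new(Roseflow::OpenAI::Config.new)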

Instance Attribute Details

#config ⇒ Object (readonly)

Returns the value of attribute config.



# File 'lib/roseflow/openai/provider.rb', line 107

def config
  @config
end

Instance Method Details

#chat(model:, messages:, **options, &block) ⇒ Roseflow::OpenAI::ChatResponse

Chat with a model

Parameters:

  • model (Roseflow::OpenAI::Model)

    The model object to use

  • messages (Array<#to_h>)

    The message objects to send to the model; each must respond to #to_h

  • options (Hash)

    Additional options to pass to the API

Options Hash (**options):

  • :max_tokens (Integer)

    The maximum number of tokens to generate in the completion.

  • :temperature (Float)

    Sampling temperature to use, between 0 and 2

  • :top_p (Float)

    Nucleus sampling threshold; only tokens within the top_p cumulative probability mass are considered.

  • :n (Integer)

    The number of completions to generate.

  • :logprobs (Integer)

    Include the log probabilities of the logprobs most likely tokens.

  • :echo (Boolean)

    Whether to echo the prompt as part of the completion.

  • :stop (String | Array)

    Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

  • :presence_penalty (Float)

    Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

  • :frequency_penalty (Float)

    Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

  • :best_of (Integer)

    Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token).

  • :streaming (Boolean)

    Whether to stream back partial progress.

  • :user (String)

    A unique identifier representing your end-user

Returns:

  • (Roseflow::OpenAI::ChatResponse)

# File 'lib/roseflow/openai/provider.rb', line 41

def chat(model:, messages:, **options, &block)
  streaming = options.fetch(:streaming, false)

  if streaming
    client.streaming_chat_completion(model: model, messages: messages.map(&:to_h), **options, &block)
  else
    client.create_chat_completion(model: model, messages: messages.map(&:to_h), **options)
  end
end
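
A hedged usage sketch. The models.find lookup is hypothetical (the ModelRepository query interface is not documented here); plain hashes work as messages because Hash responds to #to_h.

model = provider.models.find("gpt-3.5-turbo") # hypothetical lookup
messages = [{ role: "user", content: "Hello!" }]
response = provider.chat(model: model, messages: messages, temperature: 0.7)

# Streaming: pass streaming: true and a block to receive partial chunks.
provider.chat(model: model, messages: messages, streaming: true) do |chunk|
  print chunk
end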

#client ⇒ Object

Returns the client for the provider



# File 'lib/roseflow/openai/provider.rb', line 14

def client
  @client ||= Client.new(config, self)
end
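
Note that the client is memoized: repeated calls return the same Client instance, constructed once from this provider's config and handed a reference back to the provider itself.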

#completion(model:, prompt:, **options) ⇒ Roseflow::OpenAI::CompletionResponse

Create a completion.

Parameters:

  • model (Roseflow::OpenAI::Model)

    The model object to use

  • prompt (String)

    The prompt to use for completion

  • options (Hash)

    Additional options to pass to the API

Options Hash (**options):

  • :max_tokens (Integer)

    The maximum number of tokens to generate in the completion.

  • :temperature (Float)

    Sampling temperature to use, between 0 and 2

  • :top_p (Float)

    Nucleus sampling threshold; only tokens within the top_p cumulative probability mass are considered.

  • :n (Integer)

    The number of completions to generate.

  • :logprobs (Integer)

    Include the log probabilities of the logprobs most likely tokens.

  • :echo (Boolean)

    Whether to echo the prompt as part of the completion.

  • :stop (String | Array)

    Up to 4 sequences where the API will stop generating further tokens.

  • :presence_penalty (Float)

    Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics.

  • :frequency_penalty (Float)

    Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.

  • :best_of (Integer)

    Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token).

  • :streaming (Boolean)

    Whether to stream back partial progress.

  • :user (String)

    A unique identifier representing your end-user

Returns:

  • (Roseflow::OpenAI::CompletionResponse)

# File 'lib/roseflow/openai/provider.rb', line 69

def completion(model:, prompt:, **options)
  streaming = options.fetch(:streaming, false)

  if streaming
    client.streaming_completion(model: model, prompt: prompt, **options)
  else
    client.create_completion(model: model, prompt: prompt, **options)
  end
end
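
A brief sketch, reusing the hypothetical model lookup from the chat example above.

response = provider.completion(
  model: model,
  prompt: "Write a haiku about Ruby",
  max_tokens: 64
)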

#edit(model:, instruction:, **options) ⇒ Roseflow::OpenAI::EditResponse

Creates a new edit for the provided input, instruction, and parameters.

Parameters:

  • model (Roseflow::OpenAI::Model)

    The model object to use

  • instruction (String)

    The instruction to use for editing

  • options (Hash)

    Additional options to pass to the API

Options Hash (**options):

  • :input (String)

    The input text to use as a starting point for the edit.

  • :n (Integer)

    The number of edits to generate.

  • :temperature (Float)

    Sampling temperature to use, between 0 and 2

  • :top_p (Float)

    Nucleus sampling threshold; only tokens within the top_p cumulative probability mass are considered.

Returns:

  • (Roseflow::OpenAI::EditResponse)

# File 'lib/roseflow/openai/provider.rb', line 89

def edit(model:, instruction:, **options)
  client.create_edit(model: model, instruction: instruction, **options)
end
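
A short sketch; the input option carries the text to revise.

response = provider.edit(
  model: model,
  instruction: "Fix the spelling mistakes",
  input: "Wat day of the wek is it?"
)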

#embedding(model:, input:, **options) ⇒ Object

Creates an embedding vector representing the input text.

Parameters:

  • model (Roseflow::OpenAI::Model)

    The model object to use

  • input (String)

    The input text to use for embedding

  • options (Hash)

    Additional options to pass to the API

Options Hash (**options):

  • :user (String)

    A unique identifier representing your end-user



# File 'lib/roseflow/openai/provider.rb', line 99

def embedding(model:, input:, **options)
  client.create_embedding(model: model, input: input, **options).embedding.to_embedding
end
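
Note that this method unwraps the API response and returns the embedding itself rather than a response object. A short sketch, reusing the hypothetical model lookup from earlier:

vector = provider.embedding(model: model, input: "Some text to embed")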

#image(prompt:, **options) ⇒ Object

Creates an image for the given prompt.

# File 'lib/roseflow/openai/provider.rb', line 103

def image(prompt:, **options)
  client.create_image(prompt: prompt, **options)
end
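
A hedged sketch; extra options are passed through to the API unchanged (the size value below is illustrative, not documented here).

response = provider.image(prompt: "A watercolor painting of a fox", size: "512x512")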

#models ⇒ Object

Returns the model repository for the provider



# File 'lib/roseflow/openai/provider.rb', line 19

def models
  @models ||= ModelRepository.new(self)
end
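
As with #client, the repository is memoized: it is constructed once per provider instance and given a reference back to the provider, so lookups can use this provider's client and config.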