Class: Roseflow::OpenAI::Model

Inherits:
Object
Defined in:
lib/roseflow/openai/model.rb

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(model, provider) ⇒ Model

Initializes a new model instance.

Parameters:

  • model

    Model object to wrap

  • provider

    Provider the model belongs to

# File 'lib/roseflow/openai/model.rb', line 20

def initialize(model, provider)
  @model_ = model
  @provider_ = provider
  assign_attributes
end

Instance Attribute Details

#name ⇒ Object (readonly)

Returns the value of attribute name.



# File 'lib/roseflow/openai/model.rb', line 14

def name
  @name
end

Instance Method Details

#blocking? ⇒ Boolean

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 98

def blocking?
  @permissions_.fetch("is_blocking")
end

#call(operation, options) {|chunk| ... } ⇒ Faraday::Response

Calls the model.

Parameters:

  • operation (Symbol)

    Operation to perform

  • options (Hash)

    Options to use

Yields:

  • (chunk)

    Chunk of data if stream is enabled

Returns:

  • (Faraday::Response)

    raw API response if no block is given



# File 'lib/roseflow/openai/model.rb', line 50

def call(operation, options, &block)
  operation = OperationHandler.new(operation, options).call
  client.post(operation, &block)
end
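
The streaming-versus-return behavior of #call can be sketched with a stand-in client. FakeClient below is an assumption for illustration, not part of the Roseflow API: like the real `client.post`, it yields chunks when a block is given and returns the full response otherwise.

```ruby
# Stand-in for the provider client used by #call (illustrative only).
class FakeClient
  # Mirrors the post(operation, &block) contract: stream chunks to the
  # block when one is given, otherwise return the whole response.
  def post(operation, &block)
    if block
      ["chunk-1", "chunk-2"].each(&block)
      nil
    else
      "response for #{operation}"
    end
  end
end

client = FakeClient.new
client.post("chat-op")                     # => "response for chat-op"
chunks = []
client.post("chat-op") { |c| chunks << c }
chunks                                     # => ["chunk-1", "chunk-2"]
```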

#chat(messages, options = {}) {|chunk| ... } ⇒ OpenAI::ChatResponse

Convenience method for chat completions.

Parameters:

  • messages (Array<String>)

    Messages to use

  • options (Hash) (defaults to: {})

    Options to use

Yields:

  • (chunk)

    Chunk of data if stream is enabled

Returns:

  • (OpenAI::ChatResponse)

    the chat response object if no block is given

Raises:

  • (TokenLimitExceededError)


# File 'lib/roseflow/openai/model.rb', line 37

def chat(messages, options = {}, &block)
  token_count = tokenizer.count_tokens(transform_chat_messages(messages))
  raise TokenLimitExceededError, "Token limit for model #{name} exceeded: #{token_count} is more than #{max_tokens}" if token_count > max_tokens
  response = call(:chat, options.merge({ messages: messages, model: name }), &block)
  ChatResponse.new(response) unless block_given?
end
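
The token-limit guard at the top of #chat can be isolated as a small sketch. `guard_token_limit!` is a hypothetical helper, not a method of this class; in the real code the count comes from the model's tokenizer.

```ruby
# Hypothetical extraction of the token-limit check in #chat.
TokenLimitExceededError = Class.new(StandardError)

def guard_token_limit!(token_count, max_tokens, model_name)
  if token_count > max_tokens
    raise TokenLimitExceededError,
          "Token limit for model #{model_name} exceeded: " \
          "#{token_count} is more than #{max_tokens}"
  end
  token_count
end

guard_token_limit!(100, 2049, "gpt-3.5-turbo")  # => 100
# guard_token_limit!(5000, 2049, "gpt-3.5-turbo") raises TokenLimitExceededError
```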

#chattable? ⇒ Boolean

Indicates if the model is chattable.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 64

def chattable?
  OpenAI::Config::CHAT_MODELS.include?(name)
end

#completionable? ⇒ Boolean

Indicates if the model can do completions.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 69

def completionable?
  OpenAI::Config::COMPLETION_MODELS.include?(name)
end

#embeddable? ⇒ Boolean

Indicates if the model can do embeddings.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 79

def embeddable?
  OpenAI::Config::EMBEDDING_MODELS.include?(name)
end

#finetuneable? ⇒ Boolean

Indicates if the model is fine-tunable.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 84

def finetuneable?
  @permissions_.fetch("allow_fine_tuning")
end

#imageable? ⇒ Boolean

Indicates if the model can do image completions.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 74

def imageable?
  OpenAI::Config::IMAGE_MODELS.include?(name)
end
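
The Config-backed predicates (#chattable?, #completionable?, #embeddable?, #imageable?) all follow the same shape: membership of the model name in a frozen list. The list below is an assumed stand-in for OpenAI::Config::CHAT_MODELS, not the gem's actual constant.

```ruby
# Illustrative capability check; CHAT_MODELS here is a stand-in for
# OpenAI::Config::CHAT_MODELS.
CHAT_MODELS = ["gpt-4", "gpt-3.5-turbo"].freeze

def chattable?(name)
  CHAT_MODELS.include?(name)
end

chattable?("gpt-4")            # => true
chattable?("text-davinci-003") # => false
```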

#max_tokens ⇒ Integer

Returns the maximum number of tokens for the model.



# File 'lib/roseflow/openai/model.rb', line 103

def max_tokens
  OpenAI::Config::MAX_TOKENS.fetch(name, 2049)
end
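
The lookup relies on Hash#fetch with a default, so any model name missing from the table falls back to 2049 tokens. The table below is illustrative; only the fallback value comes from the source.

```ruby
# Illustrative token table; real values live in OpenAI::Config::MAX_TOKENS.
MAX_TOKENS = { "gpt-4" => 8192, "gpt-3.5-turbo" => 4096 }.freeze

MAX_TOKENS.fetch("gpt-4", 2049)         # => 8192
MAX_TOKENS.fetch("unknown-model", 2049) # => 2049
```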

#operations ⇒ Array<Symbol>

Returns a list of operations for the model.

TODO: OpenAI does not actually provide this information per model. Figure out a proper way to do this, if feasible.



# File 'lib/roseflow/openai/model.rb', line 59

def operations
  OperationHandler::OPERATION_CLASSES.keys
end

#sampleable? ⇒ Boolean

Indicates if the model can be sampled.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 94

def sampleable?
  @permissions_.fetch("allow_sampling")
end

#searchable_indices? ⇒ Boolean

Indicates if the model has searchable indices.

Returns:

  • (Boolean)


# File 'lib/roseflow/openai/model.rb', line 89

def searchable_indices?
  @permissions_.fetch("allow_search_indices")
end

#tokenizer ⇒ Roseflow::Tiktoken::Tokenizer

Tokenizer instance for the model.



# File 'lib/roseflow/openai/model.rb', line 27

def tokenizer
  @tokenizer_ ||= Roseflow::Tiktoken::Tokenizer.new(model: name)
end
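
The ||= in #tokenizer memoizes the instance: it is built on first access and the same object is returned on every later call. StubTokenizer below is an assumed stand-in for Roseflow::Tiktoken::Tokenizer.

```ruby
# Stand-in tokenizer to demonstrate the ||= memoization pattern.
class StubTokenizer
  def initialize(model:)
    @model = model
  end
end

class ModelSketch
  def initialize(name)
    @name = name
  end

  # Built once on first access, then the cached instance is reused.
  def tokenizer
    @tokenizer ||= StubTokenizer.new(model: @name)
  end
end

m = ModelSketch.new("gpt-4")
m.tokenizer.equal?(m.tokenizer) # => true (same memoized object)
```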