Class: RubyLLM::Moderation
- Inherits: Object
- Defined in: lib/ruby_llm/moderation.rb
Overview

Identify potentially harmful content in text. See platform.openai.com/docs/guides/moderation.
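The class is a thin value object around a provider's moderation response. As a minimal sketch (the `ModerationSketch` class and the sample payload below are invented for illustration, not the gem's actual code or a real API response), a stand-in with the same documented surface behaves like this:

```ruby
# Hypothetical stand-in mirroring the documented RubyLLM::Moderation surface.
# The results payload is an invented example, not a real provider response.
class ModerationSketch
  attr_reader :id, :model, :results

  def initialize(id:, model:, results:)
    @id = id
    @model = model
    @results = results
  end

  # True if any result in the response was flagged.
  def flagged?
    results.any? { |result| result['flagged'] }
  end

  # Unique category names flagged true across all results.
  def flagged_categories
    results.flat_map do |result|
      result['categories']&.select { |_category, flagged| flagged }&.keys || []
    end.uniq
  end
end

moderation = ModerationSketch.new(
  id: 'modr-123',
  model: 'omni-moderation-latest',
  results: [{ 'flagged' => true,
              'categories' => { 'harassment' => true, 'hate' => false } }]
)

puts moderation.flagged?                    # true
puts moderation.flagged_categories.inspect  # ["harassment"]
```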
Instance Attribute Summary

- #id ⇒ Object (readonly): Returns the value of attribute id.
- #model ⇒ Object (readonly): Returns the value of attribute model.
- #results ⇒ Object (readonly): Returns the value of attribute results.
Class Method Summary

- .moderate(input, model: nil, provider: nil, assume_model_exists: false, context: nil) ⇒ Object
Instance Method Summary

- #categories ⇒ Object: Get categories for the first result (most common case).
- #category_scores ⇒ Object: Get category scores for the first result (most common case).
- #content ⇒ Object: Convenience method to get content from moderation result.
- #flagged? ⇒ Boolean: Check if any content was flagged.
- #flagged_categories ⇒ Object: Get all flagged categories across all results.
- #initialize(id:, model:, results:) ⇒ Moderation (constructor): A new instance of Moderation.
Constructor Details
#initialize(id:, model:, results:) ⇒ Moderation
Returns a new instance of Moderation.
```ruby
# File 'lib/ruby_llm/moderation.rb', line 9

def initialize(id:, model:, results:)
  @id = id
  @model = model
  @results = results
end
```
Instance Attribute Details
#id ⇒ Object (readonly)
Returns the value of attribute id.
```ruby
# File 'lib/ruby_llm/moderation.rb', line 7

def id
  @id
end
```
#model ⇒ Object (readonly)
Returns the value of attribute model.
```ruby
# File 'lib/ruby_llm/moderation.rb', line 7

def model
  @model
end
```
#results ⇒ Object (readonly)
Returns the value of attribute results.
```ruby
# File 'lib/ruby_llm/moderation.rb', line 7

def results
  @results
end
```
Class Method Details
.moderate(input, model: nil, provider: nil, assume_model_exists: false, context: nil) ⇒ Object
```ruby
# File 'lib/ruby_llm/moderation.rb', line 15

def self.moderate(input, model: nil, provider: nil, assume_model_exists: false, context: nil)
  config = context&.config || RubyLLM.config
  model ||= config.default_moderation_model || 'omni-moderation-latest'
  model, provider_instance = Models.resolve(
    model,
    provider: provider,
    assume_exists: assume_model_exists,
    config: config
  )
  model_id = model.id
  provider_instance.moderate(input, model: model_id)
end
```
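The model is chosen through a fallback chain: an explicit `model:` argument wins, then the configured `default_moderation_model`, then the hard-coded `'omni-moderation-latest'`. The chain can be illustrated in plain Ruby (`ConfigSketch` below is an invented stand-in for the gem's config object):

```ruby
# Hypothetical stand-in for RubyLLM's config; only the field this
# example needs.
ConfigSketch = Struct.new(:default_moderation_model)

# Same fallback logic as `model ||= config.default_moderation_model || '...'`.
def resolve_model(explicit, config)
  explicit || config.default_moderation_model || 'omni-moderation-latest'
end

puts resolve_model(nil, ConfigSketch.new(nil))
# "omni-moderation-latest" (nothing set anywhere)
puts resolve_model(nil, ConfigSketch.new('text-moderation-stable'))
# configured default wins over the hard-coded fallback
puts resolve_model('my-model', ConfigSketch.new('text-moderation-stable'))
# explicit argument wins over everything
```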
Instance Method Details
#categories ⇒ Object
Get categories for the first result (most common case)
```ruby
# File 'lib/ruby_llm/moderation.rb', line 52

def categories
  results.first&.dig('categories') || {}
end
```
#category_scores ⇒ Object
Get category scores for the first result (most common case)
```ruby
# File 'lib/ruby_llm/moderation.rb', line 47

def category_scores
  results.first&.dig('category_scores') || {}
end
```
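Both #categories and #category_scores read only the first result and fall back to an empty hash, so an empty response never raises. The nil-safe `&.dig` pattern can be shown in isolation (the payloads below are invented examples):

```ruby
# Nil-safe first-result lookup, the same pattern #categories and
# #category_scores use. Payloads are hypothetical.
def first_categories(results)
  results.first&.dig('categories') || {}
end

puts first_categories([]).inspect
# {} (empty results: results.first is nil, &.dig short-circuits)
puts first_categories([{ 'categories' => { 'hate' => false } }]).inspect
# {"hate"=>false}
```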
#content ⇒ Object
Convenience method to get content from moderation result
```ruby
# File 'lib/ruby_llm/moderation.rb', line 30

def content
  results
end
```
#flagged? ⇒ Boolean
Check if any content was flagged
```ruby
# File 'lib/ruby_llm/moderation.rb', line 35

def flagged?
  results.any? { |result| result['flagged'] }
end
```
#flagged_categories ⇒ Object
Get all flagged categories across all results
```ruby
# File 'lib/ruby_llm/moderation.rb', line 40

def flagged_categories
  results.flat_map do |result|
    result['categories']&.select { |_category, flagged| flagged }&.keys || []
  end.uniq
end
```
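Unlike #categories, this method scans every result, keeps only the category keys whose value is true, and de-duplicates across results. The same aggregation can be traced on an invented two-result payload:

```ruby
# Same aggregation logic as #flagged_categories, run on a hypothetical
# two-result payload: select true categories per result, flatten, uniq.
def flagged_categories(results)
  results.flat_map do |result|
    result['categories']&.select { |_category, flagged| flagged }&.keys || []
  end.uniq
end

results = [
  { 'flagged' => true, 'categories' => { 'violence' => true, 'hate' => false } },
  { 'flagged' => true, 'categories' => { 'violence' => true, 'harassment' => true } }
]

puts flagged_categories(results).inspect
# ["violence", "harassment"]  (duplicate "violence" collapsed by uniq)
```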