Class: RubricLLM::Metrics::ContextPrecision
- Defined in:
- lib/rubric_llm/metrics/context_precision.rb
Constant Summary
- SYSTEM_PROMPT =
    "You are an evaluation judge. Assess whether the retrieved contexts are relevant to the question.
    Context precision measures if the retrieved documents are useful for answering the question.

    Respond with JSON only:
    {
      \"score\": <float 0.0-1.0>,
      \"context_scores\": [{\"index\": <int>, \"relevant\": <true/false>, \"reason\": \"<brief>\"}],
      \"reasoning\": \"<brief explanation>\"
    }"
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
Methods inherited from Base
Constructor Details
This class inherits a constructor from RubricLLM::Metrics::Base
Instance Method Details
#call(question:, context: []) ⇒ Object
# File 'lib/rubric_llm/metrics/context_precision.rb', line 18

def call(question:, context: [], **)
  return { score: nil, details: { error: "No context provided" } } if Array(context).empty?

  user_prompt = <<~PROMPT
    Question: #{question}

    Contexts:
    #{Array(context).each_with_index.map { |c, i| "#{i + 1}. #{c}" }.join("\n")}

    Evaluate how relevant each context is to the question.
  PROMPT

  result = judge_eval(system_prompt: SYSTEM_PROMPT, user_prompt:)
  normalize(result)
end
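The reply returned by the judge must match the JSON schema declared in SYSTEM_PROMPT before `normalize` can turn it into a metric result. A minimal, self-contained sketch of that contract, parsing an illustrative reply with Ruby's stdlib JSON (the sample score, indices, and reasons below are made-up placeholders, not output from the gem):

```ruby
require "json"

# An illustrative judge reply matching the shape required by SYSTEM_PROMPT.
raw = <<~JSON
  {
    "score": 0.5,
    "context_scores": [
      {"index": 1, "relevant": true, "reason": "directly answers the question"},
      {"index": 2, "relevant": false, "reason": "unrelated background"}
    ],
    "reasoning": "One of the two contexts is useful."
  }
JSON

result = JSON.parse(raw, symbolize_names: true)

# Overall precision score assigned by the judge.
puts result[:score]                                          # => 0.5

# Per-context verdicts: count how many contexts were judged relevant.
relevant = result[:context_scores].count { |c| c[:relevant] }
puts relevant                                                # => 1
```

Note that when `context` is empty, `#call` short-circuits with `score: nil` and an error detail, so no judge request is made at all.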