Class: Langchain::OutputParsers::OutputFixingParser
- Defined in:
- lib/langchain/output_parsers/output_fixing_parser.rb
Overview
Output Fixing Parser. Wraps another output parser and, when that parser raises an OutputParserException, asks the LLM to repair the malformed completion and parses the result again.
Instance Attribute Summary collapse
-
#llm ⇒ Object
readonly
Returns the value of attribute llm.
-
#parser ⇒ Object
readonly
Returns the value of attribute parser.
-
#prompt ⇒ Object
readonly
Returns the value of attribute prompt.
Class Method Summary collapse
-
.from_llm(llm:, parser:, prompt: nil) ⇒ Object
Creates a new instance of the class, falling back to the default naive fix prompt when no prompt is given.
Instance Method Summary collapse
-
#get_format_instructions ⇒ String
Calls get_format_instructions on the @parser.
-
#initialize(llm:, parser:, prompt:) ⇒ OutputFixingParser
constructor
Initializes a new instance of the class.
-
#parse(completion) ⇒ Object
Parses the output of an LLM call; if parsing fails with an OutputParserException, calls the LLM with a fix prompt in an attempt to get a correctly formatted response.
- #to_h ⇒ Object
Constructor Details
#initialize(llm:, parser:, prompt:) ⇒ OutputFixingParser
Initializes a new instance of the class.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 14

def initialize(llm:, parser:, prompt:)
  raise ArgumentError.new("llm must be an instance of Langchain::LLM got: #{llm.class}") unless llm.is_a?(Langchain::LLM::Base)
  raise ArgumentError.new("parser must be an instance of Langchain::OutputParsers got #{parser.class}") unless parser.is_a?(Langchain::OutputParsers::Base)
  raise ArgumentError.new("prompt must be an instance of Langchain::Prompt::PromptTemplate got #{prompt.class}") unless prompt.is_a?(Langchain::Prompt::PromptTemplate)

  @llm = llm
  @parser = parser
  @prompt = prompt
end
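The constructor validates each collaborator's type up front. A minimal, self-contained sketch of the same keyword-argument validation idiom (FakeLLMBase, FakeParserBase, and Fixer are illustrative stand-ins, not langchainrb classes):

```ruby
# Stand-in base classes; the real checks target Langchain::LLM::Base,
# Langchain::OutputParsers::Base, and Langchain::Prompt::PromptTemplate.
class FakeLLMBase; end
class FakeParserBase; end

class Fixer
  attr_reader :llm, :parser

  def initialize(llm:, parser:)
    raise ArgumentError, "llm must be a FakeLLMBase, got: #{llm.class}" unless llm.is_a?(FakeLLMBase)
    raise ArgumentError, "parser must be a FakeParserBase, got: #{parser.class}" unless parser.is_a?(FakeParserBase)

    @llm = llm
    @parser = parser
  end
end

Fixer.new(llm: FakeLLMBase.new, parser: FakeParserBase.new) # constructs fine

begin
  Fixer.new(llm: "not an llm", parser: FakeParserBase.new)
rescue ArgumentError => e
  puts e.message # "llm must be a FakeLLMBase, got: String"
end
```

Failing fast here means a misconfigured parser surfaces at construction time rather than on the first malformed completion.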
Instance Attribute Details
#llm ⇒ Object (readonly)
Returns the value of attribute llm.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 7

def llm
  @llm
end
#parser ⇒ Object (readonly)
Returns the value of attribute parser.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 7

def parser
  @parser
end
#prompt ⇒ Object (readonly)
Returns the value of attribute prompt.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 7

def prompt
  @prompt
end
Class Method Details
.from_llm(llm:, parser:, prompt: nil) ⇒ Object
Creates a new instance of the class, falling back to the default naive fix prompt when no prompt is given.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 67

def self.from_llm(llm:, parser:, prompt: nil)
  new(llm: llm, parser: parser, prompt: prompt || naive_fix_prompt)
end
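The only behavior .from_llm adds over the constructor is the prompt default. A self-contained sketch of that pattern (NaiveFixer and DEFAULT_PROMPT are illustrative names, not langchainrb's actual internals):

```ruby
class NaiveFixer
  # Illustrative default; the real naive fix prompt asks the LLM to repair
  # a completion given the format instructions and the parser error.
  DEFAULT_PROMPT = "Fix the completion so it satisfies the format instructions.".freeze

  attr_reader :llm, :parser, :prompt

  def initialize(llm:, parser:, prompt:)
    @llm = llm
    @parser = parser
    @prompt = prompt
  end

  # Mirrors .from_llm: fall back to the default when no prompt is supplied.
  def self.from_llm(llm:, parser:, prompt: nil)
    new(llm: llm, parser: parser, prompt: prompt || DEFAULT_PROMPT)
  end
end

fixer = NaiveFixer.from_llm(llm: :stub_llm, parser: :stub_parser)
puts fixer.prompt # prints the DEFAULT_PROMPT text
```

Callers who want custom repair instructions pass their own prompt template; everyone else gets a sensible default without extra ceremony.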
Instance Method Details
#get_format_instructions ⇒ String
Calls get_format_instructions on the @parser.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 35

def get_format_instructions
  parser.get_format_instructions
end
#parse(completion) ⇒ Object
Parses the output of an LLM call; if parsing fails with an OutputParserException, calls the LLM with a fix prompt in an attempt to get a correctly formatted response.
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 46

def parse(completion)
  parser.parse(completion)
rescue OutputParserException => e
  new_completion = llm.chat(
    messages: [{role: "user", content: prompt.format(
      instructions: parser.get_format_instructions,
      completion: completion,
      error: e
    )}]
  ).completion

  parser.parse(new_completion)
end
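The rescue-and-retry flow can be sketched end to end with pure-Ruby stubs (StubParser, StubLLM, and StubOutputParserException are hypothetical stand-ins; the real method formats the fix prompt via prompt.format and calls a Langchain::LLM chat model):

```ruby
require "json"

class StubOutputParserException < StandardError; end

class StubParser
  def get_format_instructions
    'Return a JSON object with a "name" key.'
  end

  def parse(text)
    JSON.parse(text)
  rescue JSON::ParserError => e
    raise StubOutputParserException, e.message
  end
end

class StubLLM
  # Always "fixes" the completion by returning valid JSON.
  def chat(messages:)
    Struct.new(:completion).new('{"name": "Klara"}')
  end
end

# Same shape as #parse: try once, and on failure ask the LLM to repair
# the completion, then parse the repaired text.
def fixing_parse(llm:, parser:, completion:)
  parser.parse(completion)
rescue StubOutputParserException => e
  fixed = llm.chat(
    messages: [{role: "user", content: "Fix this output.\n#{parser.get_format_instructions}\n#{completion}\n#{e}"}]
  ).completion
  parser.parse(fixed)
end

result = fixing_parse(llm: StubLLM.new, parser: StubParser.new, completion: "name: Klara")
puts result # => {"name"=>"Klara"}
```

Note that the real method makes only one repair attempt: if the fixed completion still fails to parse, the second parser.parse raises and the exception propagates to the caller.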
#to_h ⇒ Object
# File 'lib/langchain/output_parsers/output_fixing_parser.rb', line 23

def to_h
  {
    _type: "OutputFixingParser",
    parser: parser.to_h,
    prompt: prompt.to_h
  }
end