Class: Langchain::LLM::BaseResponse
- Inherits: Object
- Defined in: lib/langchain/llm/response/base_response.rb
Direct Known Subclasses
AI21Response, AnthropicResponse, AwsBedrockMetaResponse, AwsTitanResponse, CohereResponse, GoogleGeminiResponse, HuggingFaceResponse, LlamaCppResponse, MistralAIResponse, OllamaResponse, OpenAIResponse, ReplicateResponse
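Each subclass wraps one provider's raw payload and overrides the readers documented below. A minimal sketch of that pattern, assuming a hypothetical provider (the MyProviderResponse class and its payload shape are invented here for illustration, and a standalone copy of the base class is used so the snippet runs without the gem):

```ruby
# Standalone stand-in for the base class, mirroring
# lib/langchain/llm/response/base_response.rb.
class BaseResponse
  attr_reader :raw_response, :model

  def initialize(raw_response, model: nil)
    @raw_response = raw_response
    @model = model
  end

  def chat_completion
    raise NotImplementedError
  end
end

# Hypothetical provider adapter: digs the completion text
# out of an assumed hash shape.
class MyProviderResponse < BaseResponse
  def chat_completion
    raw_response.dig("choices", 0, "message", "content")
  end
end

response = MyProviderResponse.new(
  {"choices" => [{"message" => {"content" => "Hello!"}}]},
  model: "my-model-v1"
)
response.chat_completion # => "Hello!"
```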
Instance Attribute Summary
-
#context ⇒ Object
Saves the context in the response when running a RAG workflow via vectorsearch#ask().
-
#model ⇒ Object
readonly
Returns the value of attribute model.
-
#raw_response ⇒ Object
readonly
Returns the value of attribute raw_response.
Instance Method Summary
-
#chat_completion ⇒ String
Returns the chat completion text.
-
#chat_completions ⇒ Array<String>
Returns the chat completion candidates.
-
#completion ⇒ String
Returns the completion text.
-
#completion_tokens ⇒ Integer
Number of tokens utilized to generate the completion.
-
#completions ⇒ Array<String>
Returns the completion candidates.
-
#created_at ⇒ Time
Returns the timestamp when the response was created.
-
#embedding ⇒ Array<Float>
Returns the first embedding.
-
#embeddings ⇒ Array<Array>
Returns the embeddings.
-
#initialize(raw_response, model: nil) ⇒ BaseResponse
constructor
A new instance of BaseResponse.
-
#prompt_tokens ⇒ Integer
Number of tokens utilized in the prompt.
-
#total_tokens ⇒ Integer
Total number of tokens utilized.
Constructor Details
#initialize(raw_response, model: nil) ⇒ BaseResponse
Returns a new instance of BaseResponse.
# File 'lib/langchain/llm/response/base_response.rb', line 11

def initialize(raw_response, model: nil)
  @raw_response = raw_response
  @model = model
end
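The constructor only stores its arguments, so the readers return exactly what was passed in. A quick sketch (the payload hash below is made up, and a standalone copy of the class is used so the snippet runs without the gem):

```ruby
# Standalone copy of the constructor and readers from
# lib/langchain/llm/response/base_response.rb, for illustration only.
class BaseResponse
  attr_reader :raw_response, :model

  def initialize(raw_response, model: nil)
    @raw_response = raw_response
    @model = model
  end
end

response = BaseResponse.new({"id" => "resp-1"}, model: "test-model")
response.raw_response # => {"id" => "resp-1"}
response.model        # => "test-model"
```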
Instance Attribute Details
#context ⇒ Object
Saves the context in the response when running a RAG workflow via vectorsearch#ask().
# File 'lib/langchain/llm/response/base_response.rb', line 9

def context
  @context
end
#model ⇒ Object (readonly)
Returns the value of attribute model.
# File 'lib/langchain/llm/response/base_response.rb', line 6

def model
  @model
end
#raw_response ⇒ Object (readonly)
Returns the value of attribute raw_response.
# File 'lib/langchain/llm/response/base_response.rb', line 6

def raw_response
  @raw_response
end
Instance Method Details
#chat_completion ⇒ String
Returns the chat completion text.
# File 'lib/langchain/llm/response/base_response.rb', line 35

def chat_completion
  raise NotImplementedError
end
#chat_completions ⇒ Array<String>
Returns the chat completion candidates.
# File 'lib/langchain/llm/response/base_response.rb', line 56

def chat_completions
  raise NotImplementedError
end
#completion ⇒ String
Returns the completion text.
# File 'lib/langchain/llm/response/base_response.rb', line 27

def completion
  raise NotImplementedError
end
#completion_tokens ⇒ Integer
Number of tokens utilized to generate the completion.
# File 'lib/langchain/llm/response/base_response.rb', line 77

def completion_tokens
  raise NotImplementedError
end
#completions ⇒ Array<String>
Returns the completion candidates.
# File 'lib/langchain/llm/response/base_response.rb', line 49

def completions
  raise NotImplementedError
end
#created_at ⇒ Time
Returns the timestamp when the response was created.
# File 'lib/langchain/llm/response/base_response.rb', line 19

def created_at
  raise NotImplementedError
end
#embedding ⇒ Array<Float>
Returns the first embedding.
# File 'lib/langchain/llm/response/base_response.rb', line 42

def embedding
  raise NotImplementedError
end
#embeddings ⇒ Array<Array>
Returns the embeddings.
# File 'lib/langchain/llm/response/base_response.rb', line 63

def embeddings
  raise NotImplementedError
end
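Per the documented contract, #embedding returns the first entry of #embeddings. A hedged sketch of how a subclass might satisfy both (the FakeEmbeddingResponse class and its payload shape are hypothetical):

```ruby
# Hypothetical embedding response adapter, for illustration only.
class FakeEmbeddingResponse
  def initialize(raw_response)
    @raw_response = raw_response
  end

  # All embedding vectors in the payload (Array<Array<Float>>).
  def embeddings
    @raw_response.fetch("data").map { |d| d.fetch("vector") }
  end

  # The first embedding (Array<Float>), mirroring the
  # base class's documented contract.
  def embedding
    embeddings.first
  end
end

payload = {"data" => [{"vector" => [0.1, 0.2]}, {"vector" => [0.3, 0.4]}]}
response = FakeEmbeddingResponse.new(payload)
response.embedding  # => [0.1, 0.2]
response.embeddings # => [[0.1, 0.2], [0.3, 0.4]]
```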
#prompt_tokens ⇒ Integer
Number of tokens utilized in the prompt.
# File 'lib/langchain/llm/response/base_response.rb', line 70

def prompt_tokens
  raise NotImplementedError
end
#total_tokens ⇒ Integer
Total number of tokens utilized.
# File 'lib/langchain/llm/response/base_response.rb', line 84

def total_tokens
  raise NotImplementedError
end
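Providers that report usage typically expose the three counters so that total_tokens equals prompt_tokens plus completion_tokens. A hypothetical sketch of a subclass reading those counters from an assumed usage hash (the FakeUsageResponse class and payload shape are invented for illustration):

```ruby
# Hypothetical usage-reporting adapter, for illustration only.
class FakeUsageResponse
  def initialize(raw_response)
    @raw_response = raw_response
  end

  # Tokens consumed by the prompt.
  def prompt_tokens
    @raw_response.dig("usage", "prompt_tokens")
  end

  # Tokens generated in the completion.
  def completion_tokens
    @raw_response.dig("usage", "completion_tokens")
  end

  # Overall token count reported by the provider.
  def total_tokens
    @raw_response.dig("usage", "total_tokens")
  end
end

usage = {"usage" => {"prompt_tokens" => 12, "completion_tokens" => 30, "total_tokens" => 42}}
response = FakeUsageResponse.new(usage)
response.total_tokens # => 42
```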