Method: Langchain::LLM::LlamaCpp#complete

Defined in:
lib/langchain/llm/llama_cpp.rb

#complete(prompt:, n_predict: 128) ⇒ String

Returns the completed prompt.

Parameters:

  • prompt (String)

    The prompt to complete

  • n_predict (Integer) (defaults to: 128)

    The number of tokens to predict

Returns:

  • (String)

    The completed prompt


# File 'lib/langchain/llm/llama_cpp.rb', line 51

def complete(prompt:, n_predict: 128)
  # contexts do not appear to be stateful when it comes to completion, so re-use the same one
  context = completion_context
  ::LLaMACpp.generate(context, prompt, n_predict: n_predict)
end
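
Usage sketch (not from the library docs): the snippet below assumes a local GGUF model file and that the client is constructed with a model_path: keyword, which matches common langchainrb examples; verify the exact initializer signature against your installed version. The LLAMA_MODEL_PATH environment variable is a hypothetical placeholder.

llm = Langchain::LLM::LlamaCpp.new(
  model_path: ENV["LLAMA_MODEL_PATH"] # assumed env var pointing at a local .gguf file
)

# n_predict caps how many new tokens are generated (defaults to 128).
text = llm.complete(prompt: "The capital of France is", n_predict: 32)
puts text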