# 💎🔗 Langchain.rb for Rails
The fastest way to sprinkle AI ✨ on top of your Rails app. Add OpenAI-powered question-and-answering in minutes.
Available for paid consulting engagements! Email me.
## Dependencies
- Ruby 3.0+
- Postgres 11+
## Installation
Install the gem and add it to the application's Gemfile by executing:
```bash
bundle add langchainrb_rails
```
If bundler is not being used to manage dependencies, install the gem by executing:
```bash
gem install langchainrb_rails
```
## Configuration w/ Pgvector (requires Postgres 11+)
- Run the Rails generator to add vectorsearch to your ActiveRecord model
```bash
rails generate langchainrb_rails:pgvector --model=Product --llm=openai
```
This adds the required dependencies to your Gemfile, creates the `config/initializers/langchainrb_rails.rb` initializer file and database migrations, and adds the necessary code to the ActiveRecord model to enable vectorsearch.
- Bundle and migrate:

```bash
bundle install && rails db:migrate
```

- Set the `OPENAI_API_KEY` env var to your OpenAI API key (https://platform.openai.com/account/api-keys):

```ruby
ENV["OPENAI_API_KEY"]=
```

- Generate embeddings for your model:

```ruby
Product.embed!
```

This can take a while depending on the number of database records.
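For reference, here is a sketch of what the generated `config/initializers/langchainrb_rails.rb` typically contains for the pgvector/OpenAI combination above; treat the file the generator actually produces in your app as the source of truth:

```ruby
# config/initializers/langchainrb_rails.rb (sketch)
LangchainrbRails.configure do |config|
  # Wire vectorsearch to pgvector, using OpenAI to generate embeddings.
  config.vectorsearch = Langchain::Vectorsearch::Pgvector.new(
    llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])
  )
end
```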
## Usage
### Question and Answering
```ruby
Product.ask("list the brands of shoes that are in stock")
```
Returns a String with a natural language answer. The answer is assembled using the following steps:
- An embedding is generated for the passed-in `question` using the selected LLM.
- A cosine similarity search finds the records that most closely match the question's embedding.
- A prompt is created from the question, with the matching records' `#as_vector` representations added as context.
- The prompt is passed to the LLM to generate an answer.
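The similarity step above can be sketched in plain Ruby with toy embeddings and records (no LLM calls; real embeddings come from the configured LLM and are stored per record):

```ruby
# Cosine similarity between two equal-length vectors.
def cosine_similarity(a, b)
  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  dot / (norm_a * norm_b)
end

# Made-up record embeddings standing in for the stored vectors.
records = {
  "running shoes" => [0.9, 0.1, 0.0],
  "t-shirt"       => [0.1, 0.9, 0.0]
}

question_embedding = [0.8, 0.2, 0.0] # would be produced by the LLM

# Pick the record whose embedding is closest to the question's.
best = records.max_by { |_, emb| cosine_similarity(question_embedding, emb) }
puts best.first # => "running shoes"
```

The matching records are then serialized (via `#as_vector`) into the prompt as context for the final completion.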
### Similarity Search
```ruby
Product.similarity_search("t-shirt")
```
Returns an ActiveRecord relation of the records that most closely match the query, using vector search.
## Customization
### Changing the vector representation of a record
By default, embeddings are generated by calling the following method on your model instance:
```ruby
to_json(except: :embedding)
```
You can override this by defining an #as_vector method in your model:
```ruby
def as_vector
  { name: name, description: description, category: category.name, ... }.to_json
end
```
Re-generate embeddings after modifying this method:
```ruby
Product.embed!
```
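To see the difference between the two representations, here is a toy illustration outside of Rails, comparing the default serialization (the record's JSON minus the embedding column) with a hand-picked `#as_vector`-style payload. The `product` hash and its fields are made up for the example:

```ruby
require "json"

# Stand-in for a record's attributes, including its embedding column.
product = { name: "Trail Runner", sku: "TR-1", embedding: [0.1, 0.2] }

# Default behavior: serialize everything except the embedding.
default_vector = product.reject { |k, _| k == :embedding }.to_json

# Custom #as_vector-style payload: only the fields you choose.
custom_vector = { name: product[:name] }.to_json

puts default_vector # => {"name":"Trail Runner","sku":"TR-1"}
puts custom_vector  # => {"name":"Trail Runner"}
```

A leaner vector representation keeps irrelevant columns out of the embedding, which can improve match quality.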
## Rails Generators
### Pgvector Generator
```bash
rails generate langchainrb_rails:pgvector --model=Product --llm=openai
```
### Pinecone Generator - adds vectorsearch to your ActiveRecord model
```bash
rails generate langchainrb_rails:pinecone --model=Product --llm=openai
```
### Qdrant Generator - adds vectorsearch to your ActiveRecord model
```bash
rails generate langchainrb_rails:qdrant --model=Product --llm=openai
```
Available `--llm` options: `cohere`, `google_palm`, `hugging_face`, `llama_cpp`, `ollama`, `openai`, and `replicate`. The selected LLM will be used to generate embeddings and completions.

The `--model` option specifies which ActiveRecord model vectorsearch capabilities will be added to.
The Pinecone generator does the following:
- Creates the `config/initializers/langchainrb_rails.rb` initializer file
- Adds the necessary code to the ActiveRecord model to enable vectorsearch
- Adds the `pinecone` gem to the Gemfile
### Prompt Generator - adds prompt templating capabilities to your ActiveRecord model
```bash
rails generate langchainrb_rails:prompt
```
This generator adds the following files to your Rails project:
- An ActiveRecord `Prompt` model at `app/models/prompt.rb`
- A Rails migration to create the `prompts` table
You can then use the Prompt model to create and manage prompts for your model.
Example usage:
```ruby
prompt = Prompt.create!(template: "Tell me a {adjective} joke about {subject}.")
prompt.render(adjective: "funny", subject: "elephants")
# => "Tell me a funny joke about elephants."
```
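Conceptually, `render` performs simple `{placeholder}` substitution. A toy plain-Ruby reimplementation (not the gem's actual code) behaves like this:

```ruby
# Replace each {word} placeholder in the template with the
# matching keyword argument; raises if a variable is missing.
def render_template(template, vars)
  template.gsub(/\{(\w+)\}/) { vars.fetch(Regexp.last_match(1).to_sym) }
end

puts render_template("Tell me a {adjective} joke about {subject}.",
                     adjective: "funny", subject: "elephants")
# => "Tell me a funny joke about elephants."
```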
### Assistant Generator - adds assistant capabilities to your ActiveRecord model
```bash
rails generate langchainrb_rails:assistant
```