Class: Langchain::Vectorsearch::Pgvector

Inherits:
Base
  • Object
Defined in:
lib/langchainrb_overrides/vectorsearch/pgvector.rb

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(llm:) ⇒ Pgvector

Returns a new instance of Pgvector.

Parameters:

  • llm (Object)

    The LLM client to use



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 25

def initialize(llm:)
  # If the line below is called, the generator fails as calls to
  # LangchainrbRails.config.vectorsearch will generate an exception.
  # These happen in the template files.
  # depends_on "neighbor"

  @operator = OPERATORS[DEFAULT_OPERATOR]

  super(llm: llm)
end
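
Example (a minimal sketch, not part of the library source; assumes an OpenAI API key and that langchainrb_rails is configured in the host app):

# Any Langchain::LLM adapter should work here; OpenAI is shown purely as an example.
llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# Only the LLM client is passed; the backing model is expected to be wired up
# elsewhere (e.g. via langchainrb_rails configuration), not via constructor arguments.
pgvector = Langchain::Vectorsearch::Pgvector.new(llm: llm)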

Instance Attribute Details

#llm ⇒ Object (readonly)

The LLM client used to generate embeddings

Gem requirements:

gem "pgvector", "~> 0.2"

Usage:

pgvector = Langchain::Vectorsearch::Pgvector.new(llm:)


# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 18

def llm
  @llm
end

#model ⇒ Object

Returns the value of attribute model.



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 19

def model
  @model
end

#operator ⇒ Object (readonly)

The distance operator used for nearest-neighbor similarity search


# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 18

def operator
  @operator
end

Instance Method Details

#add_texts(texts:, ids:) ⇒ Array<Integer>

Add a list of texts to the index

Parameters:

  • texts (Array<String>)

    The texts to add to the index

  • ids (Array<String>)

    The ids to add to the index, in the same order as the texts

Returns:

  • (Array<Integer>)

    The ids of the added texts.



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 40

def add_texts(texts:, ids:)
  embeddings = texts.map do |text|
    llm.embed(text: text).embedding
  end

  # I believe the records returned by #find must be in the
  # same order as the embeddings. I _think_ this works for uuid ids but didn't test
  # deeply.
  # TODO - implement find_each so we don't load all records into memory
  model.find(ids).each.with_index do |record, i|
    record.update_column(:embedding, embeddings[i])
  end
end
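
Example (a hedged sketch; Product is a hypothetical ActiveRecord model with a vector embedding column managed by langchainrb_rails):

products = Product.limit(3)

# Embeds each text and writes it to the embedding column of the record with the matching id
pgvector.add_texts(
  texts: products.map(&:name),
  ids: products.map(&:id)
)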

#ask(question:, k: 4) {|String| ... } ⇒ String

Ask a question and return the answer

Parameters:

  • question (String)

    The question to ask

  • k (Integer) (defaults to: 4)

    The number of results to have in context

Yields:

  • (String)

    Stream responses back one String at a time

Returns:

  • (String)

    The answer to the question



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 110

def ask(question:, k: 4, &block)
  # Noisy as the embedding column has a lot of data
  ActiveRecord::Base.logger.silence do
    search_results = similarity_search(query: question, k: k)

    context = search_results.map do |result|
      result.as_vector
    end
    context = context.join("\n---\n")

    prompt = generate_rag_prompt(question: question, context: context)

    messages = [{role: "user", content: prompt}]
    llm.chat(messages: messages, &block)
  end
end
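
Example (a hedged sketch; the question and the underlying records are hypothetical):

pgvector.ask(question: "Which products are waterproof?", k: 4) do |chunk|
  print chunk # stream the answer back one chunk at a time
end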

#create_default_schemaObject

Invoke a rake task that will create an initializer (`config/initializers/langchain.rb`) file and db/migrations/* files



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 70

def create_default_schema
  Rake::Task["pgvector"].invoke
end
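
For reference (a hedged sketch; assumes the gem's rake tasks are loaded, e.g. inside a Rails app):

# Invokes the "pgvector" rake task shown above, which generates
# config/initializers/langchain.rb and the db/migrations/* files
pgvector.create_default_schema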

#destroy_default_schemaObject

Destroy default schema



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 75

def destroy_default_schema
  # Tell the user to rollback the migration
end

#remove_texts(ids:) ⇒ Boolean

Remove vectors from the index

Parameters:

  • ids (Array<String>)

    The ids of the vectors to remove

Returns:

  • (Boolean)

    true



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 62

def remove_texts(ids:)
  # Since the record is being destroyed and the `embedding` is a column on the record,
  # we don't need to do anything here.
  true
end
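
Example (a hedged sketch; Product is a hypothetical model). Because the embedding lives on the record itself, deleting the records is enough and #remove_texts is effectively a no-op:

ids = [1, 2, 3]

# Destroying the records also removes their embedding column values,
# so there is nothing left for #remove_texts to clean up.
Product.where(id: ids).destroy_all

pgvector.remove_texts(ids: ids) # => true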

#similarity_search(query:, k: 4) ⇒ ActiveRecord::Relation

Search for similar texts in the index. TODO: drop the named `query:` param so it is the same interface as #ask?

Parameters:

  • query (String)

    The text to search for

  • k (Integer) (defaults to: 4)

    The number of top results to return

Returns:

  • (ActiveRecord::Relation)

    The model records nearest to the query embedding



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 84

def similarity_search(query:, k: 4)
  embedding = llm.embed(text: query).embedding

  similarity_search_by_vector(
    embedding: embedding,
    k: k
  )
end
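
Example (a hedged sketch; the query string is hypothetical):

# Returns the k records whose stored embeddings are nearest to the embedded query
results = pgvector.similarity_search(query: "lightweight rain jacket", k: 4)

results.each { |record| puts record.as_vector }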

#similarity_search_by_vector(embedding:, k: 4) ⇒ ActiveRecord::Relation

Search for similar texts in the index by the passed-in vector. You must generate your own vector using the same LLM that generated the embeddings stored in the Vectorsearch DB. TODO: drop the named `embedding:` param so it is the same interface as #ask?

Parameters:

  • embedding (Array<Float>)

    The vector to search for

  • k (Integer) (defaults to: 4)

    The number of top results to return

Returns:

  • (ActiveRecord::Relation)

    The model records nearest to the given embedding



# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 99

def similarity_search_by_vector(embedding:, k: 4)
  model
    .nearest_neighbors(:embedding, embedding, distance: operator)
    .limit(k)
end
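
Example (a hedged sketch): the vector must come from the same LLM that produced the stored embeddings.

embedding = llm.embed(text: "lightweight rain jacket").embedding

nearest = pgvector.similarity_search_by_vector(embedding: embedding, k: 4)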

#update_texts(texts:, ids:) ⇒ Object

Update a list of texts in the index by re-embedding them and overwriting the stored embeddings


# File 'lib/langchainrb_overrides/vectorsearch/pgvector.rb', line 54

def update_texts(texts:, ids:)
  add_texts(texts: texts, ids: ids)
end
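
Example (a hedged sketch; Product is a hypothetical model). Updating simply re-embeds the texts for the given ids via #add_texts:

product = Product.first
product.update!(name: "Waterproof trail shoe")

# Re-embeds the changed text and overwrites the stored embedding
pgvector.update_texts(texts: [product.name], ids: [product.id])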