Class: Sequent::Core::Persistors::ReplayOptimizedPostgresPersistor
- Inherits: Object
- Object
- Sequent::Core::Persistors::ReplayOptimizedPostgresPersistor
- Includes:
- Persistor
- Defined in:
- lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb
Overview
The ReplayOptimizedPostgresPersistor is optimized for bulk loading records into a Postgres database.
Depending on the number of records it uses CSV import; otherwise statements are batched using normal SQL.
Rebuilding the view state (or projection) of an aggregate typically consists of an initial insert followed by many updates and possibly a delete. With a normal Persistor (like the ActiveRecordPersistor) each action is executed against the database. This persistor builds an in-memory store first and flushes it to the database at the end, which can significantly reduce the number of queries: e.g. 1 insert followed by 6 updates becomes just a single insert with this Persistor.
After a lot of experimenting this turned out to be the fastest way to do bulk inserts into the database. You can tweak the number of records inserted via CSV through insert_with_csv_size
before it flushes to the database to gain (or lose) speed.
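The accumulate-then-flush idea can be sketched as follows. This is a hypothetical illustration, not the Sequent implementation; InMemoryStore and its method names are invented for the example:

```ruby
require 'set'

# Hypothetical sketch: accumulate all changes in memory and flush once,
# so an insert plus six updates still costs only one database write.
class InMemoryStore
  attr_reader :writes_issued

  def initialize
    @records = Set.new
    @writes_issued = 0
  end

  # Inserting and updating only touch the in-memory object...
  def insert(record)
    @records << record
  end

  def update(record, attrs)
    record.merge!(attrs)
  end

  # ...so flushing issues one INSERT per record, not one query per action.
  def flush
    @writes_issued += @records.size
    @records.clear
  end
end

store = InMemoryStore.new
invoice = { id: 1, status: 'new' }
store.insert(invoice)
6.times { |i| store.update(invoice, status: "step-#{i}") }
store.flush
# store.writes_issued => 1: seven actions collapsed into a single write
```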
It is highly recommended to create indices on the in-memory record_store to speed up processing. By default all records are indexed by aggregate_id if they have such a property.
Example:
class InvoiceProjector < Sequent::Core::Projector
  on RecipientMovedEvent do |event|
    update_all_records InvoiceRecord, recipient_id: event.recipient.aggregate_id do |record|
      record.recipient_street = event.recipient.street
    end
  end
end
In this case it is wise to create an index on InvoiceRecord on recipient_id, just as you would in the database.
Example:
ReplayOptimizedPostgresPersistor.new(
  50,
  {InvoiceRecord => [[:recipient_id]]}
)
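A minimal sketch of what such an in-memory index buys: grouping records by the indexed column turns a lookup into a hash read instead of a scan over every record. The Hash-of-Sets shape and the record layout here are assumptions for illustration:

```ruby
require 'set'

# Index records by recipient_id: each indexed value maps to the set of
# records carrying it, so lookups avoid scanning the whole record store.
index = Hash.new { |h, k| h[k] = Set.new }

record_a = { aggregate_id: 'inv-1', recipient_id: 'rec-9' }
record_b = { aggregate_id: 'inv-2', recipient_id: 'rec-9' }
index[record_a[:recipient_id]] << record_a
index[record_b[:recipient_id]] << record_b

index['rec-9'].size   # => 2, found without scanning all records
index['rec-404'].size # => 0
```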
Defined Under Namespace
Modules: InitStruct
Classes: Index
Instance Attribute Summary collapse
-
#insert_with_csv_size ⇒ Object
Returns the value of attribute insert_with_csv_size.
-
#record_store ⇒ Object
readonly
Returns the value of attribute record_store.
Class Method Summary collapse
Instance Method Summary collapse
- #clear ⇒ Object
- #commit ⇒ Object
- #create_or_update_record(record_class, values, created_at = Time.now) {|record| ... } ⇒ Object
- #create_record(record_class, values) {|record| ... } ⇒ Object
- #create_records(record_class, array_of_value_hashes) ⇒ Object
- #delete_all_records(record_class, where_clause) ⇒ Object
- #delete_record(record_class, record) ⇒ Object
- #do_with_record(record_class, where_clause) {|record| ... } ⇒ Object
- #do_with_records(record_class, where_clause) ⇒ Object
- #find_records(record_class, where_clause) ⇒ Object
- #get_record(record_class, where_clause) ⇒ Object
- #get_record!(record_class, where_clause) ⇒ Object
-
#initialize(insert_with_csv_size = 50, indices = {}) ⇒ ReplayOptimizedPostgresPersistor
constructor
insert_with_csv_size: the number of records to insert in a single batch.
- #last_record(record_class, where_clause) ⇒ Object
- #update_all_records(record_class, where_clause, updates) ⇒ Object
- #update_record(record_class, event, where_clause = {aggregate_id: event.aggregate_id}, options = {}) {|record| ... } ⇒ Object
Constructor Details
#initialize(insert_with_csv_size = 50, indices = {}) ⇒ ReplayOptimizedPostgresPersistor
insert_with_csv_size
number of records to insert in a single batch
indices
Hash of indices to create in memory. Greatly speeds up replaying.
The key corresponds to the Record to index.
The value contains a list of lists naming the columns to index. E.g. [[:first_index_column], [:another_index, :with_two_columns]]
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 156

def initialize(insert_with_csv_size = 50, indices = {})
  @insert_with_csv_size = insert_with_csv_size
  @record_store = Hash.new { |h, k| h[k] = Set.new }
  @record_index = Index.new(indices)
end
Instance Attribute Details
#insert_with_csv_size ⇒ Object
Returns the value of attribute insert_with_csv_size.
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 50

def insert_with_csv_size
  @insert_with_csv_size
end
#record_store ⇒ Object (readonly)
Returns the value of attribute record_store.
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 49

def record_store
  @record_store
end
Class Method Details
.struct_cache ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 52

def self.struct_cache
  @struct_cache ||= {}
end
Instance Method Details
#clear ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 341

def clear
  @record_store.clear
  @record_index.clear
end
#commit ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 294

def commit
  @record_store.each do |clazz, records|
    @column_cache ||= {}
    @column_cache[clazz.name] ||= clazz.columns.reduce({}) do |hash, column|
      hash.merge({ column.name => column })
    end
    if records.size > @insert_with_csv_size
      csv = CSV.new("")
      column_names = clazz.column_names.reject { |name| name == "id" }
      records.each do |record|
        csv << column_names.map do |column_name|
          cast_value_to_column_type(clazz, column_name, record)
        end
      end
      buf = ''
      conn = ActiveRecord::Base.connection.raw_connection
      copy_data = StringIO.new csv.string
      conn.transaction do
        conn.copy_data("COPY #{clazz.table_name} (#{column_names.join(",")}) FROM STDIN WITH csv") do
          while copy_data.read(1024, buf)
            conn.put_copy_data(buf)
          end
        end
      end
    else
      clazz.unscoped do
        inserts = []
        column_names = clazz.column_names.reject { |name| name == "id" }
        prepared_values = (1..column_names.size).map { |i| "$#{i}" }.join(",")
        records.each do |record|
          values = column_names.map do |column_name|
            cast_value_to_column_type(clazz, column_name, record)
          end
          inserts << values
        end
        sql = %Q{insert into #{clazz.table_name} (#{column_names.join(",")}) values (#{prepared_values})}
        inserts.each do |insert|
          clazz.connection.raw_connection.async_exec(sql, insert)
        end
      end
    end
  end
ensure
  clear
end
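The branch in #commit chooses between two flush paths: above the insert_with_csv_size threshold, records go to Postgres via COPY from CSV; at or below it, they are inserted one by one with prepared statements. A simplified sketch of just that decision (flush_strategy is a hypothetical helper name, not part of the API):

```ruby
# Mirrors the `records.size > @insert_with_csv_size` test in #commit:
# strictly more records than the threshold selects the CSV COPY path.
def flush_strategy(record_count, insert_with_csv_size = 50)
  record_count > insert_with_csv_size ? :csv_copy : :batched_inserts
end

flush_strategy(200) # => :csv_copy
flush_strategy(10)  # => :batched_inserts
flush_strategy(50)  # => :batched_inserts (threshold is exclusive)
```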
#create_or_update_record(record_class, values, created_at = Time.now) {|record| ... } ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 215

def create_or_update_record(record_class, values, created_at = Time.now)
  record = get_record(record_class, values)
  unless record
    record = create_record(record_class, values.merge(created_at: created_at))
  end
  yield record if block_given?
  @record_index.update(record_class, record)
  record
end
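The upsert semantics above (fetch the record if present, otherwise create it, then yield it to the block) can be emulated with a plain Hash store. The names here are illustrative, not the Sequent API surface:

```ruby
# Hypothetical sketch of fetch-or-create-then-mutate semantics.
def create_or_update(store, key, defaults)
  record = store[key] ||= defaults.dup # create only on first sight
  yield record if block_given?         # block mutates either way
  record
end

store = {}
create_or_update(store, 'inv-1', status: 'new') { |r| r[:status] = 'open' }
create_or_update(store, 'inv-1', status: 'new') { |r| r[:status] = 'paid' }

store.size              # => 1, the second call updated rather than created
store['inv-1'][:status] # => 'paid'
```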
#create_record(record_class, values) {|record| ... } ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 173

def create_record(record_class, values)
  column_names = record_class.column_names
  values = record_class.column_defaults.with_indifferent_access.merge(values)
  values.merge!(updated_at: values[:created_at]) if column_names.include?("updated_at")
  struct_class_name = "#{record_class.to_s}Struct"
  if self.class.struct_cache.has_key?(struct_class_name)
    struct_class = self.class.struct_cache[struct_class_name]
  else
    # We create a struct on the fly.
    # Since the replay happens in memory we implement the ==, eql? and hash methods
    # to point to the same object. A record is the same if and only if they point to
    # the same object. These methods are necessary since we use Set instead of [].
    class_def = <<-EOD
      #{struct_class_name} = Struct.new(*#{column_names.map(&:to_sym)})
      class #{struct_class_name}
        include InitStruct
        def ==(other)
          self.equal?(other)
        end
        def hash
          self.object_id.hash
        end
      end
    EOD
    eval("#{class_def}")
    struct_class = ReplayOptimizedPostgresPersistor.const_get(struct_class_name)
    self.class.struct_cache[struct_class_name] = struct_class
  end
  record = struct_class.new.set_values(values)
  yield record if block_given?
  @record_store[record_class] << record
  @record_index.add(record_class, record)
  record
end
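The identity semantics the generated structs define matter because the record store is a Set: two records with identical attribute values must remain distinct members. A self-contained sketch (InvoiceStruct is a hypothetical stand-in for the generated "#{record_class}Struct"):

```ruby
require 'set'

# ==, eql? and hash are defined via object identity, so equal-valued
# records stay distinct inside a Set.
InvoiceStruct = Struct.new(:aggregate_id, :amount) do
  def ==(other)
    equal?(other)
  end
  alias_method :eql?, :==

  def hash
    object_id.hash
  end
end

store = Set.new
store << InvoiceStruct.new('a-1', 10)
store << InvoiceStruct.new('a-1', 10) # same values, different record

store.size # => 2; a plain value-based Struct would collapse these into one
```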
#create_records(record_class, array_of_value_hashes) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 211

def create_records(record_class, array_of_value_hashes)
  array_of_value_hashes.each { |values| create_record(record_class, values) }
end
#delete_all_records(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 236

def delete_all_records(record_class, where_clause)
  find_records(record_class, where_clause).each do |record|
    delete_record(record_class, record)
  end
end
#delete_record(record_class, record) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 242

def delete_record(record_class, record)
  @record_store[record_class].delete(record)
  @record_index.remove(record_class, record)
end
#do_with_record(record_class, where_clause) {|record| ... } ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 264

def do_with_record(record_class, where_clause)
  record = get_record!(record_class, where_clause)
  yield record
  @record_index.update(record_class, record)
end
#do_with_records(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 256

def do_with_records(record_class, where_clause)
  records = find_records(record_class, where_clause)
  records.each do |record|
    yield record
    @record_index.update(record_class, record)
  end
end
#find_records(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 270

def find_records(record_class, where_clause)
  if @record_index.use_index?(record_class, where_clause)
    @record_index.find(record_class, where_clause)
  else
    @record_store[record_class].select do |record|
      where_clause.all? do |k, v|
        expected_value = v.kind_of?(Symbol) ? v.to_s : v
        actual_value = record[k.to_sym]
        actual_value = actual_value.to_s if actual_value.kind_of? Symbol
        if expected_value.kind_of?(Array)
          expected_value.include?(actual_value)
        else
          actual_value == expected_value
        end
      end
    end
  end.dup
end
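The fallback scan applies specific matching semantics: symbols are coerced to strings on both sides of the comparison, and an Array value acts as an IN-list. A self-contained sketch of just that matching logic (scan and the sample records are illustrative, not the Sequent API):

```ruby
records = [
  { aggregate_id: 'a-1', status: 'open' },
  { aggregate_id: 'a-2', status: 'paid' },
  { aggregate_id: 'a-3', status: 'open' },
]

# Mirrors the where_clause matching in #find_records' fallback branch.
def scan(records, where_clause)
  records.select do |record|
    where_clause.all? do |key, expected|
      expected = expected.to_s if expected.is_a?(Symbol) # symbol coercion
      actual = record[key.to_sym]
      actual = actual.to_s if actual.is_a?(Symbol)
      expected.is_a?(Array) ? expected.include?(actual) : actual == expected
    end
  end
end

scan(records, status: :open).size             # => 2 (symbol coerced to 'open')
scan(records, aggregate_id: %w[a-1 a-2]).size # => 2 (array acts as IN-list)
```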
#get_record(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 231

def get_record(record_class, where_clause)
  results = find_records(record_class, where_clause)
  results.empty? ? nil : results.first
end
#get_record!(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 225

def get_record!(record_class, where_clause)
  record = get_record(record_class, where_clause)
  raise("record #{record_class} not found for #{where_clause}, store: #{@record_store[record_class]}") unless record
  record
end
#last_record(record_class, where_clause) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 289

def last_record(record_class, where_clause)
  results = find_records(record_class, where_clause)
  results.empty? ? nil : results.last
end
#update_all_records(record_class, where_clause, updates) ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 247

def update_all_records(record_class, where_clause, updates)
  find_records(record_class, where_clause).each do |record|
    updates.each_pair do |k, v|
      record[k.to_sym] = v
    end
    @record_index.update(record_class, record)
  end
end
#update_record(record_class, event, where_clause = {aggregate_id: event.aggregate_id}, options = {}) {|record| ... } ⇒ Object
# File 'lib/sequent/core/persistors/replay_optimized_postgres_persistor.rb', line 162

def update_record(record_class, event, where_clause = {aggregate_id: event.aggregate_id}, options = {}, &block)
  record = get_record!(record_class, where_clause)
  record.updated_at = event.created_at if record.respond_to?(:updated_at)
  yield record if block_given?
  @record_index.update(record_class, record)
  update_sequence_number = options.key?(:update_sequence_number) ?
    options[:update_sequence_number] :
    record.respond_to?(:sequence_number=)
  record.sequence_number = event.sequence_number if update_sequence_number
end