Class: RubyEventStore::ROM::UnitOfWork
- Inherits: Object
- Defined in: lib/ruby_event_store/rom/unit_of_work.rb
Instance Method Summary
- #call {|changesets = []| ... } ⇒ Object
- #initialize(gateway) ⇒ UnitOfWork (constructor)
  A new instance of UnitOfWork.
Constructor Details
#initialize(gateway) ⇒ UnitOfWork
Returns a new instance of UnitOfWork.
# File 'lib/ruby_event_store/rom/unit_of_work.rb', line 6

def initialize(gateway)
  @gateway = gateway
end
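The gateway is simply stored for later use by #call. As a minimal construction sketch (not taken from the gem's documentation, and assuming a rom-sql container backed by Sequel), the gateway can be fetched from a configured ROM container:

# Assumption: a rom-sql container; any object responding to #transaction
# with Sequel's transaction options would work as the gateway.
require "rom"
require "rom-sql"
require "ruby_event_store/rom"

rom = ROM.container(:sql, "sqlite::memory") do |config|
  # relations and migrations would be configured here
end

unit_of_work = RubyEventStore::ROM::UnitOfWork.new(rom.gateways.fetch(:default))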
Instance Method Details
#call {|changesets = []| ... } ⇒ Object
# File 'lib/ruby_event_store/rom/unit_of_work.rb', line 10

def call
  yield(changesets = [])
  @gateway.transaction(
    savepoint: true,
    # See: https://github.com/jeremyevans/sequel/blob/master/doc/transactions.rdoc
    #
    # Committing changesets concurrently causes MySQL deadlocks
    # which are not caught and retried by Sequel's built-in
    # :retry_on option. This appears to be a result of how ROM
    # handles exceptions which don't bubble up so that Sequel
    # can retry transactions with the :retry_on option when there's
    # a deadlock.
    #
    # This is exacerbated by the fact that changesets insert multiple
    # tuples with individual INSERT statements because ROM specifies
    # to Sequel to return a list of primary keys created. The likelihood
    # of a deadlock is reduced with batched INSERT statements.
    #
    # For this reason we need to manually insert changeset records to avoid
    # MySQL deadlocks or to allow Sequel to retry transactions
    # when the :retry_on option is specified.
    retry_on: Sequel::SerializationFailure,
    before_retry: lambda do |_num, ex|
      env.logger.warn("RETRY TRANSACTION [#{self.class.name} => #{ex.class.name}] #{ex.message}")
    end
  ) { changesets.each(&:commit) }
end
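The block is yielded an empty array first; only after it returns are the collected changesets committed, all inside one Sequel transaction. A hedged usage sketch follows (not from the gem's documentation; `rom`, `events` and `event_tuples` are hypothetical placeholders for a configured rom-sql container, one of its relations, and the tuples to insert):

# Assumption: `rom` is a rom-sql container and `events` is one of its
# relations; Relation#changeset(:create, ...) builds an insert changeset.
gateway = rom.gateways.fetch(:default)

RubyEventStore::ROM::UnitOfWork.new(gateway).call do |changesets|
  # Each changeset appended here is committed via #commit inside the
  # transaction opened by #call after this block returns.
  changesets << events.changeset(:create, event_tuples)
end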