Class: Google::Cloud::Firestore::BulkWriter
Inherits: Object
Defined in: lib/google/cloud/firestore/bulk_writer.rb
Overview
BulkWriter
Accumulates and efficiently sends large amounts of document write operations to the server.
BulkWriter can handle large data migrations or updates, buffering records in memory and submitting them to the server in batches of 20.
The submission of batches is internally parallelized with a ThreadPoolExecutor.
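For example, a minimal sketch of typical usage, assuming the BulkWriter is obtained from the client (for instance via Client#bulk_writer) and that each write method returns a future that resolves once the write has been committed:

  require "google/cloud/firestore"

  firestore = Google::Cloud::Firestore.new
  bw = firestore.bulk_writer

  # Enqueue writes; each call returns a Google::Cloud::Firestore::Promise::Future.
  bw.create "cities/NYC", { name: "New York City" }
  bw.set "cities/LA", { name: "Los Angeles" }

  # Wait for all buffered operations to be committed, then shut down the writer.
  bw.flush
  bw.close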
Constant Summary

- MAX_RETRY_ATTEMPTS = 10
Instance Method Summary

- #close ⇒ nil
  Closes the BulkWriter object for new operations.

- #create(doc, data) ⇒ Google::Cloud::Firestore::Promise::Future
  Creates a document with the provided data (fields and values).

- #delete(doc, exists: nil, update_time: nil) ⇒ Google::Cloud::Firestore::Promise::Future
  Deletes a document from the database.

- #flush ⇒ nil
  Flushes all current operations before enqueuing new operations.

- #initialize(client, service, request_threads: nil, batch_threads: nil, retries: nil) ⇒ BulkWriter constructor
  Initializes the attributes and starts the schedule_operations job.

- #set(doc, data, merge: nil) ⇒ Google::Cloud::Firestore::Promise::Future
  Writes the provided data (fields and values) to the provided document.

- #update(doc, data, update_time: nil) ⇒ Google::Cloud::Firestore::Promise::Future
  Updates the document with the provided data (fields and values).
Constructor Details
#initialize(client, service, request_threads: nil, batch_threads: nil, retries: nil) ⇒ BulkWriter
Initializes the attributes and starts the schedule_operations job.
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 56

def initialize client, service, request_threads: nil, batch_threads: nil, retries: nil
  @client = client
  @service = service
  @closed = false
  @flush = false
  @request_threads = (request_threads || 2).to_i
  @write_thread_pool = Concurrent::ThreadPoolExecutor.new max_threads: @request_threads,
                                                          max_queue: 0
  @mutex = Mutex.new
  @scheduler = BulkWriterScheduler.new client, service, batch_threads
  @doc_refs = Set.new
  @retries = [retries || MAX_RETRY_ATTEMPTS, MAX_RETRY_ATTEMPTS].min
  @request_results = []
end
Instance Method Details
#close ⇒ nil
Closes the BulkWriter object for new operations. Existing operations will be flushed and the thread pool will shut down.
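For example (a minimal sketch; bw is assumed to be an existing BulkWriter):

  # Pending operations are flushed and the thread pool is shut down;
  # enqueuing new operations after this point is not allowed.
  bw.close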
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 477

def close
  @mutex.synchronize { @closed = true }
  flush
  @mutex.synchronize do
    @write_thread_pool.shutdown
    @scheduler.close
  end
end
#create(doc, data) ⇒ Google::Cloud::Firestore::Promise::Future
Creates a document with the provided data (fields and values).
The operation will fail if the document already exists.
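For example (a minimal sketch; bw is assumed to be a BulkWriter and the document path is illustrative; calling wait! assumes the returned future exposes a blocking wait, as the internal flush code does with its own result futures):

  future = bw.create "cities/NYC", { name: "New York City" }
  future.wait! # resolves once the create is committed; fails if the document already exists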
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 133

def create doc, data
  doc_path = coalesce_doc_path_argument doc
  pre_add_operation doc_path

  write = Convert.write_for_create doc_path, data

  create_and_enqueue_operation write
end
#delete(doc, exists: nil, update_time: nil) ⇒ Google::Cloud::Firestore::Promise::Future
Deletes a document from the database.
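For example (a minimal sketch; bw is assumed to be a BulkWriter and the document paths are illustrative):

  bw.delete "cities/NYC"

  # A precondition can be supplied, e.g. only delete the document if it exists:
  bw.delete "cities/LA", exists: true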
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 444

def delete doc, exists: nil, update_time: nil
  doc_path = coalesce_doc_path_argument doc
  pre_add_operation doc_path

  write = Convert.write_for_delete doc_path, exists: exists, update_time: update_time

  create_and_enqueue_operation write
end
#flush ⇒ nil
Flushes all current operations before enqueuing new operations.
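For example (a minimal sketch; bw is assumed to be a BulkWriter with operations already enqueued):

  # Blocks until every operation enqueued so far has been sent and resolved.
  bw.flush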
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 457

def flush
  @mutex.synchronize { @flush = true }
  @request_results.each do |result|
    begin
      result.wait!
    rescue StandardError
      # Ignored
    end
  end
  @mutex.synchronize do
    @doc_refs = Set.new
    @flush = false
  end
end
#set(doc, data, merge: nil) ⇒ Google::Cloud::Firestore::Promise::Future
Writes the provided data (fields and values) to the provided document.
If the document does not exist, it will be created. By default, the
provided data overwrites existing data, but the provided data can be
merged into the existing document using the merge
argument.
If you're not sure whether the document exists, use the merge
argument to merge the new data with any existing document data to
avoid overwriting entire documents.
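For example (a minimal sketch; bw is assumed to be a BulkWriter, the document path is illustrative, and merge: true is assumed to merge all provided fields):

  # Overwrite the document, creating it if it does not exist (default behavior).
  bw.set "cities/NYC", { name: "New York City" }

  # Merge the provided fields into the existing document data instead of overwriting it.
  bw.set "cities/NYC", { population: 1_000_000 }, merge: true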
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 247

def set doc, data, merge: nil
  doc_path = coalesce_doc_path_argument doc
  pre_add_operation doc_path

  write = Convert.write_for_set doc_path, data, merge: merge

  create_and_enqueue_operation write
end
#update(doc, data, update_time: nil) ⇒ Google::Cloud::Firestore::Promise::Future
Updates the document with the provided data (fields and values). The provided data is merged into the existing document data.
The operation will fail if the document does not exist.
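For example (a minimal sketch; bw is assumed to be a BulkWriter and the document is assumed to already exist):

  # Merge the provided fields into the existing document; fails if the document does not exist.
  bw.update "cities/NYC", { population: 1_000_000 }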
# File 'lib/google/cloud/firestore/bulk_writer.rb', line 365

def update doc, data, update_time: nil
  doc_path = coalesce_doc_path_argument doc
  pre_add_operation doc_path

  write = Convert.write_for_update doc_path, data, update_time: update_time

  create_and_enqueue_operation write
end