Class: Fluent::Plugin::Buffer
- Includes:
- OwnedByMixin, Fluent::PluginHelper::Mixin, Fluent::PluginId, UniqueId::Mixin, MonitorMixin
- Defined in:
- lib/fluent/plugin/buffer/chunk.rb,
lib/fluent/plugin/buffer.rb,
lib/fluent/plugin/buffer/file_chunk.rb,
lib/fluent/plugin/buffer/memory_chunk.rb,
lib/fluent/plugin/buffer/file_single_chunk.rb
Overview
Base class for buffer plugins. A buffer keeps a stage of chunks being appended to (one per metadata) and a queue of chunks waiting to be written out, within the configured chunk and total size limits.
Direct Known Subclasses
Defined Under Namespace
Classes: BufferChunkOverflowError, BufferError, BufferOverflowError, Chunk, FileChunk, FileSingleChunk, MemoryChunk, Metadata, ShouldRetry
Constant Summary collapse
- MINIMUM_APPEND_ATTEMPT_RECORDS =
10
- DEFAULT_CHUNK_LIMIT_SIZE =
8 * 1024 * 1024 # 8MB
- DEFAULT_TOTAL_LIMIT_SIZE =
512 * 1024 * 1024 # 512MB, same as v0.12 (BufferedOutput + buf_memory: 64 x 8MB)
- DEFAULT_CHUNK_FULL_THRESHOLD =
0.95
- STATS_KEYS =
[ 'stage_length', 'stage_byte_size', 'queue_length', 'queue_byte_size', 'available_buffer_space_ratios', 'total_queued_size', 'oldest_timekey', 'newest_timekey' ]
Constants included from Configurable
Configurable::CONFIG_TYPE_REGISTRY
Instance Attribute Summary collapse
-
#available_buffer_space_ratios_metrics ⇒ Object
readonly
Returns the value of attribute available_buffer_space_ratios_metrics.
-
#dequeued ⇒ Object
readonly
for tests.
-
#newest_timekey_metrics ⇒ Object
readonly
Returns the value of attribute newest_timekey_metrics.
-
#oldest_timekey_metrics ⇒ Object
readonly
Returns the value of attribute oldest_timekey_metrics.
-
#queue ⇒ Object
readonly
for tests.
-
#queue_length_metrics ⇒ Object
readonly
for metrics.
-
#queue_size_metrics ⇒ Object
readonly
for metrics.
-
#queued_num ⇒ Object
readonly
for tests.
-
#stage ⇒ Object
readonly
for tests.
-
#stage_length_metrics ⇒ Object
readonly
for metrics.
-
#stage_size_metrics ⇒ Object
readonly
for metrics.
-
#total_queued_size_metrics ⇒ Object
readonly
Returns the value of attribute total_queued_size_metrics.
Attributes inherited from Base
Instance Method Summary collapse
- #backup(chunk_unique_id) ⇒ Object
- #chunk_size_full?(chunk) ⇒ Boolean
- #chunk_size_over?(chunk) ⇒ Boolean
- #clear_queue! ⇒ Object
- #close ⇒ Object
- #configure(conf) ⇒ Object
- #dequeue_chunk ⇒ Object
- #enable_update_timekeys ⇒ Object
-
#enqueue_all(force_enqueue = false) ⇒ Object
At flush_at_shutdown, all staged chunks should be enqueued for buffer flush.
- #enqueue_chunk(metadata) ⇒ Object
- #enqueue_unstaged_chunk(chunk) ⇒ Object
- #generate_chunk(metadata) ⇒ Object
-
#initialize ⇒ Buffer
constructor
A new instance of Buffer.
-
#metadata(timekey: nil, tag: nil, variables: nil) ⇒ Object
Keep this method for existing code.
- #new_metadata(timekey: nil, tag: nil, variables: nil) ⇒ Object
- #persistent? ⇒ Boolean
- #purge_chunk(chunk_id) ⇒ Object
- #queue_full? ⇒ Boolean
- #queue_size ⇒ Object
- #queue_size=(value) ⇒ Object
- #queued?(metadata = nil, optimistic: false) ⇒ Boolean
- #queued_records ⇒ Object
-
#resume ⇒ Object
TODO in source: a commented-out used?(ratio) helper for a future back pressure feature.
- #stage_size ⇒ Object
- #stage_size=(value) ⇒ Object
- #start ⇒ Object
- #statistics ⇒ Object
- #storable? ⇒ Boolean
- #takeback_chunk(chunk_id) ⇒ Object
- #terminate ⇒ Object
- #timekeys ⇒ Object
- #update_timekeys ⇒ Object
-
#write(metadata_and_data, format: nil, size: nil, enqueue: false) ⇒ Object
metadata_and_data MUST be a hash of { metadata => data }. metadata MUST have a consistent object_id for each variation. data MUST be an Array of serialized events, or an EventStream.
-
#write_once(metadata, data, format: nil, size: nil, &block) ⇒ Object
Write once into a chunk.
-
#write_step_by_step(metadata, data, format, splits_count, &block) ⇒ Object
Split the event stream into many (10 -> 100 -> 1000 -> …) pieces and append them into staged or newly created unstaged chunks.
Methods included from Fluent::PluginHelper::Mixin
Methods included from Fluent::PluginId
#plugin_id, #plugin_id_configured?, #plugin_id_for_test?, #plugin_root_dir, #stop
Methods included from UniqueId::Mixin
#dump_unique_id_hex, #generate_unique_id
Methods included from OwnedByMixin
Methods inherited from Base
#acquire_worker_lock, #after_shutdown, #after_shutdown?, #after_start, #after_started?, #before_shutdown, #before_shutdown?, #called_in_test?, #closed?, #configured?, #context_router, #context_router=, #fluentd_worker_id, #get_lock_path, #has_router?, #inspect, #multi_workers_ready?, #plugin_root_dir, #reloadable_plugin?, #shutdown, #shutdown?, #started?, #stop, #stopped?, #string_safe_encoding, #terminated?
Methods included from SystemConfig::Mixin
#system_config, #system_config_override
Methods included from Configurable
#config, #configure_proxy_generate, #configured_section_create, included, lookup_type, register_type
Constructor Details
#initialize ⇒ Buffer
Returns a new instance of Buffer.
# File 'lib/fluent/plugin/buffer.rb', line 172 def initialize super @chunk_limit_size = nil @total_limit_size = nil @queue_limit_length = nil @chunk_limit_records = nil @stage = {} #=> Hash (metadata -> chunk) : not flushed yet @queue = [] #=> Array (chunks) : already flushed (not written) @dequeued = {} #=> Hash (unique_id -> chunk): already written (not purged) @queued_num = {} # metadata => int (number of queued chunks) @dequeued_num = {} # metadata => int (number of dequeued chunks) @stage_length_metrics = nil @stage_size_metrics = nil @queue_length_metrics = nil @queue_size_metrics = nil @available_buffer_space_ratios_metrics = nil @total_queued_size_metrics = nil @newest_timekey_metrics = nil @oldest_timekey_metrics = nil @timekeys = Hash.new(0) @enable_update_timekeys = false @mutex = Mutex.new end |
Instance Attribute Details
#available_buffer_space_ratios_metrics ⇒ Object (readonly)
Returns the value of attribute available_buffer_space_ratios_metrics.
# File 'lib/fluent/plugin/buffer.rb', line 167 def available_buffer_space_ratios_metrics @available_buffer_space_ratios_metrics end |
#dequeued ⇒ Object (readonly)
for tests
# File 'lib/fluent/plugin/buffer.rb', line 170 def dequeued @dequeued end |
#newest_timekey_metrics ⇒ Object (readonly)
Returns the value of attribute newest_timekey_metrics.
# File 'lib/fluent/plugin/buffer.rb', line 168 def newest_timekey_metrics @newest_timekey_metrics end |
#oldest_timekey_metrics ⇒ Object (readonly)
Returns the value of attribute oldest_timekey_metrics.
# File 'lib/fluent/plugin/buffer.rb', line 168 def oldest_timekey_metrics @oldest_timekey_metrics end |
#queue ⇒ Object (readonly)
for tests
# File 'lib/fluent/plugin/buffer.rb', line 170 def queue @queue end |
#queue_length_metrics ⇒ Object (readonly)
for metrics
# File 'lib/fluent/plugin/buffer.rb', line 166 def queue_length_metrics @queue_length_metrics end |
#queue_size_metrics ⇒ Object (readonly)
for metrics
# File 'lib/fluent/plugin/buffer.rb', line 166 def queue_size_metrics @queue_size_metrics end |
#queued_num ⇒ Object (readonly)
for tests
# File 'lib/fluent/plugin/buffer.rb', line 170 def queued_num @queued_num end |
#stage ⇒ Object (readonly)
for tests
# File 'lib/fluent/plugin/buffer.rb', line 170 def stage @stage end |
#stage_length_metrics ⇒ Object (readonly)
for metrics
# File 'lib/fluent/plugin/buffer.rb', line 166 def stage_length_metrics @stage_length_metrics end |
#stage_size_metrics ⇒ Object (readonly)
for metrics
# File 'lib/fluent/plugin/buffer.rb', line 166 def stage_size_metrics @stage_size_metrics end |
#total_queued_size_metrics ⇒ Object (readonly)
Returns the value of attribute total_queued_size_metrics.
# File 'lib/fluent/plugin/buffer.rb', line 167 def total_queued_size_metrics @total_queued_size_metrics end |
Instance Method Details
#backup(chunk_unique_id) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 918 def backup(chunk_unique_id) unique_id = dump_unique_id_hex(chunk_unique_id) if @disable_chunk_backup log.warn "disable_chunk_backup is true. #{unique_id} chunk is not backed up." return end safe_owner_id = owner.plugin_id.gsub(/[ "\/\\:;|*<>?]/, '_') backup_base_dir = system_config.root_dir || DEFAULT_BACKUP_DIR backup_file = File.join(backup_base_dir, 'backup', "worker#{fluentd_worker_id}", safe_owner_id, "#{unique_id}.log") backup_dir = File.dirname(backup_file) log.warn "bad chunk is moved to #{backup_file}" FileUtils.mkdir_p(backup_dir, mode: system_config.dir_permission || Fluent::DEFAULT_DIR_PERMISSION) unless Dir.exist?(backup_dir) File.open(backup_file, 'ab', system_config.file_permission || Fluent::DEFAULT_FILE_PERMISSION) { |f| yield f } end |
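For illustration, a hedged example of the resulting layout; the root dir, worker id and plugin id below are made up:

# Assuming root_dir '/var/log/fluent', worker 0 and an owner whose plugin_id is 'out_forward',
# a bad chunk whose hex unique id is 5a1b... would be appended to:
#   /var/log/fluent/backup/worker0/out_forward/5a1b....log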
#chunk_size_full?(chunk) ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 641 def chunk_size_full?(chunk) chunk.bytesize >= @chunk_limit_size * @chunk_full_threshold || (@chunk_limit_records && chunk.size >= @chunk_limit_records * @chunk_full_threshold) end |
#chunk_size_over?(chunk) ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 637 def chunk_size_over?(chunk) chunk.bytesize > @chunk_limit_size || (@chunk_limit_records && chunk.size > @chunk_limit_records) end |
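A minimal sketch of how the two checks differ, using the default constants above and a made-up chunk size (the real methods use the configured @chunk_limit_size, @chunk_limit_records and @chunk_full_threshold):

chunk_limit_size = 8 * 1024 * 1024   # DEFAULT_CHUNK_LIMIT_SIZE
chunk_full_threshold = 0.95          # DEFAULT_CHUNK_FULL_THRESHOLD

bytesize = 8_000_000                 # hypothetical chunk of ~7.6 MiB

bytesize > chunk_limit_size                          # => false: not "over", the written data is kept
bytesize >= chunk_limit_size * chunk_full_threshold  # => true:  "full", the chunk is ready to be enqueued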
#clear_queue! ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 619 def clear_queue! log.on_trace { log.trace "clearing queue", instance: self.object_id } synchronize do until @queue.empty? begin q = @queue.shift log.trace("purging a chunk in queue"){ {id: dump_unique_id_hex(q.unique_id), bytesize: q.bytesize, size: q.size} } q.purge rescue => e log.error "unexpected error while clearing buffer queue", error_class: e.class, error: e log.error_backtrace end end @queue_size_metrics.set(0) end end |
#close ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 269 def close super synchronize do log.debug "closing buffer", instance: self.object_id @dequeued.each_pair do |chunk_id, chunk| chunk.close end until @queue.empty? @queue.shift.close end @stage.each_pair do |metadata, chunk| chunk.close end end end |
#configure(conf) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 219 def configure(conf) super unless @queue_limit_length.nil? @total_limit_size = @chunk_limit_size * @queue_limit_length end @stage_length_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "stage_length", help_text: 'Length of stage buffers', prefer_gauge: true) @stage_length_metrics.set(0) @stage_size_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "stage_byte_size", help_text: 'Total size of stage buffers', prefer_gauge: true) @stage_size_metrics.set(0) # Ensure zero. @queue_length_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "queue_length", help_text: 'Length of queue buffers', prefer_gauge: true) @queue_length_metrics.set(0) @queue_size_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "queue_byte_size", help_text: 'Total size of queue buffers', prefer_gauge: true) @queue_size_metrics.set(0) # Ensure zero. @available_buffer_space_ratios_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "available_buffer_space_ratios", help_text: 'Ratio of available space in buffer', prefer_gauge: true) @available_buffer_space_ratios_metrics.set(100) # Default is 100%. @total_queued_size_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "total_queued_size", help_text: 'Total size of stage and queue buffers', prefer_gauge: true) @total_queued_size_metrics.set(0) @newest_timekey_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "newest_timekey", help_text: 'Newest timekey in buffer', prefer_gauge: true) @oldest_timekey_metrics = metrics_create(namespace: "fluentd", subsystem: "buffer", name: "oldest_timekey", help_text: 'Oldest timekey in buffer', prefer_gauge: true) end |
#dequeue_chunk ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 557 def dequeue_chunk return nil if @queue.empty? log.on_trace { log.trace "dequeueing a chunk", instance: self.object_id } synchronize do chunk = @queue.shift # this buffer is dequeued by other thread just before "synchronize" in this thread return nil unless chunk @dequeued[chunk.unique_id] = chunk @queued_num[chunk.metadata] -= 1 # BUG if nil, 0 or subzero @dequeued_num[chunk.metadata] ||= 0 @dequeued_num[chunk.metadata] += 1 log.trace "chunk dequeued", instance: self.object_id, metadata: chunk.metadata chunk end end |
#enable_update_timekeys ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 249 def enable_update_timekeys @enable_update_timekeys = true end |
#enqueue_all(force_enqueue = false) ⇒ Object
At flush_at_shutdown, all staged chunks should be enqueued for buffer flush. Set true to force_enqueue for it.
# File 'lib/fluent/plugin/buffer.rb', line 535 def enqueue_all(force_enqueue = false) log.on_trace { log.trace "enqueueing all chunks in buffer", instance: self.object_id } update_timekeys if @enable_update_timekeys if block_given? synchronize{ @stage.keys }.each do |metadata| return if !force_enqueue && queue_full? # NOTE: The following line might cause data race depending on Ruby implementations except CRuby # cf. https://github.com/fluent/fluentd/pull/1721#discussion_r146170251 chunk = @stage[metadata] next unless chunk v = yield metadata, chunk enqueue_chunk(metadata) if v end else synchronize{ @stage.keys }.each do |metadata| return if !force_enqueue && queue_full? enqueue_chunk(metadata) end end end |
#enqueue_chunk(metadata) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 480 def enqueue_chunk(metadata) log.on_trace { log.trace "enqueueing chunk", instance: self.object_id, metadata: metadata } chunk = synchronize do @stage.delete(metadata) end return nil unless chunk chunk.synchronize do synchronize do if chunk.empty? chunk.close else chunk.metadata.seq = 0 # metadata.seq should be 0 for counting @queued_num @queue << chunk @queued_num[metadata] = @queued_num.fetch(metadata, 0) + 1 chunk.enqueued! end bytesize = chunk.bytesize @stage_size_metrics.sub(bytesize) @queue_size_metrics.add(bytesize) end end nil end |
#enqueue_unstaged_chunk(chunk) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 506 def enqueue_unstaged_chunk(chunk) log.on_trace { log.trace "enqueueing unstaged chunk", instance: self.object_id, metadata: chunk.metadata } synchronize do chunk.synchronize do metadata = chunk.metadata metadata.seq = 0 # metadata.seq should be 0 for counting @queued_num @queue << chunk @queued_num[metadata] = @queued_num.fetch(metadata, 0) + 1 chunk.enqueued! end @queue_size_metrics.add(chunk.bytesize) end end |
#generate_chunk(metadata) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 308 def generate_chunk(metadata) raise NotImplementedError, "Implement this method in child class" end |
#metadata(timekey: nil, tag: nil, variables: nil) ⇒ Object
Keep this method for existing code
# File 'lib/fluent/plugin/buffer.rb', line 317 def metadata(timekey: nil, tag: nil, variables: nil) Metadata.new(timekey, tag, variables) end |
#new_metadata(timekey: nil, tag: nil, variables: nil) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 312 def new_metadata(timekey: nil, tag: nil, variables: nil) Metadata.new(timekey, tag, variables) end |
#persistent? ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 215 def persistent? false end |
#purge_chunk(chunk_id) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 590 def purge_chunk(chunk_id) metadata = nil synchronize do chunk = @dequeued.delete(chunk_id) return nil unless chunk # purged by other threads metadata = chunk.metadata log.on_trace { log.trace "purging a chunk", instance: self.object_id, chunk_id: dump_unique_id_hex(chunk_id), metadata: metadata } begin bytesize = chunk.bytesize chunk.purge @queue_size_metrics.sub(bytesize) rescue => e log.error "failed to purge buffer chunk", chunk_id: dump_unique_id_hex(chunk_id), error_class: e.class, error: e log.error_backtrace end @dequeued_num[chunk.metadata] -= 1 if metadata && !@stage[metadata] && (!@queued_num[metadata] || @queued_num[metadata] < 1) && @dequeued_num[metadata].zero? @queued_num.delete(metadata) @dequeued_num.delete(metadata) end log.on_trace { log.trace "chunk purged", instance: self.object_id, chunk_id: dump_unique_id_hex(chunk_id), metadata: metadata } end nil end |
#queue_full? ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 462 def queue_full? synchronize { @queue.size } >= @queued_chunks_limit_size end |
#queue_size ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 207 def queue_size @queue_size_metrics.get end |
#queue_size=(value) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 211 def queue_size=(value) @queue_size_metrics.set(value) end |
#queued?(metadata = nil, optimistic: false) ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 470 def queued?(metadata = nil, optimistic: false) if optimistic optimistic_queued?(metadata) else synchronize do optimistic_queued?(metadata) end end end |
#queued_records ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 466 def queued_records synchronize { @queue.reduce(0){|r, chunk| r + chunk.size } } end |
#resume ⇒ Object
TODO: for back pressure feature
def used?(ratio)
  @total_limit_size * ratio > @stage_size_metrics.get + @queue_size_metrics.get
end
# File 'lib/fluent/plugin/buffer.rb', line 303 def resume # return {}, [] raise NotImplementedError, "Implement this method in child class" end |
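Both #resume and #generate_chunk must be provided by a concrete buffer plugin. A minimal, hedged sketch of a memory-backed subclass, loosely modeled on the bundled memory buffer rather than copied from it; the plugin name 'tiny_memory' is made up:

require 'fluent/plugin/buffer'
require 'fluent/plugin/buffer/memory_chunk'

module Fluent
  module Plugin
    class TinyMemoryBuffer < Fluent::Plugin::Buffer
      Fluent::Plugin.register_buffer('tiny_memory', self)

      def resume
        # Nothing survives a restart, so start from an empty stage and queue.
        return {}, []
      end

      def generate_chunk(metadata)
        Fluent::Plugin::Buffer::MemoryChunk.new(metadata, compress: @compress)
      end
    end
  end
end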
#stage_size ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 199 def stage_size @stage_size_metrics.get end |
#stage_size=(value) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 203 def stage_size=(value) @stage_size_metrics.set(value) end |
#start ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 253 def start super @stage, @queue = resume @stage.each_pair do |metadata, chunk| @stage_size_metrics.add(chunk.bytesize) end @queue.each do |chunk| @queued_num[chunk.metadata] ||= 0 @queued_num[chunk.metadata] += 1 @queue_size_metrics.add(chunk.bytesize) end update_timekeys log.debug "buffer started", instance: self.object_id, stage_size: @stage_size_metrics.get, queue_size: @queue_size_metrics.get end |
#statistics ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 889 def statistics stage_size, queue_size = @stage_size_metrics.get, @queue_size_metrics.get buffer_space = 1.0 - ((stage_size + queue_size * 1.0) / @total_limit_size) @stage_length_metrics.set(@stage.size) @queue_length_metrics.set(@queue.size) @available_buffer_space_ratios_metrics.set(buffer_space * 100) @total_queued_size_metrics.set(stage_size + queue_size) stats = { 'stage_length' => @stage_length_metrics.get, 'stage_byte_size' => stage_size, 'queue_length' => @queue_length_metrics.get, 'queue_byte_size' => queue_size, 'available_buffer_space_ratios' => @available_buffer_space_ratios_metrics.get.round(1), 'total_queued_size' => @total_queued_size_metrics.get, } tkeys = timekeys if (m = tkeys.min) @oldest_timekey_metrics.set(m) stats['oldest_timekey'] = @oldest_timekey_metrics.get end if (m = tkeys.max) @newest_timekey_metrics.set(m) stats['newest_timekey'] = @newest_timekey_metrics.get end { 'buffer' => stats } end |
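The result is nested under a 'buffer' key; an illustration with made-up numbers (assuming the default 512MB total limit, so about 8.1MB used leaves roughly 98.5% free):

buffer.statistics
# => {
#   'buffer' => {
#     'stage_length' => 3,            # staged chunks
#     'stage_byte_size' => 120_000,
#     'queue_length' => 1,            # queued chunks
#     'queue_byte_size' => 8_000_000,
#     'available_buffer_space_ratios' => 98.5,
#     'total_queued_size' => 8_120_000,
#     'oldest_timekey' => 1_700_000_000,   # only present when chunks have timekeys
#     'newest_timekey' => 1_700_003_600
#   }
# }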
#storable? ⇒ Boolean
# File 'lib/fluent/plugin/buffer.rb', line 294 def storable? @total_limit_size > @stage_size_metrics.get + @queue_size_metrics.get end |
#takeback_chunk(chunk_id) ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 576 def takeback_chunk(chunk_id) log.on_trace { log.trace "taking back a chunk", instance: self.object_id, chunk_id: dump_unique_id_hex(chunk_id) } synchronize do chunk = @dequeued.delete(chunk_id) return false unless chunk # already purged by other thread @queue.unshift(chunk) log.on_trace { log.trace "chunk taken back", instance: self.object_id, chunk_id: dump_unique_id_hex(chunk_id), metadata: chunk.metadata } @queued_num[chunk.metadata] += 1 # BUG if nil @dequeued_num[chunk.metadata] -= 1 end true end |
#terminate ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 285 def terminate super @dequeued = @stage = @queue = @queued_num = nil @stage_length_metrics = @stage_size_metrics = @queue_length_metrics = @queue_size_metrics = nil @available_buffer_space_ratios_metrics = @total_queued_size_metrics = nil @newest_timekey_metrics = @oldest_timekey_metrics = nil @timekeys.clear end |
#timekeys ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 321 def timekeys @timekeys.keys end |
#update_timekeys ⇒ Object
# File 'lib/fluent/plugin/buffer.rb', line 521 def update_timekeys synchronize do chunks = @stage.values chunks.concat(@queue) @timekeys = chunks.each_with_object({}) do |chunk, keys| if chunk.metadata && chunk.metadata.timekey t = chunk.metadata.timekey keys[t] = keys.fetch(t, 0) + 1 end end end end |
#write(metadata_and_data, format: nil, size: nil, enqueue: false) ⇒ Object
metadata_and_data MUST be a hash of { metadata => data }. metadata MUST have a consistent object_id for each variation. data MUST be an Array of serialized events, or an EventStream.
# File 'lib/fluent/plugin/buffer.rb', line 328 def write(metadata_and_data, format: nil, size: nil, enqueue: false) return if metadata_and_data.size < 1 raise BufferOverflowError, "buffer space has too many data" unless storable? log.on_trace { log.trace "writing events into buffer", instance: self.object_id, metadata_size: metadata_and_data.size } operated_chunks = [] unstaged_chunks = {} # metadata => [chunk, chunk, ...] chunks_to_enqueue = [] staged_bytesizes_by_chunk = {} # track internal BufferChunkOverflowError in write_step_by_step buffer_chunk_overflow_errors = [] begin # sort metadata to get lock of chunks in same order with other threads metadata_and_data.keys.sort.each do |metadata| data = metadata_and_data[metadata] write_once(metadata, data, format: format, size: size) do |chunk, adding_bytesize, error| chunk.mon_enter # add lock to prevent to be committed/rollbacked from other threads operated_chunks << chunk if chunk.staged? # # https://github.com/fluent/fluentd/issues/2712 # write_once is supposed to write to a chunk only once # but this block **may** run multiple times from write_step_by_step and previous write may be rollbacked # So we should be counting the stage_size only for the last successful write # staged_bytesizes_by_chunk[chunk] = adding_bytesize elsif chunk.unstaged? unstaged_chunks[metadata] ||= [] unstaged_chunks[metadata] << chunk end if error && !error.empty? buffer_chunk_overflow_errors << error end end end return if operated_chunks.empty? # Now, this thread acquires many locks of chunks... getting buffer-global lock causes dead lock. # Any operations needs buffer-global lock (including enqueueing) should be done after releasing locks. first_chunk = operated_chunks.shift # Following commits for other chunks also can finish successfully if the first commit operation # finishes without any exceptions. # In most cases, #commit just requires very small disk spaces, so major failure reason are # permission errors, disk failures and other permanent(fatal) errors. begin first_chunk.commit if enqueue || first_chunk.unstaged? || chunk_size_full?(first_chunk) chunks_to_enqueue << first_chunk end first_chunk.mon_exit rescue operated_chunks.unshift(first_chunk) raise end errors = [] # Buffer plugin estimates there's no serious error cause: will commit for all chunks either way operated_chunks.each do |chunk| begin chunk.commit if enqueue || chunk.unstaged? || chunk_size_full?(chunk) chunks_to_enqueue << chunk end chunk.mon_exit rescue => e chunk.rollback chunk.mon_exit errors << e end end # All locks about chunks are released. # # Now update the stage, stage_size with proper locking # FIX FOR stage_size miscomputation - https://github.com/fluent/fluentd/issues/2712 # staged_bytesizes_by_chunk.each do |chunk, bytesize| chunk.synchronize do synchronize { @stage_size_metrics.add(bytesize) } log.on_trace { log.trace { "chunk #{chunk.path} size_added: #{bytesize} new_size: #{chunk.bytesize}" } } end end chunks_to_enqueue.each do |c| if c.staged? && (enqueue || chunk_size_full?(c)) m = c.metadata enqueue_chunk(m) if unstaged_chunks[m] && !unstaged_chunks[m].empty? u = unstaged_chunks[m].pop u.synchronize do if u.unstaged? && !chunk_size_full?(u) # `u.metadata.seq` and `m.seq` can be different but Buffer#enqueue_chunk expect them to be the same value u.metadata.seq = 0 synchronize { @stage[m] = u.staged! @stage_size_metrics.add(u.bytesize) } end end end elsif c.unstaged? enqueue_unstaged_chunk(c) else # previously staged chunk is already enqueued, closed or purged. # no problem. end end operated_chunks.clear if errors.empty? if errors.size > 0 log.warn "error occurs in committing chunks: only first one raised", errors: errors.map(&:class) raise errors.first end ensure operated_chunks.each do |chunk| chunk.rollback rescue nil # nothing possible to do for #rollback failure if chunk.unstaged? chunk.purge rescue nil # to prevent leakage of unstaged chunks end chunk.mon_exit rescue nil # this may raise ThreadError for chunks already committed end unless buffer_chunk_overflow_errors.empty? # Notify delayed BufferChunkOverflowError here raise BufferChunkOverflowError, buffer_chunk_overflow_errors.join(", ") end end end |
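A hedged sketch of a call site (normally this happens inside Fluent::Plugin::Output's buffering machinery; buffer, es, event_time and the per-minute timekey math below are illustrative):

meta = buffer.metadata(tag: 'app.access', timekey: event_time.to_i - (event_time.to_i % 60))

# es is assumed to be a Fluent::EventStream; the format callback lets the buffer
# serialize it lazily, and enqueue: true would push the chunk straight to the flush queue.
buffer.write(
  { meta => es },
  format: ->(data) { data.to_msgpack_stream },
  enqueue: false
)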
#write_once(metadata, data, format: nil, size: nil, &block) ⇒ Object
write once into a chunk
1. append whole data into existing chunk
2. commit it & return unless chunk_size_over?
3. enqueue existing chunk & retry whole method if chunk was not empty
4. go to step_by_step writing
# File 'lib/fluent/plugin/buffer.rb', line 653 def write_once(metadata, data, format: nil, size: nil, &block) return if data.empty? stored = false adding_bytesize = nil chunk = synchronize { @stage[metadata] ||= generate_chunk(metadata).staged! } enqueue_chunk_before_retry = false chunk.synchronize do # retry this method if chunk is already queued (between getting chunk and entering critical section) raise ShouldRetry unless chunk.staged? empty_chunk = chunk.empty? original_bytesize = chunk.bytesize begin if format serialized = format.call(data) chunk.concat(serialized, size ? size.call : data.size) else chunk.append(data, compress: @compress) end adding_bytesize = chunk.bytesize - original_bytesize if chunk_size_over?(chunk) if format && empty_chunk if chunk.bytesize > @chunk_limit_size log.warn "chunk bytes limit exceeds for an emitted event stream: #{adding_bytesize}bytes" else log.warn "chunk size limit exceeds for an emitted event stream: #{chunk.size}records" end end chunk.rollback if format && !empty_chunk # Event streams should be appended into a chunk at once # as far as possible, to improve performance of formatting. # Event stream may be a MessagePackEventStream. We don't want to split it into # 2 or more chunks (except for a case that the event stream is larger than chunk limit). enqueue_chunk_before_retry = true raise ShouldRetry end else stored = true end rescue chunk.rollback raise end if stored block.call(chunk, adding_bytesize) end end unless stored # try step-by-step appending if data can't be stored into an existing chunk in non-bulk mode # # 1/10 size of original event stream (splits_count == 10) seems enough small # to try emitting events into existing chunk. # it does not matter to split event stream into very small splits, because chunks have less # overhead to write data many times (even about file buffer chunks). write_step_by_step(metadata, data, format, 10, &block) end rescue ShouldRetry enqueue_chunk(metadata) if enqueue_chunk_before_retry retry end |
#write_step_by_step(metadata, data, format, splits_count, &block) ⇒ Object
1. split event streams into many (10 -> 100 -> 1000 -> …) chunks
2. append splits into the staged chunks as much as possible
3. create unstaged chunk and append rest splits -> repeat it for all splits
# File 'lib/fluent/plugin/buffer.rb', line 729 def write_step_by_step(metadata, data, format, splits_count, &block) splits = [] if splits_count > data.size splits_count = data.size end slice_size = if data.size % splits_count == 0 data.size / splits_count else data.size / (splits_count - 1) end slice_origin = 0 while slice_origin < data.size splits << data.slice(slice_origin, slice_size) slice_origin += slice_size end # This method will append events into the staged chunk at first. # Then, will generate chunks not staged (not queued) to append rest data. staged_chunk_used = false modified_chunks = [] modified_metadata = metadata get_next_chunk = ->(){ if staged_chunk_used # Staging new chunk here is bad idea: # Recovering whole state including newly staged chunks is much harder than current implementation. modified_metadata = modified_metadata.dup_next generate_chunk(modified_metadata) else synchronize { @stage[metadata] ||= generate_chunk(metadata).staged! } end } writing_splits_index = 0 enqueue_chunk_before_retry = false while writing_splits_index < splits.size chunk = get_next_chunk.call errors = [] # The chunk must be locked until being passed to &block. chunk.mon_enter modified_chunks << {chunk: chunk, adding_bytesize: 0, errors: errors} raise ShouldRetry unless chunk.writable? staged_chunk_used = true if chunk.staged? original_bytesize = committed_bytesize = chunk.bytesize begin while writing_splits_index < splits.size split = splits[writing_splits_index] formatted_split = format ? format.call(split) : nil if split.size == 1 # Check BufferChunkOverflowError determined_bytesize = nil if @compress != :text determined_bytesize = nil elsif formatted_split determined_bytesize = formatted_split.bytesize elsif split.first.respond_to?(:bytesize) determined_bytesize = split.first.bytesize end if determined_bytesize && determined_bytesize > @chunk_limit_size # It is an obvious case that BufferChunkOverflowError should be raised here. # But if it raises here, already processed 'split' or # the proceeding 'split' will be lost completely. # So it is a last resort to delay raising such an exception errors << "a #{determined_bytesize} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})" writing_splits_index += 1 next end if determined_bytesize.nil? || chunk.bytesize + determined_bytesize > @chunk_limit_size # The split will (might) cause size over so keep already processed # 'split' content here (allow performance regression a bit). chunk.commit committed_bytesize = chunk.bytesize end end if format chunk.concat(formatted_split, split.size) else chunk.append(split, compress: @compress) end adding_bytes = chunk.bytesize - committed_bytesize if chunk_size_over?(chunk) # split size is larger than difference between size_full? and size_over? chunk.rollback committed_bytesize = chunk.bytesize if split.size == 1 # Check BufferChunkOverflowError again if adding_bytes > @chunk_limit_size errors << "concatenated/appended a #{adding_bytes} bytes record (nth: #{writing_splits_index}) is larger than buffer chunk limit size (#{@chunk_limit_size})" writing_splits_index += 1 next else # As already processed content is kept after rollback, then unstaged chunk should be queued. # After that, re-process current split again. # New chunk should be allocated, to do it, modify @stage and so on. synchronize { @stage.delete(modified_metadata) } staged_chunk_used = false chunk.unstaged! break end end if chunk_size_full?(chunk) || split.size == 1 enqueue_chunk_before_retry = true else splits_count *= 10 end raise ShouldRetry end writing_splits_index += 1 if chunk_size_full?(chunk) break end end rescue chunk.purge if chunk.unstaged? # unstaged chunk will leak unless purge it raise end modified_chunks.last[:adding_bytesize] = chunk.bytesize - original_bytesize end modified_chunks.each do |data| block.call(data[:chunk], data[:adding_bytesize], data[:errors]) end rescue ShouldRetry modified_chunks.each do |data| chunk = data[:chunk] chunk.rollback rescue nil if chunk.unstaged? chunk.purge rescue nil end chunk.mon_exit rescue nil end enqueue_chunk(metadata) if enqueue_chunk_before_retry retry ensure modified_chunks.each do |data| chunk = data[:chunk] chunk.mon_exit end end |