Class: Datadog::Profiling::Exporter
- Inherits: Object
- Defined in: lib/datadog/profiling/exporter.rb
Overview
Exports profiling data gathered by the multiple recorders into a Flush.
@ivoanjo: Note that the recorder that gathers pprof data is special, since we use its start/finish/empty? to decide if there’s data to flush, as well as the timestamp for that data. I could’ve made the whole design more generic, but I’m unsure if we’ll ever have more than a handful of recorders, so I’ve decided to make it specific until we actually need to support more recorders.
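The flush decision described in the note above can be sketched standalone. Everything below (`FakeSerializationResult`, `try_flush`) is made up for illustration and is not the gem's API: the pprof recorder's serialization result supplies both the "is there data?" answer and the time range used for the minimum-duration check.

```ruby
# Hypothetical stand-in for the pprof recorder's serialization result.
FakeSerializationResult = Struct.new(:start, :finish, :encoded_profile)

# Simplified version of the flush decision: no data, or too little data,
# means nothing gets exported.
def try_flush(serialization_result, minimum_duration_seconds)
  return nil if serialization_result.nil? # recorder had nothing to report

  duration = serialization_result.finish - serialization_result.start
  return nil if duration < minimum_duration_seconds # profile too short

  serialization_result.encoded_profile
end

t0 = Time.utc(2024, 1, 1)
puts try_flush(nil, 1).inspect                                                 # nil
puts try_flush(FakeSerializationResult.new(t0, t0 + 0.5, "pprof"), 1).inspect  # nil
puts try_flush(FakeSerializationResult.new(t0, t0 + 5, "pprof"), 1).inspect    # "pprof"
```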
Constant Summary
- PROFILE_DURATION_THRESHOLD_SECONDS = 1
  Profiles with duration less than this will not be reported.
Instance Method Summary
- #can_flush? ⇒ Boolean
- #flush ⇒ Object
- #initialize(pprof_recorder:, worker:, info_collector:, code_provenance_collector:, internal_metadata:, minimum_duration_seconds: PROFILE_DURATION_THRESHOLD_SECONDS, time_provider: Time, sequence_tracker: Datadog::Profiling::SequenceTracker) ⇒ Exporter (constructor)
  A new instance of Exporter.
- #reset_after_fork ⇒ Object
Constructor Details
#initialize(pprof_recorder:, worker:, info_collector:, code_provenance_collector:, internal_metadata:, minimum_duration_seconds: PROFILE_DURATION_THRESHOLD_SECONDS, time_provider: Time, sequence_tracker: Datadog::Profiling::SequenceTracker) ⇒ Exporter
# File 'lib/datadog/profiling/exporter.rb', line 34

def initialize(
  pprof_recorder:,
  worker:,
  info_collector:,
  code_provenance_collector:,
  internal_metadata:,
  minimum_duration_seconds: PROFILE_DURATION_THRESHOLD_SECONDS,
  time_provider: Time,
  sequence_tracker: Datadog::Profiling::SequenceTracker
)
  @pprof_recorder = pprof_recorder
  @worker = worker
  @code_provenance_collector = code_provenance_collector
  @minimum_duration_seconds = minimum_duration_seconds
  @time_provider = time_provider
  @last_flush_finish_at = nil
  @created_at = time_provider.now.utc
  @internal_metadata = internal_metadata
  # NOTE: At the time of this comment collected info does not change over time so we'll hardcode
  # it on startup to prevent serializing the same info on every flush.
  @info_json = JSON.generate(info_collector.info).freeze
  @sequence_tracker = sequence_tracker
end
Instance Method Details
#can_flush? ⇒ Boolean
# File 'lib/datadog/profiling/exporter.rb', line 99

def can_flush?
  !duration_below_threshold?(last_flush_finish_at || created_at, time_provider.now.utc)
end
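This check gates the flush cadence: enough time must have elapsed since the last flush (or since the exporter was created, if it has never flushed). It can be sketched standalone; the constant and method below mirror the logic above but are not the gem's API:

```ruby
# Hypothetical mirror of PROFILE_DURATION_THRESHOLD_SECONDS from the class above.
MINIMUM_DURATION_SECONDS = 1

# A flush is allowed once at least MINIMUM_DURATION_SECONDS have elapsed
# since the last flush finished, or since creation if there was no flush yet.
def can_flush?(last_flush_finish_at, created_at, now)
  reference = last_flush_finish_at || created_at
  (now - reference) >= MINIMUM_DURATION_SECONDS
end

created = Time.utc(2024, 1, 1, 12, 0, 0)
puts can_flush?(nil, created, created + 0.5) # false: only 0.5s elapsed
puts can_flush?(nil, created, created + 2)   # true
```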
#flush ⇒ Object
# File 'lib/datadog/profiling/exporter.rb', line 58

def flush
  worker_stats = @worker.stats_and_reset_not_thread_safe
  serialization_result = pprof_recorder.serialize
  return if serialization_result.nil?

  start, finish, encoded_profile, profile_stats = serialization_result
  @last_flush_finish_at = finish

  if duration_below_threshold?(start, finish)
    Datadog.logger.debug("Skipped exporting profiling events as profile duration is below minimum")
    return
  end

  uncompressed_code_provenance = code_provenance_collector.refresh.generate_json if code_provenance_collector

  process_tags = Datadog.configuration. ? Core::Environment::Process.serialized : ''

  Flush.new(
    start: start,
    finish: finish,
    encoded_profile: encoded_profile,
    code_provenance_file_name: Datadog::Profiling::Ext::Transport::HTTP::CODE_PROVENANCE_FILENAME,
    code_provenance_data: uncompressed_code_provenance,
    tags_as_array: Datadog::Profiling::TagBuilder.call(
      settings: Datadog.configuration,
      profile_seq: sequence_tracker.get_next,
    ).to_a,
    process_tags: process_tags,
    internal_metadata: internal_metadata.merge(
      {
        worker_stats: worker_stats,
        profile_stats: profile_stats,
        recorder_stats: pprof_recorder.stats,
        gc: GC.stat,
      }
    ),
    info_json: info_json,
  )
end
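One detail worth illustrating is how the static internal_metadata handed to the constructor is combined with per-flush statistics via Ruby's `Hash#merge`: the static entries come first, and the per-flush stats are appended on every flush. The keys and values below are made up for illustration:

```ruby
# Hypothetical static metadata passed to the constructor once, at startup.
internal_metadata = { profiler_version: "x.y.z" }

# Hypothetical per-flush statistics, gathered fresh on each flush.
worker_stats  = { trigger_sample_attempts: 10 }
profile_stats = { sample_count: 123 }

# Hash#merge returns a new hash; the static metadata is never mutated.
merged = internal_metadata.merge(
  worker_stats: worker_stats,
  profile_stats: profile_stats,
)

puts merged.keys.inspect # [:profiler_version, :worker_stats, :profile_stats]
```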
#reset_after_fork ⇒ Object
# File 'lib/datadog/profiling/exporter.rb', line 103

def reset_after_fork
  @last_flush_finish_at = time_provider.now.utc
  nil
end
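A standalone sketch of why this reset matters: without it, a forked child would inherit the parent's last flush timestamp and could pass the minimum-duration check immediately, even though the child itself has recorded almost nothing. Resetting to "now" starts a fresh window. All names below are illustrative only, not the gem's API:

```ruby
# Hypothetical mirror of PROFILE_DURATION_THRESHOLD_SECONDS from the class above.
MINIMUM_DURATION_SECONDS = 1

# Simplified version of the can_flush? comparison against a reference timestamp.
def can_flush?(reference, now)
  (now - reference) >= MINIMUM_DURATION_SECONDS
end

parent_last_flush  = Time.utc(2024, 1, 1, 12, 0, 0)
fork_time          = parent_last_flush + 60 # child forks a minute later
shortly_after_fork = fork_time + 0.1

# Stale parent state: the child looks flushable right away.
puts can_flush?(parent_last_flush, shortly_after_fork) # true
# After reset_after_fork stores fork_time, the child must wait a full window.
puts can_flush?(fork_time, shortly_after_fork)         # false
```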