Class: StatsD::Instrument::Client

Inherits:
Object
Includes:
Strict
Defined in:
lib/statsd/instrument/client.rb

Overview

The Client is the main interface for using StatsD. It defines the metric methods that you would normally call from your application.

The client set to StatsD.singleton_client will handle all metric calls made against the StatsD singleton, e.g. StatsD.increment.

We recommend providing the configuration for your StatsD setup through environment variables.

You are encouraged to instantiate multiple clients, and to create variants of an existing client using #clone_with_options. We recommend instantiating a separate client for every logical component of your application using clone_with_options, and setting a different metric prefix for each.
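
For example, a per-component setup could look like the following sketch. The component names and prefixes are illustrative, not part of the library:

# Derive one client per logical component from the singleton client.
payments_statsd = StatsD.singleton_client.clone_with_options(prefix: "payments")
search_statsd = StatsD.singleton_client.clone_with_options(prefix: "search")

payments_statsd.increment("charges.succeeded") # emitted as payments.charges.succeeded
search_statsd.increment("queries.executed")    # emitted as search.queries.executed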

Constant Summary

NO_CHANGE = Object.new

Instance Attribute Summary

Metric Methods

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(prefix: nil, default_sample_rate: nil, default_tags: nil, implementation: "datadog", sink: StatsD::Instrument::NullSink.new, datagram_builder_class: self.class.datagram_builder_class_for_implementation(implementation), enable_aggregation: false, aggregation_flush_interval: 2.0, aggregation_max_context_size: StatsD::Instrument::Aggregator::DEFAULT_MAX_CONTEXT_SIZE) ⇒ Client

Instantiates a new client.



# File 'lib/statsd/instrument/client.rb', line 151

def initialize(
  prefix: nil,
  default_sample_rate: nil,
  default_tags: nil,
  implementation: "datadog",
  sink: StatsD::Instrument::NullSink.new,
  datagram_builder_class: self.class.datagram_builder_class_for_implementation(implementation),
  enable_aggregation: false,
  aggregation_flush_interval: 2.0,
  aggregation_max_context_size: StatsD::Instrument::Aggregator::DEFAULT_MAX_CONTEXT_SIZE
)
  @sink = sink
  @datagram_builder_class = datagram_builder_class

  @prefix = prefix
  @default_tags = default_tags
  @default_sample_rate = default_sample_rate

  @datagram_builder = { false => nil, true => nil }
  @enable_aggregation = enable_aggregation
  @aggregation_flush_interval = aggregation_flush_interval
  if @enable_aggregation
    @aggregator =
      Aggregator.new(
        @sink,
        datagram_builder_class,
        prefix,
        default_tags,
        flush_interval: @aggregation_flush_interval,
        max_values: aggregation_max_context_size,
      )
  end
end
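
As a minimal sketch, a client can be constructed directly; the prefix and tag values below are illustrative:

require "statsd-instrument"

statsd = StatsD::Instrument::Client.new(
  prefix: "my_app",
  default_tags: { environment: "development" },
  implementation: "datadog",
)

# Note: without an explicit sink: argument, the default NullSink swallows all datagrams.
statsd.increment("boot.completed")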

Instance Attribute Details

#datagram_builder_class ⇒ Class (readonly)

The class to use to build StatsD datagrams. To build the actual datagrams, the class will be instantiated, potentially multiple times, by the client.

Returns:

  • (Class)

See Also:

  • .datagram_builder_class_for_implementation



# File 'lib/statsd/instrument/client.rb', line 73

def datagram_builder_class
  @datagram_builder_class
end

#default_tags ⇒ Array<String>, Hash, nil (readonly)

The tags to apply to all the metrics emitted through this client.

The tags can be supplied in normal form: an array of strings. You can also provide a hash, which will be turned into normal form by concatenating the key and the value with a colon. To not use any default tags, set this to nil. Note that other components of your StatsD metric pipeline may also add tags to metrics. E.g. the DataDog agent may add tags like hostname.

We generally recommend not using default tags, or using them sparingly. Adding tags to every metric easily introduces cardinality explosions, which will make metrics less precise due to the lossy nature of aggregation. It also makes your infrastructure more expensive to run, and the user interface of your metric explorer less responsive.

Returns:

  • (Array<String>, Hash, nil)


# File 'lib/statsd/instrument/client.rb', line 135

def default_tags
  @default_tags
end
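
A sketch of the two accepted forms; the tag names and values are illustrative:

# Array form: tags are passed through as-is.
StatsD::Instrument::Client.new(default_tags: ["environment:production", "region:us-east-1"])

# Hash form: each key and value is concatenated with a colon,
# producing the same tags as the array form above.
StatsD::Instrument::Client.new(default_tags: { environment: "production", region: "us-east-1" })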

#prefix ⇒ String? (readonly)

Note:

The prefix can be overridden by any metric call by setting the no_prefix keyword argument to true. We recommend against doing this, but this behavior is retained for backwards compatibility. Instead, when you feel the need to do this, we recommend instantiating a new client without a prefix (using #clone_with_options), and using it to emit the metric.

The prefix to prepend to the metric names that are emitted through this client, using a dot (.) as namespace separator. E.g. when the prefix is set to foo, and you emit a metric named bar, the metric name will be foo.bar.

Generally all the metrics you emit to the same StatsD server will share a single, global namespace. If you are emitting metrics from multiple applications, using a prefix is recommended to prevent metric name collisions.

You can also leave this value set to nil if you don't want to prefix your metric names.

Returns:

  • (String, nil)


# File 'lib/statsd/instrument/client.rb', line 118

def prefix
  @prefix
end

#sink ⇒ #sample?, #<< (readonly)

The sink to send UDP datagrams to.

This can be set to any object that responds to the following methods:

  • #sample? which should return true if the metric should be sampled, i.e. actually sent to the sink.
  • #<< which takes a UDP datagram as a string to emit the datagram. This method will only be called if #sample? returned true.

Generally, you should use an instance of one of the following classes that ship with this library:

  • Sink A sink that will actually emit the provided datagrams over UDP.
  • NullSink A sink that will simply swallow every datagram. This sink is for use when testing your application.
  • LogSink A sink that logs all provided datagrams to a Logger, normally StatsD.logger.

Returns:

  • (#sample?, #<<)


# File 'lib/statsd/instrument/client.rb', line 95

def sink
  @sink
end
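
Because the sink is duck-typed, a custom sink only needs to implement these two methods. A hypothetical in-memory sink (not part of the library) could look like this; the #sample? signature is assumed to receive the sample rate:

# Sketch of a custom sink that collects datagrams in memory.
class InMemorySink
  attr_reader :datagrams

  def initialize
    @datagrams = []
  end

  # Always sample; a real sink could use the sample rate to skip datagrams.
  def sample?(_sample_rate)
    true
  end

  # Receives a single datagram as a string.
  def <<(datagram)
    @datagrams << datagram
    self
  end
end

client = StatsD::Instrument::Client.new(sink: InMemorySink.new)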

Class Method Details

.datagram_builder_class_for_implementation(implementation) ⇒ Class

Finds the right DatagramBuilder class for a given implementation.

Parameters:

  • implementation (Symbol, String)

    The name of the implementation, e.g. "statsd" or :datadog.

Returns:

  • (Class)

    The subclass of DatagramBuilder to use to generate UDP datagrams for the given implementation.

Raises:

  • NotImplementedError if the implementation is not recognized or supported.



# File 'lib/statsd/instrument/client.rb', line 56

def datagram_builder_class_for_implementation(implementation)
  case implementation.to_s
  when "statsd"
    StatsD::Instrument::StatsDDatagramBuilder
  when "datadog", "dogstatsd"
    StatsD::Instrument::DogStatsDDatagramBuilder
  else
    raise NotImplementedError, "Implementation named #{implementation} could not be found"
  end
end

.from_env(env = StatsD::Instrument::Environment.current, prefix: env.statsd_prefix, default_sample_rate: env.statsd_sample_rate, default_tags: env.statsd_default_tags, implementation: env.statsd_implementation, sink: env.default_sink_for_environment, datagram_builder_class: datagram_builder_class_for_implementation(implementation)) ⇒ Object

Instantiates a StatsD::Instrument::Client using configuration values provided in environment variables.

See Also:

  • StatsD::Instrument::Environment



# File 'lib/statsd/instrument/client.rb', line 27

def from_env(
  env = StatsD::Instrument::Environment.current,
  prefix: env.statsd_prefix,
  default_sample_rate: env.statsd_sample_rate,
  default_tags: env.statsd_default_tags,
  implementation: env.statsd_implementation,
  sink: env.default_sink_for_environment,
  datagram_builder_class: datagram_builder_class_for_implementation(implementation)
)
  new(
    prefix: prefix,
    default_sample_rate: default_sample_rate,
    default_tags: default_tags,
    implementation: implementation,
    sink: sink,
    datagram_builder_class: datagram_builder_class,
    enable_aggregation: env.experimental_aggregation_enabled?,
    aggregation_flush_interval: env.aggregation_interval,
  )
end
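
A sketch of environment-driven configuration. The variable names below (STATSD_ADDR, STATSD_IMPLEMENTATION, STATSD_PREFIX, STATSD_DEFAULT_TAGS) are the ones conventionally read by StatsD::Instrument::Environment; verify them against your version of the library:

# Example environment:
#   STATSD_ADDR=127.0.0.1:8125
#   STATSD_IMPLEMENTATION=datadog
#   STATSD_PREFIX=my_app
#   STATSD_DEFAULT_TAGS=environment:production
statsd = StatsD::Instrument::Client.from_env
statsd.increment("requests.total")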

Instance Method Details

#capture { ... } ⇒ Array<StatsD::Instrument::Datagram>

Captures metrics that were emitted during the provided block.

Yields:

  • During the execution of the provided block, metrics will be captured.

Returns:

  • (Array<StatsD::Instrument::Datagram>)

    The list of metrics that were emitted during the block, in the same order in which they were emitted.



# File 'lib/statsd/instrument/client.rb', line 553

def capture(&block)
  sink = capture_sink
  with_capture_sink(sink, &block)
  sink.datagrams
end
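
A usage sketch, e.g. inside a test; statsd is assumed to be a Client instance and the datagram class is assumed to expose #name:

datagrams = statsd.capture do
  statsd.increment("emails.sent")
  statsd.measure("emails.render_time", 12.5)
end

datagrams.size       # => 2
datagrams.first.name # => "emails.sent"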

#capture_sink ⇒ Object



# File 'lib/statsd/instrument/client.rb', line 534

def capture_sink
  StatsD::Instrument::CaptureSink.new(
    parent: @sink,
    datagram_class: datagram_builder_class.datagram_class,
  )
end

#clone_with_options(sink: NO_CHANGE, prefix: NO_CHANGE, default_sample_rate: NO_CHANGE, default_tags: NO_CHANGE, datagram_builder_class: NO_CHANGE) ⇒ Object



# File 'lib/statsd/instrument/client.rb', line 515

def clone_with_options(
  sink: NO_CHANGE,
  prefix: NO_CHANGE,
  default_sample_rate: NO_CHANGE,
  default_tags: NO_CHANGE,
  datagram_builder_class: NO_CHANGE
)
  self.class.new(
    sink: sink == NO_CHANGE ? @sink : sink,
    prefix: prefix == NO_CHANGE ? @prefix : prefix,
    default_sample_rate: default_sample_rate == NO_CHANGE ? @default_sample_rate : default_sample_rate,
    default_tags: default_tags == NO_CHANGE ? @default_tags : default_tags,
    datagram_builder_class:
      datagram_builder_class == NO_CHANGE ? @datagram_builder_class : datagram_builder_class,
    enable_aggregation: @enable_aggregation,
    aggregation_flush_interval: @aggregation_flush_interval,
  )
end

#default_sample_rate ⇒ Float

The default sample rate to use for metrics that are emitted without a sample rate set. This should be a value between 0 (never emit a metric) and 1.0 (always emit). If it is not set, the default value 1.0 is used.

We generally recommend setting sample rates on individual metrics based on their frequency, rather than changing the default sample rate.

Returns:

  • (Float)

    (default: 1.0) A value between 0.0 and 1.0.



# File 'lib/statsd/instrument/client.rb', line 145

def default_sample_rate
  @default_sample_rate || 1.0
end

#distribution(name, value = nil, sample_rate: nil, tags: nil, no_prefix: false, &block) ⇒ void

Note:

The distribution metric type is not available on all implementations. A NotImplementedError will be raised if you call this method but the active implementation does not support it.

This method returns an undefined value.

Emits a distribution metric, which builds a histogram of the reported values.

Parameters:

  • value (Numeric) (defaults to: nil)

    The value to include in the distribution histogram.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 320

def distribution(name, value = nil, sample_rate: nil, tags: nil, no_prefix: false, &block)
  if block_given?
    return latency(name, sample_rate: sample_rate, tags: tags, metric_type: :d, no_prefix: no_prefix, &block)
  end

  # For all timing metrics, we have to use the sampling logic.
  # Not doing so would impact performance and CPU usage.
  # See Datadog's documentation for more details: https://github.com/DataDog/datadog-go/blob/20af2dbfabbbe6bd0347780cd57ed931f903f223/statsd/aggregator.go#L281-L283
  sample_rate ||= @default_sample_rate
  if sample_rate && !sample?(sample_rate)
    return StatsD::Instrument::VOID
  end

  if @enable_aggregation
    @aggregator.aggregate_timing(
      name,
      value,
      tags: tags,
      no_prefix: no_prefix,
      type: :d,
      sample_rate: sample_rate,
    )
    return StatsD::Instrument::VOID
  end

  emit(datagram_builder(no_prefix: no_prefix).d(name, value, sample_rate, tags))
  StatsD::Instrument::VOID
end
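
A usage sketch showing both forms; statsd is assumed to be a Client instance and perform_payment is a hypothetical method:

# Report a single value.
statsd.distribution("payment.amount", 42.0, tags: ["currency:usd"])

# Measure the runtime of a block; the block's return value is passed through.
result = statsd.distribution("payment.duration") do
  perform_payment
end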

#event(title, text, timestamp: nil, hostname: nil, aggregation_key: nil, priority: nil, source_type_name: nil, alert_type: nil, tags: nil, no_prefix: false) ⇒ void

Note:

Supported by the Datadog implementation only.

This method returns an undefined value.

Emits an event. An event represents any record of activity noteworthy for engineers.

Parameters:

  • title (String)

    Event title.

  • text (String)

    Event description. Newlines are allowed.

  • timestamp (Time) (defaults to: nil)

    The timestamp of the event. If not provided, Datadog will interpret it as the current timestamp.

  • hostname (String) (defaults to: nil)

    A hostname to associate with the event.

  • aggregation_key (String) (defaults to: nil)

    An aggregation key to group events with the same key.

  • priority (String) (defaults to: nil)

    Priority of the event. Either "normal" (default) or "low".

  • source_type_name (String) (defaults to: nil)

    The source type of the event.

  • alert_type (String) (defaults to: nil)

    Either "error", "warning", "info" (default) or "success".

  • tags (Array, Hash) (defaults to: nil)

    Tags to associate with the event.



# File 'lib/statsd/instrument/client.rb', line 460

def event(title, text, timestamp: nil, hostname: nil, aggregation_key: nil, priority: nil,
  source_type_name: nil, alert_type: nil, tags: nil, no_prefix: false)

  emit(datagram_builder(no_prefix: no_prefix)._e(
    title,
    text,
    timestamp: timestamp,
    hostname: hostname,
    tags: tags,
    aggregation_key: aggregation_key,
    priority: priority,
    source_type_name: source_type_name,
    alert_type: alert_type,
  ))
end
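
A usage sketch (Datadog implementation only); statsd is assumed to be a Client instance and the values are illustrative:

statsd.event(
  "Deploy finished",
  "Version 1.2.3 was deployed to production",
  alert_type: "success",
  tags: ["team:backend"],
)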

#force_flush ⇒ void

This method returns an undefined value.

Forces the client to flush all metrics that are currently buffered. If aggregation is enabled, the aggregator is flushed first.



# File 'lib/statsd/instrument/client.rb', line 480

def force_flush
  if @enable_aggregation
    @aggregator.flush
  end
  @sink.flush(blocking: false)
  StatsD::Instrument::VOID
end

#gauge(name, value, sample_rate: nil, tags: nil, no_prefix: false) ⇒ void

This method returns an undefined value.

Emits a gauge metric.

You should use a gauge if you are reporting the current value of something that can only have one value at a time, e.g. the speed of your car. A newly reported value will replace the previously reported value.

Parameters:

  • value (Numeric)

    The gauged value.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 280

def gauge(name, value, sample_rate: nil, tags: nil, no_prefix: false)
  if @enable_aggregation
    @aggregator.gauge(name, value, tags: tags, no_prefix: no_prefix)
    return StatsD::Instrument::VOID
  end

  sample_rate ||= @default_sample_rate
  if sample_rate.nil? || sample?(sample_rate)
    emit(datagram_builder(no_prefix: no_prefix).g(name, value, sample_rate, tags))
  end
  StatsD::Instrument::VOID
end
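
A usage sketch; statsd is assumed to be a Client instance and queue is a hypothetical object:

statsd.gauge("queue.depth", queue.size, tags: ["queue:default"])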

#histogram(name, value, sample_rate: nil, tags: nil, no_prefix: false) ⇒ void

Note:

The histogram metric type is not available on all implementations. A NotImplementedError will be raised if you call this method but the active implementation does not support it.

This method returns an undefined value.

Emits a histogram metric, which builds a histogram of the reported values.

Parameters:

  • value (Numeric)

    The value to include in the histogram.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 360

def histogram(name, value, sample_rate: nil, tags: nil, no_prefix: false)
  sample_rate ||= @default_sample_rate
  if sample_rate && !sample?(sample_rate)
    # For all timing metrics, we have to use the sampling logic.
    # Not doing so would impact performance and CPU usage.
    # See Datadog's documentation for more details: https://github.com/DataDog/datadog-go/blob/20af2dbfabbbe6bd0347780cd57ed931f903f223/statsd/aggregator.go#L281-L283
    return StatsD::Instrument::VOID
  end

  if @enable_aggregation
    @aggregator.aggregate_timing(name, value, tags: tags, no_prefix: no_prefix, type: :h)
    return StatsD::Instrument::VOID
  end

  emit(datagram_builder(no_prefix: no_prefix).h(name, value, sample_rate, tags))
  StatsD::Instrument::VOID
end

#increment(name, value = 1, sample_rate: nil, tags: nil, no_prefix: false) ⇒ void

This method returns an undefined value.

Emits a counter metric.

You should use a counter metric to count the frequency of something happening. As a result, the value should generally be set to 1 (the default), unless you are reporting on a batch of activity, e.g. increment('messages.processed', messages.size). For values that are not frequencies, you should use another metric type, e.g. #histogram or #distribution.

Parameters:

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • value (Integer) (defaults to: 1)

    (default: 1) The value to increment the counter by.

    You should not compensate for the sample rate using the counter increment. E.g., if your sample rate is set to 0.01, you should not use 100 as the increment to compensate for it. The sample rate is part of the packet that is being sent to the server, and the server should know how to compensate for it.

  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 220

def increment(name, value = 1, sample_rate: nil, tags: nil, no_prefix: false)
  sample_rate ||= @default_sample_rate

  if @enable_aggregation
    @aggregator.increment(name, value, tags: tags, no_prefix: no_prefix)
    return StatsD::Instrument::VOID
  end

  if sample_rate.nil? || sample?(sample_rate)
    emit(datagram_builder(no_prefix: no_prefix).c(name, value, sample_rate, tags))
  end
  StatsD::Instrument::VOID
end
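
A usage sketch; statsd is assumed to be a Client instance and messages is a hypothetical collection:

# Count a single occurrence.
statsd.increment("messages.processed")

# Count a batch of activity at once.
statsd.increment("messages.processed", messages.size)

# Sample a very frequent event; the server compensates for the 0.1 rate.
statsd.increment("cache.hit", sample_rate: 0.1)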

#latency(name, sample_rate: nil, tags: nil, metric_type: nil, no_prefix: false) { ... } ⇒ Object

Measures the latency of the given block in milliseconds, and emits it as a metric.

Parameters:

  • metric_type (Symbol) (defaults to: nil)

    The metric type to use. If not specified, we will use the preferred metric type of the implementation. The default is :ms. Generally, you should not have to set this.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)

Yields:

  • The latency (execution time) of the block

Returns:

  • The return value of the provided block will be passed through.



# File 'lib/statsd/instrument/client.rb', line 390

def latency(name, sample_rate: nil, tags: nil, metric_type: nil, no_prefix: false)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC, :float_millisecond)
  begin
    yield
  ensure
    stop = Process.clock_gettime(Process::CLOCK_MONOTONIC, :float_millisecond)

    # For all timing metrics, we have to use the sampling logic.
    # Not doing so would impact performance and CPU usage.
    # See Datadog's documentation for more details:
    # https://github.com/DataDog/datadog-go/blob/20af2dbfabbbe6bd0347780cd57ed931f903f223/statsd/aggregator.go#L281-L283
    sample_rate ||= @default_sample_rate
    if sample_rate.nil? || sample?(sample_rate)

      metric_type ||= datagram_builder(no_prefix: no_prefix).latency_metric_type
      latency_in_ms = stop - start

      if @enable_aggregation
        @aggregator.aggregate_timing(
          name,
          latency_in_ms,
          tags: tags,
          no_prefix: no_prefix,
          type: metric_type,
          sample_rate: sample_rate,
        )
      else
        emit(datagram_builder(no_prefix: no_prefix).send(metric_type, name, latency_in_ms, sample_rate, tags))
      end
    end
  end
end
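
A usage sketch; statsd is assumed to be a Client instance and find_user is a hypothetical query. The block's return value is passed through:

user = statsd.latency("db.query_time", tags: ["table:users"]) do
  find_user(id)
end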

#measure(name, value = nil, sample_rate: nil, tags: nil, no_prefix: false, &block) ⇒ void

This method returns an undefined value.

Emits a timing metric.

Parameters:

  • value (Numeric) (defaults to: nil)

    The duration to record, in milliseconds.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 241

def measure(name, value = nil, sample_rate: nil, tags: nil, no_prefix: false, &block)
  sample_rate ||= @default_sample_rate
  if sample_rate && !sample?(sample_rate)
    # For all timing metrics, we have to use the sampling logic.
    # Not doing so would impact performance and CPU usage.
    # See Datadog's documentation for more details: https://github.com/DataDog/datadog-go/blob/20af2dbfabbbe6bd0347780cd57ed931f903f223/statsd/aggregator.go#L281-L283

    if block_given?
      return yield
    end

    return StatsD::Instrument::VOID
  end

  if block_given?
    return latency(name, sample_rate: sample_rate, tags: tags, metric_type: :ms, no_prefix: no_prefix, &block)
  end

  if @enable_aggregation
    @aggregator.aggregate_timing(name, value, tags: tags, no_prefix: no_prefix, type: :ms)
    return StatsD::Instrument::VOID
  end
  emit(datagram_builder(no_prefix: no_prefix).ms(name, value, sample_rate, tags))
  StatsD::Instrument::VOID
end
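
A usage sketch; statsd is assumed to be a Client instance and render_page is a hypothetical method:

# Report a duration that was measured elsewhere, in milliseconds.
statsd.measure("page.render_time", 215.0)

# Or measure the runtime of a block directly.
statsd.measure("page.render_time") do
  render_page
end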

#service_check(name, status, timestamp: nil, hostname: nil, tags: nil, message: nil, no_prefix: false) ⇒ void

Note:

Supported by the Datadog implementation only.

This method returns an undefined value.

Emits a service check. Service checks allow you to characterize the status of a service in order to monitor it within Datadog.

Parameters:

  • name (String)

    Name of the service

  • status (Symbol)

    Either :ok, :warning, :critical or :unknown

  • timestamp (Time) (defaults to: nil)

    The moment when the service was checked. If not provided, Datadog will interpret it as the current timestamp.

  • hostname (String) (defaults to: nil)

    A hostname to associate with the check.

  • tags (Array, Hash) (defaults to: nil)

    Tags to associate with the check.

  • message (String) (defaults to: nil)

    A message describing the current state of the service check.



# File 'lib/statsd/instrument/client.rb', line 435

def service_check(name, status, timestamp: nil, hostname: nil, tags: nil, message: nil, no_prefix: false)
  emit(datagram_builder(no_prefix: no_prefix)._sc(
    name,
    status,
    timestamp: timestamp,
    hostname: hostname,
    tags: tags,
    message: message,
  ))
end
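
A usage sketch (Datadog implementation only); statsd is assumed to be a Client instance and the values are illustrative:

statsd.service_check(
  "redis.connection",
  :ok,
  hostname: "web-1",
  message: "Connected in 2ms",
)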

#set(name, value, sample_rate: nil, tags: nil, no_prefix: false) ⇒ void

This method returns an undefined value.

Emits a set metric, which counts distinct values.

Parameters:

  • value (Numeric, String)

    The value to count for distinct occurrences.

  • name (String)

    The name of the metric.

    • We recommend using snake_case.metric_names as naming scheme.
    • A . should be used for namespacing, e.g. foo.bar.baz
    • A metric name should not include the following characters: |, @, and :. The library will convert these characters to _.
  • sample_rate (Float) (defaults to: nil)

    (default: #default_sample_rate) The rate at which to sample this metric call. This value should be between 0 and 1. It can be used to reduce the amount of network I/O (and CPU cycles) that is used for very frequent metrics.

    • A value of 0.1 means that only 1 out of 10 calls will be emitted; the other 9 will be short-circuited.
    • When set to 1, every metric will be emitted.
    • If this parameter is not set, the default sample rate for this client will be used.
  • tags (Hash<Symbol, String>, Array<String>) (defaults to: nil)

    (default: nil)



# File 'lib/statsd/instrument/client.rb', line 300

def set(name, value, sample_rate: nil, tags: nil, no_prefix: false)
  sample_rate ||= @default_sample_rate
  if sample_rate.nil? || sample?(sample_rate)
    emit(datagram_builder(no_prefix: no_prefix).s(name, value, sample_rate, tags))
  end
  StatsD::Instrument::VOID
end
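
A usage sketch counting distinct visitors; statsd is assumed to be a Client instance and user_id is a hypothetical value:

statsd.set("users.unique", user_id)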

#with_capture_sink(capture_sink) ⇒ Object



# File 'lib/statsd/instrument/client.rb', line 541

def with_capture_sink(capture_sink)
  @sink = capture_sink
  yield
ensure
  @sink = @sink.parent
end

#with_options(sink: NO_CHANGE, prefix: NO_CHANGE, default_sample_rate: NO_CHANGE, default_tags: NO_CHANGE, datagram_builder_class: NO_CHANGE) {|client| ... } ⇒ Object

Instantiates a new StatsD client that uses the settings of the current client, except for the provided overrides.

Yields:

  • (client)

    A new client will be constructed with the altered settings and yielded to the block. The original client will not be affected. The new client will be disposed of after the block returns.

Returns:

  • The return value of the block will be passed on as the return value.



# File 'lib/statsd/instrument/client.rb', line 497

def with_options(
  sink: NO_CHANGE,
  prefix: NO_CHANGE,
  default_sample_rate: NO_CHANGE,
  default_tags: NO_CHANGE,
  datagram_builder_class: NO_CHANGE
)
  client = clone_with_options(
    sink: sink,
    prefix: prefix,
    default_sample_rate: default_sample_rate,
    default_tags: default_tags,
    datagram_builder_class: datagram_builder_class,
  )

  yield(client)
end
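
A usage sketch; statsd is assumed to be a Client instance and the prefix is illustrative:

statsd.with_options(prefix: "checkout") do |client|
  client.increment("orders.placed") # emitted as checkout.orders.placed
end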