Class: LogStash::Outputs::CloudWatch
- Includes:
- PluginMixins::AwsConfig
- Defined in:
- lib/logstash/outputs/cloudwatch.rb
Overview
This output lets you aggregate and send metric data to AWS CloudWatch
#### Summary: This plugin is intended to be used on a logstash indexer agent (but that is not the only way; see below). In the intended scenario, one cloudwatch output plugin is configured, on the logstash indexer node, with just AWS API credentials, and possibly a region and/or a namespace. The output looks for fields present in events, and when it finds them, it uses them to calculate aggregate statistics. If the `metricname` option is set in this output, then any events which pass through it will be aggregated and sent to CloudWatch, but that is not recommended. The intended use is to NOT set the metricname option here, and instead to add a `CW_metricname` field (and other fields) to only the events you want sent to CloudWatch.
When events pass through this output they are queued for background aggregation and sending, which happens every minute by default. The queue has a maximum size, and when it is full aggregated statistics will be sent to CloudWatch ahead of schedule. Whenever this happens a warning message is written to logstash's log. If you see this you should increase the `queue_size` configuration option to avoid the extra API calls. The queue is emptied every time we send data to CloudWatch.
Note: when logstash is stopped the queue is destroyed before it can be processed. This is a known limitation of logstash and will hopefully be addressed in a future version.
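The queue-full behavior described above can be sketched as follows. `MetricQueue`, its method names, and the flush logic are hypothetical simplifications for illustration, not the plugin's actual implementation (which aggregates statistics before sending):

```ruby
# Minimal sketch of the early-flush behavior (hypothetical class and
# method names; the real plugin aggregates before posting to CloudWatch).
class MetricQueue
  attr_reader :early_flushes

  def initialize(queue_size)
    @queue = SizedQueue.new(queue_size) # same structure the plugin uses
    @early_flushes = 0
  end

  # Queue an event; if the queue is already full, flush ahead of schedule
  # instead of blocking, mirroring the warning path described above.
  def receive(event)
    if @queue.length >= @queue.max
      flush
      @early_flushes += 1
    end
    @queue << event
  end

  # Stand-in for "aggregate and send to CloudWatch"; the queue is
  # emptied every time data is sent.
  def flush
    @queue.clear
  end

  def size
    @queue.length
  end
end
```

With a queue size of 2, a third event triggers one early flush and leaves a single event queued.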
#### Details: There are two ways to configure this plugin, and they can be used in combination: event fields & per-output defaults
Event Field configuration… You add fields to your events in inputs & filters and this output reads those fields to aggregate events. The names of the fields read are configurable via the `field_*` options.
Per-output defaults… You set universal defaults in this output plugin’s configuration, and if an event does not have a field for that option then the default is used.
Note that event fields take precedence over the per-output defaults.
At a minimum, events must have a "metric name" to be sent to CloudWatch. This can be achieved either by providing a default here OR by adding a `CW_metricname` field. By default, if no other configuration is provided besides a metric name, events will be counted (Unit: Count, Value: 1) by their metric name (either the default or the value of their `CW_metricname` field).
Other fields which can be added to events to modify the behavior of this plugin are `CW_namespace`, `CW_unit`, `CW_value`, and `CW_dimensions`. All of these field names are configurable in this output. You can also set per-output defaults for any of them. See below for details.
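As an illustration, events can be tagged for CloudWatch in a filter while the output itself carries only credentials and defaults. The `CW_*` field names below are the plugin's defaults; the conditional, the credential placeholders, and the namespace value are illustrative assumptions:

```
filter {
  if [type] == "nginx-error" {        # illustrative condition
    mutate {
      add_field => {
        "CW_metricname" => "Errors"
        "CW_unit"       => "Count"
        "CW_value"      => "1"
      }
    }
  }
}

output {
  cloudwatch {
    access_key_id     => "..."        # illustrative placeholders
    secret_access_key => "..."
    region            => "us-east-1"
    namespace         => "Logstash"
  }
}
```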
Read more about [AWS CloudWatch](aws.amazon.com/cloudwatch/), and the specific API endpoint this output uses, [PutMetricData](docs.amazonwebservices.com/AmazonCloudWatch/latest/APIReference/API_PutMetricData.html).
Constant Summary

Constants (aggregate_key members):
- DIMENSIONS = "dimensions"
- TIMESTAMP = "timestamp"
- METRIC = "metric"
- COUNT = "count"
- UNIT = "unit"
- SUM = "sum"
- MIN = "min"
- MAX = "max"

Constants (Units):
- COUNT_UNIT = "Count"
- NONE = "None"
- VALID_UNITS = ["Seconds", "Microseconds", "Milliseconds", "Bytes", "Kilobytes", "Megabytes", "Gigabytes", "Terabytes", "Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", "Percent", COUNT_UNIT, "Bytes/Second", "Kilobytes/Second", "Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", "Bits/Second", "Kilobits/Second", "Megabits/Second", "Gigabits/Second", "Terabits/Second", "Count/Second", NONE]
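A unit supplied via `CW_unit` would need to be one of `VALID_UNITS` for CloudWatch to accept it. A simple membership check with a fallback to `NONE` can be sketched as follows; `sanitize_unit` is a hypothetical helper and the fallback behavior is an assumption, not necessarily what the plugin does:

```ruby
COUNT_UNIT = "Count"
NONE = "None"
# Abbreviated subset of VALID_UNITS; see the full list above.
VALID_UNITS = ["Seconds", "Bytes", "Percent", COUNT_UNIT, "Bytes/Second", NONE]

# Hypothetical helper: pass valid units through, fall back to "None".
def sanitize_unit(unit)
  VALID_UNITS.include?(unit) ? unit : NONE
end
```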
Constants included from PluginMixins::AwsConfig
PluginMixins::AwsConfig::US_EAST_1
Constants included from Config::Mixin
Instance Attribute Summary
Attributes included from Config::Mixin
Attributes inherited from Plugin
Instance Method Summary
Methods included from PluginMixins::AwsConfig
#aws_options_hash, included, #setup_aws_config
Methods inherited from Base
#handle, #handle_worker, #initialize, #worker_setup, #workers_not_supported
Methods included from Config::Mixin
Methods inherited from Plugin
#eql?, #finished, #finished?, #hash, #initialize, #inspect, lookup, #reload, #running?, #shutdown, #teardown, #terminating?, #to_s
Constructor Details
This class inherits a constructor from LogStash::Outputs::Base
Instance Method Details
#aws_service_endpoint(region) ⇒ Object
```ruby
# File 'lib/logstash/outputs/cloudwatch.rb', line 156

def aws_service_endpoint(region)
  return {
    :cloud_watch_endpoint => "monitoring.#{region}.amazonaws.com"
  }
end
```
#receive(event) ⇒ Object
```ruby
# File 'lib/logstash/outputs/cloudwatch.rb', line 179

def receive(event)
  return unless output?(event)

  if event == LogStash::SHUTDOWN
    job.trigger()
    job.unschedule()
    @logger.info("CloudWatch aggregator thread shutdown.")
    finished
    return
  end

  return unless (event[@field_metricname] || @metricname)

  if (@event_queue.length >= @event_queue.max)
    @job.trigger
    @logger.warn("Posted to AWS CloudWatch ahead of schedule. If you see this often, consider increasing the cloudwatch queue_size option.")
  end

  @logger.info("Queueing event", :event => event)
  @event_queue << event
end
```
#register ⇒ Object
```ruby
# File 'lib/logstash/outputs/cloudwatch.rb', line 163

def register
  require "thread"
  require "rufus/scheduler"
  require "aws"

  @cw = AWS::CloudWatch.new()

  @event_queue = SizedQueue.new(@queue_size)
  @scheduler = Rufus::Scheduler.start_new
  @job = @scheduler.every @timeframe do
    @logger.info("Scheduler Activated")
    publish(aggregate({}))
  end
end
```