Class: LogStash::Outputs::ElasticSearch

Inherits:
Base
  • Object
Includes:
Common, CommonConfigs
Defined in:
lib/logstash/outputs/elasticsearch.rb,
lib/logstash/outputs/elasticsearch/common.rb,
lib/logstash/outputs/elasticsearch/http_client.rb,
lib/logstash/outputs/elasticsearch/common_configs.rb,
lib/logstash/outputs/elasticsearch/http_client/pool.rb,
lib/logstash/outputs/elasticsearch/template_manager.rb,
lib/logstash/outputs/elasticsearch/http_client_builder.rb,
lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb

Overview

.Compatibility Note

NOTE

Starting with Elasticsearch 5.3, there's an HTTP setting called `http.content_type.required`. If this option is set to `true`, and you are using Logstash 2.4 through 5.2, you need to update the Elasticsearch output plugin to version 6.2.5 or higher.

This plugin is the recommended method of storing logs in Elasticsearch. If you plan on using the Kibana web interface, you’ll want to use this output.

This output only speaks the HTTP protocol. HTTP is the preferred protocol for interacting with Elasticsearch as of Logstash 2.0. We strongly encourage the use of HTTP over the node protocol for a number of reasons. HTTP is only marginally slower, yet far easier to administer and work with. When using the HTTP protocol one may upgrade Elasticsearch versions without having to upgrade Logstash in lock-step.

You can learn more about Elasticsearch at www.elastic.co/products/elasticsearch
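As a minimal, hedged usage sketch (the host and index values below are illustrative placeholders, not values taken from this document), a pipeline output section using this plugin over HTTP might look like:

  output {
    elasticsearch {
      hosts => ["http://localhost:9200"]    # placeholder host; point this at your cluster
      index => "logstash-%{+YYYY.MM.dd}"    # illustrative daily index pattern
    }
  }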

Template management for Elasticsearch 5.x

The index template for this version (Logstash 5.0) has been changed to reflect Elasticsearch's mapping changes in version 5.0. Most importantly, the subfield for string multi-fields has changed from `.raw` to `.keyword` to match ES default behavior.

Users installing ES 5.x and LS 5.x: This change will not affect you, and you will continue to use the ES defaults.

Users upgrading from LS 2.x to LS 5.x with ES 5.x: LS will not force-upgrade the template if a `logstash` template already exists. This means you will still use `.raw` for sub-fields coming from 2.x. If you choose to use the new template, you will have to reindex your data after the new template is installed (a hedged configuration sketch follows below).
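If you do opt into the new template, a hedged sketch of forcing the template install is shown below; `manage_template` and `template_overwrite` are assumed here to be the plugin options that install and replace the existing `logstash` template, so confirm them against your installed plugin version:

  output {
    elasticsearch {
      hosts => ["http://localhost:9200"]    # placeholder host
      manage_template => true               # have the plugin manage the index template
      template_overwrite => true            # assumed option: overwrite an existing `logstash` template
    }
  }

After the new template is installed, existing data still uses the old mapping until you reindex it.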

Retry Policy

The retry policy has changed significantly in the 2.2.0 release. This plugin uses the Elasticsearch bulk API to optimize its imports into Elasticsearch. These requests may experience either partial or total failures.

The following errors are retried infinitely:

  • Network errors (inability to connect)

  • 429 (Too Many Requests) errors

  • 503 (Service Unavailable) errors

NOTE: 409 exceptions are no longer retried. Please set a higher `retry_on_conflict` value if you experience 409 exceptions. It is more performant for Elasticsearch to retry these exceptions internally than for this plugin to resubmit them.
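As a hedged configuration sketch (the value shown is illustrative, not a recommendation from this document), raising `retry_on_conflict` looks like:

  output {
    elasticsearch {
      hosts => ["http://localhost:9200"]    # placeholder host
      retry_on_conflict => 5                # illustrative value; lets Elasticsearch retry 409 conflicts internally
    }
  }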

Batch Sizes

This plugin attempts to send batches of events as a single request. However, if a request exceeds 20MB we will break it up into multiple batch requests. If a single document exceeds 20MB it will be sent as a single request.

DNS Caching

This plugin uses the JVM to look up DNS entries and is subject to the value of networkaddress.cache.ttl (see docs.oracle.com/javase/7/docs/technotes/guides/net/properties.html), a global setting for the JVM.

As an example, to set your DNS TTL to 1 second you would set the `LS_JAVA_OPTS` environment variable to `-Dnetworkaddress.cache.ttl=1`.

Keep in mind that a connection with keepalive enabled will not reevaluate its DNS value while the keepalive is in effect.

HTTP Compression

This plugin supports request and response compression. Response compression is enabled by default; for Elasticsearch versions 5.0 and later, the user doesn't have to set any configs in Elasticsearch for it to send back a compressed response. For versions before 5.0, `http.compression` must be set to `true` in Elasticsearch to take advantage of response compression when using this plugin.

For request compression, regardless of the Elasticsearch version, users have to enable the `http_compression` setting in their Logstash config file.
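A hedged example of enabling request compression from the Logstash side (the host is a placeholder):

  output {
    elasticsearch {
      hosts => ["http://localhost:9200"]    # placeholder host
      http_compression => true              # compress request bodies sent to Elasticsearch
    }
  }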

Defined Under Namespace

Modules: Common, CommonConfigs, HttpClientBuilder
Classes: HttpClient, TemplateManager

Constant Summary

TARGET_BULK_BYTES =

This is a constant instead of a config option because there really isn’t a good reason to configure it.

The criteria used are:

  1. We need a number that’s less than 100MiB because ES won’t accept bulks larger than that.

  2. It must be large enough to amortize the connection constant across multiple requests.

  3. It must be small enough that even if multiple threads hit this size we won’t use a lot of heap.

We wound up agreeing that a number greater than 10 MiB and less than 100 MiB made sense. We picked one on the lowish side to not use too much heap.

20 * 1024 * 1024
@@plugins =
Gem::Specification.find_all{|spec| spec.name =~ /logstash-output-elasticsearch-/ }

Constants included from Common

Common::DEFAULT_EVENT_TYPE_ES6, Common::DEFAULT_EVENT_TYPE_ES7, Common::DOC_CONFLICT_CODE, Common::DOC_DLQ_CODES, Common::DOC_SUCCESS_CODES, Common::VALID_HTTP_ACTIONS, Common::VERSION_TYPES_PERMITTING_CONFLICT

Instance Attribute Summary

Attributes included from Common

#client, #hosts

Instance Method Summary

Methods included from Common

#check_action_validity, #dlq_enabled?, #event_action_tuple, #get_event_type, #install_template, #maximum_seen_major_version, #multi_receive, #next_sleep_interval, #register, #retrying_submit, #safe_bulk, #setup_hosts, #sleep_for_interval, #submit, #valid_actions

Methods included from CommonConfigs

included

Instance Method Details

#build_client ⇒ Object



# File 'lib/logstash/outputs/elasticsearch.rb', line 229

def build_client
  params["metric"] = metric
  @client ||= ::LogStash::Outputs::ElasticSearch::HttpClientBuilder.build(@logger, @hosts, params)
end

#close ⇒ Object



# File 'lib/logstash/outputs/elasticsearch.rb', line 234

def close
  @stopping.make_true
  @client.close if @client
end