
Exploring Output Plugins
The output plugin is used to send data to a destination, and the output section is the final section required in a Logstash configuration file. Some of the most commonly used output plugins are as follows.
stdout
This is a fairly simple plugin that outputs the data to the standard output of the shell. It is useful for debugging plugin configurations, and is mostly used to validate whether Logstash is parsing the input and applying filters (if any) properly to produce the required output.
The basic configuration for stdout is as follows:
stdout { }
In this plugin, no settings are mandatory. The additional configuration settings are as follows:
- codec: This is used to encode the data before sending it as an output. You can use codec as json to display the output data in JSON format, or use codec as rubydebug to display the output data using the Ruby Awesome Print library.
- workers: This is used to define the number of threads that will process the packets for output.
The value types and default values for the settings are as follows:
Configuration example:
output {
  stdout {
    codec => rubydebug
    workers => 2
  }
}
In the preceding configuration, we have set the codec to rubydebug, which will print the output to the current shell as standard output.
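As an illustration, a minimal complete pipeline using this output might look like the following sketch; the stdin input section and the filename in the comment are assumptions added for this example, not part of the configuration above:

```
# test-pipeline.conf -- hypothetical filename for this sketch
input {
  stdin { }
}

output {
  stdout {
    codec => rubydebug
  }
}
```

Running Logstash with this file (for example, bin/logstash -f test-pipeline.conf) and typing a line into the shell prints the parsed event, including fields such as message and @timestamp, to standard output, which makes it easy to confirm that input and filters are behaving as expected.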
file
The file plugin is used to write the output to a file, so the output is stored in a file that can also be used later on. By default, it writes one event per line in JSON format, which can be modified using the codec setting.
The basic configuration for file is as follows:
file { path => ... }
In this plugin, only the path setting is mandatory:

- path: This specifies the location of the directory or the filename where files will be written. You can provide the name of the file directly, or use field name values to create a file.
The additional configuration settings are as follows:
- codec: This is used to encode the data before sending it as an output.
- create_if_deleted: This is used to create a file even if the file has been deleted. Whenever an input is parsed, the output is stored to a file, and if the file is deleted, it will be re-created.
- dir_mode: This is used to define the mode of directory access to be used.
- file_mode: This is used to define the mode of file access to be used.
- filename_failure: If the path provided is incorrect, then all the output will be written to the file specified in the configuration for this field.
- flush_interval: This is used to determine the time (in seconds) for writing to the file at a specified interval.
- gzip: This is used to gzip the output before writing it and storing it as a file.
- workers: This is used to define the number of threads that will process the packets for output.
The value types and default values for the settings are as follows:
Configuration example:
output {
  file {
    create_if_deleted => true
    file_mode => 777
    filename_failure => "failedpath_file"
    flush_interval => 0
    path => "/usr/share/logstash/file.txt"
  }
}
In the preceding configuration, we specify the mode of file access as well as the option to re-create the file if it is deleted. The flush_interval is set to 0, that is, output is written to the file for every event as and when it is received. We have also specified the path at which the file that stores the output will be created.
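Because path accepts field name values, the output file can also be derived per event. The following sketch (the field name type and the date pattern are assumptions for illustration) writes each event type to its own daily file:

```
output {
  file {
    # %{type} and %{+YYYY-MM-dd} are resolved from each event,
    # so events are split into per-type, per-day files
    path => "/var/log/logstash/%{type}-%{+YYYY-MM-dd}.log"
  }
}
```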
elasticsearch
The elasticsearch output plugin is used to send the output from Logstash to Elasticsearch, where the data will be stored. It is one of the most commonly used output plugins. If you want to visualize data in Kibana, you will need to send your data to Elasticsearch, as Kibana creates its visualizations from the data stored in Elasticsearch.
The basic configuration for elasticsearch is as follows:
elasticsearch { }
In this plugin, no settings are mandatory.
This output plugin has various additional configuration settings; however, we will look at a few of them in more detail here:
- action: This is used to perform various operations on the documents stored in Elasticsearch. The various operations are: index (used to index a document), delete (used to delete a document based on the ID), create (used to create and index the document, where the ID is unique), and update (used to update the document based on the ID).
- cacert: This is used to provide the absolute path of the .cer or .pem file, which is used as a certificate to validate the server certificate for secured access.
- codec: This is used to encode the data before sending it as an output.
- doc_as_upsert: This is used to enable upsert mode for each document: if a document contains an ID that has already been processed, then the value for that ID is updated; if the document ID doesn't exist, then the document is created with that ID.
- document_id: This is used to specify the ID for the document, which acts as a unique identifier. Generally, it's advised to keep it auto-incremental.
- document_type: This is used to provide the type in which the document will be stored. As discussed in the previous chapter, an index can contain multiple types, so to send the output to a specific or different type, this property needs to be specified.
- hosts: This is used to specify the host IP addresses to communicate with the Elasticsearch nodes, that is, which hosts should be connected to while sending data. You can specify a single host or multiple hosts at once. By default, the port is 9200.

Note: Do not include the dedicated master node of Elasticsearch in the hosts property, otherwise it will send input data to the master nodes.

- index: This is used to specify the name of the index in which data will be written. The index name can be either static or dynamic, when derived from a field value or using regex.
- path: This is used when you are using a proxy to connect to the Elasticsearch node; you can specify the path at which Elasticsearch is reachable.
- proxy: This is used to specify the proxy address while connecting to the Elasticsearch node.
- ssl: This is used to enable Secure Sockets Layer (SSL) or Transport Layer Security (TLS) transmission, which creates a secured communication channel with the Elasticsearch node or cluster.
The value types and default values for the settings are as follows:
Configuration example:
output {
  elasticsearch {
    cacert => "/usr/share/logstash/cert.pem"
    doc_as_upsert => true
    document_type => "elasticsearch"
    hosts => ["localhost:9200","127.0.0.3:9201"]
    index => "logstash"
    ssl => true
  }
}
In the preceding configuration, we have specified the certificate path and enabled the upsert functionality for documents. We have mentioned the type in which the output data should be stored, and specified the nodes of the Elasticsearch cluster to connect to, with the output data written to the logstash index, as mentioned.
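Because the index name can be dynamic, a common variation (shown here as an assumed sketch, not part of the preceding example) derives the index name from the event timestamp so that a new index is created each day:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{+YYYY.MM.dd} is resolved per event from its timestamp,
    # producing daily indices such as logstash-2016.01.15
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```

Daily indices keep each index small, which makes it easier to delete or archive old data in Elasticsearch without touching recent events.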