In addition to the instructions given below, you should also check the official installation instructions from Elastic.
Filebeat is an agent that ships log files. Other agents are available as well, such as Topbeat, which focuses on CPU, memory, and hard disk monitoring.
Filebeat Index Templates for Elasticsearch
As Filebeat is used to ship logs to Elasticsearch, Elasticsearch must be configured to index the incoming fields intelligently. This can be done by adding an index template for Filebeat.
$ cd ~
# Download a default template
$ curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
# Install the template
$ curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json
{
  "acknowledged" : true
}
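To verify that the template was actually stored, you can read it back from the template API. This is only a quick sanity check, assuming Elasticsearch listens on localhost:9200 as above:
# Read the installed template back (prints the template body as JSON)
$ curl -XGET 'http://localhost:9200/_template/filebeat?pretty'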
Filebeat Agent Server SSL Configuration
Filebeat runs as an agent, and the Logstash input on the ELK server has been configured to accept SSL connections. The Filebeat agents therefore need to support this encrypted connection, which means you have to copy the certificate from the ELK server to each agent server:
# Execute this from the ELK server (where Elasticsearch runs);
# keep in mind to create the destination folder on the agent first:
$ scp /etc/pki/tls/certs/logstash-forwarder.crt user@agent_server_private_address:/etc/pki/tls/certs
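To confirm the copy succeeded, you can compare checksums on both machines and inspect the certificate; a minimal check, assuming the paths used above:
# Run on both the ELK server and the agent server -- the hashes must match:
$ md5sum /etc/pki/tls/certs/logstash-forwarder.crt
# Optionally inspect the certificate subject and validity period:
$ openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates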
Filebeat Agent Setup
Afterwards, Filebeat can be installed on the agent server with the following commands:
# Load the Elasticsearch public GPG key into your apt repository:
$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
# Add the Beats repository to your apt sources list:
$ echo "deb https://packages.elastic.co/beats/apt stable main" | sudo tee -a /etc/apt/sources.list.d/beats.list
$ sudo apt-get update
$ sudo apt-get install filebeat
# Add the log file paths and the input type (document_type) used by the Logstash filters:
$ sudo vim /etc/filebeat/filebeat.yml
...
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      document_type: syslog
      # - /var/log/*.log
...
  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["ELK_server_private_IP:5044"]
    bulk_max_size: 1024
...
    tls:
      # List of root certificates for HTTPS server verifications
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
...
$ sudo systemctl restart filebeat
$ sudo systemctl enable filebeat
# Test connectivity (run on the ELK server):
$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
...
{
  "_index" : "filebeat-2016.01.29",
  "_type" : "log",
  "_id" : "AVKO98yuaHvsHQLa53HE",
  "_score" : 1.0,
  "_source":{"message":"Feb 3 14:34:00 rails sshd[963]: Server listening on :: port 22.","@version":"1","@timestamp":"2016-01-29T19:59:09.145Z","beat":{"hostname":"topbeat-u-03","name":"topbeat-u-03"},"count":1,"fields":null,"input_type":"log","offset":70,"source":"/var/log/auth.log","type":"log","host":"topbeat-u-03"}
}
...
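If Filebeat fails to start or no documents show up, a useful first step is to let Filebeat validate its configuration file. This is a sketch assuming the Filebeat 1.x CLI, where the -configtest flag exists (newer releases replaced it with the test config subcommand):
# Validates filebeat.yml and exits non-zero on errors
$ sudo filebeat -configtest -c /etc/filebeat/filebeat.yml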
Afterwards the data is shipped by Filebeat via Logstash to Elasticsearch and is available for queries within Kibana. Filebeat has two components that move the data, which are described on elastic.co: prospectors and harvesters. A harvester reads a single log file line by line and picks up changes in file size, while the prospector finds the files to read (including newly created ones) and manages the harvesters. Prospectors are defined within /etc/filebeat/filebeat.yml, like all configuration for Filebeat.
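To illustrate the relationship, here is a minimal sketch of the prospectors section in /etc/filebeat/filebeat.yml; the second prospector with the nginx paths is purely hypothetical and not part of the setup above:
filebeat:
  prospectors:
    # Each prospector lists paths to watch; Filebeat starts one harvester per matched file
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      document_type: syslog
    # Hypothetical second prospector for nginx access logs
    -
      paths:
        - /var/log/nginx/*.log
      document_type: nginx-access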