Configuring Logstash with Filebeat

In the post Configuring ELK stack to analyse Apache Tomcat logs we configured Logstash to pull data from a directory, whereas in this post we will configure Filebeat to push data to Logstash. Before configuring, let's have a brief look at why we need Filebeat.

Why Filebeat?
Filebeat separates the servers where logs are generated from the machine where logs are processed, spreading the load that would otherwise fall on a single machine.

Now, let's start with our configuration, following the steps below:

Step 1: Download and extract Filebeat in any directory; for me it's filebeat under directory /Users/ArpitAggarwal/, as follows:

$ mkdir filebeat
$ cd filebeat
$ wget https://download.elastic.co/beats/filebeat/filebeat-1.0.0-darwin.tgz
$ tar -xvzf filebeat-1.0.0-darwin.tgz

Step 2: Replace the content of filebeat.yml inside directory /Users/ArpitAggarwal/filebeat/filebeat-1.0.0-darwin/ with the content below:

filebeat:
  prospectors:
    -
      paths:
        - /Users/ArpitAggarwal/tomcat/logs/*.log*
      input_type: log
      document_type: my_log
output:
  logstash:
    hosts: ["localhost:5000"]
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

The paths setting above is the location from which data is to be pulled, and document_type is the value published in the 'type' field of each event, which the Logstash configuration matches on later.
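
Before starting Filebeat in the next step, it can be worth sanity-checking the configuration. Filebeat 1.x accepts a -configtest flag for this; if your build does not support it, simply skip this check:

$ cd filebeat/filebeat-1.0.0-darwin
$ ./filebeat -c filebeat.yml -configtest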

Step 3: Start filebeat as a background process, as follows:

$ cd filebeat/filebeat-1.0.0-darwin
$ ./filebeat -c filebeat.yml &
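
To verify the background process is up, a quick look at the process list suffices:

$ ps aux | grep '[f]ilebeat'   # the [f] keeps grep from matching itself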

Step 4: Configure Logstash to receive data from Filebeat and output it to ElasticSearch running on localhost. To do so, create a directory to hold our logstash configuration file; for me it's logstash, created under directory /Users/ArpitAggarwal/, as follows:

$ cd /Users/ArpitAggarwal/
$ mkdir logstash patterns
$ cd logstash
$ touch logstash.conf
$ cd ../patterns
$ touch grok-patterns.txt

Copy the content below into logstash.conf:

input {
    beats {
        # only applied if an event has no type; Filebeat's document_type (my_log) takes precedence
        type => "beats"
        port => 5000
    }
}
filter {
    multiline {
        patterns_dir => "/Users/ArpitAggarwal/logstash/patterns"
        pattern => "\[%{TOMCAT_DATESTAMP}"
        # join lines that do NOT start with a timestamp onto the previous event
        negate => true
        what => "previous"
    }
    if [type] == "my_log" and "com.test.controller.log.LogController" in [message] {
        mutate {
            add_tag => [ "MY_LOG" ]
        }
        if "_grokparsefailure" in [tags] {
            drop { }
        }
        date {
            match => [ "timestamp", "UNIX_MS" ]
            target => "@timestamp"
        }
    } else {
        drop { }
    }
}
output {
    stdout {
        codec => rubydebug
    }
    if [type] == "my_log" {
        elasticsearch {
            manage_template => false
            hosts => ["localhost:9201"]
        }
    }
}

Next, copy the contents from file https://github.com/elastic/logstash/blob/v1.2.2/patterns/grok-patterns to patterns/grok-patterns.txt
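
One caveat: the grok-patterns file linked above does not define TOMCAT_DATESTAMP, which the multiline filter references. A definition along the lines of the one shipped with Logstash's java patterns (double-check it against your Tomcat timestamp format) can be appended to patterns/grok-patterns.txt:

TOMCAT_DATESTAMP 20%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{HOUR}:?%{MINUTE}(?::?%{SECOND}) %{ISO8601_TIMEZONE}

Also note that the date filter matches a timestamp field against UNIX_MS, but nothing in the configuration above extracts such a field (the _grokparsefailure check likewise hints at a grok filter that is not shown). If your log lines carry an epoch-milliseconds timestamp, a grok filter placed before the date block could populate it; the pattern below is purely illustrative and must be adapted to your actual log layout:

grok {
    # hypothetical layout: epoch-millis timestamp followed by the message
    match => [ "message", "%{NUMBER:timestamp} %{GREEDYDATA:log_message}" ]
}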

Step 5: Download and extract Logstash in any directory; for me it's logstash-installation under directory /Users/ArpitAggarwal/, as follows:

$ wget https://download.elastic.co/logstash/logstash/logstash-2.1.0.zip
$ unzip logstash-2.1.0.zip

Step 6: Validate the logstash configuration file using the command below (note: the configuration references the beats input, so if the logstash-input-beats plugin from Step 7 is not yet installed, install it first or the test will fail):

$ cd /Users/ArpitAggarwal/logstash-installation/logstash-2.1.0/bin
$ ./logstash -f /Users/ArpitAggarwal/logstash/logstash.conf --configtest --verbose --debug

Step 7: Install the logstash-input-beats plugin and start Logstash as a background process, so that it pushes the data received from Filebeat to ElasticSearch, as follows:

$ cd /Users/ArpitAggarwal/logstash-installation/logstash-2.1.0/bin
$ ./plugin install logstash-input-beats
$ ./logstash -f /Users/ArpitAggarwal/logstash/logstash.conf &
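
Once Logstash and Filebeat are both running, you can confirm that events are reaching ElasticSearch by listing its indices; a logstash-* index with a growing docs.count means data is flowing:

$ curl 'http://localhost:9201/_cat/indices?v'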

Configuring ELK stack to analyse Apache Tomcat logs

In this post, we will set up ElasticSearch, Logstash, and Kibana to analyse Apache Tomcat server logs. Before setting up the ELK stack, let's have a brief look at each component.

ElasticSearch
A schema-less database with powerful search capabilities that is easy to scale horizontally. It indexes every single field and can aggregate and group data.

Logstash
Written in Ruby, Logstash lets us pipeline data to and from anywhere: an ETL pipeline that fetches, transforms, and stores events into ElasticSearch. The packaged version runs on JRuby and takes advantage of the JVM's threading capabilities, spawning dozens of threads to parallelize data processing.

Kibana
A web-based data analysis and dashboarding tool for ElasticSearch. It leverages ElasticSearch's search capabilities to visualise data in seconds, and supports the Lucene query string syntax as well as ElasticSearch's filter capabilities.

Next, we will install each component of the stack separately, following the steps below:

Step 1: Download and extract the ElasticSearch .tar.gz file in a directory; for me it's elasticsearch-2.1.0.tar.gz extracted into a directory named elasticsearch under /Users/ArpitAggarwal/.

Step 2: Start the elasticsearch server by moving to the bin folder and executing ./elasticsearch, as follows:

$ cd /Users/ArpitAggarwal/elasticsearch/elasticsearch-2.1.0/bin
$ ./elasticsearch

The above command starts elasticsearch, accessible at http://localhost:9201/, with the existing indexes listed at http://localhost:9201/_cat/indices?v. (ElasticSearch listens on port 9200 out of the box; this setup assumes http.port has been changed to 9201 in config/elasticsearch.yml.)
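
A quick curl against the root endpoint confirms the server is up; it returns a small JSON document with the cluster name and version:

$ curl 'http://localhost:9201/'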

To delete indexes, issue a curl request from the command line as follows:

$ curl -XDELETE 'http://localhost:9201/*/'

Step 3: Next, we will install and configure Kibana to point to our ElasticSearch instance. To do so, download and extract the .tar.gz file in a directory; for me it's kibana-4.3.0-darwin-x64.tar.gz extracted into a directory named kibana under /Users/ArpitAggarwal/.

Step 4: Modify kibana.yml under directory /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/config/ to point to our local ElasticSearch instance by replacing the existing elasticsearch.url value with http://localhost:9201.
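
The relevant line in kibana.yml ends up looking like this:

elasticsearch.url: "http://localhost:9201"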

Step 5: Start Kibana by moving to the bin folder and executing ./kibana, as follows:

$ cd /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/bin
$ ./kibana

The above command starts Kibana, accessible at http://localhost:5601/.

Step 6: Next, we will install and configure Nginx to point to our local Kibana instance. To do so, download Nginx into a directory (for me it's nginx under /Users/ArpitAggarwal/), unzip the nginx-*.tar.gz, and install it using the following commands:

$ cd nginx-1.9.6
$ ./configure
$ make
$ make install

By default, Nginx will be installed in the directory /usr/local/nginx, but Nginx lets you specify the installation directory by passing the additional compile option --prefix, as follows:

./configure --prefix=/Users/ArpitAggarwal/nginx

Next, open the nginx configuration file at /Users/ArpitAggarwal/nginx/conf/nginx.conf and replace the location block under server with the content below:

location / {
    # point to Kibana local instance
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
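
Before starting Nginx, it is worth validating the edited configuration; nginx provides the -t flag for exactly this:

$ /Users/ArpitAggarwal/nginx/sbin/nginx -t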

Step 7: Start Nginx, as follows:

$ cd /Users/ArpitAggarwal/nginx/sbin
$ ./nginx

The above command starts the nginx server, accessible at http://localhost/.
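
To confirm the proxy is wired up correctly, request the root URL; the response should be served by the Kibana instance behind Nginx:

$ curl -I http://localhost/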

Step 8: Next, we will install Logstash by executing the commands below:

$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
$ brew install logstash

The above commands install Logstash under /usr/local/opt/.
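
Homebrew links the logstash binary onto your PATH, so you can confirm the installation (and see which version was picked up) with:

$ logstash --version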

Step 9: Now we will configure Logstash to push data from the Tomcat server logs directory to ElasticSearch. To do so, create a directory to hold our logstash configuration file; for me it's logstash, created under directory /Users/ArpitAggarwal/, as follows:

$ cd /Users/ArpitAggarwal/
$ mkdir logstash patterns
$ cd logstash
$ touch logstash.conf
$ cd ../patterns
$ touch grok-patterns.txt

Copy the content below into logstash.conf:

input {
    file {
        path => "/Users/ArpitAggarwal/tomcat/logs/*.log*"
        start_position => "beginning"
        type => "my_log"
    }
}
filter {
    multiline {
        patterns_dir => "/Users/ArpitAggarwal/logstash/patterns"
        pattern => "\[%{TOMCAT_DATESTAMP}"
        # join lines that do NOT start with a timestamp onto the previous event
        negate => true
        what => "previous"
    }
    if [type] == "my_log" and "com.test.controller.log.LogController" in [message] {
        mutate {
            add_tag => [ "MY_LOG" ]
        }
        if "_grokparsefailure" in [tags] {
            drop { }
        }
        date {
            match => [ "timestamp", "UNIX_MS" ]
            target => "@timestamp"
        }
    } else {
        drop { }
    }
}
output {
    stdout {
        codec => rubydebug
    }
    if [type] == "my_log" {
        elasticsearch {
            manage_template => false
            # Logstash 2.x replaced host/port/protocol with a single hosts setting
            hosts => ["localhost:9201"]
        }
    }
}

Next, copy the contents from file https://github.com/elastic/logstash/blob/v1.2.2/patterns/grok-patterns to patterns/grok-patterns.txt. As noted in the Filebeat post above, TOMCAT_DATESTAMP is not defined in that file; append the pattern definition shown there to patterns/grok-patterns.txt as well.

Step 10: Validate the Logstash configuration file using the command below:

$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf --configtest --verbose --debug

Step 11: Push data to ElasticSearch using Logstash as follows:

$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf
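
As before, you can verify that documents are being indexed, and then point Kibana at them by configuring an index pattern (typically logstash-*) under Kibana's Settings tab:

$ curl 'http://localhost:9201/_cat/indices?v'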