Writing and Consuming SOAP Webservice with Spring

In the era of RESTful web services, I recently got a chance to consume a SOAP web service. I chose Spring for the task, firstly because we are already using Spring as the backend framework in our project, and secondly because its WebServiceTemplate provides an intuitive way to interact with services that have well-defined boundaries, promoting reusability and portability.

Assuming you already know about SOAP web services, let's create a hello-world SOAP service running on port 9999 and a client to consume it, following the steps below:

Step 1: Go to start.spring.io and create a new project soap-server adding the Web starter, based on the following image:

soap-server

Step 2: Edit SoapServerApplication.java to publish the hello-world service at Endpoint – http://localhost:9999/service/hello-world, as follows:


package com.arpit.soap.server.main;

import javax.xml.ws.Endpoint;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import com.arpit.soap.server.service.impl.HelloWorldServiceImpl;

@SpringBootApplication
public class SoapServerApplication implements CommandLineRunner {

	@Value("${service.port}")
	private String servicePort;

	@Override
	public void run(String... args) throws Exception {
		Endpoint.publish("http://localhost:" + servicePort
				+ "/service/hello-world", new HelloWorldServiceImpl());
	}

	public static void main(String[] args) {
		SpringApplication.run(SoapServerApplication.class, args);
	}
}


Step 3: Edit application.properties to specify the application name, the application port and the port of the hello-world service, as follows:

server.port=9000
spring.application.name=soap-server

## Soap Service Port
service.port=9999

Step 4: Create additional packages com.arpit.soap.server.service and com.arpit.soap.server.service.impl to define the web service and its implementation, as follows:

HelloWorldService.java

package com.arpit.soap.server.service;

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebService;

import com.arpit.soap.server.model.ApplicationCredentials;

@WebService
public interface HelloWorldService {

	@WebMethod(operationName = "helloWorld", action = "https://aggarwalarpit.wordpress.com/hello-world/helloWorld")
	String helloWorld(final String name,
			@WebParam(header = true) final ApplicationCredentials credential);

}

@WebService specified above marks a Java class as implementing a Web Service, or a Java interface as defining a Web Service interface.

@WebMethod specified above marks a Java method as a Web Service operation.

@WebParam specified above customizes the mapping of an individual parameter to a Web Service message part and XML element.
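The ApplicationCredentials type referenced above is a plain model class under com.arpit.soap.server.model that carries the credentials passed in the SOAP header. It is not shown in the original listing, so the following is only a minimal sketch; the userId and password fields match the header used later in this post.

ApplicationCredentials.java

package com.arpit.soap.server.model;

public class ApplicationCredentials {

	private String userId;
	private String password;

	public String getUserId() {
		return userId;
	}

	public void setUserId(final String userId) {
		this.userId = userId;
	}

	public String getPassword() {
		return password;
	}

	public void setPassword(final String password) {
		this.password = password;
	}
}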

HelloWorldServiceImpl.java

package com.arpit.soap.server.service.impl;

import javax.jws.WebService;

import com.arpit.soap.server.model.ApplicationCredentials;
import com.arpit.soap.server.service.HelloWorldService;

@WebService(endpointInterface = "com.arpit.soap.server.service.HelloWorldService")
public class HelloWorldServiceImpl implements HelloWorldService {

	@Override
	public String helloWorld(final String name,
			final ApplicationCredentials credential) {
		return "Hello World from " + name;
	}
}

Step 5: Move to soap-server directory and run command: mvn spring-boot:run. Once running, open http://localhost:9999/service/hello-world?wsdl to view the WSDL for the hello-world service.

Next, we will create soap-client which will consume our newly created hello-world service.

Step 6: Go to start.spring.io and create a new project soap-client adding the Web and Web Services starters, based on the following image:

soap-client.png

Step 7: Edit SoapClientApplication.java to create a request for the hello-world web service, send it to the soap-server along with a SOAP header, and read the response, as follows:


package com.arpit.soap.client.main;

import java.io.IOException;
import java.io.StringWriter;

import javax.xml.bind.JAXBElement;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.ws.WebServiceMessage;
import org.springframework.ws.client.core.WebServiceMessageCallback;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.soap.SoapMessage;
import org.springframework.xml.transform.StringSource;

import com.arpit.soap.server.service.ApplicationCredentials;
import com.arpit.soap.server.service.HelloWorld;
import com.arpit.soap.server.service.HelloWorldResponse;
import com.arpit.soap.server.service.ObjectFactory;

@SpringBootApplication
@ComponentScan("com.arpit.soap.client.config")
public class SoapClientApplication implements CommandLineRunner {

	@Autowired
	@Qualifier("webServiceTemplate")
	private WebServiceTemplate webServiceTemplate;

	@Value("#{'${service.soap.action}'}")
	private String serviceSoapAction;

	@Value("#{'${service.user.id}'}")
	private String serviceUserId;

	@Value("#{'${service.user.password}'}")
	private String serviceUserPassword;

	public static void main(String[] args) {
		SpringApplication.run(SoapClientApplication.class, args);
		System.exit(0);
	}

	public void run(String... args) throws Exception {
		final HelloWorld helloWorld = createHelloWorldRequest();
		@SuppressWarnings("unchecked")
		final JAXBElement<HelloWorldResponse> jaxbElement = (JAXBElement<HelloWorldResponse>) sendAndReceive(helloWorld);
		final HelloWorldResponse helloWorldResponse = jaxbElement.getValue();
		System.out.println(helloWorldResponse.getReturn());
	}

	private Object sendAndReceive(HelloWorld request) {
		return webServiceTemplate.marshalSendAndReceive(request,
				new WebServiceMessageCallback() {
					public void doWithMessage(WebServiceMessage message)
							throws IOException, TransformerException {
						SoapMessage soapMessage = (SoapMessage) message;
						soapMessage.setSoapAction(serviceSoapAction);
						org.springframework.ws.soap.SoapHeader soapheader = soapMessage
								.getSoapHeader();
						final StringWriter out = new StringWriter();
						webServiceTemplate.getMarshaller().marshal(
								getHeader(serviceUserId, serviceUserPassword),
								new StreamResult(out));
						Transformer transformer = TransformerFactory
								.newInstance().newTransformer();
						transformer.transform(new StringSource(out.toString()),
								soapheader.getResult());
					}
				});
	}

	private Object getHeader(final String userId, final String password) {
		final https.aggarwalarpit_wordpress.ObjectFactory headerObjectFactory = new https.aggarwalarpit_wordpress.ObjectFactory();
		final ApplicationCredentials applicationCredentials = new ApplicationCredentials();
		applicationCredentials.setUserId(userId);
		applicationCredentials.setPassword(password);
		final JAXBElement<ApplicationCredentials> header = headerObjectFactory
				.createApplicationCredentials(applicationCredentials);
		return header;
	}

	private HelloWorld createHelloWorldRequest() {
		final ObjectFactory objectFactory = new ObjectFactory();
		final HelloWorld helloWorld = objectFactory.createHelloWorld();
		helloWorld.setArg0("Arpit");
		return helloWorld;
	}

}

Step 8: Next, create an additional package com.arpit.soap.client.config to configure the WebServiceTemplate, as follows:

ApplicationConfig.java

package com.arpit.soap.client.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.oxm.jaxb.Jaxb2Marshaller;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;
import org.springframework.ws.client.core.WebServiceTemplate;
import org.springframework.ws.soap.saaj.SaajSoapMessageFactory;
import org.springframework.ws.transport.http.HttpComponentsMessageSender;

@Configuration
@EnableWebMvc
public class ApplicationConfig extends WebMvcConfigurerAdapter {

	@Value("#{'${service.endpoint}'}")
	private String serviceEndpoint;

	@Value("#{'${marshaller.packages.to.scan}'}")
	private String marshallerPackagesToScan;

	@Value("#{'${unmarshaller.packages.to.scan}'}")
	private String unmarshallerPackagesToScan;

	@Bean
	public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
		return new PropertySourcesPlaceholderConfigurer();
	}

	@Bean
	public SaajSoapMessageFactory messageFactory() {
		SaajSoapMessageFactory messageFactory = new SaajSoapMessageFactory();
		messageFactory.afterPropertiesSet();
		return messageFactory;
	}

	@Bean
	public Jaxb2Marshaller marshaller() {
		Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
		marshaller.setPackagesToScan(marshallerPackagesToScan.split(","));
		return marshaller;
	}

	@Bean
	public Jaxb2Marshaller unmarshaller() {
		Jaxb2Marshaller unmarshaller = new Jaxb2Marshaller();
		unmarshaller.setPackagesToScan(unmarshallerPackagesToScan.split(","));
		return unmarshaller;
	}

	@Bean
	public WebServiceTemplate webServiceTemplate() {
		WebServiceTemplate webServiceTemplate = new WebServiceTemplate(
				messageFactory());
		webServiceTemplate.setMarshaller(marshaller());
		webServiceTemplate.setUnmarshaller(unmarshaller());
		webServiceTemplate.setMessageSender(messageSender());
		webServiceTemplate.setDefaultUri(serviceEndpoint);
		return webServiceTemplate;
	}

	@Bean
	public HttpComponentsMessageSender messageSender() {
		HttpComponentsMessageSender httpComponentsMessageSender = new HttpComponentsMessageSender();
		return httpComponentsMessageSender;
	}
}

Step 9: Edit application.properties to specify the application name, port and the hello-world SOAP web service configuration, as follows:

server.port=9000
spring.application.name=soap-client

## Soap Service Configuration

service.endpoint=http://localhost:9999/service/hello-world
service.soap.action=https://aggarwalarpit.wordpress.com/hello-world/helloWorld
service.user.id=arpit
service.user.password=arpit
marshaller.packages.to.scan=com.arpit.soap.server.service
unmarshaller.packages.to.scan=com.arpit.soap.server.service

service.endpoint specified above is the URL provided to the service user to invoke the services exposed by the service provider.

service.soap.action specifies which process or program needs to be called when a request is sent by the service requester, and also defines the relative path of that process/program.

marshaller.packages.to.scan specifies the packages to scan at the time of marshalling, before sending the request to the server.

unmarshaller.packages.to.scan specifies the packages to scan at the time of unmarshalling, after receiving the response from the server.

Now we will generate Java objects from the WSDL using wsimport and copy them to the soap-client project, executing the command below in the terminal:

wsimport -keep -verbose http://localhost:9999/service/hello-world?wsdl

Step 10: Move to the soap-client directory and run command: mvn spring-boot:run. Once the command finishes, we will see "Hello World from Arpit" on the console as the response from the hello-world SOAP service.

While running, if you get an error like – Unable to marshal type "com.arpit.soap.server.service.HelloWorld" as an element because it is missing an @XmlRootElement annotation – then add @XmlRootElement(name = "helloWorld", namespace = "http://service.server.soap.arpit.com/") to com.arpit.soap.server.service.HelloWorld, where the namespace should match the xmlns:ser defined in the SOAP envelope, as below:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://service.server.soap.arpit.com/">
   <soapenv:Header>
      <ser:arg1>
         <userId>arpit</userId>
         <password>arpit</password>
      </ser:arg1>
   </soapenv:Header>
   <soapenv:Body>
      <ser:helloWorld>
         <!--Optional:-->
         <arg0>Arpit</arg0>
      </ser:helloWorld>
   </soapenv:Body>
</soapenv:Envelope>
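For reference, here is a simplified sketch of how the wsimport-generated HelloWorld class might look once the annotation is added (only the annotation is new; the remaining generated members are unchanged and abbreviated here):

package com.arpit.soap.server.service;

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "helloWorld", namespace = "http://service.server.soap.arpit.com/")
public class HelloWorld {

	protected String arg0;

	public String getArg0() {
		return arg0;
	}

	public void setArg0(String value) {
		this.arg0 = value;
	}
}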

The complete source code is hosted on github.

Migrating from SVN to Git

We finally migrated one of our legacy projects from Subversion to Git, keeping in mind that some team members will continue working on SVN until they finish developing the product features they are working on.

In this post I have tried to replicate the steps we followed during the migration, taking as an example a project named my-application stored in a locally hosted SVN repository, following the steps below:

Step 1: Download svn-migration-scripts.jar from here and place it in any directory; for me it's svn-to-git under the Windows drive D:\, as follows:

D:\> mkdir svn-to-git
D:\> cd svn-to-git

Step 2: Verify the scripts to make sure the Java Runtime Environment, Git, Subversion, and the git-svn utility are installed:

java -jar D:\svn-to-git\svn-migration-scripts.jar verify

Make sure Java, Subversion and Git are installed before proceeding to the next step.

Step 3: Extract the author information from SVN into a text file, as follows:

D:\svn-to-git> java -jar D:\svn-to-git\svn-migration-scripts.jar authors http://localhost:81/svn/my-application > authors.txt

http://localhost:81/svn specified above is the SVN host URL.

my-application refers to the name of a project in SVN.

The above command creates authors.txt, which contains the username of every author in the SVN repository along with a generated name and email address. Edit each user's name and email address and save the file.

Step 4: Clone the SVN repository using git svn clone command as follows:

D:\svn-to-git> git svn clone --trunk=/trunk/dev/ --username=ArpitAggarwal --branches=/branches/dev --authors-file=authors.txt http://localhost:81/svn/my-application dev-git

The above command clones the dev branch stored in /branches/dev of my-application and writes the information Git needs about it into the .git folder generated under the dev-git directory.

If the above command fails because of a Perl crash then, instead of restarting the git svn clone process, move to your partially retrieved Git repository and execute the git svn fetch command; it continues fetching the SVN revisions from where it left off, as follows:

D:\svn-to-git\dev-git> git svn fetch

Step 5: Next, we will clean the newly created Git repository to make it ready to push to a remote GitHub repository, and also make Git aware of the authors file we created earlier, as follows:

D:\svn-to-git\dev-git> java -Dfile.encoding=utf-8 -jar D:\svn-to-git\svn-migration-scripts.jar clean-git --force
D:\svn-to-git\dev-git> git config svn.authorsfile D:\svn-to-git\authors.txt

If the clean-git command fails, check out my answer on stackoverflow.com.

Step 6: Now create a new repository on GitHub (for me it's my-application) and push the code to it using the commands below:

D:\svn-to-git\dev-git> git remote add origin git@github.com:arpitaggarwal/my-application.git
D:\svn-to-git\dev-git> git push -u origin master

From now on, anytime you want to update Git with the latest SVN code, just fetch the new commits from the SVN repository, rebase them onto the local Git repository, clean it and push to the remote, as follows:

D:\svn-to-git\dev-git> git svn fetch
D:\svn-to-git\dev-git> java -Dfile.encoding=utf-8 -jar D:\svn-to-git\svn-migration-scripts.jar sync-rebase
D:\svn-to-git\dev-git> java -Dfile.encoding=utf-8 -jar D:\svn-to-git\svn-migration-scripts.jar clean-git --force
D:\svn-to-git\dev-git> git add .
D:\svn-to-git\dev-git> git push

Sources: https://www.atlassian.com/git/tutorials/svn-to-git-prepping-your-team-migration

Microservices fault and latency tolerance using Netflix Hystrix

Recently in one of my projects I got a requirement to execute a fallback call for a failing web service call. To implement it I was looking for an implementation of the circuit breaker pattern and finally came across the Netflix Hystrix library, which I found best suited for our application.

In this post I have tried to showcase a thin slice of our problem and how Hystrix solved it, using a single microservice, a client to access it, and the Hystrix Dashboard. Before diving into the code, let's briefly understand what Hystrix is and how it works internally.

What is Hystrix?

Hystrix is a library that helps us control the interactions between distributed services by adding latency-tolerance and fault-tolerance logic. It does this by isolating points of access between the services, stopping cascading failures across them, and providing fallback options, all of which improve our system's overall resiliency.

It implements the circuit breaker pattern: a circuit breaker transitions from CLOSED to OPEN when the request volume for a circuit meets a specified threshold and the error percentage exceeds the threshold error percentage. While it is open, it short-circuits all requests made against that circuit breaker. After some amount of time, the next single request is let through (this is the HALF-OPEN state). If that request fails, the circuit breaker returns to the OPEN state for the duration of the sleep window; if it succeeds, the circuit breaker transitions to CLOSED and all requests made against it are passed through to the service again. You can explore more here.
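To make the pattern concrete before wiring it into Spring, below is a minimal, self-contained sketch using Hystrix's plain command API (the class and method names are hypothetical and not part of the projects built in the following steps): run() wraps the risky call and getFallback() is returned whenever the command fails, times out or the circuit is open.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class EmployeeListCommand extends HystrixCommand<String> {

	public EmployeeListCommand() {
		super(HystrixCommandGroupKey.Factory.asKey("EmployeeService"));
	}

	@Override
	protected String run() {
		// the potentially failing remote call goes here; it fails in this sketch
		throw new RuntimeException("employee-service unavailable");
	}

	@Override
	protected String getFallback() {
		// used when run() fails, times out or the circuit is open
		return "Fallback call, seems employee service is down";
	}
}

Calling new EmployeeListCommand().execute() would then return the fallback string instead of propagating the failure. The rest of this post uses the equivalent annotation-driven approach (@HystrixCommand) instead.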

Now, let's create an employee-service microservice running on port 8090, a client to access it, and the Hystrix Dashboard, following the steps below:

Step 1: Go to start.spring.io and create a new project employee-service adding the Web starter, based on the following image:

screen-1

Step 2: Edit EmployeeServiceApplication.java to add a method which returns a list of employees, as follows:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class EmployeeServiceApplication {

 public static void main(String[] args) {
   SpringApplication.run(EmployeeServiceApplication.class, args);
 }

 @RequestMapping(value = "/list")
 public String list() {
	return "Arpit, Sanjeev, Abhishek";
 }
}

Step 3: Edit application.properties to specify the application name and port number of a service, as follows:

server.port=8090
spring.application.name=employee-service

Step 4: Move to employee-service directory and run command: mvn spring-boot:run. Once running, open http://localhost:8090/list.

Next, we will create hystrix-client, which will access our newly created employee-service and, if it is down, return the response from a fallback method.

Step 5: Go to start.spring.io and create a new project hystrix-client adding the Web, Hystrix and Actuator starters, based on the following image:

screen-2

 

Step 6: Edit HystrixClientApplication.java to add a method which calls employee-service to get a response and, if the service is down or unavailable for any reason, returns a response from the fallback method, as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.netflix.hystrix.EnableHystrix;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

import com.test.service.IEmployeeService;

@EnableHystrix
@EnableCircuitBreaker
@SpringBootApplication
@RestController
@ComponentScan(basePackages = { "com.test.service" })
public class HystrixClientApplication {

	@Autowired
	private IEmployeeService employeeService;

	public static void main(String[] args) {
		SpringApplication.run(HystrixClientApplication.class, args);
	}

	@RequestMapping("/list")
	public String list() {
		return employeeService.list();
	}

	static class ApplicationConfig extends WebMvcConfigurerAdapter {

		@Bean
		public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() {
			return new PropertySourcesPlaceholderConfigurer();
		}
	}
}

Step 7: Create the interface IEmployeeService under the com.test.service package and its implementation class EmployeeServiceImpl under com.test.service.impl, and edit them as follows:

IEmployeeService.java

package com.test.service;

public interface IEmployeeService {
  String list();
}

EmployeeServiceImpl.java

package com.test.service.impl;

import java.net.URI;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixProperty;
import com.test.service.IEmployeeService;

@Service
public class EmployeeServiceImpl implements IEmployeeService {

 @Value("#{'${employee.service.url}'}")
 private String employeeServiceUrl;

 @HystrixCommand(commandProperties = {
			@HystrixProperty(name = "execution.isolation.strategy", value = "THREAD"),
			@HystrixProperty(name = "execution.timeout.enabled", value = "true"),
			@HystrixProperty(name = "execution.isolation.thread.timeoutInMilliseconds", value = "500"),
			@HystrixProperty(name = "execution.isolation.thread.interruptOnTimeout", value = "true"),
			@HystrixProperty(name = "fallback.enabled", value = "true"),
			@HystrixProperty(name = "circuitBreaker.enabled", value = "true"),
			@HystrixProperty(name = "circuitBreaker.requestVolumeThreshold", value = "10"),
			@HystrixProperty(name = "circuitBreaker.sleepWindowInMilliseconds", value = "1000"),
			@HystrixProperty(name = "circuitBreaker.errorThresholdPercentage", value = "10"),
			@HystrixProperty(name = "circuitBreaker.forceOpen", value = "false"),
			@HystrixProperty(name = "circuitBreaker.forceClosed", value = "false") },
		fallbackMethod = "fallback", commandKey = "list", groupKey = "EmployeeServiceImpl",
		threadPoolKey = "thread-pool-employee-service",
		threadPoolProperties = { @HystrixProperty(name = "coreSize", value = "5") },
		ignoreExceptions = { IllegalAccessException.class })
 public String list() {
	RestTemplate restTemplate = new RestTemplate();
	URI uri = URI.create(employeeServiceUrl + "/list");
	return restTemplate.getForObject(uri, String.class);
 }
 
 public String fallback() {
	return "Fallback call, seems employee service is down";
 }
}

@HystrixCommand specified above is used to wrap code that will execute potentially risky functionality with fault and latency tolerance, statistics and performance metrics capture, circuit breaker and bulkhead functionality.

@HystrixProperty specified above is used to control HystrixCommand behavior. All available options are listed here.

Step 8: Edit application.properties to specify the port on which hystrix-client should run and the URL at which employee-service is available, as follows:

server.port=8080
employee.service.url=http://localhost:8090

Step 9: Move to hystrix-client directory and run command: mvn spring-boot:run. Once running, open http://localhost:8080/list.

Is Hystrix working?

Shut down the employee-service application. The fallback message should now be seen: Fallback call, seems employee service is down.

Next, we will create the Hystrix Dashboard, which provides a graphical view of successful and failed requests, circuit status, and the host, cluster and thread pool status of an application.

Step 10: Go to start.spring.io and create a new project hystrix-dashboard adding the Hystrix Dashboard starter. Once created, edit HystrixDashboardApplication.java, as follows:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.hystrix.dashboard.EnableHystrixDashboard;

@SpringBootApplication
@EnableHystrixDashboard
public class HystrixDashboardApplication {

  public static void main(String[] args) {
    SpringApplication.run(HystrixDashboardApplication.class, args);
  }
}

Step 11: Edit application.properties to specify the application port on which hystrix-dashboard should be running, as follows:

server.port=8383

Step 12: Move to the hystrix-dashboard directory and run command: mvn spring-boot:run. Once running, open http://localhost:8383/hystrix, enter http://localhost:8080/hystrix.stream in the stream textbox and click Monitor Stream. Once the dashboard is loaded we will see an image similar to the one below:

screen-3

The complete source code is hosted on github.

Enterprise Integration Pattern with Spring

Recently in one of my projects I got a requirement to poll a directory and its sub-directories at a constant rate and process the files residing in them to derive some business information. To implement it we used Spring's enterprise integration pattern implementation, for two reasons: firstly, we are already using Spring as our backend framework, and secondly, it enforces separation of concerns between business logic and integration logic in an intuitive way, with well-defined boundaries that promote reusability and portability.

What is Spring Integration?

Spring Integration is Spring's implementation of the enterprise integration patterns. It supports integration with external systems via declarative adapters, and these adapters provide a higher level of abstraction over Spring's support for remoting, messaging, and scheduling. It does not need a container or a separate process space and can be invoked from an existing program, as it is just a JAR that can be dropped into a WAR or a standalone system. You can read the full post at http://blog.xebia.in/2016/03/28/enterprise-integration-pattern-with-spring/
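As a flavour of what that looks like in code, here is a minimal, hypothetical sketch of a polling file inbound adapter wired to a service activator using Spring Integration's annotation support; the directory path, delay and handler are placeholder values, and recursive sub-directory scanning would need additional configuration beyond this sketch.

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.file.FileReadingMessageSource;
import org.springframework.messaging.MessageHandler;

@Configuration
@EnableIntegration
public class FilePollingConfig {

	// polls the directory every second and emits one message per new file
	@Bean
	@InboundChannelAdapter(value = "fileInputChannel", poller = @Poller(fixedDelay = "1000"))
	public MessageSource<File> fileReadingMessageSource() {
		FileReadingMessageSource source = new FileReadingMessageSource();
		source.setDirectory(new File("/data/incoming"));
		return source;
	}

	// the business logic that consumes each polled file
	@Bean
	@ServiceActivator(inputChannel = "fileInputChannel")
	public MessageHandler fileHandler() {
		return message -> System.out.println("Processing " + message.getPayload());
	}
}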

Configuring Logstash with Filebeat

In the post Configuring ELK stack to analyse Apache Tomcat logs we configured Logstash to pull data from a directory, whereas in this post we will configure Filebeat to push data to Logstash. Before configuring it, let's have a brief look at why we need Filebeat.

Why Filebeat?
Filebeat decouples the servers where logs are generated from the server where logs are processed, thus taking the processing load off a single machine.

Now, let's start with our configuration, following the steps below:

Step 1: Download and extract Filebeat in any directory, for me it’s filebeat under directory /Users/ArpitAggarwal/ as follows:

$ mkdir filebeat
$ cd filebeat
$ wget https://download.elastic.co/beats/filebeat/filebeat-1.0.0-darwin.tgz
$ tar -xvzf filebeat-1.0.0-darwin.tgz

Step 2: Replace the filebeat.yml content inside directory /Users/ArpitAggarwal/filebeat/filebeat-1.0.0-darwin/ with below content:

filebeat:
  prospectors:
    -
      paths:
        - /Users/ArpitAggarwal/tomcat/logs/*.log*
      input_type: log
      document_type: my_log
output:
  logstash:
    hosts: ["localhost:5000"]
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

The paths setting specified above is the location from which data is to be pulled.
The document_type specified above is the type published in the 'type' field, which the Logstash configuration uses.

Step 3: Start filebeat as a background process, as follows:

$ cd filebeat/filebeat-1.0.0-darwin
$ ./filebeat -c filebeat.yml &

Step 4: Configure Logstash to receive data from filebeat and output it to ElasticSearch running on localhost. To do the same, create a directory where we will create our logstash configuration file, for me it’s logstash created under directory /Users/ArpitAggarwal/ as follows:

$ cd /Users/ArpitAggarwal/
$ mkdir logstash patterns
$ cd logstash
$ touch logstash.conf
$ cd ../patterns
$ touch grok-patterns.txt

Copy the below content to logstash.conf:

input {
   beats {
     type => beats
     port => 5000
   }
}
filter {
    multiline {
              patterns_dir => "/Users/ArpitAggarwal/logstash/patterns"
              pattern => "\[%{TOMCAT_DATESTAMP}"
              what => "previous"
    }
    if [type] == "my_log" and "com.test.controller.log.LogController" in [message] {
        mutate {
                add_tag => [ "MY_LOG" ]
               }
        if "_grokparsefailure" in [tags] {
                  drop { }
              }
       date {
             match => [ "timestamp", "UNIX_MS" ]
             target => "@timestamp"
            }
        } else {
            drop { }
      }
}
output {
   stdout {
          codec => rubydebug
   }
   if [type] == "my_log"  {
                elasticsearch {
                           manage_template => false
                           hosts => ["localhost:9201"]
                 }
    }
}

Next, copy the contents from file https://github.com/elastic/logstash/blob/v1.2.2/patterns/grok-patterns to patterns/grok-patterns.txt

Step 5: Download and extract Logstash in any directory, for me it’s logstash-installation under directory /Users/ArpitAggarwal/, as follows:

$ wget https://download.elastic.co/logstash/logstash/logstash-2.1.0.zip
$ unzip logstash-2.1.0.zip

Step 6: Validate the Logstash configuration file using the command below:

$ cd /Users/ArpitAggarwal/logstash-installation/logstash-2.1.0/bin
$ ./logstash -f /Users/ArpitAggarwal/logstash/logstash.conf --configtest --verbose --debug

Step 7: Install the logstash-input-beats plugin and start Logstash as a background process to push the data received from Filebeat to ElasticSearch, as follows:

$ cd /Users/ArpitAggarwal/logstash-installation/logstash-2.1.0/bin
$ ./plugin install logstash-input-beats
$ ./logstash -f /Users/ArpitAggarwal/logstash/logstash.conf &

Running Web Application in Linked Docker Containers Environment

In the post Dockerizing Web Application with Puppet we hosted a web application in a single container; this time we will host a web application in a linked Docker container environment – one container in which our database (MySQL) resides, leveraged by our web application hosted in another Docker container.

Before starting, let's briefly look at linking Docker containers and how it helps us.

Linking or connecting Docker containers?
Linking Docker containers allows containers to discover each other and securely transfer information between them. Linking sets up a conduit between the containers, allowing the recipient container to securely access the source container without exposing the source container to the network.

In this post, the recipient container is the spring-application-container, which we created in this post, and the source container is the database container, which we will create now.

Let's start by creating the database container and linking it with the spring-application-container, following the steps below:

Step 1: Create a directory with any name (for me it's database-container) inside the docker directory (created in this post), as follows:

$ cd docker
$ mkdir database-container
$ cd database-container
$ touch Dockerfile

Step 2: Copy the below content in docker/database-container/Dockerfile:

FROM ubuntu:latest
MAINTAINER arpitaggarwal "aggarwalarpit.89@gmail.com"
RUN apt-get install -q -y mysql-server
RUN apt-get install -q -y mysql-client
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
EXPOSE 3306

RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf specified above sets the MySQL bind-address to 0.0.0.0, because MySQL only listens on 127.0.0.1 by default.

Step 3: Build the newly created database-container as follows:

$ cd database-container
$ docker build --no-cache=true -t database .

database specified above refers to the name of the database-container image.

Step 4: Start the database-container, assigning it the name "db", with MySQL Server installed as a service inside it, as follows:

$ docker run -P -it --name db database /bin/bash

Step 5: Modify the existing spring-application-container Dockerfile to copy the new application, which uses the database hosted on the database-container, into the container, as follows:

FROM ubuntu:latest
MAINTAINER arpitaggarwal "aggarwalarpit.89@gmail.com"
RUN apt-get -y update
RUN apt-get -q -y install git
RUN sudo apt-get install -y ruby
RUN apt-get install -y ruby-dev
RUN apt-get -y update
RUN apt-get install -y make
RUN apt-get install -y build-essential
RUN apt-get install -y puppet
RUN gem install librarian-puppet
ADD Puppetfile /
RUN librarian-puppet install
RUN puppet apply --modulepath=/modules -e "include java8 class { 'tomcat':version => '7',java_home => '/usr/lib/jvm/java-8-oracle'}"
RUN apt-get remove -y make puppet build-essential ruby-dev
COPY /spring-mysql/target/spring-mysql.war /var/lib/tomcat7/webapps/
EXPOSE 8080

Step 6: Build the application inside the docker directory, this time the spring-mysql project cloned from GitHub:


$ cd docker
$ git clone https://github.com/arpitaggarwal/spring-mysql.git
$ cd spring-mysql
$ mvn clean install

Step 7: Next, start spring-application-container linking it with database-container as follows:

$ docker run -p 8080:8080 -it --name webapp --link db spring-application-container /bin/bash

The --link flag specified above creates a secure link between the spring-application-container and the database-container, and exposes connectivity information for the source container to the recipient container in two ways:

a). Environment variables.
b). Updating the /etc/hosts file.

Now we can use the exposed environment variables or the entries in the hosts file to access the db container. Also, if we restart the source container, the linked containers' /etc/hosts files will be automatically updated with the source container's new IP address, allowing linked communication to continue.

In our application, we used the host entry mechanism to read the IP address of the source container, using Java's InetAddress.
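As a rough illustration (not the exact code from the project), resolving the linked container through its host entry and building a JDBC URL from it could look like the sketch below; the "db" alias comes from the --link flag used in Step 7, and the test database is created in Step 8.

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DbHostResolver {

	public static void main(String[] args) throws UnknownHostException {
		// "db" is the alias written to /etc/hosts by the --link flag
		final InetAddress dbAddress = InetAddress.getByName("db");
		final String jdbcUrl = "jdbc:mysql://" + dbAddress.getHostAddress() + ":3306/test";
		System.out.println(jdbcUrl);
	}
}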

Step 8: Our application will try to access the MySQL database with user "test" and password "test", and use the employee table to store the employee details submitted from the application, so let's create them:

$ mysql --user=root mysql
$ CREATE USER 'test'@'%' IDENTIFIED BY 'test';
$ GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
$ FLUSH PRIVILEGES;
$ CREATE DATABASE  test;
$ USE test;
$ CREATE TABLE employee (id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(20), age VARCHAR(30));

Step 9: Get your Docker container IP Address, using docker-machine:

docker-machine ip your_vm_name

Next, create an employee by submitting a name and age from the application at http://container-ip-address:8080/spring-mysql, and refresh the screen to retrieve it from the database.

The complete source code is hosted on github.

 

Configuring ELK stack to analyse Apache Tomcat logs

In this post, we will set up ElasticSearch, Logstash and Kibana to analyse Apache Tomcat server logs. Before setting up the ELK stack, let's have a brief look at each.

ElasticSearch
A schema-less database with powerful search capabilities that is easy to scale horizontally. It indexes every single field and can aggregate and group the data.

Logstash
Written in Ruby, Logstash lets us pipeline data to and from anywhere: an ETL pipeline that fetches, transforms, and stores events into ElasticSearch. The packaged version runs on JRuby and takes advantage of the JVM's threading capabilities by throwing dozens of threads at the data to parallelize processing.

Kibana
A web-based data analysis and dashboarding tool for ElasticSearch. It leverages ElasticSearch's search capabilities to visualise data in seconds and supports Lucene query string syntax as well as ElasticSearch's filter capabilities.

Next, we will install each component of the stack separately, following the steps below:

Step 1: Download and extract the ElasticSearch .tar.gz file in a directory; for me it's elasticsearch-2.1.0.tar.gz extracted into a directory named elasticsearch under /Users/ArpitAggarwal/

Step 2: Start the ElasticSearch server by moving to the bin folder and executing ./elasticsearch, as follows:

$ cd /Users/ArpitAggarwal/elasticsearch/elasticsearch-2.1.0/bin
$ ./elasticsearch

The above command starts ElasticSearch, accessible at http://localhost:9201/, with the default indexes accessible at http://localhost:9201/_cat/indices?v

To delete indexes, issue a curl from the command line as follows:

curl -XDELETE 'http://localhost:9201/*/'

Step 3: Next, we will install and configure Kibana to point to our ElasticSearch instance. To do so, download and extract the .tar.gz file in a directory; for me it's kibana-4.3.0-darwin-x64.tar.gz extracted into a directory named kibana under /Users/ArpitAggarwal/

Step 4: Modify kibana.yml at /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/config/kibana.yml to point to our local ElasticSearch instance by replacing the existing elasticsearch.url value with http://localhost:9201

Step 5: Start Kibana by moving to the bin folder and executing ./kibana, as follows:

$ cd /Users/ArpitAggarwal/kibana/kibana-4.3.0-darwin-x64/bin
$ ./kibana

The above command starts Kibana, accessible at http://localhost:5601/

Step 6: Next, we will install and configure Nginx to point to our local Kibana instance. To do so, download Nginx into a directory (for me it's nginx under /Users/ArpitAggarwal/), extract nginx-*.tar.gz and install it using the commands:

$ cd nginx-1.9.6
$ ./configure
$ make
$ make install

By default, Nginx will be installed in /usr/local/nginx, but Nginx lets you specify the directory where it is to be installed by providing the additional compile option --prefix, as follows:

./configure --prefix=/Users/ArpitAggarwal/nginx

Next, open the Nginx configuration file at /Users/ArpitAggarwal/nginx/conf/nginx.conf and replace the location block under server with the content below:

location / {
    # point to Kibana local instance
    proxy_pass http://localhost:5601;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

Step 7: Start Nginx, as follows:

cd /Users/ArpitAggarwal/nginx/sbin
./nginx

The above command starts the Nginx server, accessible at http://localhost

Step 8: Next, we will install Logstash by executing the commands below:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" < /dev/null 2> /dev/null
brew install logstash

The above commands install Logstash at /usr/local/opt/

Step 9: Now we will configure Logstash to push data from the Tomcat server logs directory to ElasticSearch. To do so, create a directory where we will keep our Logstash configuration file; for me it's logstash created under /Users/ArpitAggarwal/, as follows:

cd /Users/ArpitAggarwal/
mkdir logstash patterns
cd logstash
touch logstash.conf
cd ../patterns
touch grok-patterns.txt

Copy the below content to logstash.conf:

input {
    file {
        path => "/Users/ArpitAggarwal/tomcat/logs/*.log*"
        start_position => beginning
	    type=> "my_log"
    }
}
filter {
	multiline {
			  patterns_dir => "/Users/ArpitAggarwal/logstash/patterns"
			  pattern => "\[%{TOMCAT_DATESTAMP}"
			  what => "previous"
	}
	if [type] == "my_log"  and "com.test.controller.log.LogController" in [message] {
        mutate {
				add_tag => [ "MY_LOG" ]
			   }
       	if "_grokparsefailure" in [tags] {
				  drop { }
		      }
       date {
             match => [ "timestamp", "UNIX_MS" ]
             target => "@timestamp"
            }
	    } else {
	        drop { }
	  }
}
output {
   stdout {
          codec => rubydebug
   }
   if [type] == "my_log"  {
                elasticsearch {
                           manage_template => false
                           host => localhost
                           protocol => http
                           port => "9201"
                 }
    }
}

Next, copy the contents of the file https://github.com/elastic/logstash/blob/v1.2.2/patterns/grok-patterns to patterns/grok-patterns.txt

Step 10: Validate the Logstash configuration file using the command below:

$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf --configtest --verbose --debug

Step 11: Push data to ElasticSearch using Logstash as follows:

$ cd /usr/local/opt/
$ logstash -f /Users/ArpitAggarwal/logstash/logstash.conf