Overview:

We thought of sharing how we broke a monolithic application into multiple microservices. A number of architectural patterns exist that can be leveraged to build a solid microservices implementation strategy; we chose ‘functional decomposition’ to scale the system.

Monolithic Way

In one of our customer engagements, we had to deal with a Java-based monolithic common-services (CS) platform. CS ships as a single deployable (jar) archive which is used as a platform to build application logic on top of. It has modules like ‘authentication’, ‘authorization’, ‘notification’, etc. A number of applications are built on top of it and deployed individually, as depicted in the diagram.

[Figure: Common Services – Monolithic]

Disadvantages of the Monolithic Way

There are a number of problems with this monolithic architecture.

  1. First, as features and services were added to the application, the code base grew substantially more complex.
  2. It is difficult to scale individual portions of the application. If one service is memory intensive and another CPU intensive, each server must be provisioned with enough memory and CPU to handle the baseline load of every service. This can get expensive if each server needs a large amount of CPU and RAM, and the cost is exacerbated when load balancing is used to scale the application horizontally.
  3. Even modern IDEs have problems loading the entire application code base, and compile and build times are long.

Microservices – The Rescue

The microservices architecture is designed to address these issues. The services defined in the monolithic application are decomposed into individual services and deployed separately from one another on separate hosts. Each microservice is aligned with a specific business function and defines only the operations necessary to that business function.

[Figure: Microservices & their Communication]

Each service is deployed on multiple hosts for scalability and availability. Though the services scale well and provide high availability, there are a few key points to be addressed.

  1. Deployment – as services are spread across multiple hosts, it can be difficult to keep track of which hosts are running which services and to manage their deployments.
  2. Service Discovery – services need to find each other. For example, the authentication service might need to find the notification service. A service discovery system should provide a mechanism for:
    • Service Registration
    • Service Discovery
    • Handling Fail over of service instances
    • Load balancing across multiple instances of a Service
    • Handling issues arising from an unreliable network

Deployment

In the monolithic way, each application is a separate bundle and its deployment is very simple: wrap the complete application along with all the common-services components into a single deployable archive.

But that is not the case with the microservices way.

Using Docker, we create a Dockerfile describing all the language, framework, and library dependencies for a specific service/application. A container created from the Docker image is easily placed on a host.

The portability of containers also makes deployment of microservices a breeze. To push out a new version of a service running on a given host, the running container can simply be stopped and a new container started from a Docker image built with the latest version of the service code. All the other containers running on the host are unaffected by this change.
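
As a rough illustration (not our exact scripts), that stop-and-replace step can be driven with the Docker SDK for Python; the image tag, container name, and port below are made up:

    import docker
    from docker.errors import NotFound

    client = docker.from_env()

    # Build the new image from the service's Dockerfile.
    client.images.build(path="./auth-service", tag="auth-service:latest")

    # Stop and remove the currently running container, if any.
    try:
        old = client.containers.get("auth-service")
        old.stop(timeout=30)
        old.remove()
    except NotFound:
        pass  # first deployment on this host

    # Start a container from the fresh image; all other containers on
    # the host are unaffected.
    client.containers.run(
        "auth-service:latest",
        name="auth-service",
        detach=True,
        ports={"8080/tcp": 8080},  # assumed service port
        restart_policy={"Name": "always"},
    )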

Thanks to Linux and Docker containers.

Service Discovery:

Service discovery can be made arbitrarily complex, but it can also be kept simple, with near-zero maintenance cost, using AWS building blocks. We achieved service discovery with Route53 Private Hosted Zones (PHZs). A PHZ allows us to connect a Route53 hosted zone to a VPC, which in turn means that DNS records in that zone are visible only to the attached VPCs. Let us see the details.

  • The first thing we did was create a new Private Hosted Zone and associate it with the VPC. In our case, we called it prod.imaginea.local, indicating that it is the local DNS zone for our production environment (other environments, e.g. stage or dev, reside in other VPCs).
  • Next, we added the actual resource records to the hosted zone, e.g. prod-auth01.imaginea.local, which resolves to one of the hosts running the ‘authentication’ service in the production environment, as shown. We created as many such resource records as the application/service has hosts, each pointing to an actual host (a sketch of both steps follows the screenshot).
[Figure: Record Set]
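
The console screenshot aside, both steps above can be sketched with boto3 (not our actual tooling; the region, VPC id, and host IP are placeholders):

    import uuid

    import boto3

    r53 = boto3.client("route53")

    # Create the private hosted zone and attach it to the production VPC.
    zone = r53.create_hosted_zone(
        Name="prod.imaginea.local",
        CallerReference=str(uuid.uuid4()),  # any unique string
        VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234"},
        HostedZoneConfig={"Comment": "service discovery", "PrivateZone": True},
    )
    zone_id = zone["HostedZone"]["Id"]

    # One A record per host running the 'authentication' service;
    # the IP is a made-up private address.
    r53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "prod-auth01.imaginea.local",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "10.0.1.15"}],
            },
        }]},
    )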

  • Then we added alias resource records to the hosted zone, e.g. auth.prod.imaginea.local, which is the stable endpoint for the ‘authentication’ service in the production environment, as shown. We created as many identically named records (all ‘auth.prod.imaginea.local’) as there are hosts, each pointing to one of the per-host resource records created in the previous step (a sketch follows the screenshot).
[Figure: Record Set – alias]
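
Route53 requires a SetIdentifier and a routing weight whenever several record sets share one name, so one way to script this step is with weighted CNAME records; a boto3 sketch, with a placeholder zone id and two assumed host records:

    import boto3

    r53 = boto3.client("route53")
    zone_id = "Z0123456789EXAMPLE"  # id of prod.imaginea.local (placeholder)

    # Identically named weighted CNAME records, one per host record, so
    # lookups of auth.prod.imaginea.local are spread across the hosts.
    hosts = ["prod-auth01.imaginea.local", "prod-auth02.imaginea.local"]
    for i, host in enumerate(hosts):
        r53.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={"Changes": [{
                "Action": "CREATE",
                "ResourceRecordSet": {
                    "Name": "auth.prod.imaginea.local",
                    "Type": "CNAME",
                    "SetIdentifier": "auth-%d" % i,  # required for repeated names
                    "Weight": 1,                     # equal weights, even spread
                    "TTL": 60,
                    "ResourceRecords": [{"Value": host}],
                },
            }]},
        )

With equal weights, Route53 answers lookups of auth.prod.imaginea.local with the per-host records in roughly even proportion, which doubles as load balancing across the instances of a service (one of the requirements listed earlier).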

  • With this alone, we would need to specify the entire PHZ domain (prod.imaginea.local) every time we want to look up a service, and the domain changes when the environment changes. For example, when the ‘authentication’ service wants to look up the ‘notification’ service in the production environment, it has to call ‘notify.prod.imaginea.local’, but in the stage environment it has to call ‘notify.stage.imaginea.local’. We did not want this. We wanted to just look up the ‘notification’ service, so that our ‘authentication’ service does not need to know in which zone or environment it is running. This is where DHCP option sets come into play.
    The Dynamic Host Configuration Protocol (DHCP) provides a standard for passing configuration information to hosts on a TCP/IP network. The options field of a DHCP message contains the configuration parameters, among them the domain name and the domain name server. We created a new option set which includes prod.imaginea.local, as shown (a sketch follows the screenshot).
[Figure: DHCP Options]
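
A boto3 sketch of creating such an option set and attaching it to the VPC (the VPC id is again a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # Option set whose domain-name becomes the hosts' DNS search domain.
    opts = ec2.create_dhcp_options(
        DhcpConfigurations=[
            {"Key": "domain-name", "Values": ["prod.imaginea.local"]},
            {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
        ]
    )

    # Hosts pick the new options up when their DHCP lease renews.
    ec2.associate_dhcp_options(
        DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
        VpcId="vpc-0abc1234",  # placeholder VPC id
    )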

  • Once we associated our VPC with this DHCP option set, we could omit the domain part, as it is now part of the search domain (propagated via DHCP).
    Now we can simply hardcode the service endpoint ‘notify’ in the ‘authentication’ service, as shown in the sketch below.
  • No need for a complex service discovery system (Consul, etc.) and no need for glue software (e.g. confd). The contract between the service consumer and the announcer is the service name.
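
The original snippet is not reproduced here; a minimal Python sketch of such a lookup, relying on the resolver's DHCP-propagated search domain, would be:

    import socket

    # "notify" expands to notify.prod.imaginea.local (or the stage
    # equivalent) via the search domain, so this code is
    # environment-agnostic.
    NOTIFY_ENDPOINT = "notify"

    print(socket.gethostbyname(NOTIFY_ENDPOINT))  # one of the notify hosts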

Known Issue:

It takes approximately 40 seconds for Route53 to propagate changes. More sophisticated approaches like Consul, etcd, or SkyDNS should help where this matters, but we are OK with this propagation delay.