What Is Microservices Architecture?

Microservices architecture is gaining popularity and is being employed in many large software projects, mainly because of the benefits it brings and the problems it solves. This article aims to provide an overview of what microservices architecture is, along with its benefits and drawbacks.

I will also be outlining the evolution of software programming so that we can understand how software architecture has improved over time and why microservices architecture has become a common methodology in enterprise-level software development. Finally, I will present technologies that can be used to build a microservices architecture.

Please read the FinTechExplained disclaimer.

What Are Microservices?

Let’s start by understanding what microservices are.

As the name implies, microservices are essentially independent software services that provide a specific business functionality in a software application. These services can be maintained, monitored and deployed independently.

Microservices are built on top of service-oriented architecture.

Service-oriented architecture (SOA) enables applications to communicate with each other, whether they run on a single machine or are deployed on multiple machines across a network.

Each microservice is loosely coupled with other services. These services are self-contained and serve a single piece of functionality (or a group of closely related functionalities).

Microservices architecture is naturally suited to large and complex organisations, where a number of development teams can work independently to deliver business functionality and where applications are required to serve an entire business domain.
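
To make this concrete, below is a minimal sketch of what a single, self-contained microservice could look like. It assumes Python and Flask, and the "payments" capability, route names and in-memory store are purely illustrative, not a prescription for how such a service must be built.

```python
# Illustrative, minimal "payments" microservice (hypothetical example).
# It owns a single business capability and exposes it over HTTP,
# so it can be built, deployed and scaled independently of other services.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In a real service this state would live in the service's own data store.
PAYMENTS = {}


@app.route("/payments", methods=["POST"])
def create_payment():
    payment = request.get_json()
    PAYMENTS[payment["id"]] = payment
    return jsonify(payment), 201


@app.route("/payments/<payment_id>", methods=["GET"])
def get_payment(payment_id):
    payment = PAYMENTS.get(payment_id)
    if payment is None:
        return jsonify({"error": "payment not found"}), 404
    return jsonify(payment)


if __name__ == "__main__":
    # Each microservice runs as its own process, here on its own port.
    app.run(port=5001)
```

The important point is that the service owns one business capability end to end and runs as its own process, so it can be versioned, deployed and scaled on its own.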

Before I explain what problems microservices are intended to solve, I am going to walk through the history of software evolution.

Understanding History: Software Programming Evolution

Software applications always need to be maintained and improved. There will always be a need to implement new requirements and to enhance existing features. Additionally, software applications occasionally need to perform faster. Sometimes existing features need to be deprecated.

As a consequence, improving and maintaining a software application is a crucial element of the software life-cycle.

It All Boils Down To One Concept: We are always required to maintain and improve software. Therefore, we need to make it easier to enhance our software applications.

Computer Evolution

Let’s understand how computers have evolved over time. This image summarises the evolution of computer systems:

When computers were first introduced in the 1940s, software was embedded (punched) into media such as punch cards and punched tape that fed large and expensive hardware.

The instructions were written in binary machine language, so whenever a change was required, a binary machine language expert was needed to provide the new instructions, which then had to be punched onto the hardware. This turned out to be an extremely expensive process.

Then second generation computers were introduced in the 1950s as an improvement on first generation computers. These computers used smaller hardware and were programmed in assembly language. The philosophy revolved around the fact that it is easier to learn symbolic assembly language than binary code. As a result, assemblers were introduced, which would translate symbolic (mnemonic) assembly language into machine code.

Again, every time a change was required, the instructions had to be punched onto the hardware. As a result, there was a growing need to decouple software from hardware.

And then came 3rd generation computers in the 1960s. These computers used compilers and interpreters to translate human-friendly languages into machine code. They were smaller and allowed interactive software to be installed on them. 3rd generation computers could run multiple applications at a time, and a range of programming languages were introduced that could be installed on the computer. The hardware was still quite expensive, but the flexibility of separating software from hardware presented numerous opportunities for improving functionality.

Finally, 4th generation computers were introduced in the 1970s. These computers have since shrunk to hand-held devices that can easily fit in our palm. Again, new applications could be installed on the devices without buying new hardware. Ease of installation and maintainability helped a large number of software companies develop new technological ideas.

There is a common theme.

Computers evolved because there is always a need to maintain and improve the applications that are installed on the computer.

Software Evolution

Software evolution followed a similar cycle. This image summarises software evolution:

  • 1st Generation (late 1970s): Object-oriented principles such as inheritance, encapsulation and polymorphism were introduced to enable code reusability and maintainability.
  • 2nd Generation (1990s): Applications were designed and implemented using layered architecture, which was introduced to reduce tight coupling between the different components of a software application. As a result, it became easier to test software and the time to deliver it was reduced. Applications were still monolithic (one single unit encapsulating all functionality).
  • 3rd Generation (2000s): Introduction of service-oriented distributed applications. This design methodology introduced remoting and enabled applications to be scaled and deployed onto multiple machines.

Before microservices were introduced, most enterprise-level applications were monolithic. Let’s understand the problems this introduced.

How Were Software Applications Architected In The Recent Past? What Problems Did They Introduce?

In the recent past, software applications were implemented in an N-layered architecture. For instance, this was the architecture of a typical enterprise-level software application:

Each layer is implemented in software code comprising multiple classes and interfaces.

  1. Data layer: Its sole responsibility is to source data from databases, files and other data sources.
  2. Business layer: The business layer’s responsibility is to retrieve data from the data layer and perform calculations. It does not know whether the data resides in files or a database. The business layer depends on the data layer.
  3. Service layer: The service layer sits on top of the business layer. It provides a facade/wrapper over the business layer and adds cross-cutting functionality such as security, logging and delegation of calls. The service layer depends on the business layer.
  4. Hosting layer: Services were hosted via the hosting layer, which used service-oriented technologies such as WCF or REST APIs to expose the services over a number of protocols such as HTTP, HTTPS, TCP and named pipes. The entire application executed as a single software process, for example a Windows service or an IIS web application. Applications were exposed via a URL, e.g. https://fintechexplained/myapplication
  5. User interface layer: The user interface layer contained code that referenced the hosting layer and allowed users to perform interactive operations on the application.

This design enabled developers to concentrate on specific functionality, test each feature, decouple the application using inversion of control by injecting dependencies, and host the entire application on a number of different machines.
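
As a rough illustration only, the sketch below expresses this layering in Python: each layer depends only on the layer beneath it, and dependencies are injected through constructors rather than created internally. The class and method names (DataLayer, BusinessLayer, ServiceLayer, total_notional) are hypothetical.

```python
# Hypothetical sketch of an N-layered monolith with constructor-injected dependencies.

class DataLayer:
    """Sole responsibility: source data from databases, files and other data sources."""

    def get_trades(self):
        # Hard-coded here; in reality this would query a database or read files.
        return [{"id": 1, "notional": 100.0}, {"id": 2, "notional": 250.0}]


class BusinessLayer:
    """Performs calculations; it does not know where the data actually lives."""

    def __init__(self, data_layer: DataLayer):
        self._data_layer = data_layer  # dependency injected, not created here

    def total_notional(self) -> float:
        return sum(trade["notional"] for trade in self._data_layer.get_trades())


class ServiceLayer:
    """Facade over the business layer: security, logging, delegation of calls."""

    def __init__(self, business_layer: BusinessLayer):
        self._business_layer = business_layer

    def handle_total_notional_request(self) -> dict:
        print("log: total notional requested")  # cross-cutting concern
        return {"totalNotional": self._business_layer.total_notional()}


# Composition root: the hosting layer wires the layers together and exposes
# the service layer (for example over HTTP) from one single process.
service = ServiceLayer(BusinessLayer(DataLayer()))
print(service.handle_total_notional_request())
```

Note that everything above still ships and runs as one process, which is exactly what the drawbacks below stem from.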

The layered monolithic design worked but had its own drawbacks. Here is a list of common issues we faced:

  1. It was not straightforward to scale and maintain the application. This design increased the time it took to bring new functionality to users, as deployment cycles were long.
  2. As the entire application was hosted as a single process, every time an update was performed the whole application had to be stopped and a new version deployed.
  3. To balance the load, the entire software application had to be deployed to multiple machines; it was not possible to deploy only specific functionalities of the solution on multiple servers.
  4. The design of the application was complex, as all of the features were implemented in a single monolithic application.
  5. As the number of applications grew in an organisation, deploying a monolithic application meant informing and coordinating with all of the affected teams about the new features. This increased the time to test and deploy the application.
  6. The design introduced a single point of failure, as a single non-recoverable fault could stop the process in which the application was hosted.
  7. It forced the entire application to be implemented in a single technology stack.
  8. As the time to deliver was longer, it cost more money over time to develop and maintain the application.
  9. As all of the code resided inside a single application, the code base soon became harder to maintain.

How Are Microservices Architected?

It All Boils Down To One Concept: We will always be required to maintain and update software. We need to make it easier to enhance our applications and shorten the time it takes to release new versions of our applications.

Microservices were introduced to solve the problems mentioned above. Microservices architecture is an improvement on the aforementioned architecture. Each business functionality is exposed as a service. Each service can be hosted and deployed independently of each other.

Microservices Architecture

Think of each service as a mini-application

  • All services can communicate with each other even when they live on different machines. This allows new functionality to be implemented as new services.
  • Microservices encourage organisations to adopt automated deployment and continuous delivery.
  • Applications end up becoming more resilient because each feature can be independently tested and deployed.
  • As each service is hosted in a separate process, if a service becomes a bottleneck and is resource hungry, it can be moved to another machine without impacting other services.
  • When more users start to use a feature of an application, the corresponding service can be scaled up by deploying it to a more powerful machine or by introducing caching, without impacting any of the other services.
  • It also increases the reliability of the application, as each service can be built, tested, deployed and used independently.
  • Application code can be maintained and processes can be monitored separately.
  • Dedicated developers can implement services independently and release those services without impacting other services.
  • It also eliminates the single point of failure, as one service can go down without taking out all of the features that the software application offers.
  • As a result, this design reduces the time to deliver and ends up reducing cost over time.
  • Code reusability is further encouraged because a feature is hosted as a service, allowing multiple services to consume the same feature instead of implementing the same code twice.
  • A mix of the latest technology stacks can be utilised to meet the requirements. For instance, R or Python data analysis packages can be deployed and hosted separately while C#.NET is used to implement services. Additionally, NodeJS can be used on the server side, and AngularJS and ReactJS can be employed to implement the user interface. Each business feature can be implemented using a different technology stack by a different team, independently of other features.

In the image below, I demonstrate how the services can be deployed on different application servers.

API Gateway can be introduced as the entry point for clients. It aggregates and returns the responses from several services.
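
The sketch below shows, under some assumptions, what such a gateway could look like in Python using Flask and the requests library. The downstream service names and URLs (accounts-service, payments-service) and the /customers/&lt;id&gt;/overview route are made up for illustration.

```python
# Hypothetical API gateway: a single entry point that fans out to several
# downstream microservices and aggregates their responses for the client.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed locations of the downstream services (illustrative only).
ACCOUNTS_SERVICE = "http://accounts-service:5001"
PAYMENTS_SERVICE = "http://payments-service:5002"


@app.route("/customers/<customer_id>/overview")
def customer_overview(customer_id):
    # Each call goes across the wire to an independently deployed service.
    accounts = requests.get(f"{ACCOUNTS_SERVICE}/accounts/{customer_id}", timeout=2).json()
    payments = requests.get(f"{PAYMENTS_SERVICE}/payments/{customer_id}", timeout=2).json()
    # Merge the partial responses into a single payload for the client.
    return jsonify({"customerId": customer_id, "accounts": accounts, "payments": payments})


if __name__ == "__main__":
    app.run(port=8080)
```

A client makes one call to the gateway, which fans out to the downstream services and merges their responses, so the client never needs to know where each individual service lives.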

What Are The Drawbacks Of Microservices?

Microservices architecture has its own drawbacks, such as:

  1. A separate build, deployment and release workflow is required for each microservice. It is therefore important to ensure the build-to-deployment workflow is automated, otherwise it ends up increasing the workload for IT operations teams.
  2. As each service is hosted as a separate process, monitoring and maintainability tooling is required for each process. It might be a one-off set up, but these logs and tools then need to be watched. The ELK Stack or Prometheus are good tools for this (a minimal sketch follows this list).
  3. Microservices increase the workload on the configuration system. Configuration of external services is usually shared across services and can become a time-consuming task.
  4. It is harder to cancel a running task when services are deployed as separate applications.
  5. This design can impact performance due to network overhead, as each call to a service is made across the wire. Caching and concurrency are usually introduced to improve performance.
  6. Microservices introduce problems that are associated with distributed computing, such as security, transactions and concurrency.
  7. Application-level stability tests are required to ensure new features have not broken the functionality of existing services.
  8. It can also increase documentation work, as each service has its own versions, release plan and release cycle.
  9. When the number of applications and code bases increases, it takes effort to maintain and manage them properly.
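
Regarding point 2 above, here is a minimal sketch of how a Python service could expose metrics for Prometheus to scrape, using the prometheus_client library; the metric names and the simulated workload are illustrative only.

```python
# Minimal monitoring sketch: expose a /metrics endpoint for Prometheus to scrape.
# Assumes the prometheus_client library; the metric names are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS_TOTAL = Counter("payments_requests_total", "Total requests handled by the payments service")
REQUEST_LATENCY = Histogram("payments_request_latency_seconds", "Request latency in seconds")


@REQUEST_LATENCY.time()
def handle_request():
    """Stand-in for real request-handling work."""
    REQUESTS_TOTAL.inc()
    time.sleep(random.uniform(0.01, 0.05))


if __name__ == "__main__":
    start_http_server(8000)  # metrics become available at http://localhost:8000/metrics
    while True:
        handle_request()
```

Each service would expose its own metrics endpoint in this way, and the monitoring system scrapes them all, which is the one-off set up mentioned above.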

Microservices Technologies — Software To Implement Microservices

Microservices can be implemented and hosted by using a wide range of technologies such as:

  • Microsoft Azure Container Service and Azure Service Fabric
  • CoreOS
  • Docker
  • Swarm
  • Kubernetes
  • Mesosphere
  • OpenShift
  • Apprenda

Summary

This article outlined what microservices are, along with an overview of their benefits and drawbacks. It then presented how applications were designed in the recent past and the issues they introduced. Finally, microservices architecture was presented along with the technologies that can be used to implement it.

Microservices architecture has its own drawbacks, as communication between services happens via API calls over the network rather than in-memory, in-process calls; on the other hand, it provides the flexibility to develop, test, maintain and deploy features independently.

Hope it helps and let me know if there is any feedback.

