Microservices architecture is on the rise, already forming a key part of several current transformation projects, breaking down traditionally monolithic applications into self-contained, independently deployed services that are identified using domain-driven design.
A recent study found that more than 60% of respondents have been using microservices for a year or more. Finance and banking lead the way, but many other industries are following suit, including retail/e-commerce and telecoms.
Microservices hold the potential for enhanced scalability, flexibility and durability, and they can significantly reduce the time taken to deploy and maintain components. But this isn’t the case for all microservice architectures: major impediments come in the form of ecosystem integration challenges and weak distribution processes.
Busting the Myths of Distributed Computing
The implementation of microservices is more complex than one may first think, exacerbated by the fact that many DevOps teams fall into the trap of making false assumptions about distributed computing.
The fallacies of distributed computing were originally catalogued in 1994 by L. Peter Deutsch and others at Sun Microsystems, and they still hold true today. Several hold special importance for microservices implementation: that the network is reliable, homogeneous and secure; that latency is zero; and that transport cost is zero.
The smaller you make each microservice, the larger your service count, and the more the fallacies of distributed computing impact stability, user experience and system performance. This makes it mission-critical to establish an architecture and implementation that minimizes latency while handling the realities of network and service outages.
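Handling those realities in practice typically means timeouts, retries and backoff around every remote call. The sketch below is a minimal illustration of that idea, not a prescribed implementation; `call_with_retries` and its defaults are hypothetical names chosen for the example:

```python
import random
import time

def call_with_retries(operation, attempts=3, base_delay=0.1, timeout=2.0):
    """Call a flaky remote operation, retrying with exponential backoff.

    `operation` stands in for any callable wrapping a network request;
    it is an assumption of this sketch, not a specific library's API.
    """
    for attempt in range(attempts):
        try:
            return operation(timeout=timeout)
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure to the caller
            # Exponential backoff with jitter, so many retrying clients
            # don't hammer a recovering service in lockstep.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

The jitter is deliberate: without it, clients that failed at the same moment retry at the same moment, turning a brief outage into a repeated thundering herd.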
Uncovering the New, Updating the Old
Microservices require connectivity and data to perform their roles and provide business value. However, data acquisition and communication have been largely ignored, and tooling severely lags behind. We can see this with API management and gateway products that only support synchronous, request/reply exchange patterns. These products exacerbate the challenges of distributed computing because they cannot integrate with or acquire data from legacy systems.
At the same time, eventing/messaging tools have also been stuck in the antiquated, non-agile world, being incompatible with many of the guiding principles of microservices such as DevOps and self-service.
Solving the Logistical Maze of Integration
As services become smaller and their purpose more singular, the potential for reusability increases, but that’s contingent on the ability of services to collaborate. Acquiring data for greenfield systems is easy, but microservices almost always come as a side effect of digital transformation, modernization, or the need to build new capabilities at a more rapid pace.
Most existing systems live on-premises, while microservices live in private and public clouds, so moving data across the often unstable and unpredictable world of wide area networks (WANs) is tricky and time-consuming.
There are mismatches everywhere: updates to legacy systems are slow, but microservices need to be fast and agile. Legacy systems use old communication mediums, but microservices use modern open protocols and APIs. Legacy systems are nearly always on-premises and at best use virtualization, but microservices rely on clouds and IaaS abstraction.
The case becomes clear: organizations need an event-driven architecture to bridge all of these mismatches between legacy systems and microservices.
Creating the Perfect Microservices Harmony
The smaller the service, the less value it offers the end-user – value comes from orchestration. Historically, orchestration was handled by a central component like BPEL engines or ESBs, or, these days, API gateways.
Orchestration is an apt term – a composer creates a score containing sheets of music to be played by musicians with differing instruments. Each part and its musician are like a microservice. In a complex symphony with a hundred musicians playing a wide range of instruments – like any enterprise with complex applications – far more orchestration is required.
Microservices Are Well-Rehearsed
Consider an example: a microservice performs a series of steps within its code, and its input or output is a data event with domain significance. The key point is that, since the microservice merely produces the event, it has no knowledge of whether or when that event will be processed. Other services must register their interest in the event, or set of events, and react accordingly. All of this requires microservices orchestration.
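The register-and-react pattern described above can be sketched with a minimal in-process event bus. This is purely for illustration – a real deployment would use an eventing/messaging platform, and `EventBus`, the event names and payloads here are all hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Minimal illustrative event bus: producers emit domain events
    without knowing who (if anyone) will consume them; consumers
    register interest by event name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        """A service registers interest in an event type."""
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # The producer fires and forgets: zero, one, or many handlers
        # may react, and the producer never learns which.
        for handler in self._subscribers[event_name]:
            handler(payload)

# Example: an order service emits an event; a shipping service reacts.
bus = EventBus()
shipped = []
bus.subscribe("order_placed", lambda order: shipped.append(order["id"]))
bus.publish("order_placed", {"id": 42, "item": "widget"})
```

Note that the producer’s call to `publish` would succeed even with no subscribers at all – that decoupling is exactly what lets services be added or removed without changing the producer.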
Time to Shift the Focus
An event-driven approach maximizes agility, liberating data from being ‘at rest’ to being ‘in motion’ – consumable in real-time. Choosing the right eventing or messaging platform is one of the most critical steps to realizing the vast benefits of microservices.
Microservices per se are not enough. Gartner identified this need as far back as 2018, when it spotlighted ‘event-driven’ as one of its top strategic technology trends, citing: “Digital business is event-driven, so organizations need to invest in event-centric design practices and technologies to exploit digital business moments.”
The Full Potential of Event-driven Microservices
Merging event-driven architecture with microservices can deliver major benefits. Developers can build highly scalable, accessible, robust and versatile systems that digest and aggregate exceptionally large volumes of events and information in real-time.
Modern event platforms should not only be deployable in every cloud and as a platform-as-a-service, but should also support DevOps automation. In such an ever-changing world, developers need a seamless self-service experience – only made possible with event-driven microservices.