Photo by Tim Evans on Unsplash
Moving from Synchronous to Asynchronous Services
One of the foundational pillars of Reactive microservice architectures is the Message-Driven approach. The Reactive Manifesto describes it as follows:
Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. This boundary also provides the means to delegate failures as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.
-- Message-Driven: The Reactive Manifesto
The Message-Driven model provides a robust alternative to REST's service-to-service communication approach, making asynchronous messaging the standard communication model. Synchronous messaging can still be used when a use case requires it, but asynchronous communication is prioritized over synchronous.
Why Not Use REST Microservices All the Time?
REST is synchronous by nature, as it sits atop HTTP's request and response model. Because of REST's association with HTTP, it is an excellent choice for exposing an application's API in a language-agnostic manner. REST APIs allow a clear demarcation between the application and its clients and provide a common client integration point across technologies. Additionally, REST's maturity and its widespread deployment over the internet have made it pervasive among developers. However, it is important to avoid the cognitive bias of the Golden Hammer:
"I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail."
- Abraham Maslow
While we can build applications using nothing but REST services, it is important to understand the trade-offs we make when using REST.
Tightly Coupled
REST communication is point-to-point by nature. When invoking a REST endpoint, the caller must know the address of the service endpoint. REST-based microservice applications often employ a service discovery mechanism to avoid brittle, static addressing. While service discovery solves the static-addressing problem, it comes at the cost of additional application complexity. Each service must register with the discovery service, and each service client must query the discovery service to find a suitable endpoint before it can invoke a service. Every round trip to the discovery service exacts a performance penalty, increasing the total response time of each call. This penalty grows significantly when services are chained together or when a service client must broadcast a message to multiple endpoints.
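To make the overhead concrete, here is a minimal sketch of what a discovery-backed REST call can look like. Everything in it is hypothetical: the discovery-service registry, the pricing-service endpoint, and the JSON shape of the registry's response are illustrative stand-ins, not any particular product's API.

```python
import requests  # common HTTP client; any blocking client behaves similarly

DISCOVERY_URL = "http://discovery-service/api/instances"  # hypothetical registry endpoint

def get_price(sku: str) -> dict:
    # Round trip #1: ask the discovery service for a live endpoint.
    instances = requests.get(f"{DISCOVERY_URL}/pricing-service", timeout=2).json()
    base_url = instances[0]["url"]  # naive selection; real clients also load-balance

    # Round trip #2: the actual point-to-point call to the chosen instance.
    return requests.get(f"{base_url}/prices/{sku}", timeout=2).json()
```

Every caller repeats both round trips, and every caller has to know that the target is named pricing-service.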
Blocking
REST is layered on top of HTTP's synchronous request/response communication model. Even when a service doesn't return a payload, it must still return a response indicating that it handled the request. Because of this synchronous nature, the caller must block while waiting for the response, which degrades its performance. The calling thread is tied up waiting for the response and cannot perform any other work. By tying up the calling service, we decrease its availability and negatively affect its scalability.
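A small sketch of the problem, assuming a hypothetical inventory-service endpoint: the calling thread is parked for the entire round trip and can do nothing else in the meantime.

```python
import time
import requests

def reserve_stock(order: dict) -> bool:
    start = time.monotonic()
    # The calling thread blocks here for the full round trip (or until the timeout).
    resp = requests.post("http://inventory-service/api/reservations",
                         json=order, timeout=5)
    print(f"thread was blocked for {time.monotonic() - start:.3f}s")
    return resp.status_code == 200
```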
Error Handling
Every REST call expects a response, even when the response doesn't contain a payload. Every successful REST/HTTP request returns a response whose status line (e.g., HTTP/1.1 200 OK) signals that the request has been processed. However, what happens when the service is unreachable or the request times out? In addition to trapping the exception, the developer must decide whether to implement some form of retry logic to compensate for timeouts. Adding retry logic increases the complexity of the client and impacts performance and maintainability.
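Here is a hedged sketch of the kind of retry logic this pushes onto the client. The back-off policy, attempt count, and the choice to retry only on timeouts and connection errors are illustrative, not a prescription.

```python
import time
import requests

def get_with_retry(url: str, attempts: int = 3, backoff: float = 0.5, timeout: float = 2.0):
    """Call a REST endpoint, retrying when it is unreachable or times out."""
    last_error: Exception = RuntimeError("no attempts made")
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, timeout=timeout)
            if resp.status_code == 200:          # the "HTTP/1.1 200 OK" happy path
                return resp.json()
            last_error = RuntimeError(f"unexpected status {resp.status_code}")
        except (requests.Timeout, requests.ConnectionError) as exc:
            last_error = exc                     # service unreachable or timed out
        time.sleep(backoff * attempt)            # simple linear back-off
    raise last_error
```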
Inter-Service Communication
Behind the application's public-facing external API will likely be a collection of services that must communicate with each other to perform a particular function. We can calculate the response time of any application function as the sum of the execution times plus the inter-service communication times of the participating services. While REST provides an excellent mechanism for exposing the external API, it is not always the best option for inter-service communication. As the number of services needed to handle an external request grows, the synchronous nature of REST and the complexity of the internal REST service clients can significantly increase the response time of any call.
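A back-of-the-envelope illustration with made-up numbers: three services handling one external request in sequence, each contributing its execution time plus a network hop.

```python
exec_ms = [20, 35, 15]   # hypothetical per-service execution times (ms)
hop_ms  = [10, 10, 10]   # hypothetical per-hop network + (de)serialization cost (ms)

total_ms = sum(exec_ms) + sum(hop_ms)
print(f"caller waits {total_ms} ms for the whole synchronous chain")  # 100 ms
```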
Message-Driven Microservices
An alternate approach to REST communication is the Message-Driven approach. In this approach, microservices no longer communicate directly with their intended endpoints. Instead, services communicate through a message broker that is responsible for routing each message to one or more destinations. With a message-driven approach, communication happens asynchronously through the message broker, without waiting for a response. If the sender expects a response from the destination, it will receive that response at some future time, when it becomes available. This approach has several benefits:
Loosely Coupled
The publish/subscribe model of messaging allows consumers to be decoupled from producers. By decoupling producer and consumer, explicit service discovery becomes unnecessary. The message broker is responsible for locating a suitable destination and routing the message to an appropriate consumer. Additionally, topic-based messaging allows a single message to be sent to multiple destinations. By leveraging topic-based messaging, we can reduce both the communication overhead and the retry overhead that REST services impose when a domain event occurs.
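The sketch below shows the idea with a deliberately tiny in-memory broker (not a real product): the producer only knows a topic name, and one published domain event fans out to every subscriber.

```python
from collections import defaultdict

class InMemoryBroker:
    """Minimal topic-based broker sketch; producers never see consumers."""
    def __init__(self):
        self._subscribers = defaultdict(list)      # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:   # one publish, many destinations
            handler(message)

broker = InMemoryBroker()
broker.subscribe("order-placed", lambda m: print("billing saw", m))
broker.subscribe("order-placed", lambda m: print("shipping saw", m))
broker.subscribe("order-placed", lambda m: print("analytics saw", m))

# The producer knows the topic, not the consumers or their addresses.
broker.publish("order-placed", {"order_id": 42, "total": 99.95})
```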
Non-Blocking
Building around an asynchronous messaging model liberates caller resources that would typically have sat idle while awaiting a synchronous response. By eliminating blocking, we increase the availability of the caller to service more work and increase overall scalability.
Error Handling
Because the message broker queues messages for processing, the destination service does not have to be available when the sending service enqueues a message. This decoupling is advantageous for elastic microservices, since service startup is not instantaneous. The message broker can wait until the service is available before passing messages to it. By providing message queueing, we reduce the unreachable-connection and timeout errors common in REST communication.
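A minimal sketch of that queueing behaviour, using a plain in-process queue as a stand-in for a durable broker queue: the producer enqueues immediately, and a consumer that starts two seconds later still receives the message.

```python
import queue
import threading

order_events: "queue.Queue[dict]" = queue.Queue()   # stand-in for a broker-managed queue

# Producer side: enqueue even though no consumer is running yet.
order_events.put({"order_id": 42, "event": "order-placed"})
print("event enqueued; destination service not started yet")

def consumer():
    # Simulates a service instance that has just finished starting up.
    event = order_events.get()
    print("consumer started late but still processed", event)

threading.Timer(2.0, consumer).start()   # the "service" becomes available later
```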
Easier to Scale
While RESTful microservices can be scaled through replication, they require additional infrastructure for service discovery to identify available service endpoints. In a message-driven system, the message broker centralizes the responsibility of message routing. Callers create messages of a particular type and dispatch them to the message broker. Consuming services register with the message broker, indicating their ability to process specific message types. By deploying replica services that support the same message type, we can elastically scale the service and let the message broker dynamically manage the service routing.
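The competing-consumers idea, sketched with a shared in-process queue standing in for the broker: adding capacity is just starting more replicas that read from the same queue, with no routing or discovery changes.

```python
import queue
import threading

work_queue: "queue.Queue" = queue.Queue()   # broker routes each message to exactly one replica

def replica(name: str):
    while True:
        msg = work_queue.get()
        if msg is None:                     # shutdown sentinel
            break
        print(f"{name} handled {msg}")
        work_queue.task_done()

# Scale out by starting more replicas of the same consumer.
replicas = [threading.Thread(target=replica, args=(f"replica-{i}",)) for i in range(3)]
for t in replicas:
    t.start()

for i in range(9):
    work_queue.put({"invoice_id": i})

work_queue.join()                           # wait for all work to be processed
for _ in replicas:
    work_queue.put(None)                    # stop the replicas
```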
Message-Driven Service Design
Moving to a message-driven communication model requires a shift in the way services are designed. When a method or function contains a call to an external message-driven service, we can no longer implement the code linearly if we expect a response from that service. Instead, we must decompose the process into two phases: a pre-request phase and an optional post-response phase.

In the pre-request phase, the service performs all of the operations needed to prepare the request message, including any preprocessing, message creation, and the call to send the message to the message broker. Once the message has been sent, the method or function exits, which frees the service to continue servicing incoming load. For any service call that does not need a response (e.g., an event notification or setting a value), there is no post-response phase.
If a response is expected, it will arrive and be handled in the post-response phase. In this phase, we use the contents of the response payload to complete the unit of work started in the pre-request phase. The primary design consideration revolves around execution scope: when the pre-request phase completes, its execution scope is lost. Any post-response message processing must therefore provide a mechanism to reconstitute as much of that scope as is needed to service the response message.
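A sketch of that two-phase shape, reusing the hypothetical InMemoryBroker from above (any client with a publish call would do). The correlation id and the pending dictionary are illustrative assumptions; a real service would persist that scope durably rather than in process memory.

```python
import uuid

pending: dict = {}   # correlation_id -> the scope the post-response phase will need

def request_quote(broker, customer_id: int, items: list) -> str:
    """Pre-request phase: prepare the message, send it, and return immediately."""
    correlation_id = str(uuid.uuid4())
    pending[correlation_id] = {"customer_id": customer_id, "items": items}
    broker.publish("quote-requests", {
        "correlation_id": correlation_id,
        "customer_id": customer_id,
        "items": items,
    })
    return correlation_id          # no blocking wait; the thread is free for other work

def on_quote_response(message: dict) -> None:
    """Post-response phase: reconstitute scope from the correlation id and finish the work."""
    scope = pending.pop(message["correlation_id"], None)
    if scope is None:
        return                     # unknown or already-handled response
    print(f"quote for customer {scope['customer_id']}: {message['total']}")
```

The correlation id is the thread of continuity between the two phases: it travels with the request and comes back with the response, letting the handler look up whatever state it needs to finish the unit of work.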