This article is the second in a three-part series on moving from Reactive Programming to Reactive Architecture, in which we take a look at using RSocket and RabbitMQ to handle Reactive Streams between different applications. In the first article of this series, we looked at what an architecture using reactive streams over the network could look like. The third article, where we do Reactive RPC calls using Reactor RabbitMQ, will be released soon.
In the first article of this series on moving from reactive programming to reactive architecture, we already looked at an example using RSocket within a sender and receiver application. Be sure to take a look at it if you want to see some code samples using RSocket combined with the Spring framework.
Reactive Streams over the network
Despite all the cool technology we have seen over the last couple of years with regard to programming with Reactive Streams, there was one big elephant in the room: the lack of true Reactive Streams over the network.
Of course, it was already possible to use Server-Sent Events to stream a result over HTTP, using for example Spring WebFlux. However, this resulted in one-way traffic from the server to the client. Apart from the TCP backpressure applied at the network level, no real end-to-end Reactive Backpressure can be guaranteed. In the scenario of a slow application consuming information from a much faster application streaming data out of a database, this could result in a lot of unnecessary work being done. Evidently it’s a waste to send 10,000 records when only the first 100 get processed by the client, or even worse: the client could run out of memory after being overwhelmed by the server’s response. The one-way traffic also prevents an easy form of dynamic communication between the two.
Alternatively, WebSockets could be used to provide two-way communication between Reactive applications using Reactive Streams. Unfortunately this would still be a pretty rudimentary solution, with no support for actual Reactive Backpressure.
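The demand-driven behaviour that end-to-end backpressure provides can be sketched with the JDK’s built-in `java.util.concurrent.Flow` API (not RSocket itself). In this minimal, illustrative sketch the class name `DemandDrivenPublisher` is made up for the example: the publisher holds 10,000 records, but only ever produces what the subscriber explicitly requested.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a publisher that only emits as many records as its
// subscriber demanded, instead of pushing all 10,000 records at once.
class DemandDrivenPublisher implements Flow.Publisher<Integer> {
    final AtomicInteger produced = new AtomicInteger();

    @Override
    public void subscribe(Flow.Subscriber<? super Integer> subscriber) {
        subscriber.onSubscribe(new Flow.Subscription() {
            int next = 0;

            @Override
            public void request(long n) {
                // Emit only the demanded number of records, never more.
                for (long i = 0; i < n && next < 10_000; i++) {
                    produced.incrementAndGet();
                    subscriber.onNext(next++);
                }
            }

            @Override
            public void cancel() { }
        });
    }
}
```

A subscriber that calls `subscription.request(100)` will cause exactly 100 records to be produced; the remaining 9,900 are never generated, which is precisely the waste that plain SSE over HTTP cannot avoid.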
RSocket to the rescue!
Several interaction models
One very interesting distinction between RSocket and many other protocols, is that its setup can sometimes be considered closer to peer-to-peer than a server-client communication. Messages can even flow in both directions depending on the stream setup. You have the choice between:
- request/response: A stream with a single result. For example to load a single resource like an image.
- request/stream: A finite stream of 0, 1, or many. This enables us to give a stream of results through time. For example to retrieve a range of database records over an API. There is support for reactive backpressure, which means that the peer offering the stream shouldn’t overwhelm its consumer.
- fire-and-forget (no response): For example to send IoT data like measurement information. In this case the receiver should not send a response.
- channel: Offering a bidirectional communication, this makes the sender and receiver true equals. Both sides will send messages and offer backpressure. This can be very interesting for games.
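The four interaction models above can be expressed as plain Java signatures using JDK types. This is a hypothetical sketch for illustration only; the names (`InteractionModels`, `EchoPeer`) do not come from the rsocket-java API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

// Hypothetical sketch of RSocket's four interaction models as JDK types.
interface InteractionModels {
    // request/response: one request, exactly one asynchronous result.
    CompletableFuture<String> requestResponse(String request);

    // request/stream: one request, a backpressured stream of 0..N results.
    Flow.Publisher<String> requestStream(String request);

    // fire-and-forget: one request, no response at all.
    void fireAndForget(String event);

    // channel: both sides exchange backpressured streams.
    Flow.Publisher<String> requestChannel(Flow.Publisher<String> requests);
}

// Tiny in-memory implementation, purely for illustration.
class EchoPeer implements InteractionModels {
    volatile String lastEvent;

    public CompletableFuture<String> requestResponse(String request) {
        return CompletableFuture.completedFuture("echo:" + request);
    }

    public Flow.Publisher<String> requestStream(String request) {
        // Emit a single element on demand, then complete.
        return sub -> sub.onSubscribe(new Flow.Subscription() {
            boolean done;
            public void request(long n) {
                if (!done && n > 0) {
                    done = true;
                    sub.onNext(request);
                    sub.onComplete();
                }
            }
            public void cancel() { done = true; }
        });
    }

    public void fireAndForget(String event) {
        lastEvent = event; // no response is ever sent back
    }

    public Flow.Publisher<String> requestChannel(Flow.Publisher<String> requests) {
        return requests; // identity channel: echo the incoming stream
    }
}
```

Note how the request/stream and channel signatures return a `Flow.Publisher`, which is where the backpressure mentioned above lives: the consumer pulls via `request(n)` rather than being pushed to.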
Support for different technologies – both programming languages and transport protocols
RSocket as a protocol tries to be agnostic of both the programming language that’s used, as well as the transport layer used underneath the RSocket protocol.
Because RSocket is a language-agnostic protocol it means applications built on different programming languages and libraries can still communicate with each other, and support Reactive Streams between them. Imagine your frontend supporting Reactive Streams with RxJS, continuing that stream over a Java Spring application where a mapping happens, ending up in a .NET application. Programming languages supported are:
- Java (and the Spring Framework)
- JavaScript
- Kotlin
- Go
- .NET
- C++
RSocket itself is a binary protocol, meaning it requires a byte-stream-capable transport protocol. The currently supported transport protocols are:
- TCP
- WebSocket
- Aeron (UDP)
Backpressure over the entire architecture
Perhaps one of the most interesting side effects of having Reactive Backpressure as part of the RSocket protocol is that it effectively propagates over the entire architecture. As long as all the applications apply Reactive Backpressure on the communication layer between each other, the speed of the entire flow will be dictated by its slowest part. This means that the applications between the start and the end of the flow will be able to work in a far smoother way and with a smaller memory footprint.
The end-to-end backpressure we already saw within a single application is hence extended over the entire flow between our applications.
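This propagation of demand through intermediate stages can be sketched with the JDK’s `Flow` API (again, not RSocket itself; the class names `Source` and `MapStage` are made up for this example). The middle stage forwards the consumer’s demand upstream unchanged, so the source only ever produces what the slowest link asked for.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative source: produces records strictly on demand and counts
// how many it actually had to generate.
class Source implements Flow.Publisher<Integer> {
    final AtomicInteger produced = new AtomicInteger();

    public void subscribe(Flow.Subscriber<? super Integer> sub) {
        sub.onSubscribe(new Flow.Subscription() {
            int next = 0;
            public void request(long n) {
                for (long i = 0; i < n; i++) {
                    produced.incrementAndGet();
                    sub.onNext(next++);
                }
            }
            public void cancel() { }
        });
    }
}

// Middle stage: maps each element and forwards demand to its upstream,
// so backpressure propagates end to end.
class MapStage implements Flow.Processor<Integer, String> {
    Flow.Subscription upstream;
    Flow.Subscriber<? super String> downstream;

    public void subscribe(Flow.Subscriber<? super String> sub) {
        downstream = sub;
        sub.onSubscribe(new Flow.Subscription() {
            public void request(long n) { upstream.request(n); } // propagate demand
            public void cancel() { upstream.cancel(); }
        });
    }

    public void onSubscribe(Flow.Subscription s) { upstream = s; }
    public void onNext(Integer item) { downstream.onNext("record-" + item); }
    public void onError(Throwable t) { downstream.onError(t); }
    public void onComplete() { downstream.onComplete(); }
}
```

Wiring a slow consumer that requests only two elements behind the `MapStage` results in the `Source` generating exactly two records: the demand of the slowest participant dictates the speed of the whole chain.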
How RSocket fits in a Reactive Architecture
It should be clear by now that RSocket can be an important part of a Reactive architecture. By enabling fully Reactive Backpressure over the wire in both client-server and peer-to-peer models, it brings a lot of the advantages behind Reactive Programming to the entire architecture. Not only between different JVMs, but even between applications built on different technologies.
RSocket as part of the Reactive Foundation
During the writing of this article, news broke that the Reactive Foundation had been established by Pivotal, Alibaba, Lightbend, and Netifi. The goal of this open-source foundation is to support and accelerate the availability of Reactive programming specifications and software. RSocket, as an open-source protocol, will be managed as part of this cooperation. This is definitely very interesting news, because the combined forces of these industry-leading (reactive) companies should produce very interesting results.
RSocket is proving itself to be the missing puzzle piece of Reactive architectures: a simple but resilient way to handle Reactive Streams over the network. It will without a doubt become an important part of our toolkit as developers in the coming years.