
Vert.x microservices: an (opinionated) application

First of all, sorry for the tautology in the title: a library can be either opinionated or un-opinionated (as Vert.x is), but an application can only be opinionated.
However, I decided to include "opinionated" to help get my point across, even though it is redundant.

The Motivation 

I am a big fan of Vert.x, and the official documentation is quite good, yet it is not straightforward to understand how it works and how to use it.
There are a lot of blogs and articles describing Vert.x terminology and its concurrency model.
There are also tons of "Hello World" Vert.x applications on GitHub, and the rest seem to be just variations of the typical "Web chat application using Vert.x".
On top of that, many of them are outdated (using AngularJS instead of Angular 2+, for example).

The only two exceptions which I found are:
- vertx-microservices-workshop: a demo application by the Vert.x development team.
- ngrx-realtime-app: a proof of concept application focused on state management which just updates a shared counter.

Those are good example applications and I am thankful to the authors for sharing with the community.
However, one problem I have with them is that the source code of all the microservices is put together in the same repository (I guess for the sake of simplicity).
I am not trying to be a purist, but for me, the main requirements of a microservice are that it must be independently deployed, upgraded and run, which basically means that each microservice should have its own version string and its own code repository.

Also, each microservice should have a very well-defined (contracted) API.
And regarding APIs, JSON is good... for JavaScript. But for Java (and other statically-typed languages, for that matter) there is something better (I warned you this was going to be an opinionated post): Google Protocol Buffers. But I am getting ahead of myself...

And one more thing: the devil is in the details, and these applications avoid dealing with "boring" stuff such as basic security (HTTPS configuration), leaving information gaps around these important topics.

So in short, I was lacking a comprehensive and non-trivial (yet simple) microservices application based on Vert.x, and since I believe the best way of learning is to roll up one's sleeves and code, I wrote one in my spare time.

And then, after I had already written almost all of this post (and completed my example microservices application), I found this: vertx-blueprint-microservice, which is almost what I was looking for (despite being a monorepo).
But it seems to be going through a major refactoring which has not been completed after two years, and it actually looks outdated (the frontend is still based on AngularJS).

The Code

I have just "released" a new (umbrella) version of my example application (this previous post presented the first version), which is composed of six different GitHub repositories:

- mylocation-backend version > main microservice and API gateway.
- mylocation-last_known_location version > another microservice, including its protobuf API.
- mylocation-last_known_location-persistence version > yet another microservice, also including its own protobuf API.
- vertx-utility-extensions version > library used by the previous three microservices to avoid boilerplate code when starting and initializing Vert.x.
- mylocation-android version 1.0 > Android application which sends live location data messages to mylocation-backend.
- mylocation-frontend version 1.1.0 > Angular 7 application which asynchronously displays either live location data (if possible) or the latest previously saved location.

If you think that having to manage and push commits to 6 different source code repositories involves quite a lot of hassle, you are totally right, but I do believe the advantages outweigh the disadvantages (more on that in a future post). Again, this is my opinion, and please note that it is not based on irrational fads but on years of experience.
And of course, I am always happy to change my mind whenever facts or enlightening discussions show me a better way.

I mentioned Google Protocol Buffers before because all the communication between the Java microservices is done through protobuf binary encoding, not only for better performance but also because of the automatic generation of the required API classes.
And for the sake of modularisation, each protobuf API is built separately into its own Maven (jar) artifact.
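Just to make this concrete, a protobuf API for a location message might look something like the sketch below. To be clear, the message, field and package names here are my own illustrative guesses, not the actual API definitions from the repositories:

```protobuf
syntax = "proto3";

// Hypothetical package; the real repositories define their own.
option java_package = "com.example.mylocation.api";

// Hypothetical location message exchanged between the microservices.
// The generated Java class gives you type-safe (de)serialization for free.
message Location {
  double latitude  = 1;
  double longitude = 2;
  int64  timestamp = 3; // epoch millis
}
```

Compiling such a file with protoc produces the Java API classes automatically, which is the "automatic generation" benefit mentioned above.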

The communication between the backend and the frontend is done through JSON though, since I did not want to deal with protobuf in JavaScript/TypeScript.
For similar reasons the communication between the Android client app and the backend also uses JSON.

And one comment about the coding style: as soon as the implementation of a Handler grew bigger than a few lines, I decided to create a new class in a separate file instead of using an anonymous class or a lambda expression.
I prefer that because I like methods (and classes) to be small, as a direct consequence of applying the Single Responsibility Principle.
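To illustrate the idea, here is a minimal self-contained sketch of that style. The Handler interface below is a stand-in mirroring the shape of io.vertx.core.Handler so the snippet compiles on its own, and the class and message names are mine, not from the repositories:

```java
// Self-contained stand-in mirroring the shape of io.vertx.core.Handler<E>.
@FunctionalInterface
interface Handler<E> {
    void handle(E event);
}

// Once the handler body outgrows a few lines, it moves into its own small class
// (in its own file, in the real code base) instead of staying a lambda.
class LocationUpdateHandler implements Handler<String> {
    private final StringBuilder log = new StringBuilder();

    @Override
    public void handle(String locationJson) {
        // Imagine validation, decoding and forwarding happening here.
        log.append("handled: ").append(locationJson).append('\n');
    }

    String processed() {
        return log.toString();
    }
}

public class HandlerStyleDemo {
    public static void main(String[] args) {
        LocationUpdateHandler handler = new LocationUpdateHandler();
        handler.handle("{\"lat\":40.4,\"lon\":-3.7}");
        System.out.println(handler.processed()); // handled: {"lat":40.4,"lon":-3.7}
    }
}
```

The named class stays small and testable on its own, which is exactly the Single Responsibility payoff.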

The Functionality

Ok, so what does the application do?
When a new Web client connects, the event bus bridge is set up, then the latest saved location (previously sent by my mobile phone to the backend) is requested, retrieved (through the event bus bridge) and displayed.
Simultaneously (everything is asynchronous), it starts listening for live location updates again on the Vert.x event bus bridge, and as soon as the live updates start coming, they replace the saved location on the display.
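The "show the saved location until live updates arrive" behaviour can be sketched as a tiny piece of state logic. This is plain Java with names invented by me; the real application implements this across the event bus bridge and the Angular frontend:

```java
// Sketch of the display logic: show the last saved location until live updates
// arrive, then let live updates take over for good. Names are illustrative only.
public class LocationDisplay {
    private String shown;             // what the UI currently displays
    private boolean liveSeen = false; // have any live updates arrived yet?

    // The saved location (fetched asynchronously) only wins while no live
    // update has arrived; it may come back late and must not overwrite live data.
    public void onSavedLocation(String saved) {
        if (!liveSeen) {
            shown = saved;
        }
    }

    // Live updates always take precedence once they start coming in.
    public void onLiveLocation(String live) {
        liveSeen = true;
        shown = live;
    }

    public String shown() {
        return shown;
    }

    public static void main(String[] args) {
        LocationDisplay display = new LocationDisplay();
        display.onSavedLocation("saved@home");
        display.onLiveLocation("live@street");
        display.onSavedLocation("saved@home"); // arrives late: ignored
        System.out.println(display.shown()); // live@street
    }
}
```

Because both responses arrive asynchronously, guarding against a late saved-location reply overwriting fresher live data is the one subtlety worth noting.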

The app is up and running on a small Amazon Lightsail VM (1GB RAM), so you may see it in action yourself: (warning: the SSL certificate which I used is self-signed since I did not feel like paying $300).

Final Thoughts

I could write much more on related topics such as:
- how I combined Vert.x and Spring in the backend (please don't freak out, I use Spring purely for dependency injection; I dislike Spring's gigantic ecosystem as a whole, especially Spring Boot - again, just my opinion).
- the Android service which fetches the location provided by the GPS.
- the Vert.x service which saves each received location in a local file (in binary format).
- the related Vert.x service which provides that saved location.
- client-side certificate (in the Android client) in addition to server-side.
- the application configuration (resolving and loading properties values, password encryption).
- RxJS usage in the frontend.
- the Vert.x event bus SockJS bridge.
- the social implications of sharing my live location with potentially everybody (spoiler alert: I introduced a "masking" security mechanism in the Android code to keep the area where I live under the radar).

But I will leave that for future posts. In any case, everything is in the code, so feel free to peek at it on GitHub or ask me ☺.


