December 6, 2021

Episode Five

This episode, the second part of a double chapter, examines an architectural outline based on microservices.

Previously on ProScala Podcasts:

As this is the show’s first double episode, here is a recap of what we learned previously. After a theoretical introduction to what a monolith is, what a reactive system is, and what evolutionary steps led to each of them, we dived deep into a real-life example where we needed to make architectural decisions.

To provide a complete learning journey, we outlined that we would go through three architectural concepts for the same software, moving step by step toward making it more reactive. The goal is to let you on the business side develop the skill of asking your developers the right question at the right time, and to let you on the tech side deepen your knowledge, see through what you are working on, and give your business colleagues better insight into architectural decisions.

We analyzed what deciding in favor of a monolithic solution would entail; now it’s time to move on and see what features we can implement by moving toward Reactive, and which parts we would need to give up if we stick with a monolith.


After that brief recap of the previous part of this double chapter, let me welcome you to Episode Five of ProScala Podcasts. This is the fifth tech episode, released on the 6th of December, 2021. We will go into more detail on the characteristics of architectural design setups, targeting newbie, senior, and business audiences alike. I’m Csaba Kincses; we’ll start in a moment.

The arguments raised against the first architectural outline, the monolith, predetermine what comes next. We said we do not want to disrupt core functionality, while it would also be nice to work in an agile way, which includes domain separation and, to support that, multiple units of deployment. As was said, we may expect our secondary functionality to come with a varying load, so we would be happy if a delay or crash in that feature set did not break the whole system, especially the core part that was originally responsible for consuming a stream of price tick data. We also highlighted that sticking to a lightweight stream has benefits compared to breeding a herd of messages.

Now we need to think a few steps further and stretch the boundaries with caution, as coming up with something in between the completely monolithic solution and something more reactive requires us to keep what worked well in the first setup.

I would call the second setup the master monolith. It could be a subject of debate whether, once we have more than one unit of deployment related to the main application logic (not counting a tier-based breakdown), that should automatically mean we are dealing with a set of microservices. With this name I’d like to stress that I’m talking about an architectural idea that keeps some characteristics of the core functionality designed as if we were building a monolithic system, while subordinating some non-core features and organizing them as ordinary microservices, as if they were designed to be part of any reactive system. I say master monolith, using the master concept, to highlight that this will be the module that coordinates everything happening around the system. We will dive deep into what we get from this setup.

Yet again, some etymology on what a master monolith can refer to: recall that we intended the core functionality to produce processing times that do not vary much and stay within a calculable range; we chose lightweight streams as our primary source of input data, which are also the events in reactive-system terms; and we did not want to breed a herd of messages, which could be logically justified but would add network lag to our processing time and harm operational safety by the mere fact that network message passing is involved.

So my point is that, due to operational safety and the monolith-like scaling traits of the core functionality, we won’t break this part of our project down into as many individually deployable parts as we otherwise could using domain-driven design, a technique for finding the boundaries between the blocks of a reactive system. In this sense, we would really have a monolith with some subordinated microservices, treating these terms flexibly for the sake of understanding.

The latter poses a question: which features would we outsource to microservices, decoupling them from the main processing flow, and what are the benefits of making this shift away from the monolith?

Considering a trading robot, we could need ad-hoc machine-learning predictions, or a kind of social listening that also requires ad-hoc research and significant compute resources. Because these features can consume far more resources than what we originally planned to fit into the real-time stream-processing flow, and we would not really benefit from running them continuously, it is a natural decision to run them in an ad-hoc manner, decoupled from the real-time processing flow, to make sure we do not disrupt the latter.

And this is an occasion where we can see a solution we would not really have come up with before reactive and cloud computing. It is typical that running an ad-hoc ML prediction results in a 100-fold spike in resource usage, and it is easy to see that it would be ridiculously expensive to dedicate 100-fold resources to this permanently, as we would have done in the old days of physical servers and monoliths. Still, we also see that there can be arguments in favor of the monolith, as it can be better to keep the core functionality together even if we can see domain boundaries inside it that might push us toward breaking it into multiple parts.

Now we come to a theoretical examination of whether this so-called master monolith setup satisfies the main points of the Reactive Manifesto, how and why the system would work as a whole in this setup, and whether it can be treated as a reactive system in this form or not.

Due to its nature, the system’s main feature is responsiveness, meaning real-time processing, and if secondary features are truly decoupled from the core features, then we have nailed that. Obviously, this is where concurrency comes into the picture: since we cannot halt real-time processing, we would technically call a secondary feature from the main processing flow by initiating an asynchronous message. This is also where the message-driven nature comes in, but as said, we avoid breeding messages by keeping the core part together in a monolith-like block. When describing the pure monolith we already spoke about a resiliency feature, the backup Java Virtual Machine, so we have redundancy where necessary, and if the secondary features are decoupled, we can still treat the system as resilient.
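To make that asynchronous hand-off concrete, here is a minimal Scala sketch of the idea; PriceTick, Signal and SentimentClient are hypothetical stand-ins, not code from any actual project. The only point is that the secondary call returns a Future handled on its own execution context, so the tick-processing path never waits on it.

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical core-flow types for illustration only.
final case class PriceTick(symbol: String, price: BigDecimal, ts: Long)
final case class Signal(symbol: String, action: String)

trait SentimentClient {
  // Ad-hoc, potentially slow secondary feature running outside the core process.
  def requestAnalysis(symbol: String): Future[Double]
}

final class CoreFlow(sentiment: SentimentClient)(implicit ec: ExecutionContext) {

  def onTick(tick: PriceTick): Option[Signal] = {
    // Fire-and-forget style call: the Future completes on its own execution
    // context, so a slow or failing secondary service never blocks tick processing.
    sentiment.requestAnalysis(tick.symbol).foreach { score =>
      // The result is consumed asynchronously, e.g. cached for a later decision.
      println(s"sentiment for ${tick.symbol}: $score")
    }

    // The core decision itself stays synchronous and predictable.
    if (tick.price > BigDecimal(100)) Some(Signal(tick.symbol, "SELL")) else None
  }
}
```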

What’s new is that we need an elasticity implementation for the secondary features, so most likely these features should run in the cloud. Even though the cloud is a managed service, we still have to hack the possibilities a bit, because we need to provide time guarantees with edge cases in mind, such as the service not having a ready-made runtime instantly due to previous idle time. We need to make sure this is not a problem, and we even need to care about how many warm runtime instances we may need at any given moment.
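As a rough illustration of the warm-instance concern, here is a small sizing-rule sketch one could start from; the config shape, the numbers, and the Little’s-law-style estimate are assumptions for illustration, not a recommendation for any particular cloud provider.

```scala
// Minimal sketch of a warm-pool sizing rule (illustrative assumptions only).
final case class WarmPoolConfig(
    minWarm: Int,              // never scale below this many warm runtimes
    maxWarm: Int,              // hard cost ceiling
    requestsPerMinute: Double, // expected ad-hoc request rate
    avgJobSeconds: Double      // average secondary-feature job duration
)

object WarmPool {
  // Little's-law-style estimate: concurrent jobs ~ arrival rate * service time,
  // padded with headroom so a cold start never sits on the critical path.
  def desiredWarmInstances(cfg: WarmPoolConfig, headroom: Double = 1.5): Int = {
    val concurrent = (cfg.requestsPerMinute / 60.0) * cfg.avgJobSeconds
    math.min(cfg.maxWarm, math.max(cfg.minWarm, math.ceil(concurrent * headroom).toInt))
  }
}
```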

As inter-service communication comes in, we should also extend resiliency, in this case with general fault-tolerance features such as being prepared for network delays from the secondary feature services, or any other kind of failure, when we funnel secondary feature information into the trading decision flow.
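One hedged way to picture that requirement in Scala, using only the standard library: the secondary lookup gets a strict time budget and a neutral fallback, so a slow or failing service degrades the decision input instead of stalling it. The names and the 50 ms budget are illustrative assumptions.

```scala
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.util.Try

// Minimal sketch of funneling a secondary-feature result into the decision flow.
object DecisionInputs {

  def sentimentOrNeutral(lookup: Future[Double], budget: FiniteDuration = 50.millis)
                        (implicit ec: ExecutionContext): Double = {
    // A failed lookup recovers to a neutral score; a slow one is cut off at
    // `budget`, so the core decision is delayed by at most that much.
    Try(Await.result(lookup.recover { case _ => 0.0 }, budget)).getOrElse(0.0)
  }
}
```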

We can state that our solution is an almost-reactive hybrid, so it has some limitations in terms of the Reactive Manifesto, but of the kind that pays off. Regarding resiliency, we assume that the core service, the master monolith, is stable and that we have a monolith-like redundant solution at hand with the twin runtimes; but this is not the kind of flexible error handling, with fallback mechanisms and scenarios, that we would implement thinking the reactive way, so basically we want to code the master monolith to perfection and then leave it alone. On the other hand, since we want to design the master monolith so that it is not disturbed by anything going bad on the secondary feature side, we treat the secondary features as the place where we can tolerate higher instability.

The system is not completely message-driven, as we keep a wider variety of code together in the master monolith as core functionality, where we would otherwise be forced to go with a message-driven implementation if we wanted to be 100% reactive. So to sum up, we can treat this as a hybrid system whose implementation has both monolith and reactive characteristics. It gains a two-tier advantage on responsiveness and gives us a safer run of the core features, at the cost of giving up some domain separation and treating the microservice-like parts as subordinates.

As was said, we aim to discuss three types of architectures. This was the second one, as we already dived into the details of a pure monolithic solution in the previous part of this double episode, so now I only owe you the pro and con arguments for why we would choose this solution out of the three.

The first pro argument is that if we assume the stability of the secondary features is of a different order of magnitude compared to the core, then we will like this solution. It has everything to support that: the stable parts, with their almost constant computational needs and some runtime redundancy, sit on a central computer, while the unstable parts sit on virtual hosts, in containers, or in the cloud. That was the argument approached from the view favoring the shift toward reactive.

Now what if we compare this to the scenario of simply extending the monolith with the secondary features; why wouldn’t we go on with that? Here comes the second pro argument supporting the hybrid: besides the poor handling of capacity spikes, the business domain of the secondary features can differ so much from the core ones that it practically shouts for separate deployment.

A counter-argument could be that most likely we don’t want to forward the same streaming data to our secondary feature microservices, but this is more a problem to be solved than a real con, and how we solve it depends on the exact feature requirement.
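For illustration, a minimal sketch of one possible answer, assuming the secondary services only need a downsampled view of the tick stream rather than every raw tick; the Summary shape and the one-minute bucket are arbitrary choices made just to show the idea.

```scala
// Hypothetical tick and summary shapes for this sketch.
final case class Tick(symbol: String, price: BigDecimal, ts: Long)
final case class Summary(symbol: String, minuteTs: Long, open: BigDecimal,
                         close: BigDecimal, high: BigDecimal, low: BigDecimal)

object Downsample {
  // Collapse raw ticks into one summary per symbol per minute before
  // forwarding them to secondary feature services.
  def perMinute(ticks: Seq[Tick]): Seq[Summary] =
    ticks.groupBy(t => (t.symbol, t.ts / 60000L)).toSeq.map { case ((sym, minute), group) =>
      val ordered = group.sortBy(_.ts)
      Summary(sym, minute * 60000L,
        open = ordered.head.price, close = ordered.last.price,
        high = ordered.map(_.price).max, low = ordered.map(_.price).min)
    }
}
```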

Since this chapter is about a journey from a monolithic solution toward something much closer to the microservice world, and we have now examined a hybrid solution, it’s time to ask what would bring our intermediate solution much closer to the Reactive Manifesto. Even though it was said that we may insist on keeping some building blocks together, contrary to Reactive and microservice principles, there are features that require implementing something much closer in nature to the Reactive world.

It would be hard to come up with a picturesque name for this setup, unlike the previous case, as we will mix in many features and none of them will become the defining characteristic. What we can clearly do is go through the features that obviously add value and highlight how they bring the solution we called the master monolith closer to reactive microservices, though we will still choose to ignore the parts of the manifesto that would work against our goals.

Recalling our previous findings on why we kept the core in one piece, hence calling it a master monolith: we preferred the stability of these parts and wanted to avoid the risk of bringing in network latency, as we would have done had this part of the system been split into multiple services.

We treated the core as something from which we can expect stable real-time processing, and here we have a bottleneck we did not really deal with. Most likely there is a limit on how many incoming price tick streams we can handle in parallel, due to the network hardware limitations of typical virtual host or cloud provider services, so there will surely be a point, in terms of the number of handled assets, where we need to consider scaling our master monolith core block.

It’s certain that even though we have all the previously mentioned techniques, from virtual hosts and containers to nanoservices, to serve infrastructure needs extremely efficiently, we can still come up with obvious-sounding features that turn out to be surprisingly resource hungry even with these techniques.

We may want to trade multiple assets to build a portfolio; sounds simple, right? This is the kind of thing we would expect from such software, but this simple-sounding feature can bring lots of new requirements once we re-examine what we were already planning.

What do we scale here, and can it be done in a linear fashion, or are there bottlenecks that make it more complicated than that? The original point of our second, master monolith architecture is that we have a core with some subordinated services, and as mentioned, we may have trouble managing multiple streams due to both network limitations and the increased chance of hiccups in keeping the real-time pace. More streams or assets per core also affects fault tolerance: say one core service manages five streams, or one stream feeds the data of five assets, then if anything goes wrong, it may affect all of these assets and their related calculations.
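A minimal sketch of what spreading assets across core instances could look like, assuming a hypothetical cap on streams per core; the cap of five and the round-robin rule are illustrative, not measured limits.

```scala
// Illustrative asset-to-core assignment; a crash in one core then only
// affects the assets pinned to that core.
object AssetSharding {

  final case class CoreInstance(id: Int, assets: Vector[String])

  def assign(assets: Seq[String], maxStreamsPerCore: Int = 5): Vector[CoreInstance] = {
    val coreCount = math.max(1, math.ceil(assets.size.toDouble / maxStreamsPerCore).toInt)
    assets.zipWithIndex
      .groupBy { case (_, idx) => idx % coreCount }   // round-robin keeps cores balanced
      .toVector
      .sortBy(_._1)
      .map { case (coreId, pairs) => CoreInstance(coreId, pairs.map(_._1).toVector) }
  }
}

// Example: AssetSharding.assign(Seq("EURUSD", "BTCUSD", "AAPL"), maxStreamsPerCore = 2)
// yields two core instances instead of one core carrying every stream.
```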

So here we have operational safety again, and yet another thing. Suppose we handle multiple assets; then comes another feature to think about: choosing which asset to pick for a trade, so we need a decision algorithm that arbitrates between signals generated from the various assets. This predetermines (though on reflection it is obvious) that we can’t have a preset precedence deciding the order in which we resolve the data of multiple assets, and also that centralizing the processing structure could be a short-lived choice.

What I mean by centralizing is, for example, keeping a multi-asset process in one master monolith core, but that would not be an extendable solution, nor would it give us the kind of safety that comes from keeping the data needed for asset choices replicated.
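Here is a small, purely illustrative sketch of the arbitration step mentioned above: each asset-related service emits a scored signal, stale ones are dropped, and the best risk-adjusted candidate wins, with no fixed per-asset precedence. The scoring rule and the names are assumptions.

```scala
// Hypothetical per-asset signal with a strength score and a risk weight.
final case class AssetSignal(symbol: String, strength: Double, riskWeight: Double, ts: Long)

object SignalArbiter {
  // Discard stale signals, then pick the strongest risk-adjusted candidate.
  def pick(candidates: Seq[AssetSignal], now: Long, maxAgeMs: Long = 2000L): Option[AssetSignal] =
    candidates
      .filter(s => now - s.ts <= maxAgeMs)
      .sortBy(s => -(s.strength * s.riskWeight))
      .headOption
}
```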

Let’s rethink what we have learned just from stating a problem, namely a feature requirement that does not seem like rocket science for a trading robot.

We can see that even though we may keep master monolith cores, we have made a big step toward reactive traits, as we may need more replication techniques and the system will become more message-driven by nature.

Also, we talked about keeping each master monolith’s process undisturbed to provide a real-time guarantee, and for this reason the core process may make calls to subordinated services. If we have multiple assets, these subordinated services and their allocated resources can be reused; we do not need to replicate a whole cluster, as the subordinated services do not need to be dedicated to one asset or one core.

In this setup, we may have many instances of these master monolith services, so they can be treated simply as bigger microservices than what we would otherwise be used to.

Let’s reassess the moves made toward reactive. From this point on, we want extendable portfolio management, we may need many asset-dedicated replicas of the core, and we may want to reuse subordinated service replicas. Services needed ad hoc should not behave in an asset-dedicated way, to enhance reusability, and we may need some service orchestration that helps us keep the predictably needed number of subordinated service replicas available.
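As a rough sketch of that reusability idea, here is a tiny shared pool of subordinated-service endpoints that any core, working on any asset, can draw from; the endpoint strings and the round-robin policy are assumptions, and in practice an orchestrator would grow or shrink the pool.

```scala
import java.util.concurrent.atomic.AtomicLong

// Illustrative pool of reusable subordinated-service endpoints shared across
// assets and cores, so capacity is shared instead of replicated per asset.
final class SharedServicePool(endpoints: Vector[String]) {
  private val counter = new AtomicLong(0L)

  // Round-robin checkout of the next endpoint to call.
  def next(): String = {
    val idx = (counter.getAndIncrement() % endpoints.size).toInt
    endpoints(idx)
  }
}

// Example: cores trading EURUSD and BTCUSD both draw from the same pool.
// val mlPool = new SharedServicePool(Vector("ml-worker-1:9000", "ml-worker-2:9000"))
```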

As we are talking about breeding more services and having a portfolio decision mechanism that can utilize the output of the asset-related services, we have obviously made a big move toward being message-driven.

Another pro argument for thinking this way is that once we think in terms of a portfolio, we won’t be happy with a simple binary decision-making mechanism for assets; we would probably think in risk profiles and try to base as much as possible on analysis. Since this system still carries the assumptions of the master monolith concept, even with its overall nature moving toward reactive, we need to recall that we cannot let the real-time guarantee of a core module instance be disturbed, so we cannot pack the decision making into the flow of the core. That would only be possible with something overly simple.

It’s clear that we can come up with many related ideas once we deal with a multi-asset solution, which easily shows how resource-hungry such a system can be, even with all the advanced technologies we have for efficient infrastructure management.

Therefore this seems to be the case where we need to invest development effort in resource management, and it can be worth it even considering that infrastructure is cheaper than human resources.

Basically, regarding this close-to-purely-reactive architecture, if we want to decide whether to go this way or not, we need to take into account how much we want to deal with resource management implementations.

The stable point is that we can’t make a wrong bet by choosing the master monolith, because our third, close-to-reactive approach seems flexible, giving us many choices about whether to build something or not. A key takeaway is that the subordinated services should not be asset-dedicated by design and should be planned to be reusable.

To shed some light on the specific challenges we face once we decide to make a bold move toward making the system as reactive as possible, I’ll point out some problems that are not compulsory to solve in this case; it won’t be as strict as some development assumptions of the master monolith setup, and there will be more flexibility to choose one way or another to end up with a working system.

First to mention: most likely we will have many more possible portfolio elements than we would want to track in real time. This illustrates the situation where we have the choice to go one way or the other: we can avoid dealing with such a feature by following only a select set of assets in real time, even at the cost of wasting some hardware resources, or we can choose to implement it. The latter would work by creating a service that scouts for instruments that are interesting enough, for whatever reason, to track in real time, and then puts them on a watchlist whose items each get a core module instance so that the system can react in real time.
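A minimal sketch of that scouting idea, with an assumed scoring rule and watchlist capacity; the output is just a diff telling the orchestration layer which core instances to start or stop.

```scala
// Hypothetical candidate instrument with a couple of "interestingness" inputs.
final case class Candidate(symbol: String, volume24h: Double, volatility: Double)

object Scout {
  final case class WatchlistDiff(start: Set[String], stop: Set[String])

  // Keep the top instruments by a naive volume * volatility score.
  def refresh(candidates: Seq[Candidate], current: Set[String], capacity: Int = 10): WatchlistDiff = {
    val wanted = candidates
      .sortBy(c => -(c.volume24h * c.volatility))
      .take(capacity)
      .map(_.symbol)
      .toSet
    // The orchestration layer would start a core module instance per new symbol
    // and tear down the instances of symbols that dropped off the watchlist.
    WatchlistDiff(start = wanted -- current, stop = current -- wanted)
  }
}
```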

Another optional feature would be a comprehensive UI: as we add more and more complexity, more of the behavior should be configurable and monitorable, and that is what this feature would be for.

As mentioned, we want to avoid simply replicating an instrument-bound master monolith cluster, hence we will need to implement service orchestration and deal with how the state of our microservices is handled, including managing persistence layers.

The complexity of such a solution depends highly on the overall system and on how many resources we want to save; going this way we will face many micro-choices and will have lots of nice-to-have improvements on our backlog.

To reassess: many of you may have imagined, as a first idea of a trading robot, that what you expect from such software is not that complicated, as we may only want to see some nice high-yield curves on a chart. But as you can see, we did not really mention any feature idea that seems far-fetched. We do need real-time guarantees at some point, and the need for portfolio management also seems obvious, while building the system with extendability in mind can prevent the unwelcome appearance of a huge refactoring need.

It’s important to see through the whole case of architectural design, as this is the key to reversing the way of thinking. If you design a system by sticking to the very first and easiest features, not taking into account what else could possibly be needed, that can lead to an exponential waste of time. On the contrary, if you have the maximum possible feature set in mind and build it into the way architectural design decisions are made and the way the whole thing is coded, this reverses the way of thinking in a helpful way.

Say you have gone through the same process in the planning phase that we have just done by examining three architectural design patterns. This way, when you pick feature 1 to think through its implementation details, you will have the chance to consider what could possibly go wrong in feature 2, 3 or 10 depending on how feature 1 is implemented, and if you are thinking about an appropriate feature set, you can break down an imagined huge system with this context in mind instead of adjusting the bigger system to what we have in feature 1 or 2.

This is how thinking is somewhat reversed by this approach, and surprisingly, it can help you be better at developing something simple and stable at the feature detail level on the first run, as you already have in mind most of the assumptions that would otherwise lead to refactoring, so you can prevent a future refactor.

Also, once you have an idea of the possible consequences of simplifying an implementation, you can do so in an orderly way, knowing the side effects. The three outlined architectural designs are maximalist, but knowing what pitfalls are out there, we can back out of features in a way where we know what kind of limitation we added to the system with such a decision.

We can even partially back out of real-time, turn to adding more hardware instead of being efficient, and so on. What can lead to a mess is having no clear direction on which limitations we can live with, as changing requirements lead to more refactoring. If we partially back out of a feature with a plan to provide a complete implementation as soon as possible to keep the system extendable, then that guarantees staying on the right track.

Finally, we’ve reached the recently introduced block of questions, meant to make you rethink the various details and ideas you heard in this chapter, so I’ll interview you about your specific experiences and opinions with the following questions:

Have you ever thought about, or do you have experience with, a monolith-microservices hybrid-like solution?

What were your key takeaways from directly comparing what these architectures can offer?

Do you have direct experience with increasing team efficiency by means of domain separation, microservices and enhanced code ownership?

What kind of experience do you have debugging distributed systems, and do you consider this activity harder or easier in the long run compared to dealing with other kinds of systems?

Have you ever chosen to implement a feature, thanks to the existence of the cloud, that you otherwise would not have implemented?

Finally, what are your two cents on the three architectural designs depicted here?

If you have instant answers to any of these questions, do not hesitate to connect with me via LinkedIn, and please also send me your feedback, including questions and ideas; I’ll be happy to use them in similar summary blocks.

That’s all for today with Season 1’s final episode. We will continue in Season 2 with great practical examples that break down the concepts given in the previous chapters, both talking about functional programming and Scala in general and picking some good learning examples from pet project implementations.

We will go more low-level but will still keep in focus that this show is aimed at multiple audiences, providing business briefs and linking the low-level examples to the high-level thinking frameworks we have built up this season.

I hope you enjoyed the season; I did my best to add some evergreen content that can serve as a brochure on your journey toward efficient programming and to provide a foundation for community building. I’ll be back with Episode Six, Season 2, on the 16th of May, 2022; until then, lots of community-building activities are planned, so most likely you won’t get bored before we get there.

Keep in mind that this podcast has a LinkedIn group where you can meet great fellow tech people to discuss and stay up to date on the happenings related to this show.

This was Csaba Kincses; I’ll be back with the next episode. Thanks for listening!