Serverless, Microservices, or Monolith: Choosing Back End Architecture for Your Startup

November 12, 2020 • 14 min read
A project's back end architecture is the most significant contributor to its complexity and scalability. You must consider many factors, including estimated load, deployment type, team size, and experience, to make the best decision.
In this article, we’ll cover the major pros and cons of the most popular architectures:
- Monolithic architecture: the application is a single indivisible unit, usually one executable for the server and another for the client.
- Microservices architecture: a type of service-oriented architecture in which the application consists of many different services divided by responsibility.
- Serverless architecture: the application consists of many independent functions executed on triggers, with the cloud provider handling scaling and resource allocation.
The term monolith describes an object made from a single large piece of material. In software development, monolithic architecture means the whole application has a single code base and builds into one executable program. To keep everything manageable, engineers divide the application by business or technical features into smaller separate modules. Unlike in service-oriented architecture, the modules communicate with each other directly by calling functions.
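To illustrate that direct, in-process communication, here is a minimal sketch; the inventory and checkout modules, their functions, and the in-memory stock store are all hypothetical:

```typescript
// In a monolith, "modules" are just files or namespaces in the same codebase.
// Hypothetical shared state, accessible to the whole application.
const stock = new Map<string, number>([["sku-1", 5]]);

// Inventory module.
function reserveItem(sku: string, qty: number): boolean {
  const available = stock.get(sku) ?? 0;
  if (available < qty) return false;
  stock.set(sku, available - qty);
  return true;
}

// Checkout module: a plain synchronous function call, no network hop,
// no serialization, no timeouts.
function checkout(sku: string, qty: number): string {
  return reserveItem(sku, qty) ? "order placed" : "out of stock";
}

console.log(checkout("sku-1", 2)); // in-memory call, same process
```

The whole interaction is one function call; compare that with the cross-service flow discussed later.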
While a monolithic application can quickly turn into a big bowl of spaghetti code, the architecture itself aims for simplicity. Programmers import functions from different modules, and that's it. Everything runs in the same app process: no timeouts, crashes, latency, or different module versions, and the majority of the code is synchronous. Deployment also boils down to updating the program and restarting it.
Calling a function in memory is much faster than making a network call and waiting for the results. The data in RAM is accessible by the whole application, and no network hops are needed. While an app with microservice architecture might have to make 20 different API calls to various services that, in turn, do more API calls and database requests, a monolith only has to make a couple of database requests to accomplish the same task.
Everything is in the same codebase and gets built as a whole. Thus, developers can find and understand all the places where a particular piece of code is used. When refactoring or removing unused code, they can be sure they won't break something that depends on it. They also don't have to deal with multiple versions of the same shared code deployed simultaneously.
Debugging a monolith application is pretty straightforward. Just run the application with the debugger enabled. You don’t have to prepare a dedicated debugging environment, attach to multiple services, and track what’s happening where. Everything is in a single program that runs on the developer’s computer.
Though a monolith can be scaled horizontally, it’s usually not that straightforward. It might have some state in RAM or rely on exclusive access to the database. Even when appropriately designed, all modules will be running on every server. You can’t just scale a part of the application that needs more resources.
When an application crashes, everything goes down. A bug in a single module can bring the whole system down, and when one part of the application eats all the resources, every other part suffers too. It's pretty hard to create a highly available, scalable, fault-tolerant monolith.
Maintaining a clean codebase becomes difficult as a project grows. Developers are always tempted to add a global variable, introduce a singleton, or make a piece of internals public. It's too easy when you have all the code in front of you.
With a monolith, you're tied to a single platform and possibly a single programming language. Some platforms, like .NET and Java, do allow almost complete code sharing between modules in different languages. Still, it isn't easy to adopt new technologies that might be better suited for future challenges.
When several teams are working together on a project, code sharing becomes an issue. Engineers often have to resolve merge conflicts and check that their changes haven't broken code that other people wrote. Deployments have to be thoughtfully planned, and the whole team has to wait for them to see their code running in production.
The applications built with a service-oriented architecture (SOA) consist of many loosely coupled services that communicate with each other over a network. Microservice architecture is a variant of SOA with tiny services using lightweight protocols. Each microservice is independent, with ownership of its dedicated databases and other resources.
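As a toy simulation of those service boundaries, the sketch below models two hypothetical services (users and orders), each owning its own data store, with all communication going through an async call; real services would use HTTP or gRPC over an actual network, which `callService` here only simulates:

```typescript
// A handler represents a service's network-facing endpoint.
type Handler = (payload: unknown) => Promise<unknown>;
const registry = new Map<string, Handler>();

// Simulated network call: in reality this would serialize the payload,
// send it over the wire, and await the response (with possible timeouts).
async function callService(name: string, payload: unknown): Promise<unknown> {
  const handler = registry.get(name);
  if (!handler) throw new Error(`service ${name} unavailable`);
  return handler(payload);
}

// Users service: exclusively owns its "database".
const usersDb = new Map<string, { name: string }>([["u1", { name: "Ada" }]]);
registry.set("users", async (id) => usersDb.get(id as string) ?? null);

// Orders service: owns its own store and never touches usersDb directly;
// it must go through the users service's API.
registry.set("orders", async (userId) => {
  const user = await callService("users", userId);
  return user ? `order for ${(user as { name: string }).name}` : "unknown user";
});

callService("orders", "u1").then(console.log); // "order for Ada"
```

Note that even this trivial interaction is asynchronous and can fail in ways a direct function call cannot.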
You can scale each service separately according to its resource usage and the current load. While one service runs on hundreds of instances, another can idle on a single machine. The best engineers even design the services according to their planned load. A reporting service used only by the product team can rely on a simple relational database and straightforward queries, while the user-facing part of the application uses fast NoSQL databases and caching.
With good design, there’s no need to deploy the whole application all at once. Each service can be updated separately without affecting the other services, making continuous delivery possible and reducing the entire application’s development-deployment cycle duration.
If one of the services goes down, it doesn’t affect the whole application. Yes, the functionality under its responsibility won’t be available and may affect the connected services, but most of an extensive application will usually remain working.
Unlike with a monolith, separate services can leverage different technologies or even programming languages. They can even run on different operating systems. The possibilities are endless as long as the services can still understand each other.
Every separate microservice usually has only a few thousand lines of code, so it's easy to read and understand all the nuts and bolts. Debugging a single service is a breeze, and unit tests are more straightforward because the code is loosely coupled. The onboarding process for new developers takes hours instead of days or weeks.
Due to the loose coupling and independence of the services, it's easy to work on them in parallel. Just think through and design the API interaction points so that different developers or even teams can build them simultaneously. Engineers can temporarily create mocks for the necessary APIs and switch back to the real services once those are ready.
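That mocking workflow can be sketched as follows; the `PaymentsApi` contract, the canned decline rule, and `placeOrder` are all hypothetical examples:

```typescript
// Teams agree on the API contract first, then build against it in parallel.
interface PaymentsApi {
  charge(userId: string, cents: number): Promise<"ok" | "declined">;
}

// Temporary mock used until the real payments service is ready.
// The decline rule is canned, purely for local development and tests.
const mockPayments: PaymentsApi = {
  async charge(_userId, cents) {
    return cents <= 10_000 ? "ok" : "declined";
  },
};

// Checkout logic is written against the interface, not the implementation,
// so swapping in the real payments client later requires no changes here.
async function placeOrder(api: PaymentsApi, userId: string, cents: number) {
  return (await api.charge(userId, cents)) === "ok" ? "placed" : "payment failed";
}

placeOrder(mockPayments, "u1", 499).then(console.log); // "placed"
```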
The services are self-contained enough to be reused even in a different application. For example, multiple applications might share a shopping cart service, and authentication services are often pretty similar.
Adding an item to the inventory in a monolith was as simple as updating a few database records in a single transaction. It's no longer that simple in the microservice world. The inventory service should check the stock and probably reserve some of it. The shopping cart service should add the item to the cart. The related products service should check for relevant suggestions, and multiple analytics services should react to the user action. And what if some of the operations fail, or a service goes down? Does a timeout mean the operation failed, or that it will eventually complete? When should we restore the reserved inventory? There are approaches to handle all of this, but they're much more complicated than a simple transaction in a monolith.
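One common approach to this problem is the saga pattern: each step of the distributed operation gets a compensating action, and when a step fails, the compensations for the already-completed steps run in reverse order. A minimal sketch, with a hypothetical two-step order flow (reserve inventory, then charge the card):

```typescript
// A saga step pairs an action with its compensation.
type Step = { run: () => boolean; undo: () => void };

function runSaga(steps: Step[]): "committed" | "rolled back" {
  const done: Step[] = [];
  for (const step of steps) {
    if (!step.run()) {
      // A step failed: compensate completed steps in reverse order.
      for (const s of done.reverse()) s.undo();
      return "rolled back";
    }
    done.push(step);
  }
  return "committed";
}

// Hypothetical flow: the inventory reservation succeeds, the payment fails,
// so the reservation must be released again.
let reserved = 0;
const saga: Step[] = [
  { run: () => { reserved += 1; return true; }, undo: () => { reserved -= 1; } },
  { run: () => false /* payment declined */, undo: () => {} },
];

console.log(runSaga(saga), "reserved:", reserved); // "rolled back reserved: 0"
```

A real saga also has to deal with retries, timeouts, and steps whose outcome is unknown, which is exactly why this is harder than a single database transaction.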
When debugging a single interaction in a microservice application, developers must run multiple services locally or attach to them remotely. Tracing the data back and matching the logs of different services to understand why something happened becomes quite a tricky task. There aren't many tools to help with that either.
Everyone can run a monolithic application by merely executing the entry point file. In contrast, microservices always require some configuration to know about each other and find the correct dependency services. Then there's scalability, fault tolerance, load balancing, etc. You'll need a DevOps engineer for even moderately complex applications to get all the benefits of this architecture.
In a monolith, most of the code runs in a single process. In microservice applications, everything is done asynchronously on different machines. These distributed systems have higher latency because they need many more network hops to finish the same task. Moreover, when using an event queue to simplify things, the events may build up under heavy load, drastically decreasing the system’s performance.
With a monolith, only the public API is accessible on the network. With microservices, many services also expose private endpoints for inter-service communication. Those should be carefully guarded against the outside world.
Serverless is another variant of service-oriented architecture designed specifically for cloud computing. The cloud provider handles many aspects of infrastructure management and scaling automatically, relieving the technical team from provisioning and server maintenance. Of course, the application still runs on servers, but everything is abstracted away so that developers can focus more on business logic. Instead of services, you have to deploy separate functions executed on triggers like HTTP requests, time, or various events.
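For an HTTP trigger, a function typically looks like the sketch below, written in the handler shape AWS Lambda uses behind API Gateway (other providers expect similar but incompatible signatures); the greeting logic is purely illustrative:

```typescript
// The provider invokes this function per request and may spin instances
// up or down at any time, so the handler should hold no per-request state
// outside its own scope.
const handler = async (event: { queryStringParameters?: { name?: string } }) => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};

// Locally, the handler is just an async function and can be invoked directly:
handler({ queryStringParameters: { name: "dev" } }).then((r) => console.log(r.body));
```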
Serverless shares many pros and cons with the microservice architecture, so we’ll focus only on the differences.
With serverless, you get entirely automatic scaling. The app will automatically upscale and downscale, depending on the current load, maintaining almost constant performance. These applications can handle huge loads, instantly scaling to thousands of parallel executions. Of course, with most cloud providers, you can also set the limits to stay within the budget.
You don't have to worry about the infrastructure. Just push new code to the cloud, and it will be provisioned automatically. The project needs a much smaller DevOps team than with microservices, where DevOps engineers are also responsible for a ton of configuration.
Your developers and DevOps engineers work on more critical, business-oriented tasks. The code is somewhat more straightforward and much more loosely coupled. You're charged only for the CPU cycles and RAM your application actually uses. That reduces both the development and the running costs of the project.
With monolithic architecture or microservices, your application is always ready to handle requests. That's not the case with serverless. Functions that were recently executed stay ready to handle new requests instantly for quite some time and are called warm. When a function hasn't been invoked for a while, it gets unloaded by the cloud provider, and spinning it up again adds latency. This cold start latency also occurs when scaling: ten instances of the same function might be warm, but the cloud provider would have to spin up five more cold instances to handle a momentary spike in load. There are techniques to keep the functions warm, but they add cost because they invoke the functions on a timer.
There's no standard for writing serverless functions. Every function needs cloud provider-specific code to run serverless, and the build process, local start scripts, and debugging differ between providers too. The Serverless Framework abstracts all of that for the major vendors. However, sometimes you'll still need more fine-grained control over the configuration, which is only possible with vendor-specific code.
Since serverless functions have specific build targets and frameworks to run in the cloud, cloud providers have restrictions on the programming languages or platforms they support. The popular ecosystems are usually covered, but you’ll have a hard time trying something less mainstream.
You have to care more about your functions' memory usage and startup times. That means carefully choosing what to include in each function's code. Writing an extensive shared library with lots of global variables and initialization code is not a good idea with serverless.
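A common mitigation is to initialize heavy resources lazily and cache them across warm invocations instead of paying for them at module load time on every cold start. A sketch, where the "heavy table" stands in for a hypothetical database client or large lookup structure:

```typescript
// Created lazily on first use, then reused by every warm invocation
// of this function instance.
let heavyTable: Map<string, number> | undefined;

function getHeavyTable(): Map<string, number> {
  if (!heavyTable) {
    // Hypothetical expensive initialization: runs at most once per instance,
    // and only if the handler actually needs it.
    heavyTable = new Map([["a", 1], ["b", 2]]);
  }
  return heavyTable;
}

const handler = async (key: string) => getHeavyTable().get(key) ?? -1;

handler("a").then(console.log); // 1
```

This keeps cold starts fast for code paths that never touch the heavy resource.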
While you can run serverless functions at regular time intervals, running a single function for longer than a few seconds is usually not supported, and it will be terminated on timeout by the cloud provider. Long-running workloads are also a poor fit in most cases, since you'll pay more for the CPU cycles than for equivalent server instances running full-time.
Mature serverless projects typically have thousands of separately deployed functions. As the project grows, it gets harder to keep track of what’s still used and what’s deprecated. Security is also a significant concern since, ideally, you’ll need a separate set of permissions for every function. And all of them have their configuration too.
|                       | Monolith              | Microservices          | Serverless        |
|-----------------------|-----------------------|------------------------|-------------------|
| Speed of Development  | High                  | Low                    | Moderate          |
| Platform Restrictions | Same Build            | None                   | By Cloud Provider |
| Development Costs     | Low, increase in time | High, constant in time | Low, increase in time |
There's rarely one single best architecture for a specific project. There are always tradeoffs you have to consider. From our experience:
- Monolithic architecture is excellent for prototypes, MVPs, and startups in the early stages. The speed of development is essential here, and monolithic architecture, with its simplicity, shines at making fast changes possible. The team size is small, and there aren't that many users either, so the drawbacks aren't that important. Unless the application will realistically face heavy loads from day one, we suggest going with a monolith for an MVP.
- Microservices are best for applications with a constant load that have to scale horizontally. They're also great for bigger teams thanks to parallel development. If designed correctly, a monolithic MVP can be split into microservices quite easily when needed.
- Serverless is helpful for rarely used parts of the application, projects with very inconsistent loads, or adding some back-end functionality to static sites. On paper, it has many benefits over microservices, but in real-world scenarios, you'll pay more for CPU and RAM under constant load than for an equivalent server running full-time. The cold start latency is also something to consider.
The best decision would be to pick the right architecture for the job and not stick with only one of them forever.
LeanyLabs has extensive experience with these architectures, and we would be glad to answer all your questions and help you make the right choice.