Back to Monoliths

So Amazon Prime Video (of all people!) published a blog post about how they’re returning to monoliths, relayed by DHH and generating lots of noise, to the point that even Dr. Werner Vogels himself, CTO of Amazon, had to share his thoughts on the subject.

Monoliths are back, baby! Just like dynamically-typed languages come back every so often after a decade or so of statically-typed ones. These are waves that come and go because our craft is young, and we’re still figuring things out.

And also because of ageism: lots of teams lose their older team members, and companies filled with junior (read: inexperienced) developers tend to repeat the mistakes of the past.

As far as I’m concerned, I’m a big fan of monoliths, just like I’m a big fan of server-side rendering (another trend that is slowly returning.)

In my first job we had a single Pentium Pro server running both IIS and SQL Server on Windows NT 4.0, with a monolithic ASP 1.0 application on top, written in VBScript, serving hundreds of thousands of visitors per month without any caching strategy.

Let me repeat that. Tens of thousands of visitors per day, on a single-core Pentium Pro, running both the web server and the database engine, without any cache (which means that all requests hit the database, all the time.)

In 2000 we moved to a two-server solution based on Pentium III CPUs (one for the web server, another for the DB), both running Windows 2000 Advanced Server. Still no cache. Same ASP code. Blazingly fast performance.

Whether microservices are a good choice for your architecture or not depends on two factors:

  1. Your team structure.
  2. Being Netflix or not.

In all other cases, I agree with DHH; monoliths are a fantastic choice for small teams, and these days the following architecture might prove effective for 99.99% of your needs:

  1. An application server (most probably in C#, Crystal, or Go);
  2. A database server (most probably PostgreSQL);
  3. A messaging server (to send lots of email, or to act as a proxy to an SMS gateway or other external messaging system);
  4. A cache server (most probably Redis);
  5. A message queue (with RabbitMQ or Kafka or similar) in the middle.

And that’s it. Whether these are physical or virtual machines or containers or pods does not matter. It literally doesn’t. Just five things to manage, just five things to update, just five things to debug.

Now it’s 2023, and we have great tools at our disposal for literally zero dollars: containers, Kubernetes, database engines, message queues, and whatnot. But most of all, we’ve got more experience running web apps today than we had in 1997.

So the important thing is to make your app follow the Twelve-Factor App principles, and in particular make it configurable via environment variables, so that your developers can run your monolith locally in development and your operations team can configure it properly in production.
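As a minimal sketch of what that can look like in a Go monolith (the variable names such as DATABASE_URL, REDIS_URL, and AMQP_URL are just illustrative, not a prescription), all configuration comes from the environment, with sane fallbacks for local development:

```go
package main

import (
	"log"
	"os"
)

// Config holds everything the monolith needs, read from the environment
// (Twelve-Factor style): the same binary runs on a laptop and in production,
// only the variables change.
type Config struct {
	Port        string // HTTP port the app listens on
	DatabaseURL string // PostgreSQL connection string
	RedisURL    string // cache server
	AMQPURL     string // message queue (RabbitMQ or similar)
}

// getenv returns the value of key, or a fallback for local development.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// LoadConfig reads the whole configuration in one place, at startup.
func LoadConfig() Config {
	return Config{
		Port:        getenv("PORT", "8080"),
		DatabaseURL: getenv("DATABASE_URL", "postgres://localhost:5432/app?sslmode=disable"),
		RedisURL:    getenv("REDIS_URL", "localhost:6379"),
		AMQPURL:     getenv("AMQP_URL", "amqp://guest:guest@localhost:5672/"),
	}
}

func main() {
	cfg := LoadConfig()
	log.Printf("starting monolith on port %s", cfg.Port)
	// ...connect to PostgreSQL, Redis, and the queue using cfg, then start the HTTP server.
}
```

Developers rely on the fallbacks; operations sets the real values in production. The binary never changes.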

Use the best design patterns inside your application¹, so that you can eventually break it into pieces in the future; for example, programming against interfaces and injecting services at runtime (something ASP.NET and Quarkus, to name a couple of frameworks, do wonderfully out of the box.)

So if your app becomes successful enough that you must expand your team, make sure your monolith can be broken apart, thanks to good old interfaces and dependency injection.
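Here’s a rough Go sketch of that seam; the Mailer and SignupService names are made up for illustration, but the pattern is what matters: depend on an interface, receive the concrete implementation in the constructor.

```go
package main

import "fmt"

// Mailer is the seam: the rest of the app only knows this interface,
// never the concrete implementation behind it.
type Mailer interface {
	Send(to, subject, body string) error
}

// SMTPMailer is one implementation; tomorrow it could live in its own
// service behind the message queue without touching SignupService.
type SMTPMailer struct{ Host string }

func (m SMTPMailer) Send(to, subject, body string) error {
	fmt.Printf("sending %q to %s via %s\n", subject, to, m.Host)
	return nil
}

// SignupService receives its dependencies at construction time
// (constructor injection) instead of creating them itself.
type SignupService struct {
	mailer Mailer
}

func NewSignupService(m Mailer) *SignupService {
	return &SignupService{mailer: m}
}

func (s *SignupService) Register(email string) error {
	// ...persist the user, then:
	return s.mailer.Send(email, "Welcome!", "Thanks for signing up.")
}

func main() {
	signup := NewSignupService(SMTPMailer{Host: "smtp.example.com"})
	_ = signup.Register("user@example.com")
}
```

If email ever has to become its own service behind the queue, only the implementation of Mailer changes; SignupService stays where it is.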

Having a monolith will make your app infinitely simpler to design (fewer moving parts), debug (just launch it with the proper environment variables and plug in your favorite debugger), and deploy (just one instance to run.)

Using server-side rendering will also make your app simpler. Just defer your logic to the server. Make the client as simple as possible. Mustache templates are your friend.
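To give an idea of how little that takes, here’s a tiny server-rendered page sketched with Go’s standard html/template rather than Mustache; the template engine doesn’t matter much, the point is that the browser receives fully formed HTML and nothing else.

```go
package main

import (
	"html/template"
	"log"
	"net/http"
)

// All rendering happens on the server: the client gets plain HTML,
// no JavaScript framework required.
var page = template.Must(template.New("home").Parse(`
<!DOCTYPE html>
<html>
  <body>
    <h1>Hello, {{.Name}}!</h1>
    <ul>{{range .Items}}<li>{{.}}</li>{{end}}</ul>
  </body>
</html>`))

func home(w http.ResponseWriter, r *http.Request) {
	data := struct {
		Name  string
		Items []string
	}{
		Name:  "World",
		Items: []string{"Monoliths", "Server-side rendering", "Boring tech"},
	}
	if err := page.Execute(w, data); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}

func main() {
	http.HandleFunc("/", home)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```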

Use an ORM such as Entity Framework Core, Doctrine, GORM, or Active Record to make your database code as simple as possible, and your migrations more explicit and easier to understand.
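As a small example with GORM (one of the ORMs listed above), a model, a schema migration, and a couple of queries fit in a few lines; treat it as a sketch, where the User model and the DATABASE_URL variable are placeholders, not a recommendation:

```go
package main

import (
	"log"
	"os"

	"gorm.io/driver/postgres"
	"gorm.io/gorm"
)

// User maps to a "users" table; GORM derives the schema from the struct.
type User struct {
	ID    uint
	Name  string
	Email string `gorm:"uniqueIndex"`
}

func main() {
	dsn := os.Getenv("DATABASE_URL")
	db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
	if err != nil {
		log.Fatal(err)
	}

	// AutoMigrate keeps the schema in sync with the struct; for anything
	// non-trivial you would write explicit, versioned migrations instead.
	if err := db.AutoMigrate(&User{}); err != nil {
		log.Fatal(err)
	}

	db.Create(&User{Name: "Ada", Email: "ada@example.com"})

	var user User
	db.First(&user, "email = ?", "ada@example.com")
	log.Printf("found user %d: %s", user.ID, user.Name)
}
```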

As for updates, forget about canary deployments and the like; show a temporary banner saying that your app is under maintenance, migrate your database, deploy the new version, and remove the banner. You’re not Netflix, remember, which means you can most probably afford a few minutes of downtime.
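One simple way to pull that off, sketched in Go with a hypothetical flag file as the on/off switch, is a tiny maintenance middleware sitting in front of your handlers:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// maintenance wraps the real handlers: while the (hypothetical) flag file
// exists, every request gets a "be right back" page and a 503. Create the
// file, migrate the database, deploy the new version, delete the file.
func maintenance(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if _, err := os.Stat("/var/run/app-maintenance"); err == nil {
			w.Header().Set("Retry-After", "300")
			http.Error(w, "Down for maintenance, back in a few minutes.", http.StatusServiceUnavailable)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from the monolith")
	})
	log.Fatal(http.ListenAndServe(":8080", maintenance(mux)))
}
```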

Build containers with your monolith inside. Make them as small as possible (Alpine is your friend, but beware of musl), and configurable via environment variables. In most cases you won’t need more than one instance in production (no load balancing, really), and you can even run it locally, in a staging environment, or on a Kubernetes cluster in staging. Yes, Kubernetes can be a great choice for running your monolith; all those YAML files are a great way to document your deployments, your variables, and everything needed to automate your life.

If you build containers for your system, ensure they don’t expose ports lower than 1024, and that they don’t run as root. If you ever have to run them on OpenShift, you’ll be glad you followed this advice. Podman and OpenShift Local are your friends in this area.

Needless to say, use a CI/CD system. Your team should be able to deploy to production a few times a week without much effort. The tooling these days (GitHub Actions, GitLab pipelines, Argo CD, Tekton, etc.) is seriously amazing. Remember to run your unit tests as part of the workflow.
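Those tests don’t have to be fancy either: a plain `go test ./...` step in the pipeline runs everything in the monolith. A minimal, self-contained sketch, reusing the little getenv helper from the configuration example above:

```go
package main

import (
	"os"
	"testing"
)

// getenv is the same tiny helper shown in the configuration sketch above.
func getenv(key, fallback string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return fallback
}

// A plain table-driven test; running it on every push is all the CI step needs.
func TestGetenv(t *testing.T) {
	t.Setenv("APP_GREETING", "hello")

	cases := []struct {
		key, fallback, want string
	}{
		{"APP_GREETING", "default", "hello"},  // variable set: real value wins
		{"APP_MISSING", "default", "default"}, // variable unset: fallback wins
	}
	for _, c := range cases {
		if got := getenv(c.key, c.fallback); got != c.want {
			t.Errorf("getenv(%q, %q) = %q, want %q", c.key, c.fallback, got, c.want)
		}
	}
}
```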

Here I am advocating for a return to sanity. Let us all choose wisely, finding the best architecture for our systems; and yes, in many cases, monoliths will be a great choice, if not the best.

PS: And here’s Kent Beck saying that waterfall is back. More on this soon.


  1. I don’t care what people say: object-oriented code, when done thoughtfully, is a wonderful thing. That usually means less inheritance and more composition, and applying a few techniques from the functional world, like immutable variables. ↩︎