Why you (probably) don’t need microservices: A plea for the Modular Monolith

Julian | Dec 25, 2025

The fallacy of “scalability from day 1”

Imagine you are starting a new project. The team is motivated, the whiteboard has been wiped clean, and the first question in the room is: “Which microservices do we need?”

We all know this situation. We look at giants like Netflix or Uber and think: “If they build like that, it must be the ‘right’ way.” But what is often overlooked: Netflix has Netflix problems. Your startup or medium-sized project still has completely different concerns.

The so-called “Resume-Driven Development” often pushes us in this direction. We want to use technologies that look good on our CV. This is human, but architecturally dangerous, because the price we pay for microservices is not money but complexity.

Once we break an application into small pieces that communicate over the network, we enter the world of distributed systems. And this is where the “Fallacies of Distributed Computing” come into play. We incorrectly assume that the network is reliable, that latency is zero, and that bandwidth is infinite. The reality is different: networks fail, timeouts occur, and services are unavailable.

The result? The Cognitive Load – the mental capacity your team needs to understand and operate the system – explodes. Instead of building features, you spend time debugging race conditions between services or configuring distributed tracing.

But there is a middle ground. An architecture that combines the clarity of a monolith with the structure of microservices, without the operational madness: The Modular Monolith.

In this article I’ll show you why this is the better choice for 90% of projects and how you can achieve modularity without blowing your system into a thousand pieces.

The hidden costs of microservices

Many teams drastically underestimate what it means to replace an in-process function call with a network call. It’s not just a mechanical code change, it’s a paradigm shift. Let’s look at the three most expensive items on this bill.

1. Latency and reliability: Physics cannot be optimized away

In a monolith, communication between two modules (say, the UserService and the BillingService) is a simple in-memory method call. This takes nanoseconds and always works – as long as the process is running.

In a microservices architecture, this becomes a network call (often via HTTP/REST or gRPC). Suddenly we’re talking about milliseconds. That doesn’t sound like much, but with complex call chains (“fan-out”) it quickly adds up to noticeable latency. Reliability, however, is much more critical: networks have hiccups. Packets get lost. Routers fail. Your code must now be prepared for the other side not responding at all. You need retries, circuit breakers and fallback strategies for trivial operations.
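To make this concrete, here is a minimal TypeScript sketch of what a formerly trivial call can look like once it crosses the network. The service URL, the endpoint and the function names are illustrative assumptions, not a real API:

// Minimal sketch: a former in-memory call, now with timeout, retries and a failure path.
// BILLING_URL and the endpoint path are placeholders.
const BILLING_URL = "http://billing-service.internal";

async function fetchWithTimeout(url: string, timeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

// In a monolith this whole function is a single method call: billingService.getStatus(userId).
async function getBillingStatus(userId: string, retries = 3): Promise<unknown> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const res = await fetchWithTimeout(`${BILLING_URL}/status/${userId}`, 500);
      if (res.ok) return res.json();
    } catch {
      // timeout or network error: fall through and retry
    }
    await new Promise((resolve) => setTimeout(resolve, 100 * attempt)); // simple linear backoff
  }
  throw new Error("Billing service unavailable"); // the caller now needs a fallback strategy
}

Every one of these lines is accidental complexity that the in-memory call simply did not have.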

2. Saying goodbye to ACID: Data consistency is hard

Perhaps the most painful sacrifice when moving to microservices is the loss of the classic database transaction.

In the monolith you benefit from ACID (Atomicity, Consistency, Isolation, Durability). An example: a new user registers, and we simultaneously create an entry in the users table and one in the wallets table. If either insert fails, the database performs a ROLLBACK. Everything stays clean, the state is consistent.
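For illustration, a minimal sketch of that registration using node-postgres; the table and column names are assumptions taken from the example above:

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the environment, omitted here

// Both inserts succeed together or not at all – the database guarantees it.
async function registerUser(email: string): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const { rows } = await client.query(
      "INSERT INTO users (email) VALUES ($1) RETURNING id",
      [email]
    );
    await client.query("INSERT INTO wallets (user_id, balance) VALUES ($1, 0)", [rows[0].id]);
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK"); // one insert failed: nothing is persisted
    throw err;
  } finally {
    client.release();
  }
}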

With microservices, each service (ideally) has its own database. There is no transaction across two services “out of the box”. You end up in the world of Eventual Consistency: the system will be consistent at some point, just not right now. To ensure consistency, you need to implement complex patterns like Sagas: if step B fails, service A must perform a “compensating transaction” to undo step A. This is extremely error-prone and difficult to debug.
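A stripped-down sketch of that compensation idea, with purely hypothetical service clients, shows how much ceremony replaces a single ROLLBACK:

// Saga sketch: create a user in service A, then a wallet in service B.
// If step B fails, step A has to be compensated by hand. The clients are placeholders.
interface UserServiceClient {
  createUser(email: string): Promise<{ userId: string }>;
  deleteUser(userId: string): Promise<void>; // the compensation step
}

interface WalletServiceClient {
  createWallet(userId: string): Promise<void>;
}

async function registerUserSaga(
  users: UserServiceClient,
  wallets: WalletServiceClient,
  email: string
): Promise<void> {
  const { userId } = await users.createUser(email); // step A
  try {
    await wallets.createWallet(userId); // step B
  } catch (err) {
    // Compensation: undo step A. If this call fails as well, you need retries,
    // an outbox or manual cleanup – exactly the debugging pain described above.
    await users.deleteUser(userId);
    throw err;
  }
}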

3. Operational Overhead: Who runs the monster?

A monolith is one artifact. One deployment, one log file, one database. Microservices mean: 20 artifacts, 20 deployments, 20 databases. You suddenly need an infrastructure that is as complex as your software itself. Container orchestration (Kubernetes), service meshes (Istio, Linkerd), centralized logging and distributed tracing (Jaeger, Zipkin) become mandatory, not optional. The complexity doesn’t disappear, it just shifts: from the application code to the infrastructure (“Ops”).

Enter the Modular Monolith

When we say “monolith,” many developers cringe. We think of legacy code, of 5,000-line classes, and of the dreaded “spaghetti code” monster where a change in the shopping cart inexplicably breaks the PDF invoice generation.

In technical terms, this state is called a “Big Ball of Mud”. It is an anti-pattern, not an architecture.

The Modular Monolith is the exact opposite. It is a system that is deployed as a unit (single deployment unit), but is internally structured as strictly as microservices.

What sets it apart?

1. Bounded Contexts

We use Domain-Driven Design (DDD) here. A “bounded context” is a delimited area within your domain in which certain terms and rules apply. In the Modular Monolith we map these contexts directly as modules in the code. An “Inventory” module takes care of stock levels. A “Billing” module takes care of invoices. The most important thing: data sovereignty. The Billing module may never directly execute SQL queries on the tables of the Inventory module. It must use the other module’s public API (an interface or service class).
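As a rough sketch (the names are illustrative, not a prescribed API), the public surface of such a Billing module can be as small as this:

// modules/billing/api/BillingService.interface.ts
// The only surface other modules may depend on – no tables, no ORM entities leak out.
export interface Invoice {
  id: string;
  customerId: string;
  totalCents: number;
}

export interface BillingService {
  createInvoice(customerId: string, totalCents: number): Promise<Invoice>;
  getOpenInvoices(customerId: string): Promise<Invoice[]>;
}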

2. Strict encapsulation

Every module is a black box. It has:

  • A public API: What other modules are allowed to call.
  • Internal implementation: Classes, helper methods and database logic that are invisible from the outside.

In languages like Java (via packages/modules), .NET (via projects) or TypeScript (via index.ts exports and ESLint rules) we can physically enforce these visibilities. If your IDE doesn’t offer autocompletion for another module’s internal classes, you’ve done it right.

3. The deadly enemy: Cyclic dependencies

In a bad monolith, module A calls module B, and B calls A. This leads to tight coupling that makes refactoring almost impossible. A Modular Monolith enforces an acyclic graph. This means: if Sales needs the Product module, Product must not know anything about Sales. If they do need to communicate, we use Domain Events (e.g. “OrderPlacedEvent”) to which other modules can react. This removes the direct call, but everything remains in the same process.

In summary: We build the logical boundaries of microservices while retaining the physical simplicity (one process, one DB connection) of the monolith.

Now let’s get concrete. Theory is nice, but in the end we have to ship code.

Implementation: How to force modularity

So what does this look like in the IDE? The most common mistake is to stick with the classic “layered architecture” (folders for Controllers, Services, Models). This is poison for modularity because it separates functionally related things.

We’ll switch to “Package by Feature” (or Component) instead.

1. The project structure

Your project root should not reflect technical layers, but rather your business domains. Here is an example of a clean structure (e.g. in TypeScript/Node.js or Go) that breathes modularity:

src/
  ├── modules/
  │   ├── billing/          # Bounded Context: Billing
  │   │   ├── api/          # PUBLIC: the only thing other modules may see
  │   │   │     ├── BillingService.interface.ts
  │   │   │     └── events.ts
  │   │   ├── internal/     # PRIVATE: the actual logic lives here
  │   │   │     ├── domain/
  │   │   │     ├── database/
  │   │   │     └── BillingServiceImpl.ts
  │   │   └── index.ts      # Exports ONLY contents of /api
  │   │
  │   ├── inventory/        # Bounded Context: Inventory
  │   │   ├── ...
  │   └── users/            # Bounded Context: Users
  │       ├── ...
  └── shared/               # The "kernel": logging, utils, base classes
The trick is the index.ts (or package-info.java / .csproj references): it acts as a gatekeeper. Anything in the internal folder will not be exported. If the Inventory module tries to import BillingServiceImpl directly, it will fail.
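A minimal sketch of such a gatekeeper, assuming the file names from the tree above and, for simplicity, a parameterless BillingServiceImpl:

// modules/billing/index.ts – the gatekeeper.
// Only the public API leaves the module; nothing from /internal is re-exported.
import type { BillingService } from "./api/BillingService.interface";
import { BillingServiceImpl } from "./internal/BillingServiceImpl";

export type { BillingService } from "./api/BillingService.interface";
export * from "./api/events";

// Wiring happens here (or in a composition root), so consumers only ever see the interface.
export const billingService: BillingService = new BillingServiceImpl();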

2. Architecture as Code: The Automated Police

We are all only human. Under time pressure, you “just quickly” import the other module’s database class to solve a problem. Bam, you’ve created coupling.

Trust is good, a CI pipeline is better. We use tools to check these rules on every build.

  • Java: ArchUnit is the gold standard. You write unit tests that check your architecture:
classes().that().resideInAPackage("..billing..")
    .should().onlyBeAccessed().byAnyPackage("..billing..", "..orders..");
  • TypeScript/JavaScript: ESLint helps here (e.g. with eslint-plugin-boundaries or dependency-cruiser). You can define rules like: “Imports from modules/billing/internal are prohibited everywhere except in modules/billing itself.” A sketch of such a rule follows after this list.
  • C#/.NET: NetArchTest offers similar functionality to ArchUnit.
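For the TypeScript case, a minimal dependency-cruiser configuration enforcing the rule quoted above could look like this. The rule name and paths are illustrative and assume the src/modules layout from earlier; in practice you would generalize it for every module:

// .dependency-cruiser.js – a sketch, not a complete configuration.
module.exports = {
  forbidden: [
    {
      name: "no-reaching-into-billing-internal",
      severity: "error",
      comment: "Other modules may only talk to billing via its public API (index.ts / api/).",
      from: { pathNot: "^src/modules/billing" },
      to: { path: "^src/modules/billing/internal" },
    },
  ],
};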

If you set up these tests, the build will break as soon as someone violates the module boundaries. This forces the team to go through the API or consciously adjust the architecture rather than accidentally undermining it.

3. Decoupling through events

Sometimes module A has to react when something happens in module B, without B knowing anything about A. Example: when a new user registers (‘Users’ module), a welcome email should be sent (‘Notification’ module). Instead of Users calling the NotificationService, it publishes an event: UserRegisteredEvent. The Notification module listens for it. All of this happens in-process (in memory), so it is extremely fast and doesn’t require a message broker like Kafka (although you could easily switch to one later).
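A minimal in-process sketch of this, with a hand-rolled event bus standing in for whatever mediator or framework you actually use:

// Typed in-memory event dispatching – no broker, no network, just a Map of handlers.
interface UserRegisteredEvent {
  userId: string;
  email: string;
}

type Handler<T> = (event: T) => void;

class InMemoryEventBus {
  private handlers = new Map<string, Handler<any>[]>();

  subscribe<T>(eventName: string, handler: Handler<T>): void {
    const list = this.handlers.get(eventName) ?? [];
    list.push(handler);
    this.handlers.set(eventName, list);
  }

  publish<T>(eventName: string, event: T): void {
    for (const handler of this.handlers.get(eventName) ?? []) {
      handler(event); // synchronous, same process – no network involved
    }
  }
}

// Wiring: the Notification module listens, the Users module publishes.
const bus = new InMemoryEventBus();

bus.subscribe<UserRegisteredEvent>("UserRegistered", (e) => {
  console.log(`Sending welcome mail to ${e.email}`); // stand-in for the Notification module
});

// Inside the Users module, after a successful registration:
bus.publish<UserRegisteredEvent>("UserRegistered", { userId: "42", email: "new@user.dev" });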

The Way Out – From Modular Monolith to Microservices (if necessary)

The strongest argument against the monolith is often: “But what if we become so successful that we have to scale?”

This is exactly where the Modular Monolith comes into its own. It is not a dead end, but a launching pad.

Imagine your startup explodes. Your ImageProcessing module is running the server hot while the rest of the application sits idle. In a “Big Ball of Mud” you would be lost now: everything is interwoven, nothing can be isolated.

In your Modular Monolith, however, you have already defined Bounded Contexts. You have clear APIs. You have no cyclic dependencies. To turn the ImageProcessing module into a true microservice you need to:

  1. Move the code folder to a new repository.
  2. Replace the in-process implementation behind the ImageProcessingService interface with a gRPC or REST client.
  3. Done.

The rest of your application barely notices the difference because the logical boundaries have always been there. So you don’t scale based on hype, you scale based on real metrics and pain. You only extract what really needs to be independent.
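Sketched in code (the interface and the endpoint are illustrative): the rest of the application keeps programming against the same interface, and only the implementation behind it is swapped for a remote client:

// The application keeps depending on this interface – it never changes.
export interface ImageProcessingService {
  createThumbnail(imageId: string): Promise<string>; // returns a URL, for illustration
}

// Before extraction: an in-process implementation in modules/image-processing/internal.
// After extraction: the same interface, now backed by an HTTP call to the new service.
export class RemoteImageProcessingClient implements ImageProcessingService {
  constructor(private readonly baseUrl: string) {}

  async createThumbnail(imageId: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/thumbnails`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ imageId }),
    });
    if (!res.ok) throw new Error(`Image service responded with ${res.status}`);
    const body = (await res.json()) as { url: string };
    return body.url;
  }
}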

Conclusion: Build for problems you actually have.

Software architecture is the art of delaying decisions for as long as possible without blocking progress. Building microservices from day one is premature optimization. A modular monolith gives you speed at the beginning and flexibility later.

Be pragmatic. Be lazy (in the best sense). And only build like Netflix when you are Netflix.