Imagine our SAGA Orchestrator (Saga Pattern Part 3) running at full speed. Customers are ordering like crazy. Suddenly the Inventory Service becomes slow. Maybe it’s due to a database lock or poor deployment.
What happens without a protective mechanism?
- The orchestrator sends a request to the Inventory Service.
- It waits for a response… and waits… and waits.
- While it waits, the thread is blocked.
- New orders come in, spawning new threads that also end up waiting.
- Within seconds, all orchestrator threads are used up.
- Result: the orchestrator crashes or becomes unavailable. Even though only the inventory has a problem, the entire ordering system is now down.
This is called Cascading Failure.
To prevent this, we need a fuse. Just like the fuse box in your home: if too much current flows (or here: too many errors occur), we immediately cut the line to protect the house (our orchestrator) from burning down.
The Concept – The Three States
The Circuit Breaker Pattern works as a state machine that sits between the caller (Orchestrator) and the target (Inventory Service).
It knows three states:
- CLOSED (Normal Operation):
- Current flows. Requests are forwarded to the service as normal.
- In the background, the breaker keeps count: "Was the request successful, or did it end in an error/timeout?"
- OPEN (The fuse has tripped):
- Once a configured error threshold is exceeded (e.g. 50% of requests fail), the breaker switches to OPEN.
- Fail Fast: requests are no longer forwarded to the external service. Instead, the breaker immediately throws an exception or calls a fallback method. This takes load off the broken service and protects the caller's resources.
- HALF-OPEN (The Test Run):
- After a defined waiting period, the breaker lets a small number of "test requests" through.
- Do they succeed? Great -> back to CLOSED.
- Do they fail again? Bad luck -> back to OPEN (the waiting period starts over).
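Before reaching for a library, the state machine above can be sketched in a few lines of plain Java. This is a simplified illustration under our own assumptions (consecutive-failure counting and the class name `SimpleCircuitBreaker` are ours), not how Resilience4j works internally — it uses a sliding window of call results:

```java
import java.time.Duration;
import java.time.Instant;

// Minimal circuit breaker sketch: trips after N consecutive failures and
// forwards calls again in HALF-OPEN after the waiting period has passed.
public class SimpleCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failureCount = 0;
    private final int failureThreshold;
    private final Duration openDuration;
    private Instant openedAt;

    public SimpleCircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    // Returns true if the call may be forwarded to the target service.
    public synchronized boolean allowRequest() {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, Instant.now()).compareTo(openDuration) >= 0) {
                state = State.HALF_OPEN; // waiting period over: allow test requests
                return true;
            }
            return false; // fail fast: reject immediately
        }
        return true; // CLOSED or HALF_OPEN: let the call through
    }

    public synchronized void recordSuccess() {
        failureCount = 0;
        state = State.CLOSED; // a successful test request closes the breaker again
    }

    public synchronized void recordFailure() {
        failureCount++;
        if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
            state = State.OPEN; // trip the fuse
            openedAt = Instant.now();
        }
    }

    public synchronized State getState() { return state; }
}
```

The caller wraps every outbound request in `allowRequest()` and reports the outcome back via `recordSuccess()` / `recordFailure()` — exactly the bookkeeping the library will do for us below.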
The Implementation – Resilience4j in Action
Enough theory. What does this look like in our Spring Boot Orchestrator? We use Resilience4j, the current industry standard for resilience in Java.
First we add the dependency (in pom.xml):
```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-circuitbreaker-resilience4j</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
```
Let’s assume our Orchestrator has a component that calls the Inventory Service (e.g. via a REST client or RabbitMQ RPC). We secure this method:
```java
@Service
public class InventoryClient {

    private final RestTemplate restTemplate;

    public InventoryClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Name of the circuit breaker instance: "inventory"
    @CircuitBreaker(name = "inventory", fallbackMethod = "reserveInventoryFallback")
    public boolean reserveInventory(String orderId, String sku) {
        // This call could be slow or fail
        ResponseEntity<String> response = restTemplate.postForEntity(
                "http://inventory-service/reserve",
                new ReservationRequest(orderId, sku),
                String.class
        );
        return response.getStatusCode().is2xxSuccessful();
    }

    // FALLBACK METHOD
    // Must have the same signature + an exception parameter
    public boolean reserveInventoryFallback(String orderId, String sku, Throwable t) {
        System.out.println("Circuit Breaker OPEN! Inventory Service is down. Reason: " + t.getMessage());
        // What do we do now?
        // Option A: return 'false' so the orchestrator knows it didn't work.
        // Option B: throw a specific exception that triggers a compensation.
        return false;
    }
}
```
What’s happening here?
If the inventory-service is down and the circuit breaker switches to OPEN, reserveInventory no longer runs at all. Instead, Spring jumps directly into reserveInventoryFallback. The orchestrator immediately gets false back without being blocked.
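Option B from the fallback comments deserves a sketch of its own: instead of returning false, the fallback throws a dedicated exception, and the orchestrator catches it to start the compensation flow. Everything here is a hedged illustration with hypothetical names (`InventoryUnavailableException`, `compensate()`, `OrderOrchestrator`) — the Spring wiring is replaced by a plain `BiPredicate` so the idea stands on its own:

```java
import java.util.function.BiPredicate;

// Hypothetical domain exception signalling that inventory could not be reserved
// (e.g. thrown by a fallback while the circuit breaker is OPEN).
class InventoryUnavailableException extends RuntimeException {
    InventoryUnavailableException(String reason) { super(reason); }
}

public class OrderOrchestrator {
    // Stand-in for the real InventoryClient: (orderId, sku) -> reservation succeeded?
    private final BiPredicate<String, String> inventoryClient;

    public OrderOrchestrator(BiPredicate<String, String> inventoryClient) {
        this.inventoryClient = inventoryClient;
    }

    // Returns the resulting order status: "CONFIRMED" or "COMPENSATED".
    public String placeOrder(String orderId, String sku) {
        try {
            if (!inventoryClient.test(orderId, sku)) {
                throw new InventoryUnavailableException("reservation rejected");
            }
            return "CONFIRMED";
        } catch (InventoryUnavailableException e) {
            compensate(orderId); // e.g. refund the payment, mark the order as FAILED
            return "COMPENSATED";
        }
    }

    private void compensate(String orderId) {
        System.out.println("Compensating order " + orderId);
    }
}
```

The advantage of Option B over a bare false is that the exception can carry context (which step failed, and why), which the SAGA orchestrator needs to pick the right compensation.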
The Configuration – Fine-Tuning
We configure exactly when the breaker trips in the application.yml. Resilience4j is extremely flexible here.
```yaml
resilience4j:
  circuitbreaker:
    instances:
      inventory:
        registerHealthIndicator: true
        slidingWindowSize: 10                     # look at the last 10 requests
        minimumNumberOfCalls: 5                   # at least 5 requests needed for the calculation
        failureRateThreshold: 50                  # at a 50% failure rate -> OPEN
        waitDurationInOpenState: 5s               # wait 5s before trying HALF-OPEN
        permittedNumberOfCallsInHalfOpenState: 3  # 3 test requests in HALF-OPEN
        automaticTransitionFromOpenToHalfOpenEnabled: true
```
This means: once at least 5 calls have been recorded, the breaker opens as soon as 50% or more of the requests in the window of the last 10 fail. For 5 seconds, all requests are rejected immediately. It then lets 3 test requests through.
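The threshold math can be checked with a few lines of plain Java. This is a simplified model of the count-based sliding window (the class and method names are our own, not the Resilience4j API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Count-based sliding window: remembers the outcome of the last N calls and
// reports whether the failure rate crosses the configured threshold.
public class SlidingWindow {
    private final Deque<Boolean> outcomes = new ArrayDeque<>(); // true = failure
    private final int size;
    private final int minimumCalls;
    private final double failureRateThreshold; // in percent

    public SlidingWindow(int size, int minimumCalls, double failureRateThreshold) {
        this.size = size;
        this.minimumCalls = minimumCalls;
        this.failureRateThreshold = failureRateThreshold;
    }

    public void record(boolean failure) {
        outcomes.addLast(failure);
        if (outcomes.size() > size) {
            outcomes.removeFirst(); // forget the oldest call
        }
    }

    public boolean shouldOpen() {
        if (outcomes.size() < minimumCalls) {
            return false; // not enough data yet (minimumNumberOfCalls)
        }
        long failures = outcomes.stream().filter(f -> f).count();
        double rate = 100.0 * failures / outcomes.size();
        return rate >= failureRateThreshold;
    }
}
```

With the values from our application.yml (window 10, minimum 5, threshold 50%), four straight failures are not enough to trip the breaker — the fifth recorded call is what first allows the rate to be evaluated.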
Conclusion
The Circuit Breaker Pattern is like an airbag for your microservices. It doesn’t prevent the accident (the failure of the external service), but it prevents the accident from being fatal to your entire system.
In combination with the SAGA Orchestrator, it ensures that the orchestrator remains stable and can handle errors cleanly even when the surrounding services descend into chaos.
![Microservices Resilience: The Circuit Breaker Pattern with Spring Boot & Resilience4j](/images/Curcuit-Breaker-Pattern-BlogHeader.jpeg)