Code Reviews That Don’t Annoy: From Bean Counting to Mentoring

Julian | Jan 2, 2026

Why We Hate Code Reviews (and Why It’s Wrong)

The code is written, the unit tests are green, and you are proud of your solution. You create the pull request (PR), push it into the team channel and move on to the next task.

Then… nothing happens at first. A day later. Still nothing. Two days later, a notification finally pops up. A comment! You click on it full of anticipation, hoping for an “LGTM” (Looks Good To Me) or an exciting architecture discussion.

Instead you read: “Please insert a blank line here” or “Why didn’t you use the map() function?”.

Frustrating, right?

Scenarios like this are why many teams perceive code reviews as a necessary evil, a brake or even toxic gatekeeping. It feels like a test where the reviewer is trying to catch the author making mistakes to prove their own competence.

This is a massive waste of talent and time.

A code review should never be an exam. It is the most powerful tool we have to break knowledge islands (knowledge sharing), establish shared ownership (the code belongs to all of us, not just the author) and become better engineers together.

If your team finds reviews a pain, then there’s something wrong with the process - not with the people. In this article, I’ll show you how we turn the process from “bean counting” to real mentoring. And that begins, paradoxically, with us first removing people from the process as much as possible.

Let robots do the dirty work (automation)

Before we talk about how we give feedback, we need to clarify what we’re even talking about.

Here’s a golden rule for every senior software engineer: Never discuss in a code review things that a computer can find automatically.

There is nothing more unproductive than a highly paid engineer pointing out to another engineer that a semicolon is missing at the end of a line or that the indentation is incorrect. This is a waste of cognitive capacity.

Your review process doesn’t start when you open the PR. It starts in the CI (Continuous Integration) pipeline.

Before a human even looks at the code, the pipeline must be green. And this pipeline should include:

  • Formatter: Tools like Prettier (JS/TS), Black (Python) or gofmt end any discussion about style. The code always looks the same. Period.
  • Linter: ESLint, Checkstyle or SonarQube find unused variables, dead code paths and potential bugs.
  • Tests: Unit and integration tests must pass.
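As a sketch, such a gate could be wired up in a GitHub Actions workflow like the following (the job name and the exact commands are assumptions; swap in whatever formatter, linter and test runner your stack uses):

```yaml
# Hypothetical CI sketch: formatter, linter and tests gate every PR
# before a human ever looks at it.
name: ci
on: [pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx prettier --check .   # formatting: ends all style debates
      - run: npx eslint .             # linting: unused imports, dead code
      - run: npm test                 # unit and integration tests must pass
```

Once this is in place, a PR with a style violation never reaches a reviewer in the first place.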

If you write in a review: “Please remove the unused import here”, then that is actually an error in the process. Why didn’t the pipeline abort the build?

By delegating this janitorial work to tools, we make room for the essentials: architecture, logic and comprehensibility. A reviewer should not feel like a spell checker, but like a co-architect.

For the reviewer – questions instead of orders

Once the tools have done their part, it’s your turn. Now it’s about psychology.

Text communication is difficult. Facial expressions and tone of voice are missing. A quickly typed “This is wrong” can trigger immediate resistance in the recipient (who may have invested hours in the solution).

A senior software engineer knows: The goal is not to be right, but to find the best solution. And often the author has a reason for their implementation that you just don’t see yet.

Therefore, the most important rule is: Ask questions instead of barking orders.

Look at the difference:

Bad (Command): “Rename this variable to customerList.”

Effect: Authoritarian. It implies that the author made a mistake.

Good (Question/Suggestion): “The list variable is a bit generic here. What do you think of customerList to make the context clearer?”

Effect: Cooperative. It invites discussion and leaves room for the author to explain their decision (or accept the suggestion).

Blocking vs. Non-Blocking (Nitpicking)

Another powerful tool is explicitly labeling your comments. Not every comment is equally important.

Make a clear distinction between:

  1. Blocking (Must-Change): Something is really broken here. A bug, security hole, or major architectural violation. Without this fix, the code cannot be merged.
  • Example: “There is no null check here; this will crash.”
  2. Nitpicks / Optional (Nice-to-have): This is your personal preference or a small suggestion that makes the code “nicer” but is not functionally critical.
  • Mark them explicitly: “(nitpick) You could write this in one line, but your version is okay too.”
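To make the distinction concrete, here is a minimal sketch of the kind of code a blocking comment should catch (the `applyDiscount` function and its types are hypothetical, not from any real codebase):

```typescript
// Hypothetical example of a crash that warrants a blocking comment.
interface Order {
  total: number;
  coupon?: { percent: number }; // may be absent
}

// Before the fix, this line would crash whenever order.coupon is missing:
//   return order.total * (1 - order.coupon.percent / 100);

// After the blocking comment is addressed: guard against the missing coupon.
function applyDiscount(order: Order): number {
  if (!order.coupon) {
    return order.total; // no coupon, no discount
  }
  return order.total * (1 - order.coupon.percent / 100);
}
```

A nitpick on the same function, by contrast, might suggest collapsing the guard into a ternary, and the author is free to ignore it.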

By marking comments as “Optional” you take a lot of pressure off the author. You show them: “I’ve read your work and thought about it, but I’m not blocking you over small things.”

This is how Psychological Safety is built. Your team will stop fearing your reviews and start valuing them.

What we should really pay attention to (The Checklist)

When the linter is happy, your real work begins. But where are you looking? It’s easy to get lost in details. That’s why it helps to have a mental (or physical) checklist that focuses on maintainability and architecture.

Here is the hierarchy of what really counts in a review:

1. Readability

Code is read much more often than it is written. Your most important question as a reviewer is therefore not “Does this work?”, but “Can I understand this in 30 seconds?”.

  • Naming: Do the variable names say what they are, or do I have to guess? (data vs. pendingOrders).
  • Complexity: If a method has 50 lines and three nested if statements, the Cognitive Load is too high. Suggest breaking it up.
  • Comments: Do comments explain the why (business logic) or the how (what the code already says)? The latter is unnecessary noise.
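The naming and complexity points can be illustrated with a small sketch (all names hypothetical): a generic name plus nested conditions versus a function whose name and shape answer the “30 seconds” question by themselves.

```typescript
interface Order {
  id: string;
  status: "pending" | "shipped" | "cancelled";
}

// Before (hypothetical): generic name, nested conditions, high cognitive load.
// function process(data: Order[]): string[] {
//   const result: string[] = [];
//   for (const d of data) {
//     if (d) {
//       if (d.status === "pending") {
//         result.push(d.id);
//       }
//     }
//   }
//   return result;
// }

// After: the name says what the list is, and the logic reads in one pass.
function collectPendingOrderIds(orders: Order[]): string[] {
  return orders
    .filter((order) => order.status === "pending")
    .map((order) => order.id);
}
```

A comment on the refactored version should now only explain *why* pending orders are collected here (the business rule), not *how* the filtering works.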

2. Architecture & Design

Here you protect the structure of your application.

  • Separation of Concerns: Does this service do too much? Does the controller call the database directly? (We remember the “Modular Monolith”: respect boundaries!).
  • Reusability: Was code copied here that already exists as a utility function? DRY (Don’t Repeat Yourself) is important, but be careful not to fall into “premature optimization.”
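The “controller calls the database directly” smell can be sketched like this (layer and type names are hypothetical): the controller only knows the service, and the persistence detail hides behind an interface.

```typescript
// Sketch of layer boundaries. The controller never touches persistence;
// it only talks to the service, which talks to a repository interface.
interface UserRepository {
  findName(id: string): string | undefined; // database detail lives behind this
}

class UserService {
  constructor(private readonly repo: UserRepository) {}

  greet(id: string): string {
    const name = this.repo.findName(id);
    return name ? `Hello, ${name}!` : "Hello, stranger!";
  }
}

class UserController {
  constructor(private readonly service: UserService) {}

  // An HTTP handler would call this; it contains no persistence logic.
  handleGreet(id: string): string {
    return this.service.greet(id);
  }
}

// In-memory fake so the sketch runs without a real database.
const controller = new UserController(
  new UserService({ findName: (id) => (id === "42" ? "Ada" : undefined) })
);
```

If a PR lets `UserController` import the database client directly, that is exactly the boundary violation a reviewer should flag.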

3. Tests as documentation

Tests are the only documentation that cannot lie.

  • Look at the tests before the actual code. Do you understand what the feature is supposed to do based on the tests?
  • Do they also cover the “Unhappy Paths”? (What happens if the API throws a 500?).
  • A test that mocks everything tests nothing. Make sure that sensible scenarios are examined and that the code coverage is not artificially inflated.

4. The “Existence Question”

The best line of code is the one you don’t have to write. Sometimes the most valuable feedback is: “Do we really need this to be that complex? Is there a library that already does this? Or does this solve a problem we don’t even have?” A senior software engineer dares to ask whether the feature actually delivers the desired business value or whether there is an easier way.

For the author – Help your reviewers (PR hygiene)

There is an unwritten law in software development:

  • 10 lines of code: 10 comments (“A space is missing here”, “Rename variable”).
  • 500 lines of code: “LGTM” (Looks Good To Me).

Why is that? Because at 500 lines the reviewer’s head switches off. The cognitive load is too high. Big PRs paradoxically lead to worse reviews and more bugs slipping through.

As an author, it is your responsibility to make the review as painless as possible. Here’s your checklist for “good PR hygiene”:

1. Context is King

An empty PR body is disrespectful to your colleagues’ time. Always write a description. Answer two questions:

  • WHAT does this PR do? (Rough summary).
  • WHY are we doing this? (Link to Jira ticket, business context).

If it’s a frontend change: Add screenshots or a GIF. An image explains layout changes faster than 50 lines of CSS.
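A minimal PR description template along these lines might look like this (section names are a suggestion, not a standard):

```markdown
<!-- Hypothetical PR template answering the WHAT and WHY -->
## What
Short summary of the change, e.g. "Add pagination to the orders endpoint".

## Why
Link to the ticket and the business context.

## Screenshots (for UI changes)
Before / after images or a short GIF.
```

Most Git hosting platforms let you check such a template into the repository so it pre-fills every new PR.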

2. Keep it small (Atomic PRs)

Try to keep PRs as small as possible. A feature is too big? Break it down:

  1. PR 1: Database schema update.
  2. PR 2: Backend service logic.
  3. PR 3: Frontend integration.

This is quicker to review, quicker to merge and easier to revert in the event of an error.

3. The “Self-Review”

This is the ultimate pro tip. Before adding someone as a reviewer, go through the diff view of your own PR. Read your code as if you were a stranger. You’ll be surprised how many console.log statements, commented-out code corpses or obvious typos you find yourself. Your colleague shouldn’t have to find mistakes you can correct yourself. This saves time and makes you look more competent.

Conclusion – A cultural change

At the end of the day, a code review is a reflection of your engineering culture.

If reviews hurt, it’s a sign of a lack of trust or inefficient processes. On the other hand, if reviews are fun (yes, that’s possible!), then you’ve created an environment where everyone wants to learn and grow.

The shift from “nitpicking” to “mentoring” doesn’t happen overnight. But you can start today:

  1. Set up a linter that ends the discussion about syntax.
  2. Phrase your next comment as a question, not a command.
  3. Create your next PR so that it is a gift to the reviewer, not a burden.

Good code reviews make the code better. Great code reviews make the software engineer better.