Insights
Technology·November 2025·6 min read

The case for boring technology in critical systems

There's a pull in software engineering towards the new. New frameworks, new languages, new infrastructure tools. And honestly, most of that experimentation is healthy. The ecosystem moves forward because people try new things.

But when the system you're building handles patient referrals, or processes benefit claims, or runs a payment gateway — the calculus changes. In those contexts, boring technology isn't a compromise. It's a deliberate choice.

What we mean by boring

Boring technology is technology that's been around long enough to have well-known failure modes. PostgreSQL, not the database that launched on Product Hunt last month. React, not the rendering library with 400 GitHub stars and a compelling README. AWS services that have been GA for five years, not the ones still in preview.

Boring doesn't mean old or bad. It means predictable. When something goes wrong at 2am — and it will — you want to be searching Stack Overflow, not filing a GitHub issue and hoping the maintainer is awake.

The hidden cost of interesting choices

Every novel technology you introduce carries a tax. Your team needs to learn it. Your hiring pool shrinks. Your debugging time increases because fewer people have hit the same problems before.

On a side project, that's fine. On a system that processes 50,000 transactions a day or holds sensitive personal data, it's a risk you need to justify.

We've inherited projects where someone chose a trendy state management library that the original developer understood and nobody else did. Or a deployment pipeline built on a tool that changed its API twice in six months. The technology was interesting. Maintaining it was not.

Where this applies most

Healthcare systems, government platforms, financial services. Anything where the consequences of failure extend beyond a bad user experience. If a payment reconciliation job fails silently because of an edge case in a library you chose for its elegant API, that's real money and real regulatory exposure.
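The failure mode above can be sketched in a few lines. This is a hypothetical illustration, not a real reconciliation system: the function names, transaction IDs, and amounts are invented. The point is the difference between a job that swallows errors and one that fails loudly enough for monitoring to notice.

```python
def reconcile(ledger: dict[str, int], gateway: dict[str, int]) -> list[str]:
    """Return transaction IDs whose amounts disagree between the two systems."""
    mismatches = []
    for txn_id, amount in ledger.items():
        if gateway.get(txn_id) != amount:
            mismatches.append(txn_id)
    return mismatches


def run_job_silently(ledger: dict[str, int], gateway: dict[str, int]) -> list[str]:
    # Anti-pattern: any error is swallowed, so the job "succeeds"
    # without actually reconciling anything. Nobody gets paged.
    try:
        return reconcile(ledger, gateway)
    except Exception:
        return []


def run_job_loudly(ledger: dict[str, int], gateway: dict[str, int]) -> list[str]:
    # Preferred: a mismatch raises, so the failure surfaces in
    # monitoring instead of quietly costing money.
    mismatches = reconcile(ledger, gateway)
    if mismatches:
        raise RuntimeError(f"Reconciliation mismatch: {mismatches}")
    return mismatches
```

The boring version is the loud one: it does less, and every failure path is visible.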

In these environments, we default to the most boring option that meets the requirements. PostgreSQL over a newer alternative. Server-rendered pages over a complex SPA when the use case is simple. REST over GraphQL when the data model is straightforward.

This doesn't mean we never use newer tools. We use them all the time — when the problem genuinely calls for it. But the burden of proof is on the new thing, not on the proven one.

How we think about it

Before introducing any technology into a project, we ask a few questions. Can we hire for it? Has it been in production somewhere for at least two years? If the primary maintainer disappears tomorrow, can we still operate the system? If the answer to any of those is no, we need a very good reason to proceed.

For the systems our clients depend on, reliability beats novelty every time. The best technology choice is the one nobody notices because it just works.

Want to discuss this?

If any of this is relevant to what you're building, we're happy to talk it through.

Get in touch