Why Are Functions Less Reliable Than Hardware?
2026-02-04
I’ve spent years trying to articulate something that nagged at me throughout my career: the electronic circuits I designed were consistently more reliable than the software I wrote. It wasn’t about my skill level—I was equally competent at both. The difference was structural.
The problem is functions themselves.
The Vocabulary Problem
I’ve struggled to talk about this clearly. I used the phrase “function-based” to distinguish it from “functional programming” (FP), but people found it confusing. “Why two phrases when we only need one?” Fair point. But we do need the distinction, because the issue isn’t about FP specifically—it’s about the fundamental nature of subroutines with parameters, values, and returns (PVR).
I grew up on Lisp, C, and assembler. For the longest time, I believed PVR subroutines were stone-tablet truth—as fundamental as arithmetic. That’s what everyone believed. That’s what we were taught.
But something didn’t add up.
The Blocking Problem
Functions have an inherent bias: they block by definition. When you call a function, the caller suspends and waits for the callee to return. This seems normal—until you realize it’s not inevitable.
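To make the claim concrete, here is a minimal sketch (mine, not the author's) of what "blocking by definition" means: the caller is suspended for the entire duration of the callee, whether or not it needs the result right away. The function name and the sleep duration are illustrative only.

```python
import time

def slow_callee():
    # Simulate a callee that takes a while (I/O, computation, etc.)
    time.sleep(0.2)
    return 42

start = time.time()
result = slow_callee()          # the caller is suspended here until return
elapsed = time.time() - start

# The caller could do nothing else during those 0.2 seconds.
print(f"caller waited {elapsed:.2f}s for result {result}")
```

Nothing in the hardware forces this suspension; it is a property of the call/return protocol itself.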
This blocking behavior cascades into everything:
We need operating systems to manage the blocking
Operating systems introduce complexity (think: 55 million lines of code where there used to be none)
Code libraries inherit blocking behavior, making them anti-LEGO—they don’t compose cleanly
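For contrast, here is one possible sketch (my own illustration, not the author's design) of the non-blocking alternative: independent components wired together by queues, the way circuit parts are wired by signals. A sender puts a message and continues; it never waits for a reply. The component shapes and names here are hypothetical.

```python
import queue
import threading

def component(inbox, outbox, transform):
    # An independent part: consumes input messages, emits output messages.
    # It never suspends its senders; they just enqueue and move on.
    while True:
        msg = inbox.get()
        if msg is None:             # shutdown signal, forwarded downstream
            outbox.put(None)
            return
        outbox.put(transform(msg))

a_in, a_out, b_out = queue.Queue(), queue.Queue(), queue.Queue()

# Compose two parts LEGO-style by wiring a's output to b's input.
threading.Thread(target=component, args=(a_in, a_out, lambda x: x + 1)).start()
threading.Thread(target=component, args=(a_out, b_out, lambda x: x * 2)).start()

for n in (1, 2, 3):
    a_in.put(n)                     # send-and-continue: no waiting on a reply
a_in.put(None)

results = []
while (m := b_out.get()) is not None:
    results.append(m)
print(results)                      # [4, 6, 8]
```

Composition here is wiring, not calling: neither component knows or cares who produced its inputs, which is exactly what the function call/return discipline makes difficult.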
Remember cartridge-based games from the 1980s? They didn’t have these problems. A cartridge just took over the machine and did what it wanted. Simple. No OS. No blocking. Just: here’s the hardware, go.
We replaced that simplicity with software containing tens of millions of lines of code just to manage the blocking problem that functions created.
When We Got It Right (Sort Of)
Bizarrely, there are examples where we got closer to the truth:
Apple and PostScript printers: They stuck CPUs into printers and loaded PostScript interpreters onto them. The printer was its own thing, not a blocking subroutine of your computer.
IBM hardware channels: Even earlier, IBM understood this. Separate processors handling I/O independently.
GPUs: They should work this way, but their design is pre-polluted by decades of “everything is a function” thinking.
We can’t just go back to the drawing board and start from scratch. But it’s pleasantly surprising to see what we can do with what we’ve got, once we recognize the pattern.
The Question
Here’s what I’m asking: What if the reliability difference I observed—circuits over software—wasn’t about digital versus analog, but about non-blocking versus blocking?
What if functions aren’t fundamental truth after all, but rather a historical accident we’ve been building on for 50+ years?
We can’t unring the bell. But we can start recognizing the bias and designing around it.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html