Why Your Operating System is Still Trapped in 1960
2026-02-04
What Operating Systems Actually Do
In my view, an OS does two things:
Mete out CPU time to processes
Warehouse a bunch of library code and driver code
Let’s tackle these one at a time.
The Time-Slicing Game
Issue (1) is often called “multi-tasking” or “threading.” Here’s what’s really happening: the OS simulates running multiple CPUs by chopping up the main CPU’s time into roughly-equal pieces to satisfy each app running on the machine.
If you have 10 apps running, each app gets about 1/10 of the CPU—meaning each app runs slower than it could if it owned its own CPU. Then there’s the slice of time used up by the OS itself, so let’s say each app only gets 1/11 of the available time. Roughly.
If any app asks to do something that is very slow (relative to how fast the CPU runs), the OS pushes the app aside and makes it wait—that’s “blocking.” The OS then doles out the remaining time to other apps. Depending on the situation, some apps might get lots more CPU time, but never more than 100% of the CPU (less, of course, the time used up by the OS itself).
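The slicing-and-blocking loop described above can be sketched in a few lines. This is a toy round-robin scheduler, with generators standing in for apps and a deque standing in for the ready queue; it is an illustration of the idea, not how any real kernel is implemented:

```python
# Toy round-robin scheduler: each "app" is a generator that yields
# whenever its time slice ends (or it would block on something slow).
from collections import deque

def app(name, steps):
    for i in range(steps):
        yield f"{name} step {i}"   # yield = "my slice is up"

def scheduler(apps):
    ready = deque(apps)
    log = []
    while ready:
        current = ready.popleft()      # pick the next app in line
        try:
            log.append(next(current))  # give it one time slice
            ready.append(current)      # push it to the back of the queue
        except StopIteration:
            pass                       # app finished; drop it
    return log

log = scheduler([app("A", 2), app("B", 2)])
# The apps interleave: A, B, A, B
```

Each yield is the moment the "OS" pushes the app aside; the app goes to the back of the queue and waits for its next slice.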
All sorts of hand-wringing go into making this work faster, with concepts like “multi-core.” The design of multi-core is gradually backing into what we wanted in the first place—it’s just gummed up with biases from the 1960s approach, where CPU time is sliced up and processes all share the same memory.
The Million-Dollar CPU Problem
What we really wanted was to use a zillion CPUs in our hardware and computers. But when CPUs cost $1,000,000.00 each, we couldn’t afford to do this.
Today, though, CPUs cost around $5.00.
Yet we’re still using the same-old-same-old thinking. Why not simulate multiple CPUs that have private memory, instead of simulating CPUs that share memory? Good question. That’s what we would have done if we could have afforded to build multi-CPU computers in the first place.
We would have built development systems and operating systems that simulated multi-CPU computers, each with private memory. Instead, we chose to simulate multi-app computers wherein all the apps share the same memory (and call stack).
We turned right at Albuquerque (ref. Bugs Bunny) and ended up with multi-tasking based on time-slicing and memory sharing. We’ve been gradually backing out of this approach by applying band-aids on 1960s hardware and software ideas, but our operating systems are still laced with extra code and hurdles like “thread safety”—when that kind of thing no longer matters if you have multiple CPUs each with private memory.
Let’s call them DPUs instead of CPUs—Distributed Processing Units—to underline this subtle shift in perspective.
The Hidden Blocking in Your Code
Yes, the word “blocking” is commonly used for the “push aside” action that OSs perform. But that usage papers over the fact that the ordinary functions we were taught back in grade school—f(g(x))—pepper our code with low-level blocking in ad-hoc ways.
That doesn’t matter when functions are written down on paper where the speed of light doesn’t come into play. But it does matter when we choose to use functions to write code for CPU-based hardware and computers.
That innocuous-looking expression f(g(x)) actually performs blocking (pausing) when run on a computer! f calls g and waits for an answer before proceeding. That’s blocking.
When we do too much of that kind of thing, we need an OS to step in and preempt our code, lest we use up more than our share of time on the CPU. f(g(x)) is implemented on a CPU using CALL and RETURN opcodes. The code would actually run faster if it were inlined instead of executing CALL and RETURN opcodes, but inlined code takes up more space in memory.
Again, back in 1960, memory costs were ultra-high, so we didn’t have any problem with the idea of mapping f(g(x)) onto CPUs while wasting CPU time by executing CALL and RETURN instructions. And maybe worse, CALL and RETURN use shared memory—the call stack.
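The contrast can be sketched as a toy example. The first form blocks at the call site; the second hands g's output to f as a message, so nothing waits on a shared call stack. Queue here is just a stand-in for a message channel, not a proposal for a real runtime:

```python
# Blocking composition vs. message-passing composition (toy illustration).
from queue import Queue

def g(x):
    return x * 2

def f(y):
    return y + 1

# Synchronous, blocking composition: f(g(x)) waits at each CALL/RETURN.
blocking_result = f(g(10))             # f is stalled until g returns

# Message-passing composition: g "mails" its output into f's inbox;
# f consumes the message whenever it gets around to it.
inbox_f = Queue()
inbox_f.put(g(10))                     # g finishes and sends its result onward
messaging_result = f(inbox_f.get())    # f picks up the message later
```

Both paths compute the same answer; the difference is who waits, and on what. In the second form the sender is free to go do something else the moment the message is posted.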
All of this is OK if your code is meant only to run sequentially, like a calculator that crunches some numbers and returns a final answer, or a ballistics equation solver that calculates how to take wind and gravity into account so that an explosive charge can be dropped right onto your head without missing.
But today, we want to solve lots of newer kinds of problems that aren’t that straightforward—like how to get computers on one continent to communicate with computers on another continent (i.e., the “internet”) without using shared memory or a shared call stack.
The Synchronous Trap
We chose to use synchronous operation and synchronous thinking and problem-solving back in the 1960s when it didn’t appear to make much difference. We already had a tried-and-true notation (math equations) for thinking this way. We chose to re-use those old-fashioned ideas on CPUs-of-the-day without thinking ahead to a time when CPUs would be spread across continents instead of all being inside the same box with zillions of wires between them.
That old choice is now haunting us. We waste time inventing new band-aids so that we can keep using math equations for our program code. We can do it, but we could be spending that brain-time thinking about better things instead.
Libraries: LEGO Parts or Clockwork Gears?
As for the code warehouse issue (2), well, our mindset has painted us into a corner.
We knew that we would be using operating systems, and we knew that operating systems wanted us to write code as math functions, so we wrote our libraries that way. That innocuous choice caused a self-fulfilling prophecy. When we wrote library code as functions, we forced the issue and required the use of operating systems so that our library code (functions) would work.
We wanted LEGO parts, but we created clockwork gears.
Because in the 1960s we couldn’t afford to build computers with lots of CPUs in them, we didn’t notice that code libraries written as functions were not LEGO parts. The synchronous mindset of the one-CPU approach made functions look like LEGO parts when in fact they were little clockwork gears.
Now that we’re moving into the realm of more-than-one-CPU, we are discovering that little clockwork gears are not LEGO parts. We end up building fragile software that is intricately wired together, like a Swiss watch or an automobile gearbox without a clutch.
Can We Move Forward Without Moving Backward?
The big question now is whether we can move forward without moving backward. Can we use what we’ve got, or do we need to start from scratch?
The “giants” of yesteryear figured out how to use a rat’s nest of electronics to create something new—opcodes to form narrow waists for the technologies of the day. Can we do something similar?
I think so. In fact, I am sure that we can move forward by gently tweaking our ingrown mindset.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html

