The Marketing of C
2025-10-05
Opening: The Side-Effects Tell
When your programming language calls normal operations “side effects,” pay attention. That’s the paradigm showing its bias.
Sending a message to another process? Side effect. Writing to a file? Side effect. Updating a sensor? Side effect. The term itself is dismissive - these aren’t side effects; they’re often the point. But in the function-based worldview, anything that isn’t “input parameters → return value” is peripheral, suspect, impure.
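To see how loaded the term is, here is a minimal POSIX sketch: write() has the full shape of a function - parameters in, value out - yet nobody calls it for the return value. The “effect” is the entire point.

```c
#include <unistd.h>

/* write() is officially a function: parameters in, ssize_t out.
   But the return value is just a status code; the reason to call it is
   the so-called side effect - bytes appearing on the file descriptor. */
int main(void)
{
    ssize_t n = write(1, "hello\n", 6);  /* the effect is the purpose */
    (void)n;                             /* the return value, routinely ignored */
    return 0;
}
```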
K&R C didn’t worry much about this. But watch what happened: modern C added more type checking, more constraints, more gear-teeth that must mesh precisely. The paradigm hardened.
Programs became more like Swiss watches - intricate, synchronized, fragile.
This reveals what C actually is: not low-level, but opinionated.
In this article I discuss C specifically, but much of the discussion applies to a whole host of programming languages built on the same foundation: the synchronous, sequential paradigm.
What C Actually Is
So what is C, actually?
C is a paradigm that treats everything as a function. Not a low-level language - a choice about how to organize code.
Consider the simplest unit: a basic subroutine. Just a sequence of instructions. One piece of code calls it, it runs, it returns. No parameters flying in, no values flying out. Just: do this thing, then come back.
This is what hardware actually gives you. CALL and RETURN. The rest is convention.
Assembly programmers knew this. If you needed to pass data to a subroutine, you put it somewhere - a register, a known memory location - and the subroutine knew where to look. If you needed data back, same deal. Flexible, explicit, cheap. The overhead was only what you actually needed.
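Here is a sketch of that older convention, written in C syntax purely for readability - the shared location and the names are invented for illustration:

```c
/* The pre-C convention, rendered in C syntax for readability:
   caller and callee share a known location instead of formal parameters. */
int scratch;          /* the agreed-upon spot: caller stores, callee reads */

void double_it(void)  /* no parameters, no return value - just do the thing */
{
    scratch = scratch * 2;
}

int main(void)
{
    scratch = 21;     /* put the data where the subroutine expects it */
    double_it();      /* CALL ... RETURN */
    /* the result is wherever we agreed: scratch is now 42 */
    return 0;
}
```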
C says: no, everything must be a Parameterized Value-Returning subroutine (PVRsubr). Every subroutine must accept parameters. Every subroutine must declare a return type (even if it’s void - you’re still paying the conceptual and mechanical cost). Caller and callee must agree on where parameters live. Must agree on where return values live. Must agree on the timing: all parameters arrive at once, all returns happen at once.
This isn’t free. You’re paying for it. Frame pointers. Stack discipline. Calling conventions. The whole apparatus of making functions work.
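To make the toll visible, here is a do-nothing function and, roughly, what an unoptimized x86-64 compile typically wraps around it (details vary by compiler and flags):

```c
int add_one(int x)   /* per the System V ABI, x must arrive in %edi */
{
    return x + 1;    /* and the result must leave in %eax */
}

/* Unoptimized, the callee typically also gets a prologue/epilogue:
       push  %rbp          ; save the caller's frame pointer
       mov   %rsp, %rbp    ; establish this function's frame
       ...                 ; the one instruction we actually wanted
       pop   %rbp          ; tear the frame back down
       ret
   None of that is work the problem asked for - it's the price of the
   universal calling contract. */
```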
Why This Choice?
Why did C make this choice?
Dennis Ritchie wasn’t trying to discover universal truth. He was trying to build something interesting that sped up his workflow. And what was in the air in the early 1970s?
Mathematical notation. Lambda calculus. The idea that computation could be expressed as function application. Lisp had been around since the late 1950s, exploring these concepts. The academic world was enamoured with the elegance of mathematical functions: given the same inputs, always produce the same outputs. Referential transparency. Composability.
This notation came from mathematics - literally, from writing with quill on parchment, then pen on paper. It was designed for humans reasoning about static relationships, not for describing machines that mutate memory over time.
But it had a crucial property: it was familiar. Mathematicians and logicians already had a rich notation for functions. They didn’t need to invent something new for these new-fangled “compute-ers.” They could automate what they already knew.
C brought this to systems programming. Not pure functions - C let you mutate memory, access hardware, do the dirty work. But the organizing principle was the same: everything is a function.
What Functions Buy You
And functions do buy you something real.
Encapsulation. A function is a black box with a contract: give me these inputs, I’ll give you these outputs. You don’t need to know how it works inside. This enables modularity, code reuse, reasoning about correctness.
Composability. Functions chain: the output of one becomes the input to another. f(g(x)). This matches how we think about breaking problems into steps.
Shared code. If the same sequence appears in multiple places, wrap it in a function, call it from both places. Save memory. One implementation, many call sites.
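A small sketch of both benefits at once:

```c
#include <stdio.h>

/* Composability: the output of one function feeds the next - f(g(x)). */
static int g(int x) { return x * 2; }
static int f(int x) { return x + 1; }

/* Shared code: one implementation, many call sites. */
static int celsius_to_fahrenheit(int c) { return c * 9 / 5 + 32; }

int main(void)
{
    printf("%d\n", f(g(10)));                    /* composition: prints 21 */
    printf("%d\n", celsius_to_fahrenheit(0));    /* call site 1: prints 32 */
    printf("%d\n", celsius_to_fahrenheit(100));  /* call site 2: prints 212 */
    return 0;
}
```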
These are genuine benefits for solving problems synchronously. When your problem is: take this input, compute that output, right now, in sequence - functions work beautifully.
The Cost
But that’s not every problem.
Distributed computing doesn’t work this way. The internet doesn’t work this way. IoT doesn’t work this way. These domains are fundamentally asynchronous. Messages arrive at different times. Responses come back at different times - or not at all. Nodes send messages to other nodes without “returning” anything to a “caller.”
The function-based notation is a tool in the tool belt. It works. But using it everywhere is like driving screws into wood with a hammer. You can do it - people have built entire careers doing it - but it’s awkward, and you’re always fighting the tool.
Other ways of encapsulating asynchronous units existed. Look at electronics: pin compatibility. Components with defined interfaces that communicate by signals, not by blocking calls. Message passing without synchronization. The hardware world figured this out because they had to - you can’t make a circuit board wait for a return value.
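What might that look like in code? A rough sketch - a hypothetical one-way mailbox where the sender deposits and moves on, and the receiver polls when it chooses. Neither side waits on the other. (Single-threaded here for clarity; a concurrent version would need atomics or locks.)

```c
#include <stdbool.h>

#define MAILBOX_SIZE 16   /* a power of two, so the wraparound math stays exact */

/* A one-way mailbox between two components. No blocking, no rendezvous:
   send and poll both return immediately, success or not. */
typedef struct {
    int slots[MAILBOX_SIZE];
    unsigned head;        /* next slot to read  */
    unsigned tail;        /* next slot to write */
} Mailbox;

bool mailbox_send(Mailbox *mb, int msg)
{
    if (mb->tail - mb->head == MAILBOX_SIZE)
        return false;                         /* full - the sender moves on */
    mb->slots[mb->tail % MAILBOX_SIZE] = msg;
    mb->tail++;
    return true;
}

bool mailbox_poll(Mailbox *mb, int *msg)
{
    if (mb->head == mb->tail)
        return false;                         /* nothing yet - no waiting */
    *msg = mb->slots[mb->head % MAILBOX_SIZE];
    mb->head++;
    return true;
}
```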
But in software? These avenues were largely ignored. We had our hammer - the function - and we had decades of momentum behind it. The problems that didn’t fit the function model got squeezed into it anyway, with layers of abstraction to hide the mismatch.
The Mismatch in Practice
What does this mismatch look like in practice?
Callback hell. Promises. Async/await. Futures. Reactive programming. Event loops. Threads. Thread safety. Rendezvous. Priority inheritance. Etc.
These aren’t solutions - they’re symptoms. Band-aids over a fundamental impedance mismatch.
You want to make a network request in JavaScript? Can’t just call a function and get the result - that would block everything. So you pass a callback. But now you need to make another request based on that result? Callback inside callback. Three requests? Four? Your code marches steadily rightward across the screen, a pyramid of doom.
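The same shape appears in C itself whenever a library goes asynchronous. A sketch against a hypothetical request() API - the call and its signature are invented for illustration:

```c
/* Hypothetical async API: request() returns immediately and invokes
   the callback later, when the response arrives. */
typedef void (*callback_t)(const char *result, void *ctx);
void request(const char *url, callback_t cb, void *ctx);

/* Three dependent requests: the control flow shatters into fragments
   that read bottom-to-top, and any shared state must be threaded
   through the ctx pointer by hand. */
static void step3(const char *result, void *ctx)
{
    (void)result; (void)ctx;  /* finally: use the result */
}
static void step2(const char *result, void *ctx) { request(result, step3, ctx); }
static void step1(const char *result, void *ctx) { request(result, step2, ctx); }

void start(void) { request("first-url", step1, 0); }
```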
Promises were supposed to fix this. Chain your asynchronous operations! Except now you’re writing `.then().then().then()` and reasoning about error propagation through the chain, and debugging stack traces that make no sense because the call stack doesn’t reflect what actually happened.
Async/await was the next fix. Make asynchronous code look synchronous! Just sprinkle `await` keywords everywhere. But you’re still paying the cognitive cost - you have to remember which functions are async, what can be awaited, what blocks the event loop, what doesn’t. The syntax is prettier, but the underlying problem remains: you’re forcing asynchronous reality into synchronous notation.
Go took a different approach: goroutines and channels. Erlang has processes and message passing. Both are trying to escape the function-as-atomic-unit trap. But they’re swimming upstream against decades of infrastructure, tooling, and mental models built around the function paradigm.
The tell is that we call this “concurrency” and treat it as an advanced topic. Something hard. Something that requires special constructs bolted onto the language.
But it’s only hard because our atomic unit - the blocking, synchronous function - doesn’t match the problem.
The Marketing Claim Revisited
So let’s return to the marketing claim: C is a low-level language. Close to the hardware. Close to reality.
But which reality?
C is close to the reality of a single CPU executing instructions in sequence. Mutating memory. Jumping to addresses. In that narrow sense, yes, it’s low-level.
But “low-level” suggests less abstraction, not different abstraction. And the everything-is-a-function paradigm is very much an abstraction - one that moves us away from how distributed, asynchronous, networked systems actually work.
A network packet doesn’t call a function and wait for a return value. A sensor doesn’t block until you’re ready to read it. A message queue doesn’t halt the universe until someone processes the message. These are the realities of modern computing, and C’s atomic unit - the blocking, synchronous function - is orthogonal to all of them.
The claim should be: C is low-level for sequential, synchronous computation on a single CPU. That’s accurate. That’s honest.
But C has been marketed as the low-level language. The one true abstraction that maps to hardware. This obscures that it’s a choice - one paradigm among many - optimized for problems that were foremost in 1972.
When you accept C as “just how computers work,” you inherit its assumptions invisibly. Everything is synchronous. Everything blocks. Everything returns. Side effects are peripheral. The gear-like, interlocking nature of function calls becomes the water you swim in.
And when new problem domains emerge - internet, distributed systems, IoT, reactive UIs - you’re stuck retrofitting asynchronous reality into synchronous notation, wondering why concurrency is so hard.
Forgetting the Choice
K&R C was a choice.
Dennis Ritchie chose to organize code around functions. He chose to make every subroutine pay the PVRsubr cost. He chose synchronous, blocking semantics as the atomic unit.
These were reasonable choices in 1972. Elegant, even. They solved real problems for the work he was doing.
But we’ve forgotten they were choices.
Today, the function-based paradigm isn’t presented as one option among many. It’s presented as the way computers work. Assembly is “low-level C.” Other paradigms - message passing, dataflow, actors - are exotic. Niche. Academic.
This is backwards. C isn’t the ground truth that other paradigms deviate from. C is one point in design space, optimized for a particular class of problems on expensive, time-shared CPUs in the early 1970s.
The world has changed. CPUs are cheap. Networks are ubiquitous. Asynchronous events are everywhere. The problems we’re solving today don’t look like the problems of 1972.
But our atomic units haven’t changed. We’re still building with synchronous, blocking functions, then wondering why we need such elaborate machinery - callbacks, promises, async/await, event loops, concurrency primitives - to make them handle asynchronous reality.
Closing
The first step is recognizing that C made a choice. A good choice for its time and context, but a choice nonetheless.
The function-based paradigm is a tool. A powerful one. But it’s not the only tool, and it’s not the foundation that everything else is built on top of.
What other choices are available? What atomic units would fit today’s problems better?
Those are questions for another article.
Part 2: Hardware Stockholm Syndrome.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html

