There Is Only One Programming Language
2026-02-16
There is only one programming language — machine code.
Everything else is scaffolding.
Every language you’ve ever used, every paradigm you’ve ever argued about on the internet, every mass of syntax and semantics and type theory — all of it eventually dissolves into the same stream of opcodes your CPU actually understands. The question isn’t whether your code becomes machine code. The question is how much machinery you’ve erected between your ideas and the metal, and whether that machinery is still earning its keep.
The Honest Languages
Some of the early languages were built with this fact squarely in mind. Prolog, APL, Forth — each of these contained a small, honest kernel written close to the hardware (in assembler or something similarly low-level), and that kernel existed for one purpose: to create the scaffolding for a specific paradigm. Prolog’s kernel set up unification and backtracking. APL’s kernel set up array operations. Forth’s kernel set up the dictionary and the stacks.
The paradigm rode on top of the kernel. The kernel knew it was scaffolding. There was no pretense.
McCarthy did something similar when he built the first functional programming language. The entire thing was a small kernel — eval paired with apply — and everything else grew out of that. How small can such a kernel be? Sector Lisp answers the question in startling fashion: 436 bytes for the Lisp evaluator, with a 40-byte garbage collector. The whole thing fits in a boot sector.
That’s the actual size of the idea. Everything built on top is application, convention, and — increasingly — accumulated cruft.
The Unrolled Kernel
Functional languages in the C lineage also contain a low-level kernel, but you’d be hard pressed to point at it because it’s been spread out and unrolled throughout the code. The scaffolding is function-entry code and function-exit code — prologues and epilogues, stack frame management, register saving and restoring — repeated thousands of times across your binary. Add in the preemptive scheduling machinery of the operating system (context switches, timer interrupts, thread management) and you have a kernel that is both everywhere and invisible.
This is the trick that made the synchronous, sequential paradigm feel natural. The scaffolding disappeared into the background. You stopped seeing it. And because you stopped seeing it, you stopped questioning whether it was the right scaffolding for the job.
The Monoculture
Here’s the claim I want to make plainly: just about all modern programming languages are based on a single paradigm — the synchronous, sequential paradigm inspired by the widespread adoption of C.
Python, JavaScript, Java, C#, Go, Rust, Swift, Kotlin — these languages disagree about syntax, about type systems, about memory management, about whether you should use semicolons. They agree about something far more fundamental: that computation is a sequence of steps executed one after another, that function calls go on a stack, and that concurrency is a problem to be solved rather than a mode of operation to be embraced.
Modern programming languages are all the same language wearing different costumes.
The Async Tax
The synchronous, sequential paradigm doesn’t handle asynchronous situations well. This shouldn’t be surprising — it wasn’t designed to. It was designed for a world where one processor executed one instruction at a time, and it has been retrofitted, repeatedly and painfully, for a world where that assumption no longer holds.
And so we keep adding baubles to an already overloaded paradigm. async/await. Promises. Futures. Goroutines. Channels. Reactive streams. Each one is an attempt to bolt asynchronous behavior onto a fundamentally synchronous worldview. Each one adds complexity without addressing the root issue: the paradigm itself assumes synchrony.
The result is that concurrent programming remains one of the hardest things in software, not because concurrency is inherently hard, but because we’re trying to express it in a paradigm that treats it as an exception rather than the rule.
What the Early Languages Knew
The early language designers understood something we’ve largely forgotten: the paradigm should match the problem. If your problem is relational, use a relational paradigm. If your problem involves pattern matching over structured data, use a paradigm built for that. If your problem is inherently concurrent, use a paradigm where concurrency is the default, not an afterthought.
Instead, we’ve spent fifty years stretching a single paradigm — sequential, synchronous, function-call-based computation — far beyond its sweet spot. McCarthy’s beautiful idea has metastasized into operating systems measuring 55 million lines of code. Not because the problems got 100,000 times harder, but because we keep using the wrong-shaped tool and compensating with volume.
The scaffolding has become the building. And we’ve forgotten that it was only ever supposed to be scaffolding.
The Way Forward
I’m not arguing we should all go back to writing Forth or APL (though both remain more interesting than they get credit for). I’m arguing that we should recover the mindset that produced them — the recognition that every language is just a thin skin over machine code, and that the skin should be chosen to fit the shape of the problem, not the shape of our habits.
There is only one programming language. The question is what kind of scaffolding you put on top of it, and whether you’ve chosen that scaffolding deliberately or simply inherited it.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html


This article reminds me of the older versions of Fortran (from I to 77). The language is function-based and single-threaded, but compared with more recent languages it has some limitations and some strange features:
1) All arrays have a fixed size.
2) All parameters (scalars and arrays) are passed by reference, not by value.
3) The memory for subroutines and functions is fully allocated at the start-up of the program.
4) Loops are permitted, but recursion is not.
5) All local variables in functions and subroutines are static (implicit SAVE attribute). If the programmer does not want to reuse a local variable's value from the previous function/subroutine call, the variable must be initialized by an assignment after its declaration (the examples below use modern free-form syntax for readability). Example:
SUBROUTINE sub1()
IMPLICIT NONE
INTEGER :: a
a = 0
...
END SUBROUTINE sub1
If, instead, the programmer needs to recover the variable's value from the previous call, it must be initialized on the same line as its declaration. In the following example, the variable "a" counts the number of calls to sub2().
SUBROUTINE sub2()
IMPLICIT NONE
INTEGER :: a = 0
a = a + 1
...
END SUBROUTINE sub2
These limitations and features were imposed by the low memory and speed of the first computers. Because of them, old Fortran compilers needed neither dynamic memory allocation nor a call stack. Fortran proves that the sequential programming paradigm can be implemented without a call stack. In my opinion, Fortran subroutines and functions, with their static local variables, are well suited to implementing finite state machines.