Yet More About Single-Paradigm And Multi-Paradigm Thinking And The Function-Based Paradigm’s Unsuitability For Expressing Concurrency
Towards Higher Level Syntax for Programming Languages 2024-11-10
Function-based programming is on an equal footing with Smalltalk, Prolog, Lisp, Forth, and so on. All of these require some sort of extra software layered on top of CPUs. I refer to this extra layer of software as "engines".
Some of us once dreamed of having Lisp machines, and Intel tried to sell OOP machines (iAPX432), and we have GA144 machines for Forth (https://www.greenarraychips.com/home/products/index.php), and we had Actor machines (Transputers), but we are saddled only with function-based-paradigm machines, i.e. the current CPU architectures that devote chip space to MMUs, context-switching speedups, etc. [opinion: I’d rather get rid of MMUs, multi-core, caches, etc. and just have 1,000s of MC6809s on a chip].
"Functions" as understood in the mathematical sense do not map cleanly onto CPUs, because mathematical functions are FTL. FTL means Faster Than Light - instantaneous expansion of referential transparency. CPUs, though, have, and always will have, propagation delays which must obey physics and will never be FTL like mathematical functions.
Conflating mathematical functions with CPU subroutines leads to accidental complexities like the over-use of memory sharing and attendant hand-wringing about thread safety, global variables, imports, namespaces, etc.
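As a minimal sketch of that accidental complexity (the names and numbers are made up for illustration), consider two Python threads bumping one shared global counter. Each increment looks like an innocent "function call", but it is really a read-modify-write of shared memory, so updates can be lost unless you bolt on locking:

```python
import threading

counter = 0  # shared, mutable memory - the hidden coupling

def bump(n):
    global counter
    for _ in range(n):
        # Looks like a pure function application, but it is really a
        # read-modify-write of shared RAM; two threads can interleave here.
        counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000, but without a lock the observed value may be smaller.
print(counter)
```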
CPUs are just little bits of electronic logic for sequencing electronic operations, coupled with very small, expensive, but very fast blocks of memory called "registers". CPUs are meant to be bolted onto larger, less-expensive blocks of memory (RAM, ROM).
Multi-core is but a hack which arose from the over-use of the function-based, memory-sharing paradigm.
The Oxford dictionary defines "concurrency" as "the fact of two or more events or circumstances happening or existing at the same time".
This is not what happens in our current implementations of thread libraries (including those of Erlang and Go). Things do not happen "at the same time" in thread libraries. Instead, things happen sequentially, but are broken up into small step-wise pieces that happen so quickly that our brains perceive them as happening "at the same time". Other kinds of things, like nodes on the internet, truly do run at their own speeds and at the same time.
I refer to step-wise simultaneity as "synchronous concurrency" and I refer to the other kind of concurrency - the stuff that actually happens at the same time - as "asynchronous concurrency".
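As a minimal sketch of what I mean by "synchronous concurrency" (written in Python purely for illustration), the two "tasks" below appear to run concurrently, yet a single loop steps them one at a time; nothing ever happens at the same time:

```python
def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # hand control back to the scheduler after each small step

def run(tasks):
    # Round-robin scheduler: exactly one task advances at any instant.
    while tasks:
        t = tasks.pop(0)
        try:
            next(t)
            tasks.append(t)
        except StopIteration:
            pass

run([task("A", 3), task("B", 3)])
# The output interleaves A and B, but strictly one step at a time.
# "At the same time" is an illusion produced by fast sequencing.
```

Asynchronous concurrency, by contrast, is what two separate machines on a network do: each advances on its own clock whether or not the other is paying attention.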
I believe that it is futile to try to express asynchronous concurrency using only the synchronous, function-based paradigm, i.e. by using any of the most popular, modern programming languages, like Haskell, Python, Javascript, etc. Just the word “synchronous” means that it ain’t asynchronous.
Our high-level languages are but helpers for generating assembler. The inclusion of so-called concurrency in these helper tools is only a (synchronous) simulation of concurrency. Since real-world concurrency is of the asynchronous kind, such synchronous simulations are off-base, unhelpful, and tangle us up in accidental complexities.
We can express the innards of nodes in asynchronously concurrent (little and big) networks using synchronously concurrent programming languages, but we end up resorting to caveman approaches - like thread libraries, async/await, pub/sub, etc. - that are essentially assembler-like primitives for more interesting compositions of nodes in layered networks. When the number of nodes in a network exceeds something like 7±2, we become confused and find the composition hard to understand and hard to debug. The internet has 1,000s of nodes. The human body has about 500 asynchronous nodes - roboticists wish to emulate this kind of structure. Our current programming workflow is incapable of scaling up to deal with such large numbers of nodes. By sheer perseverance of the human spirit, we’ve managed to tackle the internet and we’ve managed to build a few interesting robots using caveman tools and workflows. These issues would be easier to deal with if we allowed for a variety of paradigms in our programming workflow.
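To make the "caveman" level concrete, here is a rough Python sketch (the wiring and message values are hypothetical) of composing just two nodes by hand with threads and queues; every connection must be spelled out at this primitive level:

```python
import queue
import threading

def node(inbox, outbox):
    # A "node": runs at its own pace, reacting only to messages on its inbox.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown signal
            break
        outbox.put(msg + 1)      # some arbitrary local work

# Hand-wired plumbing: every queue, thread, and connection is explicit.
wire_ab = queue.Queue()
wire_bc = queue.Queue()
result = queue.Queue()
threading.Thread(target=node, args=(wire_ab, wire_bc), daemon=True).start()
threading.Thread(target=node, args=(wire_bc, result), daemon=True).start()

wire_ab.put(40)
print(result.get())      # 42
wire_ab.put(None)         # shut the nodes down, one wire at a time
wire_bc.put(None)
```

Two hand-wired nodes are manageable; 500 nodes wired in this style are not.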
See also https://programmingsimplicity.substack.com/p/synchronous-execution-of-code?r=1egdky.
See Also
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: https://guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/qtTAdxxU
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: https://tarvydas.gumroad.com
Twitter: @paul_tarvydas