More Thoughts About Pipelines vs. Functions
2025-01-04
Popular programming languages could fairly be called “synchronous programming languages”.
The main point here, though, is that the synchronous programming model is not a one-size-fits-all model and should not be pressed into service to solve modern-day, asynchronous problems.
I tried to address this issue in an earlier article "CALL-RETURN Spaghetti" and in a more recent article.
The “pipeline” model is but a starting point for programming in this paradigm. Much more interesting architectures than simple, sequential pipelines can be constructed. If you remove the restriction that a program must be represented as a sequence of written words, you open up a plethora of architectural possibilities which can’t be expressed using textual, sequential programs.
I must qualify the word can’t in the preceding sentence. One can express different architectures using sequential programming, but, there is a tipping point beyond which programmers choose not to think about, nor to express, such architectures, due to the difficulty of using an ill-suited notation.
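As a rough sketch of what a non-textual, non-sequential architecture might look like (my own illustration; the part names and wiring are invented, not a notation from any of my articles), here is a tiny network of “parts” connected by queues. Each part can broadcast to several outboxes, so wirings other than a straight left-to-right pipeline fall out naturally; the wiring is data, not a fixed line of text.

```python
import queue
import threading

def part(transform, inbox, outboxes):
    """A generic 'part': reads messages from its input queue,
    applies a transform, and sends the result to every output queue."""
    def run():
        while True:
            msg = inbox.get()
            if msg is None:          # sentinel: shut down and propagate
                for out in outboxes:
                    out.put(None)
                return
            for out in outboxes:
                out.put(transform(msg))
    t = threading.Thread(target=run)
    t.start()
    return t

# Wire up a small chain: q_in -> upper -> exclaim -> q_out.
# Because each part takes a LIST of outboxes, fan-out wirings
# (one part feeding several others) need no new machinery.
q_in, q_up, q_out = queue.Queue(), queue.Queue(), queue.Queue()

threads = [
    part(str.upper, q_in, [q_up]),
    part(lambda s: s + "!", q_up, [q_out]),
]

for word in ["hello", "world"]:
    q_in.put(word)
q_in.put(None)

results = []
while (msg := q_out.get()) is not None:
    results.append(msg)
for t in threads:
    t.join()
print(results)  # ['HELLO!', 'WORLD!']
```

Note that each part runs in its own thread and knows nothing about the others; rearranging the architecture means rearranging queues, not rewriting the parts.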
Popular programming languages - sequential programming languages - grew out of the need to program single-threaded CPUs. Today, though, the problem space is radically different. Instead of single-threaded CPUs, we deal with networks of highly distributed nodes - DPUs, if you will. Applying sequential programming techniques to this different problem space unnecessarily restricts the gamut of possible solutions.
Starting from sequential-based thinking when trying to invent new programming languages and notations unnecessarily constrains the invention of new ways to program.
Thanks to the popularity of the C programming language, programmers conflate the concept of a subroutine with the concept of a mathematical / computational function. This conflation appears to lead to the belief that the only way to program computers is to write equations that loosely resemble an outdated form of mathematical notation invented for quill and papyrus. Computers can be programmed with something other than “functions” written in popular programming-language notations.
We are forced to use assembler to sequence the operation of CPU chips, but, we are not forced to use the concepts of sequencing in our higher-level languages. Sam Aaron points this out in his TEDx talk, wherein he shows a 3-line program that does not execute sequentially. It creates a musical “chord”, in which all 3 lines run at once instead of in strict sequence. Programmers automatically assume that the 3 lines will execute one after the other and are surprised when this does not happen.
I am not advocating that we begin using languages wherein each line of text is asynchronous. I am simply advocating that we stop assuming that all operations of a computer must be carried out in a synchronous manner scripted by synchronous programming languages. Our current programming syntax insists that lines of code execute strictly step-by-step and that function calls synchronously block the caller until the callee returns a value. A plethora of possibilities opens up when the synchronous restrictions are lifted. I am exploring very basic node-and-wire representations of asynchronous systems, but, new and better ideas become possible when inventions are not blinkered by the requirement for synchronous behaviour.
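To make the “chord” concrete (this is my own Python sketch, not Sam Aaron’s Sonic Pi code; the note numbers are an invented E-minor chord), here are three “notes” that must all be in flight at the same moment. The barrier would deadlock under strictly sequential, one-line-after-another execution, so the program completing at all demonstrates that the three lines genuinely overlap in time.

```python
import threading

notes = [52, 55, 59]            # an invented E-minor chord
played = []
lock = threading.Lock()
barrier = threading.Barrier(3)  # all three notes must be "sounding" at once

def play(note):
    barrier.wait()              # proceeds only when all three threads are
                                # alive simultaneously - impossible if the
                                # "lines" ran one after the other
    with lock:
        played.append(note)

threads = [threading.Thread(target=play, args=(n,)) for n in notes]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(played))  # [52, 55, 59]
```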
Functions only appear to provide multiple inputs and outputs, but, in reality, they have a single input and a single output. Destructuring is used to chop incoming and outgoing data into several fields of different types, but, the data itself travels in a single block, all at once.
Can we use “what we’ve got” while moving forward and changing our programming model? Can we use the libraries of code that have already been written? Or, do we need to scrap everything and start from scratch?
Yes. We can use what we’ve already got.
I give examples of how to do this in other articles and code repositories. The examples are WIP (Work In Progress) and not very polished. I am working towards creating more examples. To my mind, what is interesting is that this kind of PBP (Parts-Based Programming) makes it easy to construct solutions to problems once considered extremely difficult, e.g. building compilers, transpilers, and little languages, as well as asynchronous control scenarios and divide-and-conquer.
An example of a different notation can be found in Fig. 31 of Harel’s paper on Statecharts. Fig. 31 shows, in one diagram, the full operation of a fairly complicated device. The solution contains many asynchronous processes, which Harel calls Orthogonal States. Statechart notation is not a one-size-fits-all notation, either. For example, expressing equations for cryptography would be cumbersome in Statechart notation. I suggest that we need IDEs that allow us to choose and combine notations at will. We should use multiple notations to solve real programming problems. We already do a bit of this, using DSLs like BNF.
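The flavour of Orthogonal States can be sketched in a few lines (a toy of my own invention; this is not Harel’s actual Fig. 31, and the region and event names are made up): two independent state regions are active at the same time, and every event is offered to both. Each region reacts only to the events it cares about.

```python
class Region:
    """One orthogonal region: a tiny state machine."""
    def __init__(self, name, initial, transitions):
        self.name = name
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        # ignore events this region has no transition for
        self.state = self.transitions.get((self.state, event), self.state)

# Two invented regions, both active at once (the essence of orthogonality):
display = Region("display", "time", {
    ("time", "mode"): "date",
    ("date", "mode"): "time",
})
light = Region("light", "off", {
    ("off", "button"): "on",
    ("on", "button"): "off",
})

chart = [display, light]

# every event is broadcast to every orthogonal region
for event in ["mode", "button", "mode"]:
    for region in chart:
        region.handle(event)

print(display.state, light.state)  # time on
```

The combined state space (time/date crossed with off/on) never has to be written out explicitly; the two regions compose it implicitly, which is what keeps Harel’s single diagram of a complicated device readable.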
Currently, we use a single paradigm - the functional paradigm - and we base all new designs on top of it. In the early days, programming languages were invented as caveman IDEs for creating assembler code. The hope was that programming languages would help developers create fewer bugs while producing assembler code.
Today, we seem to be concentrating only on improving one programming paradigm while overlooking the original goal: to create assembler sequences for CPU hardware. Hardware is capable of doing more than just building calculators (“computation”), but, our programming languages force us to only build bigger and better calculators while overlooking other possible uses of CPUs and hardware.
We’ve pushed this single functional paradigm so far that we’ve had to alter - and make less efficient - what used to be simple CPU chips.
We are faced with a new class of problems - multi-node, decentralized, distributed programming: the internet, robotics, IoT, etc. This class of problems was not anticipated by early programming-language design. Twisting outdated programming-language concepts to fit these problems makes solutions more difficult than they need to be.
We are gradually coming back around to the original goal, by punting work to external devices like GPUs. Yet, we have failed to re-examine why we would need such powerful, single-paradigm, complicated programming languages for CPUs.
See Also
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com

