Reprogrammable Electronic Machines
2024-11-20
I used to think that Computer Science was about how reprogrammable electronic machines worked. I no longer think so...
The more I think about the split between "coding" and creating efficient assembler scripts, the more I realize that Computer Science is no longer about how RePEMs work but has become about mapping functional notation from the FTL domain (Faster Than Light - pencil and paper, instantaneous, time-less manipulation of equations) into the discrete domain, kinda like Fourier transforms that map analog signals from the time domain into the frequency domain.
This mapping is causing the invention of edicts that clearly contradict the reality of RePEMs. Like the idea of "no mutation". RePEMs are based on the heavy use of mutation - mutating very fast memory (registers), mutating fast memory (RAM), mutating slow memory (disk, external memory). The very idea of "no mutation" causes a cognitive dissonance, albeit one that goes mostly unnoticed.
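To make the dissonance concrete, here is a minimal sketch (in Python, purely for illustration; both function names are invented) of the same computation written both ways. The machine executing the "no mutation" version still mutates registers and stack memory on every call.

```python
# Mutation-style: how the machine actually works -- a "register"
# (here, a variable) is updated in place on every step.
def sum_mutating(xs):
    total = 0          # register initialized
    for x in xs:
        total += x     # register mutated each iteration
    return total

# "No mutation" style: the same computation expressed functionally.
# The notation forbids mutation, yet the CPU executing it still
# mutates registers and stack memory on every recursive call.
def sum_functional(xs):
    if not xs:
        return 0
    return xs[0] + sum_functional(xs[1:])

print(sum_mutating([1, 2, 3]))   # -> 6
print(sum_functional([1, 2, 3])) # -> 6
```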
From this perspective, CompSci has become about how to use RePEMs to act as "hand-held calculators" to help with the task of performing the mapping of FTL functions into the discrete domain.
We conflate the ideas of figuring out paradigms for reprogrammability with ideas for mapping such paradigms into efficient assembler scripts intended for use on lowly CPU chips.
Conflation of function-based thinking with the ideas of production engineering of assembler code has been causing great difficulty and gotchas[1].
Even the word "computer" implies that the purpose of all of this is to figure out how to use RePEMs to compute rather than exploring how RePEMs could be used to solve other kinds of problems.
Dynamic languages, invented in the early days, like Lisp^1 and Forth and Prolog, concentrate on DX - Developer eXperience. These languages create little VMs that are plunked onto RePEMs to allow software developers to make electronic machines sing and dance to various tunes. The downfall of these concepts is that they try to describe all of programming using just one paradigm at a time, instead of as a composition of multiple smaller programs each using different paradigms.
CompSci has been commandeered to concentrate on only one little VM - the function-based VM, which supports the idea of functions (the FTL kind) through the use of context-switching and memory-mapping VMs. This extra code is most often embedded in things we call operating systems and RTOSs and in hardware retrofits using things like MMUs. CompSci has fallen into the trap of trying to describe all of programming using just this one paradigm, under the guise of “general purpose programming languages”. We’ve even gone so far as to piggy-back early dynamic languages onto this single paradigm and to filter those various paradigms through the single lens of this one (function-based) paradigm. Functions need help to exist on top of lowly CPUs. Piggy-backed paradigms need double-help to exist on top of the function-based paradigm which exists on top of lowly CPUs.
If you want to solve problems that fall outside of the function-based domain - problems like robotics, internet, etc. - you are SOL. You have to spend time force-fitting your problem into the function-based, time-less, synchronous domain. This force-fitting is causing “code bloat” and increased complexity at the edges, where the function-based paradigm fails to naturally suit the target problems.
The functional domain is succinctly expressed in something like Sector Lisp[2]. You can’t use Sector Lisp to program every kind of problem, esp. problems that need mutation and persistence, but, Sector Lisp demonstrates just how efficiently the functional domain can be mapped onto RePEMs if you avoid issues that don’t cleanly fit into the functional domain. Sector Lisp fully exploits the stack-based mentality espoused by function-based programming, with a GC that is only 40 bytes^2 long. There’s something more than just assembler tricks going on behind the scenes in Sector Lisp.
The causes of the over-emphasis on functional thinking applied to RePEMs are further described by Arawjo[3].
What can be done, if RePEMs cannot be fully programmed using the function-based paradigm, with the current crop of popular programming languages, like Rust, Python, etc.?
We need to use workflows that allow the use of multiple paradigms and multiple programming “languages”.
Programming languages are IDEs. Our concept of “programming language” was invented to be a caveman IDE for programming in the early days - a way of creating scripts for RePEMs without getting assembler under our fingernails. This idea was invented in the 1950s due to the exorbitant cost of CPUs and memory back then. The invention of “programming languages” happened long before the invention of distributed computing, internet, IoT, robotics, timeouts, etc.
Gutenberg printing press principles are OK for single-threaded machines, but don’t easily scale up to the use of multiple machines, regardless of our wishful misuse of words like “concurrency”, “multi-threading”, “multi-core” and “async”. All of these things run on single-threaded machines so fast that they fool us humans into believing that they are truly multi-threaded and asynchronous, even though they aren’t.
Dealing exclusively with single machines was OK in the early days, since we couldn’t afford to pay for more than one CPU at a time. Today, though, small electronic machines, like Arduinos, are dead cheap and abundantly available.
Using old-reality techniques to program in this newer reality seems like an absurd idea, but, that’s what we’re doing. Grace Hopper correctly warned against[4] the complacency of thinking “because we’ve always done it this way”.
Exploring Other Notations
We can use what we’ve got and we can build new layers on top of this existing stuff. That’s what they did way back in 1950. They were faced with new, unexplored technology, and figured out how to use it. We need to adopt that kind of thinking, again, today.
We have “computers”, and, we have IDEs for programming these devices based on synchronous, single-threaded notations.
What don’t we have? Yet?
We can rapidly build and explore yet more Gutenberg printing-press-based “programming languages” using PEG[5] techniques, espoused in tools like OhmJS[6].
We can even build “macros” for non-Lisp languages using PEG technologies.
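The "macro" idea can be sketched with a hand-rolled set of PEG-style combinators (plain Python, not OhmJS - OhmJS is a JavaScript tool offering a far richer version of the same idea; all names below are invented for illustration). Ordered choice and sequencing are the PEG primitives, and a tiny rewrite rule acts as a macro for a non-Lisp language.

```python
import re

# Minimal PEG-style combinators. Each parser takes (text, pos) and
# returns (value, new_pos) on success, or None on failure.
def lit(s):
    def p(text, pos):
        return (s, pos + len(s)) if text.startswith(s, pos) else None
    return p

def rx(pattern):
    r = re.compile(pattern)
    def p(text, pos):
        m = r.match(text, pos)
        return (m.group(0), m.end()) if m else None
    return p

def seq(*ps):
    def p(text, pos):
        vals = []
        for q in ps:
            r = q(text, pos)
            if r is None:
                return None
            v, pos = r
            vals.append(v)
        return vals, pos
    return p

def alt(*ps):                     # PEG's ordered choice: first match wins
    def p(text, pos):
        for q in ps:
            r = q(text, pos)
            if r is not None:
                return r
        return None
    return p

# A "macro" for a non-Lisp language: rewrite swap(a, b) as tuple assignment.
ident = rx(r"[A-Za-z_]\w*")
swap = seq(lit("swap("), ident, lit(", "), ident, lit(")"))

def expand(line):
    val, _ = alt(swap, rx(r".*"))(line, 0)
    if isinstance(val, list):     # the swap rule matched
        _, a, _, b, _ = val
        return f"{a}, {b} = {b}, {a}"
    return val                    # pass other lines through unchanged

print(expand("swap(x, y)"))   # -> x, y = y, x
print(expand("z = 1"))        # -> z = 1
```

The point is not this particular rule, but that a small PEG makes source-to-source rewriting cheap enough to build per-project notations.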
We don’t yet have ways of expressing programs other than Gutenberg-press-inspired text. We can explore the use of non-Gutenberg-printing-press-based notations by using existing tools that map new notations into the tools’ preferred domain (text). Then, we can use a plethora of already-existing text-based tools to further manipulate the mapped information. Modern diagram editors save diagrams in a textual (XML) format. There is much low-hanging fruit to be harvested by exploring DPLs (Diagrammatic Programming Languages).
Note that we have been given hints on where we might end up, with notations like SVG. SVG relegates text to being only one kind of on-screen graphically-manipulable object. SVG deals with various kinds of figures, e.g. rectangles, ellipses, and text. Each text figure can have its own local idea of what font to use and what size to make the text. SVG, though, is “static” and “2D”. Does Blender give us different ideas? Does the gaming industry hint at new ways to think about scripting machines in the time-domain?
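The low-hanging fruit is real: because SVG is already text, stock XML tools can harvest a diagram's structure today. A sketch (the SVG snippet and the boxes-as-components reading are invented for illustration; real diagram editors emit richer XML, but the principle is the same):

```python
import xml.etree.ElementTree as ET

# A hand-written SVG "diagram": two boxes and a connecting line.
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <rect id="reader" x="10" y="10" width="80" height="40"/>
  <rect id="writer" x="120" y="10" width="80" height="40"/>
  <line x1="90" y1="30" x2="120" y2="30"/>
</svg>"""

ns = {"svg": "http://www.w3.org/2000/svg"}
root = ET.fromstring(svg)

# Read the diagram as a program: each box is a named component.
components = [r.get("id") for r in root.findall("svg:rect", ns)]
print(components)  # -> ['reader', 'writer']
```

From here, an ordinary text pipeline could emit runnable code for each box - no new parser technology required.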
We need to stop thinking that “programming languages” are at the core of IDEs. Programming languages are like new-age paintings on cave walls. We need to create more modern IDEs and workflows that encompass many paradigms and many ways of expressing RePEM scripts.
We need to invent notations that incorporate time instead of treating time as a second-class dimension. Current function-based programming languages do build a very restricted, myopic notion of time into their notations through the use of single-threaded^3, synchronous sequencing of operations. This appears to be inspired by how assembler and CPUs work. Anything that falls outside of these restrictions must be manually handled and worked-around instead of being an integral part of the notation.
In contrast, electronic schematics constitute a notation that incorporates massive parallelism. Every chip runs in parallel. Yes, massively parallel problems are harder to think about than single-threaded problems, but, such problems are not impossible to solve - see 1972 Atari Pong[7]. I contend that the current reality involves problems that require massive parallelism. Solving these kinds of problems is harder than solving single-threaded problems. Get over it. This is our current reality. We shouldn’t bend reality to fit our favourite notation, just because “we’ve always done it this way”. We should create new notations to fit the kinds of reality that we face and that we want to deal with and that we need to solve problems in. Solving problems that fundamentally require massive parallelism by first mapping them into the synchronous domain just makes solving these problems harder than necessary. Even Richard Feynman figured this out. Feynman invented “Feynman diagrams” to explore a slice of reality instead of remaining wedded to the idea of thinking about the problem in terms of some old-fashioned notation.
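A rough sketch of the schematic mindset, using Python threads and queues purely as a stand-in (component names are invented, and - per the footnote on "multi-threading" - this still runs as fast single-threaded switcheroos underneath; only the notation is schematic-like). Each "chip" runs independently and reacts to messages arriving on its input wire, rather than being called as a function:

```python
import queue
import threading

def doubler(inp, out):
    # A "chip": reacts to each message on its input wire,
    # emits on its output wire. None is the power-off sentinel.
    while True:
        x = inp.get()
        if x is None:
            out.put(None)
            return
        out.put(x * 2)

def collector(inp, results):
    # Another "chip": accumulates whatever arrives.
    while True:
        x = inp.get()
        if x is None:
            return
        results.append(x)

wire1, wire2, results = queue.Queue(), queue.Queue(), []
chips = [threading.Thread(target=doubler, args=(wire1, wire2)),
         threading.Thread(target=collector, args=(wire2, results))]
for t in chips:
    t.start()
for v in [1, 2, 3, None]:
    wire1.put(v)
for t in chips:
    t.join()
print(results)  # -> [2, 4, 6]
```

Note what the notation buys: neither chip knows about the other, and either could be replaced by real hardware or a remote machine without changing the "schematic".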
FORTRAN mixed 2 concepts into the same notation - SUBROUTINEs and FUNCTIONs. For some reason, CompSci ignored SUBROUTINEs and ran with FUNCTIONs. Functions have some nice properties, but, they can’t be used to express the full range of possibilities that could be implemented using CPUs.
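The distinction FORTRAN drew can be sketched in modern terms (the names here are invented for illustration): a FUNCTION maps values to a value, while a SUBROUTINE exists for its effect on the world and returns nothing.

```python
def area(r):
    # FUNCTION: value in, value out, no effect on the world.
    return 3.14159 * r * r

def log_reading(log, value):
    # SUBROUTINE: no return value -- it exists only to mutate state.
    log.append(value)

log = []
log_reading(log, area(2.0))
print(log)  # -> [12.56636]
```

CompSci built a rich theory around the first form and largely left the second - the one that actually matches what CPUs do - without a theory of its own.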
Over-emphasis on only the function-based paradigm is causing cognitive dissonance in programming. We are told that side-effects are bad and that mutation is bad. Yet, CPUs exist because of these very principles. Cognitive dissonance.
We are told that continuation-passing style is good, but continuation-passing style is only an upgraded form of GOTO. We were taught that GOTO is bad. Deja vu, all over again. Cognitive dissonance.
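The "upgraded GOTO" claim is visible in a few lines (Python, names invented for illustration): in continuation-passing style a routine never returns - it jumps to whatever continuation it was handed, which is structurally a parameterized GOTO.

```python
def add_k(a, b, k):
    k(a + b)          # "goto k", carrying the result

def mul_k(a, b, k):
    k(a * b)

result = []
# Compute (2 + 3) * 4 by chaining jumps instead of returns.
add_k(2, 3, lambda s: mul_k(s, 4, result.append))
print(result[0])  # -> 20
```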
We are told that pub/sub is good, but that dynamic, self-modifying code is bad. Pub/sub is a politically correct phrase meaning dynamic, self-modifying code. Cognitive dissonance.
OK, the function-based paradigm is provably useful, but, what about the design options that the function-based paradigm ignores and forbids? The function-based paradigm is useful for programming calculators. What if we want to program things that are not calculators? Robotics, IoT, internet, blockchain, GUIs, etc., etc.
RePEMs are not just better pieces of paper. Using an old-fashioned, paper-based notation might not be the best way to explore and exploit the capabilities of these things. Using RePEMs to replace old-fashioned concepts 1:1, like desktops and filing cabinets and 4-banger calculators and telephones, might be an injustice.
Is Dynamicland the only new MOAD[8]? Are there other new MOADs available for harvesting?
I contend that the answer is Yes.
If we stop treating RePEMs as better pieces of paper that can be reasoned-about only with old-fashioned ideas and notations and genuflections, we might see new ways to express how to think about them and to program them.
Bibliography
[1] Mars Pathfinder disaster from https://guitarvydas.github.io/2023/10/25/Mars-Pathfinder-Disaster.html
[2] sector lisp from https://justine.lol/sectorlisp2/
[3] Ian Arawjo. 2020. To Write Code: The Cultural Fabrication of Programming Notation and Practice. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI 2020), April 25-30, 2020, Honolulu, HI, USA. Association for Computing Machinery, New York, NY, USA.
[4] Capt. Grace Hopper On Future Possibilities from https://programmingsimplicity.substack.com/p/capt-grace-hopper-on-future-possibilities?r=1egdky
[5] Parsing Expression Grammar from https://en.wikipedia.org/wiki/Parsing_expression_grammar
[6] OhmJS from https://ohmjs.org
[7] The original Pong video game had no code and was built using hardware circuitry. Here's the original schematics from Atari from https://www.reddit.com/r/EngineeringPorn/comments/ul49zt/the_original_pong_video_game_had_no_code_and_was/
[8] The Mother Of All Demos from https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
See Also
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: https://www.guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: https://tarvydas.gumroad.com
Twitter: @paul_tarvydas
Footnotes
^1 pre-Common Lisp Lisp
^2 [sic] Bytes. Not kilo-bytes, not mega-bytes, not tera-bytes, etc.
^3 So-called “multi-threading” is really just single-threading that performs really fast switcheroos. Humans can’t perceive what’s really happening under the hood. This kind of parlour trick is preventing us from making easy advances on the newer set of problems that weren’t even dreamed of in 1950, like true distributed, asynchronous computing, which includes things like internet, IoT, robotics, GUIs, blockchain, etc., etc. Multi-core is just more of the same - multiple CPUs blocking and synchronizing on globally shared memory.

