Not Standing On The Shoulders Of Giants
2024-06-15
It makes sense to build on ideas and to use the discoveries of people who worked on the problems that you want to work on.
It does not make sense to do this if your ancestors were solving problems different from the ones you’re thinking about.
Obviously, if you want to solve the problem of how to make many reprogrammable electronic machines[1] work together, you don’t consult a Julia Child cookbook.
But that’s what has been happening for some decades now in the programming world. The early “giants” of compute-ing needed to solve problems in the 1950s[2] that are very different from the problems of 2024.
1950s Biases
CPUs cost a lot of money in 1950, so it made sense to waste human brain-power on finding ways to conserve hardware, like inventing ways to do multi-tasking and time-sharing on hardware that was fundamentally non-reentrant. All of these work-arounds added extra software into the workflow, like operating systems and full preemption. Fundamentally, this is an inefficient use of reprogrammable electronic machines. And, these work-arounds caused the invention of new bits of hardware, like MMUs, which make ICs bigger and waste chip real-estate on work-arounds instead of using the space to provide extra compute power. A lot of the software and hardware work-arounds have been causing gotchas and continuous headaches. We’ve spent a goodly part of five decades on finding out[3] what the gotchas are and finding ways to work around them[4].
Memory cost a lot of money in 1950, so it made sense to waste human brain-power on finding ways to conserve memory, such as segmentation, MMUs, garbage collection, etc.
In 1950, computers - reprogrammable electronic machines - were a new thing. It made sense to apply old-fashioned ideas and techniques, like the written word (textual programming languages, mathematics), filing cabinets, desktops, etc., to the idea of reprogrammable electronic machines.
In 1950, it made sense to share memory and to waste time figuring out how to work around the gotchas that memory sharing causes, like thread safety.
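To see the gotcha in miniature, here is an illustrative Python sketch (a toy for exposition, not code from any real system): two processes share one word of memory without a lock, and increments get lost.

```python
# Two processes share one counter in raw shared memory, with no lock.
# The read-modify-write in `counter.value += 1` is not atomic, so the
# processes overwrite each other's updates and increments get lost.
from multiprocessing import Process, Value

def bump(counter, n):
    for _ in range(n):
        counter.value += 1  # read, add, store: another process can interleave

if __name__ == "__main__":
    counter = Value("i", 0, lock=False)  # raw shared memory, deliberately unprotected
    workers = [Process(target=bump, args=(counter, 100_000)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)  # typically well below 200000
```

Every fix for this - locks, atomic instructions, careful protocols - is human brain-power spent working around the original decision to share memory.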
In 1950, reprogrammable electronic machines were scarce. A company or a department at a university was considered to be advanced if it owned one reprogrammable electronic machine.
2024 Realities
In 2024, though, the realities are quite different.
In 2024, CPUs are dirt-cheap. We have access to inexpensive Arduinos, Raspberry Pis, etc. These things cost $10s today, instead of $1,000s or $100,000s or $1,000,000s. I’ve seen single-chip Arduinos on sale for as little as $5.00.
In 2024, memory is dirt-cheap. We think in terms of mega-bytes and giga-bytes instead of bytes and kilo-bytes.
In 2024, reprogrammable electronic machines are ubiquitous. People carry huge amounts of computing power around in their pockets. The internet is composed of zillions of reprogrammable electronic machines. My credit card has a CPU and private, persistent memory built into it. It’s a “smart card” that uses NFC. My pocket phone has NFC built into it. I should probably get rid of my smart card.
In 2024, most people don’t relate to the old ways of doing things. Filing cabinets full of paper are becoming unfamiliar concepts. Gluing such old-fashioned concepts onto the new medium of reprogrammable electronic machines makes less and less sense.
In 2024, we want to program reprogrammable electronic machines to perform actions that evolve over time in response to external stimuli which arrive at “random” times. We think about things like clients and servers, Flash, videos, video editing, DAWs, robots, games. The old-fashioned way of writing programs as flat, written text has to give way to a new way of creating programs.

At best, old-fashioned programming languages like Python, Haskell, Rust, WAM, etc., can describe the innards of computing nodes, but they only let us write programs at a kind of “assembler” level for collections of computing nodes, i.e., 2024 hardware. Old-fashioned languages insist on dealing with true concurrency from a synchronous perspective. That is not really concurrency; it is only step-wise simultaneity. Step-wise simultaneity might be a useful concept for analyzing how machines work, but it is not a very convenient way of expressing practical programs. I think that we can devise better programming languages for dealing with truly concurrent things - things like NPCs in games, internet nodes, and actuators in robots.
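For flavor, here is a toy Python sketch (the names are mine and purely illustrative): an NPC modeled as a component that owns its own state and reacts to stimuli arriving on an inbox at unpredictable times, instead of being stepped in lock-step by the rest of the program.

```python
# An NPC as a component: private state plus an inbox of stimuli.
# The outside world runs on its own schedule and just sends messages.
import queue
import random
import threading
import time

def npc(inbox):
    mood = "calm"                      # private state; nobody else can touch it
    while True:
        stimulus = inbox.get()         # block until the outside world acts
        if stimulus == "quit":
            break
        mood = "angry" if stimulus == "attack" else "calm"
        print(f"NPC saw {stimulus!r}, mood is now {mood}")

def world(inbox):
    for stimulus in ["footstep", "attack", "footstep"]:
        time.sleep(random.random())    # stimuli come whenever they come
        inbox.put(stimulus)
    inbox.put("quit")

inbox = queue.Queue()
threading.Thread(target=world, args=(inbox,)).start()
npc(inbox)                             # the NPC's whole life is its event loop
```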
In 2024, it no longer makes sense to share memory in most applications. The issues of “thread safety” are best dealt with by ensuring that reprogrammable electronic machines cannot share memory by default. If you need to share memory, you still have to solve the associated problems, but most applications should not need to pay for solving your shared-memory problems.
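Compare the lost-update sketch above with a message-passing version (again, just an illustrative toy): one process owns the counter, everyone else sends it messages, and there is nothing left to make “thread safe”.

```python
# No shared memory: one process owns the counter, the others send "bump"
# messages over a queue. The result is deterministic; no locks needed.
from multiprocessing import Process, Queue

def owner(inbox, replies, expected):
    counter = 0                 # private state; only this process touches it
    for _ in range(expected):
        inbox.get()             # each message means "add one"
        counter += 1
    replies.put(counter)

def sender(inbox, n):
    for _ in range(n):
        inbox.put("bump")

if __name__ == "__main__":
    inbox, replies = Queue(), Queue()
    keeper = Process(target=owner, args=(inbox, replies, 200_000))
    senders = [Process(target=sender, args=(inbox, 100_000)) for _ in range(2)]
    keeper.start()
    for s in senders:
        s.start()
    for s in senders:
        s.join()
    print(replies.get())        # always 200000: nothing to race on
    keeper.join()
```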
What Is The Solution?
In general, I think that the solution to most of today’s problems is to use message passing and to create IDEs that allow multiple programming languages - notations - to be used in creating applications.
I discuss and show how to do this elsewhere, using technologies that I call “0D” (for Zero Dependency), FIFOs, closures, etc. Some POCs[5] can be seen in several GitHub repositories, like https://github.com/guitarvydas/0D, https://github.com/guitarvydas/zd-in-python, https://github.com/guitarvydas/zd-in-cl, etc. Multiple notations can be created with t2t[6] technologies like https://github.com/guitarvydas/t2t and OhmJS https://ohmjs.org/, and with techniques like transpiling DaS[7] to JSON, etc.
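For a taste of the idea, here is a toy, single-threaded sketch in the spirit of 0D (the names and details are illustrative, not the actual API of those repositories): components are closures that consume from an input FIFO and append to an output FIFO, and a trivial scheduler steps whichever component has input waiting. No component can see another component’s memory.

```python
# Components are closures over (inbox, outbox) FIFOs. A component "fires"
# when a message is waiting on its inbox; otherwise it does nothing.
from collections import deque

def component(transform, inbox, outbox):
    def step():
        if inbox:                                  # message waiting?
            outbox.append(transform(inbox.popleft()))
            return True                            # made progress
        return False
    return step

# Wire a two-stage pipeline: source -> doubler -> stringifier -> output
source, a_to_b, output = deque([1, 2, 3]), deque(), deque()
doubler = component(lambda x: 2 * x, source, a_to_b)
stringifier = component(lambda x: f"value={x}", a_to_b, output)

# Trivial scheduler: keep stepping until nobody can make progress.
parts = [doubler, stringifier]
while any(step() for step in parts):
    pass

print(list(output))  # ['value=2', 'value=4', 'value=6']
```

Because the only coupling between components is the FIFOs, rewiring the pipeline - or swapping the notation used to describe each component - does not disturb the others.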
See Also
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: https://guitarvydas.github.io/
Blog: https://publish.obsidian.md/programmingsimplicity
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/Jjx62ypR
Leanpub: https://leanpub.com/u/paul-tarvydas
Gumroad: https://tarvydas.gumroad.com
Twitter: @paul_tarvydas
Footnotes
[1] We tend to call these kinds of machines “computers”. I’m not sure that that is a good name for them, since the term implies that these kinds of electronic machines should only be used for one kind of problem - compute-ation. Sometimes, I will use the term REM instead, meaning Reprogrammable Electronic Machine. I want to remember that I am dealing with a piece of electronics that must obey Physics, and not with a piece of written mathematics.
[2] When I say 1950, I mean any year/decade in the days of early computing, like 1960, 1970, 1980, etc.
[3] In an ad-hoc manner!
[4] One obvious example is the Mars Pathfinder fiasco, which resulted in the work-around called “priority inheritance”. https://www.rapitasystems.com/blog/what-really-happened-software-mars-pathfinder-spacecraft
[5] Proofs Of Concept.
[6] Text-To-Text transpilation.
[7] Diagrams as Syntax.