I quote Jim Rootham: "The purpose of a programming language is not to tell a machine what to do, it's to tell a person what you told the machine to do."
Textual programming languages are a 1960s implementation of a 1430s invention called Gutenberg Typesetting.
The field of Computer Science is filled with very smart people who can solve just about any problem that is understood to be a problem.
IMO, "we need more/better programming languages" is not the problem. The problem is "we need better ways to explain to other people what we told the machine to do". A sub-problem is that we’d like a desk calculator thingie to check our work and tell us when our human-oriented explanation is inconsistent (akin to lawyer robots for programming).
We already know how to solve one basic problem: how to re-program electronic machines. Use banks of toggle switches, or, better, use banks of switches arranged in a QWERTY layout that produce lines of assembler code saved in rust and silicon.
We need a newer way to map human-oriented communication into assembler code. Like physicists, we need to be able to focus deeply on a single aspect of a problem until we solve it, then do that over and over again, and then combine all such sub-solutions into a single solution. Physicists do this by inventing different little notations for each sub-problem. We’re almost there: we created DSLs for laser-focusing on the problem of text parsing (BNF and friends), and we know of a bunch of different approaches to thinking about problems, e.g. functional, relational, concatenative, etc. Yet we continue to strive to fit ALL of those sub-solutions into a single notation (a “programming language”) instead of treating each separately.
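As a concrete illustration of the "little notations" idea, here is a minimal sketch: the BNF at the top is the human-facing notation for one sub-problem (parsing arithmetic), and the Python below it is a mechanical transcription of that notation. The grammar, names, and output format are my own illustrative choices, not anything prescribed above.

```python
# A tiny grammar written in its own "little notation" (BNF), followed by a
# mechanical transcription into Python. Grammar and names are illustrative.
#
#   expr   ::= term   (('+' | '-') term)*
#   term   ::= factor (('*' | '/') factor)*
#   factor ::= NUMBER | '(' expr ')'

import re

def tokenize(src):
    return re.findall(r"\d+|[()+\-*/]", src)

def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]
    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (take(), node, term())
        return node
    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (take(), node, factor())
        return node
    def factor():
        if peek() == "(":
            take()                  # consume '('
            node = expr()
            take()                  # consume ')'
            return node
        return int(take())          # NUMBER
    return expr()

print(parse(tokenize("2 * (3 + 4)")))   # -> ('*', 2, ('+', 3, 4))
```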
In the 1960s, due to hardware and software infancy, we were required to do two things at once:
write code that other people could understand
translate that same code into assembler and switch-closings.
Today, though, we can do better. We can simplify the workflow by cleaving the two concerns apart and by putting something automatic in between to maintain provenance between the two representations.
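A minimal sketch of the "something automatic in between", assuming a toy two-line notation and invented pseudo-assembler opcodes: the translator records which source line produced which output line, so provenance survives the translation.

```python
# A toy translator: human-facing notation in, pseudo-assembler out, plus a
# provenance map recording which source line produced which assembler line.
# The notation ("x = 3", "show x") and the opcodes are invented for illustration.

source = [
    "x = 3",
    "y = 4",
    "show x",
]

asm, provenance = [], []
for lineno, line in enumerate(source, start=1):
    words = line.split()
    start = len(asm)                          # first asm line for this source line
    if len(words) == 3 and words[1] == "=":   # "name = literal"
        asm.append(f"LOAD  {words[2]}")
        asm.append(f"STORE {words[0]}")
    elif words[0] == "show":                  # "show name"
        asm.append(f"PRINT {words[1]}")
    for i in range(start, len(asm)):
        provenance.append((i, lineno))        # asm line i came from source line lineno

for i, instr in enumerate(asm):
    src = next(l for a, l in provenance if a == i)
    print(f"{instr:<12} ; from source line {src}")
```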
The 1960s approach increased the complexity of the problems, because it attempted to solve two problems at once. We've been incrementally building upon that approach, under the guise of "standing on the shoulders of giants", regardless of changing problem domains and environments. The result is wild levels of complexity and bloat.
We can do better, now that we have better hardware and now that we have invented a bunch of software techniques that weren't known in 1960.
Infancy of Cleaving
UNIX gave us an early taste of how powerful the idea of cleaving notations could be. UNIX let us write solutions in various notations - various programming languages.
For some reason, the Programming Language community essentially ignored UNIX and continued to build Gutenbergian, one-language-to-rule-them-all solutions. UNIX processes run asynchronously by default, while programming-language functions provide no such default asynchrony.
Ironically, we now have a simple way to create UNIX-y processes: closures. The UNIX implementation of processes was just a heavy-handed way to build Greenspunian closures in the most self-flagellating manner possible, using a stripped-down macro-assembler called “C”. C doesn’t even mimic the operation of CPUs correctly, mis-using the name “function” for things that are really “subroutines”. C functions cause synchronous blocking, and this leads to the requirement for multi-processing operating systems. Does async and multi-tasking seem difficult to think about and to program today? Thank C (and Algol, etc.). Synchronous blocking is the antithesis of asynchronous operation.
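A minimal sketch, in Python rather than C, of the closures-as-processes claim: each unit keeps its private state in an enclosing scope and runs only when a message is delivered to it, so no unit blocks another. The scheduler, names, and message format here are illustrative assumptions, not an existing API.

```python
# Each "process" is a closure: private state lives in the enclosing scope,
# and the unit runs only when a message is delivered, never blocking others.

from collections import deque

run_queue = deque()                    # pending (closure, message) pairs

def send(process, msg):
    run_queue.append((process, msg))   # deliver asynchronously, don't call

def make_counter(name):
    count = 0                          # private state, like a process image
    def handle(msg):
        nonlocal count
        count += 1
        print(f"[{name}] saw message #{count}: {msg!r}")
    return handle                      # the closure *is* the "process"

a = make_counter("A")
b = make_counter("B")

send(a, "hello")
send(b, "hello")
send(a, "again")

while run_queue:                       # dispatcher: run whoever has mail
    process, msg = run_queue.popleft()
    process(msg)
```

The run_queue stands in for the kernel's scheduler; the point is how little machinery is needed once a "process" is just a closure plus a mailbox.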
UNIX imposed several restrictions, like:
Text only
One input, one output - stdin, stdout. Stderr and “exceptions” are just kludges swirled into the notational mix to keep the fictional text-only world working. Exceptions aren’t exceptional; they’re just messages and control-flow changes (see the sketch after this list). Forcing this kind of stuff into the functional paradigm produced a clunky, complicated result. Functions are forms of computation, and computation does not deal well with control flow.
Environment variables, PATH, global variables, etc. are just manifestations of our own laziness, i.e. text-only thinking and a refusal to use principles learned in Kindergarten - don’t colour outside of the lines. We didn’t want to specify every parameter every time, yet we wanted to “stand on the shoulders of giants” and maintain the text-only fiction, so we cheated.
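Here is a minimal sketch of the "exceptions are just messages" point: instead of one blessed output (stdout) plus kludges (stderr, exceptions), a unit gets named output ports, and an error is simply an ordinary message on one of them. The port names and wiring are invented for illustration; this is not an existing API.

```python
# An error is just a message on a second output port, not a control-flow
# escape hatch. Ports and wiring are illustrative assumptions.

def divider(pairs, outputs):
    for x, y in pairs:
        if y == 0:
            outputs["error"].append(("divide by zero", (x, y)))  # just a message
        else:
            outputs["result"].append(x / y)

outputs = {"result": [], "error": []}
divider([(6, 3), (1, 0), (8, 2)], outputs)
print(outputs["result"])   # [2.0, 4.0]
print(outputs["error"])    # [('divide by zero', (1, 0))]
```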
These restrictions weren’t perceived to be problems, hence they were never fixed by otherwise smart people. We need more smart people to figure out the root causes of problems before letting other smart people actually spend time fixing them.
What Can We Do With What We’ve Got?
After half a century of incremental development, we have a bunch of stuff, hardware and techniques, for doing something new with atoms and molecules. We’ve saddled this new technology with a very leading name - compute-er.
The stuff we’ve got is good at single-threaded sequencing of operations. It’s only so-so at controlling sequencers and distributed devices, such as nodes on the internet, robotics, video editing software, etc. We can program and re-program such things, but it all seems complicated, confusing, and error-ridden.
So, how do we invent new ways of doing things with the stuff we’ve got? In the 1950s, the “giants” did exactly what we need to be doing today. The “giants” of yesteryear only had bunches of electrically-excitable vacuum tubes and wires. They invented new ways to use sand and named the resulting devices “transistors”. Then they ended up with bunches of electrically-excitable transistors. Designs began to employ so many of them that the “giants” invented new “notations” for using them - ICs and CPUs and assembler code.
Today, we have ended up with so many of these compute-er thingies that we’re facing new problems: how to coordinate them. Not just how to re-program one of them, but how to build networks of these things and how to re-program those networks.
#1 problem - allow programmers to program using multiple paradigms, not just the single FP paradigm, realizing that it is no longer 1960.
#2 problem - how to program and re-program networks of distributed compute-er devices.
Gedanken Experiments
Let a normal human CEO - non-programmer - walk into a room and explain a “vision” to their employees. The room comes equipped with a QWERTY laptop and a whiteboard.
What will the human gravitate towards for explaining the “vision”?
Will the CEO scribble down every detail of operation, or, will the CEO leave most of the details to the employees, trusting them to sufficiently explore all of the nooks and crannies of the “vision”? CEOs trust managers, managers (recursively) trust other managers, lowest-level managers trust Engineers, Engineers trust tradespeople...
If a human draws a couple of boxes on the whiteboard, connected by lines and arrows, what does that mean? Why does everyone - except programmers - understand what that means? Synchronous behaviour vs. asynchronous behaviour. Sequencing and data flow. Programmers are simply fooled into thinking that asynchronous boxes and wires can be thought about and implemented using synchronous, blocking functions. [Actually, programmers can implement asynchronous thingies this way, but it’s so hard that programmers want to escape from these kinds of mental prisons as soon as possible, and they simply abandon all hope of inventing better thingies, overly influenced by the FP meme.]
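A minimal sketch of the whiteboard diagram as a program: the boxes are parts, the arrows are wires, and the diagram itself is just data handed to a tiny dispatcher that delivers messages along the wires instead of making blocking function calls. The part names and wiring here are invented for illustration.

```python
# The whiteboard diagram as data: boxes are parts, arrows are wires, and a
# tiny dispatcher delivers messages along the wires asynchronously.

from collections import deque

def upcase(msg, out):
    out(msg.upper())

def exclaim(msg, out):
    out(msg + "!")

def printer(msg, out):
    print("printer received:", msg)

parts = {"upcase": upcase, "exclaim": exclaim, "printer": printer}
wires = {"upcase": "exclaim", "exclaim": "printer"}   # the arrows

queue = deque([("upcase", "hello")])   # inject one message into the network

while queue:
    name, msg = queue.popleft()
    dest = wires.get(name)             # where this box's output arrow points
    def out(m, dest=dest):
        if dest is not None:
            queue.append((dest, m))    # send along the wire, asynchronously
    parts[name](msg, out)
```

Note that nothing here blocks: a part never calls another part directly, it only posts messages - which is exactly what the arrows on the whiteboard already said.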
Why can 5 year olds understand and learn hard real-time notation (sheet music, music lessons), yet, PhDs think that this is a hard, complicated problem?
See Also
Email: ptcomputingsimplicity@gmail.com
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com
https://cscabal.slack.com/archives/C010W33VAF7/p1748584686039769
> Textual programming languages are a 1960s implementation of a 1430s invention called Gutenberg Typesetting
> Why can 5 year olds understand and learn hard real-time notation (sheet music, music lessons), yet, PhDs think that this is a hard, complicated problem?
Are you saying that programming is missing the equivalents of the 5-line staff and the integral sign? Basically, more compact and/or spatially oriented ways of reading/writing programs?