I believe that programming languages are simplistic IDEs for programming. Let me say that again: programming languages are IDEs for programming.
The concept of PLs was invented in the 1950s/1960s when the best that hardware could do was to consume and display Gutenberg-inspired, type-set text.
Today, we have hardware that can do better. We can display graphics. Scalable graphics, vector graphics, 4D (x/y/z/t), etc. We have many more kinds of input devices.
The first glimmer of better-than-PL programming was UNIX pipelines.
I conclude that the best programming language is /bin/*sh, not C, Haskell, Rust, JS, Python, etc.
PLs are IDEs for developers. PLs help program developers create assembler that runs on non-programmers’ hardware. Developers use more capable and more expensive machines than non-programmers need to use. Through marketing, though, programmers managed to convince the non-programming public, regardless of I.Q. and level of education, that they need to pay for and use exactly the same machines as developers use. This, sort-of, saves development effort, at the expense of amortizing development costs across the rest of the world’s population.
Programmers must be allowed to iterate on their designs as rapidly as possible. I call this MVI, for Minimum Viable Implementation.
MVI skimps on efficiency and lets designers iterate and “go back to the drawing board” to play around with a problem space and to get a better understanding of what’s actually needed.
The commonly accepted form of skimping is MVP - Minimum Viable Product. Instead of speeding up the design cycle by eliding concerns for efficiency for non-programmer targets, MVP skimps on product design based on what programmers can do in limited time with the current crop of PLs. This unduly warps designs and requires crutches such as CI - Continuous Integration - to allow for actual design improvements down the road.
It used to be the case that Q/A departments made sure that a design was useful to non-programmers. With CI, programmers can skip Q/A and ship buggy, insufficient designs (not to mention shipping buggy code).
A suggested development workflow might be: after getting a design solution to work the way that is needed (by non-programmers), the project is submitted to Production Engineering for release.
What programmers do now, instead, is to hard-wire Production Engineering concerns into every line of application code. This stunts one’s ability to Design good solutions. Design Engineering and Production Engineering are conflated into the same workflow, resulting in longer turn-around times and reticence to change designs at too early a stage.
We need cross-compilers that optimize MVIs into products. We need techniques that make it easier for Production Engineers to understand what the Design Engineers and Architects want in the final shipping product. We need ways to communicate Designs to Production Engineers so that Production Engineers can do their jobs faster and more efficiently. We need to map Design to the final Product - “provenance” - so that Design Engineers and Architects can see that their ideas are being faithfully implemented, and, so that Design Engineers and Architects can change their minds without blowing production schedules out of the water.
I think that this means chopping Designs up into isolated, understandable layers. Changes to one part of a Design should not, ever, affect the behaviour of other parts of the Design. Sure, changes to inter-Part APIs will affect all Parts that use that particular API, but removing and adding functions, or tinkering with efficiency, should not substantially affect the Design. OO and FP techniques fail at this by treating apps as intricately designed systems of interlocking gears. Big, earth-shaking changes can still be made using a process that other Engineers use - “Change Orders”. Maybe a Production Engineer spots something that will drastically affect performance. Such a change gets bubbled up to the Architects and Design Engineers as a suggestion and comes back as a Change Order. The Architects and Design Engineers must approve every such earth-shaking design change, and revamp and retest the design, before bubbling it back down to Production.
In essence, MVI Architects and Design Engineers need to treat software components as LEGO® blocks. To achieve LEGO® blocks in software, we need input and output APIs. Software blocks must not call other software blocks in a willy-nilly manner, causing blocking, as is currently done with functions.
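To make that concrete, here is a minimal sketch in Python of what I mean by input and output APIs. The names (`Block`, `send`, `step`, `inq`, `outq`) are invented for this sketch, not part of any existing library: a send only enqueues a message, it never calls into another block and never blocks.

```python
# Sketch (hypothetical names): a software block with explicit input and
# output queues. Sending is just enqueuing - no call into another block.
from collections import deque

class Block:
    def __init__(self, handler):
        self.handler = handler    # reaction to one input message
        self.inq = deque()        # input API: messages arrive here
        self.outq = deque()       # output API: messages pile up here

    def send(self, msg):
        # Non-blocking: queue the output, never call another block.
        self.outq.append(msg)

    def step(self):
        # Handle at most one pending input message.
        if self.inq:
            self.handler(self, self.inq.popleft())

double = Block(lambda b, m: b.send(m * 2))
show   = Block(lambda b, m: print(m))

# Some outside router - not the blocks themselves - moves messages around.
double.inq.append(21)
double.step()
show.inq.extend(double.outq)
show.step()                       # prints 42
```

Note that `double` and `show` never mention each other; the routing of messages is somebody else’s job, which is what makes the parts swappable.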
My first exposure to this kind of LEGO®-block behaviour and component isolation was UNIX® pipes. Each command in a pipeline is spawned as a separate process. At the time, spawning a process was very slow and was deemed “inefficient” by the PL community, which merrily continued to build more-of-the-same, function-based programming languages with Gutenberg-inspired syntaxes.
The PL community tends to believe that their PLs must be one-size-fits-all affairs, giving developers both the power to create efficient assembler code and, at the same time, the means for coming up with designs. This approach - if you step back and simply observe - is resulting in bloatware and in a continuous complication of simple ideas, like concurrency. This ultimately results in gotchas like callback hell. We’re just waiting for the other shoe to drop on the current fad of async/await over-use and CPS over-use. Yes, techniques like virtual memory are useful for dealing with huge data, but such techniques could be custom-applied instead of asking every application, no matter how small, to pay the cost of preemption.
Today, spawning processes isn’t such a bad thing - it happens quite quickly. Certainly quickly enough for developers to use in streamlining their own workflows. Actually, end products would run much, much faster and cheaper if programmers didn’t need to ship bloated operating systems for handling developer-specific doo-dads like virtual memory, preemption, etc. to unsuspecting non-programmers.
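As a rough illustration (assuming a Unix-like system with `ls`, `grep`, and `wc` on the PATH), here is a Python sketch that builds the equivalent of `ls | grep py | wc -l` by spawning three isolated processes joined by OS pipes, and times the whole thing:

```python
# Sketch: spawn three separate processes, connect them with OS pipes,
# and measure how long spawning + running the little pipeline takes.
import subprocess
import time

start = time.time()
ls   = subprocess.Popen(["ls"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "py"], stdin=ls.stdout, stdout=subprocess.PIPE)
wc   = subprocess.Popen(["wc", "-l"], stdin=grep.stdout, stdout=subprocess.PIPE)
ls.stdout.close()    # let ls receive SIGPIPE if grep exits early
grep.stdout.close()  # same for grep if wc exits early
out, _ = wc.communicate()
print(out.decode().strip(), "matching entries")
print(f"spawned 3 processes in {time.time() - start:.3f}s")
```

On a typical modern machine this finishes in a handful of milliseconds - cheap enough to lean on routinely in a developer’s own workflow.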
At one time, non-programmer end-products were based on the “cartridge” system. Gamers would buy games on ROM cartridges and would physically plug them into their gaming consoles. Game software got full reign over the machines and didn’t need to resort to using bloated operating systems. Game software was, consequently, simpler to write and was Q/A’ed before being sold. Game software that crashed didn’t get updated on a quarterly basis; it just lost market share.
0D is an attempt at modernizing UNIX®’s LEGO® block software mentality with new syntaxes and more efficient techniques - like the ideas of closures, 1st-class functions, queue libraries, etc. Stuff that wasn’t readily apparent in the 1960s. The idea is that programmers should be able to use many paradigms, each with its own highly tuned programming language / programming notation. It turns out that this is embarrassingly simple to accomplish. It mostly requires a mindset change rather than any new highfalutin technologies.
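To show how little machinery this takes, here is a toy kernel sketch in Python. The names (`parts`, `wires`, `run`) are invented for this sketch and are not the actual 0D code: leaf parts are plain closures, all inter-part traffic goes through queues, and a wiring table - not function calls - decides where outputs are delivered.

```python
# Toy "0D-style" kernel sketch (made-up names, not the real 0D code):
# closures as leaf parts, one input queue per part, routing driven by data.
from collections import deque

def printer(msg):
    print(msg)
    return []                                      # produces no further output

parts = {
    "double": lambda m: [m * 2],
    "shout":  lambda m: [f"{m}!"],
    "print":  printer,
}
wires = {"double": ["shout"], "shout": ["print"]}  # the "diagram"

def run(parts, wires, first_part, first_msg):
    queues = {name: deque() for name in parts}     # one input queue per part
    queues[first_part].append(first_msg)
    busy = True
    while busy:
        busy = False
        for name, handler in parts.items():
            if queues[name]:
                busy = True
                outputs = handler(queues[name].popleft())
                for dst in wires.get(name, []):    # routing is data, not calls
                    queues[dst].extend(outputs)

run(parts, wires, "double", 21)                    # prints "42!"
```

Re-routing or swapping a part means editing the wiring table, not editing the other parts - which is exactly the LEGO®-block property described above.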
See Also
Email: ptcomputingsimplicity@gmail.com
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com
> programming languages are IDEs for programming.
I like that.
> The concept of PLs was invented in the 1960s
Fortran, Algol, Lisp were all 1950's: https://pldb.io/lists/explorer.html#searchBuilder=%7B%22criteria%22%3A%5B%7B%22condition%22%3A%22%3E%22%2C%22data%22%3A%22appeared%22%2C%22origData%22%3A%22appeared%22%2C%22type%22%3A%22num%22%2C%22value%22%3A%5B%221940%22%5D%7D%5D%2C%22logic%22%3A%22AND%22%7D&order=3.asc