I am of the opinion that silicon has become much too complicated since the early machines of the 1960s and 1970s. As a programmer who just wants to engineer solutions to people's problems, and who wants to ignore over-generalization, I want chips that contain hundreds of MC6809 8-bit processors. GreenArrays is doing that kind of thing by putting 144 Forth processors onto a single chip, but I haven't checked GreenArrays out to any great extent.
An observation: in 1960, we had only bowls full of transistors. Somebody figured out how to step up to the next level and invented the concept of opcodes and CPUs, making it possible for people other than Electrical Engineers to build reprogrammable electronic machines, effectively using lots of transistors without needing to know EE or how to solder. Today, 50+ years later, we have bowls full of cheap CPUs. Who is inventing the next conceptual layer above that? The best, simplest use of a CPU is as a single-threaded sequencer. It appears to me that we just keep improving 1960s technologies instead of inventing new kinds and new layers of technology.
The general trend today is towards off-loading work to GPUs. Why do we even need complicated CPUs and programming languages anymore? I think that we need languages that let us build devices using hundreds of 8-bit CPUs. We already know how to build compilers; we don't need to keep rebuilding them. First, we need silicon that gives us hundreds of simple CPUs with queues (instead of stacks) between them. Then we need to figure out how to organize these things so that our brains don't explode. We need to build devices using hardware/software LEGO® (code libraries don't accomplish this, in my opinion). We need to build "black boxes" that anyone can use to build interesting solutions to problems that interest them. What's inside a black box? More black boxes.
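Here is a minimal sketch in Go of what I mean, assuming goroutines stand in for the cheap CPUs and buffered channels stand in for the queues between them; the `Box` type, `run`, and `compose` are invented for this illustration, not anything from GreenArrays or real silicon:

```go
// CPUs connected by queues: goroutines as the cheap processors,
// buffered channels as the queues between them.
package main

import "fmt"

// A Box is a black box: it reads from an input queue and writes to an
// output queue, knowing nothing about who is on the other end.
type Box func(in <-chan int, out chan<- int)

// run starts a box on its own "processor" (goroutine), with a queue
// (buffered channel) carrying its output to the next box.
func run(b Box, in <-chan int) <-chan int {
	out := make(chan int, 16) // the queue between this box and the next
	go func() {
		defer close(out)
		b(in, out)
	}()
	return out
}

// Two trivial boxes.
func double(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * 2
	}
}

func addOne(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v + 1
	}
}

// compose wires smaller boxes into one bigger black box.
// What's inside a black box? More black boxes.
func compose(boxes ...Box) Box {
	return func(in <-chan int, out chan<- int) {
		ch := in
		for _, b := range boxes[:len(boxes)-1] {
			ch = run(b, ch)
		}
		boxes[len(boxes)-1](ch, out)
	}
}

func main() {
	src := make(chan int, 16)
	go func() {
		for i := 1; i <= 5; i++ {
			src <- i
		}
		close(src)
	}()
	pipeline := compose(double, addOne, double)
	for v := range run(pipeline, src) {
		fmt.Println(v) // prints (i*2 + 1) * 2 for each i
	}
}
```

The point of `compose` is the black-box property: a pipeline of boxes is itself a box, so nesting comes for free, which is exactly the LEGO®-like composition that plain code libraries don't give you.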
Why didn't electronic schematics pan out as a notation for programming? Because schematics were essentially flat.
Aside: I'm staring at the 1972 schematic for Pong (no code, no CPUs) and seeing massive parallelism and a pile of spaghetti connections between parts. Hmm. Sounds like a problem. Once someone clearly identifies a problem, someone smart comes by and solves it.
Pong written in today's code ain't any simpler than 1972 Pong. We've gone sideways, not upwards. Hmm, why is that?
Observation: a big money-maker for Apple was the fact that they sold printers that contained CPUs (running Forth, er, PostScript). This opened up a new market. Instead of Gutenberg-esque typesetting machines, these things became something new and better.
Observation: GCC was the first compiler to produce code that rivalled hand-written assembler. How? GCC used Fraser/Davidson's "peephole" technology, called RTL. I think that Cordy's OCG work is even better than RTL. I built an OCG system for a client (Cognos, in Ottawa). The trick to portability between architectures is to emit code in two steps: (1) do the dumb, but correct, thing first; (2) peephole and MIST it to make it better. Peepholing is a local optimization technique. GCC also applied global optimizations (from the Dragon Book) and got really hot code.
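To make the two-step trick concrete, here is a toy sketch in Go; the instruction set, `emitAdd`, and the single store-then-reload rewrite rule are invented for this illustration and are nothing like GCC's actual RTL machinery:

```go
// Two-step code emission: (1) emit dumb-but-correct code,
// (2) run a peephole pass over it to clean up local patterns.
package main

import "fmt"

type Instr struct {
	Op  string
	Dst string
	Src string
}

// Step 1: a dumb, correct emitter. Every value goes through memory,
// even when that is obviously wasteful.
func emitAdd(a, b string) []Instr {
	return []Instr{
		{"LOAD", "r0", a},
		{"LOAD", "r1", b},
		{"ADD", "r0", "r1"},
		{"STORE", a, "r0"},
		{"LOAD", "r0", a}, // redundant reload of what we just stored
	}
}

// Step 2: a peephole pass. Slide a small window over the code and
// rewrite local patterns, e.g. a STORE immediately followed by a
// LOAD of the same cell back into the same register.
func peephole(code []Instr) []Instr {
	var out []Instr
	for i := 0; i < len(code); i++ {
		if i+1 < len(code) &&
			code[i].Op == "STORE" && code[i+1].Op == "LOAD" &&
			code[i].Dst == code[i+1].Src && code[i].Src == code[i+1].Dst {
			out = append(out, code[i]) // keep the store, drop the reload
			i++
			continue
		}
		out = append(out, code[i])
	}
	return out
}

func main() {
	naive := emitAdd("x", "y")
	for _, ins := range peephole(naive) {
		fmt.Println(ins.Op, ins.Dst, ins.Src)
	}
}
```

Note that the peephole pass never needs to understand the whole program; it only needs to recognize that a store immediately followed by a reload of the same cell is wasted work, which is exactly why step (1) is allowed to be dumb.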
Observation: getting better parallelism in silicon involves assigning private caches to CPUs. That sounds a lot like creating many computers on the same chip (see above). It appears to me that people are too focussed on instruction-level parallelism. I think we need application-level parallelism (and I think that is somehow a different thing).
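As a rough illustration of the distinction, here is a small Go sketch; the worker count and the workload are invented, and "private cache" is modelled simply as per-goroutine state that nothing else can touch:

```go
// Application-level parallelism: whole units of the application run
// concurrently, each owning its state privately, with no sharing.
package main

import (
	"fmt"
	"sync"
)

func main() {
	const nCPUs = 8 // imagine hundreds
	results := make(chan string, nCPUs)
	var wg sync.WaitGroup

	for id := 0; id < nCPUs; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Private state: never touched by any other worker, so
			// there is nothing to lock and nothing to invalidate.
			private := 0
			for i := 0; i <= id; i++ {
				private += i
			}
			results <- fmt.Sprintf("cpu %d: %d", id, private)
		}(id)
	}

	wg.Wait()
	close(results)
	for line := range results {
		fmt.Println(line)
	}
}
```

Nothing here is shared, so each "CPU" behaves like its own little computer; the parallelism falls out of how the application is carved up, not out of any cleverness inside a single instruction stream.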
See Also
Email: ptcomputingsimplicity@gmail.com
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com