Textual languages are 1950s IDEs for programming.
The necessity of using text was driven by the fact that early computer hardware could, at best, arrange small, fixed-size bitmaps (“characters”) in non-overlapping grids. Such hardware grids were inspired by Gutenberg’s Printing Press of the 1400s. Video screens - “glass teletypes” - were developed to mimic this arrangement.
Subsequently, higher-level textual programming languages were invented to relieve the cognitive load on programmers generating Assembler. Such languages were all inspired by Assembler, i.e. line-by-line sequencing. Today, we can do better than this. Even old-fashioned textual languages, like Prolog, prove this point. Prolog isn’t sequenced on a line-by-line basis, even though its syntax looks a lot like functional syntax. Prolog is declarative.
Assembler is the only language that can be used for programming CPUs, but, human programmers don't need to be bound by line-by-line sequencing in their development tools, as long as their notations can eventually be transpiled/compiled into Assembler.
In the 1970s, programming IDEs advanced to include /bin/sh, pipelines, and UNIX.
Programmers could write code units as line-by-line sequences written in textual programming languages, like C, then join the software units together in asynchronous combinations using pipelines and processes.
The drawback of this scheme was that programmers were encouraged to make software units with only 1 input and 2 outputs (stdin, stdout, stderr, respectively). Other combinations were possible, but, were not encouraged due to the textual nature of /bin/sh.
Another drawback is that the transport mechanism between software modules was over-specified. One 8-bit character, newline, was given special status. This was a reasonable choice at the time, since most UNIX tools were built to inherently understand data in terms of lines delimited by the special character. Many UNIX tools were built to handle the problem of text processing. As text processing evolved, though, the distinction between binary data and text was blurred. Both kinds of data were intermingled, resulting in a lack of separation of concerns (see below).
These drawbacks should have been only temporary stepping-stones to greater ideas, but, programming progressed backwards by moving pipeline-like asynchrony into programming languages, in search of unifying pipeline IDEs and programming language IDEs into the lowest common denominator: 1950s textual IDEs.
This trend resulted in overly-complicated concepts, like thread safety, 1950s time-sharing rebranded as “concurrency”, callback hell, ‘await’ (chopping synchronous code up into little state machines), CPS (GOTO deja vu all over again), bloatware, etc.
In 2025, what are the IDEs for programming? Textual languages. Still. Again? Programmers have baubles like better editors with syntax colouring, to ensure that certain fixed-size, non-overlapping bitmaps stand out in the resulting walls of noise. Programmers, also, have better and better ways to step, line-by-line, through sequences of low-level control flow details.
Programmers blithely talk about “asynchrony” and “parallelism”, but, continue to use synchronous, textual programming languages which oppose the very concept of asynchrony.
The very word “synchronous” is the opposite of “asynchronous”.
Programmers glued “synchronous” languages on top of already-asynchronous electronics, and, now are trying to glue the concept of asynchrony back onto those very same synchronous languages that are glued on top of already-asynchronous hardware. Bloatware? You bet.
Computer Science has evolved from being about programming CPUs into an investigation into writing hoarier and hoarier combinations of textual code to satisfy an old-fashioned, pen-and-paper notation ostensibly meant for a concept called “computation”.
Is programming becoming easier? It’s only been half a century. How’s that working out?
In 1972, Atari produced a piece of electronics called Pong[1]. In 2025, we can produce the same thing in software using programming languages like Lua[2]. As far as I can tell, the 2025 Pong isn’t simpler than the 1972 Pong; it’s just different. 2025 Lua Pong is massively synchronous whereas 1972 Pong is massively asynchronous. Apparently, simple cave-dwellers in 1972 managed to program a piece of hardware that was highly asynchronous without needing to use synchronous programming languages at all.
Today’s computers are small and cheap and abundant and don’t need to be time-shared. In the 1950s, computers were huge and very expen$ive, hence, programmers felt that it was reasonable to waste human-time in designing ways to time-share these things. We don’t need to do that any more, since we have access to inexpensive Arduinos, RPIs for building IoT, robotics, etc.
What Has Been Learned?
Programmers have, indeed, learned at least a few things over the past half-century:
DRY - Don’t Repeat Yourself
One way to solve bigger problems is to make programs bigger by wasting valuable chip space to throw massively more memory at problems. [OTOH, why bother? See below re. off-loading work to auxiliary processors]. But, that’s only one way. What are the other ways?
Moore’s Law did apply to hardware progress, but, did not apply to software progress.
Hardware, based on principles of massive parallelism, progressed faster than software which is based on principles of massive synchrony.
Hardware can now support concepts like 2D and 4D graphics - “windows”, “vector graphics”, “Blender”, “Flash”, etc. “4D” is just “3D + 1” (x/y/z/t).
Unlike Gutenberg’s type-settable types (“characters”), glyphs can be resized and stretched.
Text is only a figure, a glyph. SVG treats text as figures on an equal footing with things like rectangles and ellipses.
If one insists on using low-level memory sharing, one must invent doo-dads like L1, L2, L3 caching to inch ever closer to the asymptote of on-chip parallelism. Today, though, it is cheap enough to develop devices with off-chip parallelism, using Arduinos, and the like, in little-network configurations.
Networks of distributed nodes, like computers on the internet, cannot share memory by default. Such nodes are already “thread safe” by default. The concept of context-switching doesn’t apply to the choreography of networks of distributed nodes.
Full-blown function-based programming needs MMUs and context-switching for scaffolding. These days, a context switch takes something like 10,000 instructions of O/S code for implementation. That’s approximately 500-1000 LOC.
Green threads - no preemption - are favoured over full-blown preemptive context-switching, due to increased efficiency. But, green threads allow program sub-units to stomp on each other’s memory. There is a word for this kind of problem - “bug”.
GPUs
It is possible to design specialized electronic chips to offload work from CPUs.
The idea of “general purpose programming languages” becomes moot, since the real work is done elsewhere, i.e. on GPUs. Why, then, would programmers need a lot of “power” in CPU-based programming languages? Why bother forcing programming languages to time-share code instead of just punting work off to auxiliary processors?
Separation of Concerns, Divide and Conquer
Big programs, like compilers, can be divided and conquered by chopping the problem up into many very-much-smaller units, like:
scanning (string matching)
parsing (token pattern sequencing)
semantics checking (which itself consists of at least 2 pieces - collecting information, then checking it)
allocation
code emission
Notations like Ohm[3], Statecharts[4], S/SL[5], etc. preserve locality of reference and forbid sullying of expression through incorporation of too many details about unrelated concepts.
Ohm physically separates “grammar” from “semantics”. The grammar is written in one file or string; the semantics are written as Javascript functions in another part of the file, as sketched below.
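Here is a minimal sketch of that separation, using the published ohm-js npm package. The grammar (“Arith”) and the operation name (“eval”) are invented for illustration; only the general shape of the API matters here.

```javascript
const ohm = require('ohm-js');

// Concern #1: the grammar - pure shape, no semantics.
const g = ohm.grammar(`
  Arith {
    Exp    = Exp "+" number  -- plus
           | number
    number = digit+
  }
`);

// Concern #2: the semantics - plain Javascript functions,
// written separately and attached to the grammar afterwards.
const semantics = g.createSemantics().addOperation('eval', {
  Exp_plus(left, _op, right) { return left.eval() + right.eval(); },
  number(_digits) { return parseInt(this.sourceString, 10); },
});

const m = g.match('1+2+3');
if (m.succeeded()) {
  console.log(semantics(m).eval()); // 6
}
```

Note that the same grammar could be handed to a different set of semantic functions - an emitter, a type-checker, an optimizer - without touching the grammar itself.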
Statecharts pull out control flow into another notation, while leaving textual notation for action code.
S/SL is essentially a data-less language that strictly separates high-level operations and control flow from low-level implementation details. In S/SL, low-level details are expressed in an entirely different language (C, Pascal, etc.). S/SL is ostensibly a language for “parsing”, containing approximately 10 operations geared towards “parsing” and only about 1 operation for “semantics” (invoking operations in “mechanisms”), but, that 1 operation is the hidden gem of S/SL.
S/SL was built with 1970s pipelines in mind.
S/SL does not require building in-memory ASTs, instead relying on stream-parsing (which is called “syntax-driven translation”), hence, allowing for a drastic reduction of memory footprint. S/SL was designed this way out of necessity in the day of 40-pass FORTRAN compilers, but, today’s programmers can learn to preserve valuable chip space by using these very concepts. More 6502s and 6809s on a chip “is a better idea” than bigger L1, L2, L3 cache sizes, in this age of the internet.
OOP for semantics structuring
The preferred use of OOP is for structuring high-level concepts and for hiding implementation details.
OOP can be used, instead, for code-sharing.
This is akin to the original purpose of CALL/RETURN in Assembler.
Assembler’s CALL/RETURN seems to be a driving force behind the design of old-fashioned concepts like textual programming languages.
Type annotations, as we know them, sully code by inserting type-checking details directly into the code instead of putting the annotations somewhere else. Why not just insist that every parameter have a different name, then, relate those names to types at the bottom of the text? (See the sketch below.)
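One possible reading of that idea, sketched in plain Javascript. Everything here - the `types` table, the `checked` wrapper - is invented for illustration; it is not a real library, merely a demonstration that the annotations can live at the bottom of the text instead of inside the code.

```javascript
// The code itself stays free of type annotations.
function area(width, height) {
  return width * height;
}

function greet(userName) {
  return `hello, ${userName}`;
}

// The "bottom of the text": one place that relates parameter
// names to types. (Hypothetical convention, for illustration.)
const types = {
  width: 'number',
  height: 'number',
  userName: 'string',
};

// A tiny runtime checker: wrap a function, look up each of its
// parameter names in the table, and check the arguments.
function checked(fn) {
  const params = fn.toString()
    .match(/\(([^)]*)\)/)[1]
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean);
  return (...args) => {
    params.forEach((name, i) => {
      const expected = types[name];
      if (expected && typeof args[i] !== expected) {
        throw new TypeError(`${name} should be a ${expected}`);
      }
    });
    return fn(...args);
  };
}

console.log(checked(area)(3, 4));     // 12
console.log(checked(greet)('world')); // hello, world
```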
Functional programming is like OOP in this respect. FP can be used to structure high-level concepts and high-level types, and it can also be used to over-specify details and low-level types. There is no inherent restriction, nor inherent separation of concerns, in the use of FP for high-level vs. low-level programming. This concept was called “structured programming” in the days of GOTO-full programming. We no longer use GOTOs, but we do still mix high-level detail with low-level detail in the same program using “general purpose” programming languages.
YACC fails at separation-of-concerns. YACC requires programmers to mingle unrelated semantics-checking, allocation, emitting, and optimizing code within a grammar specification. All “compiler tools” inspired by YACC have this same failing.
PEG libraries, and, other manifestations of PEG, fail at separation-of-concerns. Unlike Ohm, they require programmers to mingle unrelated semantics-checking code, allocation code, emitting code, and optimizing code directly within grammar specifications, hence, obfuscating and muddying otherwise simple grammars.
Programmers can ship buggy code and fix it later, using CI (Continuous Integration).
Companies can ship poorly-designed products and fix them later using CI.
Users will pay for the privilege of using buggy products and will act as negative-cost Q/A departments for companies which have laid off their own Q/A departments.
Blockchain: state machines are a useful concept.
New problem space. In the 1950s, one central CPU was considered the goal - hence, the name Central Processing Unit. Today, we want decentralization, and, we want many distributed computers, e.g. on the internet, in IoT, robotics, etc. Side-note: standing on the shoulders of giants is only a valid approach to problem-solving when the giants happened to solve problems in the past that are comparable to the problems of the present.
Too much can be expressed as “computation” resulting in exponentially more work. For example, the functional paradigm is best at expressing pure functions and for implementing “calculators”, but, the functional paradigm has been stretched way out of its sweet-spot to express foreign constructs like control flow and sequencing.
What Have We Got, Today?
What have we already got, that we can use?
Today, most popular programming languages are textual in nature.
Compiler techniques are based on text manipulation. The text is - mostly - composed in a nested / structured manner, using matched brackets, e.g. “{...}”¹, to express scopes and encapsulation. Programmers have tools based on concepts of REGEXs, CFGs and PEGs for dealing with such nested languages. In 2025, tools for parsing, transpiling and compiling such languages wildly exceed what was available half a century ago.
We have lots more data structures and techniques. Data structures are helpers that let programmers create less buggy assembler code. Data structures help programmers think and structure their programs whilst creating assembler code. End-users don’t need data structures, but, programmers do need data structures.
We have several indentation-based notations, like Python and markdown. Most parsing tools are just catching up to the concepts of indentation-based syntax.
Programmers seem to think that type-checking is a must. Programmers think that if they want “efficient” code, they must use “assembler”. Hence, the concept of type-checking is being conflated with the concept of assembler, despite the fact that these are two completely orthogonal concepts. The orthogonal axes are being bent out of shape and crushed into a single ball of complication, in violation of principles of divide-and-conquer.
The Lisp language, developed in the 1950s, is revered by some programmers and hated by a larger majority of programmers. Common Lisp is essentially a big bag of functionality with lots of experience. Common Lisp conflates the orthogonal concepts of static compilation for Production Engineering with dynamic programming for Design Engineering. Lisp syntax is very convenient from a machine-readable perspective, because it has so little syntax to get in the way of code generators.
The most interesting aspect of Lisp is that it proves that a programming language can have a recursive syntax instead of a line-oriented syntax. Many compiler parsing tools, based on BNF and Kleene operators, deal with line-oriented syntaxes instead of recursive syntaxes. Despite this, existing BNF-like tools can be used to parse Lisp-like recursive syntax using recursive parsing productions whilst avoiding Kleene-like operators. Lisp’s recursive syntax is quite different from assembler’s line-oriented syntax, but, line-oriented-assembler CPUs can easily accommodate and run Lisp code. One might note that SVG, graphML[6], HTML, XML, etc. are more Lisp-like in nature than line-oriented programming languages. SVG, graphML, HTML, XML, etc. are easy to parse using existing tools, especially when using recursive production rules. A lot of display-only information can be culled from SVG, graphML, etc. to make the files - and in-memory ASTs - smaller. Stream-parsing or recursive-descent parsing can be used for culling, since culling doesn’t need to build in-memory ASTs. (See the sketch below.)
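As a minimal sketch of recursion doing the work that Kleene operators and line-orientation usually do, here is a recursive-descent reader for a Lisp-like syntax, in Javascript. All names are invented for illustration. The single rule `parseExpr` calls itself for nested lists - something a line-oriented REGEX cannot do.

```javascript
function parse(src) {
  let pos = 0;

  function skipSpaces() {
    while (src[pos] === ' ') pos++;
  }

  function parseExpr() {
    skipSpaces();
    if (src[pos] === '(') {           // a list: recurse until ')'
      pos++;                          // consume '('
      const items = [];
      skipSpaces();
      while (pos < src.length && src[pos] !== ')') {
        items.push(parseExpr());      // recursion handles the nesting
        skipSpaces();
      }
      pos++;                          // consume ')'
      return items;
    }
    let atom = '';                    // an atom: read until a delimiter
    while (pos < src.length && !' ()'.includes(src[pos])) {
      atom += src[pos++];
    }
    return atom;
  }

  return parseExpr();
}

console.log(parse('(define (double x) (* 2 x))'));
// [ 'define', [ 'double', 'x' ], [ '*', '2', 'x' ] ]
```

Culling could be done in the same pass: simply don’t push the items you don’t care about, and no full in-memory AST is ever built for them.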
Lisp, also, proves that programmers can program using ASTs². Research into projectional editors strives to invent underlying representations of ASTs, while tending to overlook Lisp, which already exists.
Hardware is much faster than it was a half-century ago.
Data is exponentially fatter than it was a half-century ago, with the result that more interesting detail can be had and can be rendered.
Software is exponentially fatter than it was a half-century ago, too. With what result? You don’t need exponentially fatter software for handling exponentially fatter data. Growth in data size should not be directly related to growth in software size. Faster hardware and near-infinite memory sizes help to hide this fact.
Today’s software workflow lacks application of both halves of the Scientific Method (see below). It seems that, today, “science” and “programming” are being dealt with in a non-scientific manner, using only one-half of the full equation.
Today, there are huge libraries of existing code. And, repositories for such libraries, e.g. github. Most of the code in these libraries has been built on top of only the function-based paradigm and on CALL/RETURN, hence, is quite brittle and lacks low-level isolation. We have built up the concepts of “dependencies” and “package managers” to allow programmers to ship code that relies on code in repositories, which results in versioning-hell and over-complicated solutions to namespacing problems.
HTML is the unification of GUI technology. Available to all. HTML comes with a built-in scripting language/assembler (Javascript) to handle cases that go beyond the scope of the original design of HTML. Javascript is basically as powerful as Lisp, but maybe not as experienced. Or, maybe Javascript is even more experienced than modern Lisps? Neither Javascript nor Lisp has strict type-checking, but, that only makes them more like “assembler” and more amenable to machine generation. Type-checking is just a helper for human developers. Type-checking is applied as a syntactic skin over basic unchecked operations.
Today, we have browsers that handle SVG, HTML, graphML, etc. The majority of programmers don’t need to build GUIs any more.
We have high-level editors that handle SVG, HTML, graphML, etc. The majority of programmers don’t need to build diagram editors any more.
Sector Lisp[7], BLC[8] are modern, really, really small languages. Rhetorical question - what makes these languages so much smaller than other offerings? It can’t all be due to just assembler tricks. Corollary: if the size decreases are due only to assembler, then we simply need to install these tricks into LLVM and all of our programs would instantly be only hundreds of bytes large.
Today, programmers can freely build simple queues in just about any programming language.
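For instance, a minimal first-in-first-out queue in Javascript - nothing here is library-specific; it is just a sketch of how little is needed:

```javascript
// A minimal first-in-first-out queue.
class Queue {
  constructor() { this.items = []; }
  put(item) { this.items.push(item); }     // enqueue at the tail
  take() { return this.items.shift(); }    // dequeue from the head
  get empty() { return this.items.length === 0; }
}

const q = new Queue();
q.put('first');
q.put('second');
while (!q.empty) console.log(q.take()); // first, second
```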
Garbage collection. Today’s computers have near-infinite memory. Why, then, do we still have Garbage Collection? This is a display of the power of divide-and-conquer and separation of concerns. Eliding details allows programmers to spend brain-power on more lofty ideas. Programmers don’t actually need Garbage Collection for technical reasons, but, they do need Garbage Collection for psychological reasons.
REGEX. REGEX proves that “compiler technology” can be used in non-compiler programming and that it can be embedded in IDEs and programming languages. REGEX, though, is based on line-oriented ideas and should not be used to parse non-line-oriented constructs like SVG, especially when today’s programmers have tools like OhmJS. Unfortunately, the existence of easy-to-use REGEXs fools programmers into using the wrong tools, and, worse, limits the imagination of programmers. For example, concepts like parsing graphML diagrams are overlooked because REGEX is not up to the task of (easily) parsing such constructs³.
There is a deep belief that there is only one way to program - i.e. through the use of functions and through the use of existing general purpose programming languages.
Functions, though, imply the need for context-switching. Pure functions rely on recursion and tend to hide details such as limited memory, stack size, and timing issues. Most of these issues can be swept under the rug by sacrificing software efficiency (hiding the necessary extra code in operating systems) and chip real-estate to erect the scaffolding needed to support context-switching and preemptive scheduling.
The word “function” in programming is conflated with the same word used in mathematics. The use of mathematical techniques for manipulating “pure functions” is a fruitful approach for programming some kinds of problems. Yet, the use of this technique does not exclude the use of other kinds of approaches, like state machines, declarative programming, etc. Given that CPUs are inherently stateful, inherently impure in the mathematical sense (CPUs are fundamentally based on the use of mutation of RAM), and CPUs inherently involve timing and sequencing, it can be concluded that functional programming cannot (easily) express solutions to the full range of capabilities of CPU-based hardware.
The use of functions to specify sequences of assembler instructions was cemented into programming consciousness with the so-called low-level language “C”. A single step in a CPU is an opcode, not a function. The functional paradigm is useful for structuring program sequences and for making fewer mistakes when converting such sequences into assembler, but, the full gamut of programming cannot be expressed using functions. “C” cannot express the full gamut of programming, especially without the “#asm” syntactical construct. Prior to the popularization of “C”, programming languages, like FORTRAN, contained separate syntax for subroutines and for pure functions. After the popularization of C, the word “function” was conflated to mean any kind of subroutine, whether a pure function or a subroutine with side-effects. The fact that C was touted as a low-level programming language, only slightly more advanced than assembler, reinforced the idea that “functions” were the only way to program assembler, which gradually led to the idea that all “functions” needed to be pure and should not produce side-effects. This is a direct denial of the reality of how CPUs work.
Building everything, e.g. libraries of code and transport layers between such libraries, based only on the functional paradigm implies that context-switching and low-level memory sharing must be used. This discourages other avenues of thought and the creation of small footprint devices.
Programming is the act of creating assembler and of punting work to off-chip devices. This might be better expressed using notations different from that of textual functions.
Programmers appear to fundamentally believe that “visual programming” can only be achieved by converting text into diagrams, instead of the other way around.
Scientific Method
The Scientific Method is:
Research, new ideas, new theories.
Falsification of theories. Negative criticism. A fail-fast approach to proving each theory to be wrong. This leaves only the theories that can’t be disproven, still standing. This fail-fast approach ensures rapid progress while culling out ideas that lead to rat-holes.
You cannot prove a theory to be correct. You can only prove it to be wrong.
I believe that the success factors for software are not explicitly defined. For example, when learning about compilers, I was taught that “an optimization should only be applied if it reduces or maintains code size, even when the code that implements the optimization is itself included in the count”. This can be directly tested by using self-compilation.
In science, every theory needs to be accompanied by a falsifying test.
Every “idea” is but a theory. Hence, if you are going to spend time writing code to develop some new idea, you need to know how you are going to evaluate success. Failure is the best way to learn. If the idea fails, writing a paper about the failure is as valuable as - or more valuable than - writing a paper about a new idea that works.
For example, if you devise a new programming language, how do you “test” it to be better - or worse, or, no better - than other languages? Believing that you have The Answer isn’t as good as cruel, realistic evaluation of the actual result.
How do you devise a test that isn’t biased by your own belief structure? For example, type-checking-B catches more problems than type-checking-A. But, what about type-checking, itself? Can practitioners build and test and productize software faster when they use type-checking? Can practitioners skip the testing phase when they use type-checking? Are product footprints smaller when type-checking is used⁴?
Functional programming is clearly a fruitful approach, but, what are the costs of using FP? At what point is the cost of using FP too high? It appears that, today, no one blinks an eye when they ship software that needs 55,000,000 LOC as scaffolding.
A “test” might be something like: half a century ago, programmers spoke in terms of kilobytes. Today, programmers speak in terms of megabytes. A megabyte is 1,000 kilobytes. Is today’s programming workflow 1,000 times better than it was half a century ago? In what ways?
In 1972, how long did it take to develop and test and productize Pong? Today, how long does it take to develop and test and productize Pong?
How many failures-in-the-field did Pong have in 1972? How many failures-in-the-field does a modern Pong have?
When one bench-tested and released an electronic design in 1972, one expected it to have 0 (zero) failures in the field. One would even back the product up with “guarantees”. Something like, “if it fails, we’ll fix it for free”. Does today’s software come with guarantees, or just disclaimers, like “use at your own risk” and EULAs?
Has a new product niche been discovered, something like the middle ground between cheapo trinkets and medical-grade, ultra-tested devices?
Divide and Conquer - Development Tools vs. Product
Larger and larger footprints for software are often useful for development tools.
Shipped products, though, should have all development artifacts removed. This includes context-switching and memory protection - MMUs and the like.
Shipped products should not rely on the existence of full-blown operating systems nor on the existence of high-chip-footprint MMUs and caches. Certainly, these are useful tools during development, but, relying on their existence means that products are expected to be buggy and need to be fenced-off in the field. Likewise, CI (Continuous Integration) is essentially an invitation to ship buggy software.
Early games shipped on small-footprint cartridges. A cartridge would take over the whole machine, when plugged in. To run a different game, the user would unplug the current cartridge and plug in another one.
Today’s programmers essentially rely on massive operating systems to act like rotary switches in end-user products. Instead of plugging and unplugging cartridges, users are expected to use 55,000,000 lines of buried code to swap from one app to another. On top of that, their expensive hardware - full-blown laptops, suitable for developers and, at the same time, end-users - supports memory protection, which entices app programmers to skimp on testing and to ship buggy software, knowing that their code won’t harm other apps and can easily be updated with bug fixes. Bugs in code are bad enough, but, deficiencies in design are worse⁵. Both tend to be ignored and updated later with CI-delivered revisions and upgrades.
Bare Minimum
Programs need to be developed as islands of code connected by some low-level transport mechanism, almost like an internet-in-the-small or pipes or processes with non-function, event-based IPC mechanisms.
Languages that clearly separate specification from implementation (engines), like Prolog[9], Ceptre[10, 11], Nova[12, 13], etc. and UNIX pipelines are all swirling around the idea of islands of code.
Layering. Software needs to be built in layers. Today, software is essentially constructed as huge blobs of highly interconnected functions. Partial solutions, like namespacing, DLLs, etc., have evolved to mitigate the various gotchas that result from this lack of layering. Layers must not be tightly bound to other layers. For example, the OSI 7-layer model tends not to be easily implemented using synchronous, general purpose languages, because of the tight-coupling imposed under-the-hood by the use of function calling. This model would be trivial to implement if each layer were well-isolated and if each layer communicated with other layers via some inexpensive, non-function, event-based transport mechanism. Processes with message-queues could be useful, but the concept of processes is tainted by its association with operating systems and heavy-weight concepts like context-switching. Processes are, after all, just closures implemented in a Greenspunian fashion[14]. Just about any programmer can implement queues today in any programming language.
Mevent-sending (like message-based event-passing, but, I needed a different word to emphasize the pulse-like, one-way nature of messages, as opposed to Call/Return). Functions just don’t cut it for expressing this kind of thing.
Structured mevent-sending. Reducing strongly-coupled walls of interconnections down to encapsulated islands of interconnections between simpler units of software. Structuring based on the hierarchy-like concepts of ORG charts in business. Up/down mevents only. No sideways mevent-sending. No skipping over subtrees. Mevents travel strictly upwards or downwards, with no “going over the boss’ head”, and, no “micromanagement”. The “GOTO Considered Harmful” equivalent of message-passing. (See the sketch below.)
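A sketch of structured mevent-sending in Javascript, under the rules above: one-way, queued messages that travel only between a part and its direct parent or direct children - never sideways, never skipping levels. All names here (Part, send, dispatch) are invented for illustration; this is not a real library.

```javascript
class Part {
  constructor(name, handler) {
    this.name = name;
    this.handler = handler;   // reacts to a mevent; returns nothing
    this.parent = null;
    this.children = [];
    this.inbox = [];          // each part owns a simple queue
  }
  add(child) { child.parent = this; this.children.push(child); return this; }
}

// Delivery is legal only between a part and its direct parent or
// a direct child - the "GOTO Considered Harmful" rule for messages.
function send(from, to, mevent) {
  const legal = to === from.parent || from.children.includes(to);
  if (!legal) throw new Error(`${from.name} may not reach ${to.name}`);
  to.inbox.push({ from: from.name, mevent }); // one-way: no return value
}

function dispatch(part) {                     // drain one part's queue
  while (part.inbox.length > 0) {
    const { from, mevent } = part.inbox.shift();
    part.handler(part, from, mevent);
  }
  part.children.forEach(dispatch);
}

const top = new Part('top', (self, from, m) => {
  console.log(`top saw "${m}" from ${from}`);
});
const a = new Part('a', (self, from, m) => send(self, self.parent, m + '!'));
top.add(a);

send(top, a, 'ping');  // downward: parent to child - legal
dispatch(top);         // a reacts by sending "ping!" upward
dispatch(top);         // top prints: top saw "ping!" from a
```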
Move away from the concepts of Gutenberg’s settable type.
Drawings made in SVG, graphML, etc. can be reduced to text. Text can be parsed and rewritten. Something like t2t[15] can be used for this, or, just raw code.
Application of the Scientific Method, especially including theory testing. Elaboration of success factors, such as footprint size.
Programmers no longer need to build compilers. Enough compilers already exist. Text-to-text transpilation[15] can be used to convert new notations into text that can be compiled to low-level assembler code, like Alan Kay suggested decades ago[16]. LLMs, such as ChatGPT and Claude, can be used to transpile new textual languages into existing textual languages, too.
Bibliography
[1] The original Pong video game had no code and was built using hardware circuitry. The original schematics from Atari: https://www.reddit.com/r/EngineeringPorn/comments/ul49zt/the_original_pong_video_game_had_no_code_and_was/
[2] CS50 Pong in Lua.
[3] OhmJS from https://ohmjs.org
[4] Statecharts from https://guitarvydas.github.io/2023/11/27/Statecharts-Papers-We-Love-Video.html
[5] S/SL (Syntax Semantic Language) from https://research.cs.queensu.ca/home/cordy/pub/downloads/ssl/
[6] GraphML from http://graphml.graphdrawing.org
[7] Sector Lisp from https://justine.lol/sectorlisp2/
[8] BLC Binary Lambda Calculus from https://justine.lol/lambda/
[9] SWIPL from https://www.swi-prolog.org
[10] Ceptre Dungeon Crawler Example Walk-Through from https://guitarvydas.github.io/2024/01/19/Ceptre-Dungeon-Crawler-Example-Walk-Through.html
[11] Chris Martens. Ceptre: A Language for Modeling Generative Interactive Systems from https://www.cs.cmu.edu/~cmartens/ceptre.pdf
[12] Nova from https://forum.nova-lang.net
[13] Democratizing Software, at about 1:24:00.
[14] Greenspun’s Tenth Rule from https://en.wikipedia.org/wiki/Greenspun's_tenth_rule
[15] t2t from https://github.com/guitarvydas/t2t
[16] “In a 'real' Computer Science, the best languages of an era should serve as 'assembly code' for the next generation of expression” (Alan Kay, 31:50).
See Also
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Footnotes
1. Note that “{...}” is just a synonym for “enclosing rectangle” written in ASCII. The open nature of the brackets has led to countless hours of hand-wringing to deal with concepts like “global variables”. Concepts like “global variables” are not disallowed by ASCII bracket notation, but, are discouraged by closed figures on diagrams (programmers learn not to colour outside of the lines in kindergarten). Rhetorical question: would programmers have written programs using “global variables” if they had used drawings with closed figures instead of using ASCII?
2. The correct term in this case is, actually, CST - for Concrete Syntax Tree. But, I quibble. Modern programming lore tends to conflate the concepts of ASTs and CSTs. CSTs are but culled ASTs. ASTs represent what is possible, while CSTs represent what is actually there, hence, “A” for abstract and “C” for concrete.
3. The “obvious” answer is to learn OhmJS and to discard REGEX.
4. I was taught that state machines were nice, but, useless because of the “state explosion” problem [Harel actually fixed that, with Statecharts]. I think that we’re seeing something like “state explosion” in software due to the over-use of type-checking. Or, maybe this is just a failure to optimize away all artifacts of the use of type-checking? Either way, something smells bad.
5. Often re-branded as “features”.