Types Are Unnecessary
2025-03-08
Types and type-checking are helpful, but not actually necessary.
The end result of programming is: assembler [1].
Typeless assembler.
Non-programmers pay for applications and receive blobs of assembler. They don’t care what technology was used to create that assembler.
CPUs execute scripts of typeless opcodes. CPUs don’t care how those scripts were created.
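To see the point concretely: a bit pattern in memory has no inherent type, only the interpretation applied to it gives it meaning. A minimal Python sketch of this, using the standard struct module (the constant 0x40490FDB happens to be the IEEE-754 single-precision encoding of pi):

```python
import struct

# Four raw bytes, written out as an unsigned 32-bit integer bit pattern.
raw = struct.pack("<I", 0x40490FDB)

# The CPU (and memory) attach no type to those bytes; we choose one.
as_int = struct.unpack("<I", raw)[0]    # read as a 32-bit unsigned integer
as_float = struct.unpack("<f", raw)[0]  # read as an IEEE-754 float

print(hex(as_int))   # 0x40490fdb
print(as_float)      # pi, to single precision (~3.1415927)
```

The "type" lives entirely in the unpack format string, i.e. in the reader's head, not in the bytes.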
Program developers do care. Type-checking alerts developers when the programs they are developing contain blunders. Thinking in terms of types helps at least some developers to think about their designs.
Back when programs fit completely in a single window, e.g. with early BASIC programs, types weren’t an issue. The BASIC programs could contain global variables and it didn’t matter. The BASIC programs sometimes didn’t even use scoped variables or function parameters, and it didn’t matter.
What mattered was that programmers could see complete programs in one eye-full. They could “reason about” their programs and understand what was going on. They could debug their programs easily, since they could see everything and could guarantee that some code change somewhere else couldn’t change what they were seeing.
Problems arose when programmers tried to build larger systems by making programs larger. They chose to grow programs by simply writing more code, instead of chopping their larger programs up into smaller, understandable LEGO®-like blocks that they could simply snap together. Programmers were fooled into thinking that they were using LEGO® software blocks based on CPU subroutines and function libraries. What fooled them was the hidden fact that those little blocks of code were tightly coupled through the stack-based mechanism of CALL/RETURN and the blocking mindset of mathematical functions mapped onto CPU hardware.
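The coupling shows up in any language that maps functions onto CPU CALL/RETURN: the caller is frozen until the callee returns, and the caller's continuation lives on the stack, nested inside the callee's lifetime. A small Python sketch of the blocking behaviour (slow_lookup is a hypothetical stand-in for any long-running callee):

```python
import time

def slow_lookup(key):
    # Stand-in for any long-running callee (disk, network, computation).
    time.sleep(0.1)
    return key.upper()

def caller():
    start = time.monotonic()
    result = slow_lookup("x")  # caller is stuck here until slow_lookup RETURNs
    waited = time.monotonic() - start
    # During `waited` seconds the caller could do nothing else; its fate
    # was coupled to the callee's completion via the shared stack.
    return result, waited

value, waited = caller()
print(value, waited >= 0.1)
```

The two "blocks" look separate on the page, but the stack welds them together at run time, which is exactly the hidden coupling described above.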
This went unnoticed for many decades, since most programs ran on single CPUs and didn’t need to behave in an asynchronous manner.
When faced with more modern asynchronous problems, programmers try to force-fit these incompatible concepts into their already-existing techniques. The popular technique of using functions is unsuitable, but it gives a warm-and-fuzzy feeling and allows programmers to think that they are “standing on the shoulders of giants”. It would be more appropriate to go back to the drawing board and re-think how to address asynchronous problems in a less clumsy manner, instead of injecting foreign concepts into the synchronous, functional paradigm.
Programmers now know many more techniques for programming than were apparent in the early days. Programmers can use these techniques, without throwing them away, to build new ways to conveniently address this newer class of problems. In the early days, the “giants” used piles of transistors to build something new. In modern days, programmers can do the same, by using what they’ve got - garbage collection, 1st-class functions, queuing classes, etc. - to build something new without injecting new concepts into the functional paradigm. The new problems include: how to remove all traces of dependencies (data and control flow) from programs, and how to chop programs up into small, WYSIWYG layers that no longer need elaborate forms of type-checking.

An example of anti-WYSIWYG is the concept of method overriding in class-based inheritance. Looking at a top-level class doesn’t tell the whole story, because methods may turn out to have different meanings at lower levels, i.e. classes don’t obey the principle of “locality of reference” [see Parental Authority]. Classes are born of a type-checking mentality. Classes seem to be a good way to describe certain kinds of data, like graphics hierarchies, but class-based inheritance is not an appropriate way to describe control-flow hierarchies. It is impossible to have true software LEGO® blocks until source code fully obeys WYSIWYG principles.
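One way to sketch such a snap-together block, using only what mainstream languages already provide (1st-class functions and queuing classes), is below. This is an illustrative sketch, not any particular library: the names Component, connect, and step are invented here. Each block owns an input queue; senders drop messages and move on, so no block's fate is welded to another's via CALL/RETURN:

```python
from queue import Queue

class Component:
    """A self-contained block: it owns an input queue and forwards
    results to whatever queues it has been wired to. Senders never
    block waiting for a receiver to finish its work."""
    def __init__(self, transform):
        self.inbox = Queue()
        self.outputs = []
        self.transform = transform

    def connect(self, out_queue):
        # Wiring is external to the block; the block names no peers.
        self.outputs.append(out_queue)

    def step(self):
        # Process one queued message, if any, and forward the result.
        if not self.inbox.empty():
            msg = self.transform(self.inbox.get())
            for out in self.outputs:
                out.put(msg)

# Snap two blocks into a pipeline: double, then add one.
double = Component(lambda x: x * 2)
add_one = Component(lambda x: x + 1)
sink = Queue()

double.connect(add_one.inbox)
add_one.connect(sink)

double.inbox.put(10)
double.step()      # consumes 10, sends 20 onward
add_one.step()     # consumes 20, sends 21 onward

result = sink.get()
print(result)      # 21
```

Note that neither block refers to the other by name - all dependencies (data and control flow) are pushed out into the external wiring, which is the kind of decoupling the paragraph above is after.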
See Also
Email: ptcomputingsimplicity@gmail.com
References: https://guitarvydas.github.io/2024/01/06/References.html
Blog: guitarvydas.github.io
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Gumroad: tarvydas.gumroad.com
Twitter: @paul_tarvydas
Substack: paultarvydas.substack.com
[1] Modern compilers convert the source code directly into binary. Previously, compilers converted the source code into assembler, and an assembler program then converted that assembler into binary. Non-programmers actually receive binaries. For clarity, I skipped over this detail in this article.

