Architectural Portability
2026-03-11
TL;DR
The most re-usable architecture is the smallest one.
Write a problem-specific notation that emits code in several languages. It doesn’t need to be Turing complete. It just needs to capture the problem structure. You now own the semantic level above any particular target language.
Going from tight to loose is easy. The other direction is the disassembler problem — semantic information has been dropped and you have to re-infer it. Keep the high-level description alive. Emit from it.
Working code is working code, whether written by a human, a generator, or an LLM. Test it either way. When you take code from a repo, freeze it — copy the source, cut the live dependency. Start from repos. Don’t end there.
You can get through alpha several ways: type-checked languages, LLM-generated code, or a REPL-based dynamic language that keeps the program alive while you’re writing it. Programming languages are stone-age IDEs. The REPL is a fundamentally different relationship with the running system. Use it.
Compilation is an optimization, not a methodology. It belongs after beta. And compiled code needs to be tested again before production — it’s a different artifact from what you tested. Develop dynamically → beta test → compile → gamma test → ship.
Type safety is not the finish line. Shipping software that does what users need is.
L;R
The most re-usable architecture is the smallest one.
This sounds obvious until you watch someone spend six months building a framework that can do everything, and then spend the next six months explaining why it can’t quite do this particular thing.
Small and simple isn’t a concession. It’s the goal. A small architecture fits in your head. It composes. It gets out of the way. And critically — you can pick it up and drop it into a new context without dragging half your previous project along for the ride.
The Meta-Language Move
Here’s a technique that pays off disproportionately: instead of writing your solution directly in one programming language, write a problem-specific notation that emits code in several languages.
A word on the “new language is confusing” objection. If you write your logic in a standard GPL — Python, Rust, whatever — a maintainer (possibly yourself, six months from now) still has to decipher what you intended. That work doesn’t disappear just because the language is familiar. And if you write it in a language from Mars, they still have to learn the syntax. But syntax is the easy part. The hard part is always the same: understanding the architecture, the Design Intent (the DI).
Current GPLs are not much better than assembler in this regard. They specify the details of operation in ways so excruciatingly detailed that only a machine could love them. That’s the opposite of what a high-level language is supposed to do. The point of abstraction is to summarize what — the DI — not to drown you in how.
If the language from Mars describes your DI in human terms better than the equivalent GPL code, the maintainer pays a small one-time cost to learn the Martian syntax, then spends less time total figuring out what the software is actually doing. The how — the excruciating details — should remain available. But it shouldn’t be the only way in, or the first way in.
Your notation doesn’t need to be Turing complete. It doesn’t need a type system, a module system, or a five-year standardization process. It just needs to be good enough to capture the problem structure and emit reasonable code in Python, JavaScript, whatever you need this week.
This is not a new idea — it’s the idea behind every successful DSL, every template engine, every macro system worth using. What’s new is that the tooling to build these things has gotten embarrassingly cheap. OhmJS, PEG parsers, even a few hundred lines of Python with regex — you can have a working transpiler in an afternoon.
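To make the afternoon-transpiler claim concrete, here is a minimal sketch in Python: a toy notation for state-machine transitions, parsed with plain string handling and emitted to both Python and JavaScript. The notation, rule format, and emitted shapes are all invented for illustration; a real project would likely use a proper PEG or OhmJS grammar.

```python
# Toy problem-specific notation: one transition rule per line,
# "<from> -> <to> : <event>". Invented for illustration only.
SPEC = """
idle    -> running : start
running -> idle    : stop
running -> error   : fail
"""

def parse(spec):
    """Turn the notation into (source, destination, event) tuples."""
    rules = []
    for line in spec.strip().splitlines():
        left, event = line.split(":")
        src, dst = left.split("->")
        rules.append((src.strip(), dst.strip(), event.strip()))
    return rules

def emit_python(rules):
    """Emit a Python transition table from the same problem description."""
    lines = ["TRANSITIONS = {"]
    for src, dst, event in rules:
        lines.append(f"    ({src!r}, {event!r}): {dst!r},")
    lines.append("}")
    return "\n".join(lines)

def emit_javascript(rules):
    """Emit a JavaScript transition table from the same description."""
    lines = ["const TRANSITIONS = {"]
    for src, dst, event in rules:
        lines.append(f'  "{src}|{event}": "{dst}",')
    lines.append("};")
    return "\n".join(lines)

rules = parse(SPEC)
print(emit_python(rules))
print(emit_javascript(rules))
```

Adding a third target next week means adding one more `emit_*` function. The problem description — the SPEC — never changes.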
The payoff: you now own the semantic level above the target language. Want to emit Rust next year instead of C? The translation lives in one place. Your problem description hasn’t changed.
Tighter to Looser, Not the Other Way
There’s an asymmetry here that’s worth naming. Going from a tighter, more constrained language to a looser one is easy. Going the other direction is hard — it’s the disassembler problem.
When you compile down, semantic information evaporates. Variable names become offsets. Intent becomes instruction sequences. Structure becomes flat. And if you later need to go back up — to recover that intent, to work at the problem level again — you have to re-infer what was thrown away. That inference is never complete. You’re always guessing.
This is why architecture matters at the source level. Don’t throw away the structure prematurely. Keep the high-level description alive. Emit from it, don’t abandon it.
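A tiny illustration of that information loss, using an invented lowering pass: variable names survive in the high-level expression and vanish in the lowered form, which is exactly what a decompiler cannot undo.

```python
# Sketch of semantic loss under lowering. The expression format and
# the stack-machine output are invented for illustration.

def lower(expr, env):
    """Lower a named expression like ("add", "price", "tax")
    into stack ops that refer to variables only by slot number."""
    op, a, b = expr
    return [("load", env[a]), ("load", env[b]), (op,)]

env = {"price": 0, "tax": 1}
lowered = lower(("add", "price", "tax"), env)
print(lowered)
# The lowered form no longer knows that slot 0 was "price".
# A disassembler can only invent a plausible name for it.
```

Going down is a mechanical table lookup. Going back up is inference, and the inference is never complete.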
Should You Let an LLM Write the Code?
Sure. Why not?
Working code is working code. It doesn’t matter whether a human typed it, a generator emitted it, or an LLM hallucinated it into existence on the third attempt. What matters is whether it passes the tests.
The anxiety about LLM-generated code usually confuses authorship with correctness. These are orthogonal. You’ve always needed to test code written by other humans. You test code written by yourself. Testing LLM output is the same discipline, applied to a new source.
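The discipline looks the same in code. Here `slugify` is a hypothetical example function; pretend it arrived from an LLM, a generator, or a colleague — the assertions don’t care which.

```python
import re

def slugify(title):
    """Pretend this function arrived from an LLM on the third attempt."""
    s = title.strip().lower()
    s = re.sub(r"[^a-z0-9]+", "-", s).strip("-")
    return s

def test_slugify():
    # The same black-box assertions apply regardless of authorship.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already-clean  ") == "already-clean"
    assert slugify("") == ""

test_slugify()
print("passes")
```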
One practical note: when you take code from a repo — whether written by a human or generated — freeze it. Copy the source. Don’t maintain a live dependency on something you don’t control. Debug it until it works, then protect it from future upstream changes. The goal is a product that works, not a product that works today but silently breaks when some package maintainer makes a different choice next month.
Start from repos. Don’t end there.
What LLMs Actually Reveal About Type Checking
LLMs have made something visible that was always true but easy to ignore: you can produce working software without type-checking your way to apparent correctness.
This doesn’t mean type checking is useless. It means type checking is one tool, not the destination.
The actual goal is getting to beta testing as fast as possible. Beta is where real feedback lives. Alpha is where you’re still arguing with yourself about the shape of the problem.
There are several reasonable paths through alpha.
You can use a tight, strongly-typed language and let the type checker catch structural mistakes early. Or you can use LLM-generated code and move fast on the assumption that tests will catch mistakes instead. Or — and this one gets underrated — you can use a REPL-based dynamic language and develop interactively, tightening the loop between idea and running code to almost nothing. This is what “RAD” actually means when it’s working.
Programming languages, when you squint at them, are stone-age IDEs. They were designed in an era when you submitted a deck of cards and came back the next morning. Dynamic languages broke from that model: the program is alive while you’re writing it. You poke it. It answers. You adjust. The REPL isn’t a toy feature — it’s a fundamentally different relationship between the programmer and the running system.
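A rough sketch of that relationship, compressed into a script. In a real REPL or live image you would be poking an actual running server; here a dispatch dict stands in for the running system’s state, and the names are invented.

```python
# The program stays alive while you replace pieces of it.

def handler(msg):
    return f"v1: {msg}"

def serve(dispatch, msg):
    # Stand-in for a running system that looks up behavior at call time.
    return dispatch["handler"](msg)

dispatch = {"handler": handler}
print(serve(dispatch, "ping"))    # behavior before the edit

def handler_v2(msg):              # "typed at the REPL" moments later
    return f"v2: {msg}"

dispatch["handler"] = handler_v2  # swap the code without restarting
print(serve(dispatch, "ping"))    # behavior after the edit
```

No restart, no recompile, no lost state. That is the loop a dynamic language gives you and an edit-compile-run cycle does not.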
AOT-compiled languages trade that interactivity away. Which is fine, eventually. But compilation is an optimization, not a development methodology. It belongs at the end of the process, not woven into every iteration. You compile after beta. Not before.
Compiled code also needs to be tested again before it goes to production. The compilation step is a transformation. Transformations can introduce problems — optimizer bugs, platform differences, linking surprises. If beta was your dynamic version, you haven’t actually tested the compiled artifact. Call it gamma testing if you want a name for it. The point is: the compiled binary is a different thing from the program you tested, and treating it as automatically identical is wishful thinking.
The sequence should be: develop dynamically → beta test → compile → gamma test → ship. Not: refuse to run the program until the types check out.
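In practice, gamma testing means running the same acceptance suite against the new artifact. A minimal sketch, with placeholder implementations: in a real project `compiled_impl` would wrap your compiled extension or binary.

```python
def acceptance_suite(impl):
    """One suite, applied to whatever artifact you are about to ship."""
    assert impl(2, 3) == 5
    assert impl(-1, 1) == 0

def dynamic_impl(a, b):
    # Beta: the dynamic version you actually developed against.
    return a + b

def compiled_impl(a, b):
    # Gamma: stand-in for the compiled artifact, tested separately
    # because compilation produced a different thing.
    return a + b

acceptance_suite(dynamic_impl)   # beta test
acceptance_suite(compiled_impl)  # gamma test: same suite, new artifact
print("both artifacts pass")
```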
What’s not reasonable is treating type safety as the end-state — the proof that the software is done. Software is done when it does what users need it to do. That’s a test. Go run it.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html

