Encapsulation Is Not Enough
2026-05-03
Everyone knows about encapsulation. Hide the data. Expose an interface. Don’t let outsiders touch your internals.
It sounds like isolation. It isn’t.
What Encapsulation Actually Does
Encapsulation hides data. It says nothing about control flow.
When module A calls a function in module B, A’s data is hidden from B and B’s data is hidden from A. Good. But A is now suspended, waiting for B to finish. A cannot proceed. A cannot time out. A cannot do anything else. A is frozen until B returns.
That is not isolation. That is a rendezvous. Both parties must be present simultaneously. The caller blocks. The callee runs. The caller resumes. They are coupled in time, not just in data.
The call stack is a data structure that encodes this coupling explicitly. Every frame on the stack is a promise: “I am waiting for you.” The deeper the stack, the more promises, the more coupling.
Encapsulation hides what is inside a module. The call stack exposes everything between modules. Anyone who can read the stack trace can see the entire dependency chain. It is the opposite of isolation.
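A quick way to see this, sketched in Python (the module names are illustrative): any function deep in a chain of calls can read the stack and recover every caller above it.

```python
import traceback

def module_c():
    # Inspect the live call stack: every frame above us is visible,
    # exposing the entire dependency chain to anyone who looks.
    return [frame.name for frame in traceback.extract_stack()]

def module_b():
    return module_c()

def module_a():
    return module_b()

chain = module_a()
print(chain)  # ends with 'module_a', 'module_b', 'module_c'
```

Nothing about encapsulation prevents this: the data inside each function is hidden, but the chain of promises between them is fully public.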
Why CALL/RETURN Cannot Provide Isolation
CALL/RETURN is a synchronous rendezvous protocol. That is not an accident or a limitation — it is the definition. The entire point of CALL/RETURN is that the caller gets a result back, at a specific moment, from a specific callee. The coupling is not a bug. It is the feature.
This means that no matter how carefully you design your functions, no matter how clean your interfaces, no matter how disciplined your team:
If your notation is based on function calls, your system is synchronous by construction.
You can layer asynchrony on top. Callbacks. Promises. Async/await. Coroutines. Actors. These are all attempts to recover asynchrony after having discarded it by choosing functions as the primitive. Each layer adds complexity. Each layer leaks. Async/await “colours” every function — the synchronous/asynchronous distinction propagates transitively through the call graph until it infects everything.
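The colouring problem is easy to demonstrate in Python's async/await (function names here are made up for illustration): a synchronous function cannot simply call an asynchronous one and get its value.

```python
import asyncio

async def fetch():                 # an "async-coloured" function
    await asyncio.sleep(0)
    return 42

def report():                      # an ordinary "sync-coloured" function
    # Calling an async function from sync code does not run it --
    # it just hands back an unawaited coroutine object.
    coro = fetch()
    assert asyncio.iscoroutine(coro)
    coro.close()                   # silence the "never awaited" warning
    # To actually get the value, report() must either become async
    # itself (colouring its own callers in turn) or spin up an event loop:
    return asyncio.run(fetch())

print(report())  # 42
```

Either escape hatch propagates the colouring further up the call graph — exactly the transitive infection described above.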
You are fighting the notation. This is accidental complexity. The underlying problem is not as complex as the notation makes it seem.
What Isolation Actually Requires
True isolation means:
1. A unit of code runs when it is ready, not when its caller is ready.
2. A unit of code produces output when it finishes, not when its caller expects it.
3. Neither party holds a reference to the other’s execution state.
In a function call, condition 3 is violated by definition. The caller holds a return address. The callee holds a reference to the caller’s frame. The call stack is a shared data structure — not by accident, but by design. It is the mechanism that makes return possible.
True isolation requires a different primitive — not as a replacement for CALL/RETURN, but as a peer. Functions remain the right tool for pure computation. For coordination between independent components, something else is needed. Not a call. A message. The sender deposits a message and continues. The receiver processes the message when scheduled. No shared stack. No return address. No suspension. No coupling.
The difference is not cosmetic. It changes what you can easily reason about.
With functions, the question “can A and B deadlock?” requires examining the entire dynamic call graph — every possible path through every possible call chain. The answer is a global property of the system.
With isolated message-passing, the question “can A and B deadlock?” becomes: “is there a cycle in the wiring diagram?” The answer is a local, static, visible property. You can draw it. You can see it. You can reason about it before running the program.
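Because the wiring diagram is static data, the check really is mechanical. A sketch in Python, with hypothetical component names, using ordinary depth-first search:

```python
# A wiring diagram as an adjacency map: component -> components it
# sends messages to. The back-edge from "executor" creates a cycle.
wiring = {
    "parser":   ["planner"],
    "planner":  ["executor"],
    "executor": ["parser"],
}

def has_cycle(graph):
    # Classic DFS colouring: GREY means "on the current path",
    # so reaching a GREY node again means we found a cycle.
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GREY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GREY:
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

print(has_cycle(wiring))  # True
```

This runs before the program does, on the diagram alone — no dynamic call graph required.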
Language Affects Thought
There is a deeper problem.
If your only notation is functions, you will think in functions. Everything becomes a function. Every problem gets cast as input → output → return. Synchronous. Single-threaded. Sequential.
This is not a failure of imagination. It is a failure of notation. Notation shapes thought. A language that has only functions cannot express — or even suggest — concurrent, asynchronous, reactive behaviour without awkward encodings that fight the grain of the language.
The result is that whole classes of problems become hard to reason about, not because they are inherently hard, but because the notation makes them hard. Concurrent systems. Reactive systems. Event-driven systems. Hardware. Protocols. Anything where multiple things happen at once, or where timing matters, or where components need to remain genuinely independent.
These problems are not rare. They are, arguably, the normal case. The synchronous, single-threaded function call is the special case — a simplification that is convenient for a restricted class of problems and deeply misleading for everything outside that class.
The Way Out
The solution is not to add more layers to the function model. It is to stop using the function model as the universal primitive for programming hardware.
Functions are excellent for pure computation. Given data, produce data, no side effects. For that use case, the synchronous rendezvous is fine — better than fine, it is exactly right. Pure functions compose cleanly, test trivially, and require no GC machinery beyond a simple bump allocator (as SectorLisp demonstrates: a tiny 436-byte language that needs only some 20 lines of code to do “garbage collection”).
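“Compose cleanly, test trivially” is not a slogan; for pure functions it is a one-liner. A minimal illustration:

```python
# Pure functions need no machinery to combine: composition is itself
# just another pure function.
def compose(f, g):
    return lambda x: f(g(x))

def double(n):
    return n * 2

def increment(n):
    return n + 1

double_then_increment = compose(increment, double)

# Testing is a plain equality check -- no mocks, no setup, no teardown.
assert double_then_increment(5) == 11   # (5 * 2) + 1
```

For this class of problem, CALL/RETURN earns its keep.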
But not everything is pure computation. Coordination — sequencing, scheduling, routing, timing — is not pure computation. It is inherently stateful and temporal. It needs a different notation.
The better approach is to use multiple notations, each laser-focused on what it is good at:
- Pure functions for computation
- Explicit message-passing for coordination
- Something like Forth or a state machine for sequencing
- Diagrams for wiring components together
Not a single language that tries to do everything. Not one paradigm wearing all the hats. A collection of small, precise notations, each used only where it fits, composed at well-defined boundaries.
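To make the “state machine for sequencing” notation concrete, here is a sketch in Python (the states and events are invented for illustration): the notation is a transition table, not a web of nested calls.

```python
# Sequencing expressed as data: (current state, event) -> next state.
transitions = {
    ("idle",    "start"): "running",
    ("running", "pause"): "paused",
    ("paused",  "start"): "running",
    ("running", "stop"):  "idle",
}

def step(state, event):
    # Unknown events leave the state unchanged.
    return transitions.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # "idle"
```

The whole behaviour fits in a table you can read at a glance — precisely the kind of small, precise notation the list above calls for.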
The boundary between them is the key. At that boundary, data crosses but control flow does not. The sender is finished before the receiver starts. The stack does not span the boundary. The rendezvous does not happen.
That boundary is what isolation actually means.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paultarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html


This post reminds me of PLCs (programmable logic controllers). The first PLC was invented in 1969 to replace the relay-based control panels in metalworking factories. The success of the PLC was driven by its first programming language, LADDER LOGIC: a graphical language which resembles hard-wired relay circuits. Ladder logic is natively equipped with concurrency features (even though PLCs are usually single-CPU machines). Electrical technicians are able to program a PLC without learning difficult and obscure textual languages.