You Already Know PBP
2026-05-13
TL;DR
A Unix pipeline is a PBP Container. You just never generalized it.
L;R
When you write ls | grep foo | wc -l, you are doing something remarkable without thinking about it. Three programs run concurrently. None of them knows what it is connected to. grep does not call wc. It writes to stdout and forgets. The wiring — who feeds whom — is declared outside the programs, on the command line, by you.
That is PBP. Parts Based Programming.
Each command in the pipeline is a Part. It has an input port (stdin) and an output port (stdout) and no knowledge of the pipeline it inhabits. The pipeline itself is a Container — it knows the topology; its Parts do not. A Part can itself be a shell script that contains another pipeline, making it a Container in turn. The hierarchy composes.
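The analogy can be made concrete in a few lines of Python. This is an illustrative sketch, not the PBP kernel: each Part below reads from an input queue and writes to an output queue it was handed, never naming its neighbor. Only wire_pipeline, playing the Container, knows the topology.

```python
from queue import Queue

# Sketch of `ls | grep foo | wc -l` as Parts and a Container.
# Each Part sees only its own ports (queues); none knows its neighbors.

def producer(out):
    # Stands in for `ls`: emits a few names, then an end-of-stream marker.
    for name in ["foo.txt", "bar.txt", "foobar.txt"]:
        out.put(name)
    out.put(None)

def grep(pattern, inp, out):
    # Stands in for `grep foo`: forwards matching lines only.
    while (line := inp.get()) is not None:
        if pattern in line:
            out.put(line)
    out.put(None)

def count(inp, results):
    # Stands in for `wc -l`: counts lines until end-of-stream.
    n = 0
    while inp.get() is not None:
        n += 1
    results.append(n)

def wire_pipeline():
    # The Container: topology is declared here, outside the Parts.
    a, b = Queue(), Queue()   # the "pipes"
    results = []
    producer(a)               # ls
    grep("foo", a, b)         # grep foo
    count(b, results)         # wc -l
    return results[0]

print(wire_pipeline())  # → 2
```

Swapping grep for another filter means changing one line in wire_pipeline; no Part is touched, just as no program in a shell pipeline is edited when you rewire the pipe.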
A sequential pipeline is one kind of Container. PBP Containers are more general — the wiring inside can fan out, loop back, or converge, not just chain. But the Unix pipeline is enough to show you the idea: topology declared externally, components oblivious to their neighbors, communication by message rather than by call.
Where Pipes Stop
Unix pipes prove the concept but the syntax freezes your thinking. Shell scripts can call other shell scripts. File descriptors can serve as ports. The composition is there if you work at it. But the text syntax nudges you toward chains — one program feeding the next, left to right — because that is what the | character expresses naturally. Fan-out, fan-in, and feedback are possible but awkward to write and hard to see.
PBP uses diagrammatic syntax. Fan-out, feedback, and arbitrary topologies are drawn, not written. The diagram makes the structure visible and natural to think about, not a clever trick you have to construct from text primitives.
The second difference is what gets passed. Pipes carry bytes. PBP passes mevents — two-part messages consisting of a tag and an unstructured byte payload. Tag is port name; payload is data. This is a key/value pair, and the key is what makes multiple named input and output ports thinkable. You stop reasoning about a stream and start reasoning about discrete, labelled messages arriving on specific ports. That shift is what lets you break out of the chain-of-processes paradigm entirely.
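As a sketch of the idea (the names here are illustrative, not the PBP API), a mevent can be modeled as a tag plus a byte payload, with a Part dispatching on the tag alone, never on who sent the message:

```python
from collections import namedtuple

# A mevent as described in the text: a tag (port name) and an
# unstructured byte payload. The type name is our invention.
Mevent = namedtuple("Mevent", ["tag", "payload"])

def handle(part_state, mevent):
    # The Part dispatches on the tag, not on the identity of the sender.
    if mevent.tag == "in":
        part_state["seen"].append(mevent.payload)
    elif mevent.tag == "reset":
        part_state["seen"].clear()

state = {"seen": []}
handle(state, Mevent("in", b"hello"))
handle(state, Mevent("in", b"world"))
print(state["seen"])  # [b'hello', b'world']
```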
The one-input-queue-per-Part rule follows from this. Each Part receives mevents in arrival order on a single queue. Mevent ordering becomes something you can reason about explicitly, rather than something you manage through ad hoc state variables scattered across your code.
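A toy illustration of that rule, assuming nothing beyond Python's standard library: mevents from different upstream neighbors land on the same inbox and are handled strictly in arrival order, so ordering is visible in one place rather than scattered across state variables.

```python
from queue import Queue

# One input queue per Part: every upstream neighbor feeds the same
# inbox, and the Part drains it in arrival order.
inbox = Queue()
inbox.put(("a", b"1"))  # from one neighbor
inbox.put(("b", b"2"))  # from another neighbor
inbox.put(("a", b"3"))  # first neighbor again

order = []
while not inbox.empty():
    tag, payload = inbox.get()
    order.append(tag)

print(order)  # ['a', 'b', 'a']
```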
The third difference is weight. Unix pipes are OS-level — you pay for process creation, scheduling, and kernel context switches. PBP’s process model is defined simply enough that the entire kernel can be implemented in a small amount of Python. No operating system required.
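To make the weight claim concrete, here is a toy kernel sketch in plain Python. The names (Part, Kernel, connect) are invented for illustration and should not be mistaken for the real PBP kernel's API. A routing table maps each (part, output port) pair to a list of destinations, so fan-out is simply an entry with more than one destination, and no OS processes are involved:

```python
from collections import deque

class Part:
    """A named component with a handler and a single inbox (deque)."""
    def __init__(self, name, handler):
        self.name, self.handler, self.inbox = name, handler, deque()

class Kernel:
    """Routes mevents between Parts according to a wiring table."""
    def __init__(self):
        self.parts, self.routes = {}, {}

    def add(self, part):
        self.parts[part.name] = part

    def connect(self, src, out_port, dst, in_port):
        # Wiring is declared here, outside any Part.
        self.routes.setdefault((src, out_port), []).append((dst, in_port))

    def send(self, src, out_port, payload):
        # Deliver to every destination wired to this output port.
        for dst, in_port in self.routes.get((src, out_port), []):
            self.parts[dst].inbox.append((in_port, payload))

    def run(self):
        # Dispatch until every inbox is drained.
        busy = True
        while busy:
            busy = False
            for part in self.parts.values():
                if part.inbox:
                    busy = True
                    tag, payload = part.inbox.popleft()
                    part.handler(self, part, tag, payload)

log = []

def double(k, part, tag, payload):
    k.send(part.name, "out", payload * 2)

def record(k, part, tag, payload):
    log.append((part.name, payload))

k = Kernel()
k.add(Part("doubler", double))
k.add(Part("sink1", record))
k.add(Part("sink2", record))
k.connect("doubler", "out", "sink1", "in")  # fan-out: one output,
k.connect("doubler", "out", "sink2", "in")  # two destinations
k.parts["doubler"].inbox.append(("in", 21))
k.run()
print(log)  # [('sink1', 42), ('sink2', 42)]
```

The doubler never learns it has two listeners; the second connect line is the whole cost of the fan-out.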
Why Not Just Use CALL/RETURN
The reason software does not work this way by default is CALL/RETURN. Functions call functions, wait for results, and return. That model made sense on a single CPU in 1970. It makes less sense when your components are threads, processes, or machines on a network. CALL/RETURN forces synchrony onto a world that is not synchronous.
Unix got this right in 1973 and then software forgot the lesson.
References
PBP Cookbook, beginning with a simple Hello World demo and installation of the tools and kernel
PBP tools and kernel development: see kernel/kernel.drawio and kernel/*.rt for kernel sources (use drawio to read/edit kernel.drawio); collaborators welcome
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html
Paid subscriptions are a voluntary way to support this work.
