Flow
2026-04-12
TL;DR
Switching between thought streams wastes brain power. It is the cognitive equivalent of OS thrashing. Keeping your mind locked on one problem — “flow” — is a requirement for deep thought, for continuity, and for self-consistency of design. Breaking flow wastes time in thrashing, or, worse, makes you lose the thread of where the thought was leading.
The Core Dump Era
When programmers debugged in assembler, they examined core dumps.
A core dump is a raw snapshot of memory at the moment a program crashed. Finding the bug meant reading hexadecimal numbers and mapping them, manually, to program state. This was not just slow — it usually meant a full interruption. You had to hold the entire machine state in your head, reconstruct what the program was trying to do, and reason backwards to the fault.
Concentration was not optional. A single distraction usually meant starting over.
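To make the decoding burden concrete, here is a minimal sketch of what reading a core dump amounted to, using a made-up 16-byte memory fragment and an invented field layout. Nothing in the raw bytes names the fields; the programmer had to carry the layout in their head.

```python
import struct

# A hypothetical 16-byte core fragment: two 32-bit values followed by a
# 64-bit pointer. The layout is invented for illustration -- in a real dump,
# nothing labels these bytes, and getting the offsets wrong means decoding
# garbage without knowing it.
raw = bytes.fromhex("2a000000ffffffff10e0adde00000000")

count, status = struct.unpack_from("<iI", raw, 0)   # signed int, unsigned int
(ptr,) = struct.unpack_from("<Q", raw, 8)           # 64-bit pointer value

print(f"count={count} status={status:#x} ptr={ptr:#x}")
```

Every value above had to be reconstructed by hand, in hex, while also remembering what the program was supposed to be doing. That is the interruption the rest of this essay is about.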
What IDEs Actually Did
We invented debuggers and IDEs. The common narrative is that they let us inspect program state more easily. That is true but incomplete.
What they really did was maintain flow.
Before a debugger, the feedback loop was: write code → compile → run → crash → decode dump → hypothesize → repeat. The core dump step was an enormous cognitive break. You left the problem domain entirely and entered a forensic domain. It was not that you entered a different mode of thought — it was that you lost continuity of the main mode. The thread you were holding snapped.
A debugger collapsed that loop. You stayed in the problem. You could watch values change in real time, set breakpoints near your hypothesis, confirm or refute quickly. The interruption was eliminated, or at least drastically reduced.
This is why IDEs let us build bigger programs. Not because they made any individual operation faster. Because they stopped the thrashing.
Thrashing
Operating systems thrash when they spend more time swapping pages than executing instructions. The work-to-overhead ratio inverts. The machine gets busier and busier while accomplishing less and less.
The brain does the same thing.
Every time you switch thought streams — from the problem you are solving to a niggly detail that interrupted you, and then back — you pay a re-entry cost. Reloading context. Reconstructing where you were. Remembering which threads matter and which do not. For complex problems, this cost is not seconds. It is minutes. And sometimes the thread is simply gone.
If the interruptions are frequent enough, you never get back to full depth. You skim the surface of each thought stream and switch again before you have accomplished anything meaningful.
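The arithmetic of re-entry cost is worth seeing in miniature. The sketch below uses an assumed 15-minute reload cost per interruption, a common rule-of-thumb figure, not a measured one:

```python
def effective_minutes(total_minutes, interruptions, reload_cost_minutes):
    """Minutes actually spent on the problem after paying re-entry costs.

    Assumes each interruption charges a flat context-reload cost; real
    costs are worse, because sometimes the thread is simply gone.
    """
    return max(0, total_minutes - interruptions * reload_cost_minutes)

# A focused hour versus an hour fragmented by five interruptions,
# assuming a 15-minute reload each time.
print(effective_minutes(60, 0, 15))   # 60 -- the whole hour is work
print(effective_minutes(60, 5, 15))   # 0  -- all overhead, no work
```

The second case is the cognitive equivalent of a machine spending every cycle swapping pages: busier and busier, accomplishing less and less.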
Flow Is Not Mystical
Flow gets discussed as if it were a psychological luxury — something athletes and artists chase but engineers should feel slightly embarrassed about wanting.
That framing is wrong.
Flow is the state in which you are holding the problem, and only the problem, in working memory. It is a necessary condition for solving hard problems. It is not a bonus.
Gallwey explored it in the context of sport; Csikszentmihalyi named it and studied it formally. What all domains have in common is that the problems are complex enough that partial attention produces poor results. A basketball player in flow is not thinking about their feet. A programmer in flow is not thinking about syntax. Both have pushed the mechanics below conscious attention and freed their processing for the actual problem.
The mechanics have to be automated enough — or supported well enough by tooling — that they do not demand attention at all.
Flow is hard to measure. That is precisely why it tends not to be taught in school and tends not to appear in design requirements. But we need to be very cognizant of any feature, workflow, or tool choice that affects it.
What Breaks Flow
The obvious ones first. Notifications interrupt flow. Menus interrupt flow — even simple ones, because choosing requires shifting attention from the problem to the interface. Modal dialogs interrupt flow. Slow builds interrupt flow because they create a gap long enough that you wonder whether to do something else, and sometimes you do.
Saving a file is a small example worth noticing. Apple Notes and Obsidian save documents transparently, without any user action required. Emacs requires you to remember to save. That is an old-fashioned requirement from the 20th century — a demand that you appease the editor before you can trust your own work exists. Every “did I save?” moment is a micro-interruption, a small withdrawal from the concentration budget.
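The transparent-save behaviour is simple to sketch. Below is a toy debounced autosave, not the mechanism any particular editor uses: every edit schedules a save, and rapid edits collapse into one write, so the question "did I save?" never arises.

```python
import threading

class Autosave:
    """Toy debounced autosave: each edit (re)schedules a save after a short
    delay, so a burst of keystrokes produces one write. Illustrative only;
    real editors also handle crash recovery, conflicts, and so on."""

    def __init__(self, save_fn, delay_seconds=1.0):
        self.save_fn = save_fn
        self.delay = delay_seconds
        self._timer = None

    def on_edit(self, content):
        if self._timer is not None:
            self._timer.cancel()              # collapse rapid keystrokes
        self._timer = threading.Timer(self.delay, self.save_fn, args=(content,))
        self._timer.start()
```

The user-facing effect is the point: the save action exists, but it never demands a withdrawal from the concentration budget.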
Two larger ones deserve their own treatment.
Strong typing. Type checkers, even after fifty years of development, are not perfect. They require you to stop and appease the type checker. When the type checker disagrees with you, you are snapped onto a different thought path — one dedicated not to the actual problem, but to satisfying a tool’s static analysis. This is not an argument against type checking. It is an argument for being honest about what it costs.
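A small example of the cost, in Python with standard type annotations (the function names are invented for illustration). The programmer knows from context that the lookup cannot fail, but a static checker cannot know that, so a line gets written for the tool rather than for the problem:

```python
from typing import Optional

def find_user(name: str, users: dict[str, int]) -> Optional[int]:
    return users.get(name)

def greet(name: str, users: dict[str, int]) -> str:
    uid = find_user(name, users)
    # You may know from context that `name` is always present, but a static
    # checker cannot: it flags `uid` as possibly None. This branch exists
    # to satisfy the tool, not to express anything about the problem.
    if uid is None:
        raise KeyError(name)
    return f"user {uid}"

print(greet("alice", {"alice": 7}))   # user 7
```

Sometimes that forced branch catches a real bug; sometimes it is pure ceremony. Either way, writing it is a context switch away from the idea you were holding.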
Changing your mind. As a project evolves, you learn new aspects of it — new nuances, new constraints, new understanding of what the design should be. Changing your mind about the design is easy. Going back and changing existing code, existing tests, existing assumptions, is an interruption. In compiler construction, for instance, if you change your mind about what the parser needs to do, you may need to divert your thoughts to whether the scanner needs to change too. Every dependent piece pulls you away from the idea that prompted the change in the first place.
The Waterfall method, ironically, was an attempt to preserve flow on a single vector — decide everything up front and never revisit it. That preserves a certain kind of continuity but breaks the ability to learn from the work as it develops. What we actually need are tools that make design changes instantaneous — so you can update your thinking without stopping to manage the consequences across a codebase.
Notation Is a Flow Tool
A notation that matches the problem domain keeps you in the problem. A notation that requires translation — from what you are thinking to what the language can express — is a context switch tax on every line.
This is why domain-specific notations matter. Not because general-purpose languages are wrong, but because a notation that does not match your problem forces you to think in two domains simultaneously. That is not flow. That is thrashing.
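A toy illustration of notation matching the domain, using an invented traffic-light example. The state machine is written as data, so the notation says exactly what the problem says: from this state, on this event, go to that state. No translation into nested conditionals is required.

```python
# A state machine written in the problem's own shape. Each entry reads as
# "(current state, event) -> next state", which is how the problem is
# actually stated, rather than how a general-purpose language wants it.
TRANSITIONS = {
    ("red",    "timer"): "green",
    ("green",  "timer"): "yellow",
    ("yellow", "timer"): "red",
}

def step(state, event):
    # Unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "red"
for _ in range(3):
    state = step(state, "timer")
print(state)   # back to "red" after one full cycle
```

The same logic written as an if/elif ladder would be equivalent to the machine but further from the problem; every reader would pay a small translation tax.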
The deeper point is that no single language can satisfy flow concerns across the full range of problems, because every non-trivial problem involves multiple paradigms. What is actually needed is a workflow — or an IDE, or a composition tool — that allows the programmer to work with multiple notations simultaneously: little languages, sketches, diagrams, each appropriate to the aspect of the problem it addresses. The unit of composition is not the language. It is the thought.
The Sports Analogy Is Not a Metaphor
In competitive sports, staying in the zone is a performance variable coaches actively manage. Pre-game rituals, routine warm-ups, elimination of distractions — all of it exists to reach and sustain flow before and during performance.
Programming is no different. The stakes are just less visible, because bugs do not score points for the other team.
If we took flow as seriously in programming as coaches take it in sports — if we designed tools, workspaces, and workflows around preserving concentration — we would build better software. Not incrementally better. Significantly better.
The Practical Consequence
Every tool decision is a flow decision.
Does this tool require me to understand it before I can understand my problem? Does this abstraction require me to think at two levels simultaneously? Does this workflow interrupt me at the moments I am going deepest?
These are not comfort questions. They are engineering questions. The answer affects throughput, defect rate, and the size and complexity of problems we can attempt at all.
The core dump was not just slow. It was a flow killer. The IDE was not just convenient. It was a flow restorer. That is why it changed everything.
Flow Is a Design Requirement
We treat flow as a side effect — something that happens when everything else goes right.
It should be a first-class requirement. Listed alongside correctness, performance, and maintainability. Evaluated deliberately.
When designing a new tool, the question is not only “does it work?” The question is also: does it keep the programmer in the problem, or does it pull them out of it?
Languages. A language that requires you to appease the compiler before you can express your idea is a flow tax. Boilerplate, type annotation ceremonies, incantations that satisfy the toolchain but communicate nothing about the problem — these are mandatory context switches, paid on every line. But further: a single language cannot satisfy flow across the full problem space. The shape of the solution should determine the notation, not the other way around. Flow-preserving tooling makes it possible to compose multiple notations — each a good fit for its subdomain — rather than forcing every thought into a single syntax.
Compilers. Error messages that describe the compiler’s internal confusion instead of the programmer’s likely mistake force a translation step. You decode what the compiler means and map it back to what you did. Python is a clear example — a single exception fills the screen with backtrace details, making you wade through the tool’s internal machinery to find what went wrong in your problem. Every decoding step is a micro-interruption. A compiler designed for flow gives you the error in terms of your problem, not its implementation.
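One way to approach the ideal, sketched in Python with invented names: catch the low-level exception at a boundary and re-raise it in the problem's own vocabulary, so the reader sees their mistake rather than the tool's internals.

```python
# Illustrative sketch: translate implementation-level failures into
# problem-level messages at a domain boundary. Names are invented.
def load_retries(settings: dict) -> int:
    try:
        return int(settings["retries"])
    except KeyError as exc:
        # Without this translation, the user sees a KeyError and a backtrace.
        raise ValueError("config is missing the 'retries' setting") from exc
    except ValueError as exc:
        raise ValueError(
            f"'retries' must be a number, got {settings['retries']!r}"
        ) from exc

print(load_retries({"retries": "3"}))   # 3
```

The `from exc` chaining keeps the forensic detail available for whoever maintains the tool, while the person using it gets an error stated in their own terms.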
IDEs. A “programming language” is, in one light, a simple caveman IDE — an interface for expressing intent as machine code. GPLs are one variant of that interface, not the only one. The original IDE insight was correct: collapse the feedback loop, stay in the problem. But IDEs have since accumulated their own surface area — plugin ecosystems, configuration panels, preference trees. An IDE that requires you to understand the IDE defeats part of its own purpose. An IDE designed for flow disappears. And in the 21st century, where problem spaces span paradigms that no single language covers, the IDE needs to go further: it needs to let you compose multiple notations without leaving the environment or breaking your thought.
Debuggers. Many early languages worked in a REPL-like manner — the language and the debugger were the same thing, expressed in the same notation. You built the system and interrogated it using identical syntax. You never left the language. That unity is what made it a flow tool: there was no context switch between “building” and “inspecting.” That unity was abandoned in favour of the edit-compile-run cycle, and the debugger became a separate instrument with its own interface to learn and operate.
Modern programmers use the word REPL but the point has been largely lost. The original insight was not “evaluate expressions interactively.” It was “never make the programmer change mental gears to examine what they just built.”
A modern version of this concept goes further, because the problem has grown. Traditional REPL-based languages enforced a single notation — which meant the programmer stayed in flow by staying in one syntax, but was also trapped in one paradigm. Every problem had to be expressed through that paradigm’s eye of the needle, whether it fit or not. State-machine control flow expressed in a functional language. Asynchronous concurrency forced into synchronous sequential notation. Each mismatch is a flow tax: you spend time and attention translating what you actually mean into what the language can accept.
A REPL designed for flow in the 21st century would let the programmer stay in the problem across paradigm boundaries — composing multiple notations, each a natural fit for its subdomain, interrogating any of them using the same environment. The goal is the same as the original REPL: no gear change between thinking and building. The difference is that modern problem spaces are too large and too varied to be squeezed into any single paradigm without breakage.
Workflows. Slow build pipelines, review processes that fragment deep work into hour-long slots, notification systems that fire on someone else’s schedule — these are organisational flow killers. A workflow designed for flow protects concentration as a resource. Slow builds get fixed. Interrupts get batched. Deep work gets time that is actually unbroken.
The principle is simple. Every mandatory context switch costs concentration. Every tool that demands attention for itself steals attention from the problem. The goal of tool design is to reduce that theft to zero.
We do not build tools so programmers can use tools. We build tools so programmers can think.
Flow is what thinking looks like when nothing is getting in the way.
Further Reading
Alan Kay, “Doing With Images Makes Symbols.” At the 57:42 mark, Kay shows a video of Timothy Gallwey teaching a woman named Molly — who has never played tennis — to play competently in twenty minutes, by keeping her analytical mind out of the way and letting her body learn. The point is about flow in learning, and it applies directly to tool design: a tool that forces the user to think about the tool prevents the user from entering the state where real learning and real work happen. Watch from that moment.
W. Timothy Gallwey, The Inner Game of Tennis (1974). The book behind the Molly episode. Gallwey’s framework — that performance is blocked by the internal interference of the analytical mind — maps directly onto the programmer’s experience of being pulled out of deep work by tooling concerns. The inner game is the one you play against your own distraction. The outer game is the code.
Jef Raskin, The Humane Interface (2000). Raskin attempts to describe, in explicit engineering terms, the aspects of computer interface design that interrupt concentration — and those that preserve it. Where most interface design literature stays qualitative, Raskin tries to quantify the cost of mode switches, attention captures, and interface elements that demand cognitive engagement at the wrong moment. Directly relevant to anyone designing tools with flow in mind.
See Also
Email: ptcomputingsimplicity@gmail.com
Substack: paultarvydas.substack.com
Videos: https://www.youtube.com/@programmingsimplicity2980
Discord: https://discord.gg/65YZUh6Jpq
Leanpub: [WIP] https://leanpub.com/u/paul-tarvydas
Twitter: @paul_tarvydas
Bluesky: @paultarvydas.bsky.social
Mastodon: @paultarvydas
(earlier) Blog: guitarvydas.github.io
References: https://guitarvydas.github.io/2024/01/06/References.html
Paid subscriptions are a voluntary way to support this work.

