You make good points, but you are hiding that "lines of text" IS a type. Try using Linux grep on Windows UTF-16 files.
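To make that concrete, here is a minimal Go sketch (the log line and pattern are made up) of why a byte-oriented tool like grep comes up empty on UTF-16 input: the implicit "lines of 8-bit text" type no longer holds.

```go
package main

import (
	"bytes"
	"fmt"
	"unicode/utf16"
)

func main() {
	line := "error: disk full" // hypothetical log line

	// Encode it the way a Windows tool might write it: UTF-16LE,
	// i.e. every ASCII character followed by a 0x00 byte.
	var buf bytes.Buffer
	for _, u := range utf16.Encode([]rune(line)) {
		buf.WriteByte(byte(u))      // low byte
		buf.WriteByte(byte(u >> 8)) // high byte, 0x00 for ASCII
	}

	// The byte-level substring search grep effectively performs:
	fmt.Println(bytes.Contains(buf.Bytes(), []byte("error")))  // false
	fmt.Println(bytes.Contains([]byte(line), []byte("error"))) // true
}
```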
Software development does need better languages and abstractions, but people keep inventing new ones that are essentially the same as the old ones. Python and Perl are basically the same. New languages that actually shift the programming paradigm are very slow to catch on.
This atrocious article was eviscerated at https://news.ycombinator.com/item?id=45135391
P.S. The responder is grossly intellectually dishonest and waves his hands furiously while not refuting a single thing said at HN.
More like completely whooshed over HN's head.
This article's primary thesis is that 1) we see functions and functional composition being used as the basic primitives for architectural organization; 2) this creates significant incidental complexity; and 3) type theory is designed specifically to manage that incidental complexity.
HN's "evisceration" falls into three categories: a) completely missing the point and strongly asserting that types are really necessary for robust functional programming; b) pointing out that interfaces are types; or c) complaints about shell. Unfortunately, these are all mostly just reiterations of in vogue ideas.
Notably, HN completely misses the opportunity to discuss principles of software architecture and the empirics of how those affect our ability to solve and manage problem domains. I appreciate this article for even trying to broach such a topic. It's way harder to think new thoughts than to reiterate the party line!
The key point here is that components should have well-defined and explicit interfaces, and type systems are one way to write down part of that contract and to enforce it. Unix pipes don't let components specify or enforce such contracts, and thus I would say are much more difficult to maintain - the same commands work differently between Linux and macOS, for example, or even between different versions of the same tool.

I also think it is a fallacy to compare software engineering with hardware or mechanical engineering. Could you possibly hope to change circuits or airplanes several times a day, test them in an automated way, and deploy to millions of users without fear, even assuming you could physically make those changes in time? That's all possible because we've built automated ways to test and ensure the correctness of software, and type systems are one of the many important methods for doing it.
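As a minimal sketch of that difference (names and types below are hypothetical, in Go for illustration): a typed signature states and enforces its contract at compile time, whereas a pipe stage expecting colon-delimited text can only fail, or silently misbehave, at run time.

```go
package contract

// User is the explicit, written-down shape of the data this component accepts.
type User struct {
	Name string
	UID  int
}

// LookupShell's contract is visible and machine-checked: callers must supply
// a User, and they get a string back. Compare a pipeline stage like
// `cut -d: -f3`, which promises nothing about what it reads or writes.
func LookupShell(u User) string {
	if u.UID == 0 {
		return "/bin/sh"
	}
	return "/bin/bash"
}

// LookupShell("alice") // rejected by the compiler: the contract is enforced
```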
> The key point here is that components should have well defined and explicit interfaces
This only works well when your problem domain is already solved, well specified, and unchanging. For most real problems, those properties rarely hold. In practice, we end up learning how to build the plane while flying it.
The issue with components and interfaces, functions and types, services and APIs, is that the initially-coded components, functions, and services invariably turn out to be a bad model of the underlying problem. The original program architecture effectively acts as a hypothesis about the structure of our target problem, one we really would like to update as we do business and learn more about our customers, the problem itself, etc.
However, when those initial assumptions manifest as interfaces, types, and APIs, the rest of the system ends up depending on their underlying assumptions, making overhauls unrealistically challenging. Core production data tends to look like large product and sum types, sprawling interfaces, and complex APIs that encode all sorts of edge cases. Those types and interfaces then get used all over the codebase, making the interface assumptions virtually impossible to change without a very intrusive, large-scale, and unrealistic rewrite.
The code we first sketch out is the code we are least informed about. We'd like the initial architectural decisions to be highly malleable as we learn about a problem space. However, "well-specified interfaces" have rather the opposite effect.
UNIX utilities work specifically because of the strict contracts that they force the user to define. You have to set up the input for a given UNIX utility correctly, or it will not work. At all. That is, you have to shape the data in a way that the program can fit into a type it understands.
HTTP, DNS, and every other network protocol in existence work because they have strongly defined types which they expect to operate on, to the point that RFCs usually contain, as one of the main sections, a C typedef.
Agreeing on common contracts doesn’t necessarily require you to have a strong type system in the language you’re using, but you’re still defining restrictions on inputs into your system when you agree on these contracts, and if that isn’t a type, I don’t know what is.
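For a concrete (if rough) illustration: the DNS message header that RFC 1035 pins down bit-by-bit translates naturally into a record type; it's sketched here as a Go struct rather than the C typedef mentioned above, with field names following the RFC.

```go
package wire

// DNSHeader mirrors the fixed 12-byte header layout described in RFC 1035.
// Every conforming implementation agrees on these fields and their widths --
// that shared restriction on inputs is exactly the "type" in question.
type DNSHeader struct {
	ID      uint16 // query identifier, echoed back in the response
	Flags   uint16 // QR, Opcode, AA, TC, RD, RA, Z and RCODE packed together
	QDCount uint16 // entries in the question section
	ANCount uint16 // resource records in the answer section
	NSCount uint16 // records in the authority section
	ARCount uint16 // records in the additional section
}
```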
What about Erlang processes?
Commercial electrical and computer engineers make HEAVY use of automated design rule checks that are precisely analogous to static type checking. So much so that I feel your premise is faulty enough that I just cannot take your argument seriously.
I agree that strong typing is one way of managing complexity, and that having interfaces simplifies integration between different components. However, with message passing and events you are just shifting the complexity around. For example, message passing is more complex and costly than a simple synchronous function call because you have to add serialization/deserialization and potentially networking. Another thing to consider is that you often have to guarantee the order of operations, which is harder in asynchronous message-passing systems. In some cases you end up resorting to async/await or message IDs for ordering/idempotency, which is an order of magnitude more complex. Comparing choreography- and orchestration-based systems, I have found the former harder to follow in terms of end-to-end data flow. In my experience, message passing works better for inter-component communication than for intra-component communication.
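A minimal Go sketch of that shift (all names are made up): the synchronous version is a one-line call whose ordering is implied by the call itself, while the message-passing version needs serialization plus an explicit message ID before the receiver can even reason about ordering or idempotency.

```go
package msgdemo

import "encoding/json"

// Synchronous version: the contract is the signature, ordering comes for free.
func Price(sku string) int {
	return len(sku) * 100 // placeholder pricing logic
}

// PriceRequest is what the asynchronous version has to ship over the wire.
type PriceRequest struct {
	MsgID int    `json:"msg_id"` // needed downstream for ordering/idempotency
	SKU   string `json:"sku"`
}

// SendPriceRequest shows the extra moving parts: serialize, hand off to a
// channel (standing in for a queue or network hop), and leave the consumer
// to deal with duplicates and out-of-order delivery.
func SendPriceRequest(ch chan<- []byte, id int, sku string) error {
	b, err := json.Marshal(PriceRequest{MsgID: id, SKU: sku})
	if err != nil {
		return err
	}
	ch <- b
	return nil
}
```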
I feel like many of us (myself included) have this sense of awe about Unix pipelines (and rightly so) and what they allow us to do, but I'm also skeptical that applying the same concept to software design _in general_ will work as well.