This "obsession with how quick something is to understand immediately and be productive in" is a good concept to have around, and it's something that's been tickling the back of my mind for a while.
In particular, I believe that this effect is strongly present in the Cult of Text Processing - there's a large group of programmers who believe that Text Must Be The Answer because it's just so easy to write `ls | grep` - never mind that each UNIX text-processing tool actually has its own bespoke input and output formats (it's not actually plain text), and so anything nontrivial quickly becomes a fragile mess riddled with edge-cases and liberal use of sed/awk. What's even more interesting is that because of the perceived ease of immediate understanding, even those that have deep experience with this paradigm (and should understand how inefficient it actually is) still defend it.
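To make the fragility concrete, here's a minimal sketch (using a throwaway directory) of the kind of edge case meant here - a single filename with a space is enough to break a naive pipeline:

```shell
# A filename containing a space defeats naive word-splitting.
dir=$(mktemp -d) && cd "$dir"
touch "report.txt" "my notes.txt"

# Two files, but the text stream yields three tokens:
ls | wc -w                # 3, not 2

# NUL-delimited output (a GNU/BSD extension, one common workaround)
# counts the files correctly:
find . -maxdepth 1 -type f -print0 | tr -cd '\0' | wc -c   # 2
```

The point isn't that this is unsolvable - it's that the "plain text" stream silently carries no structure, so every consumer has to re-guess the delimiters.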
That said, I don't think powerful tools necessarily have to have a high barrier to entry. For instance, maybe you could teach new programmers in Racket without type annotations, then introduce Typed Racket as a way to let them learn static typing much more easily than by adopting a whole new language? Such an approach might have made Rust easier to learn - just disable the borrow checker to learn an easier Rust, then re-enable it gradually (starting with the easiest parts of your code and moving to the more complex bits) to ease into what is usually the hardest part for new Rust users.
Tool- and language-developers should focus on making things that are both easy to pick up (Python) and scale well (Common Lisp/Rust).
> Text Must Be The Answer because it's just so easy to write `ls | grep`
That's actually not quite it. What's so fantastic about the textual interface in Unix is that it is the same packaging[1] for both use and reuse.
This is highly unusual, usually packaging that is optimised for use is unsuitable for reuse (doesn't compose well) and packaging that is good for reuse is unsuitable for direct use (cumbersome/inconvenient).
Using text to fill both roles comes with a ton of compromises, as you rightly point out. It's not really great at either function, but sorta/kinda gets by at both. But even with those compromises, pulling off the feat of having the same packaging for use and reuse is just incredibly powerful.
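A tiny sketch of what "same packaging for use and reuse" means in practice (the file and its contents here are made up for illustration) - one textual output serves both a human reader and the next tool in the pipeline, unchanged:

```shell
# One textual output, two audiences.
dir=$(mktemp -d) && cd "$dir"
printf 'alice 42\nbob 7\ncarol 99\n' > scores.txt

# Direct use: a person just reads it.
cat scores.txt

# Reuse: the identical text stream composes with further tools.
sort -k2 -rn scores.txt | head -n1     # carol 99
```

Nothing had to be exported, serialised, or wrapped in an API to make the second step work - that's the feat.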
There are other ways of achieving the same result, for example Naked Objects[2] bridges the gap by the programmer only providing the reuse packaging (an OO API) and the tooling then automatically generating direct-use packaging (a GUI).
Other ways of bridging the gap that can learn from one or both of these approaches clearly seem possible.
Powershell cmdlets return objects that are suitable for reuse and also have their textual representation suitable for (human) use. I think the concept is beautiful, but the execution is awful.
> That's actually not quite it. What's so fantastic about the textual interface in Unix is that it is the same packaging[1] for both use and reuse.
That's the reason CLI tools are so powerful, and the reason why JSON and YAML aren't able to replace them. It's also the reason why human-friendly CLI syntax like Cisco IOS is better than programmatic interfaces like the default syntax in Juniper JUNOS.
That might not be the UNIX environment's true strength, but it's the only one that any of its advocates discuss. I've seen hundreds of people defend the paradigm, and you're the first one who's mentioned "packaging for both use and reuse" - everyone else talks about "text as the universal interface" and "simple, composable commands".
Moreover, I don't think that this feature is unique to UNIX. In particular, Emacs Lisp functions can both be called from other elisp code and directly (and ergonomically) executed by the user (with plenty of features to enrich interactive execution) - and these functions can actually pass typed data between them, unlike shell commands.
This appears to be the same "packaging for use and reuse" that you mention - unless I'm misunderstanding what you're saying, which is very possible because I'm reading through the (exceptionally interesting) PDF that you kindly linked, and I don't understand what it's getting at (although it seems profound).
"text as the universal interface" is the thing that enables "packaging for both use and reuse", so it's not surprising that this is what they refer to.
> Moreover, I don't think that this feature is unique to UNIX.
It's not. It's not common though, and certainly Emacs Lisp code doesn't work at the OS level; it is trapped inside Emacs.
It may not be for you but if you are going to take the piss, please at least give an example that does not generate a syntax error:
`ls | grep`
grep needs input and something to tell it what to do with the input.
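For completeness, a working version of the pipeline might look like this (filenames invented for the example) - the pipe supplies grep's input, and a pattern argument tells it what to match:

```shell
# grep wants a pattern; stdin supplies the input in a pipeline.
dir=$(mktemp -d) && cd "$dir"
touch notes.txt readme.txt todo.md

ls | grep '\.txt$'        # notes.txt, readme.txt
```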
I think "Cult" is a little harsh, but it is handy to simply chain stuff together to quickly get something done. Along with the notion that everything is a file, you can get an awful lot done in a shell with minimal effort.
Anything non-trivial should get the full awk and sed treatment. The sort of system that has the GNU toolset available normally has Python and Perl out of the box, and probably PHP and a few more too - tools for the job. I also have something called pwsh that I dabble with - handy for fettling with VMware via PowerCLI.
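A small taste of "the full awk treatment" - where a bare pipeline runs out of steam, a one-liner with state carries on (the numbers here are just illustrative):

```shell
# Sum a column of numbers: awk accumulates across lines,
# then the END block prints the total.
printf '3\n4\n5\n' | awk '{ s += $1 } END { print s }'    # 12
```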
> It may not be for you but if you are going to take the piss, please at least give an example that does not generate a syntax error
I mean this kindly, as a person who struggles to infer implied details in a lot of circumstances: I think the quoted code was intended as shorthand, e.g. `ls | grep <some-pattern>`.
That's exactly it. I've been running Linux as my desktop operating system for over a decade and know how to use grep - I just opted for shorthand (or may have underclocked some brain cells - I was in a hurry).