A while ago I stumbled across a technique for improving stream buffering that I wish more I/O library implementors knew about. Korn and Vo's sfio library (circa 1991) had a feature called "pooling", whereby distinct streams could be linked together. Any read or write operation on any stream in a pool implicitly synchronized all the other streams in the pool first. This way, when stdout and stderr were pooled, which was the default when both went to ttys, a write on stderr implicitly flushed stdout. I've implemented this feature for myself a couple times; it's fairly easy to do and basically eliminates the need to explicitly flush streams in client code.
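To give a flavor of how little is involved, here's a minimal sketch of the idea in Common Lisp. It has nothing to do with sfio's actual API; POOL, POOL-ADD, and POOLED-WRITE-LINE are made-up names, and only the output side is shown:

    ;; A pool is just a set of streams that get synchronized together.
    (defstruct pool
      (streams '()))

    (defun pool-add (pool stream)
      (pushnew stream (pool-streams pool))
      stream)

    (defun pooled-write-line (line stream pool)
      ;; Before touching STREAM, flush every *other* stream in the pool,
      ;; so buffered output elsewhere reaches its destination first.
      (dolist (s (pool-streams pool))
        (unless (eq s stream)
          (force-output s)))
      (write-line line stream))

    ;; Usage: pool *standard-output* and *error-output* so that writing an
    ;; error message implicitly flushes pending ordinary output.
    (defvar *tty-pool* (make-pool))
    (pool-add *tty-pool* *standard-output*)
    (pool-add *tty-pool* *error-output*)

    (write-string "some buffered output, no newline yet" *standard-output*)
    (pooled-write-line "oops, an error" *error-output* *tty-pool*)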
Citation: https://archive.org/details/1991-proceedings-tech-conference... but note that the explanation of stream pools there is a little less precise and more general than really necessary. I believe that later versions of sfio simplified things somewhat, though I could be wrong. (I find their code fairly hard to read.)
Anyhow, ISTM it's a missed opportunity when new languages that don't actually use libc's routines nevertheless reinvent POSIX's clunkier aspects.
There are still I/O libraries that play really fast & dirty with read/write buffers, which the C standard explicitly allows with its "However, output shall not be directly followed by input without an intervening call to the fflush function or to a file positioning function (fseek, fsetpos, or rewind), and input shall not be directly followed by output without an intervening call to a file positioning function, unless the input operation encounters end-of-file" wording.
Re: obtaining a legal copy of Genera, as of 2023 Symbolics still existed as a corporate entity and they continued to sell x86-64 laptops with "Portable Genera 2.0". I bought one from them then, and occasionally see them listing some on eBay. (This isn't intended as an advertisement or endorsement, just a statement. I think it's quite unfortunate that Symbolics's software hasn't been made freely available, since it's now really only of historical interest.)
I don't know about the Unix certification process itself, but the Single Unix Specification explicitly mentions case-insensitivity among the non-conforming file system behaviors that are allowed as extensions (see 2.1.1 item 4, third-to-last bullet).
So a conforming OS has to make case-sensitive file systems available (which macOS does: you can create case-sensitive HFS+ or APFS volumes). But I'm not sure whether a conforming OS instance (i.e., a running system) has to have any case-sensitive mount points, and either way, AFAIK there's no practical, race-free way for a conforming application to detect whether any particular mount point behaves case-sensitively.
So I believe that as far as the standard goes, a conforming application might run on a conformingly-extended OS where no portion of the file namespace behaves case-sensitively. IOW, a conforming application cannot rely on case-sensitivity for file names.
And then, since the type NIL is a subtype of all types, it's a subtype of CHARACTER. So because the type STRING is the union of all one-dimensional array types whose element type is a subtype of CHARACTER, an array that can't store any values is also a string. Oops.
(Also, just for onlookers, in ANSI Common Lisp, but not its ancestors or its sorta-sibling Emacs Lisp, characters are disjoint from integers. That's why the intersection of BIT and CHARACTER is empty.)
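For concreteness, here's roughly how that plays out at the REPL. Results are as reported by SBCL, which takes the definitions above at face value; arrays of element type NIL are a corner where implementations are allowed to differ:

    (subtypep 'nil 'character)        ; => T, T   NIL is a subtype of every type
    (subtypep '(vector nil) 'string)  ; => T, T   so a vector with element type NIL is a STRING
    (stringp (make-array 0 :element-type nil))
    ; => T in SBCL: a "string" that cannot hold a single character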
Yes, in Common Lisp, NIL is a value (it's a symbol, and by convention also the empty list).
But when used as a type specifier, NIL denotes the empty set. So no Lisp object is of that type, and an array with that element type cannot store any object.
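A couple of REPL forms make the two roles concrete:

    (typep nil 'null) ; => T    the object NIL is of type NULL (and SYMBOL, LIST)
    (typep nil 'nil)  ; => NIL  but no object is of type NIL, not even NIL itself
    (typep 42 'nil)   ; => NIL  the type NIL is the empty set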
I worked in Perl for ~2.5 years in the mid-2000s. It wasn't the language for me, but I liked, respected, and am still friends with colleagues who loved it. However, I was always dumbfounded that none of them was, or even claimed to be, sure of what most code fragments did at a glance, even fragments that only used constructs in the base language. When they worked on existing code, they'd run it, tweak it, run it again, maybe isolate some of it into a small file to see what it did, look at perldoc, ask questions on IRC, etc. As a Lisp guy, I'm all for interactive & iterative development, but I also like learning tools so as to use them more effectively over time. I didn't find that learning Perl made me more productive with it over time (on a modestly sized, pre-existing code base that needed both maintenance and new feature development), and the Perl lovers I knew didn't seem to mind not having this as part of their work.
Anyhow, toward the end of my time there, I had to interview candidates. Because I came to believe that the above is how one had to work with Perl, I took to asking the ones who said they knew Perl, "Can the reverse builtin reverse a list?" (I won't spoil the answer for you.) Most would answer with "Yes" or "No"; 75% of them were mistaken. Either way, I'd ask them, "Suppose you weren't confident about that answer. How would you determine the truth?" IIRC, 90% of them said "I'd write this one-liner..." and (I swear) 100% of the one-liners would give any reasonable person the impression of an answer that turns out to be incorrect. The ones who said "I'd check the perldoc" were the only ones I'd approve for subsequent interviews.
I hate when you have code that you can't simply read to understand what it does. I'd like to think that probably 99% of the code I write avoids that: I refuse to use complicated language constructs wherever I can avoid them, even if that makes the code longer.
When I was a kid and had just started working, I would still often code up whatever I came up with initially, but then I'd go back to it three months later and have to throw the whole thing out because it was impossible to maintain or add to.
On the other hand, there are sometimes additions to a language that are just so useful that you have to expand your vocabulary; in C#, for example, that's happened a few times. One of the notable additions there was LINQ, which made manipulating data so much easier. It can become dangerous, though, much like a complicated DB stored procedure.
IDK how much this matters, but the Common Lisp standard doesn't mandate tail call elimination. So although many implementations offer it (usually as a compiler thing), it's conceptually an implementation-dependent detail that borders on a language extension: if your code actually depends on TCE for correct execution, that code might exhaust the stack under ordinary interpreter/compiler settings, and differently across different implementations. So for Common Lisp, if you want to use standardized language features standardly, it's quite reasonable to reach for iteration constructs rather than tail recursion.
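As a small illustration (function names made up), both of these compute the same thing, but only the second is iteration as far as the standard is concerned:

    ;; Tail-recursive: correct, but constant-stack execution depends on the
    ;; implementation and its optimization settings.
    (defun sum-to-rec (n acc)
      (if (zerop n)
          acc
          (sum-to-rec (1- n) (+ n acc))))

    ;; Iterative, using a standard construct: no TCE needed.
    (defun sum-to-loop (n)
      (loop for i from 1 to n sum i))

    ;; (sum-to-rec 1000000 0) may exhaust the stack in some implementations
    ;; under default settings; (sum-to-loop 1000000) will not.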
> if your code actually depends on TCE for correct execution, that code might exhaust the stack under ordinary interpreter/compiler settings
But Lisp programmers tend to use recursion for iteration and very much count on TCO. So it really has to be implemented, and the language should require it.
Counting on TCO is more of a Scheme thing (where the language spec guarantees it) than a Common Lisp thing. CL does not guarantee TCO so, at least historically, looping (various forms, not just the LOOP facility) was quite common.
As u/dapperdrake points out just below, one can use code-walking macros to rewrite recursion into iteration, and Doug Hoyte shows how to do that (specifically for named-let) in Let Over Lambda, thus allowing the illusion of TCO. But IMO just about every language ought to have TCO.
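For anyone curious what such a macro looks like, here's a sketch in the spirit of Hoyte's NLET-TAIL (not his exact code, and the walking is delegated to MACROLET rather than done by hand): self-calls to the loop name get macroexpanded into PSETQ plus GO, so recursion in tail position becomes plain iteration. Non-tail self-calls are not supported; they'd silently restart the loop.

    (defmacro nlet-tail (name bindings &body body)
      (let ((vars  (mapcar #'first bindings))
            (start (gensym "START")))
        `(let ,bindings
           (macrolet ((,name (&rest new-values)
                        `(progn
                           (psetq ,@(mapcan #'list ',vars new-values))
                           (go ,',start))))
             (block ,name
               (tagbody
                  ,start
                  (return-from ,name (progn ,@body))))))))

    ;; Runs in constant stack space even without implementation-provided TCE:
    (nlet-tail fact ((n 10) (acc 1))
      (if (zerop n)
          acc
          (fact (1- n) (* n acc))))   ; => 3628800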
I used to think this, but now mostly (but weakly) don't. Long options buy expressiveness at the cost of density, i.e., they tend to turn "one-liners" into "N-liners". One-liners can be cryptic, but N-liners reduce how much program fits on the screen at once. I personally find it easier to look up flags than to have to page through multiple screenfuls to make sense of something. In this respect, ISTM short options are a /different/ way of helping a subsequent reader, by increasing the odds they see the forest, not just the trees.
Unix's standard error is definitely not the first invention of a sink for errors. According to Doug McIlroy, Unix got standard error in its 6th Edition, released in May 1975 (http://www.cs.dartmouth.edu/~doug/reader.pdf). 5th Edition was released in June 1974, so it's reasonable to suppose Unix's standard error was developed during that 11-month interval. By that time, Multics already had a dedicated error stream, called error_output (see https://multicians.org/mtbs/mtb763.html, dated October 1973).
All the same, I'd be willing to believe that Unix's standard error could have been an "independent rediscovery" of a feature made highly desirable by other features (redirection and pipes). It's not clear how much communication there was among distinct OS research groups back then, so even if other systems had an analogue, the Bell Labs people might not have been aware of it.
The story that I recall about the origins of stderr is that without it, pipes are a mess. Keeping stdout to just the text that you want to pipe between tools and diverting all “noise” elsewhere is what makes pipes usable.