I'm glad to see this here, for two reasons: (1) In general it's nice when people return to the primary sources rather than second-hand accounts, and (2) this particular topic is of interest to me; here are a couple of previous comments that were somewhat well-received on HN:
I guess it's a long-standing tradition in literary reviews for reviewers to push their own ideas, rather than confining themselves solely to reviewing the work in question. That is what happened here. Knuth had written a program that he had been asked to write, to demonstrate the discipline of literate programming. But McIlroy, as the inventor of Unix pipes and a representative of the Unix philosophy (at that time not well-known outside the few Unix strongholds: Bell Labs, Berkeley, etc), decided to point out (in addition to a good review of the program itself) the Unix idea that such special-purpose programs shouldn't be written in the first place; instead one must first accumulate a bunch of useful programs (such as those provided by Unix), with ways of composing them (such as Unix pipes). A while later, John Gilbert described this episode this way:
> Architecture may be a better metaphor than writing for an endeavor that closely mixes art, science, craft, and engineering. “Put up a house on a creek near a waterfall,” we say, and look at what each artisan does: The artist, Frank Lloyd Wright (or Don Knuth), designs Fallingwater, a building beautiful within its setting, comfortable as a dwelling, audacious in technique, and masterful in execution. Doug McIlroy, consummate engineer, disdains to practice architecture at all on such a pedestrian task; he hauls in the pieces of a prefabricated house and has the roof up that afternoon. (After all, his firm makes the best prefabs in the business.)
There are other points (not mentioned in this article), e.g. the fact that someone had to have written those Unix programs in the first place and writing them with literate programming can lead to better results, and the fact that Knuth's idea of using a trie (though not a packed/hash trie; that's no longer needed) still seems fastest: https://codegolf.stackexchange.com/questions/188133/bentleys... (please someone prove me wrong; I'd love to learn!)
Knuth gladly included McIlroy's review verbatim when he reprinted this paper in his collection Literate Programming. BTW here's a 1989 interview of McIlroy https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm where he looks back and calls Knuth's WEB “a beautiful idea” and “Really elegant”, and his review “a little unfair”, though of course he reiterates his main point.
Knuth really is a fan of writing monolithic (rather than "modular") programs from scratch, in a way that goes against all the experience of software engineering accumulated over decades, so that criticism is well-deserved.
For example, his big programs TeX (1982) and METAFONT (1984) are each book-length and the source code of each is in a single large file amounting to about 20000+ lines of Pascal code. His programs do not contain much in the way of standard software-engineering practices like abstraction, modules (hiding implementation behind an interface), unit tests, libraries, etc. In fact, he has spoken out against unit tests and code reuse! [1]
> the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works and what doesn’t. Otherwise, lots of time is wasted on activities that I simply never need to perform or even think about. Nothing needs to be "mocked up." ...
> With the caveat that there’s no reason anybody should care about the opinions of a computer scientist/mathematician like me regarding software development, [...] I also must confess to a strong bias against the fashion for reusable code. To me, "re-editable code" is much, much better than an untouchable black box or toolkit. I could go on and on about this. If you’re totally convinced that reusable code is wonderful, I probably won’t be able to sway you anyway, but you’ll never convince me that reusable code isn’t mostly a menace.
Moreover, his sympathies always lay with the "other" side of the "structured programming" revolution (he still liberally uses GOTOs, etc -- still coding like a 1950s/1960s machine code programmer), and in his 1974 paper "Structured Programming With Go To Statements", he approvingly quotes something that might horrify many software engineers today:
> In this regard I would like to quote some observations made recently by Pierre-Arnoul de Marneffe:
> In civil engineering design, it is presently a mandatory concept known as the "Shanley Design Criterion" to collect several functions into one part . . . If you make a cross-section of, for instance, the German V-2, you find external skin, structural rods, tank wall, etc. If you cut across the Saturn-B moon rocket, you find only an external skin which is at the same time a structural component and the tank wall. Rocketry engineers have used the "Shanley Principle" thoroughly when they use the fuel pressure inside the tank to improve the rigidity of the external skin! . . . People can argue that structured programs, even if they work correctly, will look like laboratory prototypes where you can discern all the individual components, but which are not daily usable. Building "integrated" products is an engineering principle as valuable as structuring the design process.
> ... Engineering has two phases, structuring and integration: we ought not to forget either one...
(This comment is slightly tongue-in-cheek, but hopefully provocative enough.)
[0]: Hey it's been a couple of hours and there's no reply attacking my comment, guess I better do it myself. :-)
That's extremism, but I think we need people like that. Nowadays, it seems that we've forgotten how to write monolithic programs from scratch. I think we went too far with code reuse. See the LeftPad npm fiasco.
Knowing that the one who is possibly the most respected person in the field of computer science has an opinion that goes against the current trends gives us perspective.
In the same way, I don't fully agree with Richard Stallman's activism and Linus Torvalds's famous rants, but I'm glad there are people like that to shake things up.
> Knowing that the one who is possibly the most respected person in the field of computer science has an opinion that goes against the current trends gives us perspective.
Computer science is not software engineering. You also don't get physicists and mathematicians to determine engineering best practices, for precisely the same reason.
For example, the reference to Shanley's design principle is not only wrong at its core (it's an aerospace engineering principle, not a civil engineering one), it also completely misses the underlying design requirements and constraints. More precisely, aerospace pays a premium for weight, so design requirements emphasize optimizing structural systems with respect to structural weight. This design choice favours operational economy at the expense of production costs, maintainability, robustness, and simplicity of analysis.
None of this applies to software development, or even civil engineering structures.
Knuth also uses assembly in his book on algorithms.
But generally, algorithms researchers seem to not care about abstractions, as witnessed by TeX and LaTeX in multiple ways.
That's probably because when you really need to invent a fancy algorithm (which is their job), that will often not be built out of reusable components.
> "...goes against all the experience of software engineering accumulated over decades... standard software-engineering practices like abstraction, modules (hiding implementation behind an interface), unit tests, libraries, etc."
You need to be careful not to confuse experience and common practice with empirically proven benefit. Many of the practices that you mention are intended to increase the feasibility (perceived feasibility!) of industrial reuse, and/or division of labor, not to make software better in any other dimension (reliability, code size, power/time/space efficiency, fitness for purpose.)
> Many of the practices that you mention are intended to increase the feasibility (perceived feasibility!) of industrial reuse, and/or division of labor, not to make software better in any other dimension (reliability, code size, power/time/space efficiency, fitness for purpose.)
Actually, nowadays the main driving force for abstractions and modularity is to make the code easier to test, understand, and refactor.
Furthermore, "power/time/space efficiency, fitness for purpose" are concerns that don't really apply to software development. In general the main resource is labour, and all other requirements are secondary (computational resources, latency, etc).
My personal experience would point to Knuth being in essence right on at least one point: the obsession with code reuse is truly unwise; re-editable code is much more important. That's not to say reuse doesn't have its place, nor that modularity is of no value. Instead, in the name of reuse people almost invariably write code that's too complicated, with opaque yet leaky abstractions that make the code much more brittle and harder to maintain, and that often have unfortunate non-functional consequences such as unexpected and unpredictable performance or security gotchas that require one to understand too much of the internals anyhow.
Part of the problem is this almost hero-worshipping attitude toward the underlying libs we rely on; and part of the problem is one of perception: even in today's reuse-fetishized culture, likely almost all code is of the non-reused kind; yet in any given program we see huge amounts of reused code imported from e.g. package managers - because those reused bits are often reused a lot.
We'd be much better at reuse if we were a little more skeptical of it, and didn't assume that design rules that hold for code that's been packaged for reuse also hold for the more pedestrian but nevertheless very common code that is not currently being reused. We want strict, leak-free abstractions ideally covering both functional and non-functional aspects for the reusable bits; but where we cannot do that or cannot yet afford to, we want the opposite: better clearly transparent code than a mess of leaky abstractions.
By the same token we don't copy and paste code enough. Sometimes a good abstraction is elusive, yet a pattern is still recognizable and useful. We have language features and a culture surrounding directly reusable code, but no such habits for derivative code, even though that would be quite useful. Essentially: I'm perfectly happy to deal with people using some stackoverflow answer to write code, but as soon as people do, it's like we regress into the dark ages: there's no structured citation, no support for detecting updates, no "package manager" that tracks updates (for stackoverflow clearly doesn't do that), and no diff or whatever to show how you tweaked the code segment. So instead people all too often just throw future maintainers under the bus with some random code-golfed answer, or some "reused" library that is hardly much easier to use than the reimplementation, with much harder to spot gotchas and perf/security issues, and often an API that isn't actually convenient for your use case.
So yeah: software needs reusable components, but 99% of the code you write should be re-editable, not reusable; and 100% of the time you should aim for re-editability, and only ever grudgingly accept the need for reusability after multiple use-cases are found (not just 2 or 3!), and there is a leak-free abstraction possible, and you've considered things like perf and debuggability and security.
"... you find only an external skin which is at the same time a structural component and the tank wall ... Engineering has two phases, structuring and integration: we ought not to forget either one..."
That could be interpreted in non-horrifying ways. Sure, the cost of a function call (especially with a modern compiler that can intelligently inline) is negligible in most cases compared with the engineering benefits it offers.
But what about cases where it's less clear? In databases, there are constantly cycles between:
Cycle phase 0: Abstract Storage: separate storage from computation to make the database system easier to test, replace, and reconfigure.
Cycle phase 1: Push computation down to storage: wow, look how much better performance we can get if we intermingle storage and computation!
The interesting thing about this is that TeX continues to be reused 45 years after its inception; new libraries show up on CTAN regularly, and of course TikZ and LaTeX are far from being in the original design of TeX. Very few libraries have shown the survivability and versatility of TeX. So maybe the way we think about software reuse is wrong.
You seem to be claiming that 20,000 lines of code constitutes a large program. Is that really your intention? I mean libjpeg is 34,000 lines of code, and LAPACK 3.6.0 (one of the very few libraries that excels TeX in reusability and enduring value) is 685,000 lines of code, and each of them is just a small part of many programs. I would instead describe the monolithic parts of TeX and METAFONT as small programs of only 20,000 lines of code, omitting even dependencies on libraries.
Yeah as I said it was a tongue-in-cheek comment and I don't believe it, just wanted to provoke some discussion. :-) But in any case what I meant is that TeX/MF are among his biggest programs (that I know of), not that 20000 lines is a large program (he calls it "medium" IIRC).
(Ironically, in my previous job supposedly using "modern" programming practices, a single Python file had organically grown to over 25000 lines in length and people complained to GitHub about the file not being rendered in full in the browser.)
> The interesting thing about this is that TeX continues to be reused 45 years after its inception; (...) So maybe the way we think about software reuse is wrong.
Incidentally you got it completely backwards. TeX is used because it's a convenient interface between higher level descriptions (i.e., book content) and the lower level output (pretty document formats). Thus, once again abstractions and interfaces show their value.
Additionally, TeX is used rarely by humans, while LaTeX is the tried and true workhorse. Again, an abstraction that targets an interface.
And how many TeX and LaTeX reimplementations are there? Again, the interface and abstractions show their value.
> TeX is used because it's a convenient interface between higher level descriptions…
I think you mean LaTeX, not TeX. (LaTeX is a macro layer on top of TeX that provides these convenient interfaces, while TeX is a low-level typesetter.)
> Additionally, TeX is used rarely by humans…
This sounds a bit contradictory with the previous statement, but maybe you mean that TeX is rarely used directly by many people (without LaTeX or some other macro layer). In any case, the LaTeX macros are implemented in TeX, so the TeX program is always the one being used (which I think was the point of the poster you're replying to).
> And how many TeX and LaTeX reimplementations are there?
I didn't understand the meaning or point of this, as the answer is either "very few" or "many" depending on what you're counting. Extensions of the TeX program include eTeX, pdfTeX, XeTeX and LuaTeX, not to mention a few others like pTeX and upTeX. (Confusingly, when we say “TeX” we often mean one of these programs as well, as a lot of their code comes from TeX—they are written/implemented as patches (changefiles).) Reimplementations of a small part of the TeX/LaTeX syntax for mathematical expressions (only) include MathJax and KaTeX. Are these a lot, or hardly any (would have expected a lot more)? Depends on your perspective I guess.
It's probably worth pointing out that nobody uses TeX bare without a LaTeX-like macro library. Knuth wrote his books with a macro library confusingly called "plain TeX", which ships with the TeX language interpreter. (That interpreter was the subject of this thread.) LaTeX relies on the plain TeX library, just as, for example, GLib relies on the C standard library. The third alternative popular TeX macro library for document formatting, other than plain TeX and LaTeX, is a thing called ConTeXt. It seems to me that ConTeXt is less popular than LaTeX, but more popular than plain TeX.
But of course to invoke any of these you have to write code in TeX, just as to invoke Rails you have to write code in Ruby, or to invoke Numpy you have to write code in Python.
As for reimplementations, I think the only ones were done at Stanford in the late 1970s, as Knuth and his students wrote a series of prototypes, culminating in the TeX language we know today in 1983.
It is absurd that my repeated, informed rebuttals of this rudely-phrased nonsensical misinformation are being flagged, so that visitors to the site will see only the aggressive misinformation and not the corrections. What kind of site are these people trying to turn this into?
This is not just a "poor me, I am being persecuted" issue. We can let Hacker News turn into Twitter, with insult contests being resolved by flagging campaigns that eventually hellban the accounts of the less-popular side of any issue, or into YouTube, dominated by conspiracy theories and hate; or we can stand up for reasoned discussion and informed comment.
Svat's reply is of course excellent and highly commendable, but they did not post it until after I posted my cri de cœur above. The thread consisted of [svat's reasonable comment], [my reasonable reply], [svat's reasonable reply], [troll post with reckless disregard for the truth], [flagged], [flagged], where the [flagged] posts were the ones where I was correcting the misinformation.
Also, note that the person who posted the misinformation never bothered to thank svat for their careful and courteous correction and their meticulous attempt to try to DWIM the troll comment into something that made some amount of sense. This makes more sense on the assumption that they were trolling than on the assumption that they were merely misinformed — someone who was interested in the truth would surely have offered thanks for such a helpful and polite correction. Instead, it seems clear that they were only posting here to make trouble and waste the time of reasonable, knowledgeable people like svat — particularly in light of https://news.ycombinator.com/item?id=22424205.
I had already come to that conclusion because the comment was clearly based on a set of misconceptions about what TeX and LaTeX are, and how they relate to each other, which could have been corrected by reading the introductory paragraphs of the Wikipedia article for either TeX or LaTeX. If someone isn't investing even that level of care in their comments, they don't care whether they're talking nonsense or not.
We can see that the same account continues to post aggressive comments rudely attacking other users, although since I'm not familiar with the areas they're talking about, I can't tell if they're employing the same reckless disregard for the truth in these cases: https://news.ycombinator.com/item?id=22501626 "I don't see the point of your post, and frankly sounds like nitpicking."
https://news.ycombinator.com/item?id=22499637 "Don't you understand where and why are there abstractions? ... having people [referring to the person he's replying to] naively complain"
Maybe you think this is the kind of dialogue we should be encouraging on here, but I don't. I think that in addition to setting an example of better behavior, as svat did, we should explicitly call out such misbehavior and explain why it isn't desirable, as I did.
Of course that's the case, but thankfully we have you, kragen, to thank for your work in accusing others of being ignorant and clueless while knowing nothing at all about anyone or anything, and adding nothing of value to the discussion in the process.
In cases of aggressive trolling like those statements, I think it's sufficient to point out that they're obviously wrong and assume that people will then check Wikipedia before relying on them. That doesn't work if my contradiction gets flagged, however.
Modularity incurs a complexity tax. If you are smart enough to keep it all in your head, like a Motie Engineer, you can simplify by omitting it. But if you're not...
That's the thing here. Knuth can easily juggle far more complexity in his head than the average programmer, and that's fine for software that only he has to maintain. But when something needs to be maintainable by average programmers, you need to write for them and avoid that complexity.
Who's writing for the average programmers? It's the average programmers. How can average programmers write code that is sufficiently abstract that it hides the complexity they can't handle? How can they change that code when it doesn't work? Does every company of 5 devs need to have a PhD?
> How can average programmers write code that is sufficiently abstract that it hides the complexity they can't handle?
Well said.
Hiding complexity is harder than handling it. Designing an effective way to expose it for use amounts to (part of) a "programming system" [Brooks].
So it's library writers who hide the complexity for average programmers, with a "programming product": stdlib, open source project or (rarer these days) commercial "engine".
When there is no such library, we get the current state of our art: many projects failing.
I think that's the point of literate programming. You no longer have to keep it all in your head. You have an article or a book. You go to the chapter that interests you, and modify it. Now those modifications get scattered throughout the monolith.
The point of literate programming is that you have a text which is structured in a way that humans are good at dealing with, rather than around some nifty mathematical abstraction like functional or OOP code, or around coherent modules.
(Having worked with a badly structured monolith with a mixture of literal code and a million bad abstractions, I must admit the simplicity of working with the used-once literal code has its advantages. But so does the more abstract code. The only thing I know is that I want to control my entry point rather than go via a framework and suffer my concrete classes getting upcasted.)
But always keep in mind Brian Kernighan's observation: Debugging is twice as hard as programming. So, if you write a program as cleverly as you can, you are by definition not smart enough to debug it.
"For example, his big programs TeX (1982) and METAFONT (1984) are each book-length and the source code of each is in a single large file amounting to about 20000+ lines of Pascal code."
He also used six-character identifiers. He had to. TeX and Metafont were intended to be compiled by lowest common denominator Pascal compilers. Modules, etc., were all vendor specific extensions at the time.
One might consider that to be good engineering.
"If you cut across the Saturn-B moon rocket, you find only an external skin which is at the same time a structural component and the tank wall."
Soda cans and plastic water bottles are the same; it's how they are cheap enough to be fit for for purpose.
Agreed; I just didn't want to continue arguing with myself and post a counterpoint to the counterpoint; glad others are doing it :-) (Actually I posted a comment along the same lines in another thread: https://news.ycombinator.com/item?id=22409822) BTW one of the features of WEB is/was that it allows arbitrary-length macro names, which works around the Pascal restriction that only the first 8 letters of identifiers count: https://texdoc.net/pkg/webman
"the idea of immediate compilation and "unit tests" appeals to me only rarely, when I’m feeling my way in a totally unknown environment and need feedback about what works"
I've had to check myself and indeed we have different ideas about the meaning of "speaking against unit tests".
Did you know about all the tests he wrote and ran for TeX? There's a book about it as well.
For TeX he has written a single large test called the TRIP test: described in "A torture test for TeX" (http://texdoc.net/pkg/tripman). This is a conformance test (to weed out bad reimplementations), written after TeX was written and debugged. (See https://texfaq.org/FAQ-triptrap for details.) Apart from this report there isn't a separate book about testing TeX specifically, though there is a TeX manual (The TeXbook, Volume A of Computers and Typesetting), the TeX source code as a book (Volume B, roughly equivalent to this: http://texdoc.net/pkg/tex), and there's also a nice paper called The Errors of TeX (https://yurichev.com/mirrors/knuth1989.pdf), supplemented by a changelog (http://texdoc.net/pkg/errorlog).
By "unit tests" Knuth is speaking of tests that each test just one part of the program in isolation, mocking out the rest, and also more broadly about the practice of "test-driven development" which has this as its prerequisite. He means he finds it simpler to just write the whole program up front, and test it wholesale.
I learned Pascal, programming logic based on invariants, WP, ... and algorithms with Pierre-Arnoul de Marneffe between 1994 and 1998. That was a very good time and proved so interesting and useful. I knew he had something to do with Knuth and Oberon.
> the fact that someone had to have written those Unix programs in the first place
I think the key takeaway here is that it's better all around if somebody like Knuth spends their time and effort on those Unix programs - or, in general, the reusable blocks - and leaves putting them together in various ways to solve the different issues at hand to the less qualified.
Unfortunately we can't control what other people do; everyone does what they are driven to do / what they find valuable. (We can at most create the environment and incentives that will influence them strongly, if their own intrinsic drives / values aren't pulling them strongly enough in some direction. But what incentives could sway a tenured Stanford professor who already had a Turing award and was certain what his life's work was?)
What Knuth seems to enjoy are ideas, and teaching/exposition. That might explain why his "literate programming" is structured around the metaphor of writing, rather than that of, say, engineering, architecture, or tool-making. In this case, he has written an “essay” around a certain data structure that we may call a hash-packed trie, a small twist (possibly never done before!) on the packed tries that had been described in his student Frank Liang's thesis and used for hyphenation in the TeX program.
This data structure is uniquely suited to this problem of going through a file and counting words. It cannot be obtained by putting together "reusable blocks". (At most it can itself be packaged as a library, but would probably be used only for word-counting, which is the original problem itself.) So I would disagree with your "key takeaway" IMO. What you (and McIlroy) are describing is a different approach where one does not try to arrange for a problem to be solved using the data structures and algorithms best suited for it, but tries to express the problem in terms of existing “blocks” even at some cost of efficiency or other considerations. This is a perfectly fine philosophy on its own (see the quote above about "prefabricated house"), but it's not the full story here.
I think it's the other way round: If someone like Knuth can deal with levels of complexity others can't then he should work on highly complex problems. Others are perfectly capable of writing tools that "do one thing well".
I wonder if the two solutions were ever compared on some kind of test case corpus - like an encyclopaedia, or something similar. I am talking about execution time, memory use and so on.
Given Knuth's background, it may well be that he was looking for the (computationally) optimal solution, whereas the shell script was more about writing it quickly using existing tools.
1. Note that these were not quite "two solutions" as one might think from the phrase, but rather a commissioned program, and a review of it (which also happened to contain an alternative approach).
2. I did actually end up comparing them on a test corpus just last month -- kind of linked to it twice in the comment but here it is again: https://codegolf.stackexchange.com/questions/188133/bentleys... -- a C++ translation of Knuth's program runs about 140 times faster on the largest case than the shell solution.
3. About "writing it quickly using existing tools": note that while the Unix commands -- tr, sort, uniq, sed -- are widely available today (even on Windows!), they were not so in 1986, unless you were in certain special circumstances such as having paid a lot of money for a Unix system. So McIlroy was pointing out that in their (Unix) world, this problem would be easier than it was outside, so his proposed solution was also sort of an advertisement for Unix, not just the Unix philosophy.
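For reference, since the pipeline itself keeps getting described rather than shown, McIlroy's solution as commonly reproduced from Bentley's column looks roughly like this (${1} is the number of words requested; the original used a literal newline where '\n' appears below, and the comments are added here, not part of the original):

  tr -cs A-Za-z '\n' |   # squeeze every run of non-letters into one newline: one word per line
  tr A-Z a-z |           # fold everything to lower case
  sort |                 # bring identical words together
  uniq -c |              # count each distinct word
  sort -rn |             # order by count, most frequent first
  sed ${1}q              # quit after the top ${1} lines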
There are many, many ways to tell this story. However it resonates because it fits well with an ongoing dynamic. Which is that programmers naturally want to build perfect tools while the business need tends to be for a quick and dirty solution that can be assembled out of prebuilt pieces.
Except that this one is extreme. It is hard to find any programmer who builds a more perfect tool than Knuth. There isn't a better known problem where the discrepancy between the perfect and the good is this dramatic. Therefore this became the poster child for this ongoing type of conflict retold by those who see themselves as on the side of the quick and dirty solution built out of preassembled pieces. Who, of course, slant the story to further fit their prejudices.
This kind of slanting is common. Take a famous WW II example where they were looking to improve bomber durability by putting more armor plate where the bombers were coming back hit. A wise statistician advised them that those were the spots that didn't need armor; they should put it where the bombers were hit and didn't come back. Which, assuming that bombers were hit evenly everywhere, was all the places where they didn't find holes.
The reasoning is perfect, and illustrates a key point of statistical wisdom. But the retelling of the story almost never admits that the advice didn't really make a difference. The actual solution to the rate at which bombers were being destroyed was to drop chaff - strips of aluminum cut to half the wavelength of the German radar - so that the radar systems got overwhelmed and the Germans couldn't find the bombers.
> Which is that programmers naturally want to build perfect tools while the business need tends to be for a quick and dirty solution that can be assembled out of prebuilt pieces.
Nope, this is nothing of the sort. It's just a case of two people solving two different (if related) problems, and everybody else trying to draw (by necessity, completely worthless) conclusions by comparing solutions to different problems.
McIlroy solved the K most common words problem. Knuth used a from-the-ground-up solution to that problem as a way to illustrate how Literate Programming works. His solution was never meant to be as tiny as McIlroy's, it needed enough meat on it that LP actually did something non-trivial.
You missed my point. I was not saying that this story is a good example of that ongoing dynamic. But instead that there is such a dynamic in our industry, and the telling of the story resonates well with it.
Another example people use for the KISS principle is that of the 'space pen', where the US spent thousands/millions developing a pen that works in space whereas the Soviets used a pencil. However, almost none of the story is true. Regardless, it gets repeated over and over.
I have heard the real reason for avoiding pencils was electrical shorts: pencils inevitably have bits of conductive graphite breaking off and floating about, potentially damaging electrical components.
Interesting read! Although it isn’t clear what the combustion source is based upon, on-orbit conditions are generally different from STP and combustion effects may not translate
Uh, false dichotomy. The problem is fictional stories that are propagated as truthful because they reinforce people's biases and/or advance somebody's agenda.
"Which is that programmers naturally want to build perfect tools while the business need tends to be for a quick and dirty solution that can be assembled out of prebuilt pieces"
I spent quite a few years trying to balance different kinds of speed in "quick and dirty solutions" - speed of writing, speed of execution, speed of understanding...and the risk of failure. To me, that's infinitely more interesting and enjoyable than an unconstrained environment or just one dimension of optimization.
> To be precise, regarding Wald's work on aircraft damage we have (1) two short and rather vague mentions in Wallis' memoir of work on aircraft vulnerability and (2) the collection of the actual memoranda that Wald wrote on the subject. That's it! Everything not in one of these places must be considered as fiction, not fact.
(emphasis in the original)
This is a rather extreme take. Ignoring the rather important fact that the author of the piece missed at least one important source where Wallis gives Wald credit (see the postscript), arguing that something must be "considered as fiction" because it is not attested to in the (known) written sources is unreasonable.
I don't know at what point the postscript was added (it appears in the first archive.org snapshot of the page, from 2018; the essay was first published in 2016), but it seems misleading to leave the whole rant up there and then tack on a bit at the end saying "Oh I was wrong by the way."
I think this statement is a reference to a wide body of fake material circulating on social media platforms, including, for instance, a drawing of an airplane dotted with x-y damage locations, which is neither an aircraft that was in use in that theater, nor a type of figure produced by Wald or anyone related.
I did read through one of Wald's papers after reading that article, and I agree with the thesis that there's little in them about survivorship bias or a strong need to correct it. Maybe "debunk" is the wrong term; it's more of a clarification and re-emphasis of his actual work.
Actually, the real solution was to build fighter escort planes that could match the bombers' range, like the P-51 Mustang, or to have bases in France or Italy that the other shorter range fighters could escort the bomber fleets out of. Once this happened, the Luftwaffe was quickly decimated and bomber casualties dwindled.
Actually, the US and UK airforces took very different strategies. And different things worked for them.
The US went during the day mostly aiming for precision bombing of targets. The UK went at night with mass bombing runs.
The UK had horrible casualties until the introduction of chaff. After that they shut down the German air defense and took out Dresden. The USA had lighter casualties but didn't dare hit targets at the same range until they had better fighter escorts. Fighter escorts didn't work so well for the British because the German planes had better radar.
And while we are at it, the European war was mostly won by American manufacturing and Russian soldiers. During the Cold War decades, we retold the story in ways that minimized the Soviet contribution. But more German soldiers died fighting on the Eastern front than were killed by all other countries in all other theaters of war combined.
> the European war was mostly won by American manufacturing and Russian soldiers.
That's one way to think about it. Another way to think about it is that WW2 was two separate campaigns: one between all the powers you normally think about, and the other just between Germany and Russia, an entirely-optional war that likely wouldn't have begun if Germany hadn't decided to start it. It's not really that Russia was aiding the Western powers in a single fight; it's more that an independent, simultaneous Russo-German conflict starved Germany of the materiel it needed for its other campaign.
If Germany hadn't invaded, then the USSR would have attacked first (and WW2 would have been over sooner, probably with a larger part of Europe under Soviet control). Both powers had attack plans; the Germans executed theirs sooner. Some historians claim that the Germans got ahead of the Soviet attack by just a few weeks.
There's also the point that Germany absolutely needed the oil fields to keep their war machine running.
I'm skeptical of the USSR executing a plan to attack Germany only two weeks later, given how disorganized they were in the face of the German attack. The Russian army isn't something you can move across the country in a couple of weeks.
The argument as I've seen it presented was that a large part of the army was already moved up close to the border and in attack positions, and not set up for defensive operations. Which is why, the argument goes, there were so many soldiers to surround and cut off.
I'm not endorsing the argument, note; I haven't looked into this enough to have anything resembling a useful opinion.
You have to rush to disseminate it. If you don't, the topic drops off the front page and eventually it gets locked. Hacker News has many pros, but it doesn't exactly facilitate slow, careful and thorough discussion. (Old fashioned mailing lists might've I guess, but I'm not aware of any contemporary platforms that do.)
It is absolutely funny how a balanced, unopinionated comment providing additional information gets downvoted. If this is not a monoculture then I don't know what is.
Possibly, although as I mentioned he's not alone in his claims, he does have a fair bit of evidence, and he is most certainly not western nor is his claim new.
> It's not really that Russia was aiding the Western powers in a single fight;
Setting intentions aside, as mentioned by the OP, the Eastern front is where the Germans lost the most blood and resources. If not for that, the resistance in the other theaters would have been fierce.
>an entirely-optional war that likely wouldn't have begun if Germany hadn't decided to start it
There is a great deal of consensus among historians that it was absolutely inevitable and unpreventable. A reasonable minority even support Suvorov's claims that the USSR was planning to start the conflict and Germany only beat them to it by a couple of weeks.
And while we are at it, if the Russians hadn't made a deal with the Nazis to steal Poland, there wouldn't have been a war to begin with, and all those untold lives would have been spared.
Many European states other than the USSR made non-aggression pacts with Nazi Germany:
- Poland [1]
- Estonia [2]
- Latvia [3]
- UK, France, Italy [4]
It is important to note that all of these pacts were made prior to the Molotov-Ribbentrop pact.
Your comment misrepresents the historical situation, which is that the USSR observed many powerful European countries making agreements with Nazi Germany, and Stalin realised that the USSR would be on its own in a war with Germany. Which is in fact what happened.
Not just non-aggression - Poland, in particular, directly participated in the partitioning of Czechoslovakia under the Munich Agreement, forcibly annexing parts of its territory under the threat of a military invasion.
Poland fought the Germans in 1939, so the USSR wasn't "alone in its own war with Germany". It was just that the USSR decided to invade Poland instead of joining the fight against the Germans in 1939.
If you want hypotheticals, here is what Winston Churchill had to say on the topic in a speech to the US Congress after the war was won.
President Roosevelt one day asked what this War should be called. My answer was, "The Unnecessary War." If the United States had taken an active part in the League of Nations, and if the League of Nations had been prepared to use concerted force, even had it only been European force, to prevent the re-armament of Germany, there was no need for further serious bloodshed. If the Allies had resisted Hitler strongly in his early stages, even up to his seizure of the Rhineland in 1936, he would have been forced to recoil, and a chance would have been given to the sane elements in German life, which were very powerful especially in the High Command, to free Germany of the maniacal Government and system into the grip of which she was falling. Do not forget that twice the German people, by a majority, voted against Hitler, but the Allies and the League of Nations acted with such feebleness and lack of clairvoyance, that each of Hitler's encroachments became a triumph for him over all moderate and restraining forces until, finally, we resigned ourselves without further protest to the vast process of German re-armament and war preparation which ended in a renewed outbreak of destructive war. Let us profit at least by this terrible lesson. In vain did I attempt to teach it before the war.
In his book The Gathering Storm, he adds several more points where Hitler could have been easily stopped before he started.
Here's another view, from the economist John Maynard Keynes (of Keynesian economics fame), regarding the devastating reparations that were imposed by the Allies on Germany after the end of WW1:
...Keynes began work on The Economic Consequences of the Peace. It was published in December 1919 and was widely read. In the book, Keynes made a grim prophecy that would have particular relevance to the next generation of Europeans: "If we aim at the impoverishment of Central Europe, vengeance, I dare say, will not limp. Nothing can then delay for very long the forces of Reaction and the despairing convulsions of Revolution, before which the horrors of the later German war will fade into nothing, and which will destroy, whoever is victor, the civilisation and the progress of our generation."
Germany soon fell hopelessly behind in its reparations payments, and in 1923 France and Belgium occupied the industrial Ruhr region as a means of forcing payment. In protest, workers and employers closed down the factories in the region. Catastrophic inflation ensued, and Germany's fragile economy began quickly to collapse. By the time the crash came in November 1923, a lifetime of savings could not buy a loaf of bread. That month, the Nazi Party led by Adolf Hitler launched an abortive coup against Germany's government. The Nazis were crushed and Hitler was imprisoned, but many resentful Germans sympathized with the Nazis and their hatred of the Treaty of Versailles.
A decade later, Hitler would exploit this continuing bitterness among Germans to seize control of the German state. In the 1930s, the Treaty of Versailles was significantly revised and altered in Germany's favor, but this belated amendment could not stop the rise of German militarism and the subsequent outbreak of World War II.
It is way more complicated than the economics-only story of "reparations and inability to pay for them" makes it sound.
The "stab in the back" myth, that the Jews made Germany lose WWI, started right after WWI ended. The Nazis and other radicals were very active well before the Ruhr crisis, and they were engaged in making the Ruhr occupation play out the way it did, too.
The continuing bitterness among Germans was something Hitler not just exploited, but actively worked to keep alive and inflame. He actively worked against solutions and agreements that could have made the situation better.
A significant portion of Germans did not believe they had actually lost WWI and really wanted a rematch - not just because of the economy, but also because Germany had a long militaristic tradition and values that did not just die after the war.
Of course it'd have been easier to pay reparations if Germany wasn't squirreling away money to sidestep the treaty's prohibition on further preparation for war. Nazi Germany didn't place an Amazon Prime order for next day delivery of bomber aircraft and submarines the night before they invaded Poland - preparations took many years, and cost a lot of money.
You've got this badly out of sequence. There's fifteen years of Weimar struggling to pay the Versailles reparations in more or less good faith.
On a small scale, the Prussian old guard tried to side-step some of the armament restrictions - notably doing some joint tank development with the Soviet Union - but before Hitler said piss off and began rearming in earnest, the Wehrmacht was limited to 100k, virtually no planes, and a remnant of the Kaiser's fleet.
At that point, it was too late. WWII was caused by Woodrow Wilson. He lied to US voters with the campaign slogan "He kept us out of war!", which was widely celebrated until a month after his re-inauguration, when he rushed us into war. With the USA saving their asses, the UK and France had no reason to negotiate with Germany in good faith. If we had left them to suffer the consequences of their poor decisions, they would have found a way to live in peace with the Germans.
"if the russians hadn't made a deal with the nazis to steal poland"
And how exactly do you know this?
Maybe if the USSR hadn't bought time and buffer space with this deal, Germany would've taken the whole of Poland and continued with an attack on the USSR?
Maybe the West would have stood aside (like it did during the Munich Betrayal, like it did during the Phony War) and watched with satisfaction as the Nazis killed Communists?
And later watched as the entire Slavic and Jewish population of the occupied territories was exterminated?
If they hadn't split Poland, then both of them would have tried to just capture it, and essentially the same end result would have been achieved. Both powers were primed to go to war.
I have no problem with making assumptions but it would hopefully be grounded in a domain specific understanding.
If you read some of the replies, you'll see one of the fixes involved fighter escorts. This is probably based on an assumption that fighters contributed to the downing of aircraft. These would probably have a different shot distribution than anti-aircraft artillery.
Not to belabor the point, but jumping to conclusions based on assumptions is exactly what good root cause analysis guards us against. A uniform hit distribution may be a reasonable assumption for an uninitiated statistician, but I'm not sure one with some domain background in the problem would make the same assumption.
Site is down, but it looks like archive.org got it.[1] But the entire page is blank, because javascript. It has a fallback, which has what looks like the article in a noscript tag.
> It looks like archive.org got it. But the entire page is blank, because JavaScript.
The AJAXification of the web, even for trivial web pages, is defeating archive.org on multiple occasions; it's a huge threat to the preservation of our history. We cannot stop people from overusing AJAX, so we need to develop better archival tools.
This is an equivalent problem to the web search problem, right? I believe Google indexes webpages by visiting them in headless Chrome and dumping the resulting DOM - it seems like archive.org could do the same.
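As a rough sketch of that approach (the binary name varies by platform between chromium, chrome, and google-chrome, and the URL and output file are just placeholders), an archiver can ask headless Chromium to run the page's JavaScript and save the resulting DOM as static HTML:

  # Load the page, execute its JavaScript, and dump the rendered DOM to a file
  chromium --headless --dump-dom 'https://example.com/some-article' > snapshot.html

The saved snapshot.html is then an ordinary static document that an archive crawler or full-text indexer can handle without running any scripts itself.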
I'm wondering... is there a way to use the archive.org tools to download the website and later transfer it to my own database, so as to not take up too much space in the archive? If that's not possible, then the archive's controller alone is in control of history.
That's the only real solution. Trying to redirect the whole ecosystem on to a more convenient path is doomed to fail unless your path is actually easier or better in some way.
> unless your path is actually easier or better in some way
Well, I would argue it is, in nearly every way.
The largest drawback is that by avoiding pulling the content with JS you won't automatically deny service to people that are trying to avoid trackers... This site looks like a medium wannabe, so this is probably important to them.
Ultimately, how do you expect websites to be funded? There are a few options: a) tracker-based ads from a network, b) contextual ads based on the content, c) a paywall, d) Patreon-style, where a core of dedicated fans get some additional content or inside info and the rest is put out for free, e) authors pay to post (for some reason?), or f) some kind of attention coin or just crypto mining in visitors' browsers. Only (a) really starts getting you money from day one; even if you are willing to lose readers with some sort of pay gate [0], there's still the initial stage where you're not making much because you don't have an audience. Websites need some way to make money.
[0] And just look at the HN comments for any NYT or WSJ article to see how well that goes over even with people who complain about ad trackers and privacy.
That's basically the patreon/authors-pay model and doesn't work for anything actually trying to run a business or make money using a website. Also it's kind of limited in scale, because eventually hosting costs are going to grow past what individuals are able to pay, unless the community is very small or you're only hosting text.
I get the nostalgia for the pre-Endless-September internet, but the cat's out of the bag on that, pretty much. There are some small efforts to remake it with peer hosting and federated networks like Peertube or Mastodon, but they'd crumble if they got as popular as the Youtube or Twitter they want to replace.
Something related that's been on my mind recently: a lot of the advantage in building well-designed programs (documented, modular, built to be tested, free of smells, etc.) is less about getting the program right than about getting future changes right, either by someone else, or even yourself when you've forgotten how to do it. But future changes only matter if you want to use the existing program as a starting point.
McIlroy's pipeline is a little bit hard to read, but I would bet that most people with moderate experience in building shell pipelines could rebuild something equivalent from scratch, even if they'd have trouble explaining how the current pipeline works. (Or people with experience in Python, Perl, etc. could throw together an equivalent script from scratch quickly.)
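To make that concrete, here is one possible from-scratch rebuild (a sketch only, not McIlroy's original; input.txt and the count of 10 are placeholders), using awk's associative arrays in place of the sort | uniq -c idiom:

  tr -cs 'A-Za-z' '\n' < input.txt |                        # split the text into one word per line
    tr 'A-Z' 'a-z' |                                        # normalize case
    awk '{ n[$0]++ } END { for (w in n) print n[w], w }' |  # count words in an associative array
    sort -rn |                                              # most frequent first
    head -n 10                                              # keep the top ten

It isn't identical to the published pipeline, but it solves the same problem, which is the point: the components are cheap enough to reassemble on demand.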
An implication is that, if you're in a language where you can write a productive program to do a task from scratch within (say) 30 minutes, there's a lot less of a reason to think about good programming design than if you're in a language where doing that same task from scratch is likely to take you a day. In the second language, most of the value of writing documented and well-structured code is so that it takes you 30 minutes when someone asks you to modify it. But in the first language, you can throw away your code when you're done with it and still solve the modified problem within 30 minutes.
Another possible implication: it's better to build reusable components (libraries and utilities) than easily-modifiable code. Part of why McIlroy's pipeline works so well is that tools like "tr" and "uniq" exist - and most of us will never have reason to modify the source code of those tools. We need to know what their interfaces are, but we don't need to know their internals.
What you are describing may say more about the problem definition than the rate of change. If the software will only be used in certain limited situations, then you can make a bunch of assumptions and assemble the solution from spare parts. As long as your assumptions hold everything is fine. When the assumptions fail, someone will tell you, and you can rewrite it with the new assumptions in mind.
But some software might be used in a lot of strange environments with strange inputs. A lot of effort needs to be spent trying to clarify and improve semantic and performance edge cases. Using libraries here is not necessarily helpful because sometimes you spend more time working around the edge cases in the library than you would just re-implementing it.
I recently re-implemented a minheap even though there was already a library readily available, because the library assumed pointer-width types and I needed it to support longs. On my machine, a pointer and a long are the same length, but that's not guaranteed.
Regardless of environment, good software engineering (where by "software engineering" I mean "the process of producing code that continues to do useful things over time") requires a feedback loop from how the software is being used in production back to the developers.
If your assumptions are failing, and you know about it, then you can deliver new code to match the new assumptions, either by rewriting it or by modifying it as makes sense.
If your assumptions are failing and you don't know about it, you're in a terrible position. You can make further assumptions about which of your assumptions might fail (e.g., "the software is being compiled on a machine where a long is longer than a pointer") and which won't, but those are again assumptions - you don't know what changes you haven't thought about might come down the pipe (e.g., "the software is being translated to JavaScript, which has no longs, only floating point" or even "there's now a requirement to use a stable sort, so a heap is out of the question"). It's not at all clear that building rigid, well-armored software and throwing over the wall can even work. If at all possible, effort is better spent closing the feedback loop.
This comment reminds me of something of a thought-experiment-cum-hacky-side-project that I can’t seem to find.
The premise was, what if you could never edit a function? Instead, you had to delete it and recreate it entirely whenever changes were needed.
The incentives are twofold - keep functions small, so you don’t lose too much investment if you have to delete code, and think before you start writing, else you waste time.
As I remember, this project came with an AST manipulator that would helpfully delete your function if its unit tests failed.
You know, thanks to Hyrum's Law https://www.hyrumslaw.com/ , there's a variant of this in many engineering contexts: you can never remove functionality directly; you can only add new functionality, deprecate (but maintain) the old, and kill it once everyone has moved off.
Put that way, it's surprising that people haven't felt incentivized to ship the smallest and most replaceable things.
The project you remembered but couldn't locate is likely Prune within Limbo in the TCR (test && commit || revert) experimental development style by Kent Beck et al which you can review here:
https://increment.com/testing/testing-the-boundaries-of-coll...
It seems to have the "HN Hug of Death" ... here's a snapshot summary from memory:
Knuth and McIlroy gave examples of a word count program. Knuth used Literate Programming and did it in 8 pages, McIlroy used shell utilities and did it in 8 lines.
This post goes back to the original paper and discovers that Knuth has been grossly misrepresented as to what he was trying to do, and what he achieved.
Edit
bonyt's[0] comment[1] has links to copies of the content.
The argument, in short, is that the problem Knuth is solving isn't "find K most common words", but rather "Use the K most common words problem as the basis to demonstrate how you use Literate Programming". Knuth actually tackles the latter, McIlroy's rebuttal is just the former, so doesn't actually serve as much of an argument about anything.
I didn't mean to imply there was anything underhanded about McIlroy's critique. Just that, because McIlroy's code isn't trying to solve the same problem, it shouldn't be taken as a rebuttal to Knuth's point, which is what, apparently, many people believe it to be.
There's lots of interesting facets here. Doug's solution requires a working Unix system. You might not have that in the problem space.
On the other hand, his solution shows that by assuming some nice things about your data, things which Pascal does not allow, you can do some useful things with terse programming. This is reminiscent of the array programming model of APL or J. I haven't written a solution but I'd be surprised if it is more than about 15 characters.
In the shell case, the terse program is like a Chinese classic. Written in its own jargon requiring basically a prior understanding of the program to read it.
> In the shell case, the terse program is like a Chinese classic. Written in its own jargon requiring basically a prior understanding of the program to read it.
However, if Knuth had wanted to be terse, he could have been: he could have written a Pascal program that uses his library routines without showing them! In effect, Doug's Unix solution amounts to saying: "not only must you use pre-written libraries in order to be terse, those libraries are not even plain routines but are packaged as whole executables, and to use those executables you have to run them on exactly this operating system, using exactly this specific shell, all of it the property of AT&T (at that time Linux didn't exist, and who had the rights to use what wasn't clear, unless you'd bought something expensive), just to be able to call these library routines."
So it can't be considered a serious "critique." At that time, under these circumstances, it was just an unfair ad.
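For reference, McIlroy's pipeline is usually quoted along these lines (reproduced from memory with modern tr syntax, so treat it as a sketch rather than his exact text; $1 is the number of words to report):

    tr -cs A-Za-z '\n' |    # turn every run of non-letters into a single newline
    tr A-Z a-z |            # fold to lower case
    sort |                  # bring identical words together
    uniq -c |               # count each distinct word
    sort -rn |              # most frequent first
    sed ${1}q               # quit after the top $1 lines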
From that:
> I can't help wondering, though, why he didn't use head -${1} in the last line. It seems more natural than sed. Is it possible that head hadn't been written yet?
It's possible that it wasn't written yet. It's also possible that it did exist but he saw no reason to start using it, since "sed Nq" continued to work fine. Heck, it's even one less character to type than "head -N"!
BTW, apparently "head -N" is considered to be obsolete syntax. The man page for GNU head doesn't even mention it. It just gives this:
    -c, --bytes=[-]K        print the first K bytes of each file; with the leading '-', print all but the last K bytes of each file
    -n, --lines=[-]K        print the first K lines instead of the first 10; with the leading '-', print all but the last K lines of each file
    -q, --quiet, --silent   never print headers giving file names
    -v, --verbose           always print headers giving file names
        --help              display this help and exit
        --version           output version information and exit
The "info" documentation for GNU head does mention the "-N" form, and says it is an obsolete syntax supported for compatibility, and says if you must run on hosts that don't support the "-n N" form it is simpler to avoid "head" and use "sed Nq" instead.
On my Mac, with BSD head, the man page gives only this form:
I'm not sure how it's a fair comparison between ready-made Unix tools and a written-from-scratch solution. I mean, it's like somebody asked me to implement a C compiler, and when I produced a huge pile of source code they told me, "Idiot, I can do all that by just typing 'gcc'! Muah-hah-ha!"
In my second compiler design course, we were told we could use any language. I (jokingly) asked whether we could choose shell, and consider a C compiler a feature of the language. I was (in the same spirit) informed that it would be cheating.
It's a fair comparison in a real world context. If you were to solve the problem at your job, would you start writing the program from scratch, or would you use ready-made tools made for the job?
It's not Knuth's job to write efficient word-sorting programs. That job is long done, and now it's nobody's job (unless one can do better than existing tools, but that wasn't Knuth's goal either). His job was to demonstrate how you do literate programming, which is completely different. If I needed to demo how to program a certain algorithm so that I could explain it to students, I'd write one kind of code; if I needed the absolutely fastest possible tool, I'd write another (probably in C or even some asm). Different tasks - different approaches.
I was around when that encounter happened. I still recall my feelings when I read it.
I don't recall feeling Knuth was framed.
Although McIlroy's solution didn't really address the question Knuth was asked, it was cute and demonstrated the power of this newfangled thing called "Unix Philosophy", which at the time needed some oxygen. However the words McIlroy accompanied it with were simply unnecessary, almost childish. I recall cringing when I read them.
I do remember wondering if Bentley had done the right thing in publishing McIlroy's raw comments, but decided the main attraction of his column was a refreshing honesty. He presented his pearls without any of the usual breathless hype or artificial conflict a journalist might add to spice up interest. Having set up this experiment, it would not have been "Programming Pearls" if he didn't report exactly what happened, as it happened.
Bentley's style wasn't to constrain or steer things toward a particular outcome he wanted. That meant McIlroy had lots of rope, and if McIlroy used it to hang himself, that was McIlroy's problem.
--
The other day I was talking with a friend about structured editing, and literate programming came up. LP was one of Donald Knuth's ideas: structure programs as readable documents rather than just as instructions for the machine. He was interested in it, I was cautiously skeptical. We both knew the famous story about it:
https://en.wikipedia.org/wiki/Literate_programming
"In 1986, Jon Bentley asked Knuth to demonstrate the concept of literate programming by writing a program in WEB. Knuth came up with an 8-pages long monolithic listing that was published together with a critique by Douglas McIlroy of Bell Labs. McIlroy praised intricacy of Knuth's solution, his choice of a data structure (Frank M. Liang's hash trie), but noted that more practical, much faster to implement, debug and modify solution of the problem takes only six lines of shell script by reusing standard Unix utilities. McIlroy concluded:
>>Knuth has shown us here how to program intelligibly, but not wisely. I buy the discipline. I do not buy the result. He has fashioned a sort of industrial-strength Faberge egg—intricate, wonderfully worked, refined beyond all ordinary desires, a museum piece from the start."
The program was to print out the top K most-used words in a text.
(and so it goes on...)
---
To clarify, it was an email newsletter. The YOW! thing had just gotten published so I got that out of the way before diving into the meat of the newsletter post, which was about LP.
Good analysis. Didn't Knuth famously say that his job was to get to the bottom of things, not to stay on top of things?
If I were writing a one-off program to do this once for a paid project, the shell script is absolutely the way I would go about it.
If I were writing it as a computer scientist, accustomed to teaching students how to find optimal solutions, something like the Knuth program is absolutely the way I would go about it (although in 2020, I would likely use C, not Pascal). I also would likely roll my own approach if I was writing it for the kind of target machine I work on now - a very small one (an embedded microcontroller). And Knuth made his bones when computers were (physically) huge but (in memory and speed) tiny.
The shell script uses utilities that are written in C, probably totaling way more lines of system programming language code than Knuth's Pascal solution.
It's a pretty unfair comparison, since the literate programming solution was not presented in order to show a code-golfed word histogram, but how to annotate code.
The hash trie structure was included for that purpose: to show how you use literate programming to annotate such a thing.
> First of all, we found that Literate Programming as conceived by Knuth not just “text, code, text, code”.
Unfortunately, it's something worse! It's a system in which you chop up a program into arbitrary sections and give those sections macro-like names. The sections are presented in some arbitrary order and hierarchically combined using those macro-like names.
Like, say:
The program's overall structure is to accept input, perform some processing, and produce output:
accept_input
do_processing
produce_output
The divisions don't follow function boundaries. A specific loop inside a specific function might have its own section, and the variable declarations above their own.
It's basically horrible. You can't see the code all in one piece when editing it. It won't play well with your existing tools. The generated result won't even have proper indentation.
Web and CWeb are programs geared toward someone who is an academic, mainly interested in writing a paper that revolves around a small amount of code.
What you're really writing is a paper, such that both the typeset paper (with nicely typeset code), and accompanying code pops out from the same source. The raison d'etre is that paper though.
You would be suicidal to use this to write an actual application that doesn't need to be detailed in an academic paper.
Knuth did somewhat take it to those extremes, but working alone.
If we look at TeX, the bulk of it consists of a single tex.web file, which is simultaneously not such a large volume of work (less than 25,000 lines of code, documentation and data; less than a megabyte) ... yet too large to be all in a single file.
I love this comment, and probably wouldn't even disagree with "It's basically horrible" :), but just to point out a few things:
- "The generated result won't even have proper indentation." Actually, what you see in the typeset output generated by weave/cweave is what Knuth considers proper indentation, and he has written paeans multiple times to Myrtle Kellington (executive editor for ACM publications) who developed that style, etc. There are a lot of lines in WEAVE devoted to getting the indentation exactly so. I personally find it hard to read as well (as I imagine do most programmers), and both McIlroy in his review ("Second, small assignment statements are grouped several to the line with no particularly clear rationale. This convention saves space; but the groupings impose a false and distracting phrasing...") and Harold Thimbleby in his Cweb article mention these departures from what C (etc) programmers are used to.
- "Web and CWeb are programs geared toward someone who is an academic, mainly interested in writing a paper that revolves around a small amount of code." I would disagree: yes they are geared towards someone who is an academic -- specifically Knuth -- but from everything he's said, his love of LP is about the programs themselves, not papers about them. (Look at https://cs.stanford.edu/~knuth/programs.html for some of his programs; he says he writes several programs a week and keeps most of them to himself; the ones published online before Sep 2017 I had typeset here: https://github.com/shreevatsa/knuth-literate-programs -- I ought to clean up and refresh that stuff.)
- Finally, WEB arose out of certain specific constraints. After he had written the original version of TeX in SAIL for his personal use, it turned out there was widespread demand for it, and people at other places had started porting it into their local systems/languages (with risk of incompatible/irreproducible implementations). For this he decided to embark on a two-year rewrite into the language that was available on the largest number of university computer systems: Pascal. This language had been designed primarily for teaching, and at this time there wasn't even a Pascal standard by that name -- every compiler did its own thing. So he was targeting the "common denominator" of Pascal compilers, which meant no separate compilation units to be linked in; everything had to be in a single file, at least as seen by the compiler. (In fact his TeX78 in SAIL had been written as several separate files in a more conventional (to us) style.) And yeah, the fact that he had been requested to eventually publish the source code of TeX (which he did, Volume B of Computers and Typesetting) played a part.
Thank you! I can't believe how far down I had to scroll to find this. Having worked in an LP codebase, it's by far the worst workflow I've used. The academic paper angle makes a lot of sense.
I wish we could just put literate programming to bed as a practical solution. However, on the bright side, we may have gotten Wolfram notebooks and Jupyter notebooks because of that. Again, you wouldn't/couldn't write a decent program with them. But they're excellent for (data) analysis, prototyping, and you can end up with a decent-ish document.
I like the idea that code should look more and more like human language. I think that programming languages have gotten much more human-friendly nowadays and this trend will continue. I have created my own human-like programming language prototype to explore these ideas and have implemented TodoMVC with it: https://github.com/tautvilas/lingu/blob/master/examples/todo...
You cannot compare Knuth's software with a script that uses precompiled tools that are themselves thousands of lines long.
If you want to compare the two solutions, use Kolmogorov complexity: compress each program together with any tools/compilers it uses, and only then compare the sizes. I bet Knuth's solution has an order of magnitude less complexity.
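A crude way to actually run that experiment from a shell, assuming xz is installed; the file names are hypothetical placeholders, and compressed size is only a rough stand-in for Kolmogorov complexity:

    # Knuth's side: the literate source alone
    xz -9c commonwords.web | wc -c

    # McIlroy's side: the script plus every binary it invokes
    cat commonwords.sh "$(command -v tr)" "$(command -v sort)" \
        "$(command -v uniq)" "$(command -v sed)" | xz -9c | wc -c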
These days, though, you can have it both ways with a Jupyter notebook or something similar. It's literate, in that you can keep thoughts and discussion in markdown cells, and you can neatly combine computations in computation cells.
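For example, the code cells of a notebook can be "tangled" out into a plain script with nbconvert (assuming Jupyter is installed; the notebook name is just a placeholder):

    jupyter nbconvert --to script analysis.ipynb    # writes analysis.py next to the notebook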
Unix shell scripting is an obtuse practice that is really hard to discover. That short shell script is built on hours of hard-learned Unix experience that would probably run to at least 8 pages if explained thoroughly.
I will probably be downvoted for questioning the legends, but... I don't think the poster child of Literate Programming - TeX itself - really has that readable source code.
I think the idea is that if you want to make changes to an existing program, you get up from your desk, go look at your shelves and pull out the sheaf of paper for that program, read/study it, think about what changes are needed, write down the code changes, then go to the computer and type it up -- only in the last step you're looking at the .web file but only for mechanical transcription from paper to computer.
(This is not a joke. Knuth still writes not just his papers/books but even his programs by hand on paper first. And this how TeX was written, according to the person other than Knuth who should know best: https://news.ycombinator.com/item?id=10172924)
Maybe people should take into account that Knuth is a mathematician and computer scientist, not a software engineer.
His constraint is correctness under all circumstances; an engineering constraint is about efficiency, which restricts correctness to a subset of the potential uses.
So from his perspective, the problem was something to be solved, not something to be implemented. As such, describing the problem and the solution is much more a literate requirement, being clear about it to the reader, than a computing requirement, being clear to the computer.
Personally, I think most software "engineering" is a crock, built on false assumptions and invalid data, subject to fads like "Agile" (or CMMI or SPICE or RUP or...).
The software industry would be better focussed on architecture, not the naive patterns of the GoF, but on fundamentals like types and data flows, Nouns vs Verbs and taking the science and making it practical for use in day to day development by incorporation into the tools used.
If anyone managed to catch it when it wasn't down, care to share a screenshot of the article? I seem to be getting some Heroku error now.
> An error occurred in the application and your page could not be served. If you are the application owner, check your logs for details. You can do this from the Heroku CLI with the command
I often recommend reading this essay. The conception of LP described (and embodied) is only tenuously related to modern systems claiming to be LP, and what's presented is really interesting.
That said, I still find myself unsure how it translates into practice. Does anyone have experience working on a system described this way? In particular, I wonder what refactors feel like, and how general problems around stale documentation apply (or fail to).
I program mainly in R and I always use RMarkdown. I write extensively about question definition, notes, references, exploration, and different directions in RMarkdown. In the end, if I ever need a script version, I have a utility function that picks out the code chunks and outputs them as a script.
This serves as very good documentation and is much better than code comments.
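(The utility function itself isn't shown above, but knitr's purl() does this kind of chunk extraction; a one-liner from the shell, with placeholder file names:)

    Rscript -e 'knitr::purl("analysis.Rmd", output = "analysis.R")'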
Comparing Knuth's LP demo to a 6 line shell script is comparing apples to oranges. There's no language that's going to be able to do that job in as few lines as that shell script, but that doesn't mean all languages are useless. They're both interesting demonstrations, but it's a useless comparison.
I haven't heard of the leo editor. I have used org mode in Emacs for little things. I also came across a reimplementation of the tangling functionality: https://github.com/thblt/org-babel-tangle.py
It's worth playing with: it's primarily designed for writing literate Python, and I greatly enjoyed working with it on a project as an experiment. I shuttle between neovim, emacs and leo, and dream of an editor that would somehow bring together all of their benefits. Oh, and maybe some smalltalk too!
The proposed example problem that would be difficult to solve on the Unix command line can be written as a pipeline of just nine commands. See https://www.spinellis.gr/blog/20200225/
Aw. Well, it was worth a shot; that kind of mild homonym abuse is the kind of direction these sorts of titles usually take. Fortunately the real story is more interesting.
The unix command solution is likely orders of magnitude slower than what Knuth wrote, which, given the computing power of the time, should have seemed fairly relevant.
I've seen similar defenses of Knuth over the years, but I'm still inclined to view this McIlroy's way. I think focusing on the shell script is a canard, the point is he picked what seemed like the right tool for the job. Isn't that what commenters are saying here all the time?
Anyway, perhaps I just suffer from a failure of imagination, but I can't see why the "Blah"s interspersed with `foo`s and `bar`s is meant to be revelatory.
I think it's meant to promote looking at how a literate programming system is mangling program output. It's way more complicated and powerful than just "inverted comments" (like e.g. CoffeeScript did in recent memory). Composite parts of the final program can be presented out of order, sent to different output files, etc.
For something slightly more modern than this sample program or TeX, there's "A Retargetable C Compiler: Design and Implementation", which presents the code of lcc, a quite conformant C compiler (which, IIRC, still enjoys some popularity on Windows).
Sadly the strengths of literate programming are rarely useful for mere mortal programmers -- we're not often entrenched in deriving algorithms and/or presenting something in a didactic manner.
> The actual paper paints LP in a much better light than we thought it did....The actual content is effectively his stream-of-consciousness notes about what he’s doing. He discusses why he makes the choices he does and what he’s thinking about as primary concerns. It’s easy to skim or deep-dive the code.
Now imagine if these stream-of-consciousness comments getting in the way of the code were written by your colleagues, rather than Donald Knuth.
My point is that people should be encouraged to keep code comments to a minimum, and always strive to first rewrite the code so that it is clear (well-chosen variable names, etc). LP does the opposite: it encourages verbosity, and tangential discussion.
My point is that literate programming isn't about comments. Comments are asides, they aren't the core focus of what's being read, the code is. Literate programming is about documentation, which should be kept concise, but not necessarily minimal.
As to whether this leads to tangents or not, that's up to the authors to determine how to organize. Appendices or rationale sections are a great way to separate the core focus from what others may see as extraneous.
Yes, sorry, I did understand that that was your point. I just feel very strongly about the tendency, for people (especially new programmers) to be told that it's important to comment code heavily.
Yes I've never seen a good solution for documentation in the teams I've worked in on larger systems/codebases. I'm not really a believer in documentation that evolves in a separate git repo (or god forbid, in Confluence) for the obvious reason that it gets even more stale than documentation that evolves in the same git repo. So that would put me in favour of your position here.
But, how can documentation (with appendices or rationale sections) be interleaved with code without destroying the ability to read the code? I've tried RWeave in R a long time ago, and I've even collaborated and published on a literate programming tool, and I have always come back to the conclusion that I want to read code with the minimum of intervening prose.
So my position is documentation should absolutely be written and maintained, it should be in the same git repo as the code, it should be in a separate file (not in comments or docstrings or automagically weaved sections using some special syntax), it should be written tersely and without much personality, and code reviewers should request documentation updates if a PR renders some documentation stale or requires new documentation.
So I'm pro documentation, anti LP, and pro minimal code commenting.