Is it meant to do something? It doesn't follow the same cause/effect syntax as the tutorial, and plopping that welcome block into https://playground.nova-lang.net/ doesn't seem to do anything. I assume it's the note taking part of the syntax?
It's not necessarily meant to do anything on its own. The text there is the same cause/effect syntax, just with slightly different delimiters. If you were to include, after the code, the fact it needs in order for the rule to execute, like: "|| - Welcome to Nova! -", then the rule would execute.
OH! Ok that makes more sense. `:` from the tutorial is `-` or `~`, because it's the first char after the pipe.
I do lose track after that, though. In my brain, it looks like the entire second part after the second pipe character should be just one long fact assigned to the stack between tildes, but I think it's actually adding each of the bullet-prefixed lines to it.
This is a well-referenced essay, drawing on the writing of David Parnas [1], Peter Naur [2], and Zach Tellman [3].
As software developers we’re intimately familiar with these ideas. But the industry still treats them as “folk knowledge”, despite decades of academic work and systemization attempts like the original Agile.
We really need more connective work, relating the theoretical ideas to the observed behavior of real-life software projects, and to the subsequent damage and dysfunction. I liked this essay because it scratches that itch for me. But we need this work to go beyond personal blogs/newsletters/dev.to articles. It needs to be recognized & accepted as formal “scientific” knowledge, and to be seen and grokked by industry and corporate leadership.
I suspect systemization would require quantifying some of the variables involved, which include things like
- The size and complexity of the code base (for some definition of size and complexity)
- The quality of the code and docs (for some definition of quality)
- The skill and experience of the people involved
In four years in a big tech role, my team twice inherited and had to modify a code base without any input from the original authors. One was a quagmire, the other was a resounding success:
- The first was a media player control that we had to update to support a new COM interface and have a new UI. We decided that it was too complicated, and nobody understood it, so we’d reimplement it from scratch. One year later it mostly worked, but still had bugs and performance issues that the original version didn’t have. In hindsight, I suspect it would’ve been cheaper to try to revive the original code base.
- The second was a music database for an app running on a mobile device. Our current one was based on the version of SQL available, but some principal engineers on another team suggested replacing it with a custom in-memory database that already shipped in another device. We argued that the original authors had left and the code was unwieldy; they argued that “it’s just code, we can read it” and its performance was known to be better. They did the work to revive it and successfully integrated it into our app. Wild success.
The flip side of “it’s impossible to revive a dead system” is “don’t rewrite a working system from scratch”. Absent more research, the only way to correctly guess which situation you’re actually in is to have tons of experience.
These situations probably also depend on the people involved. If it weren't those particular principal engineers on the project, the attempt to revive the in-memory database might not have succeeded.
I’m a violinist (amateur but play regularly). When I have an important note, which is held for a while and needs vibrato, I frequently decide to shift my left hand position so that my middle finger is responsible, rather than the index finger. It feels stronger, easier to nail the intonation (pitch) with precision, and freer to perform the desired type of vibrato. (String players do vibrato by wiggling the left hand finger, which affects the pitch and overtones / oscillation modes of the string.) In fact, I tend to avoid using the index finger on notes that require vibrato.
That preference might be explained here, by the precision/strength combination. I tried holding a hammer as described in the author’s hammer exercise, and there’s similarity, though it requires much more weight-holding. The left hand doesn’t hold the weight of the violin (consider a cello or a guitar with shoulder strap), but a little grip strength is required to securely hold down the string, especially with vibrato.
Overall, fascinating article. I feel quite motivated to read more on hand anatomy and biomechanics.
Similar on guitar with bends I think. I feel like using the index finger is very awkward, I use the middle finger or ring finger (when what I'm playing allows it) rather than the index finger. Typically with the next finger behind to guide and provide stability/strength.
The proprioception on the index finger while bending on guitar is worse for me than locking the ring finger and using the wrist to control the magnitude of the bend. Useless backfill-ass finger.
Agreed. I recall being taught in college physics labs: there is no such thing as “human error”. Instead, think about the causes and mechanisms of each source of error, which helps with both quantifying and mitigating them.
Same energy here. “Be more careful” is extraordinarily hand-wavy for a profession that calls itself engineering.
A while back I prototyped (very roughly) an auditory equivalent to “syntax highlighting”, using ambient tones and white noise, rather than discrete beeps/sound effects. [1]
I’m actually revisiting this project right now! I’m reimplementing it in Rust and also exploring different ways to communicate parser state and other contextual information through sound.
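As a thought experiment (not the actual prototype, whose details I don't know), a sonification like this could map token categories to continuous tones layered with quiet white noise. Everything here is hypothetical: the category names, frequencies, and mixing levels are made up for illustration.

```python
import numpy as np

RATE = 44100  # samples per second

# Hypothetical mapping from token category to an ambient base frequency (Hz).
FREQS = {"keyword": 220.0, "string": 330.0, "comment": 110.0}

def tone(freq, seconds=0.5, rate=RATE):
    """A quiet sustained sine tone, the 'ambient' part of the highlighting."""
    t = np.arange(int(seconds * rate)) / rate
    return 0.3 * np.sin(2 * np.pi * freq * t)

def noise(seconds=0.5, rate=RATE):
    """Low-level white noise mixed under the tone (seeded for reproducibility)."""
    return 0.05 * np.random.default_rng(0).normal(size=int(seconds * rate))

# "Highlight" a token stream by concatenating one tone-plus-noise segment per token.
tokens = ["keyword", "string", "comment"]
audio = np.concatenate([tone(FREQS[k]) + noise() for k in tokens])
```

The resulting buffer could be written to a WAV file or streamed; the interesting design questions (crossfading between tokens, encoding nesting depth in timbre) are left out of this sketch.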
I’m glad I saw your comment!! I’ve experienced this exact phenomenon when playing Minecraft while listening to an audiobook or podcast. Returning to that area will immediately remind me of the topic or narrative that I heard. Presumably it’s related to the “memory palace” technique, but otherwise, I can’t make heads or tails of it. It’s immediate, as if the location is a hash key mapping to the information. Or as if they’re stored in literally the same place, and fetching one implies fetching the other.
Similarly to you and the article’s author, this doesn’t happen with whatever thoughts I may think while at a location. But in that situation the brain is engaged in generating those thoughts, and not with the task of learning new information. So I don’t find it surprising that it works differently.
I haven’t thought about it in relation to “consciousness” yet. Will have to chew on this article a bit.
The museum was also a generous community space; I remember attending a Seattle Indies game jam and other events before the pandemic. It was very special to be surrounded by reminders of early-computing exploratory spirit.
*If* such a thing could be done under typical corporate incentives/behavior, then I suspect the “high impact” part would need to be scrapped. Because when something is important to a corporation, it turns its eyes toward that thing (so to speak), which disrupts the other properties.
Or, “high impact” could be spread over the long term. So, unknown-payoff R&D. It would need to be an “invest and ignore” strategy and require a lot of institutional trust.
Question for ML/AI people: is it still common to use the term “overfitting” for these cases, where a model is overtrained on one thing, to the detriment of another? Or is that term only used for literal curve fitting?
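For concreteness, here's the literal curve-fitting sense of the term, as a minimal numpy sketch (my own illustration, not from the article): a high-degree polynomial matches noisy training points closely but generalizes worse than a plain line.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 20)
y_train = 2 * x_train + rng.normal(0, 0.2, x_train.size)  # noisy samples of y = 2x
x_test = np.linspace(0.025, 0.975, 19)                    # held-out points
y_test = 2 * x_test                                        # noise-free ground truth

def mse(degree):
    """Test-set mean squared error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x_train, y_train, degree)
    return float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))

# The degree-15 fit chases the training noise, so its held-out error is larger.
print(mse(1), mse(15))
```

Whether that word also properly covers "overtrained on one task, worse at another" is exactly the terminology question above.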