jez's comments | Hacker News

I’m not sure what this URL is. The canonical source for the Tufte CSS project is hosted here:

https://edwardtufte.github.io/tufte-css/


The handwriting in some of these snippets, while sometimes difficult to read for one reason or another, is nonetheless beautiful: did everyone who wrote have such great handwriting back then?

I'm looking at the piece in the Instagram post linked by the page, which begins, "honor of holding in their service". The lines are so straight, the letters are so uniform!


As someone with terrible handwriting but decent cursive, I think cursive provides a better structure for achieving clean penmanship than non-cursive writing. My theory is that the consistency of cursive's soft, flowing loops, rather than a mix of abrupt angles and disconnected lines, helps create a more uniform result.

I also remember teachers telling us, when writing cursive, to seldom lift the pen from the page. I think keeping the pen on the page for most of the writing process encourages a smoother, more natural flow, reducing the chance of jerky, uneven strokes.


Handwriting is a skill; you get better with practice!

A lot of bad handwriting stems from using it to write down things quickly (see: https://imgur.com/doctors-strike-5ANma ).

If you instead focus on doing slow calligraphy, your handwriting can improve rapidly.


Widespread literacy is an extremely recent phenomenon.

I highly doubt most people could write that well.


The US was an extreme outlier, with a high rate of literacy compared to almost everywhere else during the 1600s–1800s. Today is a different story: Massachusetts had a higher literacy rate when education was made compulsory in the 19th century than it does currently, which is kind of astounding.

> Sheldon Richman quotes data showing that from 1650 to 1795, American male literacy climbed from 60 to 90 percent. Between 1800 and 1840 literacy in the North rose from 75 percent to between 91 and 97 percent. In the South the rate grew from about 55 percent to 81 percent. Richman also quotes evidence indicating that literacy in Massachusetts was 98 percent on the eve of legislated compulsion and is about 91 percent today.

https://www.independent.org/publications/article.asp?id=307


I'm happy to be proven wrong.

Any reason for this being an American thing?

I'd still assume fine penmanship was a mark of the upper class, though.


It never occurred to me that ffmpeg's license had these restrictions on use (I guess that should have been obvious in retrospect, since it's LGPL-licensed).

The ffmpeg docs have a straightforward checklist of things you have to do to link against ffmpeg while staying compliant with the license, which I found interesting:

https://ffmpeg.org/legal.html


Calling it libTerminal might be confusing. There are other terminal emulation libraries—for example libvterm[1], notably used to power the embedded terminal in Neovim.[2]

[1] https://www.leonerd.org.uk/code/libvterm/

[2] https://neovim.io/doc/user/terminal.html


> What do you do if you need to look up the definition/implementation of some function which is in some other file?

Code search (ripgrep, GitHub search, Sourcegraph, git grep, or even just plain grep). You can use VS Code search but I prefer CLI tools so I can filter the output of my search with more searches—VS Code makes it hard to post-process/filter project-wide search results.

Filename search (fzf, fd, or just find).
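As a sketch of the kind of filtering I mean, here is a self-contained demo (the temp directory, file names, and the `fetch_user` method are all made up for illustration; in a real repo you'd point grep or rg at your checkout instead):

```shell
# Build a tiny fake "repo" so the pipeline below actually runs.
dir=$(mktemp -d)
printf 'def fetch_user(id)\n' > "$dir/app.rb"
printf 'fetch_user(1)\n'      > "$dir/app_test.rb"

# Project-wide search, then narrow the hits with a second grep:
# drop matches that live in test files.
hits=$(grep -rn 'fetch_user' "$dir" | grep -v '_test\.rb')
echo "$hits"
```

Each extra pipe stage is another refinement of the result set, which is exactly the step that's awkward to do from an IDE's search panel.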

There are cases where LSP-powered Go to Def is faster, but there are also cases where it's less accurate. I'm talking about things like:

- Untyped code in a gradually typed language

- References to a method name in a YAML file, pulled out with reflection

- A reference to the method in a comment on some other method, where that other method is actually the one I'm looking for.

- A one-to-one mapping of method names with some enum somewhere else, but lowercase in one spot, and all caps in the other spot.

So yes, sometimes Go to Def will be faster, but you lose out on so many other possible references.

Another case: repos so large that the editor tooling is slow. Or repos I'm brand new to, where I just need to check why some code in a third-party library is going haywire.

Grep (code search) is just so powerful.

I have a post about how to do large-scale code migrations/codemods. Everyone assumes the post is going to be about how to use high-powered, language-aware AST-based static analysis tooling. But half the post is talking about the unreasonable effectiveness of regular expressions.
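To give a flavor of what I mean, here's a minimal, self-contained sketch of a regex-based codemod (the temp tree, file, and `old_name`/`new_name` method names are invented for the example; a real codemod would `Dir.glob` over the repo):

```ruby
require "tmpdir"

Dir.mktmpdir do |dir|
  path = File.join(dir, "app.rb")
  File.write(path, "def old_name\nend\n\nold_name # call site, plus a mention of old_name in a comment\n")

  src = File.read(path)
  # \b keeps us from rewriting e.g. `cold_name`. Unlike an AST-based tool,
  # this also rewrites the mention inside the comment, which is often
  # exactly what you want.
  rewritten = src.gsub(/\bold_name\b/, "new_name")
  File.write(path, rewritten) if rewritten != src

  puts File.read(path)
end
```

The whole "analysis" is one `gsub` call, which is the point: for a surprising fraction of migrations, that's all you need.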


Along similar lines, it would be nice if there were a way to predeclare instance variables, so that there was a canonical place to put type annotations for them. Ruby type checkers currently each invent their own way of declaring the types of instance variables.

If Ruby supported something like

    class A
      def @instance_variable
      def self.@singleton_class_instance_variable
      def @@class_variable
    end
as a way to pre-declare the object shape at class load time, it would be an obvious place to also let type checkers declare the types of these instance variables.

(Not saying that this specific syntax is ideal, but rather just that having some syntax for declaring instance variables at load time, not method run time, would solve both the "predetermined object shape" and "predetermined ivar types" problems.)
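To illustrate the "shape at method run time" problem with today's Ruby (class and method names invented for the example): an instance variable only comes into existence when an assignment actually executes, so an object's shape depends on which methods have run.

```ruby
class Widget
  def enable
    @enabled = true
  end
end

w = Widget.new
p w.instance_variables  # => [] -- no @enabled yet; nothing has assigned it
w.enable
p w.instance_variables  # => [:@enabled] -- the shape changed at method run time
```

A load-time declaration syntax would pin the shape down before any method runs, and give type checkers a single place to hang annotations.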


For those curious, this article is a good introduction to the topic:

https://railsatscale.com/2023-10-24-memoization-pattern-and-...


The Python version and this Ruby version are not equivalent.

In Ruby, default parameter expressions are re-evaluated on every call to the method, which means the default value is not shared across calls.

In Python, default parameters are evaluated once, at function definition time, and shared across all calls, which allows mutations to them to accumulate.

This becomes obvious if we change both versions to print whenever they compute something that is not yet cached. For back-to-back calls to `fib(10)`, the Python version prints only during the first call; the Ruby version must recompute every value from 2 to 10 again.
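A sketch of the Ruby side (this assumes the memoized-fib-with-default-hash pattern under discussion; the `puts` is just instrumentation):

```ruby
# `memo = {}` is re-evaluated on every top-level call, so nothing
# survives between back-to-back fib(10) calls.
def fib(n, memo = {})
  return memo[n] if memo.key?(n)
  puts "computing fib(#{n})"
  memo[n] = n < 2 ? n : fib(n - 1, memo) + fib(n - 2, memo)
end

fib(10)
fib(10) # prints "computing ..." all over again: the memo was a fresh Hash
```

In Python, `def fib(n, memo={})` evaluates the `{}` once at definition time, so the second `fib(10)` would find every value already cached and print nothing.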


I recently read a lot about the solutions people have come up with for putting a class hierarchy into a relational DB, because most of it also applies when the domain's data model makes heavy use of ad hoc union types (like TypeScript's union types).

If you only skimmed the StackOverflow post or were about to bounce because “my models don’t form complex inheritance hierarchies,” ask whether you’ve tried to serialize something with type `A | B` to the database and been frustrated at there being no good solution. Then (re-)read the post in that framing and see if the solutions proposed and tradeoffs discussed look more relevant to you.

It’s made me realize that there’s probably a lot more collected wisdom locked up in writings from the 90’s and early 00’s that I disregard because it’s so heavily class inheritance focused. Alternatively: free blog post ideas by re-contextualizing this content in contemporary languages and technologies!


I might be biased because I do what you might call "data engineering", but my first thought was "there are three obvious ways to do this", and the accepted answer lists exactly those three ways. You'll find that none of them work very well when you have 100 classes rather than 2, because the "correct" answer is to model your data to be relational in the first place. Yes, sometimes it's unavoidable because the structure of the data is just like that, but more often than not someone somewhere is doing something very wrong.

> realize that there’s probably a lot more collected wisdom locked up in writings from the 90’s and early 00’s

Yes and no; it's the "design patterns" discussion all over again. Some of it was genuinely interesting, but most of it was just incidental complexity caused by early Java and the absolute horror of pre-standard C++. When you filter that out so that only the timeless foundational concepts remain, you'll find that most of it had already been said before.


I was also genuinely shocked to see this article make it so far up in HN.


My main wish for Postgres for a while has been support for ADTs (algebraic data types). I know it's probably incompatible with optimizing storage, but it would solve so many problems I have when designing DBs.


The README of MarkItDown mentions "indexing and text analysis" as the two motivating features, whereas Pandoc is more interested in document preparation via conversion that maintains rich text formatting.

Since my personal use leans toward the latter, I'm hesitant to believe this tool will work better for me, but others may have different priorities.


MarkItDown feels like running `strings`: the output is great for text extraction and processing, not for reading by humans.


