
I do this so religiously that when I'm setting up a new system I am always surprised that rich text is the default.

TextEdit is pretty great.


> I am always surprised that rich text is the default.

It's because RTF support was an early headline feature for NeXTSTEP, and TextEdit was meant as much to be an API demo for the NS/OPENSTEP/Cocoa† APIs as to be a usable application.

Peep the NeXT 0.9 release notes: https://vtda.org/docs/computing/NeXT/NeXT%200.9-1.0%20Releas...

“Built-in RTF Support: Rich Text Format (RTF) is a standard document interchange format specified by Microsoft Corp. In addition to opening and saving documents in its own internal format, the 0.9 version of WriteNow supports opening and saving documents in RTF format. Using this format, WriteNow on the NeXT Computer can exchange documents with Macintosh or IBM PC programs like WriteNow or Microsoft Word. RTF documents retain most of their font and formatting information.”

And the NeXTSTEP 3.0 programming book which goes on and on and on about the `Text` object and how good their RTF support is: https://simson.net/ref/1993/NeXTSTEP3.0.pdf#G16.44605

https://developer.apple.com/library/archive/samplecode/TextE...


This vaguely reminds me of StyledEdit, the included text editor from BeOS / Haiku.

It supports basic text formatting - alignment, different fonts/sizes/colours - but these are stored as extended attributes in the file, while the "actual file" remains plain text.


Early releases of the dev tools even included the TextEdit source for you to learn from.

Same

I would love to better understand what you mean by "classify however it wants." Is the output structured?

Yeah, the output is JSON-structured, but I mean the entity value that is returned. A simple case is classifying the Brand of the ad. It might return any of "Ford", "Ford Motor Company", "Ford Trucks", "The Ford Motor Company", "Lincoln Ford" even on very similar ads. Rather than try to enhance the prompt with something like "always use 'Ford Motor Company' for every kind of Ford", I just accept whatever the value is. I have a dictionary that maps all brands back to a canonical brand on my end.

What are you using to build the dictionary? Particularly when it encounters something you've never seen before.

This is really interesting to me.


Continuing the brands example, by default I store all of the brands returned as is (in SQL). On occasion, I will manually come across different variations of a brand that I decide is better combined into a primary brand. All of the secondary brands get marked as relating to a primary brand. Then the next time a new ad gets tagged as a secondary brand, I know I can use the primary brand instead.

So in essence, the process is what I might call 'eventually modelled' (to borrow from the concept of eventual consistency). I use the LLM entities as is, and gradually conform them to my desired ontology as I discover the correct ontology over time.
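In code, the mapping step is not much more than a lookup with a pass-through default. A minimal sketch (Go; the function name is made up and the entries are just my Ford example from above):

    package main

    import "fmt"

    // canonical maps each secondary brand curated so far back to its
    // primary brand. Anything unknown passes through unchanged and is
    // stored raw until someone folds it into a primary brand.
    var canonical = map[string]string{
        "Ford":                   "Ford Motor Company",
        "Ford Trucks":            "Ford Motor Company",
        "The Ford Motor Company": "Ford Motor Company",
    }

    func canonicalBrand(raw string) string {
        if primary, ok := canonical[raw]; ok {
            return primary
        }
        return raw // "eventually modelled": keep as-is until curated
    }

    func main() {
        fmt.Println(canonicalBrand("Ford Trucks")) // Ford Motor Company
    }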


Style opinions are borderline irrelevant without appropriate linters.


Go and use the Google BigQuery auto-formatter on a complex query with CASE and EXTRACT(YEAR FROM date), and you will have a totally different opinion.

How that auto-formatter indents borders on a hate crime. A thousand times better to indent manually.


I've even seen the BigQuery formatter change the behaviour of a query, by mixing a keyword from a comment into the real code.


Doesn't seem like it. Using the interactive board on the website I was able to produce a solution that only revealed numbers.

This is quite clever.


I think the idea is that the deletes would eventually be compacted, so it's ultimately half as much, but I digress.

The cost isn't that bad all things considered. Hot, durable and available data ain't that cheap, especially in the cloud. Self-hosting is within an order of magnitude.


I think ideally you could map retention of cold data onto the file objects themselves: using a key-space naming strategy and lifecycle rules, expire the data that is no longer needed, thus saving on storage costs (as much as possible, hopefully).
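A rough sketch of the key-naming half (Go; the prefix layout and retention tiers are assumptions on my part, not anyone's actual scheme). The point is that a lifecycle rule scoped to a prefix can then do the expiry with no per-object bookkeeping:

    package main

    import (
        "fmt"
        "time"
    )

    // objectKey buckets cold data under a tier- and date-stamped prefix,
    // e.g. "cold/90d/2024-06-01/segment-00042". A lifecycle rule scoped
    // to the "cold/90d/" prefix can expire everything under it after 90
    // days without tracking individual objects.
    func objectKey(tier string, created time.Time, name string) string {
        return fmt.Sprintf("cold/%s/%s/%s", tier, created.Format("2006-01-02"), name)
    }

    func main() {
        fmt.Println(objectKey("90d", time.Now(), "segment-00042"))
    }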


The Go compiler is already ridiculously fast. As far as I know, the garbage collector usually doesn't even activate for short-lived programs, and compilation is usually short-lived. Turning garbage collection off entirely doesn't have much of an impact on build times.
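You can poke at this claim yourself with a toy sketch (not the compiler; the allocation loop is just a stand-in for build work):

    package main

    import (
        "fmt"
        "os"
        "runtime"
        "runtime/debug"
    )

    func main() {
        // Run with NOGC=1 to disable collection outright; for a
        // short-lived program like this the timing barely changes.
        if os.Getenv("NOGC") != "" {
            debug.SetGCPercent(-1)
        }

        // Stand-in for short-lived, allocation-heavy work.
        sink := make([][]byte, 0, 4096)
        for i := 0; i < 4096; i++ {
            sink = append(sink, make([]byte, 1024))
        }
        _ = sink

        var stats runtime.MemStats
        runtime.ReadMemStats(&stats)
        fmt.Println("GC cycles:", stats.NumGC) // often 0 for short runs
    }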

What significant opportunities exist for performance with a Rust implementation that aren't possible in Go?


I've been out of the hardware game a minute but Qualcomm was a great partner for helping you ship products. Everything about them sucks, but they will actually send engineers to your office. They always took bug reports seriously and pretty much always delivered patches. Also they always had ample samples, both in terms of dev boards and software. I know of several products that basically shipped the sample code with minimal modifications.

If I were a company trying to ship V1 of our first product, I would hands down pick Qualcomm. MediaTek et al are great for when you know what you're doing with minimal handholding.

I absolutely hated working with them, but at least they were a vendor you could work with. Perhaps the cheaper vendors have upped their game here but I wouldn't know.


I heard that Qualcomm can be decent to work with - if you are in a company the size of Qualcomm, or can dangle "500000 units to ship" in front of them like a carrot.

But "decent" is Qualcomm at its absolute best. And Qualcomm at its worst?

I'd rather chew down broken glass than work with Qualcomm.


I can add a minimal anecdote. I got some support from a couple engineers on a telecom project, and it wasn't even that big of a thing, but they were more than decent to work with. I did say to one guy, "you guys are a lot cooler to work with than some of the stuff you see in the news" and matter-of-fact he was just like "oh, yeah that's legal"

My vision of them is that the engineering side can be great to deal with when they want to be (and in my personal experience, they want to be). But the other part of their business is: set the standard, and then enforce it.


To get to the engineers, you first need to get through the viper pit that is sales.

The only time I have seen this incredible feat accomplished was in a company large enough that they had a department dedicated to dealing with other large companies.


At least they're up front about it? When I think of a vendor, I think of sales taking your money and then you getting ghosted by support staff.


This is only true when the dependency structure is not already apparent. Almost all modern languages solve for this in their import statements and/or via their own package manager, at which point pushing everything up into Bazel is indeed redundant.

If anything this highlights the failure of languages solving for this themselves. I'm looking at you, C++.

It's no surprise Bazel is a hard sell for Rust, Go, Node, etc. because for those languages/ecosystems Bazel BUILD files are not the best tool to represent software architecture.


The problem is that anything that's _apparent_ and not _enforced_ will be messed up over time. Maybe not in a project with few people where everyone is an expert on how "things are supposed to be", but it will inevitably happen when you add more and more people.

And the whole point of the article is to say that import statements do actually _not_ solve this issue, because import statements are at the file level, not at the module level (whatever module means in your mind).

In any case, as I mentioned in passing in the article, other languages _do_ provide similar features to Bazel's build files, and I explicitly called out Rust as one of them. When you are defining crates and expressing dependencies via Cargo, you are _essentially doing the same_ as what I was describing in the article. Same with Go, if you are breaking your code apart into multiple modules.

But then we all know that there are some huge repos out there that are just "one module" and you can't make anything out of their internal structure. Hence you start breaking them apart into crates, Go modules, NPM packages, you name it, or... you know, add Bazel and build files. They are the same tool -- and that's why I didn't write Bazel in the title, because I meant "build files" more generically. I guess I needed to be clearer there.


> The problem is that anything that's _apparent_ and not _enforced_ will be messed up over time

We already have the tools to enforce these things in many mainstream languages.

Breaking things apart into crates/modules certainly makes sense sometimes, but other times it does not? If you have a monorepo, do you really need multiple modules? And if you don't, does that mean your architecture is difficult to understand? I don't think that tracks at all, so I don't really agree with where you're headed.

> But then we all know that there are some huge repos out there that are just "one module" and you can't make anything out of their internal structure.

There's always some shitty code out there, sure. But I don't like the suggestion that "one module" can't be coherent. It's orthogonal to the architecture. Not everything needs to be made generic and reusable.

> And the whole point of the article is to say that import statements do actually _not_ solve this issue, because import statements are at the file level, not at the module level (whatever module means in your mind).

This is not true for Go, for example. Import statements absolutely do solve this problem in Go. I rarely need to ever look at module files which are in some ways a byproduct of the import statements.


> This is not true for Go, for example. Import statements absolutely do solve this problem in Go. I rarely need to ever look at module files which are in some ways a byproduct of the import statements.

Go imports still work at the Go package level. If you have multiple .go source files in one package, you have the exact same issue I described for Java.

    .../pkg1/foo.go -> import .../pkg2
    .../pkg1/bar.go -> import .../pkg3
If I'm editing / reviewing a change to pkg1/foo.go, I cannot tell that pkg1 _already_ depends on pkg3. Can I?


go list can tell you that pkg1 imports pkg2.

At work, go list was too slow and depended on a git checkout, so we wrote our own import-graph parser using the Go standard library parser, operating on byte slices of the files read directly from git. It's speed-of-light fast, and we can compute Go import graphs in parallel from multiple commits to determine what has changed in the graph, so we can reduce the scope of what is tested.
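For anyone curious, the core of that trick is tiny: parser.ImportsOnly stops parsing right after the import declarations, so you never pay for function bodies. A sketch of the idea (not their actual code; the slow one-off equivalent is `go list -f '{{ .Imports }}' ./pkg1`):

    package main

    import (
        "fmt"
        "go/parser"
        "go/token"
        "strings"
    )

    // importsOf parses a single Go source file held in memory (e.g. a
    // byte slice read straight out of git) and returns its import paths.
    func importsOf(filename string, src []byte) ([]string, error) {
        fset := token.NewFileSet()
        // ImportsOnly stops after the import declarations, which is
        // what makes this cheap enough to run across many commits.
        f, err := parser.ParseFile(fset, filename, src, parser.ImportsOnly)
        if err != nil {
            return nil, err
        }
        var paths []string
        for _, imp := range f.Imports {
            paths = append(paths, strings.Trim(imp.Path.Value, `"`))
        }
        return paths, nil
    }

    func main() {
        src := []byte("package pkg1\n\nimport \"example.com/pkg2\"\n")
        paths, _ := importsOf("foo.go", src)
        fmt.Println(paths) // [example.com/pkg2]
    }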


Adding more and more people is often the thing to avoid.

I'm not going to say it can be avoided in all cases, but modularity, team structure, and architecture, both system and organisational, can avoid this in a lot of cases.


On top of that, the software world has changed dramatically since Bazel was first released. In practice, a git hash and a compile command for a command runner are more than enough for almost everyone.

What has changed in the past ~15 years? Many libraries and plugins have their own compilers nowadays. This increases the difficulty of successfully integrating with Bazel. Even projects that feel like they should be able to properly integrate Bazel (like Kubernetes) have removed it as a nuisance.

Back when it was first designed, even compiling code within the same language could be a struggle; I remember going through many iterations of DLL hell back when I was a C++ programmer. This was the "it works on my machine" era. Bazel was nice because you could just say "Download this version of this thing, and give me a BUILD file path where I can reference it." Sometimes you needed to write some Starlark, but mostly not.

But now, many projects have grown in scale and complexity and want their own automated passes. Just as C++ libraries needed special wrappers for autotools within Bazel, you now often need to write multiple library compiler/automation wrappers yourself in any context. And then you'll find that Bazel's assumptions don't match the underlying code's. For example, my work's Go codebase compiles just fine with a standard Go compiler, but gazelle pukes because (IIRC) one of our third-party codegen tools outputs files with multiple packages to the same directory. When Etsy moved its Java codebase to Bazel, they needed to do some heavy refactoring because Bazel identified dependency loops and refused to compile the project, even though it worked just fine with javac. You can always push up your monocle and derisively say "you shouldn't have multiple packages per directory! you shouldn't have dependency loops!", but a build tool should be able to compile your code just as the underlying language's toolchain does, without needing to influence it at all.

That's why most engineers just need command runners. All of these languages and libraries are already designed to successfully run in their own contexts. You just need something to kick off the build with repeatable arguments across machines.


But there is a lot of C, C++, and Java in the world.

It also helps in a monorepo to control access to packages: Bazel makes it so you can't import packages that aren't visible to your package.
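For reference, a rough sketch of what that looks like in a BUILD file (rules_go syntax; the paths and names are made up):

    # internal/auth/BUILD.bazel -- only //server/... may depend on this.
    load("@io_bazel_rules_go//go:def.bzl", "go_library")

    go_library(
        name = "auth",
        srcs = ["auth.go"],
        importpath = "example.com/internal/auth",
        visibility = ["//server:__subpackages__"],
    )

Any target outside //server that tries to depend on //internal/auth:auth fails at analysis time, before anything compiles.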


Bazel is a hard sell overall.


You're practically begging for it...

Wine Is Not an Emulator!


This is true but it is pedantic. When people say “emulator” they usually mean “software that lets me run a program on platforms other than the original intended platform”. They don’t care about the implementation details. At some point we need to internally correct minor errors of terminology because the meaning is clear in context.


How are you able to distinguish when people use a word as it is defined from when they redefine it to something else?

For myself, I make every attempt to use words correctly.


It obviously is an emulator though, albeit one that relies entirely on high-level emulation (HLE). Notably, Wine doesn't use the acronym anymore (the about page prefixes it with "originally known as").


It is not a traditional emulator, i.e. a virtual machine that executes foreign bytecode or at least uses a virtual hardware setup. QEMU fits that sense of emulator; Wine doesn't.

Wine lets userspace code be executed as-is with the full permissions of the host system. It is more like an alternative-executable-format support package / subsystem. It needs to emulate the Windows system DLL calls, but everything else is no different from loading a piece of ELF executable and jumping into it.


True, though ever since they supported PowerPC Macs, emulators have been integrated with it as needed.

