Can verify. When I started in the catalog department in '97, "the catalog" was essentially a giant Berkeley DB keyed on ISBN/ASIN that was built/updated and pushed out (via a mountain of Perl tools) to every web server in the fleet on a regular cadence. There were a bunch of other DBs too, like for indexes, product reviews, and other site features. Once the files landed, the deploy tooling would "flip the symlinks" to make them live.
Berkeley DBs were the go-to online databases for a long time at Amazon, at least until I left at the turn of the century. We had Oracle databases too, but they weren't used in production, they were just another source of truth for the BDBs.
The stocks app is essentially Apple Business News, with stock prices on one side. I'm not entirely sure how the weather app pushes News on you, but my guess is that it's there somewhere.
my stock app has the Apple business news towards the bottom where it’s pretty easy to ignore and I find it’s at least somewhat relevant, even if I don’t look at it. Even so, totally get it. I don’t really want ads there one way or another.
I don’t see ads on the weather app (at least on my phone)
I have a lot of respect for Postgres' massive feature set, and how easy it is to immediately put to use, but I don't care for the care and feeding of it, especially dealing with upgrades, vacuuming, and maintaining replication chains.
Once upon a time, logical replication wasn't a thing, and upgrading major versions was a nightmare, as all databases in the chain had to be on the same major version. Upgrading big databases took days because you had to dump and restore. The MVCC bloat and VACUUM problem was such a pain in the ass, whereas with MySQL I rarely had any problems with InnoDB purge threads being unable to keep up with garbage-collecting historical row versions.
Lots of these problems are mitigated now, but the scars still sometimes itch.
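These days `pg_upgrade` in link mode turns that multi-day dump/restore into something that runs in minutes. A rough sketch, with paths and version numbers purely illustrative:

    # the old way: dump everything out and load it back in
    pg_dumpall -U postgres > all.sql
    psql -U postgres -f all.sql

    # now: --link hard-links the existing data files instead of copying them
    pg_upgrade \
      --old-bindir=/usr/lib/postgresql/12/bin \
      --new-bindir=/usr/lib/postgresql/16/bin \
      --old-datadir=/var/lib/postgresql/12/main \
      --new-datadir=/var/lib/postgresql/16/main \
      --link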
Maybe? When I did LFS/BLFS I opted for an i3-gaps setup with a compositor and some other eye candy, and had a lot of fun tinkering. I suppose some folks might want the experience of building an entire DE from source, but that seems like a bit much.
That's funny, I did LFS a few years ago and specifically chose the systemd version so I could better understand it. I don't think this is a huge deal; I believe the older versions of the document that include SysVinit will still be available for a long time to come, and people who want it will figure out how to muddle through. If at some point in the future things diverge to the point where that becomes untenable, someone will step up and document how it is to be accomplished.
Didn't you find though that systemd was just a black box? I was hoping to learn more about it as well- and I did manage to get a fully baked LFS CLI system up and running, and it was just like "ok install systemd..." and now... it just goes.
SysV at least gave you a peek under the covers when you used it, and while it may have given people headaches and lacked some functionality, it was IMHO simple to understand. Of course the entire spaghetti of scripts was hard to follow in terms of making sense of all the dependencies, but it felt a lot less like magic than systemd does.
> "ok install systemd..." and now... it just goes.
I believe it's `systemctl list-unit-files` to see all the units that are configured, including what's shipped by the distro, and then if you want to see the whole hierarchy, `systemd-analyze dot | dot -Tpng -o stuff.png`.
To me, it seems much easier to understand what's actually going on, and it's one of the benefits of config as data rather than config as scripts.
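A few other commands are worth knowing here, for example (unit names are just examples and vary per distro):

    systemctl cat sshd.service                     # the unit file(s) actually in effect, drop-ins included
    systemctl list-dependencies multi-user.target  # everything that target pulls in
    systemd-analyze critical-chain                 # which units dominated boot time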
The only other page that covers it is how to compile and install it (configure, make, make install essentially, with a bunch of flags).
It kind of touches upon a few commands that will let you know what it's doing and how to get it started, but from this page you don't learn much about how it works.
In fact, one of my takeaways from LFS was that I already kind of knew how a Linux system starts... and what I really wanted to learn was how devices are discovered and configured upon startup to be used, and that is pretty much all done in the black box that is systemd.
This decision means that no testing of SysVinit will be done in future LFS and BLFS versions. The onus will be on the experimenter each time, but my hope is that a body of advice and best practices will accumulate online in lieu of having a "works out of the book" SysVinit solution.
> Only to have a machine ingest, compress, and reiterate your work indefinitely without attribution.
Everything I write, every thought I have, and the output of my every creative endeavor is profoundly shaped by the work of others that I have ingested, compressed, and iterated on over the course of my lifetime, yet I have the audacity to call it my own. Any meager success I may have, I attribute to naught but my own ingenuity.
I think this is a paradox that AIs have introduced.
We write open source software so everyone can learn and benefit from it. But why do we not like it when AIs are trained on it and normies get to use it as well?
We want news, knowledge and information to be spread everywhere. So why don't we share all of our books, articles and blogs openly with any AI companies that want to use them? We should all want our work to be used by everyone more easily.
Personally, I don't have any fundamental refutation to this. There's a sense that it is wrong. I can somewhat articulate why it's wrong in terms of control and incentives. But those arguments are not well formed just yet.
I sit along this fence and the only real "wrong" I can garner from it is that the economic model simply hasn't updated to reflect this shift.
I think the "wrongness" would go away, for me anyways, if we found a way that everyone was still remunerated in some way when this sharing occurred.
À la the vision of what crypto ought to have been: every single thing created and shared has a .000xx cent value, and as it makes its way around being used/shared/iterated upon, it just sends those royalty checks to the "creator" always, forever.
Humans participate in the human struggle of existence, limited in our time, attention, energy, and a host of other constraints. We are limited and finite. To learn from those of greater talent than yourself is to dedicate all of those resources towards its acquisition. AI has no such limitations, and so does not participate in the same category as humans. A human struggles to learn the patterns of an artist, a machine does not. A human tires of learning, a machine does not. A human puts in effort, a machine does not.
It is the humanness that is the difference, that which exists outside the abstraction of the imposed categories. The human cannot compete with the machine which ingests ALL works and renders the patterns easily available. The artist toiled to perfect those patterns, and now is no longer granted the decency of reaping the fruits of their labors. Humans can give, the machine can only take.
Man is a tool-using animal. Language, writing, the printing press, photography, audio recording, computers, the Internet, and now AI -- each of these innovations has fundamentally changed how we create and preserve knowledge, art, and culture.
None of these changes came without losses. Writing eroded our capacity to remember, printing fueled decades of bloody religious conflict, photography harmed portrait artists, audio recordings wrecked social collaborative parlor music, computers fucked up our spelling & arithmetic, and the Internet... don't get me started.
Ultimately there is no going back; we will change and adapt to our new capabilities. Some will be harmed and some will be left behind. So turns the wheel of progress, and I don't think anyone can stop it.
I'm surprised nobody has yet mentioned how pleasant it is to create coffee stains using Typst, and if only LaTeX wasn't the de-facto standard in academia and stain-related journals, they would have already switched to it.
Of course, you can create coffee stains in HTML as well, but it's not something you can do in Markdown.
I know it was probably said just as a joke, but are you really writing papers using Rust? I don't use Rust, BUT if you've got a better way to write symbol-heavy type theory and/or logic than having to make PNGs and put them in as images in a word processor, I would love to hear about it.
That package still has the core limitation of Typst: images can only be placed top-middle-bottom and left-centre-right. Typst has yet to support arbitrarily placed images.
You mean absolutely positioning it? You can do that with the place function and displacing it with dx/dy from the origin (https://typst.app/docs/reference/layout/place). Example: #place(top + left, dy: 2cm, dx: 4cm, image("image.png"))
That seems usable for manual layout, but it looks painful to use to place images without knowing exactly where they might end up on a page. I reuse my LaTeX code to make volumes of books, and I never touch the code. It's fire and forget for me, which this does not seem to solve.
Parameterize! That's a new word I didn't know. It adequately describes how I typeset my books, and I must not be alone. The ability to tell LaTeX to drop a picture around here, to the best of its ability, with the possibility of moving it down a paragraph or two if it doesn't fit is vital for me.
I think that's a missing feature of Typst, yes: to have figures be either "here" or "top of next page" automatically, with that priority. It can't do that.
The confusing part was that this has nothing to do with the images of this coffee stain package, because they are foreground/background and can be placed freely on the page (any corner, or any custom offset from any corner; i.e. from the top left corner you can use page coordinates).
The coffee stains overlay/underlay text, so no layout problems at all.
But the dx/dy arguments also take percentages as well as absolute lengths. I still don't get what the other poster means by that fundamental limitation. I think they're confused about absolute positioning of background images vs floating figures. But Typst has an analog of LaTeX's `[htbp]` setting, so the same "fire and forget" workflow is possible.
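For what it's worth, a minimal sketch of a floating figure (file name and caption are placeholders); `placement: auto` lets Typst decide whether the float goes to the top or the bottom of a page:

    #figure(
      image("plot.png", width: 70%),
      caption: [Results],
      placement: auto, // Typst picks the top or bottom of the page
    )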
The compiler is open-source and can be run locally. You need an account if you want to use their web editor, which is nice (it shows error messages where they occur along with an explanation and link to docs, and also shows a real-time updated preview).
As for LaTeX vs Typst, as a language Typst is much better, compiles very quickly, and has sane error messages. However, Typst still has a few rough edges, and can't do everything you can do with LaTeX + packages (yet).
I've been using Typst for most of my documents for a few months and I've been generally happy with it.
There is a very prominent web site that offers a hosted version without much clarity about the fact that you can run it yourself. The hosted version offers collaborative editing similar to what Overleaf provides which is incredibly useful.
I have never really used the web thing personally. I always use the command line version, and it works perfectly fine and it's FOSS.
I find the syntax of Typst to be generally better than LaTeX's. I don't like its equations as much, but Typst has one huge advantage that makes it easier to forgive its faults: it compiles several orders of magnitude faster than LaTeX. This might not sound like much, but it honestly sort of changes how you even think about problems. I keep Neovim open on the left and Evince on the right, with `typst watch` running in the background, and my updates show up immediately upon saving.
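The whole loop is just something like this (file names are my own):

    typst watch notes.typ notes.pdf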
Also, adding plugins and libraries is trivial. All you have to do is declare it at the top of the file and it will automatically fetch it, which is considerably easier than LaTeX.
I don't like the default font it ships with, but it's easy enough to add a Latin Modern font and get something that looks like LaTeX.
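For example, the top of one of my files looks roughly like this (the package name/version is only illustrative, and the font has to be available to Typst):

    // packages are fetched automatically on first compile
    #import "@preview/cetz:0.3.1": canvas, draw
    // gives the LaTeX look, assuming the font is installed or bundled
    #set text(font: "New Computer Modern")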
Before Typst, I had typically been using Pandoc with Markdown to write my documents, and that served me well for quite a while, but it had the disadvantage of being extremely slow to compile. A slide deck that I gave last year [1] would take a bit more than a minute to compile. This became an issue because I had to make a few small last-minute changes, and having to wait an entire minute to view them meant I was really up against the wire.
If I had done my slides in Typst, they would have compiled in about 40 milliseconds, they wouldn't have looked any worse, and I'd have a syntax not dissimilar to Markdown. I'm pretty much a convert at this point.
Typst is an application you can use on your local machine without any signup. The compiler is hosted on GitHub. The Typst web app (the online editor at typst.app) is closed source and offered as a paid service with cloud storage, collaboration, autocomplete, etc.
I struggled greatly with this article. There was something halting about it. Something precious. I felt that the author desperately wanted to elevate the mundane into the realm of the sublime.
I found myself annoyed.
I thought to myself "Are paragraphs a renewable resource? Is it wrong to waste them?"
It doesn't matter.
In neuroscience, there is a thing called the "default mode network", which is best known for being active when a person is not focused on anything in particular. The mind is awake, but at rest, like when you're daydreaming, bored, and have no goal-oriented tasks. All sorts of neat stuff happens in this network, things like "shower thoughts", self-reflection, autobiographical memories, thoughts about future goals and events, trying to figure out the people in your life -- their desires, intentions, emotions and thoughts. In boring situations like when I'm on the bus, or waiting in line for something, I'll spin it as an opportunity to spend time with the ole' default mode network. It's a good time to observe people around you, as they're often completely engrossed in their devices. Occasionally I'll seek out other folks who are also chilling in the default mode network, and we'll sometimes share a knowing look.
For example, I read your first four sentences/paragraphs. When I got to your last paragraph, it was so long that I started skimming halfway and then just gave up.
I think a mix and match of small paragraphs and single line sentences for emphasis is a pretty good writing format for holding my attention, but I can see how others might be annoyed by it.
Yeah, I should have split the final paragraph into two, but I kept it long thinking it would be a funny contrast; it wasn't. In retrospect, it probably would have been better not to have made fun of the poster's style at all. I also find it annoying when commenters complain about trivial stylistic issues in folks' writing rather than engaging with the substance.
I've seen some other commentary on the short/one-line paragraphs trend, and linking it to LLMs. I think it is just kind of a thing of the times—attention spans and all that.
I think it is more suited to the ways people consume text these days, kind of like how digital platforms moved to sans-serif fonts. Long dense paragraphs are fine in books and newspapers but hard to read and don't flow right on web browsers.
Books have also always had sections of short paragraphs for dialogue or pacing effect. I find myself breaking my own writing into more succinct paragraphs/thoughts; without the line breaks they start to feel like jumbled run-on sentences.
That neuroscience bit sounds like complete bullshit, but your annoyance is justified and I think shared by others in this thread.
There are many walks of life and some people are wired in ways that annoy us when they present themselves, or talk about themselves as this individual has. It is not only to elevate the mundane to the realm of the sublime, rather it’s to beat a profound lesson of life into us by proxy of whoever the characters are. Notice the shift from the friends, to I, to “you”. Notice the use of “you” in the blog post. You are being lectured. You need to be taught things that this individual just discovered, because you are clueless and they are wise. That is why you feel annoyed.
Whenever you hear someone using the royal “we” to lecture you, you’re always welcome to ask “who is we?”, because it’s appropriate to understand who is actually being discussed. This individual thinks that we are clueless and they are carrying the stone tablets to teach us. They have a long way to go.
With both Windows 11 and macOS Tahoe now being non-starters for many, it's clear that we're going to continue to see impressive growth in the Linux desktop in 2026. Last year I migrated my Windows gaming machine to Ubuntu, and it's been a great success. I don't play games that require kernel-level anti-cheat, so for me, Proton has worked great. I'm playing new games like Anno 117 on my 2019-vintage RX 5700 XT and having a blast. I'm about to wipe my Windows 10 partition and not look back.
I still have an M1 laptop with a broken screen that is going strong in clamshell mode, but once it dies or I can no longer run Sequoia for whatever reason, I'll be tempted to abandon macOS if Apple can't move beyond the mess they've made with Tahoe.
I’m still on Sequoia; I have high hopes that Tahoe is an aberration that will be fixed with the departure of Alan Dye. But let’s keep things in perspective here. The subtle enshittifications of macOS are mild compared to the train wreck of Windows 8 onwards. I daily drove Windows 7 until 2015; IMHO it’s the greatest version of Windows ever.
My wife works for a large corporation that is 100% Windows. I first used Windows 11 a few weeks ago when I was troubleshooting a connectivity problem on her laptop. To some extent my lack of experience with Windows 11 was a factor, but configuring network settings shouldn’t be so obtuse and fragmented. It didn’t feel serious. It felt like a parody of an operating system.
I agree that Tahoe is considerably less enshittified than Windows, but they are slowly turning the screws on us. With every release, it becomes harder and harder to run unsigned macOS binaries, and I can't shake the feeling that their ultimate goal is to turn the Mac into more of a "trusted appliance" and less of a general-purpose computer.
Gatekeeper & notarization, System Integrity Protection, hardware level security enforcement, all of these shifts reek of security paternalism, platform convergence, and ultimately ... control. This frog is starting to feel the water boil, and to mix metaphors, can see the walls of the garden getting higher.
I agree there’s a lot of security paternalism, but the "trusted appliance" model is also the objectively correct choice for 99 percent of users. The real frog-in-warming-water problem, in my view, isn’t control being taken away; it’s the exponential growth of operating system complexity and connectivity. Computers are becoming more of a window into our souls every year, and with that the terrible opportunities for bad actors grow too.
Ultimately, choosing macOS is choosing to trust Apple. So the real question is: what do I get in return for that trust? As a "1 percenter" you’d think I’d resent ceding control. But when I look at Gatekeeper, notarization, Signed System Volume, and the rest, my reaction is: thank you, Apple, for doing your fucking job — for doing what I pay YOU to do for ME. I don't want to think about kernel extensions or rootkits, just keep my computer secure. Even as a 1 percenter, I still treat my main desktop as an appliance. Any time I want to go deeper into a computer, I'm in an ssh terminal to Linux machines under my control.
For me the logic is simple. If I don’t trust Apple to manage the security of my computer, then I shouldn’t be running macOS, period. Personally, I do trust Apple as much as I can trust anyone, including the presumptively honourable neckbeards who oversee your favourite Linux distro.
The new Liquid Glass UI has a lot of detractors, both on iOS and on macOS, but it seems like the clamor is even louder on macOS. Beyond the looks, it's created a lot of usability issues for folks. Buttons and controls can overlap awkwardly, navigation can be more difficult when it's hard to identify different UI elements on the screen, and all the eye candy like transparency and rounded corners can create accessibility problems for folks with less than perfect vision. It's a bit of a mess.
I read it this year too. I was surprised by the amount of heartfelt soliloquising the monster did; he was much more compelling than I expected. Victor, of course, was the real monster in the story: self-obsessed, never taking responsibility for his actions. I found myself actively rooting against him.