For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles. There's no hard evidence here, just hearsay. Nothing really to report until there's more information. I don't want drama in reporting, I want facts. But I guess I'm an outlier which is how we got both this AI style and the clickbait it was trained on...
It also doesn't help that all the title graphics have the same dramatic feeling and are certainly AI generated.
> For me it's the constant feel of everything being "exciting" while no real information is actually conveyed. It's a common tactic of both AI and clickbaity articles.
Yes! You put your finger on what bugs me about "no LLM" rules. It's not that LLM writing is uniquely bad; it's that it tends toward the low-quality, clickbait-prone writing we already see everywhere. Banning LLM content is redundant.
Side note: I'd guess LLMs don't tend toward vapid writing just because of clickbaity training material. It's more fundamental than that. Writing well takes effort and energy, and LLMs seem to avoid effort just like humans do. Emotion-based reasoning in humans is itself a heuristic system favored by evolution. Thinking is expensive. Emotional slop is cheap.
Imagine you know a guy named Patel. He pirated every movie ever made and is a prolific writer. So prolific, in fact, that he has a blog, called "Patel's Log." On this blog is a review of every movie ever made.
At first, you think that's neat. It's not exactly a book of all knowledge, but it's a significant human achievement, perhaps even historic.
Things take a turn for the worse when you're reading a review in the Times. You recognize Patel's distinctive style, and call him up to ask if the Times stole his post. He says that a Times columnist asked for his opinion, and he sent them a link. It turns out the columnist copied his blog post verbatim: but he says he can't complain without being inconsistent, since he pirated every movie ever made.
You find this humorous, until you recognize his style in the Atlantic - then the Post. Eventually you're disappointed when the Ebert staff publish an opinion piece in favor of Patel's Log matching (PatelLM), and you're forced to wonder if that's what Ebert would have thought.
Your boss sends you copy-pasted PatelLM content in a morning Slack message about a movie she watched over the weekend. Your friends quote Patel's Log verbatim on Discord. Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
> It turns out the columnist copied his blog post verbatim: but he says he can't complain without being inconsistent, since he pirated every movie ever made.
Copyright laws should be applied to LLMs and their users just like any others. If they verbatim reproduce a post (or near enough), then it should be a copyright violation.
> You find this humorous, until you recognize his style in the Atlantic - then the Post.
There's nothing inherently wrong with humans or LLMs learning to mimic someone's style. This is actually the basis for styles, genres, etc. Whole trends in the arts are just people copying others' styles, sometimes with little improvements.
> Hollywood starts using PatelLM to indirectly plagiarize other movies. Soon, Patel's posts begin to echo each other as the supply of novel perspectives is overwhelmed by PatelLM. Film criticism becomes a desiccated corpse, filled with plastic and presented in a glass case with a pin through its heart. Thought is dead. There is only Patel.
How exactly is this different from what Hollywood was already doing for the last decade or two, pre-LLMs? LLMs didn't cause the homogenization of culture. Corporate Hollywood and the internet did that.
I've had a similar complaint about publishing in machine learning conferences. They're putting in these "no LLM" rules, but those are just idiotic. Proving LLM usage is really, really difficult, and (one of) the underlying problems has always been bad or low-quality reviews. So why write an LLM rule? Why not tackle the problem more directly?
I don't care if people use LLMs; I care about generating slop. The two correlate, but by concentrating on the LLM part you just let the problem continue. It's extremely frustrating. Slop is slop. It doesn't matter how it is generated or by whom. Slop is slop. It doesn't matter if you dress it up; lipstick on a pig doesn't change the pig. Slop is slop.
You could also check out Matt Levine of Money Stuff at Bloomberg. He's quite well known on HN. The way he writes, plus his deep knowledge and lack of BS, makes him my favorite (and only) journalist I follow.
Thanks, but the journalist I linked has been threatened multiple times and kept on trucking anyway. I rarely say "avoid MSM," but in this case, in 2026, I would personally recommend avoiding your MSM recommendation.
> Look, I am sorry. But if you go to Jump Trading and Jane Street and say “hello, I have an unregulated poorly designed mechanism that could lead to $50 billion of market value collapsing overnight, would you like to trade with me,” they are going to say yes, but their eyes are going to light up, you know? If at Time 0 you give them an extremely gameable system that can produce billions of dollars of profit, at Time 10 your system is going to be a smoking wreckage and they are going to have billions of dollars of profit. That’s their whole job, you know? I couldn’t tell you in advance what all the intermediate steps will be, and in fact in hindsight I cannot tell you what the intermediate steps actually were, how Jump and Jane Street made money off the collapse of Terra. But as a heuristic, I mean, come on. Terra was like “hello we have a balloon full of money, here is a pin, dooooooon’t pop the balloon.” Guess what!
That point was the crux of Matt Levine's argument: Terra and Luna were unregulated and easy-to-game securities. So you can't complain when the smartest people on Wall Street figured out how to pop the balloon in their favor -- (not ai emdash) particularly when it's their job.
I will quote the first few paragraphs leading up to it though:
>The basic story of Terra is:
>Terra was a big crypto project, led by a company called Terraform Labs and a guy named Do Kwon, which at its peak had a market value of about $50 billion.
>It had a token, the currency of its blockchain, called Luna, which at its peak traded at almost $120 per token.
>It also had an algorithmic stablecoin, TerraUSD, whose mechanism was that it could always be redeemed for $1 worth of Luna.
>That’s a bad idea! The problem, which was extremely obvious and which everyone knew about, was that, if people lost confidence in Luna, there would be a death spiral: People would redeem TerraUSD for Luna and sell the Luna, which would drive down the price of Luna, which would lead to more redemptions, which would create even more Luna, until Luna was trading at a tiny fraction of a penny and every TerraUSD would be redeemed for millions of them.
>In May 2022 that very much happened. Terra collapsed, people lost a lot of money and Do Kwon got 15 years in prison for fraud.
>At its peak, though, Terra was a pretty big crypto project, and it had various dealings with some very smart and somewhat sharky trading firms like Jump Trading and Jane Street.
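The death spiral Levine describes can be sketched as a toy simulation. Everything below, including the parameters and the linear price-impact model, is invented for illustration and makes no claim about the actual market mechanics:

```python
# Toy model of the TerraUSD/Luna death spiral quoted above.
# All numbers and the price-impact model are invented for illustration.

def death_spiral(luna_price, ust_supply, luna_supply,
                 redeem_fraction=0.2, impact=0.5, rounds=10):
    """Each round, a fraction of TerraUSD holders redeem $1 of freshly
    minted Luna per token and sell it, pushing the price down."""
    for r in range(rounds):
        redeemed_usd = ust_supply * redeem_fraction   # UST burned this round
        minted_luna = redeemed_usd / luna_price       # $1 of Luna per UST
        luna_supply += minted_luna
        ust_supply -= redeemed_usd
        # Crude linear price impact: selling pressure proportional to
        # the newly minted share of total supply.
        luna_price *= max(0.0, 1 - impact * minted_luna / luna_supply)
        print(f"round {r + 1}: price=${luna_price:,.4f} "
              f"luna_supply={luna_supply:,.0f} ust_supply={ust_supply:,.0f}")
    return luna_price, luna_supply, ust_supply

death_spiral(luna_price=80.0, ust_supply=18e9, luna_supply=350e6)
```

The feedback loop is visible even in this crude sketch: every redemption mints more Luna at a lower price, which enlarges the next round's mint, so the price decline accelerates as the rounds go on.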
“Can’t complain” doesn’t make it legal. I had this argument a number of times with cryptobros back then; “if it’s on the chain it’s fair game” was something I heard quite often. Just, no. Just because some code allows you to get away with something doesn’t make it legal[1].
The thing is, you or I don’t get to say what is or isn’t a market covered by market abuse laws. Regulators do. And while it’s true that none of the relevant regulators had stepped up and conclusively shown these markets were under their jurisdiction, they had repeatedly said they were looking into them and given hints that they felt they had jurisdiction. Heck, I was in a meeting with Kevin Warsh around 2014 or so[2] where he asked about bitcoin, so it’s clear the Fed was at least looking into crypto at that time, long before they made public comment. ISTR talking to the CFTC around then and they asked about it too.
So “unregulated” in this context doesn’t mean “not covered by regulation” it means “regulatory status extremely uncertain”. If you want to go in with a very aggressive strategy you’re taking some risk that regulators will post facto go after you because they do that a lot in conventional markets.
[1] Market abuse in this case, but it’s obviously the case in cybersecurity also.
[2] This isn’t some kind of weird boast, btw; central bankers and regulators meet with people from industry all the time as part of their normal information-gathering process, and he met with a group of us who were working with some bank on detecting things like market abuse. He had some sort of academic position at Stanford at the time, IIRC, looking into various types of bank regulation, but he was still plugged into the Fed governors because he had only just left.
> I had this argument a number of times with cryptobros at the time “if it’s on the chain it’s fair game” I heard quite often. Just, no. Just because some code allows you to get away with something doesn’t make it not illegal[1]
But that is/was the cryptobros argument: Code is Law! And now instead of fixing the algos they're going right to suing each other just like TradFi with TradLaw.
Seeming natural is literally what the LLM is trained for, but it's pretty interesting how little sense its writing makes when you dig in. It goes to show how little attention we normally pay, and/or how much weight we put on text merely seeming natural.
"A new lawsuit doesn’t just revisit the $40 billion Terra-Luna meltdown; it questions whether..." -- the purpose of a lawsuit is to question something (by making an allegation), you don't sue someone to "revisit".
"Ten minutes is not a coincidence. It is a trade." So is an hour, or thirty seconds, or...?
"Not just as bystanders, but as alleged participants" -- the "just" doesn't make sense; participants aren't bystanders.
Of the list, only "It reads less like a rescue offer" and "These are not isolated; they are part of Snyder’s broader efforts..." make any sense in context.
BMW and Toyota have famously used bio-derived insulation reported to be like catnip for rodents.
The bio-oil plasticizers also migrate out more quickly in thermal cycling than the old dead dinosaurs approach. Hilariously, when I asked my mechanic about getting an M5, he laughed and explained that the radiator components are known to turn brittle and crack after 5-6 years because of this.
(I don't envy automotive folks. The stuff they have to deal with is next level.)
Last time I had to call AAA to jump my car, the guy opened the hood very carefully and told me he’d had three rats jump out of engines at him that day, presumably because of the “soy wires.”
Nothing can beat the 600 Mercedes for cachet though. Look at the list of owners: Idi Amin, Ceauşescu, Saddam Hussein, F.W. de Klerk, Papa Doc, Mugabe, Brezhnev, Tito, Mao, Kim Il-Sung, Ferdinand Marcos, Deng Xiaoping, Mobutu, Jean-Bédel Bokassa, Mubarak, Berlusconi, Pablo Escobar, Jeremy Clarkson. Nothing will ever come close to that.
I'm not sure where else you can get a half TB of 800GB/s memory for < $10k. (Though that's the M3 Ultra, don't know about the M5). Is there something competitive in the nvidia ecosystem?
I wasn't aware that the M3 Ultra offered half a terabyte of unified memory, but an RTX 5090 has double that bandwidth, and that's before we even get into the B200 (~8 TB/s).
You could get one M3 Ultra with 512 GB of unified RAM for the price of two RTX 5090s totaling 64 GB of VRAM, and that's not including the cost of a rig capable of running two RTX 5090s.
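The back-of-envelope comparison works out roughly like this. The prices are my own ballpark assumptions, not quotes, and the bandwidth figures are per device:

```python
# Rough cost-per-GB comparison; all prices are ballpark assumptions.
m3_ultra = {"price_usd": 9500, "mem_gb": 512, "bw_gbps": 800}
rtx_5090_pair = {"price_usd": 2 * 2500, "mem_gb": 2 * 32, "bw_gbps": 1792}

for name, cfg in [("M3 Ultra 512GB", m3_ultra), ("2x RTX 5090", rtx_5090_pair)]:
    per_gb = cfg["price_usd"] / cfg["mem_gb"]
    print(f"{name}: ${per_gb:.0f}/GB of memory, {cfg['bw_gbps']} GB/s per device")
```

Under these assumptions the Mac wins heavily on dollars per GB of model-addressable memory, while the 5090s win on raw bandwidth, which is the usual trade-off between the two setups.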
I don't think I can recommend the Mac Studio for AI inference until the M5 comes out. And even then, it remains to be seen how fast those GPUs are or if we even get an Ultra chip at all.
Is local jamming or removing their antennas a viable strategy? Seems like it could be easier to just make them unable to phone home, rather than trying to surgically rip out the bundle of hardware and software responsible for it while leaving everything else intact.
Vehicles differ. In some, disconnecting the antenna is easiest; in some, removing a fuse is sufficient; in some, disconnecting the relevant module is anything but surgical; and some nag if the antenna is disconnected.
Oh man, my grandmother was like this with finding four leaf clovers. She would just find them constantly, all the time, on command, or maybe while standing around having a conversation. Her description of it was "it's like they're just jumping up and waving at me" which somewhat fits with the author's description of motion. Never heard of anyone else like this though, neat to see others in the comments.
What does the GPD Win 4 do in this scenario? Is there a step w/ Agent Organizer that decides if a task can go to a smaller model on the Win 4 vs a larger model on your Mac?
The point here is that the doc you linked is a year and a half old, this (if real) is much newer. Security is a constant arms race between attackers and defenders, nothing is static so updates of this nature are always welcome.
Also fair! I think "leaker" is just bristly to me in this context, when there's a nearly identical version of it just hanging out for folks to find. But it's also a hope that some folks might poke around DocumentCloud for similar documents lying around. There are lots of newsworthy gems in there just waiting to be picked up, and this is a good example.
> Or will the iPhone have a multi-hour update where it decrypts its entire iCloud archive on the client-side, and then reuploads it without encryption?
More likely that the phone just sends the keys to Apple in that case
But that passphrase you saved is an additional key, in case you lose all your Apple devices for example. You can tell it isn’t required for your phone to decrypt data because you don’t have to type it in to access your data, or even migrate to a new phone.
And if they allow rescue contacts in case you lose the password and you can decrypt the data through their account, there is a chance they also keep a key for themselves, just in case.
If you've got sensitive data, learn to encrypt it yourself. That is the ONLY way to make sure. If you trust another company to do the encryption at rest for you, that is your own fault.
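A minimal way to do that client-side, assuming `openssl` is installed. The filenames and passphrase are placeholders; only the ciphertext should ever leave your machine:

```shell
# Encrypt locally before uploading anything to cloud storage.
# AES-256-CBC with PBKDF2 key derivation; filenames and passphrase
# here are placeholders.
PASSPHRASE='correct horse battery staple'
printf 'my sensitive data' > secrets.txt

openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:"$PASSPHRASE" -in secrets.txt -out secrets.txt.enc

# Decrypt later (a wrong passphrase fails with "bad decrypt"):
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:"$PASSPHRASE" -in secrets.txt.enc -out secrets.dec
cmp secrets.txt secrets.dec && echo "round trip OK"
```

One caveat: `-pass pass:...` exposes the passphrase to other local processes via the process list, so for anything serious let openssl prompt interactively (drop the `-pass` flag) or read it from a protected file with `-pass file:...`.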