> To accomplish that feat, the treatment is wrapped in fatty lipid molecules to protect it from degradation in the blood on its way to the liver, where the edit will be made. Inside the lipids are instructions that command the cells to produce an enzyme that edits the gene. They also carry a molecular GPS — CRISPR — which was altered to crawl along a person’s DNA until it finds the exact DNA letter that needs to be changed.
That is one of the most incredible things I have ever read.
On the off-chance someone at Apple reads this, I'll repeat my perennial plea that Apple stop popping up 'Give me your (local admin) password right now' dialogs at random throughout the day because the computer has a hankering to install updates or something.
Anyone with basic skills can whip up a convincing replica of that popup on the Web, and the "bottom 80%" (at least) of users in technical savvy would not think to try dragging it out of the browser viewport or switching tabs to see if it is fake or real.
The only protection against this kind of stuff is to NOT teach users that legitimate software pops up random "enter your password" dialogs in front of their work, unprompted. Yet that is exactly what these dialogs teach them.
Display a colorful flashing icon in the menu bar. Use an interstitial secure screen like Windows does. Whatever. But the modern macOS 'security' UI is wildly bad.
(I work at Mozilla, but not on the VCS tooling, or this transition)
To give a bit of additional context here, since the link doesn't have any:
The Firefox code has indeed recently moved from having its canonical home on Mercurial at hg.mozilla.org to GitHub. This only affects the code; Bugzilla is still being used for issue tracking, Phabricator for code review and landing, and our Taskcluster system for CI.
In the short term the Mercurial servers still exist and are synced from GitHub. That allows automated systems to transfer to the git backend over time rather than all at once. Mercurial is also still being used for the "try" repository (where you push to run CI on WIP patches), although it's increasingly behind an abstraction layer; that will also migrate later.
For people familiar with the old repos, "mozilla-central" is mapped onto the more standard branch name "main", and "autoland" is a branch called "autoland".
It's also true that it's been possible to contribute to Firefox exclusively using git for a long time, although you had to install the "git cinnabar" extension. The choice between learning hg and using git+extension was a bit of an impediment for many new contributors, who most often knew git and not Mercurial. Now that choice is no longer necessary. Glandium, who wrote git cinnabar, wrote extensively at the time this migration was first announced about the history of VCS at Mozilla, and gave a little more context on the reasons for the migration [1].
So in the short term the differences from the point of view of contributors are minimal: using stock git is now the default and expected workflow, but apart from that not much else has changed. There may or may not eventually be support for GitHub-based workflows (i.e. PRs) but that is explicitly not part of this change.
On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.
I am the author of this piece, and I didn't share it to HN; I don't hang out here. I just gotta say: wow, tough crowd. I wrote this piece from an emotionally low point after another fruitless day of applying to jobs. I didn't have a particular agenda in mind. I was voicing what I've been through and some of what I was experiencing, with no expectations.
You'll notice in the comments section that the population of substackistan is much less FUCKING CYNICAL AND NEGATIVE than you guys, with many commenters saying they are in the same position. I heard from writers, designers, and engineers going through similar times.
My portfolio site is https://shawnfromportland.com; you can find my resume there. If you have leads that you think I might match with, you can definitely send them my way. I will even put a false last name on an updated resume for you guys.
For those who are wondering: I legally changed my name to K long ago because my dad's last name starts with K, but I didn't like identifying with his family name everywhere I went, because he was not in my life and didn't contribute to shaping me. I thought hard about what other name I could choose, but nothing resonated with me. I had already been using Shawn K for years before legally changing it, and it was the only thing that felt right.
How did these clowns manage to make my mouse cursor laggy? It's incomprehensible to me that you can live in such a big bubble, with such a big paycheck, and spend zero brainpower on systems without graphics acceleration.
This is extremely bad engineering, and these engineers should be called out for it. It takes a special kind of person to deliver this and be proud of it.
Once they've made their millions at Google, these engineers will be our landlords, angel investors, you name it. The level of ignorance is unfathomable. Very sad.
As someone who works professionally on embedded software devices that update over the internet, car companies are stuck not because they can't get software talent, but because they have no ability to actually build the electronics alongside the software, which is ultimately what constrains embedded software.
Without the right hardware, the constraints are just insurmountable: you cannot do feature X because board A doesn't have the API to your MCU, or because it runs some dogshit-speed communication system that means you have 500ms of lag. The feature is just unworkable, and if the PMs push it anyway you get what happens at the legacy car makers: terrible, underpowered infotainment systems with no central design philosophy, stuck in an awkward, bad middle between a full software stack and physical buttons for everything. Their model of integrating 3rd-party vendor computers just doesn't work for this kind of thing; Tesla, Rivian, and the Chinese EV makers all manufacture their own electronics, which is what lets them achieve the outcome. But you cannot just roll all your own electronics in a year.
There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
Remove the LTE chip and all functionality related to ads, support wireless CarPlay and Android Auto, and use physical buttons. You'll win every award in the industry.
I think that this comment is a great example of the total disconnect these conversations always have.
On the one hand we have lots of people on here who are building full-featured web apps, not websites, on teams of 30+. These people look at frameworkless options and immediately have a dozen different questions about how your frameworkless design handles a dozen different features that their use case absolutely requires, and the answer is that it doesn't handle those features because it doesn't require them because it's a blog.
Meanwhile there are also a lot of people on here who have never worked on a large-scale web app and wonder why frameworks even exist when it's so obviously easy to build a blog without them.
It would be nice if we could just agree that the web hosts an enormous spectrum of different kinds of software, and make it clear what kind of software we're talking about when we're opining about frameworks, whether for or against.
It's nice to see a paper that confirms what anyone who has practiced using LLM tools already knows very well, heuristically. Keeping your context clean matters; "conversations" are only a construct of product interfaces, and they hurt the quality of responses from the LLM itself. Once your context is "poisoned" it will not recover; you need to start fresh with a new chat.
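To make concrete the point that "conversations" are purely an interface construct, here's a minimal Python sketch using the OpenAI client (the model name and prompts are placeholders): the API itself is stateless, the "conversation" is just a list you resend on every call, and starting fresh really is just starting a new list.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{"role": "user", "content": "Summarize this log file..."}]

# Each call is stateless: the "conversation" is just this list,
# resent in full every time.
reply = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=history,
)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# Once the context is "poisoned" there is nothing in the API to repair --
# recovery is simply abandoning the list and starting a new one.
history = [{"role": "user", "content": "Fresh question, clean context."}]
```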
As someone who has a terminal cancer diagnosis (and I'm mid-way through the range of time I was told I had left, months, FTR), I don't agree with a lot of this. And I'm essentially on my deathbed (mentally), even though I'm currently not bed-bound.
Yes, my state now is not representative of the one I was in a year ago, before my health started failing. But I'm still the same person. I forgot that briefly after my terminal diagnosis and started doing what I thought were the right things (making sure things would be OK for my wife, tidying up a litany of messes that would be hard for her to deal with without just giving up and selling things for pennies or giving them away), but after a few weeks and speaking to the right people, I started living more normally again.
Yes, my priorities have changed massively - things that I thought were important 4 months ago are truly meaningless to me now - but many things that are important to me now were so before. And they will be until I cease to exist. I'm making the most of the time I have left because it's important that my experience at this point is as good as it can be, and because I want my wife to have good memories of our last months together.
I've never suffered from 'reason 2'. I've always felt I made the right decision at the time with the information I had and the person that I was at the time. So I don't have many regrets - none of significance to speak of, certainly. I know I am lucky in this respect.
Reason 3 is meaningless, IMO - both generally and certainly to me. I'm 53.
And I don't think many people really do think about this seriously until it's actually on the table for them. I certainly know I didn't - even last year, when I had an operation which hopefully would have removed the cancer and given me years of life, I hadn't really thought about the finality of death and what it means (or doesn't) to me. FTR I'm an atheist, and I think that 2026 will have as much meaning/experience for me as 1969 (i.e. before I was born).
I've transcended the vanilla/framework arguments in favor of "do we even need a website for this?".
I've discovered that when you start getting really cynical about the actual need for a web application - especially in B2B SaaS - you may become surprised at how far you can take the business without touching a browser.
The vast majority of the hours I've spent building web sites & applications have been devoted to administrative-style UI/UX, wherein we are ultimately giving the admin a way to mutate fields in a database somewhere so that the application behaves according to the customer's expectations. In many situations, it is clearly 100x faster/easier/less bullshit to send the business a template of the configuration (Excel files) and then load+merge their results directly into the same SQL tables, as in the sketch below.
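A sketch of that pattern, assuming pandas + SQLAlchemy against Postgres; every table, column, and connection-string name here is hypothetical, and the upsert assumes a unique constraint on (customer_id, feature):

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://app:app@localhost/app")  # hypothetical DSN

# The business fills in the Excel template; we load it instead of
# building and maintaining an admin UI for the same fields.
df = pd.read_excel("customer_config.xlsx")  # hypothetical template file

# Stage the rows, then merge into the live config table the app already reads.
df.to_sql("config_staging", engine, if_exists="replace", index=False)
with engine.begin() as conn:
    conn.execute(text("""
        INSERT INTO customer_config (customer_id, feature, value)
        SELECT customer_id, feature, value FROM config_staging
        ON CONFLICT (customer_id, feature)
        DO UPDATE SET value = EXCLUDED.value
    """))
```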
The web provides one type of UI/UX. It isn't the only way for users to interact with your product or business. Email and flat files are far more flexible than any web solution.
One other fun part of gene editing in vivo is that we don't actually use GACU (T in DNA). It turns out that if you use pseudouridine (Ψ) instead of uridine (U), the body's immune system doesn't raise nearly as much alarm, as it doesn't really see that mRNA as quite so dangerous. But the RNA -> protein equipment will still make proteins from it without any problems.
Which, yeah, that's a miraculous discovery. And it was well worth the 2023 Nobel in Medicine.
Like, the whole system for gene editing in vivo that we've developed is just crazy little discovery after crazy little discovery. It's all sooooo freakin' cool.
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
You essentially outline why it should be broken up.
I'm not convinced making the ad tech sector more competitive would prompt that outcome, but "it would disrupt mature products" isn't a compelling argument to allow the existence of a monopoly.
Google is a monopoly, they exert monopoly power and enjoy monopoly pricing.
I think the more likely outcome would be more dynamic products under smaller bannerheads.
Pretty cool that Linus Torvalds invented a completely distributed version control system and 20 years later we all use it to store our code in a single place.
We feel your pain at Nextcloud. Our team at Everfind (unified search across Drive, OneDrive, Dropbox, etc.) has spent the past year fighting for the *drive.readonly* scope simply so we can download files, run OCR, and index their full-text for users. Google keeps telling us to make do with *drive.file* + *drive.metadata.readonly*, which breaks continuous discovery and cripples search results for any new or updated document.
Bottom line: Google's "least-privilege" rhetoric sounds noble, but in practice it gives Big Tech first-party apps privileged access while forcing independent vendors to ship half-working products - or get kicked out of the Play Store. The result is that users lose features and choices, and small devs burn countless hours arguing with a copy-paste policy bot.
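For context, the whole fight is over which scope strings you request in the OAuth flow. A minimal sketch with google-auth-oauthlib (the client-secrets filename is a placeholder):

```python
from google_auth_oauthlib.flow import InstalledAppFlow

# What we need for OCR + full-text indexing: read every file's content.
NEEDED = ["https://www.googleapis.com/auth/drive.readonly"]

# What Google suggests instead: only files the user explicitly opens or
# creates through your app, plus read-only metadata -- no content access
# to the user's existing documents.
SUGGESTED = [
    "https://www.googleapis.com/auth/drive.file",
    "https://www.googleapis.com/auth/drive.metadata.readonly",
]

flow = InstalledAppFlow.from_client_secrets_file(
    "client_secret.json",  # placeholder filename
    scopes=NEEDED,
)
creds = flow.run_local_server(port=0)
```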
In the same way that crypto folks speedran "why we have finance regulations and standards", LLM folks are now speedrunning "how to build software paradigms".
The concept they're trying to accomplish (expose possibly remote functions to a caller in an interrogable manner) has plenty of existing examples in DLLs, gRPC, SOAP, IDL, dCOM, etc, but they don't seem to have learned from any of them, let alone be aware that they exist.
Give it more than a couple of months though and I think we'll see it mature some more. We just got their auth patterns to use existing rails and concepts; now we just have to eat the rest of the camel. The sketch below shows the shape of the concept in question.
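To make "expose functions to a caller in an interrogable manner" concrete, here's the shape of the idea in a few lines of Python. This illustrates the general concept those older technologies implemented (a self-describing, enumerable function registry), not MCP's actual wire format:

```python
import json

# A registry of functions, each self-describing so a remote caller can
# discover what exists before calling it -- the IDL/WSDL/gRPC-reflection idea.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "params": {"city": "string"},
        "fn": lambda city: {"city": city, "forecast": "sunny"},
    },
}

def list_tools():
    """The 'interrogable' part: callers can enumerate the interface."""
    return [
        {"name": n, "description": t["description"], "params": t["params"]}
        for n, t in TOOLS.items()
    ]

def call_tool(name, arguments):
    """Dispatch a call by name with JSON-style arguments."""
    return TOOLS[name]["fn"](**arguments)

print(json.dumps(list_tools(), indent=2))
print(call_tool("get_weather", {"city": "Lisbon"}))
```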
Strongly recommend this blog post too which is a much more detailed and persuasive version of the same point. The author actually goes and builds a coding agent from zero: https://ampcode.com/how-to-build-an-agent
It is indeed astonishing how well a loop with an LLM that can call tools works for all kinds of tasks now. Yes, sometimes they go off the rails, there is the problem of getting that last 10% of reliability, etc. etc., but if you're not at least a little bit amazed then I urge you to go and hack together something like this yourself, which will take you about 30 minutes. It's possible to have a sense of wonder about these things without giving up your healthy skepticism of whether AI is actually going to be effective for this or that use case.
This "unreasonable effectiveness" of putting the LLM in a loop also accounts for the enormous proliferation of coding agents out there now: Claude Code, Windsurf, Cursor, Cline, Copilot, Aider, Codex... and a ton of also-rans; as one HN poster put it the other day, it seems like everyone and their mother is writing one. The reason is that there is no secret sauce and 95% of the magic is in the LLM itself and how it's been fine-tuned to do tool calls. One of the lead developers of Claude Code candidly admits this in a recent interview.[0] Of course, a ton of work goes into making these tools work well, but ultimately they all have the same simple core.
It's like reading "A Discipline of Programming", by Dijkstra. That morality play approach was needed back then, because nobody knew how to think about this stuff.
Most explanations of ownership in Rust are far too wordy. See [1]. The core concepts are mostly there, but hidden under all the examples.
- Each data object in Rust has exactly one owner.
- Ownership can be transferred in ways that preserve the one-owner rule.
- If you need multiple ownership, the real owner has to be a reference-counted cell. Those cells can be cloned (duplicated).
- If the owner goes away, so do the things it owns.
- You can borrow access to a data object using a reference.
- There's a big distinction between owning and referencing.
- References can be passed around and stored, but cannot outlive the object. (That would be a "dangling pointer" error.)
- This is strictly enforced at compile time by the borrow checker.
That explains the model. Once that's understood, all the details can be tied back to those rules.
Also, they still expect you to authenticate when they phone you. No, I'm not going to tell you my birthday when you phone me. No wonder so many people get scammed, when banks are training people on how to get scammed.
Airbnb made the same mistake Google did: They screwed up their core service. I used to be a steady ABB customer but now hotels are almost always cheaper, offer better service, and are more predictable.
Not to mention that hotel websites are typically easier to navigate and contain a lot less React-sludge that makes every click take forever to respond.
I've seen a lot of high-level engineers at Google leave over the past couple of years. There's vastly more pressure from management and much less trust. And a bunch of L7+ folks have been expected to shift to working on AI stuff to have "enough impact." The increased pressure has created a lot of turf wars among these folks, as it isn't enough to be a trusted steward; now you need your name at the top of the relevant docs (and not the names of your peers).
Prior to 2023 I pretty much only ever saw the L7s and L8s that I work with leave Google because there was an exciting new opportunity or because they were retiring. Now most of the people I see leave at this level are leaving because they are fed up with Google. It's a mess.
If your arrays have more than two dimensions, please consider using Xarray [1], which adds dimension naming to NumPy arrays. Broadcasting and alignment then become automatic, without needing to transpose, add dummy axes, or anything like that. I believe that alone solves most of the complaints in the article.
Compared to NumPy, Xarray is a little thin in certain areas like linear algebra, but since it's very easy to drop back to NumPy from Xarray, what I've done in the past is add little helper functions for any specific NumPy stuff I need that isn't already included, so I only need to understand the NumPy version of the API well enough one time to write that helper function and its tests. (To be clear, though, the majority of NumPy ufuncs are supported out of the box.)
I'll finish by saying, to contrast with the author, I don't dislike NumPy, but I do find its API and data model to be insufficient for truly multidimensional data. For me three dimensions is the threshold where using Xarray pays off.
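As a tiny illustration of what "broadcasting and alignment become automatic" means in practice (the dimension names and data here are made up for the example):

```python
import numpy as np
import xarray as xr

temps = xr.DataArray(
    np.random.rand(3, 4),
    dims=("time", "station"),
    coords={"time": [1, 2, 3], "station": list("abcd")},
)
weights = xr.DataArray(
    np.array([0.1, 0.2, 0.3, 0.4]),
    dims=("station",),
    coords={"station": list("abcd")},
)

# No transposing, no np.newaxis: xarray matches the "station" dimension
# by name and broadcasts over "time" automatically.
weighted = temps * weights

# Reductions name the dimension instead of remembering an axis number.
mean_by_station = weighted.mean(dim="time")
```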