
Ah, yes! The universal and uncheatable LLM! Surely nothing can go wrong.


Perfect is the enemy of good. Current LLM systems plus traditional scanning tools can get you pretty far in detecting the low-hanging fruit. Hell, I bet even a semantic search with small embedding models could give you good insight into whether what's in the release notes matches what's in the code. Simply flag the release to be delayed a few hours until a human can review it, or run additional checks.
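The "does the release note match the code" idea above can be sketched with plain cosine similarity over embedding vectors. This is only an illustration: the embedding step is assumed to exist elsewhere, and the 0.7 threshold and function names are made up.

```javascript
// Hypothetical sketch: given embedding vectors for a release-note entry
// and for the corresponding code/diff summary, flag pairs whose cosine
// similarity falls below a threshold for human review.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Threshold is an arbitrary assumption; tune it on real data.
function flagForReview(noteEmbedding, codeEmbedding, threshold = 0.7) {
  return cosineSimilarity(noteEmbedding, codeEmbedding) < threshold;
}
```

Identical vectors score 1.0 and pass; orthogonal (unrelated) vectors score 0 and get flagged for the delayed human review.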


I can't wait to read about your solution.


You don't need to be a chef to tell that the soup is too salty.


As I wrote, "not perfect". But it's better than anything else, or nothing at all.


The Politician's Syllogism[0] is instructive.

[0] https://en.wikipedia.org/wiki/Politician's_syllogism


OK, are we on Reddit or Facebook now?

I thought we discussed problems and possible solutions here.

My fault.


I'm not sure why everyone is so hostile. Your idea has merit, along the lines of a heuristic that you trigger a human review as a follow-up. I'd be surprised if this isn't exactly the direction things go, although I don't think the tools will be given for free, but rather made part of the platform itself, or perhaps as an add-on service.


I don't think "we should use AI to solve this" is a solution proposal.


Yes, the backend-for-frontend (BFF) architecture is an excellent fit for this purpose.


Just use a generic data structure for your validation rules that you can apply on the front-end, and validate on the back-end. Using JavaScript and doing validation on a server are not mutually exclusive.
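One minimal sketch of that generic-rules idea, in plain JavaScript, with the same `validate` function usable on both the front-end and the back-end. Field names and rule shapes are hypothetical, and a real setup would also need a serializable rule format (regexes aren't valid JSON):

```javascript
// Declarative rules, defined once and shared by client and server.
const rules = {
  email: { required: true, pattern: /^[^@\s]+@[^@\s]+$/ },
  age: { required: false, min: 0, max: 150 },
};

// Apply the same rules anywhere; returns a map of field -> error message.
function validate(data, rules) {
  const errors = {};
  for (const [field, rule] of Object.entries(rules)) {
    const value = data[field];
    if (rule.required && (value === undefined || value === '')) {
      errors[field] = 'required';
    } else if (value !== undefined && value !== '') {
      if (rule.pattern && !rule.pattern.test(String(value))) errors[field] = 'invalid format';
      if (rule.min !== undefined && Number(value) < rule.min) errors[field] = 'too small';
      if (rule.max !== undefined && Number(value) > rule.max) errors[field] = 'too large';
    }
  }
  return errors;
}
```

The client runs `validate` for instant feedback; the server runs the identical function as the source of truth.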


There absolutely is; this is just extra cruft you need to maintain, and who says the HTML is universal enough to be used everywhere? This is exactly where a front-end, or a backend-for-frontend (BFF), makes sense.


These 'just use HTML' shitposts really miss the mark. Every time I see this stuff it is a form with two fields, come on. Realistically, a lot of applications are forms, but they are much more complex: fields that can be added and removed, conditional logic depending on selected state, and, most importantly, a non-flat data structure.

Once you start bolting all this stuff onto HTML, congratulations, you have built a web framework.

I am not advocating that everyone should start using React. But HTML forms are severely underpowered, and you cannot escape JavaScript at some point. I would love it if forms could consume and POST JSON data; that would make all of this a lot easier in a declarative manner.
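In the meantime, the "forms that POST JSON" wish can be approximated with a few lines of glue. This is only a sketch; the endpoint URL and function names are made up, and repeated field names are collected into arrays to get at least one level of non-flat structure:

```javascript
// Collapse [name, value] entries (e.g. from FormData) into a JSON object.
// Repeated names become arrays, so <input name="tag"> used three times
// yields { tag: [a, b, c] } instead of losing values.
function entriesToJSON(entries) {
  const data = {};
  for (const [key, value] of entries) {
    if (Object.prototype.hasOwnProperty.call(data, key)) {
      data[key] = [].concat(data[key], value);
    } else {
      data[key] = value;
    }
  }
  return data;
}

// Browser-side glue (hypothetical endpoint): serialize the form and POST it.
async function submitAsJSON(form, url) {
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(entriesToJSON(new FormData(form).entries())),
  });
}
```

Hook `submitAsJSON` up in a `submit` handler with `event.preventDefault()` and the server receives ordinary JSON instead of `application/x-www-form-urlencoded`.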


> Every time I see this stuff it is a form with two fields

Yup. No-one is suggesting using React for a login form with zero interactivity.

"why do we even need forklifts?! people can pick their laptops up with their hands!" ok thanks


Well, nowadays, you can implement a lot of logic for forms using HTML and pure CSS, without JS at all. Example:

  form:has(.conditional-checkbox:not(:checked)) .optional-part-of-form {
    display: none;
  }

I’m not saying it’s better (it’s not). Just saying there’s a lot of space between “just HTML” and “a web framework”. It’s worth considering these other options instead of going “full React” from the get-go.


But they are not in the actual Linux kernel; they are a separate module that does not adhere to Linux kernel code conventions. There is also no user-space driver other than NVIDIA's proprietary one. Anything further that is open source, such as NVK, is not being worked on by NVIDIA but by third parties.

Compared to AMD and Intel, NVIDIA is very much not an 'out of the box', or stable experience.


As with all things, the dose makes the poison. A tonne of incredibly useful medicine can kill you if dosed incorrectly.




> Public awareness that Tylenol causes liver failure which will last weeks before death might dissuade some.

I suspect it will: There's statistical evidence from how Britain migrated its cooking gas systems away from carbon-monoxide-heavy mixes [0] indicating overall suicide-rates are sensitive to convenience and involve short-term periods of vulnerability. As contrasted to "if they really want to they'll find a way no matter what." [1]

__________

[0] https://www.npr.org/2008/07/08/92319314/in-suicide-preventio...

[1] That said, I wouldn't be surprised if there's a bimodal distribution lurking in there, between "depressed but otherwise healthy" versus "terminal diagnosis and chronic pain." The latter-group might not be deterred by inconvenience.


Sure, but it’s pretty easy to overdose on paracetamol.

Since it’s a mild and really common painkiller, sometimes seen as not dangerous, someone uneducated about it who is really suffering could easily take 3 or 4 times the dose.

Unlike with a lot of drugs, you are not going to have a lot of immediate side effects if you overdose on paracetamol. You’ll just die horribly a few days later.


How can you reach adulthood and think it's OK to take 3 or 4x the indicated dose of any medication? If you're in that much pain, you go to a doctor. Which makes me think maybe the issue is the private health system, not the drug.


Conversely, my grandmother's Alzheimer's became apparent when she was overdosing on either ibuprofen or aspirin, I forget which.

She would take a dose for a headache, ten minutes later she'd forgotten so she'd take another dose, ten minutes later, she had forgotten, take another dose, rinse and repeat until it's time for a trip to the hospital.

Maybe it wasn't ten minutes between doses; probably less, given how much she'd ended up taking. This happened thirty or so years ago.


> someone uneducated about it who is really suffering could easily take 3 or 4 times the dose.

And the solution is simple: educate people about that.

And it's not hard to do: just have pharmacists say “respect the dose, as it will kill you if you don't” every time they sell it, and it'll work.


But Tylenol/acetaminophen/paracetamol is an over the counter drug, so there is no pharmacist involved in most sales. And I certainly wouldn't count on everyone reading the tiny text on the bottle.


That's entirely a sales (de)regulation problem though.

It's funny that in a country where you can win massive sums of money in frivolous lawsuits, a drug like paracetamol is sold without supervision.


That can be integrated with Playwright, or did you mean to say it is already used under the hood for their reports?


Gregor was saying it works without needing Playwright, and provides more detailed trace recordings than Playwright does.

We plan to use rrweb and maybe Browsertrix for our website archival/replay system for deterministic evals.


I have converted several large E2E test suites from Cypress to Playwright, and I can vouch that Playwright is the better option. Cypress seems to work well at first, but it is extremely legacy-heavy, its API is convoluted and unintuitive, and it stacks a bunch of libraries/frameworks together. In comparison, Playwright's API is much more intuitive; yes, you must 'await' a lot, but it is much easier to handle side effects (e.g. making API calls), since it can all just be promises.

It is also just really easy to write a performant test suite with Playwright: it is easy to parallelize, which is terrible in Cypress, almost intentionally so, to sell their cloud products, which you do not need. The way Playwright works just feels more intuitive and stable to me.


I have to be honest that I don't really understand the appeal of Ladybird from a purely technical perspective. It is written in C++, just like all the existing engines (yes there is some Swift, but it is negligible), so what benefit does it provide over Gecko or Blink? With Servo, I can see there is a distinct technical design around security and parallelism.


Many, many factors to consider. Simplistic take: KHTML was picked up by Apple because of its clean design and codebase; there's an extra 30 years of accumulated improvements in C++; you don't write stuff in Rust 1.0 either.

Also: Andreas has worked on Webkit/Blink. He knew what he was doing when he started the project, even if it was "just for fun". Linux started under similar circumstances.


Linux started under circumstances of Linus wanting to learn 386 assembler. He got two processes writing alternating As and Bs to demonstrate 386 multitasking. Linus had to find SunOS docs etc. to copy system call signatures from. It's truly an accidental success story, and while Linus is smart and capable, he sure as hell did not "know what he was doing"; the whole point was to learn and fiddle with it.

Starting a new browser project without a solid security architecture seems just a bad idea to me. It's such a hostile operating environment. Personal computing in the early 90s was a very different place.


Torvalds had his debate with prof. Tanenbaum (a specialist in OS research), stuck with his own opinions, and achieved an extraordinary success, continuing as a technical project leader and architect for 35+ years and counting. It's circumstantial, but not accidental.


There is more to a project this massive than its choice of language. For me though it's mostly about breaking from the monopoly and putting a check on Google's influence over browser space.


> There is more to a project this massive than its choice of language

For writing a game, or Figma replacement, or some cutesy something that runs on your machine without requiring network access, probably. For one of the most impactful applications that downloads untrusted code from the Internet and executes it without any confirmation whatsoever, the language for sure does matter. "Chrome RCE" is a phrase that strikes fear, and it's not a rare occurrence. I'll point out that Google is not lacking some of the most skilled security researchers and tooling in the world, so I wish the Ladybird folks godspeed doing their own "does this build have vulns?" work


Well, that's why they're going to rewrite, in Swift, the parts of the code where those things are more likely.


Rust does not produce magic in the assembly code. It does not even produce faster assembly code. The Rust toolchain on its own does not even produce assembly code; it just passes that to LLVM, which does THE ENTIRE optimization. Without LLVM (written in C++) doing the heavy lifting, Rust would probably be slower than V8 (written in C++) running JavaScript. There's no technical marvel in Servo compared to Ladybird. I don't understand the yapping about how a language makes projects good/bad, despite it being proven completely false again and again. The appeal is in independence and the politics around the project.


The express purpose of building Servo in the first place was to experiment with ways to parallelize the layout algorithm. The advantage of Rust is that it is a language which enables the compiler to better enforce some of the rules that need to be followed to write correct thread-safe code. Note that Mozilla had previously tried, more than once, to do the same thing in C++ for Gecko, and both attempts had failed.

As for the rest of your comment... I believe Rust now has MIR-based optimizations, so it's no longer the case that "THE ENTIRE optimization" is due to LLVM. But it's a non-sequitur to say that Rust would be slower without LLVM. Rust doesn't do many optimizations on its own, because it's quite frankly a lot of boring tedium to implement an entire optimizing compiler stack, and if you've got a library you can use to do that, why not? If no such library were available, Rust would merely be implementing all of those optimizations itself, much as V8 implements those optimizations itself.

