fwlr's comments | Hacker News

There are costs to doing ads (e.g. it burns social/political capital that could be used to defuse scandals or slow down hostile legislation, it consumes some fraction of your employees’ work hours, it may discourage some new talent from joining).

You have AGI, why do you care about new talent? You have AGI to do the ads. You have AGI to pick the best ads.

Isn't that the pitch of AGI? Solve any problem?


Yes. Infinite low cost intelligence labor to replace those pesky humans!

Really reminds me of the economics of slavery. Best way for line to go up is the ultimate suppression and subjugation of labor!

Hypothetically it can lead to a society that's free to stop wasting its life on work and pursue its passions instead. Most likely it’ll lead to a plantation-style, hungry-hungry-hippo ruling class taking the economy away from the rest of us.


If you had told me in 2011, when I first started discussing artificial intelligence, that in 2026 a trillion-dollar company would earnestly publish the statement “Our mission is to ensure AGI benefits all of humanity; our pursuit of advertising is always in support of that mission”, I would have tossed my laptop into the sea and taken up farming instead.

> I would have tossed my laptop into the sea and taken up farming instead.

You still can, no-one is stopping you now.


I thought your quote was hyperbole or an exaggerated summary of the post. Nope. It's literally taken verbatim. I can't believe someone wrote that down with a straight face... although to be honest it was probably written with AI

In 2011 I would've had trouble believing there could be a trillion-dollar AI company, but if there were such a company, I could almost have expected it to make such an asinine statement.

15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.


It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.
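
A rough sketch of the difference, with a made-up stand-in for the expensive work:

    // Stand-in for some expensive setup work (hypothetical):
    function buildHugeLookupTable() {
      return new Map(Array.from({ length: 1_000_000 }, (_, i) => [i, i * i]));
    }

    // Eager: pays its full cost during initial bundle evaluation,
    // on every page load.
    const eagerTable = buildHugeLookupTable();

    // Lazy: nearly free at load time. The engine parses the function
    // lazily and only compiles/optimizes it if it's actually called.
    let cached;
    function getTable() {
      return (cached ??= buildHugeLookupTable());
    }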

DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that work could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting, when the "tiny package that does one thing well" could've just written its own implementation of the simple thing it relies on.

Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put its logic right where it's used (in order to shorten the call tree).
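
A contrived sketch of the kind of nesting I mean (all the helper names here are made up):

    // Imagine each of these helpers living in its own micro-package,
    // each depending on the previous one:
    const isNil = (x) => x === null || x === undefined;
    const isEmptyString = (x) => isNil(x) || x === "";
    const isBlank = (x) => isEmptyString(x) || /^\s*$/.test(x);

    // Three frames deep at every call site, versus what a bundler
    // could have inlined it down to:
    const isBlankInlined = (x) => x == null || /^\s*$/.test(x);

    console.log(isBlank("  "), isBlankInlined("  ")); // true true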


That 15MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low-end devices there’s very little cache, so the working set is likely to be far bigger than available cache, which causes performance to crater.


Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
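
Something roughly like this is happening under the hood (`render`, the `#app` element, and the `data-internal` convention are all hypothetical stand-ins for whatever your router/framework does):

    // Intercept clicks on links marked as internal and fake the navigation:
    document.addEventListener("click", (e) => {
      const link = e.target.closest("a[data-internal]");
      if (!link) return;
      e.preventDefault();
      history.pushState({}, "", link.href); // new URL + history entry
      render(new URL(link.href).pathname);  // swap content, no page load
    });

    // Back/forward buttons fire popstate instead of a real navigation:
    window.addEventListener("popstate", () => render(location.pathname));

    // Hypothetical: whatever actually updates the DOM for a given route.
    function render(path) {
      document.querySelector("#app").textContent = `Now showing ${path}`;
    }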

Also, 15MB of JS is nothing on modern "low end devices". Even an old, $35 Raspberry Pi 2 won't flinch at that, and anything slower than that... isn't my problem! Haha =)

There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.

It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"


When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.

Please don't.


THANK YOU


This really is a very wrong take. My iPhone 11 isn't that old, but it struggles to render some websites that are Chrome-optimised. Heck, even my M1 Air has a hard time sometimes. It's almost 2026; we can certainly stop blaming the client for our shitty web development practices.


>There comes a point where supporting 10yo devices isn't worth it

Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.

I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile in a business sense, but where that point lies should be considered carefully, not picked arbitrarily with the consequences casually handwaved away with an "eh, not my problem".


Tangentially related: one of my favourite things about JavaScript is that it has so many different ways for the computer to “say no” (in the sense of “computer says no”): false, null, undefined, NaN, boolean coercion of 0/“”, throwing errors, ...

It’s common to see groaning about double-equal vs triple-equal comparison, and eye-rolling directed at absurdly large tables like the one in https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guid... but I think it’s genuinely great that we have the ability to distinguish between concepts like “explicitly not present” and “absent”.
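
For example, `in`, `==`, `===`, and `Number.isNaN` each draw a different one of those distinctions:

    const obj = { a: null, b: undefined };

    console.log("a" in obj);          // true: present, explicitly set to "nothing"
    console.log("b" in obj);          // true: present, but undefined
    console.log("c" in obj);          // false: absent entirely
    console.log(obj.b === obj.c);     // true: both undefined, different stories
    console.log(obj.a == undefined);  // true: loose equality lumps null in
    console.log(obj.a === undefined); // false: strict equality keeps them apart
    console.log(Number.isNaN(NaN));   // true: NaN won't even equal itself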


From quickly messing around in the playground, it seems that (in math mode) Typst treats multiple spaces identically to single spaces. A simple, consistent, flexible, and probably-not-majorly-breaking-old-documents rule would be: “anything with no spaces has higher precedence / tighter binding than anything with one space, anything with one space has higher precedence / tighter binding than anything with two spaces”, etc., and then - only within each space-count category - you apply one of the precedence rulesets described in the article. Any confusion or surprise can be solved intuitively and without thought by mashing the spacebar.
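
A minimal sketch of that rule in JavaScript (the grouping strategy is my own illustration, not anything Typst actually does): recursively split the expression at its widest runs of spaces first, so wider gaps bind looser.

    // Group an expression so that wider runs of spaces bind looser,
    // e.g. "a+b  *  c+d" groups as (a+b) * (c+d).
    function groupBySpaces(expr) {
      const runs = expr.match(/ +/g) ?? [];
      if (runs.length === 0) return expr; // no spaces: tightest binding
      const widest = Math.max(...runs.map((r) => r.length));
      return expr
        .split(" ".repeat(widest))
        .map((part) => groupBySpaces(part.trim()));
    }

    console.log(groupBySpaces("a+b  *  c+d")); // [ 'a+b', '*', 'c+d' ]
    console.log(groupBySpaces("a + b*c"));     // [ 'a', '+', 'b*c' ]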


I agree with you conceptually, and am also laughing a bit thinking about how many people get angry about significant whitespace in Python and how much deeper down that rabbit hole "operator precedence changes based on whitespace" this proposal is :D


Scrolling up and down the list, just how onerous is this reporting regulation? It seems almost cartoonishly excessive, even for critical safety applications.


Literally no amount of incident reporting is excessive when it comes to nuclear power. Not just because of the safety of the plant itself, but because so much is reliant on it.

It's important to identify even small defects or incidents so that patterns can be noticed before they turn into larger issues. You see the same breaker tripping at 3x the rate of other ones, and even though maybe nothing was damaged you now know there's something to investigate.


Aaaand it’s this alarmist attitude which is why we don’t have abundant cheap nuclear energy.

Sea-drilling rigs (oil) have far more potential for environmental damage than modern nuclear plants.

Yet they have no federal public register for when a worker falls overboard (an incident far more likely to result in death).


> Sea-drilling rigs (oil) have far more potential for environmental damage than modern nuclear plants

Key word: "modern". A key aspect of a modern nuclear plant, that supports its high level of safety, is the required incident reporting and followup.

The relevant issue is not really about a single worker being injured or dying. It's about detecting safety issues which could lead to a catastrophe far beyond what a sea oil drilling rig can, at least when it comes to human life and habitability of the surrounding area.

For example, after Chernobyl, much of Europe had to deal with contamination from cesium-137.

The entire planet's geological history shows when the nuclear age started, because humans are irresponsible in aggregate. (See also global warming.)

> Aaaand it’s this alarmist attitude ...

You're providing an object lesson in why humans can't really be trusted to operate systems like this over the long term.


> You're providing an object lesson in why humans can't really be trusted to operate systems like this over the long term.

Ironically so are you. The coal we burn puts far more radioactivity into the environment than nuclear plants do. Yet we make sure nuclear isn't viable and burn coal like crazy. We do this only because of the type of risk telescoping you are doing. If you do a rational risk assessment, you will see that even operating nuclear plants as shown in the Simpsons would have less risk than what we are doing now. There is a risk to doing nothing. You are missing that part in your assessment.


> It's about detecting safety issues which could lead to a catastrophe far beyond what a sea oil drilling rig can

A worker falling into a reactor pool (which is just room temp water with very little risk) is not a catastrophe, yet due to the absurd safetyism surrounding nuclear it requires a federal report.

We don’t require this level of cost insanity for far more deadly worker events at oil, gas, solar or wind facilities.

There is no systemic risk from worker falls. MAYBE the plant in question should address hand railing heights from pre-ADA construction. It certainly shouldn’t require multiple federal government employees to create a report on it and be publicly listed in federal register and reported on by hundreds of news outlets.

You’re making my point.


> We don’t require this level of cost insanity for far more deadly worker events at oil, gas, solar or wind facilities.

This is not saying what you think it's saying.


> Not just because of the safety of the plant itself, but because so much is reliant on it.

When an oil rig has an incident, cities and hospitals and food storage and logistics aren't disrupted.


Having the infrastructure for reporting incidents is the expensive part.

Doing it often doesn’t really add to the cost. More reporting is helpful because it makes explicit that even routine operational issues can hold lessons worth learning. It also keeps the reporting system running and operationally well maintained.

WebPKI does this as well.


I believe this is a case of “developers who went into the wallet business”, actually.


In code, the semantic difference is pretty small between “select one at random” and “select two at random and perform a trivial comparison” - roughly the same as the difference from there to “select three at random and perform two trivial comparisons”. That is, they are all just specific instances of the “best-of-x” algorithm: “select x at random and perform x-1 comparisons”. It’s natural to wonder why going from “best-of-1” to “best-of-2” makes such a big difference, but going from “best-of-2” to “best-of-3” doesn’t.

In complexity analysis however it is the presence or absence of “comparisons” that makes all the difference. “Best-of-1” does not have comparisons, while “best-of-2”, “best-of-3”, etc., do have comparisons. There’s a weaker “selections” class, and a more powerful “selections+comparisons” class. Doing more comparisons might move you around within the internal rankings of the “selections+comparisons” class but the differences within the class are small compared to the differences between the classes.

An alternative, less rigorous intuition: behind door number 1 is a Lamborghini, behind door number 2 is a Toyota, and behind door number 3 is cancer. Upgrading to “best of 2” ensures you will never get cancer, while upgrading again to “best of 3” merely gets you a sweeter ride.
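
To make that concrete, here's a quick balls-into-bins simulation (my own sketch, assuming the classic load-balancing setting this result comes from): throw n balls into n bins, placing each ball in the least-loaded of x randomly sampled bins, and report the fullest bin.

    // Throw n balls into n bins; for each ball, sample `choices` bins
    // at random and put the ball in the least-loaded of them.
    function maxLoad(n, choices) {
      const bins = new Array(n).fill(0);
      for (let ball = 0; ball < n; ball++) {
        let best = Math.floor(Math.random() * n);
        for (let c = 1; c < choices; c++) {
          const alt = Math.floor(Math.random() * n);
          if (bins[alt] < bins[best]) best = alt; // the "trivial comparison"
        }
        bins[best]++;
      }
      return bins.reduce((a, b) => Math.max(a, b), 0);
    }

    const n = 100_000;
    for (const choices of [1, 2, 3]) {
      console.log(`best-of-${choices}: fullest bin = ${maxLoad(n, choices)}`);
    }
    // Typical run: best-of-1 gives ~7-8, best-of-2 gives ~4, best-of-3
    // gives ~3. The huge drop happens at the first comparison.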


JavaScript promises are objects with a resolver function and an internal asynchronous computation. At some point in the future, the asynchronous computation will complete, and at that point the promise will call its resolver function with the return value of the computation.

`prom.then(fn)` creates a new promise. The new promise’s resolver function is the `fn` inside `then(fn)`, and the new promise’s asynchronous computation is the completion of the original promise.
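
A small sketch of that model in action:

    // The internal asynchronous computation: produce 42 after ~100ms.
    const prom = new Promise((resolve) => {
      setTimeout(() => resolve(42), 100);
    });

    // `then(fn)` returns a new promise that waits on `prom`, then
    // computes its own value by running `fn` on the result.
    const next = prom.then((value) => value + 1);

    next.then((value) => console.log(value)); // logs 43 after ~100ms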


Seems very likely this will lead to “professional repackagers” whose business model is “for a fee you may install our fork of curl and we will promptly reply to emails like this”, unfortunately.


Red Hat would be smart to get in on this

