Hacker News | UncleEntity's comments

>> Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”

Any individual Sunday service is nearly cost free if we don't count the 100+ years it took to build the church...


Lol anyway, the point is that even in a scenario where all the major models disappeared tomorrow (including OpenAI, Anthropic, etc), we would still keep using the existing open source models (GLM, Deepseek, Qwen) for a long long time.

There's no scenario where AI goes away completely.

I don't think the "major AI services go away completely" scenario is realistic at all when you look at those companies' revenue and customer demand, but that's a different debate I guess.


> There's no scenario where AI goes away completely.

the scenario is: if training becomes impossible (for any reason), the currently available models quickly become out of date

say this had happened 30 years ago

today, would you be using an "AI" that only supported up to COBOL?


There's no reason I can think of where this isn't the case.

I mean, we're not even up to the "Model T" era of AI development and more like in the 'coach-built' phase where every individual instance needs a bunch of custom work and tuning. Just wait until they get them down to where every Teddy Ruxpin has a full LLM running on a few AA batteries and then see where the market lands.

I always imagine these AI discussions in the context of a bunch of horses discussing these 'horseless carriages' circa 1900...


>> Did I miss anything?

Derived operators?

And, 'A B C' as an array isn't valid (ISO) APL but an extension; the 'array syntax' only covers numbers, and the parser is supposed to treat it as a single token.
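A toy sketch (in Python, since I'm not going to write a tokenizer in APL; all names here are made up) of the distinction: a run of numeric literals is one vector token at the lexing level, while juxtaposed names come out as separate tokens and only become an array via the stranding extension.

```python
import re

# Toy lexer illustrating the ISO APL rule: a space-separated run of
# numbers is ONE numeric-vector token, while 'A B C' is three separate
# name tokens (stranding them into an array is a vendor extension,
# not part of the ISO array syntax).
TOKEN_RE = re.compile(
    r"(?P<numvec>¯?\d+(?:\.\d+)?(?:\s+¯?\d+(?:\.\d+)?)+)"  # numeric vector literal
    r"|(?P<num>¯?\d+(?:\.\d+)?)"                           # single number
    r"|(?P<name>[A-Za-z][A-Za-z0-9]*)"                     # identifier
    r"|(?P<op>\S)"                                         # anything else
)

def tokenize(src: str):
    return [(m.lastgroup, m.group()) for m in TOKEN_RE.finditer(src)]

print(tokenize("1 2 3"))  # one 'numvec' token: [('numvec', '1 2 3')]
print(tokenize("A B C"))  # three 'name' tokens
```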

Your useless information of the day...


Yeah, if you can somehow convince them you really, really want them to follow the specification and not just do whatever they want.

And it doesn't matter how many times you tell them that the implementation and, more importantly, the tests need to follow the spec 100%: they'll still write tests to match the buggy code, or just ignore bugs completely until you call them out on it and/or watch them like a hawk.

Maybe I'm just holding it wrong, who knows?


Would that be Rapidly Decompressing Capacitors?

...only know what an inductor is from watching a video on the youtubes where they were talking about using them on the suspensions of F1 cars and explained their relationship to electronic circuits; forget what their actual name is.


Maybe something like a constitutional republic of independent states?

The only way to ensure one political party doesn't seize full and complete control over the entire thing and bend its will to exclusively their goals.


Yeah maybe some system with checks and balances.


Isn't the "Disposable System" in TFA the actual AI?

Today you have to have it write some software to accomplish a task, and it's pretty obvious what's going on. But when the AI itself becomes the UI it doesn't matter as much: are the steps to complete a task the goal, or is it the end result?

The only thing really stopping the commodification of software is the development of said software.


>> What is the point?

To replace humans permanently from the work force so they can focus on the things which matter like being good pets?

Or good techno-serfs...


>> ...or a shitty context

This is my guess; sometimes it churns through things without a care in the world, and other times it seems to be intentionally annoying, eating up the token quota without doing anything productive.

Kind of have to see which mode it's in before turning it loose unsupervised and keep an eye on it just in case it decides to get stupid and/or lazy.


I've been working on this thing where the proofs (using the esbmc library) check the safety properties and the unit tests check the correctness, so the state space doesn't explode and the verification doesn't take a year to run. Been working out pretty well so far (aside from spending more time tracking down esbmc bugs than working on my own code) and it's found some real issues, mostly integer overflow errors but other ones too.
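A toy Python sketch of that division of labour (the real setup runs esbmc on the actual code; the bounded exhaustive loop below is just a stand-in for the prover, and every name here is illustrative):

```python
# Model 32-bit signed arithmetic explicitly, since Python ints don't overflow.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add_i32(a: int, b: int) -> int:
    """32-bit signed add; raises if the true sum leaves the int32 range."""
    s = a + b
    if not (INT_MIN <= s <= INT_MAX):
        raise OverflowError(f"{a} + {b} overflows int32")
    return s

def midpoint(lo: int, hi: int) -> int:
    # (lo + hi) // 2 overflows for large same-sign inputs;
    # lo + (hi - lo) // 2 stays in range whenever hi - lo does.
    return add_i32(lo, add_i32(hi, -lo) // 2)

# "Proof" stand-in: exhaustively check the safety property (no overflow
# raised) over corner-heavy inputs, under the precondition the proof assumes.
corners = [INT_MIN, INT_MIN + 1, -1, 0, 1, INT_MAX - 1, INT_MAX]
for lo in corners:
    for hi in corners:
        if 0 <= hi - lo <= INT_MAX:  # assumed precondition
            midpoint(lo, hi)

# Unit tests: correctness pinned down on concrete values, which is far
# cheaper than proving full functional correctness.
assert midpoint(0, 10) == 5
assert midpoint(INT_MAX - 2, INT_MAX) == INT_MAX - 1
```

The split keeps the expensive tool (the model checker) on a cheap universally-quantified property, and the cheap tool (unit tests) on the expensive question of what the function should actually compute.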

Kind of loosely based on the paper "A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification" (https://arxiv.org/abs/2305.14752) which, I believe, was posted to HN not too long ago.

