Hacker News | gmerc's comments

AI Slop

It's the old playbook again. They're using massive amounts of money to distort the market until the competition is bled dry, while also operating the platform and using signal from the platform to target their competitors. A classic DMA violation, really. This all boils down to Chinese vendors getting banned from the market for "national security reasons", because if not, this all dies in a fire for Google's investors. Nothing a gold Pixel phone to the right places can't fix.

Presumably ...? It's the business model. Subsidize until the competition is down to 2, then extract. That's the entire Valley. Which is why the Chinese and open source need to be pushed out of the market for the whole banana to work.

Yup. The entire business models of Google (or rather, Gemini), OpenAI and Anthropic stand on open source models not being as good.

They're literally all just a single open source model away from effectively becoming trillion dollar paperweights.


That's what scaling compute depth to respond to the competition looks like: lighting those dollars on fire.

Anyone with a brain can see that GitHub will have to be enshittified at scale under the pressure created by AI code (and everything else) generation. These guys are just getting ahead of the curve.

Until 2 remain, then it's extraction time.

Or self-host the OSS models on the second-hand GPUs and RAM that are left when the big labs implode.

China will stop releasing open-weights models as soon as they get within striking range; cf. Seedance 2.0.

ByteDance never really open sourced their models though. But I agree, they will only open source when it doesn't really matter.

Have you heard about this thing called an AI coding agent....

If you know the author you know it's a match made in heaven

Maybe OpenAI has finally learned that it can't dance at all the parties at once when every party is progressing towards "commodity".

Why do people fall for this? We're compressing knowledge, including the source code of SQLite, into storage, then retrieving it and shifting it along latents at tremendous cost in a while loop, basically brute-forcing a franken-version of the original.

Because virtually all software is not novel. For every partially novel thing, there are tens of thousands of CRUD apps with just slightly different flows and data. This is what almost every employed programmer does right now: match previous patterns and produce a solution that's closer to the company's requirements. And if we can brute-force that quickly, that's beneficial for many people.

> Because virtually all software is not novel.

That isn't true, not by a long shot. Improvements happen because someone is inspired to do something differently.

How will that ever happen if we're obsessed with proving we can reimplement shit that's already great?


At the code level it's still rehashing the same ideas over and over again. I've written lots of things: software 3D on a weird system, a JIT, websites, telephony software, compilers, firmware for hardware, cloud orchestration, and much else, and none of it was novel. Someone had written every single pattern in them before, even if nobody had put them together the same way. Putting known pieces together is not novel. And as a proportion, almost all software produced is just business apps of various types, with absolutely nothing novel in them.

Also, among actual researchers, I know just one person who did something actually novel, and it was with queuing.


> At the code level it's still rehashing the same ideas over and over again.

I agree that rehashing the same ideas over and over again is sufficient - for some strange, complacent definition of the word. It's not the only way to think about the discipline, and thank goodness enough smart people realize that.

> Also from actual researchers, I know just one person who did something actually novel and it was with queuing.

Think how many people have to be trying at any given time for it to happen at all.


I agree.

While I'm generally sympathetic to the idea that human and LLM creativity are broadly similar (combining ideas absorbed elsewhere in new ways), when we ask for something that already exists it's basically just laundering open source code.


Months (years?) of publicity from AI companies telling us that AI is nearing AGI and will replace programmers. Some people are excited about that future and want it now.

In reality, LLMs can (currently) build worse versions of things that already exist: a worse database than SQLite, a worse C compiler than GCC, a worse website than one made by a human. I'd really like to see some agent create a better version of something that already exists, or at least something relatively novel.


> a worse database than SQLite, a worse C compiler than GCC, a worse website than one made by a human.

But it enables people who can't do these things at all to appear to be able to do these things and claim reputation and acclaim that they don't deserve for skills they don't have.


License laundering and the ability to not credit or pay the original developers.

Laundering public domain code no less

A copyright-laundering machine, which could poison the very notion of IP/copyright, whether open or closed source. The only code that can't be laundered becomes code hidden behind a server API.
