Hacker News | midnitewarrior's comments

If model makers adopt an LTS model with an extended EOL for certain model versions, these chips would make that very affordable.

I fed this to Claude, and it makes an interesting point about how the Poison Fountain would help concentrate AI in the hands of those who can filter out the poison, and keep it out of the hands of low-budget / open source efforts to build more equitable models that cannot afford to filter it out.

> But the strategy is incoherent in a way that bothers me. The framing is "machine intelligence is a threat to the human species, therefore poison the training data." But poisoned training data doesn't make AI disappear — it makes open and smaller models worse while barely denting organizations with the resources to detect and filter adversarial data. Google, Anthropic, OpenAI all have data quality pipelines specifically designed to catch this kind of thing. The people most hurt would be smaller open-source efforts and researchers with fewer resources. So the actual effect is likely to concentrate AI power further among the largest players — the exact opposite of what someone worried about existential risk from AI should want.


It's a valid concern, and one that was raised on Reddit a few times too.

But if you're building an open and fair model, I hope you're not just sucking up the entire web and training it on endless stolen data, DoS'ing open source projects constantly. If you just send out crawlers to consume everything, expect some poison. So maybe don't build models that way.


Yes but back then, everybody just got bored of their Ataris and moved on.

Nobody is throwing out their phone or computer. Software will still be needed.

That said, there will be a lot of noise. With 100 choices in each category, how does one rise to the top? Is it simply the one that sticks around the longest and doesn't become abandonware?


Accountants will still exist, but we'll need fewer of them at any given time. In your example of flipping the 80/20 ratio, you are implying that each accountant would be able to (theoretically) handle a 5x workload with AI making up the gap.

Perhaps in reality more like a 3x advantage, due to human inefficiencies and the overhead of scaling the business to handle more clients.

Given that, a 3x increase in productivity implies we either need one-third as many accountants, or the increased supply of accountancy brings down prices and more clients start hiring accountants due to affordability.
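The ratio above is just back-of-envelope arithmetic; a quick sketch (the client counts are made up for illustration):

```python
# Hypothetical numbers: how many accountants cover the same client
# base if AI gives each one an assumed 3x productivity multiplier?
clients = 900          # illustrative current client base
per_accountant = 30    # clients one accountant handles today
multiplier = 3         # assumed AI productivity gain

accountants_today = clients / per_accountant                    # 30.0
accountants_with_ai = clients / (per_accountant * multiplier)   # 10.0
print(accountants_today, accountants_with_ai)
```

Same outcome either way: one-third the headcount for the same work, unless cheaper accountancy grows the client base to absorb the slack.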


Do that and the AI might fork the repo, address all the outstanding issues and split your users. The code quality may not be there now, but it will be soon.

This is a fantasy that virtually never comes to fruition. The vast majority of forks are dead within weeks when the forkers realize how much effort goes into building and maintaining the project, on top of starting with zero users.

While true, there are projects which surmount these hurdles because the people involved realize how important the project is. Given projects that are important enough, the bots will organize and coordinate. This is how that Anthropic developer got several agents working in parallel to write a C compiler in Rust, though, granted, he created the coordination framework.

I think the difference now (assuming code quality gets solved with LLMs) is that the cost of effort is approaching zero.

Good enough AI is not cheap (yet). So at the moment it's more a scenario for people who are rich enough. Though small projects with little maintenance burden might be at risk here.

But thinking about it, this might be a new route into another xz-utils situation. Big malicious actors have enough money to waste and can scale up the number of projects they attack and hijack, or even build themselves.


XZ Utils is EXACTLY what came to mind for me.

That exploit / takeover happened precisely because an angry user was bullying a project maintainer, and then a "white knight" came in to save the day and defend the maintainer against the demanding users.

In reality, both the problem and the solution were manufactured by the social engineer, but bullying the maintainer was the vector that this exploited.

What happens when agents are used to do this sort of thing at scale?


This might be true today, but think about it. This is a new scenario, where a giga-brain-sized <insert_role_here> works tirelessly 24/7 improving code. Imagine it starts to fork repos. Imagine it can eventually outpace human contributors, not only in volume (which it already can), but in attention to detail and the usefulness of the resulting code. Now imagine the forks overtake the original projects. This is not just "Will Smith eating spaghetti", it's a real breaking point.

I'm equal parts frightened and amazed.


If your bot is actually capable of doing as you say, why waste time forking OSS repos? Why not instruct it to start 1,000 new tech startups and generate you tons of money? I can "think about" winning the lottery with just as much rigor and effect as daydreaming about the kind of all-encompassing intelligence you describe.

Maybe it's time to stop being "frightened and amazed" and come back to reality.


Because everyone's bot will be trying to start a business and make money at the same time, hence nobody will make any money.

Now, technically, if bots actually improve tech, the cost of actual products/services will go down because of competition.

You keep saying 'come back to reality' but drag someone from 200 years ago to today and our reality would be so shocking they may not recover from it.


> The code quality may not be there now, but it will be soon.

I've been hearing this exact argument since 2002 or so. Even Duke Nukem Forever got released in that time frame.

I bet even Tesla might solve Autopilot(TM) problems before this becomes a plausible reality.


I mean in 1850 I kept hearing heavier than air flight was just a year away, and yet here we are without heavier than air flight...

I am perfectly willing to take that risk. Hell, I'll even throw ten bucks on it while we're here.

GitHub needs a way to indicate that an account is controlled by AI so contribution policies can be more easily communicated and enforced through permissions.

Well, GitHub is Microsoft, who bet everything on AI and are trying to force-feed it into everything. So I wouldn't hold my breath. Maybe an agent that detects AI.

3 years too late to get serious about GPUs.


Not at all. There is no indication that the world won't need more GPUs going forward.


Intel started making and selling their own GPUs many years ago; this news is just that they are going to fab the chips themselves instead of outsourcing to TSMC.


It's been too late since Sony and Microsoft chose AMD64 SoCs (CPU+GPU) as their gaming platforms. AMD has been the only provider for over 10 years.


You can accept applications online but require that a printed QR code be mailed via post with a handwritten cover letter, achieving the personal intent, inconvenience tax, and cost while still providing the convenience of digital resumes and applications. The mailed QR serves as a CAPTCHA.
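A minimal sketch of one way such a mailed-QR check could work (the scheme and all names here are hypothetical, not anything an ATS actually implements): the server signs a per-application token, the applicant prints that token's QR code and mails it, and the scanned letter is verified against the same key. QR rendering itself is omitted.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side secret; a real deployment would persist this.
SERVER_KEY = secrets.token_bytes(32)

def issue_token(application_id: str) -> str:
    """Token the applicant encodes as a QR code and mails in."""
    sig = hmac.new(SERVER_KEY, application_id.encode(), hashlib.sha256).hexdigest()
    return f"{application_id}:{sig}"

def verify_token(token: str) -> bool:
    """Run against the QR scanned from the mailed letter."""
    application_id, _, sig = token.partition(":")
    expected = hmac.new(SERVER_KEY, application_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because only the server holds the key, a letter with a valid token proves the sender actually went through the online application and then paid the postal inconvenience tax; a forged or tampered QR fails verification.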


idk if you invented this, but "add a ? to the end to get an AI response" seems like a convention that should catch on.

Well done!


OMG you've noticed!

I was so proud when I came up with it, but I don't think anyone ever commented on it until now.

and I agree. It needs to catch on!


...says the company that is not under any threat of being replaced by AI.


