Hacker News | LordHumungous's comments

What if I tell the model to go commit fraud or crimes and it complies? What if users are having psychotic episodes driven by their interactions with the model?

Just because safety is a hard and messy problem doesn't mean we should just wash our hands of it.


It is a hard and messy problem, and it doesn't help when people muddy the water further by stirring things like "Don't commit fraud," "Don't infringe on Disney's trademarks," and "Don't be racist" into the mix and trying to lump them all under the "Safety" umbrella.

Maybe this is an outdated definition, but I've always thought of safety as being about preventing injury. Things like safety glasses and hardhats on the work site, warning about slippery floors and so on. I think people are trying to expand the word to mean a great many more things in the context of AI, which doesn't help when it comes to focusing on it.

I think we need a different, clearer word for "The AI output shouldn't contain certain unauthorized things."


The messier a problem is, the less it should be decoupled and siloed into its own team.

Instead of driving actual improvement on the subject (safety, security, you name it), it becomes a checkbox exercise, and the metrics and bureaucracies become increasingly decoupled from truth.


But think about how much money there is to be made by just ignoring it all!

Why not rewrite if code is free

Suppose that, for them, generating code is indeed free (tokens), but autonomous coding is not solved (at least not yet).

If it gets solved, (1) we are all f....d! Not only software engineers. (2) They can rewrite it in Assembly. (they == AGI)


However there are many people out there making the argument that code is free or nearly so. I think the article is directed at them.

Usually all code has an owner, though. If I encounter a bug, the first thing I often do is look at git blame to see who wrote the code, then ask them for help.
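A minimal sketch of that lookup (the file path and line number here are hypothetical):

```shell
# Hypothetical file/line: find who last changed line 42 of src/app.py.
git blame -L 42,42 -- src/app.py

# Then the full change history of just that line, one commit per entry.
git log -L 42,42:src/app.py --oneline
```

The `-L` range keeps the output to the line you care about instead of annotating the whole file.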

Kind of funny ngl

If the worst predictions about AI's effect on employment turn out to be correct, then I'd expect to see movements to force government regulation of AI. Particularly if it becomes the case that profits are accruing to a few massive corporations who run the AI.

There is no reason people have to tolerate a technology that is destructive to society, any more than they have to tolerate companies selling fentanyl at 7/11.


You are thinking about this as a reasonable, compassionate human being who is at the very least neutral toward your fellow man’s well being.

The psychos who run the show don’t think like that. Many of them enjoy abusing other people.

They will wall themselves off behind robots with instructions to kill, to control the masses.

Unless power is given to the people, urgently, through widespread (direct-)democratic reform.


I wish it were just psychos with power causing these issues. It's worse than that, I think. It's the competition-based systems of human organization that will produce what you're describing.

Even if people in one country manage to get rid of the psychos and give power back to the people, the countries that continue at full speed to full automation of the economy and the military will just win the competition over resources and power. For as long as our economic system and the systems that govern relationships between countries are based on competition, we are forced to continue on this path. The ones that choose not to will lose.

We would need to quickly build systems based on cooperation instead of competition if we want to avoid a disaster. No more markets, no more competing nation states. Probably an impossible task considering we don't have much time left before we have automated systems that make it impossible for people to take back control from the owners of those systems.


I believe you are fundamentally wrong.

I believe decentralized, democratic systems (not the sham imposed in most countries) are inherently _better_ than autocratic rule, and will produce better rules for the whole.

Competition is good, but must be done by rules enforced by the global community.


Could be, if we can come up with efficient ways to govern using direct democracy that lead to better decisions than what we now have. I don't see much work being done to come up with such a system, though.

Already implemented in Switzerland.

What they do there is not enough. All decisions should be made using some form of direct democracy, otherwise you leave an opening for power concentration again, which will lead to the same problems we're facing now. And we can't make every decision via referendum, so we'd need other forms as well.

> then I'd expect to see movements to force government regulation of AI

The big question that never gets answered is: What regulation, specifically?

Any one country could come out and declare that AI can’t be used or just be taxed at an exorbitant rate or something along those lines, but what would happen? The AI usage would go to another country.

If the US heavily regulated AI, China just runs away with it all. None of the calls for regulation I’ve seen have an answer for this, aside from the completely crazy calls to bomb data centers in other countries.


Most people are far more concerned about their livelihood than about abstract notions of beating China in a game of geopolitics. In that scenario the US becomes an isolated economy, and other nations likely follow suit. It will be a poorer, less dynamic world, but most people will choose that outcome over poverty.

Where were those concerned people when they lost their jobs due to production moving to China?

Seems like what you are saying already failed in the past.


very easy.

nationalize. the people own the AI and the profits go to them.


I'd expect it to be more dramatic. The worst predictions are something like "50% of all jobs will be completely eliminated in two years." That's violent uprising territory, not pressure on governments to improve regulation.

> The worst predictions are something like "50% of all jobs will be completely eliminated in two years."

Yes but we’ve been hearing this for two years now and it’s not happening.

Even the silly AI2027 project was predicting society destroying levels of AI arriving next year and that has aged poorly.


If "50% of all jobs will be completely eliminated in two years" comes to pass, then there will be a violent contraction, perhaps even a total collapse, of all advanced economies. In this eventuality, the venture capitalists who funded the rise of AI will lose their money.

If a large percentage of jobs have not been eliminated in two years' time, it will be because AI has largely failed to deliver on its boosters' predictions. In this eventuality, the venture capitalists who funded the rise of AI will lose their money.

What's the end game for these people?


>I'd expect to see movements to force government regulation of AI.

I agree. It will be an interesting debate to watch play out, because a) lots of end-users love using AI and will be loath to give it up, and b) advances in compute will almost certainly allow us to run current frontier models (or better) locally on our laptops and phones, which means that profits no longer accrue to a few massive AI labs. It would also make regulating it a lot trickier, since kneecapping the AI labs would no longer effectively regulate the technology.


That would be an interesting scenario.

Yes, and that’s the least catastrophic option. I get the sense the boosters don’t read a lot of history.

> There is no reason people have to tolerate a technology that is destructive to society,

All evidence to the contrary. Aside from the French occasionally burning down some cars, Western populations (me unfortunately included) have become remarkably relaxed about such things.

Even very extreme examples, like the government's blatant refusal to investigate absolutely horrific stuff like Epstein, get at most some mildly upset TikTok reels.

Add some aggressive lobbying by big tech, and perhaps a sprinkle of Palantir population monitoring, and I don't think we'll see a refusal to tolerate at scale.


There's a big difference between the Epstein files and 50% of your voters losing their jobs and being unable to find another one at similar pay-rates.

One of those is an annoyance, the other is full blown revolution territory.


> For apps that run locally—no servers, no cloud costs—subscriptions make no sense anymore.

Did it ever make sense? I always scoffed at the idea of paying a subscription to use a text editor or paint tool.


It never made sense, it was just possible to get away with it because there's often been no alternative for many people.

Good riddance to software subscriptions.

I hope proprietary software goes the same way entirely. If it's trivial to build an open source competitor, why pay for software you can't modify (equally trivially)?


> Good riddance to software subscriptions.

Counter-argument: at what point is software still profitable to sell?

I am still running Office 2007, and that thing is now almost 20 years old. That was a one-time sale, with no further revenue for Microsoft.

I am not condoning subscriptions, but selling software once only works well if you're a small team with low overhead. The more you sell, the more support becomes an issue. And normal customers do not pay for support.

Making software has become easier with LLMs, but the same problem remains with support. Sure, you can outsource this to LLMs, but let's just say that is problematic (being kind).

So unless you plan on making software that doesn't need heavy support/updates, and keep your single-person/team cost low...

Say you sold a program for a one-time fee of $39.

What if somebody now sells the same thing for $29 with LLMs? And the next guy in China does it even cheaper, because his overhead is even smaller. Eventually you get into abandonware, where software is made just to eat sales from the bigger guy and that is it.

Unless you focus on companies, which have far less issue paying for subscriptions (if support is included). You see the problem. People tend to overlook the cost of actually running a self-employed business or a company (a MAJOR cost the moment you need to hire somebody).

So no, I do not see subscriptions going away, because companies will pay for them. And on the consumer level, paid support as the solution?


I buy the support argument for companies.

I also buy the argument that a lot of time people are actually paying for cloud storage. While I'd love to see a generic protocol for cloud or self-hosted storage that every app can sync to, I expect we'll continue to see subscription software persist by locking down and gatekeeping cloud storage and sync, too.

But really I would be happy for that to go away.

I don't use much software that's sold in any way[0], and I'd prefer it to be none. The ideal situation is for it to always be better to collaborate on open source software than to build in private and keep it to yourself.

[0] I do donate to projects I like and use, though


>The ideal situation is for it to always be better to collaborate on open source software than to build in private and keep it to yourself.

This works for some software (developer tools being the prime example) but not so much for other things. Who is going to maintain, without recompense, the MTD software I used for my VAT returns? Who is going to update the PAYE payroll software I relied on?

Even with developer tools I feel we will lose something without companies like JetBrains. There would be no Kotlin without people paying for their software.

That's before you think about huge corporations leeching off of our free work or the AI companies vacuuming up open source only to regurgitate it for $200 a month.

Developers should consider that most people value things by how much they pay and if they aren't paying anything then you and your work can't have much value.


> Developers should consider that most people value things by how much they pay and if they aren't paying anything then you and your work can't have much value.

Most people in most situations pay as little as they can get away with, not as much as they value the product or service.

The only time this is arguably somewhat untrue is when the point of having the thing is to signal wealth, but even someone buying e.g. a Rolex wants to do so as cheaply as possible, so it's only really true when they're directly spending the money in front of people (think bar, restaurant, nightclub, etc.)

I agree that right now it's mostly developer tools that are doing best in terms of open source. But browsers, operating systems, 3D modelling software, image/photo editing, and many others are not so far behind either.

My assertion/belief[0], though, is that the direction of travel is for open source to become dominant in more and more classes of software, especially as AI reduces the cost of contribution and collaboration, and disincentivises closed, proprietary software.

[0] Based on what I and others around me have been able to do with AI already and how fast it is moving.


Agreed, the abundance of many apps and the fact that subscriptions and paid apps are going to zero means anyone can make an app for themselves or use an open source one.

No need to pay for someone else’s one.


And we will all live on love and fresh water; can't wait for that world!

Seriously, you pay for software so people can make a living to improve it. It is a service like anything else.


The paid software I use is the worst software I use.


>Did it ever make sense? I always scoffed at the idea of paying a subscription to use a text editor or paint tool.

Whether it made sense for you is mostly irrelevant. The question is whether it worked for the developer. You can read endless complaints about Adobe's subscription model but the profits kept rolling in.


(Assuming you're okay with paying for software, just not an ongoing subscription)

Reasons why subscriptions may be a "better" than upfront licenses, even when the subscription cost more in the long run:

1. Cashflow management

2. Bypass budget approvals due to smaller amounts


LLMs finally deliver on the crochety front end dev's dream of writing everything in vanilla JS. Hallelujah.


I have mixed feelings. On the one hand, I totally feel what this author is saying. On the other hand, I love that I am now able to push into areas I could never have touched before, and complete successful projects in them.


Do people actually have success with agent orchestrators? I find that it quickly overwhelms my ability to keep track of what it's doing.


This is the fracture in the industry I don't think we are talking about enough.

It overwhelms everyone's ability to keep track of what it's doing. Some people are just no longer keeping track.

I have no idea if people are just doing this to toy projects, or real actual production things. I am getting the sneaking suspicion it's both at this point.


Orchestration buys parallelism, not coherence. More agents means more drift between assumptions. Past a point you're just generating merge conflicts with extra steps.

