Hacker News

The intent is there, it's just not currently hooked up to systems that turn intent into action.

But many people are letting LLMs pretty much do whatever they want - hooking them up with terminal access, mouse and keyboard access, etc. For example, the "Do Browser" extension: https://www.youtube.com/watch?v=XeWZIzndlY4




I’m not even convinced the intent is there, though. An AI parroting Terminator 2 lines is just that. Obviously no one should hook an AI up to nuclear launch systems, but that’s like saying no one should give a parrot a button that launches nukes. The parrot repeating curse words isn’t the problem here.


If I'm a guy working in a missile silo in North Dakota and I can buy a parrot for a couple hundred bucks that does all my paperwork for me, cracks funny jokes, and makes me better at my job, I might be tempted to bring the parrot down into the tube with me. And then the parrot becomes a problem.

It's incumbent on us to put policies and procedures in place ahead of time, now that we know these parrots are out there, to prevent people from putting parrots where they shouldn't.


This is why, when I worked in a secure area (not even a real SCIF), something as simple as bringing in an electronic device would have earned a non-trivial punishment - beginning with losing access to the area, potentially escalating to loss of clearance and even jail time. I hope the silos and all related infrastructure have significantly stricter policies already in place.


On the one hand, what you say is correct.

On the other, we don't just have Snowden and Manning circumventing systems for noble purposes. We also have people getting Stuxnet onto isolated networks, other people leaking that virus off the supposedly isolated network, and Hillary Clinton famously running her own inappropriate email server.

(Not on topic, but from the other side of the Atlantic, how on earth did the US go from "her emails/lock her up" being a rallying cry to electing the guy who stacked piles of classified documents in his bathroom?)


> (Not on topic, but from the other side of the Atlantic, how on earth did the US go from "her emails/lock her up" being a rallying cry to electing the guy who stacked piles of classified documents in his bathroom?)

The private email server in question was set up for the purpose of circumventing records retention/access laws (for example, whoever handles answering FOIA requests won't be able to scan it). It wasn't primarily about keeping things after she should have lost access to them; it was about hiding those things from review.

The classified docs in the other example were mixed in with other documents in the same boxes (which says something about how well organized the office being packed up was); they weren't actually in the bathroom, despite that leaked photo that got attached to all the news articles; and they were taken while the guy who ended up with them had the power to declassify things.


That's spinning it pretty nicely. The problem with what he did is that 1) having the power to declassify something doesn't by itself make it declassified; there is an actual process; 2) he did not declassify the documents through that process while he had the power to do so, but merely declared later that keeping them was therefore allowed; and 3) he was asked several times for the classified documents and refused. If he had just let the National Archives come take a peek, or the FBI after that, it would have been a non-issue, just like for every POTUS before him.


> Not on topic, but from the other side of the Atlantic, how on earth did the US go from "her emails/lock her up" being a rallying cry to electing the guy who stacked piles of classified documents in his bathroom?

The same way football (any kind) fans boo every call against their team and cheer every call that goes in their team's favor. American politics has been almost completely turned into a sport.


What makes you think parrots are allowed anywhere near the tube? Or that a single guy has the power to push the button willy-nilly?


Indeed. And what is intent, anyway?

Would you even be able to tell the difference if you didn't know which is the person and which is the AI?

Most people do things they're parroting from their past. A lot of people don't even know why they do what they do, but somehow you know that a person has intent and an AI doesn't?

I would posit that the only way you know is because of the labels assigned to the human and the computer, and not from their actions.


It doesn't matter whether the intent is real. I also don't believe it has actual intent or consciousness. But the behavior is real, and that is all that matters.


What action(s) by the system could convince you that the intent is there?


It actually doesn't matter. AI in its current form is capable of extremely unpredictable actions, so I won't trust it in situations that require traditional, predictable algorithms.

The metrics here ensure only that an AI that doesn't type "kill all humans" into a chat box is allowed to do such things. That's a silly metric: it just ensures that otherwise unpredictable AIs don't type bad stuff specifically into chat boxes. In their current form they'll still hit the wrong button from time to time, but at least we'll have ensured they don't type that they're going to, since that's the specific metric we're going for here.



