
It's true that the more complex and capable the tool is, the harder it is to understand what it empowers the humans using it to do. I only wanted to emphasize that it's the humans that are the vital link, so to speak.



You're not wrong, but I think this quote partly misses the point:

>The problem to be solved here is not how to control AI

When we talk about mitigations, it is explicitly about how to control AI, sometimes irrespective of how someone uses it.

Think about it this way: suppose I develop some stock-trading AI that has the ability to (inadvertently or purposefully) crash the stock market. Is the better control to put limits on the software itself so that it cannot crash the market, or to put regulations in place that penalize people who use the software to crash it? There is a hierarchy of controls when we talk about risk, and engineering controls (limiting the software) always sit above administrative controls (limiting the humans who use it).

(I realize it's not an either/or and both controls can - and probably should - be in place, but I described it as a dichotomy to illustrate the point)
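
To make the distinction concrete, here's a minimal sketch of a pre-trade risk gate, the kind of engineering control that lives inside the trading engine itself. All the names and limits are invented for illustration:

    # Hypothetical pre-trade risk gate: an engineering control baked into
    # the trading engine itself, rather than a rule telling users to behave.
    MAX_ORDER_NOTIONAL = 1_000_000   # invented cap on one order's dollar value
    MAX_ORDERS_PER_MINUTE = 100      # invented throttle on total order flow

    class RiskGate:
        def __init__(self) -> None:
            self.orders_this_minute = 0  # assume something resets this each minute

        def approve(self, price: float, qty: int) -> bool:
            """Reject any order exceeding the hard limits, no matter who
            (or what) requested it, or why."""
            if price * qty > MAX_ORDER_NOTIONAL:
                return False
            if self.orders_this_minute >= MAX_ORDERS_PER_MINUTE:
                return False
            self.orders_this_minute += 1
            return True

An administrative control, by contrast, would leave the software unconstrained and rely on rules and penalties applied to the humans after the fact.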


My first thought is that the problem is with the stock market. The stock market "API" should not allow humans or machines to "damage" our economy.


Which is exactly one of many ways to phrase the "control problem": you may sandbox the stock market, but how do you prevent the increasingly powerful and incomprehensible stock-trading AI from breaking out of your sandbox, accidentally or on purpose?

Also, remember that growing intelligence means growing capability for out-of-the-box thinking. For example, it's a known fact that the NSA once tricked the world into using cryptographic tools the agency could break, by introducing a subtle weakness into an otherwise sound encryption scheme (the Dual_EC_DRBG random number generator). They didn't go door to door compromising hardware or software; they put a backdoor in the math itself, and no one noticed for a while.

With that in mind, going back to the hypothetical scenario: how confident are you in the newest cryptography or cybersecurity research you used to upgrade the stock market sandbox? With the AI only getting smarter, you may want to consider the possibility of it pulling the NSA trick on you: poisoning some obscure piece of math that, a year or two later, becomes critical to the integrity of the updated sandbox. In fact, by the time you think of the idea, it might have happened already, and you're living on borrowed time.


I think you missed my point.

If the stock market crashes, there is a bug in the stock market.

You should fix the bug, not pass a law telling people not to do the thing that crashes the stock market.


Nice sentiment, but exactly nothing outside of purely theoretical mathematical constructs works like this. Hell, even math doesn't really work like this, because people occasionally make mistakes in proofs.

EDIT: think of it this way: you may create a program that clearly makes it impossible for a variable X to be 0, and you may even formally prove this property. You may think this means X will never be 0, but you'd better not wager anything really important on it, because no matter what your proof says, I can still make X be 0, and I can do it with just a banana: find where in memory X is physically stored, then use the banana's natural radioactivity to overwrite it bit by bit.

Now imagine X=0 being the impossible stock market crash. Even if you formally prove it can't happen, as long as it's a meaningful concept, a possible state, it can be reached by means other than your proven program.
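
Here's a rough sketch of the idea in Python; the out-of-band fault is simulated with ctypes rather than an actual banana, but the point stands: every in-language check holds, yet the "impossible" value appears anyway once memory is written from outside the program's logic.

    import ctypes

    x = ctypes.c_int(42)  # program logic "guarantees" x is never 0

    def checked_set(v: ctypes.c_int, new_value: int) -> None:
        # The in-language invariant: no code path may store 0 into x.
        if new_value == 0:
            raise ValueError("invariant violated")
        v.value = new_value

    # Out-of-band fault (cosmic ray, rowhammer, radioactive banana), simulated
    # here by writing to x's memory directly, bypassing checked_set entirely:
    ctypes.memset(ctypes.addressof(x), 0, ctypes.sizeof(x))

    print(x.value)  # prints 0 -- the formally "impossible" state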


Bubbles in the market have been happening for hundreds of years; how would you propose fixing them? Because the only things I can think of tend to erode the whole idea of a market.


It's not really my job to debug the stock market, but, well, yeah, perhaps the solution is a less free market. I would remove high-frequency trading for a start. I would make trades slow, really slow: so slow that humans can see and digest what is going on in the system.

All I'm saying is: if there are problems in a system, we should fix the system, not throw up our hands and declare it can't be fixed.
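
For what "really slow" might look like mechanically, here's a toy sketch of a batch auction: orders accumulate for a whole interval and clear together, so arriving microseconds earlier confers no advantage. The names and the clearing rule are invented for illustration.

    import heapq

    class SlowBatchAuction:
        """Toy matching engine: orders pile up for an entire interval, then
        clear all at once, so sub-interval speed buys nothing."""

        def __init__(self) -> None:
            self.bids = []  # max-heap of (negated price, qty)
            self.asks = []  # min-heap of (price, qty)

        def submit(self, side: str, price: float, qty: int) -> None:
            if side == "buy":
                heapq.heappush(self.bids, (-price, qty))
            else:
                heapq.heappush(self.asks, (price, qty))

        def clear(self) -> list:
            """Run once per interval (e.g., from a once-a-minute timer)."""
            fills = []
            while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                neg_bid, bid_qty = heapq.heappop(self.bids)   # price stored negated
                ask_price, ask_qty = heapq.heappop(self.asks)
                traded = min(bid_qty, ask_qty)
                fills.append((ask_price, traded))  # clear at the ask, for simplicity
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_bid, bid_qty - traded))
                if ask_qty > traded:
                    heapq.heappush(self.asks, (ask_price, ask_qty - traded))
            return fills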


Reality doesn't work that way. Systems are conceptual ideas; they have no real, hard boundaries. Manipulating a system from outside it is not a bug, and it is not something that can be fixed.


That requires knowing how it will fail. It's hard enough to do with lots of interfaces, and even more so when the software is opaque.

Now extend that to safety-critical domains where a separate party doesn't control an API, and it gets harder still.



