Establishment of the U.S. Artificial Intelligence Safety Institute (commerce.gov)
71 points by frisco on Nov 1, 2023 | 86 comments



Recent and related:

Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence - https://news.ycombinator.com/item?id=38067314 - Oct 2023 (334 comments)


Let's shoot US innovation and leadership in the foot by establishing random limits on foundation model research.

According to the EO's guidelines on compute, something like GPT-4 probably falls under the reporting requirements. Also, GPU compute capabilities grew roughly 1000x over the last 10 years. What will be happening even 2 or 5 years from now?
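For scale, here is a minimal back-of-envelope sketch. Both numbers are assumptions: 1e26 operations is the EO's interim reporting threshold, and the GPT-4 training-compute figure is a third-party estimate, not a disclosed one.

  # Back-of-envelope: how long until a fixed compute threshold bites?
  EO_THRESHOLD = 1e26    # EO's interim reporting threshold (operations)
  GPT4_ESTIMATE = 2e25   # estimated (not disclosed) GPT-4 training compute

  print(f"GPT-4 estimate vs threshold: {GPT4_ESTIMATE / EO_THRESHOLD:.0%}")

  # If effective training compute grows ~1000x per decade (~2x/year),
  # count the years until the threshold is crossed:
  growth_per_year = 1000 ** (1 / 10)  # ~2.0x per year
  compute, years = GPT4_ESTIMATE, 0
  while compute < EO_THRESHOLD:
      compute *= growth_per_year
      years += 1
  print(f"Threshold crossed in ~{years} years at that growth rate")

At ~1000x per decade, a static threshold stops being a frontier-only trigger within a few years, which is presumably why the EO makes the technical conditions updatable.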

Edit: yes, regulations are necessary but we should regulate applications of AI, not fundamental research in it.


Mature corporations need liability protection in order to operate. As "AI" tools become widespread, they're going to want and then require assurance that liability for using those tools falls on somebody else.

A healthy regulatory body provides for that by setting standards and holding the relatively few vendors liable for conformance rather than the countless users.

It does interfere with innovation for those vendors doing foundational research, but it enables richly funded innovation in applications. It seems like we're at a point where lots of people want to start working on applications using current or near-term technology; failure to provide them the liability protections they need is what will stifle practical, commercial innovation and would leave AI applications in the hands of the few specialist technology companies who are confident in their models and have the wealth to absorb any liability issues that arise.


I don't want the companies recklessly operating AIs to be protected. The capabilities of AI aren't dangerous, only their applications are, and those who want to commercialize AI should have to demonstrate that they are using it responsibly. If they're not ready to do that yet, then the field is not mature enough to warrant fostering commercialization.


Name one limitation imposed by this EO or this agency. The word 'limit' doesn't even appear in the article. The limitations recommended in the EO mostly focus around government use of the technology.

As for reporting minimums, the ones in the EO are explicitly temporary. Quoting directly: "...shall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements..." "Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements..."

So, my question is: why are you ignoring the actual things happening in favor of complaining about phantoms?


You write as if there were literally zero risk. Are you aware some malicious uses of AI are already possible? Edit: Totally agree with qualifiedai's edit


The risk of AI is much smaller than the risk of computer graphics or social media.

My point is that applications of AI must be regulated, not fundamental research.


How is it different if the same application can be done without the use of an AI?


I totally agree with your second point.


> Are you aware some malicious uses of AI are already possible?

Alright, you hooked me in. What are they?



That is not what is being discussed. Those things are already very illegal and we don't need new and novel ways to address them.


That's exactly what's being discussed?

A dramatic reduction in cost and increase in effectiveness of some undesirable behavior is exactly when you should look for new ways to address it. The goal of making things illegal is to prevent their occurrence, and if they get suddenly much cheaper and more effective, then your prior methods of deterring them will no longer work.


> The goal of making things illegal is to prevent their occurrence.

Making drugs illegal didn't stop people from using drugs. Only a person can stop themselves from doing something, that's not something a law does.


I didn't say all laws are 100% effective, or even greater than 0% effective. I stated why we have laws at all. Pretty wild logic you've got here. Let's try this one:

> Making murder illegal didn't stop people from murdering. Only a person can stop themselves from doing something, that's not something a law does.

Should we not have rape, murder, arson, or fraud laws?


You're confusing a bad law with a law you don't like.

Drug laws for adults turn out to be bad because a huge portion of the population uses drugs anyway. That said, very few would agree that we should start letting kids do drugs.

Nuance is important, and many people don't seem to grasp that distinction on things that hit close to home for them.


For example, some black hats have trained LLMs to pentest, so as to find vulnerabilities more easily. Those can be used either to improve your defenses or to attack entities.

AIs like Copilot et al. are trained on poorly written code with bad security practices (there is a lot more of it than you think), and hence reproduce these bad practices in the code they produce.

Because AIs are also fallible, they can spread even more misinformation than we already have. There is also the retrieval of credentials via prompt hacking, because people push their credentials into these systems.

There is also the misuse of AI-generated deepfakes: for example, a Spanish girl was blackmailed with alleged naked pictures of her, and the same technique could be used for far worse.

And I have not even scratched the surface of the copyright/artistic side of AI.

The risk is not the AI per se, but what people can do with it. Not everything is beautiful. But there are also good things about AI, I agree.

I think there is a need for some form of regulation, one way or another, and the sooner the better. I don't expect the regulation to restrain creativity, but to help prevent bad stuff from happening.


You need to argue two things:

1. There are risks specific to AI or specifically aggravated by AI (easy)

2. Federal regulation of AI safety will reduce those risks (good luck)

When articulating your arguments for point 2, I would recommend addressing the thorny issue of proliferation.


It's not my job, nor do I have the imagination or the knowledge, to argue 2.

But don't you agree at least some legal questions should be asked about this overhyping of AI? Because I don't see any so far.

Edit: this is the kind of legal question I was talking about; I just learned of it now: https://news.ycombinator.com/item?id=38102760


> But don't you agree at least some legal questions should be asked about this overhyping of AI?

I have trouble answering that question as you've asked it. It seems like we agree on several things, namely:

1. that any technology is subject to worst-case analysis; and,

2. that it is appropriate in principle for law to govern the use of technology.

Here's what I'm having trouble unpacking in your question:

1. What are the exact legal questions you think should be asked, and aren't? (N.B. Your link is paywalled, and doesn't seem to refer to a specific legal question)

2. What is it about AI exactly that you think is overhyped, and seem to think I disagree with?

I don't have a lot of context to go on, so some of my questions may also contain unwarranted assumptions. I hope you'll point them out :)

1. Have you thought about the difficulties involved in legislating around AI? Specifically, I've found it very difficult to articulate what is and isn't appropriate use of AI with any real precision. Let me give an example. I think we can all agree that "nudifying" photographs of minors is at least in poor taste, if not outright dangerous, and that it is fair game to make this particular usage of technology illegal. However, where do you stand on the idea that regulators should disallow the "nudification" use altogether? I can think of several legitimate (if a bit niche) uses, ranging from the creation of medical diagrams and teaching materials to filming love scenes in mainstream cinema with clothes on and removing the clothes in post-processing. Do you think it's fair game to disallow these uses? If so, should this be absolute liability or should there be a notion of intent? If you think, as I do, that the technical capability should be unrestricted except insofar as it is employed to illegal ends, then we don't need any new laws. We simply apply the laws against, say, involuntary pornography and sexual exploitation of minors, and the problem is solved from a legal perspective; it is now a job for the executive branch.

2. I would appreciate it if you could speak to the risk of misclassification. Many of the proposed regulations involve training AI systems to monitor other AI systems (or themselves, as with the case of prompt engineering). What happens when the black box makes mistakes? Do we accept that a small number of innocent people will be labeled X by AI? How should the law take this possibility into account? Again, do we accept that legitimate uses are de facto crippled or entirely disabled? That's one outcome I would very much like to avoid.

3. On a macro-scale, how do we deal with the fact that other (perhaps less scrupulous) nations will have access to unrestricted AI?

Point 3 is particularly troubling from a regulation perspective, because the penchant of software for proliferation is astronomically higher than that of, say, nuclear weapons. This feels like the '90s crypto export controls all over again, which is minimally a gigantic waste of resources and maximally a crippling economic vulnerability.

P.S.: My friend, it is exactly your job to argue your case when speaking about public issues. The term for this is "civic duty".


You write like there are 0 other countries.


Are you aware some malicious uses are already possible without AI too?


Be glad they didn't put it into DOE. It would be NRC 2.0.


You're worried about winning the race. I'm worried the prize is an accidental intelligence explosion killing everyone. A government slamming on their brakes would be the most encouraging thing I've heard all year.


"Accidental intelligence explosion" - you have to provide a reasonable argument that this can happen. AIs we have now or currently under development are still just tools in the sense that all agency and consciousness comes solely from human operators. Of course, human operators can be malicious, which is why we should regulate applications of AI, not fundamental research in it.


I'm sure you're familiar with the arguments. It's the explicit goal of several AI companies to make something more capable than us at engineering. Once you have that, there's the possibility of very rapid self-reinforcing improvement. If you lose control of something like that, it's game over.

GPT-4 may not generate world-class code, but it does so at a scale and speed unmatched by humanity. AlphaZero took a week to go from nothing to better than any human in history at Go.


US leadership was literally asking for this.


US leadership seems clueless and they just fell for regulatory capture.

Regulate AI applications, not fundamental research in it.


I mostly agree, but it is rich considering datasets like LAION were built for research yet are now the bedrock of billion-dollar companies.


[flagged]


> we're in the final stages of capitalism

This is wrong in the sense that modern mixed economies are already post-capitalist if you take a narrow view of capitalism, and also almost certainly wrong if you take the broader view of capitalism which encompasses the modern mixed economy (say, any system featuring private ownership of the means of production even if the practical exercise of control of owners is constrained by democratically constituted public authority enforcing some views of the common interest.)

“Late-stage capitalism” wishcasting may be the most annoying current form of secular millenarian eschatology.


You're right. I should have said: "we're in the final stages of any semblance of socioeconomic mobility within a free market system".

> “Late-stage capitalism” wishcasting may be the most annoying current form of secular millenarian eschatology.

This is a luxury belief one can adopt nearer to the apex of the pyramid and when not amongst the literally billions of people being crushed at the bottom. You should try putting in 60+ hours of blue collar labor across 3 different jobs just to end up paycheck to paycheck in the slummiest part of a major US city. And that would still be miles better than where most people are at in the rest of the world.


> You're right. I should have said: "we're in the final stages of any semblance of socioeconomic mobility within a free market system".

"Free-market system" isn't a thing that actually exists in the real world, but even leaving that aside, no, that's not defensible, either.

> > “Late-stage capitalism” wishcasting may be the most annoying current form of secular millenarian eschatology.

> This is a luxury belief one can adopt nearer to the apex of the pyramid and when not amongst the literally billions of people being crushed at the bottom.

It's exactly the opposite: eschatological wishcasting doesn't contribute to solving problems; it is a luxury belief one can adopt nearer to the apex of the pyramid, etc., etc., without any cost, and without any motivation to do something because the problem will be imminently fixed by the teleological design of the universe.

> You should try putting in 60+ hours of blue collar labor across 3 different jobs just to end up paycheck to paycheck in the slummiest part of a major US city. And that would still be miles better than where most people are at in the rest of the world.

"Lots of people live in crappy conditions" doesn't support either your original or revised description, however true it certainly is, and however much of an urgent problem it is to address.


So people outside of Silicon Valley are virtuous and selfless?

These kinds of regulations are self-defeating unless hard sanctions or trade bans are placed on countries that don’t adhere to similar AI guidelines.

E.g., China and manufacturing: China grew its manufacturing base by violating labor and environmental standards, and now the same people who tout AI safety are against any restrictions on China.


Yes, and the foundation-model approach encases un-duplicatable master models as central providers with faucets: a perfect fit for a technocracy fueled by cash looking for a return.


Read the https://www.stateof.ai/ reports or even the Stanford AI Index (https://aiindex.stanford.edu/) reports from the last few years. There's plenty of reason to try to create some limits on AI. It's very likely the proposed limits won't be "random" as you say.


Better than companies doing their own thing and not having any restrictions or oversight at all. Remember that those companies will get rid of some portion of their workers as soon as they think an AI can do most of that work. Heck, they did layoffs and started hiring again without any advancements.

You can't trust companies to self-regulate.


I'm sure China and Russia and Iran and all those other nations will 100% see the value of artificially restricting their AI efforts as well because of the risk of harm, and will be socially responsible global citizens who won't exploit this edge for their own geopolitical agendas.


And when these countries overtake the US, the same people who brought AI regulations to the US will cry foul when we regulate AI products from overseas.


I doubt any military or strategically important applications will be regulated.


That sounds like the worst of all worlds. Killer robots and childproofed Alexas and nothing in between.


They’ll probably use AI to build an alternative to Microsoft Windows and Active Directory to start with. Interestingly, Microsoft makes just 2% of its revenue from China, despite an 85% market share for personal computer operating systems there. But I am sure China would also prefer not to deal with Windows telemetry.


China’s government is extremely interested in restricting its AI efforts because it doesn’t want AI to contradict the Communist Party.


Yeah people bringing up China as if it's some bastion of freedom in comparison to us are hilarious.


A Soviet man and an American man were arguing about which country was more free.

The American man said, "I am so free I could walk up to the White House right now and scream 'I hate Ronald Reagan, he is an incompetent buffoon' without getting arrested."

The Russian man responded, "That is no more free than me. I too could walk up to the Kremlin and scream 'I hate Ronald Reagan, he is an incompetent buffoon' without getting in any trouble."


No one did that. That is a straw man. People are less free, but the government and institutions aligned with the government are more free.


But they are doing it in a way that does not hinder their fundamentals - they are enforcing that at the alignment (see Baichuan2-chat) and application levels.

Unlike Biden's silly EO which puts restrictions on foundation model compute levels.


The party is under the authority of its ‘forever’ president Xi Jinping, a good friend of another ‘duly elected’ president, Putin.

And the interest is: “whatever is on the mind of an aging, non-elected dictator and his favorites”.


If you think the US is going to restrict its AI efforts, you are mistaken. This is just a big-tech lobbying circus. They'll happily sell Uncle Sam robotic dogs with RPGs and targeting computers, autonomous drone swarms, and other AI-enabled hardware to deploy across all theatres of operations.

The Chinese are busy studying “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era", the Russians are busy getting wasted in Ukraine and the Iranians are busy checking if women are properly wearing their hijabs and smuggling weapons into Gaza.


Insane to me that, given a multi-year lead in tech, capability, talent, the USA is shooting itself in the foot re: innovation around AI.

Talk about snatching defeat from the jaws of victory... damn


"Insane" is par for the course when talking about existential threats. I expect you don't believe there are existential threats from AI capabilities research, so anyone planning around them will look insane. To me, it's insane to say the threats can't exist.

(I'm not endorsing this regulation. It's not at all clear that any regulation could be helpful. As you say, these regulations aren't going to slow non-US research efforts.)


The only way out, is through :)


A great way to understand how all this works is watching All-In Summit: Bill Gurley presents 2,851 Miles[1]. Basically, regulate your competition into the ground.

[1] https://www.youtube.com/watch?v=F9cO3-MLHOM


A great way to understand how all this works is listening to Behind the Bastards: The Deadliest Workplace Disaster in U.S. History[1]. Basically, don't regulate anything until a bunch of poor people die.

[1] https://www.iheart.com/podcast/105-behind-the-bastards-29236...

Edit: To elaborate, it's pretty easy to cherry-pick cases of either over- or under-regulation and use them to "prove" either side of the argument. There's nothing in the Bill Gurley talk that provides any insight into whether AI should be regulated or not, because it doesn't directly engage with issues around AI specifically. Instead, it just says: "tech regulation bad".


Ok, can we restrict all trade and place sanctions against nations that don’t restrict AI like we do?


> Specifically, USAISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.

I have been afraid of over-regulation of AI but standards and testing environments don't sound so bad.

It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.


> It does not sound like they are implementing legal regulations that will protect incumbents at the expense of AI innovation, at least at this point.

Give them a minute, an agency needs to exist before it can be captured. There hasn't been time yet for a single revolving-door hire.


Regulatory capture in action right before our eyes. Fears of Skynet are going to lead us to a cyberpunk dystopia where only large corporations have any access to powerful AI. What a bizarre time to be alive


William Gibson had the "Turing heat" in his seminal cyberpunk novel Neuromancer. Here's the real-life beginning of just such an organization.


  Thou shalt not make a machine in the likeness of a human mind
I guess we're heading for spice then


I'm unsure what limits will do. Selling weapons and explosives is regulated, but it doesn't stop the government from doing it. So by limiting it, we're only limiting the people?


Cool. Great job guys. Now do one for CONSUMER DATA PROTECTION, RIGHTS AND PRIVACY. I WILL EVEN LET YOU COME UP WITH A FUNNY LITTLE 3 LETTER AGENCY NAME FOR IT. I DO NOT CARE.


Since 1914, there has been a US law on the books that empowers the current Federal Trade Commission (FTC) to broadly enforce against unfair or deceptive business practices:

"(a) prevent unfair methods of competition and unfair or deceptive acts or practices in or affecting commerce;

(b) seek monetary redress and other relief for conduct injurious to consumers;

(c) prescribe rules defining with specificity acts or practices that are unfair or deceptive, and establishing requirements designed to prevent such acts or practices;

(d) gather and compile information and conduct investigations relating to the organization, business, practices, and management of entities engaged in commerce; and

(e) make reports and legislative recommendations to Congress and the public. "

[1] https://www.ftc.gov/legal-library/browse/statutes/federal-tr...


Data Intelligence Agency?

Netizen Safety Agency?

Citizens(/Consumers) Browsing in Privacy?

to repurpose a couple.


"Despite the increasing complexity and capabilities of machine learning models, they still lack what is commonly understood as "agency." They don't have desires, intentions, or the ability to form goals. They operate under a fixed set of rules or algorithms and don't "want" anything.

Even in feedback loop systems where a model might "learn" from the outcomes of its actions, this learning is typically constrained by the objectives set by human operators. The model itself doesn't have the ability to decide what it wants to learn or how it wants to act; it's merely optimizing for a function that was determined by its creators.

Furthermore, any tendency to "meander and drift outside the scope of their original objective" would generally be considered a bug rather than a feature indicative of agency. Such behavior usually implies that the system is not performing as intended and needs to be corrected or constrained.

In summary, while machine learning models are becoming increasingly sophisticated and capable, they do not possess agency in the way living organisms do. Their actions are a result of algorithms and programming, not independent thought or desire. As a result, questions about their "autonomy" are often less about the models themselves developing agency and more about the ethical and practical implications of the tasks we delegate to them."

The above is from the horse's mouth (ChatGPT4)

My commentary:

We have yet to achieve the kind of agency a jellyfish has, which operates with a nervous system composed of roughly 10K neurons (vs. 100B in humans) and nothing resembling a brain. We have not yet been able to replicate the agency present in even a simple nervous system.

I would say even an amoeba has more agency than a $1B+ OpenAI model, since the amoeba can feed itself and grow in numbers far more successfully and sustainably in the wild, with all the unpredictability of its environment, than an OpenAI-based AI agent, which ends up stuck in loops or derailed.

What is my point?

We're jumping the gun with these regulations. That's all I'm saying. Not that we should not keep an eye on things, have a healthy amount of concern, and make sure we're on top of it, but we are clearly jumping the gun, since the AI agents so far are unable to compete with a jellyfish in open-ended survival mode (not to be confused with Minecraft survival mode) due to their lack of agency (as unitary agents and as a collective).


Is there a point buried in all that? You seem to be implying that there shouldn't be any regulatory body to address self-improving AI until it already exists? I don't think the government moves quickly enough for that to be okay.


No, we/they should be focusing on other actual things that are happening here and now. Not science fiction.


Most of us care whether the drugs we're taking are properly tested and won't have adverse side effects, or at least that the adverse side effects are known so the risk/reward may be calculated. Most of us care whether the cars we drive are safe and don't have any hidden flaws that may fatally emerge. The same goes for food and drink, I assume. Actually, it's probably easier to find areas with beneficial regulations than areas with functionally no regulation at all. Why is it that in this case people are willing to abandon caution and just dive in without looking?


I must be missing something, as I'm not seeing the information in the linked press release that is fueling the specific commentary here around what the government intends to do, as all this seems to say is that they plan to create standards and provide testing environments. I'm sure there is more to it, I just didn't see where any of those facts were posted.

So I'm assuming some of you have seen more details - can someone share where they can be found?


No, you're not missing anything. HNers and techies in general lean to the right-libertarian corner, and with that comes a common belief that when the government gets involved in something, they'll mess it up.

It is against HN rules to call out a commenter for not having read the article, and earlier comments set the tone of the discussion when a post hits the front page. For many posts, by the time they hit the front page, the top-voted comments often include hot takes from someone who just saw the title and wrote a comment about whatever they imagined the article to be.


Alarming how many people think that the development of AI should have…no government oversight? Are none of you familiar with history?


Useless comment without explaining what particular events in history you think are relevant. Are you familiar with how to make an argument? Read any books lately?


One of the problems is that the (very big) companies that will benefit most from these types of measures are de facto functioning as an extension of the government (both in the US and in the EU) when it comes to employing AI in the field. So getting back at those (very big) companies with "this use of AI is against the rules!" won't have any discernible effect, because it would be like telling the government that it is breaking its own rules, i.e. futile.


Why, exactly, should it have government oversight? The overwhelming majority of research, especially in computer science, has no government oversight.


Are you?


I think most of the disastrous situations (wars, genocides) in human history were led by governments applying their power.


For those in the know, is this a bipartisan position? Any chance of seeing rules like this one "overruled" (I don't know the exact technical term) if different politicians come to power in the US?


It's executive action, and can be changed on a whim (provided appropriate processes are followed).

Legislative action would theoretically be best, but our current Congress couldn't produce a better bill than a wet Speak & Spell.


That (and the rest of the regulatory package) looks like a framework to handicap AI technology when existing laws can handle the existing problems.

It can only help existing companies to stifle competition and guarantee revenue.


Just the people I wanted to regulate cutting-edge niche technology.


I believe this could be about denying technological advantages to their competitors and potential threats to their control of the markets.


Often such practices are about exactly that: I already got mine, so block the next company. But there’s also a certain potential danger. Lower-end book-cover artists lost their jobs this year with the various picture-creation tools coming out. What job category is next? It will happen. I expect my job as a software engineer to require more and more use of “programmer aids”, which are just LLM code-writing tools. If I don’t learn how to use them, I’ll be less effective as a programmer, and at some point I’d be less employable.


What a fucking joke. I am voting Libertarian, if I bother to vote at all. They're against AI regulation.


I lean libertarian, but I'm very much in favor of AI regulation.


Well maybe you're leaning towards the wrong party?

https://twitter.com/LPNational/status/1719099143845765148


That's normal. Most libertarians are very much in favour of regulation.


The party is actually against it, so I don't know what you're talking about.

Edit: if you plan to draw a distinction between executive-order regulation and standard regulation, they're against that too. Everything coming from the Libertarian Party is against regulation and government interference in general.

https://twitter.com/LPNational/status/1719099143845765148



