Not all AI assisted writing is "slop," especially if, as your screenshot shows, significant portions of the article were written by a human. Drawing attention to any and all hints of AI assisted writing is not the public service announcement you think it is.
Are there specific parts of the article that are inaccurate or misleading? If so, please point them out; it would be very interesting and would add to the discussion.
I actually think AI-human collaboration is quite beneficial. I have a more fundamental issue that it's just bad writing when you use pure LLM generated text. My general feeling is "why should you expect me to spend my time reading something that you didn't care enough to spend your time writing?"
Also, most of the suggestions provided in the AI generated section are just useless. While I think this law is terrible, the suggestions provided completely contradict what the lawmakers are intending. I'll explain what I mean with some of the suggestions provided.
> Narrow the Scope to Intent, Not the Tool
This is essentially a suggestion to throw out the entire law as written. Sure, but this is meaningless advice to lawmakers.
> Drop Mandatory File Scanning
This is the same suggestion as before but rephrased.
> Exempt Open-Source and Offline Toolchains
This is asking them to create a massive loophole in their own law, rendering it useless. Once again, it's essentially asking them to throw out the entire law.
> Add safe harbor for sellers and educators who don’t modify equipment or participate in unlawful manufacture.
Two fundamentally different concepts here jammed into one idea. Do you want to add safe harbor for sellers who don't modify equipment or do you want to throw out the entire law and have it not apply to anybody who doesn't participate in unlawful manufacture? These are very different ideas, it makes no sense to treat them as one cohesive concept.
All of these are signals that not much thought went into this. If a human had used AI for ideas and writing assistance, but participated in the writing process as an active contributor, I think they would have caught things like this. I don't think they would have chosen to make multiple bullet points semantically identical. I think they would have chosen to actually cite specific aspects of the law and propose concrete solutions.
Another example: one of their suggestions is to add specific members to the working group. Genuinely a fairly good idea. Having actually read the law, I would have cited the specific passage, which requires that the working group "SHALL INCLUDE EXPERTS IN ADDITIVE MANUFACTURING TECHNOLOGY, ARTIFICIAL INTELLIGENCE AND DIGITAL SECURITY, FIREARMS REGULATION, PUBLIC SAFETY, CONSUMER PRODUCT SAFETY, AND ANY OTHER RELEVANT DISCIPLINES DETERMINED BY THE DIVISION TO BE NECESSARY TO PERFORM THE FUNCTIONS PRESCRIBED HEREIN." I would then ask: who do they consider to be experts in additive manufacturing? Why does it seem that the working group will be far more heavily weighted towards policy experts than towards 3D printing experts? The article suggests that "standards will default to large vendors," yet there is no evidence here that vendors will be included at all.
> Second half of this article has signs of AI slop, as confirmed by Pangram
The corporation you're citing named "Pangram" cannot confirm anything of the sort. They only make claims, like the ones in your screenshot.
Indeed, this very "citation" of the AI-generated output of Pangram Inc.'s product is a good example of outsourcing work to an LLM without verifying it.
Pangram has extremely high accuracy. While there's no way to prove AI use, its verdict is a very good proxy for it. It's obvious to my eyes that the article is written with AI; I supplied Pangram as a citation to convince people such as yourself who didn't notice the AI usage when reading the article.
> Is that true in New York? Maybe it currently requires permits
What are you referring to as "it" here? When OP mentioned getting a gun from "off the street", that's referring to obtaining one illegally, without a provenance chain or any permitting.
If you want to shoot a CEO, it's far easier to buy an untraceable gun on the streets (or obtain a non-serialized 80% lower receiver that you drill yourself) than to use an unreliable, fully 3D-printed gun.
Ah, I wasn't familiar with "off the street" meaning that, I thought they were saying "go to a store and buy a gun". Thanks!
Is it really that easy to acquire even illegal firearms in the US? Can you just walk around the shadier streets of NYC and find randoms willing to sell them to you?
I can't directly attest to that (I've never bought an illegal gun), but from my understanding, yes, people have little trouble obtaining illegal guns.
However, you really don't even need to do that. You could just drive across the NY border to a state with looser gun laws, buy one there, shave off the serial number, and bring it back to NY. You could also just steal a gun from one of the many Americans who already own one.
You can also legally buy an unfinished lower receiver in many states. The lower receiver is the part of a gun that is typically serialized, but since it's technically unfinished, it doesn't require a serial number. Then you drill a few holes into it and assemble it with off-the-shelf, also non-serialized gun parts.
I'm not sure if it's still this way but when I was a kid you could buy old guns at rural flea markets or antiques shops. I've never attempted to purchase an illicit firearm, but I can't imagine it's any harder than buying illegal drugs.
It wouldn't protect against this attack though. The Notepad++ update servers were hijacked. Presumably you would allow Notepad++ updates through Little Snitch so you would be equally as vulnerable.
No, why would you allow automatic updates? It makes no sense. You should audit every update as if each payload could contain malware. It’s a paranoid way to live, but that’s what it takes.
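As a minimal sketch of that kind of manual auditing, you can compare a download's SHA-256 hash against the value the project publishes out-of-band before installing anything. The file here is a stand-in (the hash is for the literal string "Hello"), not a real installer:

```shell
# Minimal sketch of manual update auditing: compare a download's
# SHA-256 hash against the hash the project publishes out-of-band.
# "Hello" stands in for a real installer file; the expected value
# below is the SHA-256 of that literal string.
expected="185f8db32271fe25f561a6fc938b2e264306ec304eda518007d1764826381969"
actual=$(printf '%s' "Hello" | sha256sum | cut -d' ' -f1)
if [ "$actual" = "$expected" ]; then
    echo "checksum ok - proceed with install"
else
    echo "checksum MISMATCH - do not install" >&2
fi
```

In practice you'd run `sha256sum` on the downloaded installer itself and compare against the hash listed on the project's website, ideally fetched over a separate channel from the download.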
We also need better computer science education in high schools, teaching students how to inspect network packets, verify SSL certificates, and evaluate whether a binary blob might contain malicious code.
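The certificate-inspection piece of that is already doable with stock tooling. A sketch with `openssl` (generating a throwaway self-signed cert here so the example is self-contained; `demo.local` is a made-up name):

```shell
# Sketch: inspecting an X.509 certificate with openssl.
# Generate a throwaway self-signed cert so this runs without a network;
# in practice you'd inspect a cert fetched from a live server, e.g.:
#   openssl s_client -connect example.com:443 </dev/null
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -subj "/CN=demo.local" 2>/dev/null

# Print who the cert claims to identify, who issued it, and its validity window
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer -dates
```

Teaching students to read that output (subject, issuer, expiry) is exactly the kind of concrete skill the curriculum could cover.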
People have gotten complacent about the internet, which is why they still get hacked; it should be the other way around. With everything we've learned over the years, why are breaches more common than ever? Decades ago we were taught not to share personal information and not to trust anything on the internet, and I don't understand why people are so much more careless about online security today.
Do you go by the smell of the executable, or just general vibes? Nobody has ever reviewed more than a tiny fraction of the software they run, closed source or open source.
What's frustrating is that the author's comments in this thread are clearly LLM text as well. Why even bother having a conversation if our replies are just being piped into ChatGPT?
There have been a few times I've had interactions with people on other sites that have been clearly from LLMs. At least one of the times, it turned out to be a non-native English speaker who needed the help to be able to converse with me, and it turned out to be a worthwhile conversation that I don't think would have been possible otherwise. Sometimes the utility of the conversation can outweigh the awkwardness of how it's conveyed.
That being said, I do think it would be better to be up front about this sort of thing, and that means it's not really suitable for use on a site like HN, where it's against the rules.
I've seen that as well. I think it's still valuable to point out that the text feels like LLM text, so that the person can understand how they are coming across. IMO a better solution is to use a translation tool rather than processing discussions through a general-purpose LLM.
But agreed, to me the primary concern is that there's no disclosure, so it's impossible to know if you're talking to a human using an LLM translator, or just wasting your time talking to an LLM.
>What's frustrating is the author's comments here in this thread are clearly LLM text as well
Again, clearly? I can see how people might be tipped off by the blog post because of the headings (and apparently the "it's not x, it's y" pattern), but I can't see anything in the comments that would make me think they were "clearly" LLM-generated.
Honestly, I can't point to a specific giveaway, but if you've interacted with LLMs enough, you can simply tell. It's kind of like recognizing someone's voice.
One way of describing it is that I've heard the exact same argument/paragraph structure and sentence structure many times with different words swapped in. When you see this in almost every sentence, it becomes a lot more obvious. Similar to how if you read a huge amount of one author, you will likely be able to pick their work out of a lineup. Having read hundreds of thousands of words of LLM generated text, I have a strong understanding of the ChatGPT style of writing.
Do you really need an LLM to talk on HN? Genuinely, this research seems cool, but it's hard to trust your findings when there's clearly AI being used heavily in writing the article and in your comments here.
Sure, but plenty of software already exists for those devices to block adult content and social media. It works just fine without a header. It's actually even better, because that software can block nefarious websites that would never comply with adding a header.
Blocklists are useful, but a hint from the website that, actually, they don't want to cater to children would help when those blocklists aren't up to date.
Uh oh, not good that a major Nvidia competitor with genuine alternative technology will no longer be competing... Chances this tech gets killed post-acquisition?
https://i.imgur.com/gGIAApA.png
Hard to trust an article like this when the legal analysis and suggestions are being outsourced to an LLM.