Good luck detecting things. Guess what. None of your fucking business. It works, it works. You didn't like that. Go fuck yourself. It's like "anti cheating" shit in academia. I get some random output from things. All I do is keep a sample of things I want to mimic and whatever style I have. I can tell any system to make it not sound like itself.
Just be honest. You're failing at this "fight the man, man" thing on AI and LLMs.
It's better to work with the future than pretend that being a Luddite will work in the long run
This is the most worthless thing in the world. Why would anybody give a rat's fucking ass about a third-rate, no, not even third-rate, more like tenth-rate operating system having math support? Oh wow! Congratulations! Now you can go ahead and support computers from 1985.
This is about software-emulated FP. Of course NetBSD has used the FP unit in CPUs since long ago.
Also, FP is not math, it's just a kind of math.
And even Forth users will use fixed point before going to float. With 32 bits, they will just scale the point and call it a day.
And seasoned Forth users will just use quotient arithmetic and output the whole and fractional parts as two items on the stack.
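To make the scaling trick concrete, here is a minimal sketch (in Python rather than Forth, purely for readability) of both ideas: holding values as scaled integers, and splitting a result into whole and fractional parts with quotient/remainder arithmetic, the way a Forth programmer would leave them as two items on the stack. The scale factor of 100 is an arbitrary choice for illustration.

```python
# Fixed-point arithmetic with plain integers and an explicit scale factor.
# A value like 3.17 is stored as 317 when SCALE = 100 (two decimal digits).
SCALE = 100

def to_fixed(units, hundredths):
    """Build a scaled integer, e.g. to_fixed(3, 17) -> 317."""
    return units * SCALE + hundredths

def fixed_mul(a, b):
    """Multiply two scaled values, rescaling once to keep the point in place."""
    return (a * b) // SCALE

def as_parts(x):
    """Quotient/remainder split: (whole part, fractional part) --
    the same two items a Forth programmer would leave on the stack."""
    return divmod(x, SCALE)

price = to_fixed(3, 17)        # 3.17 -> 317
qty = to_fixed(4, 0)           # 4.00 -> 400
total = fixed_mul(price, qty)  # 317 * 400 // 100 = 1268, i.e. 12.68
whole, frac = as_parts(total)
print(f"{whole}.{frac:02d}")   # prints 12.68, no floating point involved
```

The point is that everything stays in integer arithmetic; the "decimal point" exists only by convention of where you put the scale.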
If the mere possibility of AI-generated context invalidates an argument, it suggests the standards for discourse were already more fragile than anyone cared to admit.
Historically, emotional narratives and unverifiable personal stories have always been persuasive tools — whether human-authored or not.
The actual problem isn't that AI can produce them; it's that we (humans) have always been susceptible to them without verifying the core ideas.
In that sense, exposing how easily constructed narratives sway public discussion is not unethical — it's a necessary and overdue audit of the real vulnerabilities in our conversations.
Blaming the tool only avoids the harder truth: we were never debating cleanly to begin with.
> Blaming the tool only avoids the harder truth: we were never debating cleanly to begin with.
Yes, the problem is we humans are susceptible, but that doesn't mean a tool used to scale up the ability to create this harm is not problematic. There's a huge difference between a single person manipulating one other person and a single person manipulating millions. Scale matters and we, especially as the builders of such tools, should be cautious about how our creations can be abused. It's easy to look away, but this is why ethics is so important in engineering.
In this discourse it is often forgotten that we have consumer protection laws for a reason. And that consumer protection has been a pillar of labor struggle for a long time (and consequently undermined by conservative policies).
Scarily effective ad campaigns that target cognitive biases in order to persuade consumers to act against their own interest are usually banned by consumer protection laws in most countries. Using LLMs to affect consumer (or worse, election) behavior is no different and ought to be banned under consumer protection laws as well.
The existing tools at any given time do very much shape which consumer protection laws are created, and how they are created, as they should. A good policy maker does indeed blame a tool for bad behavior, and does create legislation to limit how this tool is used, or otherwise limit the availability of that tool on the open market.
It is also forgotten that we as engineers are accountable as well. Mistakes will happen, and no one is expecting perfection, but effort must be made. Even if we create legal frameworks, individual accountability is critical to maintaining social protection, and with individual accountability we provide protection against novel harms. Legal frameworks are reactive, whereas personal accountability is preventative. The legal framework can't prevent things from happening (other than through disincentivization); it can only react to what has happened.
By "individual accountability" I do not mean jailing engineers, I mean you acting on your own ethical code. You hold yourself and your peers accountable. In general, this is the same way it is done in traditional engineering. The exception is the principal engineer, who has legal responsibility. But it is also highly stressed through engineering classes that "just following orders" is not an excuse. There can be "blood on your hands" (not literal) even if you are not the one who directly did the harm. You enabled it. The question is whether you made attempts to prevent harm or not. Adversaries are clever, and will find means of abuse that you never thought of, but you need to try. And in the case here of LLMs, the potential harm has been well known and well discussed for decades.
What does that look like in practice, assuming an engineer doesn’t believe that the LLM genie can be put back into the toothpaste tube?
“Summarize the best arguments for and against the following proposition: <topic here />. Label the pro/for arguments with a <pro> tag and the con/against arguments with a <con> tag” seems like it's going to be a valid prompt, and any system that can only give one side is bound to lose to a system that can give both sides. And any system that can give those answers can be pretty easily used to make arguments of varying truthfulness.
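To illustrate how easy that repurposing is, here is a rough sketch of assembling such a prompt and then keeping only one side of the tagged output. This is hypothetical glue code: call_llm stands in for whatever model client you happen to use, and it assumes the model closes its <pro>/<con> tags.

```python
import re

def build_prompt(topic):
    # The prompt format described above: pro arguments in <pro>, con in <con>.
    return (
        "Summarize the best arguments for and against the following "
        f"proposition: {topic}. Label the pro/for arguments with a <pro> tag "
        "and the con/against arguments with a <con> tag."
    )

def extract_side(response, side):
    # Keep only one side of the debate -- the trivial repurposing step.
    return re.findall(rf"<{side}>(.*?)</{side}>", response, flags=re.DOTALL)

# call_llm is a hypothetical stand-in for whatever model API is in use.
# response = call_llm(build_prompt("remote work improves productivity"))
# one_sided = extract_side(response, "pro")   # persuasive, single-sided output
```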
- You act on your morals. If you find something objectionable, then object. Vocally.
- You, yourself, get in the habit of trying to find issues with the things you build. This is an essential part of your job as an engineer.
- You can't make things better if you aren't looking for problems. The job of an engineer is to look for flaws and fix them.
- Encourage a culture where your cohort understands that someone saying "but what about" or "how would we handle" isn't saying "no" but "let's get ahead of this problem". That person is doing their job; they're not trying to be a killjoy. They're trying to make the product better.[0]
- If your coworkers are doing something unethical, say something.
- If your boss does something unethical, say something.
It doesn't matter what your job is, you should always be doing that. But as engineers, the potential for harm is greater.
But importantly, as engineers IT IS YOUR JOB. It is your job to find issues and solve them. You have to think about how people will abuse the tools you build. You have to think about how your things will fail. You have to think about what sucks. And most importantly, your job is to then resolve those things. That's what an engineer does. Don't dismiss it because "there's no value". The job of an engineer isn't to determine monetary value; that's the business people's job (obviously you do to some extent, but it isn't the primary focus). I'm really just asking that people do their jobs and not throw their hands up in the air, pass on blame, or kick the can down the road.
[0] I can't express how many times I've seen these people shut down and then passed over for promotion. It creates yes men. But yes men are bad for the business too! You and your actions matter: https://talyarkoni.org/blog/2018/10/02/no-its-not-the-incent...
> I mean you acting on your own ethical code. You hold yourself and your peers accountable
Industry-wide self-regulation is a poor substitute for actual regulation, especially in this capitalistic environment which rewards profitable behavior regardless of morality or ethics. In this environment the best an engineer can do is resign in protest (and I applaud any engineer who does that, in fact), however that won't stop the company from hiring new engineers who value their salary more than their ethical behavior, or who have different ethical standards.
> And in the case here of LLMs, the potential harm has been well known and well discussed for decades.
The harms posed by LLMs are the very same ones caused by any company in the pursuit of profits without regulation. In the past the only proven method to force companies to behave ethically has been industry-wide regulation, especially consumer protection regulation.
Substitute? No. But you need both. Sorry, it's not the incentives, it is you[0]. As I said before, regulation is reactive. This is why you need both. No one is saying no regulation, but I (and [0]) am saying things only happen because people do them. I know this is a wild claim, but it is an indisputable fact.
> The harms posed by LLMs are the very same
I expect every programmer and HN user to be familiar with scale. Please stop making this argument. You might as well be saying a nuclear bomb is the same as a Pop-It. The dangers that LLMs pose are still unknown, but if we're unwilling to acknowledge that there's any unique harm then there's zero chance of solving them.
I'm sorry, but I'm not buying it. Other industries have long had the technology to mass-produce low-quality products which harm their customers, or worse, bystanders. And in many of those cases the harmful product was never allowed on the market because the regulator was proactive and prevented the technology from being sold to consumers. I know those stories aren't as prominent as the story of leaded gasoline, because stories of harm spread much further than stories of prevented harm. But they do exist, and are numerous.
I also fail to see why we need both regulation and moral behavior from developers. If the regulation exists, and the regulator is willing to enforce it, any company that goes against the regulation will be breaking the law and will be stopped by the regulator. We only need the regulation in this case.
> any company which goes against the regulation will be breaking the law, and stopped by the regulator
And how has that been working so far?
> why we need both
What's your argument here? What is the cost of having both? You sticking your neck out and saying, when something's wrong, that it's wrong? You having to uphold your moral convictions?
If regulation works perfectly, you'll never have to do anything, right? But if it doesn't, then you provide a line of defense. So I don't see your argument. You can't expect anything to work 100%, so what then?
Let's be honest here: if you believe in things but don't stand up for them when it's not easy to, then you really don't believe in those things. I believe in you, I just hope you can believe in yourself.
It won’t matter what I do personally if the company can just hire new engineers (or even outsource the work[1]). Let me repeat what I said above:
> however that won’t stop the company from hiring new engineers who value their salary more than their ethical behavior—or have different ethical standards.
Just because the state of consumer protection is abysmal in our current state of capitalism, that doesn't mean it has to stay that way, and just because the regulators are unwilling to enforce the few remaining consumer protection laws, it doesn't mean they never will. Before Reagan, consumer protection laws were passed all the time, and they used to be enforced; they can be again.
Yes, it doesn't matter if you're the only one that does it, but it does matter if you're not the only one that does. Frankly, many people won't even apply to jobs they find unethical. So yes, they can "hire somebody else" but it becomes expensive for them. Don't act like this (or most things) is a binary outcome. Don't let perfection get in the way of doing better.
> that doesn’t mean it has to stay that way
And how the fuck do you expect things to change if you will not make change yourself? You just expect everyone to do it for you? Hand you a better life on a golden platter? I'm sorry to tell you, it ain't free. You need to put in work. Just like with everything else in life. And you shouldn't put all your eggs in one basket.
Remember, I'm not arguing against regulation. So it is useless to talk about how regulation can solve problems. We agree on that; there's no discussion there. It seems the only aspect we disagree on is whether regulation works 100% of the time or not. Considering the existence of lawsuits, I know we both know that's not true. I know we both know time exists as well, and the laws don't account for everything, which makes them reactive. Remember, laws can only be made after harm has been done. You need to show a victim. So how do we provide another layer of protection? It comes down to you.
You will not be able to convince me we don't need both unless: 1) you can show regulation works 100% of the time or 2) you can provide another safety net (note you are likely to be able to get me to agree to another safety net but it's probably going to be difficult to convince me that this should be a replacement and not an addition. All our eggs in one basket, right?). Stop doing gymnastics, and get some balls.
> Remember, laws can only be made after harm has been done.
This simply isn't true. Plenty of regulation is done proactively. You just don't hear about it as often because harm prevented is not as good a story as harm stopped.
For example, we have no stories of exporting encryption algorithms to different countries causing harm, yet it is heavily regulated under the belief that it will cause harm to national security. Similarly, there are no stories of swearing on the radio causing harm, yet foul language is regulated by the FCC. More meaningful examples are found in the regulatory framework of medicine, and, if you want scale, in the intellectual property of fashion design.
But even so, it can be argued that LLMs are already causing harm: they are mass-producing and distributing bad information and stolen art. Consumers are harmed by the bad information, and artists are harmed by their art being stolen. Regulation, even if only reactive, is still apt at this point.
The series of lawsuits you mention only proves my point. We expect companies that break the law to be punished for their actions, although I would argue that the regulator is generally far too lazy in pursuing legal action against companies that break the law.
I'll concede. You're right. But this also is not the norm, despite my best wishes that it was.
> I think our disagreement stems from this belief:
But I still think there's a critical element you are ignoring and I'm trying to stress over and over and over. YOU NEED TO ADDRESS THIS FOR A CONVERSATION TO BE HAD
>> if regulation works 100% of the time or not
>>>> If regulation works perfectly, you'll never have to do anything, right? But if it doesn't, then you provide a line of defense. So I don't see your argument. You can't expect anything to work 100%, so what then?
This concept is littered all throughout every single one of my comments and you have blatantly ignored it. I'm sorry, if you cannot even acknowledge the very foundation of my concern, I don't know how you can expect me to believe you are acting in good faith. This is at the root of my agitation.
> The series of law-suits you mention only proves my point. We expect companies that break the law to be punished for their action
No it doesn't, because you are ignoring my point. I am not arguing against regulation. I am not arguing that regulation doesn't provide incentives.
My claim that lawsuits exist was made to evidence the claim:
Regulations are not enough to stop the behavior before it occurs.
Again, this is the point you are ignoring and why no conversation is actually taking place.
> although I would argue that the regulator is generally far too lazy in pursuing legal actions against companies that break the law.
Great! So you agree that regulation isn't enough and that regulation fails. You've tried very hard to avoid taking the next step: "WHAT DO YOU DO WHEN REGULATION FAILS?" Seriously, are you even reading my comments? At this point I can't figure out if I'm talking to a wall or an LLM. But either way, no conversation will continue unless you are willing to address this. You need to stop and ask yourself "what is godelski trying to communicate" and "why is godelski constantly insisting I am misunderstanding their argument?" So far your interpretations have not resolved the issue, so maybe try something different.
I am speaking around it because it seems obvious. If we have good regulation and enforcement of those regulations, there is no need for self-regulation. While we don't have good regulation, or have a regulator unwilling to enforce existing regulation, the go-to action is not to amass self-regulation (because it will not work) but to demand better regulation, and to demand that the regulator does its job. That is at least how you would expect things to work in a democracy.
Flooding human forums with AI steals real estate from actual humans.
Reddit is already flooded with bots. That was already a problem.
The actual problem is people thinking that, because a system used by many isn't perfect, they have permission to destroy the existing system. Don't like Reddit? Just don't go to Reddit. Go to fanclubs.org or something.
Ok, I need you to clarify a few implicit and explicit statements there: The study destroyed the subreddit? The authors of the study believed they had permission to destroy the subreddit? The subreddit is now destroyed? The researchers don't like Reddit? The researchers would achieve their aims by going to fanclubs.org or something?
It's interesting to see how upset people get when the tools of persuasion they took for granted are simply democratized.
For years, individuals have invented backstories, exaggerated credentials, and presented curated personal narratives to make arguments more emotionally compelling — it was just done manually. Now, when automation makes that process more efficient, suddenly it's "grotesquely unethical."
Maybe the real discomfort isn't about AI lying — it's about AI being better at it.
Of course, I agree transparency is important. But it’s worth asking: were we ever truly debating the ideas cleanly before AI came along?
You're missing the obvious: it is the lying that is unethical. Now we're talking about people choosing to use a novel tool to lie en masse. What you're saying is like chastising the horrified onlookers during a firebombing of a city, calling them merely jealous of how much better an arsonist the bomber plane is than any of them.
The ethics committee of the university is supposed to have control over the ethics of its researchers. Remember when a research group tried to backdoor the Linux kernel with poisoned patches? It's absolutely correct to raise hell with the university so they give a more forceful reprimand.
Agreed, and I think this is a good thing. The Internet was already full of shills, sockpuppets, propaganda, etc, but now it's really really cheap for anyone to do this, and now it's finally getting to a place where the average person can understand that what they're reading is most likely fake.
I hope this will lead to people being more critical, less credulous, and more open to debate, but realistically I think we'll just switch to assuming that everything we like the sound of is written by real people, and everything opposing is all AI.