
Who thinks they are contributing something when they do crap like that? Set up an AI to bomb a group working for free to do something they value. Gotta be a kid with no sense.


I know a guy who does this. He finds a problem, then tells ChatGPT about it. Then ChatGPT elaborates it into dross. He says look at the magical output, without reading it or bothering to understand it. Then he posts it to the vendor. The vendor gets misled and the original issue is never fixed. Then I have to start the process again from scratch after two weeks have been wasted on it.

The root cause is that LLMs are a damage multiplier for fuckwits. It is literally an attractive hammer for the worst of humanity: laziness and incompetence.

I imagine that could be weaponised quite easily.


> The root cause is that LLMs are a damage multiplier for fuckwits.

Reminds me of Eco's quote about giving the "village idiot" a megaphone, but transposed to the age of AI.-


It's much worse than that. It's giving the village idiot something that turns their insane ramblings into output that is incredibly verbose and sounds credible, but inherits both the original idiot's poor communication and the subtle ramblings of an electronic crackhead.

Bring back the days of "because it's got electrolytes" because I can easily ignore those ones.


To quote another frontpage article, it transforms the village idiot into a "Julius".


For others since it's about to fall off the front page: https://news.ycombinator.com/item?id=42494090


Oh shit I just read that and am utterly horrified because I've been through that and am going through it. I have instantly decided to optimise myself to retire as quickly as possible.


Don't worry, you're not alone. I'm in the same boat. :)


The culture has been driven by village idiots.


Since ...

... at least when villages were prevalent. Or, earlier ... :)


> I imagine that could be weaponised quite easily.

I've been dealing with a vexatious con artist who has been using ChatGPT to dump thousands of pages of garbage on the courts and my legal team.

The plus side is that the output is exceptionally incompetent.


> The plus side is that the output is exceptionally incompetent.

It won't be for long. This is reminiscent of the development of the first rifles, which often jammed or misfired and weren't very accurate at long range. Now look at weapons like the Barrett .50 cal sniper rifle -- that's what AI will look like in 10 years.


Ah, yes, AI jam tomorrow.

(Though, perhaps an unusually pessimistic example of the “real soon now, it’ll be usable, we promise” phenomenon; rifles took about 250 years to go from ‘curiosity’ to ‘somewhat useful’).


I keep hearing this but the current evidence, asymptotic progress and financials say otherwise.


I guess what you are saying would probably have been said by AI skeptics in the 70s, but LLMs provided a quantum leap. Yes, progress is often asymptotic and governed by diminishing returns, but discontinuous breakthroughs must also be factored in.


Please tell me what quantum leap was provided by LLMs. Please inform me of any developments that made current LLMs possible.

I contend that there are none. Witness the actual transformer kernel technologies over the last 20 years and try to find a single new one.

Neural networks? That's '90s technology. Scale is the only new factor I can think of.

This is an investor-dollar driven attempt to use brute-force to deliver "magical" results when the fundamental technology is being mis-represented to the general public, to CTOs, and even to Developers.

This is dishonest and will not end well.


The biggest capability jump comes from semantic search. You can now search based on the content of a text rather than a literal, character-level match.
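
A minimal sketch of what that buys you, assuming the sentence-transformers library and its all-MiniLM-L6-v2 model (a real library and model; the corpus and query here are made up). Note that the top match shares almost no literal substrings with the query:

    # Embedding-based search: rank documents by meaning, not substrings.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    docs = [
        "How do I reset my password?",
        "The server crashed after the last deploy.",
        "Steps to recover a forgotten login credential.",
    ]
    query = "I can't remember my passphrase"

    doc_vecs = model.encode(docs)     # one dense vector per document
    q_vec = model.encode([query])[0]  # same embedding space as the docs

    # Cosine similarity replaces character-level matching.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    for i in np.argsort(-sims):
        print(f"{sims[i]:.3f}  {docs[i]}")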


I am slowly coming 'round to the same conclusion: Word2Vec might be as fundamental as fire - all due caveats aside, of course ...


Nailed it.


Technological advances tend to happen in jumps. We are no doubt approaching a local optimum right now, but it won't be long until another major advancement propels things forward again. We've seen the same pattern in ML for decades.


Please name me one technological advance of major import in the fundamental transformer kernel space that has occurred in the last decade and has any bearing at all on today's LLMs.

I will wait.


The very idea of the Transformer architecture. Surely you've heard of "Attention is all you need".
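
For reference, the core operation that paper introduced is scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V. A pure-numpy sketch, with made-up shapes:

    # Scaled dot-product attention (Vaswani et al., 2017).
    import numpy as np

    def attention(Q, K, V):
        d_k = Q.shape[-1]
        # Similarity of each query to each key, scaled so the softmax
        # doesn't saturate as d_k grows.
        scores = Q @ K.T / np.sqrt(d_k)
        # Row-wise softmax turns scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # Each output row is a weighted mix of the value vectors.
        return weights @ V

    rng = np.random.default_rng(0)
    Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
    print(attention(Q, K, V).shape)  # (4, 8)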


It is unlikely that the output of LLMs will improve. There is no fundamental breakthrough in transformer technology (or anything else) powering today's LLM "revolution".

There is only scale being employed like never before => vast datasets being plowed through, sufficient to provide the current illusion for the less observant humans out there...

10 years from now this current fad of LLMs pretending to be intelligent will look preposterous and unbelievable: "how COULD they all have fallen for such hype, and what a cost of joules/computation... the least deterministic means possible of coming to any result... just wasteful for no purpose..."


On the other hand, firearms were invented in the 10th century, and I don't think we got reliable cartridges until the mid-1800s.

All that time the bow and arrow were long-range, accurate, quiet and worked in the rain. :)

[But yeah, you're right - in our lifetimes technologies have changed immensely.]


And the judge hasn't sanctioned said con artist yet?


He was just handed a prison sentence, but it's suspended for two years on the condition that he doesn't bring any more of these cases. He's left the jurisdiction and has been hopping around extradition havens, so an arrest order would just end the court's ability to influence him.

He's already announced that he's going to attempt an appeal. I expect that, much like with his recently rejected appeal, he'll file another few thousand pages of ChatGPT hallucinations in this one.


Goodness gracious. Are we getting to DDoJ? (Denial of Justice by AI?) ...

... getting to an "AI arms race" where the team with the better AI "wins" - if nothing else by virtue of merely being able to survive the slop and get to the actual material - and then, of course, argue.-


Flooding the court system with frivolous charges and mountains of irrelevant evidence is already a common tactic for stalling justice. Sometimes it's just to make the other side run out of money and give up. Sometimes it's an attempt to overwhelm the jury with BS so they can't be confident in their conclusion. And more recently, we've seen it used to delay a decision until they are elected and therefore immune.


My wife has a coworker like this.

Except instead of bug reports, he just gets some crap code written and sends it to her assuming it can be dropped in and run. (It is often wrong.)


But _why_? What’s his motivation for doing this, vs just writing a proper report?


Well said!


Or it might be a state threat actor trying to tire people out and then plant some stuff somewhere in between. Like Jia Tan, but 100x more often, or getting their own people in to "help" clean up.

You just underestimate evil people. We are long past the "bored kid in a hoodie in his parents' basement".

Any piece of OSS code that might end up used by a valuable (or even not-so-valuable) target is of interest to them.


I have to say that, on a first, naive approach, the whole situation hit me in a very "supply chain attack" way too.-


People do things like this because it makes their gamified GitHub metrics go up.


I call those drive-by PRs. If you work on an even moderately popular project, you'll end up with people who have never contributed before showing up and submitting patches for stuff like typos in your comments that they found with some auto-scanning tool.

As far as I can tell, there are people whose entire GitHub activity is submitting PRs like that. It would be one thing if they were diving into a big codebase, trying to learn it, and wanted to submit a patch for some small issue they found to get acquainted with the maintainers, or just contribute a little as they learned, but they drop one patch like that, then move on to the next project.

I don’t understand the appeal, but to each their own, I guess.


Genuine curiosity - admittedly these sorts of "contributors" probably aren't doing it out of a passion for FOSS or any particular project, but if it's something that fixes an issue (however small), is that actually a net negative for the project?

Maybe I have some sort of bias, but it feels like a lot of the projects I see specifically request help with these sorts of low-hanging-fruit contributions (lumping typos in with documentation).


I don't think these PRs are a net negative for a project - I've never turned one down, anyway. I just don't understand what the contributor gets out of the arrangement, other than GitHub badges. Some theories I've held:

1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc.

2) They are students padding their resumes (making small PRs so they can say they contributed to x-number of open source projects.)

3) It's just the GitHub badges.


>1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc

Yeah that's essentially how people are encouraged to start contributing to open source projects, making small changes and cleaning up documentation and such. It's hard to allow for this while also preventing the other two categories.


>submitting patches for stuff like typos in your comments that they found with some auto scanning tool

Wonder how long it'll be until we see back-and-forth wars between people submitting 'corrections' for things that are just spelled differently between dialects, like we see on Wikipedia.


... and then maintainers having to come up with "style manuals" to codify a project's preference and avoid this ping pong ...


This is exactly why it's a bad thing in general to have a single metric or small group of metrics that can be optimized. It always leads to bad actors using technical tools to game them. But we keep making the same mistake.


You could just do it on fake projects created for metrics as well, so nothing real is harmed :D


They might be looking for some open-source fame. The contribution to their resume is more important than the contribution to the project.


I fixed a single-word typo in a doc string in github.com/golang/go, which resulted in a CONTRIBUTORS entry and an endless torrent of spam from "headhunters".


This was an issue without LLMs too, and it sucks. GH has a tag for "good first issue" which always gets snatched by someone who only cares about the contribution line. Sometimes they just let it sit for weeks because they forgot that they now have to actually do the work.


It's people, usually students, trying to pad out their GitHub activity and CVs.


It's an insidious incentive, in an age where an AI is going to look through your CV and not care much - or be able to tell the difference ...


If the AI was told to care, identifying low-grade or insignificant contributions is well within its capabilities.
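
For what it's worth, a hypothetical sketch of such a screen, assuming the openai Python client; the model name, prompt wording, and triage() helper are placeholders, not a tested recipe:

    # Hypothetical: ask an LLM to flag badge-farming PRs.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def triage(diff: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder choice of model
            messages=[
                {"role": "system",
                 "content": "Classify this PR diff as SUBSTANTIVE or "
                            "TRIVIAL (typo-level / badge farming). "
                            "Answer with one word."},
                {"role": "user", "content": diff},
            ],
        )
        return resp.choices[0].message.content.strip()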


Not in 10 years, when the contributions become more sophisticated. Like many other scenarios of our time, it's an arms race.


If the contributions become sophisticated enough to be actually good, then problem solved, right?


No, because the arms race will consume huge amounts of energy and resources, so the problem isn't solved.


Good point.-


On HackerOne you can get money for a bug report; could that be the reason? I think the first sentence of the report was probably written by a human and the rest by AI. The unnecessary wordiness, the typical AI formatting, and the several paragraphs of detailed "you are absolutely right" explanation are all signs of an LLM.


I've been active on HackerOne for a decade. A good report written by a human already has trouble making it through. These AI-written reports have no chance.


It's a cultural thing. In the absence of opportunities for good education and jobs, cargo-culting kicks in.


I am trying - honestly - to wrap my head around it ...

Who knows. Might be some sort of "distributed attack" against Open Source by some nefarious actor?

I am still thinking about the "XZ Utils" fiasco. (Not implying they are related, anyhow).-


Have you been on the internet? There are plenty of just plain moronic people out there who can produce (with the help of LLMs) ample bullshit to clog approximately any quasi-public information channel.

Such results used to require sophistication and funding, but that is no longer true.


I’d like to inject a personal gripe here, namely: the people who take the time to answer questions on Amazon with “I don’t know.” Why.


Because Amazon sent them an email. When questions go unanswered, Amazon emails some of the prior purchasers asking them the question. These emails are constructed to look like the recipient is personally being asked for help.

So, why? Courtesy, believe it or not. Blame Amazon.


Oh, that makes total sense.


"Elroy was here"


Awful. See comment about "morons" upthread ...


Nah, it's not intentionally malicious - people are just trying to pad their resumes for new roles while unemployed or as students. I did the same years ago (not with AI, but by picking up the low-hanging easy tasks to add a few lines to my CV).


Wonder if you had to give more detail on those experiences during the interviews? How did it go?


Thanks. Yes, this, as a modus operandi, is becoming apparent from the threads.-


No need to go there.

There is a simpler explanation, and it is being discussed in the comments.

Kids trying to farm GitHub fame using LLMs.



