Who thinks they're contributing something when they do crap like that? Setting an AI loose on a group of people working for free on something they value. Gotta be a kid with no sense.
I know a guy who does this. He finds a problem, then tells ChatGPT about it. ChatGPT elaborates it into dross. He says "look at the magical output", without reading it or bothering to understand it, then posts it to the vendor. The vendor gets misled and the original issue never gets fixed. Then I have to start the process again from scratch after two weeks have been wasted on it.
The root cause is that LLMs are a damage multiplier for fuckwits. It is literally an attractive hammer for the worst of humanity: laziness and incompetence.
It's much worse than that. It's giving the village idiot something that turns their insane ramblings into output that is incredibly verbose and sounds credible, but inherits both the original idiot's poor communication and the subtle ramblings of an electronic crackhead.
Bring back the days of "because it's got electrolytes" because I can easily ignore those ones.
Oh shit, I just read that and am utterly horrified, because I've been through that and am going through it now. I have instantly decided to optimise for retiring as quickly as possible.
> The plus side is that the output is exceptionally incompetent.
It won't be for long. This is reminiscent of the development of the first rifles, which often jammed or misfired and weren't very accurate at long range. Now look at weapons like the Barrett .50 cal sniper rifle -- that's what AI will look like in 10 years.
(Though, perhaps an unusually pessimistic example of the “real soon now, it’ll be usable, we promise” phenomenon; rifles took about 250 years to go from ‘curiosity’ to ‘somewhat useful’).
I guess what you are saying would probably have been said by AI skeptics in the 70s, but LLMs provided a quantum leap. Yes, progress is often asymptotic and governed by diminishing returns, but discontinuous breakthroughs must also be factored in.
Please tell me what quantum leap was provided by LLMs.
Please inform me of any developments that made current LLMs possible.
I contend that there are none. Witness the actual transformer kernel technologies over the last 20 years and try to find a single new one.
Neural networks? That's '90s technology.
Scale is the only new factor I can think of.
This is an investor-dollar-driven attempt to use brute force to deliver "magical" results while the fundamental technology is misrepresented to the general public, to CTOs, and even to developers.
Technological advances tend to happen in jumps. We are no doubt approaching a local optimum right now, but it won't be long until another major advancement propels things forward again. We've seen the same pattern in ML for decades.
Please name one major technological advance in the fundamental transformer kernel space from the last decade that has any bearing at all on today's LLMs.
It is unlikely that the output of LLMs will improve.
There is no fundamental breakthrough in transformer technology (or anything else) powering today's LLM "revolution".
There is only scale being employed like never before: vast datasets being plowed through, which is sufficient to provide the current illusion for the less observant humans out there...
10 years from now this current fad of LLMs pretending to be intelligent will look preposterous and unbelievable: "how COULD they all have fallen for such hype, and at what cost in joules per computation... the least deterministic means possible of coming to any result... just wasteful, for no purpose..."
He was just handed a prison sentence, but it's suspended for two years on the condition that he doesn't bring any more of these cases. He's left the jurisdiction and has been hopping around extradition havens, so an arrest order would just have the effect of ending the court's ability to influence him.
He's already announced that he's going to attempt to appeal. I expect that, similar to his recently rejected appeal, he'll file another few thousand pages of ChatGPT hallucinations in this one.
Goodness gracious. Are we getting to DDoJ? (Denial of Justice by AI?) ...
... getting to an "AI arms race" where the team with the better AI "wins" - if nothing else by virtue of merely being able to survive the slop and get to the actual material - and then, of course, argue.
Flooding the court system with frivolous charges and mountains of irrelevant evidence is already a common tactic for stalling justice. Sometimes it's just to make the other side run out of money and give up. Sometimes it's an attempt to overwhelm the jury with BS so they can't be confident in their conclusion. And more recently, we've seen it used to delay a decision until they are elected and therefore immune.
Or it might be a state threat actor trying to tire people out and then plant some stuff somewhere in between. Like Jia Tan, but 100x more often, or getting their people to "help" clean up.
You just underestimate evil people. We are long past "bored kid in his parents' basement in a hoodie".
Any piece of OSS code that might end up used by a valuable, or even not so valuable, target is of interest to them.
I call those drive-by PRs. If you work on an even moderately popular project, you’ll end up with people showing up - who have never contributed before - and submitting patches for stuff like typos in your comments that they found with some auto scanning tool.
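For the curious, those scanners take minutes to write. Here's a minimal sketch in Python, assuming nothing more than a toy misspelling dictionary (real tools like codespell ship thousands of entries; the word list and file paths here are purely illustrative):

    # Toy misspelling scanner, roughly what these drive-by tools do.
    # The dictionary is a made-up sample, not any real tool's list.
    import re
    import sys

    MISSPELLINGS = {"recieve": "receive", "seperate": "separate", "teh": "the"}

    def scan(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                for word in re.findall(r"[A-Za-z]+", line):
                    fix = MISSPELLINGS.get(word.lower())
                    if fix:
                        print(f"{path}:{lineno}: {word} -> {fix}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)

Point it at a checkout and it spits out candidate typos; turning each hit into a one-line PR is exactly the low-effort loop I'm describing.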
As far as I can tell, there are people whose entire GitHub activity is submitting PRs like that. It would be one thing if they were diving into a big codebase, trying to learn it, and wanted to submit a patch for some small issue they found to get acquainted with the maintainers, or just to contribute a little as they learned. Instead, they drop one patch like that and then move on to the next project.
I don’t understand the appeal, but to each their own, I guess.
Genuine curiosity - admittedly these sorts of "contributors" probably aren't doing it out of a passion for FOSS or any particular project, but if it's something that fixes an issue (however small), is that actually a net negative for the project?
Maybe I have some sort of bias, but it feels like a lot of the projects I see specifically request help with these sorts of low-hanging-fruit contributions (lumping typos in with documentation).
I don't think these PRs are a net negative for a project - I've never turned one down, anyway. I just don't understand what the contributor gets out of the arrangement, other than GitHub badges. Some theories I've held:
1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc.
2) They are students padding their resumes (making small PRs so they can say they contributed to x-number of open source projects.)
>1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc
Yeah, that's essentially how people are encouraged to start contributing to open source projects: making small changes, cleaning up documentation, and such. It's hard to allow for this while also preventing the other category.
>submitting patches for stuff like typos in your comments that they found with some auto scanning tool
Wonder how long it'll be until we see wars back and forth between people submitting 'corrections' for things that are just spelled differently between dialects, like we see on Wikipedia.
This is exactly why it's a bad thing in general to have a single metric or small group of metrics that can be optimized. It always leads to bad actors using technical tools to game them. But we keep making the same mistake.
This was an issue without LLMs too, and it sucks. GitHub has a "good first issue" label, which always gets snatched by someone who only cares about the contribution line. Sometimes they just let it sit for weeks because they forgot that they now have to actually do the work.
On HackerOne you can get money for a bug report; could that be the reason? I think the first sentence of the report was probably written by a human and the rest by AI. The report is unnecessarily wordy, has typical AI formatting, and its several paragraphs of detailed "you are absolutely right" explanation are signs of an LLM.
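Those tells are mechanical enough that you could triage on them. A toy heuristic sketch, with phrases and thresholds invented for illustration (this is not a real detector):

    # Toy triage heuristic for the signals described above: stock LLM
    # phrases plus heavy bullet/heading formatting. Thresholds made up.
    SIGNS = ("you are absolutely right", "i hope this helps", "great question")

    def looks_llm_written(report: str) -> bool:
        text = report.lower()
        phrase_hits = sum(text.count(s) for s in SIGNS)
        bullets = sum(1 for line in report.splitlines()
                      if line.lstrip().startswith(("-", "*", "#")))
        return phrase_hits >= 1 or bullets >= 10

It would misfire constantly, of course; the point is just that the tells are formulaic.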
Have you been on the internet? There are plenty of just plain moronic people out there producing (with the help of LLMs) more than enough bullshit to clog approximately any quasi-public information channel.
Such results used to require sophistication and funding, but that is no longer true.
Because Amazon sent them an email. When questions go unanswered, Amazon emails some of the prior purchasers asking them the question. These emails are constructed to look as if the recipient is personally being asked for help.
So, why? Courtesy, believe it or not. Blame Amazon.
Nah, it's not intentionally malicious - people are just trying to pad their resumes for new roles while unemployed or as students. I did the same years ago (not with AI, but by picking up the low-hanging easy tasks to add a few lines to my CV).