You can if you own the copyright to the content. I don't know the state of Linux, but this is a reason the FSF (and many other projects) requires contributors to assign their copyright to it when they submit code.

It also helps when you take an offender to court. If I contribute to a project but don't assign copyright, then the project cannot take offenders to court if my code was copied illegally. The burden is on me to do so.

Of course, all code released prior to the change remains under the original license.


The FSF stopped requiring copyright assignment in 2021.

> Here's what I think. If you have a public blog, it's fair game at an interview. If you write mostly about data science stuff but you apply for a software engineering job, you ought to be prepared to explain the contrast. Understand that, for most top firms, hiring good people and getting them to stick is hard. Most employers will want some assurance that you are serious about the position you're applying for. If you send signals that you might want some other position, be prepared to get asked about those signals.

Great! Let me trawl through all candidates' HN and social media comments and ask why they spend more time talking about politics, movies, and science fiction than about CRUD SW development. They need to justify it!


That's certainly one way of interpreting what I wrote.

My point was that potential employers are not blind to what you put out in the public space. If what you put out would cause a reasonable employer to have questions about your viability as a candidate, you ought to be prepared for those questions. If you're lucky, they'll ask you those questions and you can dispel their concerns.


I would lean towards disallowing those. With "According to a Google search ...", someone can ask for specific links (and indeed, people often say to link to those sources to begin with instead of invoking Google). With "According to AI ... " - why would most readers care what the AI thinks? It's not a reliable source! You might as well say "According to a stranger I just met and don't know ..."

If you're going to say that the AI said X, Y, Z, provide a rationale on why it is relevant. If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.


For reference, the point here isn't to say "what AI thinks", but what you found with the help of AI. The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links, sometimes they're other publications not necessarily a click away. Sometimes I could independently verify them by spending half an hour researching; sometimes I can't do that, but they still seem worthwhile.

> If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.

I think you're seeing this as too black-and-white, and missing the heart of the issue.

The purpose of mentioning AI is to convey the level of (un)certainty as accurately as possible. The most accurate way to do that would often be to mention any use of AI, rather than hiding it.

If AI tells me that it believes X is true because of links A and B that it cites, and I find those links compelling, then I absolutely want to mention that AI gave me those links because I have no clue whether the model had any reason to bias itself toward those sources, or whether alternate links may have existed that stated otherwise.

Whereas if a normal web search just gives links that mention terms from my query, then I get a chance to see the other links too, and I end up being the one who actually compares the contents of the different pages and figures out which one is most convincing.

Depending on various factors, such as the nature of the question and the level of background knowledge I have on the topic myself, one of these can provide a more useful response than the other -- but only if I convey the uncertainty around it accurately.


> The majority of the cases where I would say "according to AI, <blah>" are where <blah> actually does cite sources that I feel appear plausible. Sometimes they're links, sometimes they're other publications not necessarily a click away. Sometimes I could independently verify them by spending half an hour researching; sometimes I can't do that, but they still seem worthwhile.

In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions.

This is true not just in chat, but also for Google AI summaries.

When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

(If you look at my other comments, I'm actually in favor of using LLMs in some capacity for HN comments. Just not in this case.)


>> actually does cite sources that I feel appear plausible.

> In my experience, LLMs hallucinate citations like crazy. Over 50% of the times I've checked, the citation either didn't exist, or it did but didn't support the LLM's assertions.

Note that those are specifically not the cases where the AI is citing "sources that I feel appear plausible."

(I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)

> When the references are more often wrong than not, you can understand why many will simply downvote you for bringing LLM citations into the conversation. Why quote a habitual liar?

To be clear, I do understand both sides of the argument, and I don't think either side is unreasonable. I've also had the experience of being on both sides of this myself, and I don't think there's a clear-cut answer. I'm just hoping to get clarity on what the new policy is as far as this goes. I'm sure it'll be reevaluated either way as time goes on.


> (I also don't find over 50% hallucination to be accurate for Google AI summaries in my experience, but that depends on your queries, and in any case, I digress...)

I should point out that I'm not saying 50% of the AI summaries have an error. Merely that the references it provides me don't state what the summary is claiming. The summary may still be accurate while the references are incorrect.


Anything I post here is always in my own voice - even when I use an LLM. 95% of the time when grammar/spelling gets fixed, it's because my brain lapsed while typing, not because I don't know the grammar well and am using the LLM to shape my voice.

I would wager that this use case is much more prevalent than ones where the LLM changed the comment significantly enough to change one's voice.

I never copy/paste from an LLM into HN. I type everything myself (and I never retype LLM content by hand either). I don't have any automatic tools for inserting LLM content here.[1]

Always, always, always keep in mind that you don't notice these positive use cases, because they are not noticeable by design. So the problematic "clearly LLM" comments you see may well be a small minority of LLM-assisted comments. Don't punish the (majority) "good" folks to limit the few "bad" ones.

Lastly, I often wish we had a rule against calling out others' comments as "AI slop" or the like.[2] It just leads to pointless debates over whether an LLM was used and distracts far more than the comment in question. I'm sure plenty of 100% human-written comments have been labeled as LLM generated.

[1] The dictation one is a slight exception, and I use it only occasionally when health issues arise.

[2] Probably OK for submissions, but not comments.


Oh wow. I did not anticipate that, which is embarrassing given that I wrote this just recently:

https://news.ycombinator.com/item?id=47326351

Yes, please at least have a carveout for accessibility. I definitely have dictated HN comments in the past, and my flow uses LLMs to clean it up. It works, and is awesome when you're in pain.


> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?

Lots of people break HN guidelines. I see it virtually every day.

> And why would you want to "improve your writing" for an HN comment?

Some people like to write well regardless of the medium. Why is that a problem for you?

> I think people here value raw authenticity more than polished writing.

Classic false dichotomy. Asking an LLM for feedback is not making your comment less authentic. As I pointed out elsewhere, it can make your comment more authentic by ensuring that what you had in your head and what you wrote match.

Go and study writing and psychology. For anything of value, it's rare that your first attempt reflects what you meant to say. It's also rare that the first attempt, even if it reflects what you meant, will be absorbed by the recipient as intended. Saying what you mean, and having it understood as you meant it, is a difficult skill.


> Lots of people break HN guidelines. I see it virtually every day.

Yes, and AI won't help here. People will use AI to better break the guidelines.

> Go and study writing and psychology

Is this a case where you should have read the guidelines? Maybe an LLM could have helped you here? Please don't tell me to go study anything; you know what they say about ASSuming.

> Some people like to write well regardless of the medium. Why is that a problem for you?

HN is more like talking than writing. And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

> For anything of value, it's rare that your first attempt reflects what you meant to say.

You can always edit your comment. And in any case, HN is like a live conversation. Imagine if your friend AI-edited their speech in real-time as they talked to you.


Depends on how you use the AI. If you use it a bit like you'd ask a human to proofread your work, AI can actually be quite helpful.

The other important thing you can do is have an AI check your claims before you post. Even with Google and PubMed, a quick check against sources by hand can take 30 minutes or longer, while with AI tooling it takes 5. Guess which one is more likely to actually lead to people checking their facts before they post (even if imperfectly!).

I'm not talking about people who lazily ask the AI to write their post for them, or those who don't actually get the AI to find primary sources. Those people are not being as helpful. Though consider educating them on more responsible tool use as well?


To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

I don't think that's what this new HN guideline is against either.

What I object to is the AI writing your comments for you. I want to engage with other human beings, not the bot-mediated version of them.


> To clarify my thoughts on this, I'm not against using AI to research/hone your arguments. It's no different to using Wikipedia or googling.

> I don't think that's what this new HN guideline is against either.

This is actually how many commenters here are interpreting it, though - and that's what I'm pushing back against. They are actively advocating against using LLMs this way.

I don't have the LLM write the comment for me. I (sometimes) give it my draft, along with all the parents up to the root, and get feedback. I look for specific things (Am I being too argumentative? Am I invoking a logical fallacy? Is it obvious I misinterpreted a comment that I'm replying to? Is my comment confusing? etc). Adding things like (Am I violating an HN guideline?) is fair game.

Earlier today I wrote a lot of comments without using the LLM's feedback. In one particular thread I repeatedly misunderstood the original context of the discussion and wasted people's time. I reposted my draft to the LLM and it alerted me to my problematic comment. Had I used it originally, I would have saved a lot of people time.

Incidentally, since I started doing this (a few months ago), I've only edited my comment once or twice based on its feedback. Most of the time it just tells me my comment looks good.


The problem is that there's a vast range between “using AI to research/hone your arguments” and “AI writing your comments for you”, and between the rule itself and dang's various remarks on it, exactly where the rule draws the line is about as clear as mud.

> Yes, and AI won't help here. People will use AI to better break the guidelines.

AI is a general purpose tool. People will use AI for multiple reasons, including yours. I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

> HN is more like talking than writing.

Says you. Many disagree.

> And LLMs don't help you write well, they help you sound like a clone, which is unwanted.

Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

> Imagine if your friend AI-edited their speech in real-time as they talked to you.

When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.


> I'll wager, though, that your use case is much more challenging to do than mine, and that my use case will dominate in number.

I don't know how comparatively challenging, I only know your use case is now (fortunately!) against HN rules.

> Patently false on both counts. Sorry, you're cherry picking and not addressing the part of my comment that discusses this.

It's not false. It's one of the major reasons people have come to dislike AI written comments and articles. It all ends up sounding the same.

> When a conversation is heated (as it occasionally is on HN), I actually would rather he AI-edit in real time - provided that the output reflects what he intended.

In real life? Sounds like a fucking dystopia. But everyone is free to choose the hell they want to live in.


People who are particular about spelling do not want to write misspelled words! It's not about whether you/others will tolerate it. I have my standards, and I hold to them.

I personally don't use an LLM to spellcheck (browser spellcheck works fine), but I see no problem with someone using an LLM to point out spelling errors.

And while I don't complain about others' spelling errors, I sure do notice them. And if someone writes a long wall of text as one giant paragraph that has lots of spelling/grammatical issues, chances are very high I won't read it.

Some people write very poorly by almost any standard. If an LLM helps the person write better, I'm all for it. There's a world of difference between copy/pasting from the LLM and asking it for feedback.


> I have my standards, and I hold to them.

Spellcheckers exist, you don't need an AI to change your voice.

Also, if you have standards, you can always train yourself to spell better!


> Spellcheckers exist, you don't need an AI to change your voice.

How is using an AI to spell check changing my voice?

Yes, thank you - I know spellcheckers exist, as my comment clearly states. The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making.

> Also, if you have standards, you can always train yourself to spell better!

"You can always ..." is not an argument against alternatives.


Calm down. You're getting defensive, but it's not warranted. I'm not attacking you.

> The amusing thing is that an LLM that had access to the thread would have alerted you to a basic error you're making.

I didn't make the "basic error" of assuming you didn't know spellcheckers existed. I was stressing that since spellcheckers already exist, you don't need an AI assisting your comment-writing. More basic, non-style-altering alternatives exist and are better.

> "You can always ..." is not an argument against alternatives.

The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

Alternatively, if you're lazy then your standards aren't too high.

And yes, this is an argument against the alternative you're suggesting.


> The argument I'm making is that if you care so much about standards you can always hone them yourself instead of taking the lazy way out of having an AI write for you.

It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance. I use code formatters not because I'm too lazy to indent code myself, but because it helps guarantee that it's formatted consistently. I use a stud finder when mounting things to walls not because I'm too lazy to do the “knock on the wall” trick, but because the stud finder is more precise and reliable at it.

I don't use AI to edit my comments, but if I did, it would be not because I'm too lazy to check for all the things I want to avoid putting in my comments, but as an extra layer of assurance on top of what I've already trained myself to do.


> It's pretty clear that in this case the use of AI is not a matter of laziness, but rather quality/consistency assurance

But that's not something anybody wants of you in an informal context such as this (HN). It will flatten your voice and make you sound like a drone. We value a human voice.

Code is different. Outside of hobbies, code is not a form of self-expression. There's a reason why following your company's coding styles & practices is valued in software engineering. Companies value coders being interchangeable with each other; they do not want a "unique voice". I think it's completely unrelated to what we're discussing here.

> I don't use AI to edit my comments

What are we even debating, then?


An LLM telling me I mispeled a word isn't changing my voice. Especially when I know the proper spelling and simply have a typo.

An LLM telling me I omitted a qualifier and that my statement isn't saying what I meant it to say isn't changing my voice - it's ensuring what you see is my voice.


There's a simple solution to the spelling part. Use a spell checker. They seem to work pretty well.

> There's nothing inherently better about the edited version.

Easier to read ==> More likely to be read.

No, it's not saying the same thing, especially if the tool is telling you that your statement is ambiguous and should be rephrased.


Easier to read is mostly related to the predictability of the text. Any time the brain mispredicts the next word, you have to go back and re-read.

Unless you have purposely trained on that specific way of expression, it ain't easier to read.


I don't know why this is confusing. If I forget to put the "not" qualifier in a sentence, do we agree that it can confuse (or worse, mislead) the reader?

I never said "confusing". Just not easier to read, in a relative sense.

I don't think the edited version is easier to read.

I'll ask the same question I asked someone else:

https://news.ycombinator.com/item?id=47342324

You're saying removing ambiguity does not make it easier to read? You're saying using a word that means nothing like what you meant to say is easier to read than using the correct word?

Really?


What are you referring to? What word did the GP use that means nothing like what they meant to say?

OK. My brain farted: I misunderstood the top post to be saying something else, and I misinterpreted your and others' criticisms.

Now here's the thing. I wrote all my prior comments on a machine with no LLM access. On my personal machine, a while ago, I installed a TamperMonkey script that sends my draft, along with all the parents (up to the root), to an LLM for feedback (with a specific prompt). All it does is give feedback (logical errors, etc). So I tried again with one of my comments, and the feedback found several flaws with my comment, ending with this suggestion:

"Considering all this, it might be BETTER to either not reply ..."

Had I had this advice when I was writing those comments, it would have saved me and others a fair amount of time.

This is (mildly) useful. It'd be sad to ban such use.
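
For the curious, here's a rough sketch of what such a userscript can look like. To be clear, this is illustrative, not my exact script: the endpoint and response shape are placeholder, OpenAI-style assumptions, and while ".commtext" is HN's actual class for comment bodies, the rest of the DOM handling is simplified:

    // ==UserScript==
    // @name     HN draft feedback (sketch)
    // @match    https://news.ycombinator.com/*
    // @grant    GM_xmlhttpRequest
    // ==/UserScript==

    // Hypothetical endpoint; substitute your own LLM provider's API.
    const API_URL = "https://api.example.com/v1/chat/completions";

    // Gather the text of every comment above the reply box, root first.
    function threadContext() {
      return [...document.querySelectorAll(".commtext")]
        .map((el) => el.innerText)
        .join("\n---\n");
    }

    // Send the thread plus my draft to the LLM and surface its critique.
    function requestFeedback(draft) {
      const prompt =
        "Here is an HN thread followed by my draft reply. Point out " +
        "logical fallacies, misreadings of parent comments, needless " +
        "argumentativeness, or HN guideline violations. Do NOT rewrite " +
        "the comment.\n\nTHREAD:\n" + threadContext() +
        "\n\nDRAFT:\n" + draft;

      GM_xmlhttpRequest({
        method: "POST",
        url: API_URL,
        headers: { "Content-Type": "application/json" },
        data: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
        // Assumes an OpenAI-style response shape.
        onload: (r) =>
          alert(JSON.parse(r.responseText).choices[0].message.content),
      });
    }

    // Invoke from the console (or wire up a button next to the textarea):
    // requestFeedback(document.querySelector("textarea").value);

Note the prompt asks only for critique, never for a rewrite, so nothing the model produces ends up in the comment itself.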


A more formal register doesn't mean easier to read or understand. For many people the exact opposite is the case.

> A more formal register doesn't mean easier to read or understand.

And who is advocating for a more formal register?


"I have no idea what the answer is, so it must be ..."

A phenomenon I call "Out of ignorance, comes certainty".

