
Relatedly, there was a major controversy at work recently over the propriety of adding something like this to a lengthy email discussion:

> Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...

Someone called them out for it; several others defended it. It was brought up in one team's retro, and opinions were divided and very contentious, ranging from "the summary helped make sure everyone had the same understanding, and the person who posted it was being conscientious" to "the summary was a pointless distraction, and including it was an embarrassing admission of incompetence."

Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.



I think the attribution itself is a certain form of cowardice. If you were actually confident the summary was correct, you'd incorporate it directly. Leaving in the "Copilot says" is an implicit attempt to weasel out of taking responsibility for it.


It's probably just transparency, because the summary will be written in a different voice and sound AI-ish either way.

If I were to include AI-generated content in my communication, I'd also make it clear, since people might guess it anyway.


But if you haven't personally verified the summary is accurate then you should not share it. And if you have verified it, you don't need to disclose.


I'd push back on this: I wouldn't share something I haven't checked, yet my natural writing style is nothing like Gemini's, so having a whole paragraph in a different style in my mail requires a disclaimer (I don't want people second-guessing what I'm quoting, or whether I had some different intent in writing that paragraph).

Rewriting it in my own words would resolve the issue, but then why am I even using an AI in the first place?


I see it more as a form of honesty, though maybe also laziness if they weren't willing to edit the summary, or write it themselves.


LLMs aren't even that good at summarizing poorly structured text, like email discussions. They can certainly cherry-pick bits and pieces and make a guess at the overall topic, but my experience has been that they're poor at identifying what's most salient. They get particularly confused when the input is internally inconsistent, like when participants on a mailing list disagree about a topic or submit competing proposals.


It is an admission of incompetence. If you need a summary, why don't you write it yourself? Moreover, anyone nowadays can easily create a ChatGPT summary if necessary. It is just like pasting a page of Google search results into your writing.


Maybe your co-worker will see the responses here and learn their lesson.

Nobody will call you a lazy and incompetent coward for taking the default option: Hit reply-all, write your one-sentence response above all 50 quoted emails, hit send.


I've noticed that even on here, which is generally extremely bullish on LLMs and AI in general, people get instantly downvoted into oblivion for LLM copypasta in comments. Nobody wants to read someone else's slop.


I often find Copilot summaries to be more or less an attempt at mansplaining a simple change. If my tiny PR with a one-line description requires Copilot to output a paragraph of text about it, that's not a summary; it's simply time wasted on someone who loves to talk.


How is mansplaining related to this? And saying that summaries of already short information are a waste of time is not really relevant to someone talking about summaries of long and probably repetitive/hard to read discussions.


I checked your website after this and wasn't disappointed. Funny stuff.


Hah, thanks! Haven't touched it in a long time, almost forgot about the username.



