This Post Was Edited by a Rock. Deal with It (alec.is)
4 points by arm32 24 days ago | 4 comments


I haven’t read any of the author’s other posts, so I don’t know if he is always this careful, but I do not mind the level of LLM assistance present in this post.

It becomes a problem when it is obvious that the LLM made a much bigger contribution to the writing, which we see a lot in posts here.

It's the same distinction as LLM-assisted PRs (generally fine) versus LLM-authored PRs (harmful).


I can definitely add this to the list of Headlines That Don't Make Me Click.


I don't have a "problem" with AI being used in this fashion. That being said, this article (and others on the blog) sounds quite generic. It's characterized by the staccato "I wanted this. Then this. Also this." sentence structure and headings like "The Problem" and "What It Does".

The thing about an editor is that if you're not careful, your voice is lost. That's fine if the publication you're writing for has a distinctive voice or you have a specific style in mind; this article [1] describes the "New Yorker" voice as an example:

>The New Yorker sort of voice—or rather, the New Yorker voice I was using—is one that sounds on top, or ahead, of the material under discussion. It is a voice of intelligent curiosity; it implies that the writer has synthesized a great deal of information; it confidently takes readers by the hand, introduces them to surprising characters, recounts dramatic scenes, and leads them through key ideas and issues. The voice narrates the material in the first-person and describes the researcher conducting the research, encountering people, reacting to situations, thinking thoughts. The voice is smart-sounding. It is an effective voice for a lot of long-form journalism...

The "default" LLM voice isn't one that I find particularly appealing. For lack of a better term, it has these "zingers" every third or fourth sentence that, if you were writing a spammy piece, would be bolded/italicized. It also reads like the LLM has no faith in the reader's intelligence, or that it's trying too hard to make you feel smart.

This article has that feel to it. I'm not saying it was written by an LLM; I trust that the author isn't lying about only using it for editing. But it has that same style and voice that spammy LinkedIn/Facebook posts have.

[1]: https://www.publicbooks.org/ditching-the-new-yorker-voice/


AI is just another tool that allows me to go deeper into the accuracy of the story I am writing. In my last book I used AI notably to tell me how long a current state-of-the-art (2024) computer would take to decode a cipher encrypted with a one-time pad, if the pad used the text of one of the ten most popular books available in World War 2; my guesstimate was a bit off, it seems! Another time I gave it my draft blog post and asked it to fact-check it. I also use it when writing to check that my plot is not too similar to one that has already been written.
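As an aside, the scheme described here (a pad drawn from a published book) is technically a running-key cipher rather than a true one-time pad, since the key is not random; that is precisely why it is breakable at all. A minimal sketch of the idea, with illustrative function names (not from any particular source):

```python
# Running-key cipher: XOR each plaintext byte with the text of a book.
# A true one-time pad would use a random, never-reused key instead.

def running_key_encrypt(plaintext: bytes, book_text: bytes, offset: int = 0) -> bytes:
    """XOR each plaintext byte with the book text starting at `offset`."""
    if offset + len(plaintext) > len(book_text):
        raise ValueError("book text too short to serve as pad")
    return bytes(p ^ book_text[offset + i] for i, p in enumerate(plaintext))

def running_key_decrypt(ciphertext: bytes, book_text: bytes, offset: int = 0) -> bytes:
    """XOR is its own inverse, so decryption is the same operation."""
    return running_key_encrypt(ciphertext, book_text, offset)

book = b"It was the best of times, it was the worst of times..."
msg = b"MEET AT DAWN"
ct = running_key_encrypt(msg, book)
assert running_key_decrypt(ct, book) == msg
```

Because the "pad" is natural-language text, the ciphertext inherits its statistical structure, which is what makes attacks (and the commenter's timing question) meaningful at all.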

I usually give two different AIs the same prompt for nuance. My problem is still that they tend to drivel on, as if a one-word answer is not good enough. Still, I would rather have them than not.



