
When I write with AI, it feels smooth in the moment, but I’m not really thinking through the ideas. The writing sounds fine, but when I look back later, I often can’t remember why I phrased things that way.

Now I try to write my own draft first, then use AI to help polish it. It takes a bit more effort upfront, but I feel like I learn more and remember things better.



The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.


It's probably worth noting that this is almost always true of weighted randomness.

If what you have is already more than 50% of the way to your desired result, reducing it is more likely to remove the negative parts than the positive ones.

In contrast, whenever the generator is less than 100% reliable at producing the positive parts, adding to the result can increase the negative more than the positive, given a finite time constraint (i.e. any reasonable non-theoretical application).
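To make that asymmetry concrete, here is a minimal Monte Carlo sketch (Python, not from the thread). Everything in it is an illustrative assumption rather than the commenter's actual model: a draft is a bag of "good"/"bad" units, the trimmer spots bad units with better-than-chance accuracy, and the generator emits good units with less-than-perfect accuracy. The numbers (good_fraction, trim_accuracy, gen_accuracy) are made up to match the ">50% good draft, <100% reliable generator" case.

  import random

  def quality(units):
      """Fraction of good units (0.0 for an empty draft)."""
      return units.count("good") / len(units) if units else 0.0

  def trim(units, n_remove, trim_accuracy=0.7):
      """Remove n_remove units; each removal targets a bad unit with
      probability trim_accuracy (assumed better-than-chance trimmer)."""
      units = list(units)
      for _ in range(min(n_remove, len(units) - 1)):
          target = "bad" if random.random() < trim_accuracy else "good"
          if target not in units:
              target = "good" if target == "bad" else "bad"
          units.remove(target)
      return units

  def expand(units, n_new, gen_accuracy=0.6):
      """Append n_new generated units, each good with probability
      gen_accuracy (assumed less-than-perfect generator)."""
      return units + ["good" if random.random() < gen_accuracy else "bad"
                      for _ in range(n_new)]

  random.seed(0)
  trials, draft_len, good_fraction = 10_000, 40, 0.7  # draft already >50% good
  trimmed_q = expanded_q = 0.0
  for _ in range(trials):
      draft = ["good" if random.random() < good_fraction else "bad"
               for _ in range(draft_len)]
      trimmed_q += quality(trim(draft, n_remove=10))
      expanded_q += quality(expand(draft, n_new=10))

  print(f"baseline draft quality:      {good_fraction:.2f}")
  print(f"avg quality after trimming:  {trimmed_q / trials:.3f}")   # rises above 0.70
  print(f"avg quality after expanding: {expanded_q / trials:.3f}")  # pulled toward 0.60

Under these assumed numbers, trimming ends above the draft's baseline quality, while expansion gets pulled toward the generator's own accuracy; with a different choice of parameters the gap would change, but the direction is the point of the sketch.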


> "LLMs are good at reducing text, not expanding it"

You put it in quote marks, but the only search results are from you writing it here on HN. Obviously LLMs are extremely good at expanding text, which is essentially what they do whenever they continue a prompt. Or did you mean that in a prescriptive way - that it would be better for us to use it more for summarizing rather than expanding?


>You put it in quote marks, but the only search results are from you writing it here on HN.

They said it was a rule of thumb, which is a general rule based on experience. In the context of the comment they were replying to, they seem to be saying that if you want to learn and understand something, you should first put in the effort yourself to synthesize your ideas and write out a full essay, then use an LLM to refine, tighten up, and polish it, as opposed to using an LLM as you go to take your core ideas and expand them. Both approaches might produce very good essays, but your understanding will be much deeper if you follow the "LLMs are good at reducing text, not expanding it" rule.


I think this conflates two issues though. It seems obvious to me that, in general, the more time and effort I put into a task, the deeper I will understand it. But it's unclear to me how this aspect of learning by spending time on a task relates to what LLMs are good at.

To take this to a slightly absurd metaphor: it's like someone whose desire to reduce their alcohol consumption leads them to infer the rule of thumb that "waiters are good at bringing food, not drinks".


I think the key is how you define “good” - LLMs certainly can turn small amounts of text into larger amounts effortlessly, but if in doing so the meaningful information is diluted or even damaged by hallucinations, irrelevant info, etc., then that’s clearly not “good” or effective.



