Hacker News

I hadn't heard that, but it would certainly explain why the model made a mess of this task.

Tried it again like this, using a regular prompt rather than a system prompt (with the https://github.com/simonw/llm-hacker-news plugin for the hn: prefix):

  llm -f hn:43825900 \
  'Summarize the themes of the opinions expressed here.
  For each theme, output a markdown header.
  Include direct "quotations" (with author attribution) where appropriate.
  You MUST quote directly from users when crediting them, with double quotes.
  Fix HTML entities. Output markdown. Go long. Include a section of quotes that illustrate opinions uncommon in the rest of the piece' \
  -m qwen3:32b
This worked much better! https://gist.github.com/simonw/3b7dbb2432814ebc8615304756395...


Wow, it hallucinates quotes a lot!


Seems to truncate the input to just 2048 tokens


Oops! That's an Ollama default setting. You can fix it by increasing the num_ctx setting. I'll try running this again.

The num_predict setting controls output size.
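One documented way to raise the limit is to build a model variant with a larger context window via an Ollama Modelfile. A minimal sketch; the variant name qwen3-32k and the 32768 value are my own choices, not from the thread:

```shell
# Write a Modelfile that raises num_ctx above Ollama's 2048-token default
cat > Modelfile <<'EOF'
FROM qwen3:32b
PARAMETER num_ctx 32768
EOF

# Build a named variant from it (requires Ollama to be installed):
#   ollama create qwen3-32k -f Modelfile
# Then point llm at the variant:
#   llm -f hn:43825900 '...' -m qwen3-32k
```

The same PARAMETER mechanism accepts num_predict if you also want to cap or raise the output length.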



