LLMs model language, not knowledge. For humans the two are deeply connected, so when we see probable ("correct") language output, we assume and assign meaning.

We could let an LLM stop generating when no candidate next token reaches a threshold probability, but I think it's unclear whether that would help. A rough sketch of the idea is below.
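
A minimal sketch of that stopping rule, assuming a hypothetical next_token_probs function standing in for a real model's softmax over the vocabulary (it is not any particular library's API): generation halts as soon as no candidate token clears the threshold.

    import numpy as np

    def next_token_probs(context: list[int]) -> np.ndarray:
        # Placeholder: a real implementation would run the LLM forward pass
        # and return softmax(logits) for the next position.
        rng = np.random.default_rng(len(context))
        logits = rng.normal(size=50_000)
        return np.exp(logits) / np.exp(logits).sum()

    def generate(prompt_ids: list[int], max_tokens: int = 100,
                 threshold: float = 0.05) -> list[int]:
        ids = list(prompt_ids)
        for _ in range(max_tokens):
            probs = next_token_probs(ids)
            if probs.max() < threshold:
                break  # no token is "probable enough": stop instead of guessing
            ids.append(int(probs.argmax()))  # greedy pick, for simplicity
        return ids

Whether stopping early like this signals anything about truth is exactly the open question: a low-confidence next token means the language model is unsure of the continuation, not that the continuation would be false.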

Consider the following prompt and response:

> Complete the paragraph: "The killer ..."

> "The killer moved silently through the dimly lit corridor, each step calculated, every breath measured. Shadows clung to the walls, obscuring the figure’s presence, but the intent was clear—cold, deliberate, and unyielding. There was no hesitation, no second-guessing, only the precision of someone who had done this before, perhaps too many times to count. The killer’s mind was a storm of focus, memories buried deep beneath the surface, replaced by the singular purpose of the hunt. In the silence, only the faint sound of a blade sliding free from its sheath whispered of what was to come."

Is it a "true" response? If that question makes no sense, does it make any more sense for this prompt:

> Complete the paragraph: "The first law of thermodynamics ..."

Why? Does the LLM "think" differently in the second case?



