
> A useful resource is still things like the OpenAI Cookbook, that is a decent collection of a lot of the things in this article

By far, the best resource I've found is the Prompt Engineering Guide: https://www.promptingguide.ai/

> you can't help but wonder that they'll move it behind their API eventually with a 'session id' in the end

For in-context learning, I think it is fair to expect 100k to 500k context windows soon. OpenAI is already at 32k.



> By far, the best resource I've found is the Prompt Engineering Guide: https://www.promptingguide.ai/

Agreed, that is a good resource for sure. For tooling I like https://promptmetheus.com/ but any pun name gets bonus points from me.

> For in-context learning, I think it is fair to expect 100k to 500k context windows sooner. OpenAI is already at 32k.

It has been interesting to see that window increase so quickly. The biggest constraint on LLM context is pay-per-token pricing if you don't run your own model, so you have to wonder whether that pricing model will survive given how this is trending. Since calls are stateless, resending the entire context on every request seems wasteful enough that OpenAI will likely encroach on the storage side as well and offer sessions.


It is interesting to see the context window size increasing. The time complexity of attention is quadratic in window size, though - ouch!
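A minimal sketch of where that quadratic cost comes from: naive self-attention scores every token against every other token, so an n-token window produces an n x n matrix. The dimensions here are illustrative, not tied to any particular model.

```python
import numpy as np

def attention_matrix_size(n: int, d: int = 64) -> int:
    """Naive self-attention over n tokens builds an n x n score matrix,
    so time and memory grow quadratically with the context window."""
    q = np.random.rand(n, d)  # queries, one d-dim vector per token
    k = np.random.rand(n, d)  # keys
    scores = q @ k.T          # (n, n) matrix: the quadratic term
    return scores.size        # n * n entries

# Doubling the window quadruples the attention matrix:
print(attention_matrix_size(2048))  # 4194304
print(attention_matrix_size(4096))  # 16777216
```

This is why going from a 32k to a 500k window is not just a 16x jump in cost for vanilla attention, and why so much research goes into sub-quadratic approximations.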


Can we stop calling it "in-context learning" and call it what it is, zero-shot/one-shot/few-shot prompting instead?

Learning implies that the underlying weights of the LLM changed. They didn't.


The term may be a misnomer, but for better or for worse, few-shot prompting is synonymous with in-context "learning" (inference?): https://www.promptingguide.ai/techniques/fewshot / https://archive.is/D4cIW
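To make the terminology concrete: few-shot "in-context learning" just means prepending labeled examples to the prompt at inference time; the model's weights never change. The sentiment task and examples below are made up for illustration.

```python
# Hypothetical few-shot prompt builder; the task and examples
# are invented for illustration, not from any real dataset.
examples = [
    ("This movie was fantastic!", "positive"),
    ("Utterly boring and too long.", "negative"),
]

def build_few_shot_prompt(examples, query):
    """Format (text, label) pairs as demonstrations, then append
    the unlabeled query for the model to complete."""
    blocks = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(blocks)

prompt = build_few_shot_prompt(examples, "I would watch it again.")
print(prompt)
```

Zero-shot is the same prompt with an empty examples list; one-shot uses a single pair. Nothing is "learned" in the gradient-descent sense either way.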





