LLMs are not human; they see the whole context window at once. Frankly, it’s ridiculous to assume otherwise.
I’ll reiterate what I said before: put the whole source of the new library in the context window and tell the LLM to use it. It will, at least if it’s Claude.
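Concretely, something like this minimal sketch (the directory path and model name are placeholders, not anything from the thread; it just concatenates the library source into the prompt via the Anthropic Python SDK):

```python
# Minimal sketch: dump a library's source into the context window and ask
# the model to write code against it. Path and model name are placeholders.
from pathlib import Path
import anthropic

lib_source = "\n\n".join(
    f"// {p}\n{p.read_text()}"
    for p in sorted(Path("vendor/newlib").rglob("*.js"))
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whatever model you have
    max_tokens=4096,
    system="Use ONLY the APIs defined in the library source provided by the user.",
    messages=[{
        "role": "user",
        "content": f"Library source:\n{lib_source}\n\n"
                   "Write a small example program that uses this library.",
    }],
)
print(msg.content[0].text)
```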
Attention works better on smaller contexts since there are fewer confounding tokens, so even if the LLM can see the entire context, it's better to keep the amount of confounding context low. And at some point the source code will exceed the size of the context window; even the newer models with millions of tokens of context can't hold the entirety of many large codebases.
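A quick way to sanity-check that before dumping everything in is a back-of-the-envelope token count. Rough sketch only: the chars-per-token ratio and the context budget below are assumptions, not a real tokenizer or a specific model's limit.

```python
# Rough sketch: estimate whether a codebase fits in a given context budget.
# ~4 chars per token is a crude rule of thumb, and the 200_000-token budget
# is an assumed example limit, not any particular model's.
from pathlib import Path

CONTEXT_BUDGET_TOKENS = 200_000  # assumed limit for illustration

total_chars = sum(
    len(p.read_text(errors="ignore"))
    for p in Path("src").rglob("*")
    if p.is_file() and p.suffix in {".py", ".js", ".ts"}
)
est_tokens = total_chars // 4

print(f"~{est_tokens:,} tokens; fits: {est_tokens < CONTEXT_BUDGET_TOKENS}")
```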
Of course, but OP’s 1kloc is nowhere near any contemporary limit. Refusing to use the tool for what it’s designed for because it isn’t designed for a harder problem is… unwise.
I have experienced quite a few mistakes by Claude as documentation grows larger (and not necessarily large by any objective standard). E.g., some time ago I fed the whole JS documentation for some sensors into the context window and asked it to generate code. The documentation specifically mentioned that the runtime does not fully support ES6, and explicitly that it does not support const. Claude did not care and used const anyway. And many times I have seen Claude make syntax mistakes in a (much less common than JS or Python) language, using constructions that might make sense in some other language, but not in that one. I have inserted instructions in system prompts not to make those specific mistakes and told it to make sure the syntax is valid for language X, but Claude still makes the same mistakes once in a while. Negative prompts are hard, especially when they probably go against a huge chunk of the training set.
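One workaround, rather than relying on the negative instruction, is to mechanically check the generated code and re-prompt when it uses something the target runtime can't handle. A minimal sketch, where the banned-pattern list is just an illustrative assumption for the const/ES6 case:

```python
# Sketch of a post-generation check instead of a negative prompt:
# scan the model's output for constructs the target runtime doesn't support.
# The banned-pattern list is illustrative, not exhaustive.
import re

BANNED = {
    "const declaration": re.compile(r"\bconst\b"),
    "arrow function":    re.compile(r"=>"),
    "template literal":  re.compile(r"`"),
}

def find_unsupported(js_code: str) -> list[str]:
    """Return the names of banned constructs found in the generated JS."""
    return [name for name, pat in BANNED.items() if pat.search(js_code)]

generated = "const x = readSensor();"  # pretend this came from the model
problems = find_unsupported(generated)
if problems:
    print("Regenerate or fix; found:", ", ".join(problems))
```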