There is no chunking built into the postgres extension yet, but we are working on it.
It does check the context length of the request against the limits of the chat model before sending the request, and it optionally lets you auto-trim the least relevant documents out of the request so that it fits the model's context window. IMO it's worth spending time getting chunks prepared, sized, and tuned for your use case, though. There are some good conversations above discussing methods around this, such as using a summarization model to create the chunks.
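To make the auto-trim idea concrete, here's a rough sketch of how that kind of trimming can work: sort retrieved documents by relevance, then keep adding them until the token budget runs out. Everything here is hypothetical (the function names, the tuple shape, and the whitespace "tokenizer" are all stand-ins, not the extension's actual API):

```python
def count_tokens(text: str) -> int:
    # Crude whitespace approximation; a real implementation would use
    # the model's actual tokenizer (e.g. tiktoken for OpenAI models).
    return len(text.split())

def trim_to_fit(question: str, docs: list[tuple[float, str]], max_tokens: int) -> list[str]:
    """Keep the most relevant docs that fit in the remaining token budget.

    docs: list of (relevance_score, text), higher score = more relevant.
    """
    budget = max_tokens - count_tokens(question)
    kept, used = [], 0
    # Walk docs from most to least relevant, skipping any that overflow.
    for score, text in sorted(docs, key=lambda d: d[0], reverse=True):
        cost = count_tokens(text)
        if used + cost <= budget:
            kept.append(text)
            used += cost
    return kept

docs = [
    (0.9, "most relevant doc"),
    (0.2, "barely related doc " * 50),  # too big for the budget below
    (0.6, "somewhat relevant doc"),
]
print(trim_to_fit("what is X?", docs, max_tokens=20))
# → ['most relevant doc', 'somewhat relevant doc']
```

The tradeoff to be aware of: dropping whole documents is coarse, which is part of why well-sized chunks matter so much; smaller chunks let the trimming step discard less useful context at a finer granularity.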