Hm... Why wouldn't people just paste sections of the book into the "raw" model in the playground (GPT instead of ChatGPT) and see if it completes the text correctly? Is the concern that ChatGPT may have used the book for training data but not the original LLM?
edit: I meant to say "used the book for chat finetuning/RLHF but not the original LLM". Also, I saw one example of OpenAI's model regurgitating a NYT article, and it was indeed GPT-4, not ChatGPT.
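
To be concrete, here's a rough sketch of the kind of check I had in mind, assuming a base (non-chat) completion model like davinci-002 is reachable through the OpenAI Python SDK; the excerpt, continuation, and similarity measure are all placeholders, not taken from any real test:

    # Sketch: prompt a base completion model with the start of a passage and
    # compare what it generates to the passage's real continuation.
    from difflib import SequenceMatcher
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical excerpt: the opening of a passage, plus the text that
    # actually follows it in the original work.
    prompt_text = "It was the best of times, it was the worst of times, "
    true_continuation = "it was the age of wisdom, it was the age of foolishness"

    resp = client.completions.create(
        model="davinci-002",   # base completion model, no chat finetuning
        prompt=prompt_text,
        max_tokens=40,
        temperature=0,         # near-deterministic output
    )
    completion = resp.choices[0].text

    # Rough similarity between the model's completion and the real text.
    # Ratios near 1.0 across many passages would suggest memorization.
    ratio = SequenceMatcher(None, completion.strip(), true_continuation).ratio()
    print(f"model completion: {completion!r}")
    print(f"similarity to real continuation: {ratio:.2f}")

If the base model reproduces long stretches near-verbatim, that would point at the pretraining data rather than anything added during chat finetuning.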