You're right, but the idea of looking things up instead of computing them can be useful when we are constrained by the available compute power. I'm not talking about simple lookup tables, of course, but if you look at recent trends in large foundation models, there's a lot of interest in efficient access to external information, or in ways to attend to the inputs selectively rather than in an all-to-all fashion (e.g. landmark attention tokens).
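To make the "selective rather than all-to-all" point concrete, here's a toy sketch (my own illustration, not landmark attention itself): each query attends only to its top-k highest-scoring keys, so the softmax mass is spent on a small lookup-like subset instead of the whole sequence.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def topk_attention(q, k, v, top_k=4):
    """q: (nq, d); k, v: (nk, d). Each query attends only to its top_k keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (nq, nk) similarity scores
    # Indices of everything EXCEPT each row's top_k scores...
    drop = np.argpartition(scores, -top_k, axis=-1)[:, :-top_k]
    masked = scores.copy()
    # ...get masked out, so the softmax only sees top_k entries per row.
    np.put_along_axis(masked, drop, -np.inf, axis=-1)
    return softmax(masked) @ v  # (nq, d)

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8))
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 8))
out = topk_attention(q, k, v, top_k=4)
print(out.shape)  # (2, 8)
```

Real schemes like landmark tokens pick the subset in a learned, hierarchical way rather than by exact top-k scoring, but the compute story is the same: you pay for a small retrieval instead of the full quadratic interaction.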
Oh yeah, I didn't see the current discussion as related to that, but I find the topic of fact databases for LLMs pretty interesting; thanks for drawing the analogy.