In context, I think this sentence is pretty nicely written, and sounds like something I'd expect of a human.
It's basically saying: LLMs are not structured like databases; their knowledge is not localized. In a database, a fact lives in a specific place; cut that out, lose the fact. In an LLM, everything the model learned in training is smeared across the entire network; cut out a small piece, and not much is lost.
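You can see that property in systems much simpler than an LLM. Here's a minimal numpy sketch of an old-school associative memory built from superposed outer products - a toy stand-in, not how transformers actually store things, and all the names and sizes are made up for illustration. Every stored pair touches every entry of one matrix, and zeroing a random 10% of the entries makes every recall slightly noisier instead of deleting any single fact:

    # Toy "distributed memory": store key->value pairs as superposed
    # outer products, then damage the matrix and watch recall degrade
    # gracefully rather than fact-by-fact.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_facts = 256, 20

    keys = rng.standard_normal((n_facts, d))
    keys /= np.linalg.norm(keys, axis=1, keepdims=True)
    vals = rng.standard_normal((n_facts, d))
    vals /= np.linalg.norm(vals, axis=1, keepdims=True)

    # Every fact contributes to every entry of M.
    M = sum(np.outer(k, v) for k, v in zip(keys, vals))

    def recall_quality(M):
        # Mean cosine similarity between recalled and stored values.
        sims = [np.dot(M.T @ k, v) / np.linalg.norm(M.T @ k)
                for k, v in zip(keys, vals)]
        return np.mean(sims)

    print("intact:    ", recall_quality(M))        # close to 1.0

    # Zero a random 10% of the entries: no single fact disappears,
    # every fact just gets slightly noisier.
    damaged = M * (rng.random(M.shape) > 0.10)
    print("10% zeroed:", recall_quality(damaged))  # only slightly worse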
As a crude analogy, think of a time (or spatial) vs. frequency domain representation of a signal, like a sound or an image. The latter case is particularly illustrative (!). Take a raster image, pick a pixel. Where is the data (color) of that pixel located in the Fourier transform of the image? A little bit in every single pixel of the transformed image. Conversely, pick a block of pixels in the Fourier transform, blank them out, and transform back to the "normal" image - you'll see the entire image got blurred or sharpened, depending on which frequencies you just erased.
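If you want to play with that, here's a sketch of the experiment in numpy - synthetic image so it runs standalone, and the block sizes are arbitrary:

    import numpy as np

    # A toy 64x64 image: smooth gradient plus a sharp-edged square.
    img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
    img[24:40, 24:40] += 1.0

    F = np.fft.fftshift(np.fft.fft2(img))  # center the low frequencies

    # Erase the high frequencies: keep only a small central block.
    lowpass = np.zeros_like(F)
    lowpass[28:36, 28:36] = F[28:36, 28:36]
    blurred = np.fft.ifft2(np.fft.ifftshift(lowpass)).real

    # Erase the low frequencies instead: blank out the central block.
    highpass = F.copy()
    highpass[28:36, 28:36] = 0
    edges_only = np.fft.ifft2(np.fft.ifftshift(highpass)).real

    # In both cases, every output pixel changed - no frequency bin
    # "owned" any single pixel.
    print(np.abs(blurred - img).max(), np.abs(edges_only - img).max())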
So in terms of data locality, a database is to an LLM kinda what an image is to its Fourier transform.
(Of course, the other thing the sentence is trying to communicate is, LLMs are not (a different representation of) databases - they don't learn facts, they learn patterns, which encode meanings.)