Their model is overfitted. If you type "What the fuck did you" it will output the Navy Seal copypasta almost verbatim. No wonder it sometimes generates human-like text: in some cases it's literally spewing out its training examples with minimal or no changes.
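A rough way to quantify this kind of regurgitation: measure the longest verbatim run shared between the model's output and the suspected source text. The strings below are hypothetical stand-ins; a real test would compare an actual model completion against the full copypasta.

```python
from difflib import SequenceMatcher

# Hypothetical suspected source text and model completion (placeholders).
copypasta = "What the fuck did you just say about me, you little"
model_output = "What the fuck did you just say about me, you little pal"

# Longest contiguous run of characters shared by the output and the source.
match = SequenceMatcher(None, model_output, copypasta).find_longest_match(
    0, len(model_output), 0, len(copypasta)
)
overlap = match.size / len(model_output)  # fraction of output copied verbatim
print(f"longest shared run: {match.size} chars ({overlap:.0%} of output)")
```

If the overlap fraction is near 1.0, the "generation" is mostly a memorized copy rather than novel text.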
Yeah. It seems like there is a big difference between an AI that can spit out human-like responses, and an AI that can understand what it’s saying. In other words, can you ask the AI to “elaborate” on its idea? If not, it’s just a talking point regurgitator.
It might take a couple of tries. But the idea is amusing: the model is big enough to have long runs of known text embedded in it, so it's really just holding on to actual text. That would be funny.