It was funny. On a more serious note, if you work in a field where AI expansion produces "good enough" documents, then I have bad news: that field had too much redundancy in the first place (the same redundancy the models were trained on). Millions of human-written documents carried no new information, and the training's pattern recognition picked up on that. You cannot do the same with historical texts; unless we live in a simulation with predictable random generators, the events are random, and there are no rules like "If the king's name starts with a G, he will likely die in the first week of October."
This needs more attention than it's getting. Perhaps some changes to the landing pages could help?
"outperforms the fastest JSON libraries (that make use of SIMD) by up to 120x depending on the benchmark. It also outperforms schema-only formats, such as Google Flatbuffers (242x). Lite³ is possibly the fastest schemaless data format in the world."
^ This should be a bar graph at the top of the page that shows both serialized sizes and serialization speeds.
It would also be nice to see a JSON representation on the left and a color-coded string of bytes on the right that shows how the data is packed.
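To make that suggestion concrete, here's a minimal sketch of the kind of side-by-side view meant here. The fixed byte layout (u32 id, f64 score, u8 flag) is invented purely for illustration; it is not Lite³'s actual wire format.

```ts
// Left side: the familiar JSON text. Right side: the same value in a
// made-up packed binary layout, shown as hex bytes.
const value = { id: 42, score: 3.5, ok: true };

const jsonText = JSON.stringify(value);

// Hypothetical layout: bytes 0..3 = id (u32 LE), 4..11 = score (f64 LE),
// byte 12 = ok flag (u8).
const buf = new ArrayBuffer(13);
const view = new DataView(buf);
view.setUint32(0, value.id, true);
view.setFloat64(4, value.score, true);
view.setUint8(12, value.ok ? 1 : 0);

const hex = Array.from(new Uint8Array(buf))
  .map((b) => b.toString(16).padStart(2, "0"))
  .join(" ");

console.log(`${jsonText.length} bytes as JSON : ${jsonText}`);
console.log(`${buf.byteLength} bytes packed  : ${hex}`);
```

On the landing page, each hex byte could then be colored by which field it encodes, which is exactly the visualization being suggested.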
As already mentioned in other comments, it doesn't really make sense to compare against JSON parsers, since Lite³ parses, well, Lite³ and not JSON. It serves a different use case, and I think focusing on performance vs. JSON (especially JSON parsers) isn't the best angle for this project.
I'm working on Solarite, a library for doing minimal DOM updates on web components when the data changes, plus other nice features like nested styles and passing constructor arguments to sub-components via attributes.
I've built Solarite, a library that's made vanilla web components a lot more productive IMHO. It allows minimal DOM updates when the data changes, plus other nice features like nested styles and passing constructor arguments to sub-components via attributes.
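For contrast, here's roughly what that plumbing looks like by hand with plain web components; the attribute-to-argument wiring and targeted text-node update below are the boilerplate a library in this space would handle for you. The component and attribute names are made up, and this is standard platform API, not Solarite's.

```ts
// A hand-rolled web component: a "name" attribute acts as a constructor
// argument, and render() touches only one text node instead of rebuilding
// the subtree.
class UserBadge extends HTMLElement {
  static observedAttributes = ["name"];
  private label?: HTMLSpanElement;

  connectedCallback() {
    if (!this.label) {
      const root = this.attachShadow({ mode: "open" });
      this.label = document.createElement("span");
      root.append(this.label);
    }
    this.render();
  }

  // Fires whenever the attribute changes -- the manual version of
  // "passing constructor arguments via attributes".
  attributeChangedCallback() {
    if (this.label) this.render();
  }

  // Minimal update: only the one text node changes.
  private render() {
    this.label!.textContent = `Hello, ${this.getAttribute("name") ?? "anon"}`;
  }
}

customElements.define("user-badge", UserBadge);
```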
There are no properties of matter or energy that can have a sense of self or experience qualia. Yet we all do. Denying the hard problem of consciousness just slows down our progress in discovering what it is.
Even if they do, it can only be transient, and only during the inference process. Unlike a brain, which is constantly undergoing dynamic electrochemical processes, an LLM is just an inert pile of data except when the model is being executed.