Hacker News | eric-p7's comments

Why is it 2026 and I still can't apt install deno?


IMHO, software that moves fast shouldn't be apt installed.

You end up with old versions as the default installs, and they're hard to upgrade.


Indeed. I'd say the bigger question is why Debian packages yt-dlp at all, even though it's basically guaranteed to be unusably far out of date.


I'm interested in hearing about successes and failures of using this.


Reminds me of Lite3 that was posted here not long ago:

https://github.com/fastserial/lite3


"Chat, expand these 3 points into 10 pages."

Later, at someone else's desk:

"Chat, summarize these 10 pages into 3 points."


It was funny. On a more serious note: if someone works in a field where AI expansion produces "good enough" documents, then I have bad news for them - that field had too much redundancy in the first place (the same redundancy the models were trained on). Millions of human-written documents created no new information, and the training process picked up on that pattern. You cannot do the same with historical texts; unless we live in a simulation with predictable random generators, the events are random, and there are no rules like "if the king's name starts with a G, he will likely die in the first week of October."


This needs more attention than it's getting. Perhaps some changes to the landing page could help?

"outperforms the fastest JSON libraries (that make use of SIMD) by up to 120x depending on the benchmark. It also outperforms schema-only formats, such as Google Flatbuffers (242x). Lite³ is possibly the fastest schemaless data format in the world."

^ This should be a bar graph at the top of the page showing both serialized sizes and speeds.

It would also be nice to see a JSON representation on the left and a color-coded string of bytes on the right showing how the data is packed.

Then the explanation follows.


As already mentioned in other comments, it doesn't really make sense to compare against JSON parsers, since Lite3 parses, well, Lite3 and not JSON. It serves a different use case, and I think focusing on performance vs. JSON (especially JSON parsers) isn't the best angle for this project.


I'm working on Solarite, a library for doing minimal DOM updates on web components when their data changes. It also has other nice features, like nested styles and passing constructor arguments to sub-components via attributes.

https://github.com/Vorticode/solarite
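
To give a rough idea of the attribute-passing part in plain vanilla terms: the sketch below is not Solarite's actual API, just the general pattern with made-up names, reading the attribute in connectedCallback rather than as a true constructor argument.

    // Not Solarite's actual API - just the general pattern, vanilla style.
    // <user-card> and its "name" attribute are made-up names for illustration.
    class UserCard extends HTMLElement {
      connectedCallback() {
        const name = this.getAttribute('name') ?? 'anonymous';
        this.textContent = `Hello, ${name}!`;
      }
    }
    customElements.define('user-card', UserCard);

    // Usage: the parent "passes an argument" to the sub-component as an attribute.
    document.body.innerHTML = '<user-card name="Eric"></user-card>';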


Woah, definitely looking into this. This is exactly how I created https://bid-euchre.com

Native custom web components that render different parts of themselves based on attribute changes.

Nice to see other people with the same idea! It’s so refreshing to build with.
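
For anyone who hasn't tried this pattern, here's a minimal sketch using only the standard Custom Elements API (observedAttributes / attributeChangedCallback); the element and attribute names are made up for illustration.

    // A component that re-renders only the part tied to the attribute that changed.
    class ScoreBoard extends HTMLElement {
      static observedAttributes = ['home', 'away'];

      connectedCallback() {
        // Initial render uses whatever attribute values are already present.
        this.innerHTML =
          `<span id="home">${this.getAttribute('home') ?? '0'}</span> : ` +
          `<span id="away">${this.getAttribute('away') ?? '0'}</span>`;
      }

      attributeChangedCallback(name: string, _old: string | null, value: string | null) {
        // Only update the one span that corresponds to the changed attribute.
        const span = this.querySelector<HTMLSpanElement>('#' + name);
        if (span) span.textContent = value ?? '0';
      }
    }
    customElements.define('score-board', ScoreBoard);

    // Later, changing an attribute updates just that node:
    // document.querySelector('score-board')?.setAttribute('home', '3');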


I've built Solarite, a library that IMHO has made vanilla web components a lot more productive. It does minimal DOM updates when the data changes, plus other nice features like nested styles and passing constructor arguments to sub-components via attributes.

https://github.com/Vorticode/solarite


Genes and species are also sometimes given ridiculous names.


There are no properties of matter or energy that can have a sense of self or experience qualia. Yet we all do. Denying the hard problem of consciousness just slows down our progress in discovering what it is.


We need a point of difference to discover what it is. And how can we know that LLMs don't have it?


If you tediously work out the LLM math by hand, is the pen and paper conscious too?

Consciousness is not computation. You need something else.


This comment here is pure gold. I love it.

On the flip side: If you do that, YOU are conscious and intelligent.

Would it mean that the machine that did the computation became conscious when it did it?

What is consciousness?


The pen and paper are not the actual substrate of entropy reduction, so not really.

Consciousness is what it "feels like" when a part of the universe is engaged in local entropy reduction. You heard it here first, folks!


Even if they do, it could only be transient, during the inference process. Unlike a brain, which is constantly undergoing dynamic electrochemical processes, an LLM is just an inert pile of data except when the model is being executed.


(Hint: I am not denying the hard problem of consciousness ;) )


Yes yes very impressive.

But can it still turn my screen orange?

