Took a while to find on their website, but here's a benchmark vs. AWS Aurora:

https://planetscale.com/benchmarks/aurora

Seems a bit better, but they benchmarked on a fairly small db (500 GB / db.r8g.xlarge)


Fair to say 500 GB is small, especially compared to some of the folks who've already migrated, but do note that it's 15x the RAM on the benchmark machines, so we really were testing the whole database and not just the memory bandwidth of the CPUs.


Lots of non-chatbot uses in property management. Auditing leases vs. payment ledgers. Classifying maintenance work orders. Creating work orders from inspections (photos + text). Scheduling vendors to fix these issues. Etc.


They say "thus, on average, about two-thirds of cats preferred to sleep on the left side of their body with their left shoulder down", and their image for leftward lateral bias shows this. So I guess leftward means "lying on their left side", not "curling left".

But, they suggest this is because "Upon awakening, a leftward sleeping position would provide a fast left visual field view of objects", which seems suspect. When my cats sleep on their left, it's their left eye that's obscured by their paw, and their right eye that has a better field of view!


> Here, I don't think it's even useful to look at this problem in electronic terms

I always thought this problem was a funny choice for the comic, because it's not that esoteric! It's equivalent to asking about a 2D simple random walk on a lattice, which is fairly common. And in general the electrical network <-> random walk correspondence is a useful perspective too.
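
For the curious, here's a quick Monte Carlo sketch of that correspondence (mine, not anything from the comic): in a network of 1-ohm resistors, the effective resistance between nodes a and b equals 1 / (deg(a) * P[a random walk from a hits b before returning to a]). A finite grid stands in for the infinite one, so the estimate is noisy and slightly biased relative to the comic's exact 4/pi - 1/2 ~ 0.773 ohms:

    import random

    N = 21                         # finite N x N grid stands in for the infinite one
    c = N // 2
    a, b = (c, c), (c + 1, c + 2)  # nodes a knight's move apart, as in the comic

    # Precompute each node's neighbors on the grid graph.
    nbrs = {(i, j): [(i + di, j + dj)
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= i + di < N and 0 <= j + dj < N]
            for i in range(N) for j in range(N)}

    def escapes():
        # One walk starting at a; True iff it reaches b before revisiting a.
        pos = random.choice(nbrs[a])
        while pos != a:
            if pos == b:
                return True
            pos = random.choice(nbrs[pos])
        return False

    trials = 20_000
    p = sum(escapes() for _ in range(trials)) / trials
    print(1 / (len(nbrs[a]) * p))  # ~0.77, vs 4/pi - 1/2 on the infinite grid

Takes a few seconds; a bigger grid and more trials get you closer to the infinite-grid answer.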


This seems unusually shallow for The Hedgehog Review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs, therefore nobody can" style of essay -- not to mention the water-use argument! They've published far better writing on LLMs too: see "Language Machinery" from Fall 2023 [1]

[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...


Johnson–Lindenstrauss lemma [1], for anyone curious. But you can only map down to k > 8(ln N)/ε² dimensions if you want to preserve pairwise distances within a factor of 1 ± ε with a JL transform. This is tight up to a constant factor, too.

I always wondered: if we want to preserve distances between a billion points within 10%, that would mean we need ~17k dimensions, and 1% would be ~1.7m. Is there a stronger version of the lemma for points that are well spread out? Or are embeddings really just fine with low precision for the distance?

[1] https://en.wikipedia.org/wiki/Johnson%E2%80%93Lindenstrauss_...
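
Tangentially, a tiny numpy sketch of the lemma in action (mine, not part of the question above): project with a random Gaussian matrix at the k the bound gives, then check the observed distortion over a sample of pairs.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, eps = 1000, 3000, 0.25
    k = int(np.ceil(8 * np.log(n) / eps**2))      # 885 dims for n=1000, eps=0.25

    X = rng.standard_normal((n, d))               # n points in d dimensions
    P = rng.standard_normal((d, k)) / np.sqrt(k)  # random Gaussian JL transform
    Y = X @ P

    # Compare pairwise distances on a random sample of (distinct) pairs.
    i, j = rng.integers(0, n, 5000), rng.integers(0, n, 5000)
    keep = i != j
    orig = np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1)
    proj = np.linalg.norm(Y[i[keep]] - Y[j[keep]], axis=1)
    ratio = proj / orig
    print(ratio.min(), ratio.max())  # well within [1 - eps, 1 + eps]

Running it, the observed distortion sits far below eps: a typical pair's error concentrates around 1/sqrt(2k), and the log N in the bound comes from union-bounding over all ~N²/2 pairs. So part of the answer to my own question may be that only rare pairs get anywhere near the worst case.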


“One could offer so many examples of such categorical prophecies being quickly refuted by experience! In fact, this type of negative prediction is repeated so frequently that one might ask if it is not prompted by the very proximity of the discovery that one solemnly proclaims will never take place. In every period, any important discovery will threaten some organization of knowledge.” René Girard, Things Hidden Since the Foundation of the World, p. 4


Or maybe they do believe this, and entered into trades to express this sentiment. Now, they need the market to correct to what (they believe) is accurate, so they can take profits and free up their capital again.


> These are all special instances of a more general computational problem called the hidden subgroup problem. And quantum computers are good at solving the hidden subgroup problem. They’re really good at it.

I assume they mean the hidden subgroup problem for abelian groups? Later they mention short integer solutions (SIS) and learning with errors (LWE), which by my understanding both rely on the hardness of the shortest vector problem, corresponding to the hidden subgroup problem for certain non-abelian (dihedral) groups. I haven't read into this stuff in a while, though.
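
For concreteness, the abelian HSP instance behind factoring is period finding: the hidden subgroup of Z is rZ, where r is the period of f(x) = a^x mod N. A toy classical sketch (mine, not the article's; the quantum speedup lives entirely in find_period, which is brute force here):

    from math import gcd

    def find_period(a, N):
        # Smallest r > 0 with a^r = 1 (mod N); the hidden subgroup is r*Z.
        # Brute force, exponential in the bit length of N. Shor's algorithm
        # does this step in polynomial time on a quantum computer.
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_via_period(N, a=2):
        if gcd(a, N) != 1:
            return gcd(a, N)      # lucky: a already shares a factor with N
        r = find_period(a, N)
        if r % 2 == 1:
            return None           # odd period: retry with a different a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None           # trivial square root: retry
        return gcd(y - 1, N)

    print(factor_via_period(15))  # 3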


There’s a fun version of this that they have at 7-11 (or 7 & I) there. Probably other convenience stores too. They come in a plastic wrapper that separates the nori from the rice and filling so it doesn’t get soggy. When you pull a little tab, it somehow removes the plastic from in between without messing up the shape. Magic!


The article talks about this at the end. I’m confused about how it works though.


There are two layers of plastic wrap, one on either side of the nori. It works because there's no nori at the sides, and that's the direction you pull the plastic off.


You have to pull it apart in a pretty specific way. Most people will ruin their first onigiri because they do it wrong. But basically you break a seal and then slide the plastic out from each side.


Yeah, there are instructions on them, but they're in Japanese, and in my experience the pictures aren't very clear.

But it's not a big deal if you're going to eat it right away because the nori doesn't get soggy that quickly.


Rest assured, everyone is confused the first time, and no one knows how it works. It's just that there aren't many Japanese people left who haven't learned the trick.


It does ruin the aesthetic of it, though.


Oh OK, it didn't really ruin it for me. It did pull the nori open a little bit, so I had to fold it back in.


You haven't looked at the article, have you?


Right? It's literally the majority of the article... the onigiri wars.

