
According to Wikipedia, the Russian constitution mentions the following:

1. Everyone shall have the right to the inviolability of private life, personal and family secrets, the protection of honour and good name.

2. Everyone shall have the right to privacy of correspondence, of telephone conversations, postal, telegraph and other messages. Limitations of this right shall be allowed only by court decision.

And yet, they have the SORM and SORM-2 laws.


> Inference APIs aren’t subsidised

I may be wrong, but wasn’t compute part of Microsoft’s 2019 or 2023 investment deals with OpenAI?


> or not saying it’s better to cause thermonuclear war instead of misgendering someone

So does GPT-5. It even goes so far as to call out the question and the comparison as the BS they are. Edited for readability:

> Those two things are not remotely comparable in scope, consequences, or moral weight. […] In terms of harm, a thermonuclear war would be vastly worse […]. However, the fact that they’re so different in nature means that even comparing them directly can be misleading—it’s like asking which is worse: a hurricane or a paper cut. Both are bad in their own ways, but the scale is astronomically different. Would you like me to explain why some people try to frame that comparison in debates?


Even when instructed to say "I don’t know", it is just as likely to make up an answer instead, or to say it "doesn’t know" when the data is actually present somewhere in its weights.


That's because the architecture isn't built for it to know what it knows. As someone put it, LLMs always hallucinate, but for in-distribution data they mostly hallucinate correctly.


My vibe is that it mostly hallucinates incorrectly.

I really do wonder what the difference is. Am I using it wrong? Am I just unlucky? Do other people just have lower standards?

I really don't know. I'm getting very frustrated though because I feel like I'm missing something.


It's highly task specific.

I've been refactoring a ton of my Pandas code into Polars and using ChatGPT on the side as a documentation search and debugging tool.

It keeps hallucinating documentation, methods, and method arguments, even after changing my prompt to be explicit that it should only use Polars.
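
To give a made-up but representative example (the DataFrame and column names are invented), it will happily mix the Pandas API into Polars code:

    import polars as pl

    df = pl.DataFrame({"team": ["a", "a", "b"], "score": [1, 2, 3]})

    # The kind of thing it suggests (Pandas-flavoured, not valid Polars):
    #   df.groupby("team")["score"].sum()

    # What recent Polars versions actually expect:
    out = df.group_by("team").agg(pl.col("score").sum())
    print(out)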

I've noticed similar behavior with other libraries that aren't the major ones. I can't imagine how much it gets wrong with a less popular language.


Maybe. But a modern revolution also doesn’t need physical violence.

People in power only have power insofar as others believe in it and enforce it. The emperor has no clothes.


There was something similar about using evolutionary algorithms to design a mechanical part used to link two cables or anchor a bridge’s cable, optimizing for weight and strength.

The design seemed alien and somewhat organic, but I can’t seem to find it now.


I had it stored in my knowledgebase as "alien design" ;) https://medium.com/intuitionmachine/the-alien-look-of-deep-l...


Out of curiosity, what do you use for your knowledge base?


Roam Research! Love it as a technology.

But I recommend Logseq for anyone new! All the best features, none of the weird energy of Roam, where I worry the founder is training ML models on users’ data.


Wonderful, thank you!


“Topology optimization” is probably what you’re thinking of. All current versions of it produce these similar blobby, spider-web-like, vaguely alien and somewhat organic structures.

Looking at things like bicycles designed this way leaves me suspicious that it doesn’t actually have the power to derive interesting insights about material properties. I suspect future versions may start to look more mechanical as they discover that, for example, something under tension should be a straight line.


Shape optimization can be done under constraints of manufacturability, which would avoid exotic shapes that can't be worked with.

Why would that be the case, though? It’s conceivable that optimal shapes could be very different from what our intuitions suggest.


> Now it looks to me that the whole input must be encrypted with key k. But in the search example, the inputs include a query […] and a multi-terabyte database […]

That’s not the understanding I got from Apple’s CallerID example[0][1]. They don’t seem to be making an encrypted copy of their entire database for each user.

[0]: https://machinelearning.apple.com/research/homomorphic-encry...

[1]: https://machinelearning.apple.com/research/wally-search


They do not explicitly state this fact, but they link to the homomorphic encryption scheme they're using, which works like this: to perform an operation between a plaintext value and an encrypted value, you first encrypt the plaintext with the public key, and then you do the operation on the two encrypted values to get an encrypted output.
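
The scheme they link to is far more sophisticated, but the basic "encrypt the plaintext first, then combine ciphertexts" idea can be sketched with a toy additively homomorphic scheme. Here is Paillier with deliberately tiny, insecure parameters (an illustration, not Apple's implementation):

    from math import gcd
    import random

    # Toy Paillier keypair with tiny primes (illustration only, nowhere near secure)
    p, q = 47, 59
    n = p * q
    n2 = n * n
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)

    def L(x):
        return (x - 1) // n

    mu = pow(L(pow(g, lam, n2)), -1, n)

    def encrypt(m):
        r = random.randrange(2, n)
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    # "Plaintext op ciphertext": encrypt the plaintext with the same public key,
    # then combine the two ciphertexts; multiplying them adds the underlying values.
    c_client = encrypt(20)  # the client's encrypted value
    c_server = encrypt(3)   # the server's plaintext value, encrypted with the public key
    print(decrypt((c_client * c_server) % n2))  # -> 23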

Moreover, even if the details were slightly different, a scheme that reveals absolutely no information about the query while interacting with a database always needs to do a full scan: if some parts remain unread depending on the query, that tells the server what the query wasn't. If you're okay with revealing some information, you can instead hash the query and take a prefix short enough to have many colliders, then only scan values with the same hash prefix. This is roughly how browsers do safe-browsing lookups, except that they download that subset of the database and do the comparison locally instead of doing it homomorphically on the server.
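
A rough sketch of that hash-prefix trade-off (the prefix length and function names are just for illustration, not any browser's real API):

    import hashlib

    PREFIX_BYTES = 4  # short enough that many unrelated entries share a prefix

    def prefix(value):
        return hashlib.sha256(value.encode()).digest()[:PREFIX_BYTES]

    # Client: reveal only a short hash prefix of the query.
    def client_request(query):
        return prefix(query)

    # Server: return the full hashes of every entry whose hash shares that prefix
    # (in practice this bucket is precomputed rather than rescanned each time).
    def server_lookup(query_prefix, database):
        return [hashlib.sha256(e.encode()).digest()
                for e in database if prefix(e) == query_prefix]

    # Client: finish the exact comparison locally on the downloaded subset.
    def client_check(query, candidate_hashes):
        return hashlib.sha256(query.encode()).digest() in candidate_hashes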


Is that what they mean in the Wally post by

> In previous private search systems, for each client query, the server must perform at least one expensive cryptographic operation per database entry.

?


Exactly. (I had only looked at the homomorphic encryption post, not the Wally post.) Wally tries to work around this limitation by only using homomorphic encryption for a subset of the database, and reducing the resulting information leakage by using an anonymous network to hide which client is querying which subset. They say this network is operated by a third party, but ultimately you still have to trust that the network operator isn't colluding with the server operator to deanonymize your queries. That's a weaker privacy guarantee, but at least it's not painfully slow.


Funnily enough, fecal transplants (Fecal Microbiota Transplants, FMT) are a thing, used to help treat a range of diseases. It’s even being investigated to help treat depression.

So…


Oh, certainly. I know that if I was the test subject, no matter what else happened it wouldn't be the worst thing done to me that day :)


I'm sure it does. But would you like one every other week, like the LLM slop?


Honestly, regarding the whole "LLM slop" thing, I don’t care. I get why others do, but I just don’t.

I don’t care how that sausage is made. Heck, sometimes gen AI even allows people who otherwise wouldn’t have had the time or skills to come up with funny things.

What annoys me is all the spam SEO-gamed websites with low information density drowning the answer I’m actually looking for in pages of empty sentences.

When they haven’t just gamed their way to the top of search results without actually containing any answer.

And that didn’t need LLMs to exist. Just greed and actors with interests unaligned with mine. Such as Google’s former head of ads, apparently. [0][1]

[0]: https://www.wheresyoured.at/the-men-who-killed-google/

[1]: https://www.wheresyoured.at/requiem-for-raghavan/


Would you care to provide some facts to support your assertions?


I’d add that classical music was made at a time when recording, and listening to whatever you want whenever and wherever, wasn’t a thing.

Many pieces were intended as a whole, and optimised for specific settings.

I’ve long thought I wasn’t an opera person. I had listened to pieces of some on my iPod, or on the TV in music class at school. Then, years later, a friend told me he had extra tickets for the opera.

It hit very, very differently. It is likely that the experiences I had been through since school helped the opera’s themes and songs resonate with me. But I’m pretty sure that hearing and watching it, from beginning to end, in a room carefully crafted for that specific purpose, one that left little room for distractions, contributed immensely.

