Basically, the story goes that the good brahmin, for all his wealth and intelligence, is miserable, whereas the stupid beggar down the street is very happy. While the brahmin accepts that the beggar is objectively happier than him, he would never swap places with her.
It made me realise that the quest for intelligence is fundamentally different from the quest for happiness, and even to this day I take the story into consideration when making life choices. I do not believe that intelligence forbids happiness, simply that if you spend too much time trying to be right, you don't spend enough trying to be happy. Of course trying to be right can make you happy, but in general you always need to remember to take a step back.
You can defend a lot of atrocities by arguing "for the greater good" and comparing against uchronic hypotheticals. I could just as well argue that without Rome, the Greek democracies would have been much more prevalent and led to modern democracies much sooner. Or that a world leader would have emerged, leading the ancient world to endless peace and prosperity.
* "Rubber bouncy at Heathrow removal" on Google had 3 links, including the one about SFO from which ChatGPT took a tangent. While ChatGPT provided evidence for the latest removal date being 2024, none was provided for the lower bound. I saw no date online either. Was this a hallucination?
* A reverse image lookup of the building gave me the blog entry, but also an Alamy picture of the Blade (admittedly this result may have been biased by the fact that the author had already identified the building as the Blade)
* The Starbucks pop Google search led me to https://starbuckmenu.uk/starbucks-cake-pop-prices/. I will add that the author bitching to ChatGPT about ChatGPT's hidden prompts in the transcript is hilarious.
I get why people prefer ChatGPT. It will do all the boring work of curating the internet for you, to provide you with a single answer. It will also hallucinate every now and then, but that seems to be a price people are willing to pay and ignore, just like the added cost compared to a single Google search. Now I am not sure how this will evolve.
Back in the day, people would tell you to be wary of the Internet and that Wikipedia thing, and that you could get all the info you need from a much more reliable source at the library anyway, for a fraction of the cost. I guess that if LLMs continue to evolve, we will face the same paradigm shift.
Sooo... Should we ban Google too? It also orders its search results with algorithms. Similarly, HN and Reddit order the contents of their front pages with some algorithm, and in the case of Google and Reddit, the algorithm is personalized to the user's preferences.
Or do we only ban websites that design their algorithms to trigger strong emotional responses? How do you define that? Even Musk doesn't go around saying that the algorithm is modified to promote the alt-right; instead he pretends it is all about "bringing balance back". Furthermore, I would argue that systems based on votes, such as Reddit or HN, are much more likely than other systems to push such content. We could issue a regulation banning specific platforms or websites (TikTok, X...) by naming them individually, but that would probably go against many rules of free competition, and would be quite easily circumvented.
Not that I disagree on the effect of social media on society, but regulating this is not as easy as "let's ban the algorithm".
Erm, FB itself admitted it conducted research on emotional responses to the content it shows.
FB/X's modus operandi is to keep as many people glued to the screen for as long as possible. The most triggering content will awaken all those "keyboard warriors" to fight.
So instead of seeing your friends and the people you follow, you would mostly see something that affects you one way or another (hence the proliferation of more and more extreme stuff).
Google is going downhill for different reasons - they also care only about the investors' bottom line, but being the biggest ad provider, they don't care all that much whether people spend time on the google.com page or not.
Oh, I know that strong emotions increase engagement, outrage being a prime candidate. I also have no issue believing that FB/TikTok/X etc. aggressively engage in such tactics, e.g. [0]. But I am not aware of FB publicly acknowledging that they deliberately tune the algorithm to this effect, even though they carried out some research on the effects of emotions on engagement (I would love to be proven wrong though).
But supposing FB did publicly say they manipulate their users' emotions for engagement, and a law is passed preventing that. How do you assess that the new FB algorithm is not manipulating emotions for engagement? How do you enforce your law? If you are not allowed to create outrage, are you allowed to promote posts that expose politicians' corruption? Where is the limit?
Once again, I hate these algorithms. But we cannot regulate by saying "stop being evil", we need specific metrics, targets, objectives. A law too broad will ban Google as much as Facebook, and a law too narrow can be circumvented in many ways.
> But we cannot regulate by saying "stop being evil", we need specific metrics, targets, objectives.
Ban any kind of provider-defined feed that is not chronological, or that includes content from users the user does not follow, with an exception for clearly-marked advertising. Easy to write as a law, even easier to verify compliance.
Cookie consent popovers were a deliberate decision by companies to implement the worst possible form of compliance. A much simpler one would have been to stop tracking users, especially when it is not their primary business.
Newer regulations also mandate that "reject all cookies" be a one-click action, but surprisingly compliance is low. Once again, the enemy of the customer here is the company, not the EU regulation.
I don’t believe that every website has colluded to give itself a horrible user experience in some kind of mass protest against the GDPR. My guess is that companies are acting in their own interests, which is exactly what I expect them to do, and if the EU is not capable of figuring out what that will look like, then that is a valid criticism of its ability to make regulations.
What makes you think the regulators didn't predict the outcome?
Of course the businesses which depend on harvesting data will do anything they can to continue harvesting data. The regulation just makes that require consent. This is good.
If businesses are intent on continuing to harvest data by using dark patterns to obtain "consent", these businesses should either die or change. This is good.
Websites use ready-to-use cookie banners provided by their advertisers, who have every incentive to make the process as painful as possible unless you click "accept", and essentially followed the model that Facebook pioneered.
And since most people click on accept, websites don't really care either.
> As I understand it right now, HME is a weaker form of encryption, but perhaps still strong enough to be a worthwhile tradeoff for the use cases being discussed.
Exactly. Homomorphism was first seen as a weakness in encryption, since it implies malleability. For instance, in one-time-pad encryption, where you XOR your message with the secret key, flipping a bit in the ciphertext will result in the same bit being flipped in the decryption. The attacker does not know what the end result is, but knows that the bit has been flipped; hence OTP encryption is malleable. This is enough for some attacks. FHE has a bit of the same: from Enc(a) and Enc(b) it is easy to create Enc(a+b), hence it is malleable too.
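To make the malleability concrete, here is a toy sketch (illustrative only, not real crypto) of the OTP bit-flip attack described above:

```python
# Toy demonstration: one-time-pad malleability.
# Flipping a bit in the ciphertext flips the exact same bit in the decryption,
# even though the attacker never learns the key or the plaintext.
import os

def otp(message: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(m ^ k for m, k in zip(message, key))

key = os.urandom(4)
plaintext = b"1000"                       # e.g. an amount to transfer
ciphertext = otp(plaintext, key)

# Attacker flips the low bit of the first byte without knowing the key...
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]

# ...and the decryption has the same bit flipped: '1' (0x31) becomes '0' (0x30).
decrypted = otp(tampered, key)
print(decrypted)  # b"0000"
```

The attacker cannot read the message, but can still change it in a predictable way, which is exactly what non-malleability notions rule out.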
Cryptography uses several security levels. The top one for encryption is NM-CCA2 (non-malleability under chosen ciphertext attack). For instance, RSA-OAEP is NM-CCA2 secure. Since FHE schemes are malleable, they are not NM-CCA2 secure. However, a slightly lower security notion is IND-CPA (indistinguishability under chosen plaintext attack). FHE schemes are IND-CPA secure. Furthermore, IND-CPA security is shown to be equivalent to semantic security, which means that given a ciphertext the attacker cannot know any bit of information about the underlying cleartext.
Hence, FHE schemes guarantee that for all the ciphertexts they receive, the attacker cannot learn anything about the underlying cleartexts. You can run a ton of operations on the ciphertexts, let's say run a homomorphic LLM, and the attacker will still have no idea what the final output is. Hence, in the model where the attacker has full control over the LLM and behaves honestly but tries to learn your secrets, you are fine. However, in the model where an attacker runs a MITM and just wants to disrupt the numbers you get back from the LLM, you are not fine, since this encryption is malleable (in theory we could add some verifiable execution proofs, but that is another topic).
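The Enc(a), Enc(b) → Enc(a+b) property can be shown with a toy additively homomorphic scheme. This sketch uses Paillier (which is additively homomorphic, not fully homomorphic) with deliberately tiny, insecure parameters, just to illustrate why malleability is inherent to homomorphism:

```python
# Toy Paillier encryption: multiplying two ciphertexts yields a ciphertext
# of the SUM of the plaintexts -- homomorphic, and therefore malleable.
import math
import random

p, q = 61, 53                      # toy primes; real deployments use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1                          # standard choice of generator
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 17
c = (encrypt(a) * encrypt(b)) % n2    # homomorphic addition on ciphertexts
print(decrypt(c))                     # 59
```

Anyone holding Enc(a) can produce a valid encryption of a+17 without the key, which is precisely the malleability that rules out NM-CCA2 security while leaving IND-CPA intact.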
I mean it seems clear that Meta did not pirate the content to watch/read it. However, I guess according to the ruling you could pirate anything you want (but no seeding), produce a shitty haiku based on what you pirated and then claim fair use.
As the article mentions, fully homomorphic encryption is insanely slow and inefficient. But I have to say that it is a relatively new field (the first FHE scheme was discovered in 2009), and that the field has immensely progressed over the last decade and a half.
The first FHE scheme required keys of several TB/PB, and bootstrapping (an operation that is pivotal in FHE schemes, needed once too many multiplications have been computed) would take thousands of hours. We are now down to keys of "only" 30 MB, and bootstrapping in less than 0.1 seconds.
Hopefully progress will continue and FHE will become more practical.
The first CRDTs were remarkably impractical, e.g. WOOT[0]. These days, state-of-the-art CRDT databases are not much different from your regular LSM performance-wise. For example, RDX CRDTs[1,2] are all implemented by a merge-sort-like algorithm, pure O(N). Metadata overheads have been tamed in most implementations.
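For readers unfamiliar with why CRDT merges can be this cheap, here is a minimal state-based CRDT sketch (a grow-only counter; the replica names are illustrative). Merge is a single pointwise pass, and because it is commutative, associative, and idempotent, replicas converge regardless of delivery order:

```python
# Minimal state-based CRDT: a G-Counter.
# Each replica tracks its own increment count; merge takes the pointwise max.

def merge(a: dict, b: dict) -> dict:
    # Commutative, associative, idempotent join of two counter states.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state: dict) -> int:
    return sum(state.values())

# Two replicas increment independently while disconnected...
r1 = {"r1": 3}
r2 = {"r2": 5}

# ...then exchange states in either order and converge to the same result.
r1_after = merge(r1, r2)
r2_after = merge(r2, r1)
assert r1_after == r2_after
print(value(r1_after))  # 8
```

The whole merge is one linear scan over the state, which is the same flavor of argument behind the merge-sort-like O(N) claim above.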
Do you have benchmarks at scale, ideally compared to other store / DB implementations? I’ve seen other CRDT libraries using Postgres (inadvisedly) bring it to its knees due to the massive amount of chattiness and updates.
Should students trust and run FHE-encrypted WASM or JS grading code that contains the answers on their own Chromebooks, for example with JupyterLite and ottergrader?
Code signing is important when allowing the use of computational resources for [FHE] encryption, because there is no good way to trace execution of such obfuscated code.
CRDTs are also crazy slow due to their architecture; even the best algorithms out there are costly by design, so adding homomorphic encryption is even more of a challenge. Though it really is impressive, I'm curious if this can be usable at all.
Edit: to bring some "proof" of my claim, from this very page: `To calculate the new map, the server must go through and merge every single key. After that, it needs to transfer the full map to each peer — because remember, as far as it knows, the entire map is different.`
(And even these optimizations are nascent. It can still get so much better.)
The section you quoted describes an effect of homomorphic encryption alone.
There is the problem that both CRDTs and encryption add some overhead, and the overhead is additive when used together. But I can’t tell if that is the point you are trying to make.
Yep. Author here - that article is out of date now. I should really do a followup. Performance of CRDTs has improved again through a new grab bag of tricks. I’ve also been told the beta of automerge 3 uses a lot of the optimisations in that post, and it’s now much faster as a result.
A crdt library should be able to handle millions of changes per second. If it’s the bottleneck, something somewhere has gone wrong.
The overhead is usually multiplicative per-item. Let's say you're doing N things. CRDTs make that O(Nk) for some scaling factor k, and adding encryption makes it O(Nkj) for some scaling factor j.
Give or take some multiplicative log (or worse) factors depending on the implementation.
> CRDTs are also crazy slow due to their architecture ;
You must back up your extraordinary claim with some extraordinary evidence. There is nothing inherently slow in CRDTs.
Also, applying changes is hardly on anyone's hot path.
The only instance where I saw anyone complaining about CRDT performance, it turned out to come from very naive implementations that spammed changes over an overly chatty protocol. If you come up with code that requires a full HTTPS connection to send a single character down the wire, the problem is not the algorithm.
Is it the CRDT that's slow there, or is the problem that they've made it one party's job to update everybody?
By having a server in the mix it feels like we're forcing a hub/spoke model on something that wants to be a partial mesh. Not surprising that the hub is stressed out.
The whole point of Conflict-free Replicated Data Types is that you don't need an authoritative server. You're thinking of Operational Transform which does require an authority.
While it is true that CRDTs don't require an authoritative server, the hub and spoke model (which could also be thought of as having a well-known always-online super peer) is more efficient and provides a better user experience. In practice most products that are built with CRDTs today use this model.
> CRDTs are also crazy slow due to their architecture
What kinds of CRDTs specifically are you referring to? On its own this statement sounds far too broad to be meaningful. It's like saying "nested for loops are crazy slow".
The hegemony of software only accepting . has de facto pushed the standard everywhere for computers, but here in France I still write with a comma, but type with a dot.
A few years ago Excel and some other software started to be locale-dependent, and I never wanted to burn my computer this much.
French dev currently working for a French but global client here. The UI of the timesheet app is in English but the fields only accept `,` as the decimal point. It's so needlessly confusing.
That's one of the great boons of localization. The webapp knows you're in France, so it tries to do the right thing, while giving you a US English UI. I experience the same thing, but got used to it somehow.
Another good example is how "İ" is popping up everywhere, even in English, because of misconfigured locale settings and how changing case is affected by it. We (Turks) are responsible for that, sorry (We have ı,i,I,İ =D ).
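As a concrete illustration of how that casing quirk leaks into text (sketched in Python, using the default Unicode case mapping rather than the Turkish-locale one):

```python
# Unicode case mapping of the Turkish dotted capital I (U+0130).
# Its default (non-Turkish) lowercase is TWO code points: 'i' plus a combining
# dot above -- which is one way stray dots and odd "İ" artifacts end up in
# English text when locale handling goes wrong.
s = "İ"
lowered = s.lower()
print([hex(ord(ch)) for ch in lowered])  # ['0x69', '0x307']
assert len(lowered) == 2                 # 'i' followed by U+0307
```

Under a Turkish locale the expected lowercase would instead be plain dotless "ı", which is why round-tripping case through the wrong locale mangles identifiers and display strings.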