> jQuery already had a feature that rendered the shadow DOM unnecessary, but it required discipline that most developers neither had nor understood.
It's the ability jQuery gave us to scope a CSS selector to a particular node; if you know POSIX, it's similar to the "at" family of filesystem functions (openat and friends). With child selectors, classes, IDs, and jQuery's scoped queries, you could already build self-contained components, HTML custom elements if you like, without any need for shadow DOM. If you teach people to write well-defined CSS, they argue that the CSS is over-qualified and similar nonsense. Then the industry turns around and invents the shadow DOM. Any fool can come up with a complicated solution; it takes the best minds to come up with the simplest one. And simplicity is not easy.
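Roughly, the idea is to route every lookup through the component's root node. A minimal sketch (the element names and classes here are made up for illustration, and it assumes jQuery is loaded as a module):

    // a self-contained "component": every query is scoped to its root node,
    // so ".name" inside it never collides with a ".name" elsewhere on the page
    import $ from "jquery";

    function userCard(root: HTMLElement, name: string): void {
      const $root = $(root).addClass("user-card");
      $root.find(".name").text(name);          // scoped lookup
      // equivalent scoping via the context argument: $(".name", root)
      $root.on("click", ".toggle", () => {
        $root.find(".details").toggle();       // still scoped to this instance
      });
    }

As long as nothing inside the component reaches outside its root (and the stylesheet qualifies its rules the same way), you get the encapsulation people use shadow DOM for, purely by convention.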
Take a careful look at http://eyeandtea.com/crxcmp and see how the need for shadow DOM is completely absent.
And this simple thing is just one of the strokes of genius jQuery gave us.
The relevant part of their quote isn't "more people", it's "more people benefiting". OpenAI does not care in the slightest whether people benefit. They want more people to use their product, yes, but they don't care whether those people benefit.
if your neighborhood gets denser, you will see the benefits
if you want to live there, you can pick from more options
developers capture the value, but the buildings are still there
obviously the usual problem is that land value goes up, and thus the rent goes up too (because suddenly the neighborhood becomes more desirable, which is itself a sign of benefits for those who already live there)
... yet still tens of millions of eligible voters don't even bother
the country is very low-density, there's no one obvious point to protest (there was Occupy Wall Street... and then the Seattle TAZ... and that's it, oh and the Capitol on January 6th), strikes and unions are legally neutered, it's just not the American way anymore
the country has a lot of experience "managing" internal unpleasantness: see the time leading up to the Civil War, then Reconstruction, then a lull as innovation in racism led to legalized economic racism (the usual walking-while-black "crimes", vagrancy laws, etc.), then the civil rights era with its riots, and since then (and as always) police brutality is used as a substitute for training and funding
I think a general strike might work for low-density places, though it requires enough people taking part to be truly effective. That way you don't need an obvious place to protest apart from your workplace, and it'd be a non-violent protest that would definitely get the attention of the wealthy.
yes and no; as the sibling comment mentions, sometimes a message bus is used (Kafka, for example), but Netflix is (was?) all-in on HTTP (low-latency gRPC, HTTP/3, wrapped in nice type-safe SDK packages)
but ideally you don't break the glass and reach for a microservices architecture if you don't need the scalability afforded by very deep decoupling
which means ideally you have separate databases (and separate DB schemas, likely even different kinds of data stores), and through the magic of minimally overlapping "bounded contexts" you don't need to send much data around (the client SDK picks out just what it needs, for example)
... of course serving a content recommendation request for a Netflix user (which results in a cascade of requests to various microservices, e.g. profile, rights management data, CDN availability, plus metadata for the results, image URLs, etc.) doesn't need durability, so no Kafka (or other message bus), but when the user changes their profile that might be something that gets broadcast
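To illustrate the fan-out, a sketch only; the service names and client methods below are hypothetical, not Netflix's actual SDKs:

    // hypothetical typed clients for the services a recommendation request fans out to
    interface ProfileClient  { getProfile(userId: string): Promise<{ locale: string }>; }
    interface RightsClient   { getEntitlements(userId: string): Promise<string[]>; }
    interface CdnClient      { getAvailability(titleIds: string[]): Promise<Record<string, boolean>>; }
    interface MetadataClient { getTitles(titleIds: string[]): Promise<{ id: string; imageUrl: string }[]>; }

    async function recommend(
      userId: string,
      candidateIds: string[],
      c: { profile: ProfileClient; rights: RightsClient; cdn: CdnClient; meta: MetadataClient },
    ) {
      // plain request/response over HTTP (gRPC, HTTP/3, ...), fired in parallel;
      // nothing here needs to be durable, a failed request is just retried or dropped
      const [profile, entitlements, availability, titles] = await Promise.all([
        c.profile.getProfile(userId),
        c.rights.getEntitlements(userId),
        c.cdn.getAvailability(candidateIds),
        c.meta.getTitles(candidateIds),
      ]);
      return titles
        .filter(t => entitlements.includes(t.id) && availability[t.id])
        .map(t => ({ ...t, locale: profile.locale }));
    }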
(and durable "replayable" queues help, because then services can be put into read-only mode to serve traffic while new instances are starting up, and those will catch up. And of course it's useful for debugging too, at least compared to HTTP logs, which usually don't have the body/payload logged.)
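For the catch-up part, a sketch with the kafkajs client, assuming Kafka is the bus; the topic, group, and broker names are made up:

    // a freshly started instance rebuilds its local state by replaying the
    // topic from the beginning, then the service can leave read-only mode
    import { Kafka } from "kafkajs";

    const kafka = new Kafka({ clientId: "profile-view", brokers: ["kafka:9092"] });
    const consumer = kafka.consumer({ groupId: "profile-view-replay" });

    async function catchUp(apply: (event: unknown) => void): Promise<void> {
      await consumer.connect();
      await consumer.subscribe({ topics: ["profile-events"], fromBeginning: true });
      await consumer.run({
        eachMessage: async ({ message }) => {
          if (message.value) apply(JSON.parse(message.value.toString()));
        },
      });
    }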
...well, that's good for scaling the queue, but it means the worker needs to load all the relevant state/context from some DB (which might be sped up with a cache, but then things get really complex)
ideally you pass along the context required for the job (let's say it's less than 100 kB); I don't think that counts as a large JSON, but the request rate (load) can make even 512 bytes too much, so "it depends"
but in general, passing large JSONs around over the network or in memory is not really slow compared to writing them to a DB (WAL + fsync + MVCC management)
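As a sketch of the trade-off; the queue interface and job shapes here are hypothetical:

    // option A: the job carries the context it needs (well under ~100 kB of JSON),
    // so the worker doesn't have to hit a DB before it can start
    interface RenderInvoiceJob {
      kind: "render-invoice";
      invoice: { id: string; lines: { sku: string; qty: number; price: number }[] };
      customer: { name: string; email: string };
    }

    // option B: the job carries only an ID; every worker does a DB (or cache)
    // round trip per job, which is where the WAL + fsync + MVCC cost lives
    interface RenderInvoiceJobByRef {
      kind: "render-invoice";
      invoiceId: string;
    }

    // hypothetical enqueue function standing in for whatever queue client is used
    declare function enqueue(queue: string, payload: unknown): Promise<void>;

    async function example(): Promise<void> {
      const job: RenderInvoiceJob = {
        kind: "render-invoice",
        invoice: { id: "inv-1", lines: [{ sku: "a", qty: 2, price: 9.5 }] },
        customer: { name: "Ada", email: "ada@example.com" },
      };
      await enqueue("render", job); // serializing the payload is the cheap part
    }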
whatever runs on typical investor/C-suite laptops and phones (so a new iPhone/MacBook with "stock" Safari, maybe some cursed corporate Windows setup with Chrome) is okay, and obviously they need to max out the glitter; it's the 2020s
There is a hypothesis that dealers disincentivized salespeople from selling EVs because of lower expected service-department revenue down the road. I work close enough to the industry to get a whiff of that, and I've never heard anything more than speculation.
Could you explain this please?