I'm really skeptical of the idea that the blame for the lack of water infrastructure ought to be laid at the feet of the water companies. The UK's planning system has strangled just about every infrastructure project in every domain. There is a general trend of local residents preventing infrastructure from being built in their area, whether it be for water, energy, rail, or roads.
Vetocracy and NIMBYism are ensuring the country barely shambles on until the boomers croak. No point in putting up with construction and paying for the investments if the current infra is just barely good enough to last until the average voter shuffles off this mortal coil. When the older generations vote reliably and young people are apathetic, you get the current situation.
100%. If we could get a DOMString8 (8-bit encoded) interface in addition to the existing DOMString (16-bit encoded), plus a way to wrap a buffer in a DOMString8, we could have convenient and reasonably performant interfaces between WASM and the DOM.
The current situation is that we have limited uptake of WASM. This is due, in part, to lack of DOM access. We could solve that but we would have to complicate WASM or complicate the DOM. Complicating WASM would seem to undermine its purpose, burdening it forever with the complexity of the browser. The DOM, on the other hand, is already quite complex. But providing a fresh interface to the DOM would make it possible to bypass some of the accretions of time and complexity. The majority of the cost would be to browser implementors as opposed to web developers.
At least some of the implementation complexity is already there under the hood. WebKit/Blink have an optimization to use 8-bit characters for strings that consist only of latin1 characters.
I want DOM access from WASM, but I don't want WASM to have to rely on UTF-16 to do it (DOMString is a 16-bit encoding). We already have the js-string-builtins proposal, which ties WASM a little closer to 16-bit string encodings, and I'd rather not see any more moves in that direction. So I'd prefer to see an additional DOM interface of DOMString8 (8-bit encoding) before providing WASM access to DOM APIs. But I suspect the interest in that development is low.
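For illustration, this is roughly the round trip a WASM module has to make today to hand a UTF-8 string to a DOM API and read one back (a minimal sketch using the standard TextDecoder/TextEncoder APIs; the function names, the choice of document.title, and the memory layout are made up for the example):

```typescript
// Sketch of the UTF-8 <-> UTF-16 round trip being discussed. The WASM module
// keeps UTF-8 bytes in linear memory; DOMString on the JS side is 16-bit.
const decoder = new TextDecoder("utf-8");
const encoder = new TextEncoder();

// Hypothetical glue: read a UTF-8 string out of linear memory and hand it to
// a DOM API, which forces a decode into a UTF-16 JS string.
function setTitleFromWasm(memory: WebAssembly.Memory, ptr: number, len: number): void {
  const bytes = new Uint8Array(memory.buffer, ptr, len);
  document.title = decoder.decode(bytes);
}

// Hypothetical glue in the other direction: re-encode the JS string as UTF-8
// and copy it back into linear memory (capacity checks omitted for brevity).
function readTitleIntoWasm(memory: WebAssembly.Memory, ptr: number): number {
  const bytes = encoder.encode(document.title);
  new Uint8Array(memory.buffer, ptr, bytes.length).set(bytes);
  return bytes.length;
}
```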
Tbh I would be surprised if converting between UTF-8 and JS strings is the performance bottleneck when calling into JS code snippets which manipulate the DOM.
In any case, I would probably define a system which doesn't simply map the DOM API (objects and properties) into a granular set of functions on the WASM side (e.g. granular setters and getters for each DOM object property).
Instead I'd move one level up and build a UI framework where the DOM is abstracted away (quite similar to all those JS frameworks), and where most of the actual DOM work happens in sufficiently "juicy" JS functions (e.g. not just one line of code to set a property).
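Purely as a hypothetical sketch of what I mean (none of these names are an existing API): the WASM side asks for one meaningful UI operation per call, and the JS side does all the DOM work for it.

```typescript
// Hypothetical coarse-grained helper: one boundary crossing does a "juicy"
// amount of DOM work, instead of many per-property setter calls.
interface RowData {
  label: string;
  value: string;
}

function renderRows(containerId: string, rows: RowData[]): void {
  const container = document.getElementById(containerId);
  if (!container) return;
  const fragment = document.createDocumentFragment();
  for (const row of rows) {
    const div = document.createElement("div");
    div.className = "row";
    div.textContent = `${row.label}: ${row.value}`;
    fragment.appendChild(div);
  }
  // One DOM mutation for the whole batch.
  container.replaceChildren(fragment);
}

// The WASM module would import something like this, rather than granular
// setters such as setTextContent(node, text) for each property.
const wasmImports = {
  ui: {
    renderRows: (ptr: number, len: number) => {
      // decode the row data out of linear memory, then call renderRows(...)
    },
  },
};
```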
Advantages:
- the styling is colocated with the markup
- sensible defaults
- avoids rule hierarchy/inheritance
- minimal JS at runtime
Disadvantages:
- build step and configuration
- dynamic styling complexity
I don't think that's a bad tradeoff. And we're talking about styling on the web here, so there are no good solutions. But there is a bad solution, and it's CSS-in-JS.
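To make the tradeoff concrete, here is a rough side-by-side (assuming a Tailwind-style utility setup for the build-step approach and styled-components as the CSS-in-JS stand-in; the thread doesn't name either tool):

```typescript
import * as React from "react";
import styled from "styled-components";

// Build-step approach: styling is colocated with the markup as utility
// classes that compile to plain CSS, leaving minimal JS at runtime.
export function SaveButtonUtility() {
  return <button className="px-3 py-2 rounded bg-blue-600 text-white">Save</button>;
}

// CSS-in-JS approach: styles live in JS and are resolved at runtime,
// which is the cost the comment above objects to.
const StyledSaveButton = styled.button`
  padding: 0.5rem 0.75rem;
  border-radius: 0.25rem;
  background: #2563eb;
  color: white;
`;

export function SaveButtonCssInJs() {
  return <StyledSaveButton>Save</StyledSaveButton>;
}
```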
I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.
Yes, nobody is claiming it's inevitable based on nothing; it's based on first-principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.
It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.
It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on.
From that point of view, sure, nothing new under the sun.
It's all well and good to complain about the boy who cried wolf, but when you see the pack of wolves entering the village, it's no longer just about words.
Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied, prestigious thinkers who think they don't have to stoop so low as to actually look at the models and how people use them in reality; it can all just be dismissed based on ick factors and name-calling like "slop".
Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients that point in a certain direction, and that we cannot move out of that groove without massive and fragile coordination, for game-theoretic reasons, given the material state of the world right now, out there beyond the page of the "text".
I think you’re missing the point of the blog post and the point of my grandparent comment, which is that there is a pervasive attitude amongst technologists that “it’s just gonna happen anyway and therefore whether I work on something negative for the world or not makes no difference, and therefore I have no role as an ethical agent.” It’s a way to avoid responsibility and freedom.
We are not discussing the likelihood of some particular scenario based on models and numbers and statistics and predictions by Very Smart Important People.
I agree that "very likely" is not "inevitable". It is possible for the advance of AI to stop, but difficult. I agree that doesn't absolve people of responsibility for what they do. But I disagree with the comparison to religious predestination.
I'm not sure how common that is... I'd guess most who work on it think that there's a positive future with LLMs also. I mean they likely wouldn't say "I work on something negative for the world".
I think the vast majority of people are there because it’s interesting work and they’re being paid exceptionally well. That’s the extent to which 95/100 of employees engage with the ethics of their work.
You are, of course, entitled to your religious convictions. But to most people outside of your religious community, the evidence for some specific theological claim (such as predestination) looks an awful lot like "nothing". In contrast, claims about the trajectory of AI (whether you agree with the claims or not) are based on easily-verifiable, public knowledge about the recent history of AI development.
It is not a "specific theological claim" either, rather a school of theological discourse. You're literally doing free-form association now and pretending to have novel insights into centuries of work on the issue.
I'm not pretending to any novel insights. Most of us who don't have much use for theology are generally unimpressed by its discourse. Not novel at all. And the "centuries of work" without concrete developments that exist outside of the minds of those invested in the discourse is one reason why many of us are unimpressed. In contrast, AI development is resulting in concrete changes that are easily verified by anyone and on much shorter time scales.
Relatedly, it would be bordering on impossible to convince Iran about the validity of Augustine, Aquinas or Calvin, but it was fairly easy with nuclear physics. Theology isn't "based on nothing", but the convincing power of the quantum physics books happens to be radically different from Summa Theologiae, even if both are just books written by educated people based on a lot of thought and prior work.
What's the current regulatory status for mass timber? My understanding was that one of the main hurdles for uptake in the US has been regulation. Is that no longer the case?
Our lives are already nearly fully dependent on technology. Some of the projections for casualties in the months following a high-altitude EMP (or solar flare) are pretty staggering. Just losing computers means that most people die of starvation within a few months as global supply chains completely collapse.
And you're being unnecessarily adversarial. The comment you're replying to didn't say anything about disregarding the well-being of life on Earth. Interpreting it that way is uncharitable.
The analogies to previous technologies always seem misguided to me. Maybe they allow us to make some predictions about the next few years, but not more than that. We do not know when or where we will hit the limits on AI capabilities. AI is intentionally being developed to be able to make decisions in any domain humans work in. That makes it unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than Homo sapiens that could carry on an interesting conversation with Homo sapiens? 40,000 years ago? And this new thing has been in development for what, 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is. Analogies to industrial agriculture (a very big deal, historically) and other technologies completely miss the scope of what's happening.
Let me give my two cents. I remember when people used to think AI models were all the rage and that one day we were going to get superintelligence.
I am not sure if we can call the current SOTA models that. Maybe, maybe not. But it's a little disappointing.
Now everyone's saying that AI agents are the hype and that's where the productivity gains are; the recently released Darwin Gödel paper, for example.
On the same day (yesterday), the HN front page had an AI blog post from fly.io, and the top comment was worried about AI excelling, saying that as devs we should do something, because he was concerned about what happens if the companies actually reach the intelligence they're hyping.
On the same day, builder.ai's "AI" turned out to actually be Indian engineers.
The companies are most likely giving us hype because we are giving them valuations. The hype seems not worth it. Everyone's saying that all the models are really good and now all that matters is vibes.
So from all of this, here is what I've taken away.
Trust no one. Or at least don't take the claims of AI hype companies at face value. I genuinely believe that AI is going to reach a plateau of sorts at a moment like this one, and as someone who tinkers with it, I am genuinely happy with it at its current scale. I kind of don't want it to grow more, I guess, and I kind of think a plateau might come soon. But maybe not.
I don't think it's analogous to species, but maybe that's me being optimistic about the future. I genuinely don't want to think about it too much, as it stresses my brain and makes even my present... well, not a present (gift).
LLMs have only really been around for a handful of years and what they are capable of is shocking. Maybe LLMs hit a wall and plateau. Maybe it's a few years before there's another breakthrough that results in another step change in capabilities. Maybe not. We can focus on the hype and the fraud and the marketing and all the nonsense, but that's missing the forest for the trees.
We genuinely have seen a shocking increase in reasoning abilities over the course of only a decade from things that aren't human. There may be bumps in the road, but we have very little idea how long this trajectory of capability increases will continue. I don't see any reason to think humans are near the ceiling of what is possible. We are in uncharted territory.
I may be wrong, I usually am, but wasn't AI basically possible even in the 1970s? Back then there were of course no GPUs, and basically AlexNet showed that GPUs are really effective for AI, and that is what got the AI snowball rolling.
I am not sure, but in my opinion a hardware limitation might be real. These models are training on 100k GPUs and something like the whole totality of the internet. I am not sure, but I wouldn't be too certain about AI.
Also, maybe I am biased. Is it wrong that I want AI to just stay here, at the point it is right now? It's genuinely good, but anything more feels to me as if it might be terrifying (if the AI companies' hype genuinely comes true).
I've got no dog in this hunt at all; the idea that any given AI company could be a house of cards is not only plausible but is the bet I would place every time. But the whole "builder.ai is all Indians" thing is something 'dang spent half an hour ruefully looking into yesterday, and it turned out not to be substantiated.
I am not sure, but I read the HN post a little and didn't see that part, I suppose.
But even then, people were defending it, saying "so what, they never said they weren't doing it" or something. So I of course assumed that people were defending what's true.
Maybe not, but such a rumour was quite a funny one to hear as an Indian myself.
While robotics are still relatively immature, I would think of AI as something akin to a remote worker.
Anything a human remote worker can do, a superhuman remote worker will be able to do better, faster, and for a fraction of the cost; this includes work that humans currently do in offices but that could theoretically be done remotely.
We should therefore assume that if (when) AI broadly surpasses the capabilities of a human remote worker, it will no longer make economic sense to hire humans for these roles.
If we assume this, then what is the human's role in the labour market? It won't be their physical abilities (the industrial revolution replaced humans there), and it won't be their reasoning abilities (AI will soon replace humans there), but in jobs that require both physical dexterity and human-level reasoning, humans might still retain an edge. Perhaps, at least for now, we can assume jobs like roofing, plumbing, and gardening will continue to exist, while jobs like coding, graphic design, and copywriting will almost certainly be replaced.
I think the only long-term question at the moment is how long it will take for robotics to catch up and provide something akin to human-level dexterity paired with superhuman intelligence. At that point I'm not sure why anyone would hire a human except for the novelty of it, perhaps like the novelty of riding a horse into town.
AI is so obviously not like other technologies. Past technologies effectively just found ways to automate low-intelligence tasks and augment human strength via machinery. Advanced robotics and AI are fundamentally different in their ability to cut into human labour, and combined, it's hard to see what edge a human labourer retains.
But either way, even if you subscribe to the notion that AI will not take all human jobs, it seems very likely that AI will displace many more jobs than the industrial revolution did, and at a much, much faster pace. Additionally, it will target those who are most educated, which isn't necessarily a bad thing, but unlike the working class, who are easy to ignore and tell to re-skill, my guess would be that demands will be made for UBI and for large reorganisations of our existing economic and political systems. My point is, the likelihood that any of this will end well is close to zero, even if you just believe AI will replace a bunch of inefficient jobs like software engineering.
This is a very interesting approach. Using pages as the basic sync unit seems to simplify a lot. It also makes syncing arbitrary bytes possible. But it does seem that if your sync is this coarse-grained, there would be lots of conflicting writes in applications with many concurrent users (even if they are updating semantically unrelated data). It seems like OT or CRDTs would be better in such a use case. I'd be interested to see some real-world benchmarks showing how contention scales with the number of users.
Thank you! You're 1000% correct: Graft's approach is not a good fit for high write contention on a single Volume. Graft is instead designed for architectures that can either partition writes[^1] or represent writes as bundled mutations and apply them in a single location that can enforce order (the SQLSync model).
Basically, Graft is not the full story, but as you point out, because it's so simple it's easy to build different solutions on top of it.
[^1]: either across Volumes or across pages; Graft can handle automatically merging non-overlapping page sets.
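To illustrate what merging non-overlapping page sets means in practice, here's a hypothetical sketch (not Graft's actual API): two concurrent writers conflict only if the sets of pages they touched intersect.

```typescript
// Hypothetical page-level conflict check: writes that touch disjoint page
// sets can be merged automatically; overlapping writes need to be retried
// or ordered at a single point (the SQLSync-style model mentioned above).
type PageId = number;

interface WriteSet {
  writer: string;
  pages: Set<PageId>;
}

function overlaps(a: WriteSet, b: WriteSet): boolean {
  for (const page of a.pages) {
    if (b.pages.has(page)) return true;
  }
  return false;
}

const alice: WriteSet = { writer: "alice", pages: new Set([1, 2]) };
const bob: WriteSet = { writer: "bob", pages: new Set([7, 8]) };
console.log(overlaps(alice, bob)); // false -> safe to merge automatically
```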