> The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends, I'm going to predict that economically and politically we handle it as poorly as possible, leading to violent revolution and possible societal collapse, but I'd love to be wrong.
Exactly, and the world has to start talking about it. Eventually everybody will, including all sorts of politicians advocating to 'finally tackle the problem', which will be too late.
It's just a joy to use, and I also like its design a lot.
I like that it has a big display showing 4 RPN rows, but I admit that's something software calculators would be even better at.
It definitely has a nostalgic/romantic side to it for me.
Oh, and for everyday stuff I really like using Spotlight on macOS. It's really convenient: Command+Space, then just type the expression into the search box.
Does the Spotlight calculator still expect you to respect the locale, ignoring decimal points as if they don't exist when you enter them in a non-locale-compliant way?
10 years ago I tried to add 640.9 + 2.73 on a German-locale Mac (Germany uses "," as the decimal separator), and it gave me 6682 as the answer...
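That answer is what you get if "." is treated as the German thousands separator rather than a decimal point: 640.9 becomes 6409 and 2.73 becomes 273. As a rough illustration (not Spotlight's actual code, just Python's locale module, and assuming a German locale such as de_DE.UTF-8 is installed):

    import locale

    # With German number formatting, "." is a thousands separator and
    # "," is the decimal point, so "640.9" parses as 6409 and "2.73" as 273.
    locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")

    a = locale.atof("640.9")   # -> 6409.0
    b = locale.atof("2.73")    # -> 273.0
    print(a + b)               # -> 6682.0, the same answer Spotlight gave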
Some UX friction I noticed:
To get back to the homepage from an article, I have to click on the article headline. While this is elegant and you likely get used to it once you know it, it's not exactly intuitive.
I worry that almost all the 2025 startups I've seen are AI app builders. Where are the novel applications? I get that codegen is currently one area where AI does well, but it also feels like we're struggling with other use cases.
My optimism says the good new stuff is coming slowly because people who care about their craft and take things slowly aren't in any rush to get to market.
Yes, this appears to use Stam's Stable Fluids algorithm. Look for the phrases "semi-Lagrangian advection" and "pressure correction" to see the important functions. The 3D version seems to use trilinear interpolation, which is pretty diffusive.
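For anyone curious what that step looks like, here's a minimal 2D sketch in Python/NumPy (my own paraphrase, not the demo's code): each cell traces backwards along the velocity and linearly interpolates the old field at the departure point. The interpolation smooths the field a little on every step, which is where the diffusive behaviour comes from.

    import numpy as np

    def advect(field, u, v, dt):
        """Semi-Lagrangian advection of `field` on a unit-spaced 2D grid.

        u, v are velocity components along the first and second axes.
        The 3D version is the same idea with trilinear instead of
        bilinear interpolation.
        """
        n, m = field.shape
        jj, ii = np.meshgrid(np.arange(m), np.arange(n))

        # Trace each cell centre backwards along the velocity and clamp
        # the departure point to the grid.
        x = np.clip(ii - dt * u, 0, n - 1)
        y = np.clip(jj - dt * v, 0, m - 1)

        # Bilinear interpolation of the old field at the departure points.
        x0 = np.floor(x).astype(int); x1 = np.minimum(x0 + 1, n - 1)
        y0 = np.floor(y).astype(int); y1 = np.minimum(y0 + 1, m - 1)
        sx, sy = x - x0, y - y0

        return ((1 - sx) * (1 - sy) * field[x0, y0] +
                sx * (1 - sy) * field[x1, y0] +
                (1 - sx) * sy * field[x0, y1] +
                sx * sy * field[x1, y1])

The pressure-correction step then projects the advected velocity back to a divergence-free field, typically with an iterative solve such as Jacobi or Gauss-Seidel.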
Right, all models are inherently wrong. It's up to the user to know about their limits and uncertainty.
But I think this 'being wrong' is kind of confusing when talking about LLMs (in contrast to systems or scientific modelling).
At what they model (language), current LLMs are really good and accurate, apart from, say, the occasional Chinese character in the middle of a sentence.
But what we usually mean by LLMs 'being wrong' is being factually wrong in answering a question that happens to be expressed as language. That's a layer on top of what the model is designed to model.
So saying 'the model is wrong' when it's factually wrong above the language level isn't fair.
I guess this is essentially the same thought as 'all they do is hallucinate'.