In Europe, the average age for buying a new car is 50.
This means that most of the cars sold are second-hand.
Most people think that the car is a luxury and prefer to focus on their home first, then their family, and after that, their car.
I am from Brazil and although most cities are 100% built around cars, public transportation IS an option and mostly works and is (somewhat) affordable.
Unlike the US, if a place is 1km away as the crow flies, you can get there by walking ~1.5km at most. And there are bus services; although often overcrowded or infrequent, they do run and you can plan your life around them.
Yet everyone still buys a car as soon as they can afford one (or often if they can't). And they use it for commuting to work every day.
To get to the point Europe is at, where even rich people don't want cars, or if they have one it's only for weekend trips, you need to do a lot better than this.
Unfortunately, getting the US to be like Europe in this regard is not really viable, but it could get to the point Brazil is at, where poorer people can afford not to own a car.
In some big cities in Brazil they do a lot of low-cost things, like dedicated bus lanes that actually make some high-demand trips shorter by bus. Progress in this area needs to be incremental; there is little point in investing crazy amounts of money in one big project. Instead, the investment should be lower and constant.
Sometimes one big project can make a big difference, like a new rail bridge or metro line. But in general, getting people onto buses is more efficient, even if that means rich people still won't want to get on that bus.
> To get to the point Europe is at, where even rich people don't want cars, or if they have one it's only for weekend trips, you need to do a lot better than this.
It's not like that in all of Europe either; the further you go from Western Europe, the worse the public transit gets. The Balkans are probably the worst: you need a car if you live outside a city, as rail and even buses are slow, unreliable, or simply not an option where you live.
It is like that in Western Europe as well: "if you live outside a city" you need a car. However, small cities don't have the massive gridlock that big cities have, so they can support a car-centric life.
And sure, taking a train is faster/nicer for long travel, but in practice what matters most for the economy and for people's lives/health is the daily commute, which mostly happens inside cities.
But you don't quite get how it is in the US (and Canada). There, it is "if you live *inside* a city" you need a car, no matter whether it's small, large, or a metropolis.
> the daily commute which mostly happens inside cities
Well, some countries are far more centralized than others. The daily commute to/from cities is a huge problem where I live, to the point that cities are flooded mostly with outside commuters. Trains and busses could solve that very elegantly, but nobody’s investing in that.
> the further you go from Western Europe, the worse the public transit gets.
It's not really an East-West thing. Downtown Sofia, for example, has much better public transportation than many 'secondary' and rural towns in Germany and France.
But that's Downtown Sofia; what about the rest of the country? That's what I'm trying to say: yes, it's fine if you live in a city, but outside of that it's not so easy. Whereas Western Europe invests much more in having efficient transit in smaller and rural towns.
Yeah, getting rich people onto buses is probably impossible. But making 90% of all journeys cheaper and more convenient for 90% of people is pretty doable.
There are two main problems with XSLT.
The first is that manipulating strings is a pain: splitting and concatenating strings is hellishly verbose and difficult to read.
The second is that it quickly becomes a mess when you use the "priority" attribute to overload template rules.
I compare XSLT to regular expressions: great flexibility, but impossible to maintain due to poor readability. To my knowledge, it's also impossible to trace execution.
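The string-splitting complaint is easy to illustrate. In XSLT 1.0 there is no split function, so just breaking a comma-separated string into items takes a recursive named template (a sketch; the `split` and `item` names are my own):

```xml
<!-- Recursively split a comma-separated string into <item> elements -->
<xsl:template name="split">
  <xsl:param name="s"/>
  <xsl:choose>
    <xsl:when test="contains($s, ',')">
      <item><xsl:value-of select="substring-before($s, ',')"/></item>
      <!-- Recurse on the remainder after the first comma -->
      <xsl:call-template name="split">
        <xsl:with-param name="s" select="substring-after($s, ',')"/>
      </xsl:call-template>
    </xsl:when>
    <xsl:otherwise>
      <item><xsl:value-of select="$s"/></item>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```

A one-liner in most languages becomes fifteen lines of markup, which is the readability problem in a nutshell.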
I worked for Ocean Software France at the time.
Ocean Software stopped all development on the Amiga and Atari ST, so there were a few games we were working on that were never released.
They wanted to focus on licensed titles, so all the other games were ditched.
Yes, I programmed Snow Bros for the Atari ST. It was finished but never released. And no, I have no copy either.
When I worked for Cryo, I coded a game called "Trashman" for the SNES. I still have a copy of the game, but it's not finished.
Sorry to hear that. I suspect the Atari community has asked you a lot about still having a copy, too! :) Hopefully something will surface in the future via other means.
Trashman sounds very interesting, and I'd love to learn more sometime if you were up for talking about the game. If so, please feel free to contact me via the contact form link on the website.
In French, there is a game for building relations between words (they give you a word, and you have to type the most related words):
https://www.jeuxdemots.org
They reached 677 million relations in 2024!
EDIT: after double-checking my work, I realized I have a better bound on maximum error, but not a better average error. So, the magic number depends on the goal or metric, but mean relative error seems reasonable. Leaving my original comment here, but note the big caveat that I’m half wrong.
One can do better still: 0x7EF311BC is a near-optimal value, at least for inputs in the range [0.001 .. 1000].
The simple explanation here is:
The post’s number 0x7EFFFFFF results in an approximation that is always equal to or greater than 1/x. The value 0x7EEEEBB3 is better, but it’s less than 1/x around 2/3rds of the time. My number 0x7EF311BC appears to be as well balanced as you can get, half the time greater and half the time less than 1/x.
To find this number, I have a Jupyter notebook that plots the maximum absolute value of the relative error over a range of inputs, for a range of magic constants. Once it's set up, it's pretty easy to manually binary search and find the minimum. The plot of max error looks like a big "V". (Edit: the plot of mean error looks like a big "U" near the minimum.)
The optimal number does depend on the input range, and using a different range or allowing all finite floats will change where the optimal magic value is. The optimal magic number will also change if you add one or more Newton iterations, like in that github snippet (and also seen in the ‘quake trick’ code).
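As a minimal Python sketch of the trick being discussed (the constant is the 0x7EF311BC candidate from above; bit reinterpretation via the standard `struct` module):

```python
import struct

MAGIC = 0x7EF311BC  # candidate magic constant discussed above

def f32_bits(x):
    """Reinterpret a float as its IEEE 754 single-precision bit pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(i):
    """Reinterpret a 32-bit pattern as a single-precision float."""
    return struct.unpack('<f', struct.pack('<I', i))[0]

def approx_recip(x):
    # Initial approximation of 1/x: subtract the input's bit pattern
    # from the magic constant, then reinterpret as a float.
    return bits_f32(MAGIC - f32_bits(x))

def refine(x, y):
    # One Newton-Raphson step for the reciprocal: y' = y * (2 - x*y).
    # Done here in Python doubles; the fp32/FMA rounding details
    # discussed in this thread are ignored in this sketch.
    return y * (2.0 - x * y)
```

For x = 2.0 the raw approximation lands within about 5% of 0.5, and each Newton step roughly squares the relative error, which is why the optimal magic shifts once refinement steps are added.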
PPS maybe 0x7EF0F7D0 is a pretty good candidate for minimizing the average relative error…?
Given my search criteria, the optimal magic number turns out to be: 0x7ef311c2
Initial approximation:
Good bits min: 4
Good bits avg: 5.242649912834
Error max: 0.0505102872849 (4.30728 bits)
Error avg: 0.0327344845327 (4.93304 bits)
1 NR step:
Good bits min: 8
Good bits avg: 10.642581939697
Error max: 0.00255139507338 (8.61450 bits)
Error avg: 0.00132373889641 (9.56117 bits)
2 NR steps:
Good bits min: 17
Good bits avg: 19.922843217850
Error max: 6.62494557693e-06 (17.20366 bits)
Error avg: 2.62858584054e-06 (18.53728 bits)
3 NR steps:
Good bits min: 23
Good bits avg: 23.674004554749
Error max: 1.19249960972e-07 (22.99951 bits)
Error avg: 3.44158509521e-08 (24.79235 bits)
Here, "good bits" is 24 minus the number of trailing non-zero bits in the integer difference between the approximation and the correct value, looking at the IEEE 754 binary representations (if that makes sense).
Also, for the NR steps I used double precision for the inner (2.0 - x * y) part, then rounded to single precision, to simulate FMA, but single precision for the outer multiplication.
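A sketch of my understanding of that simulated-FMA step in Python (`f32` is a hypothetical helper that rounds a double to single precision):

```python
import struct

def f32(x):
    """Round a Python double to IEEE 754 single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def nr_step_fma(x, y):
    # Inner (2 - x*y) evaluated in double precision, then rounded once to
    # single precision -- simulating a fused multiply-add, which has no
    # intermediate rounding. The outer multiply rounds separately.
    t = f32(2.0 - x * y)
    return f32(y * t)
```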
Ah, very nice; I was close with using max error: 0.05051 is the same number I got. Pretty sure 0x7ef311c2 came up for me at least a few times as I was fiddling with parameters. Is this using minimum good bits as the deciding criterion, or is it the best overall number using one of the averages and also 1-3 NR steps? Did you limit the input range, or use all finite floats? Having the min/avg error in bits is nice; it's more intuitive than relative error.
I like the FMA simulation, that’s smart; I didn’t think about it. I did my search in Python. I don’t have it in front of me right now, and off the top of my head I’m not even sure whether my NR steps are in Python precision or fp32. :P My posts in this thread were with NR turned off, I wanted to find the best raw approximation and noticed I got a different magic number when using refinement. It really is an amazing trick, right? Even knowing how it works it still looks like magic when plotting the result.
Thanks for the update!
BTW I was also fiddling with another possible trick that is specific to reciprocal. I suspect you can simply negate all the bits except the sign and get something that’s a decent starting point for Newton iters, though it’s a much worse approximation than the subtraction. So maybe (x ^ 0x7fffffff). Not sure if negating the mantissa helps or if it’s better to negate only the exponent. I haven’t had time to analyze it properly yet, and I don’t know of any cases where it would be preferred, but I still think it’s another interesting/cute observation about how fp32 bits are stored.
When measuring the errors I exhaustively iterate over all possible floats in the range [1, 2), by enumerating all IEEE 754 single precision representations in that range. That's "only" 2^23 numbers, so perfectly doable.
My selection criteria were a bit complex, but something like this:
1. Maximize the number of accurate bits in the initial approximation.
2. Then the same for NR step 1, then NR step 2, etc.
3. Minimize the max error in the approximation, then the avg error in the approximation.
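The exhaustive sweep over [1, 2) described above could be sketched like this in Python (subsampled here for speed; `step=1` gives the full 2^23 enumeration):

```python
import struct

def bits_f32(i):
    """Reinterpret a 32-bit pattern as an IEEE 754 single-precision float."""
    return struct.unpack('<f', struct.pack('<I', i))[0]

def max_rel_error(magic, step=1 << 10):
    """Max relative error of the bit-trick reciprocal over floats in [1, 2).

    Every single-precision float in [1, 2) has exponent field 127
    (bit pattern 0x3F800000 | mantissa), so iterating the 2^23 mantissa
    values covers them all. `step` subsamples the mantissa in this sketch.
    """
    worst = 0.0
    for m in range(0, 1 << 23, step):
        xi = 0x3F800000 | m
        x = bits_f32(xi)
        y = bits_f32(magic - xi)              # bit-trick approximation of 1/x
        worst = max(worst, abs(y * x - 1.0))  # |y - 1/x| / (1/x) == |y*x - 1|
    return worst
```

With magic 0x7ef311c2 this should land near the ~0.0505 max relative error quoted above, and the original post's 0x7EFFFFFF should come out noticeably worse.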
Another programmer had the same pseudonym but was working on the Atari ST.