(May have been featured on HN recently, I can't remember how I got there).
Often mipmaps are pretty much automatic from the player's perspective: they're usually baked in by the developers when processing the texture files. It can probably be disabled, but I can't remember seeing that option in a game for a while.
I'm curious to see how Bevy's doing it. I'm making a game in Godot at the moment, and there is an option there to generate mipmaps (or not) for an imported texture, and you can choose bilinear or trilinear filtering for them, but that's about it (maybe there's more in the API, I haven't checked).
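For what it's worth, the bilinear vs. trilinear choice boils down to whether the sampler blends between adjacent mip levels. A rough sketch of the idea (not any engine's actual API; `sample_at` and the level formula are illustrative simplifications of what GPU hardware does from the texel footprint):

```python
import math

def mip_level(texels_per_pixel):
    """Fractional mip level: 0 when one texel covers one pixel,
    +1 for every doubling of texels squeezed into a pixel."""
    return max(0.0, math.log2(texels_per_pixel))

def trilinear(sample_at, texels_per_pixel):
    """sample_at(level) -> bilinearly filtered sample from integer mip `level`.
    Trilinear = linear blend between the two nearest mip levels."""
    lvl = mip_level(texels_per_pixel)
    lo = int(lvl)
    frac = lvl - lo
    return sample_at(lo) * (1.0 - frac) + sample_at(lo + 1) * frac
```

Plain bilinear with mipmaps just snaps to one level (no `frac` blend), which is why you can sometimes see visible "bands" where the level switches; trilinear smooths those transitions at the cost of a second texture fetch.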
> Maybe what I'm asking for is impossible, because fighting those patterns any further would just go too far into blurry territory. It might just be an inherent property of projecting something on a grid of neatly ordered pixels, which is very unlike the receptors in our eyes.
Basically it solves this exact problem: rendering a high-res image at a smaller size can lose details exactly where you don't want it to (e.g. because some part of the texture, like a thin line, falls entirely into subpixel "space"). Mipmapping pre-processes lower-res versions of the image to switch to at different distance thresholds from the camera. The result is actually better detail at scaled-down sizes, with much less flickering, even though the texture rendered at a distance can be much lower quality.
They can also save a bit of GPU power and possibly VRAM, since the lower-res distant textures stream much more quickly than the ultra-high-res near ones.
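The pre-processing step described above can be sketched in a few lines: repeatedly halve the texture with a 2x2 box filter until you reach a 1x1 level. This is a simplification (real toolchains use better filters, sRGB-aware averaging, etc.), and the function name is made up for illustration:

```python
def build_mip_chain(texture):
    """texture: square 2D list of grayscale values, side length a power of two.
    Returns the full mip chain, level 0 being the original."""
    chain = [texture]
    while len(chain[-1]) > 1:
        src = chain[-1]
        half = len(src) // 2
        # Each destination texel averages a 2x2 block of the source,
        # so every level is half the width and height of the previous one.
        dst = [[(src[2*y][2*x] + src[2*y][2*x+1] +
                 src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4.0
                for x in range(half)]
               for y in range(half)]
        chain.append(dst)
    return chain
```

The whole chain only adds about a third more memory than the base texture (1/4 + 1/16 + ... of the original), which is why it's usually a cheap trade for the reduced aliasing.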
https://bgolus.medium.com/sharper-mipmapping-using-shader-ba...