
If I remember correctly, most drives either:

1. Fail in the first X amount of time

2. Fail towards the end of their rated lifespan

So buying used drives doesn't seem like the worst idea to me. You've already filtered out the drives that would fail early.

Disclaimer: I have no idea what I'm talking about


Over in hardware-land we call this "the bathtub curve".
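
For intuition, here's a toy model of that curve — a minimal sketch (constants entirely made up, not from any drive vendor) where the failure rate is the sum of a decaying infant-mortality term, a flat mid-life floor, and a wear-out term that ramps up near the rated lifespan:

    // Toy bathtub-curve hazard model. The three terms give the curve its
    // shape: high early failure rate, low flat middle, rising wear-out tail.
    fn hazard(t_years: f64) -> f64 {
        let infant_mortality = 0.30 * (-t_years / 0.25).exp(); // fades after a few months
        let random_failures = 0.01;                            // flat mid-life floor
        let wear_out = 0.02 * (t_years / 5.0).powi(6);         // ramps up near a ~5y rating
        infant_mortality + random_failures + wear_out
    }

    fn main() {
        // A used drive that survived year one sits in the low, flat part.
        for t in [0.1, 0.5, 1.0, 3.0, 5.0, 6.0] {
            println!("t = {t} years, failure rate ~= {:.3}/year", hazard(t));
        }
    }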


We don't have perfect metrics here, but this seems to match our experience: a lot of failures happened shortly after install, before the bulk of the data was downloaded onto the heap, so actual data loss is lower than the hardware failure rate.


Where did you source them? I've thought about buying HDDs from a vendor like serverpartdeals.com but was unsure how reliable the drives would be.


Hi, author of Solari here!

It was pretty straightforward honestly. bevy_solari is written as a standalone crate (library), without any special private APIs or permissions or anything: https://github.com/bevyengine/bevy/tree/main/crates/bevy_sol....

The crate itself is further split between the realtime lighting plugin, a base "raytracing scene" plugin that can be used for your own custom raytracing-based rendering, and the reference pathtracer I use to compare the realtime lighting against.

There were some small changes to the rest of Bevy, e.g. adding a way to set extra buffer usages for the buffers we store vertex/index data in from another plugin https://github.com/bevyengine/bevy/pull/19546, or copying some more previous frame camera data to the GPU https://github.com/bevyengine/bevy/pull/19605, but nothing really major. It was added pretty independently.
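
For a sense of what "standalone crate, no special permissions" means in practice, here's roughly what wiring it up looks like — a minimal sketch, and note that SolariPlugins is my reading of the 0.17 plugin-group name, so check the crate docs for the exact exports:

    use bevy::prelude::*;
    // Illustrative import; the plugin-group name may differ in the released API.
    use bevy_solari::SolariPlugins;

    fn main() {
        App::new()
            // Solari slots in alongside the standard plugins like any
            // third-party rendering crate; no engine-private hooks involved.
            .add_plugins((DefaultPlugins, SolariPlugins))
            .run();
    }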


I've been working on raytraced lighting in the Bevy game engine, using wgpu's new support for hardware raytracing in WGSL. The initial prototype is launching with the release of Bevy 0.17 tomorrow, but there's still a ton left to improve. Lots of experimenting with shaders and different optimizations.

I wrote a blog post about my initial findings recently: https://jms55.github.io/posts/2025-09-20-solari-bevy-0-17


I've been evaluating texture compression options for inclusion in Bevy https://bevy.org, and there just aren't really any good options?

Requirements:

* Generate mipmaps

* Convert to BC and ASTC

* Convert to ktx2 with zstd super-compression

* Handle color, normal maps, alpha masks, HDR textures, etc

* Open source

* (Nice to have) runs on the GPU to be fast

I unfortunately haven't found any option that covers all of these points. Some tools only write DDS, or don't handle ASTC, or want to use Basis Universal, or don't generate mipmaps, etc.
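
To make the mipmap + ktx2/zstd bullets concrete, here's a rough sketch of just those two steps (box-filter mips and per-level zstd via the zstd crate; actual KTX2 container writing, sRGB handling, and normal-map-aware filtering omitted):

    // zstd = "0.13" in Cargo.toml

    // Box-filter one RGBA8 mip level down to half size. Assumes even
    // dimensions for brevity; a real mipmapper handles odd sizes, sRGB,
    // and normal maps (which need renormalizing, not plain averaging).
    fn downsample_rgba8(src: &[u8], w: usize, h: usize) -> Vec<u8> {
        let (nw, nh) = (w / 2, h / 2);
        let mut dst = vec![0u8; nw * nh * 4];
        for y in 0..nh {
            for x in 0..nw {
                for c in 0..4 {
                    let sum: u32 = [(2 * x, 2 * y), (2 * x + 1, 2 * y),
                                    (2 * x, 2 * y + 1), (2 * x + 1, 2 * y + 1)]
                        .iter()
                        .map(|&(sx, sy)| src[(sy * w + sx) * 4 + c] as u32)
                        .sum();
                    dst[(y * nw + x) * 4 + c] = (sum / 4) as u8;
                }
            }
        }
        dst
    }

    fn main() -> std::io::Result<()> {
        let (mut w, mut h) = (256usize, 256usize);
        let mut level = vec![128u8; w * h * 4]; // stand-in for a real texture
        while w > 1 && h > 1 {
            // KTX2 zstd supercompression is applied per mip level,
            // which is roughly what this loop does.
            let compressed = zstd::encode_all(&level[..], 3)?;
            println!("{w}x{h}: {} -> {} bytes", level.len(), compressed.len());
            level = downsample_rgba8(&level, w, h);
            (w, h) = (w / 2, h / 2);
        }
        Ok(())
    }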


Why not just use BC7 (BC6H for HDR) textures on desktop and ASTC on mobile? There's no need to ship a format like basisu that converts to both. There are Rust bindings for both the Intel ISPC BC(n) compressor and ASTCenc.
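
The per-platform split is simple enough to sketch. A minimal version of the selection logic (encoder calls left out, since the exact binding APIs vary; only the format choice is shown):

    // Pick the GPU format from platform + texture content, then hand off to
    // the matching encoder (ISPC texcomp for BCn, astcenc for ASTC).
    enum Platform { Desktop, Mobile }
    enum Content { Color, NormalMap, AlphaMask, Hdr }

    #[derive(Debug)]
    enum GpuFormat { Bc7, Bc6h, Astc4x4, AstcHdr4x4 }

    fn pick_format(platform: Platform, content: Content) -> GpuFormat {
        match (platform, content) {
            // BC6H is the HDR format on desktop; BC7 covers the LDR cases
            // (BC5 for normal maps is another common choice).
            (Platform::Desktop, Content::Hdr) => GpuFormat::Bc6h,
            (Platform::Desktop, _) => GpuFormat::Bc7,
            // ASTC covers LDR and HDR on mobile with the same block layout.
            (Platform::Mobile, Content::Hdr) => GpuFormat::AstcHdr4x4,
            (Platform::Mobile, _) => GpuFormat::Astc4x4,
        }
    }

    fn main() {
        println!("{:?}", pick_format(Platform::Desktop, Content::NormalMap)); // Bc7
        println!("{:?}", pick_format(Platform::Mobile, Content::AlphaMask));  // Astc4x4
        println!("{:?}", pick_format(Platform::Mobile, Content::Hdr));        // AstcHdr4x4
    }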

Oh but if you care about GPU support then I'm pretty sure https://github.com/GPUOpen-Tools/Compressonator has GPU support for BC(n) compression at least. Could rip those shaders out.


Basis Universal also gives you much smaller size over the wire than hardware compressed formats (closer to jpg), which is important at least for web games (or any game that streams asset data over the net).


What did bevy end up using?


Nothing, I haven't found a good option yet.

We do have the existing bindings to a 2-year-old version of Basis Universal, but I've been looking to replace it.


I don't think you'll find anything much better than basis universal, assuming you want textures compressed on the GPU and the simplicity of shipping one file that decodes quickly enough. I've followed development of the encoder, and its authors know what they're doing.

You might beat basisu if you encode for one texture format at a time, and use perceptual RDO for albedo textures.

Another alternative would be to use JPEG XL for distribution and transcode to GPU texture formats on install, but you'd have to ship a decent GPU texture compressor (fast ones leave quality on the table, and it's really hard to make a good one that isn't exponentially slower).
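
The shape of that install-time path, sketched with hypothetical stand-in functions (decode_jxl and encode_bc7 don't exist as-is; JXL decoding would come from a crate such as jxl-oxide, and the BC7 encoder is exactly the "decent GPU texture compressor" you'd have to ship):

    struct Rgba8 {
        width: u32,
        height: u32,
        pixels: Vec<u8>,
    }

    fn decode_jxl(_bytes: &[u8]) -> Rgba8 {
        unimplemented!("hypothetical: decode via a JXL crate, e.g. jxl-oxide")
    }

    fn encode_bc7(_image: &Rgba8) -> Vec<u8> {
        unimplemented!("hypothetical: the hard part you'd have to ship")
    }

    // Small over the wire (JXL), GPU-native after install (BC7).
    fn install_texture(distributed_jxl: &[u8]) -> Vec<u8> {
        encode_bc7(&decode_jxl(distributed_jxl))
    }

    fn main() {
        // Wiring omitted; calling install_texture would panic on the stubs.
    }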


Is Basis Universal that bad? I thought it was more or less invented for this purpose.


Oh cool, I used Bevy for something. Really good shit.


For people not familiar with how ML is being used by games, check out this great and very recent SIGGRAPH 2025 course https://dl.acm.org/doi/suppl/10.1145/3721241.3733999. Slides are in the supplementary material section, and code is at https://github.com/shader-slang/neural-shading-s25.

Neural nets are great for replacing manually-written heuristics or complex function approximations, and 3d rendering is _full_ of these heuristics. Texture compression, light sampling, image denoising/upscaling/antialiasing, etc.

Actual "generative" API in graphics is pretty rare, at least currently. That's more of an artist thing. But there's a lot of really great use cases for small neural networks (think 3-layer MLPs, absolutely nowhere near LLM-levels of size) to approximate expensive or manually-tuned heuristics in existing rendering pipeline, and it just so happens that the GPUs used for rendering also now come with dedicated NPU accelerator things.


> Metal also definitely has a healthy balance between convenience and low overhead - and more recent Metal versions are an excellent example that a high performance modern 3D API doesn't have to be hard to use, nor require thousands of lines of boilerplate to get a triangle on screen.

Metal 4 has moved a lot in the other direction, and now copies a lot of concepts from Vulkan.

https://developer.apple.com/documentation/metal/understandin...

https://developer.apple.com/documentation/metal/resource-syn...


If only the Vulkan SDK were half as good as the Metal development experience, including IDE integration, proper support for managed languages, and the graphical debugging and profiling experience.

That has been the main pain point of Khronos APIs: it isn't only the extension spaghetti; the first step is always to go fishing for all the puzzle pieces to get a proper development experience.

At least now there is the LunarG SDK, though who knows how long they will keep sponsoring it, and it doesn't apply to Android, where Google does the minimum: a GitHub repo dump with samples, and good luck.

Compare that with Apple Metal frameworks.



I wrote most of the virtual geometry / meshlet feature, so thanks for the kind words!

It's not quite at the level of Nanite, but I'm slowly getting there. Main limiter is that I do this in my spare time after work, since I don't have any dedicated funding for my work on Bevy. So, expect progress, but it's going to take a while :)


I've used LiveView a little in the past, and besides the lack of static typing (at the time; I think Elixir has gotten some type annotations since), I really liked it and found it easy to work with.

Ecto, though, I could never figure out how to use. If I could just make SQL queries in an ORM style I would understand it, but the repositories and auto-generated relations and such I couldn't figure out. Do you have any good resources for learning Ecto? I didn't find the official docs helpful.


If you want to make nice looking materials and effects, you need a combination of good lighting (comes from the rendering engine, not the material), and artistic capabilities/talent. Art is a lot harder to teach than programming I feel, or at least I don't know how to teach it.

Programming the shaders themselves is pretty simple imo; they're just pure functions that return color data or triangle positions. The syntax might be a little different than what you're used to depending on the shader language, but it should be easy enough to pick up in a day.
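
To make "pure functions that return color data" concrete, here's the fragment-shader mental model written as plain Rust (a real one would be WGSL/GLSL/HLSL, but the shape is the same):

    // A fragment shader is conceptually this: a pure function from per-pixel
    // inputs to a color, run once per pixel in parallel. No shared mutable
    // state, no I/O.
    struct FragInput {
        uv: [f32; 2],
        normal: [f32; 3],
    }

    fn fragment(input: &FragInput, light_dir: [f32; 3]) -> [f32; 4] {
        // Simple N.L diffuse term, the "hello world" of surface shading.
        let n_dot_l = (0..3).map(|i| input.normal[i] * light_dir[i]).sum::<f32>().max(0.0);
        // Checkerboard albedo from the UVs, just to use them for something.
        let checker = (((input.uv[0] * 8.0) as i32 + (input.uv[1] * 8.0) as i32) % 2) as f32;
        let albedo = 0.25 + 0.5 * checker;
        [albedo * n_dot_l, albedo * n_dot_l, albedo * n_dot_l, 1.0]
    }

    fn main() {
        let frag = FragInput { uv: [0.3, 0.7], normal: [0.0, 0.0, 1.0] };
        println!("{:?}", fragment(&frag, [0.0, 0.0, 1.0]));
    }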

If you want to write compute shaders for computation, then it gets a lot more tricky and you need to spend some time learning about memory accesses, the underlying hardware, and profiling.

