Hacker News | gatane's comments

This looks interesting! Thanks for sharing it; I wonder whether anyone else has related content.


- Stencil Routed A-buffer [1]

- Multi-Fragment Effects on the GPU using the k-Buffer [2]

- Production Volume Rendering [3]

- Translucent Shadow Maps [4]

[1] https://developer.download.nvidia.com/presentations/2007/sig...

[2] https://www.sci.utah.edu/~stevec/papers/kbuffer.pdf

[3] https://graphics.pixar.com/library/ProductionVolumeRendering...

[4] https://www.scribd.com/document/657069029/Translucent-Shadow...


Emil Persson's GPU BSP traversal demo (2017) https://www.humus.name/index.php?page=3D&ID=92


I've realized I'm fonder of WinXP than Win95.


Dude, please give money to artists instead of using genAI


Nice read. I've been wondering whether coffee really has an effect on mental health too.


Amazing project!!


I was just trying to find a benchmark on this, wondering which algorithm would work best for video games. Thanks!


Video games and compute-heavy tasks won't see a large compression factor. The good thing is that you can test your own setup using zramctl.


Zippers arise as the derivative of the list type. You can go beyond lists, too.

https://journals.sagepub.com/doi/abs/10.3233/FUN-2005-651-20...
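For a concrete feel, here is a minimal list-zipper sketch in Python (the names `Zipper`, `from_list`, `go_left`, and `go_right` are my own, not from the paper). The idea: keep a focused element plus its left context (stored reversed) and right context, so moving the focus is O(1):

```python
from dataclasses import dataclass
from typing import Any, List, Optional

@dataclass
class Zipper:
    left: List[Any]   # elements to the left of the focus, nearest first
    focus: Any
    right: List[Any]  # elements to the right of the focus, in order

def from_list(xs: List[Any]) -> Optional[Zipper]:
    """Focus the first element, or return None for an empty list."""
    return Zipper([], xs[0], xs[1:]) if xs else None

def go_right(z: Zipper) -> Optional[Zipper]:
    """Shift focus one element to the right, if possible."""
    if not z.right:
        return None
    return Zipper([z.focus] + z.left, z.right[0], z.right[1:])

def go_left(z: Zipper) -> Optional[Zipper]:
    """Shift focus one element to the left, if possible."""
    if not z.left:
        return None
    return Zipper(z.left[1:], z.left[0], [z.focus] + z.right)
```

The "derivative" view is that a zipper's context (the `left`/`right` pair here) is exactly a list with one hole punched in it, which is what differentiating the list type gives you.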


This thing is so cool


Thank you!


This also happens with GitHub and anything that uses ads lately. A 2-core PC dies trying to render those pages.


Hell, I have to use my phone instead if I want to visit any of those pages nowadays. And even my phone now randomly crashes, either Chrome or the page itself.


I wonder if it’s proof of work bullshit. For those to be effective, they have to slow down whatever machine is running in an LLM farm.

That generally means they have to peg a desktop GPU for long enough to make a dent larger than LLM inference time.

Browsing with a 2 core box is like using a musket to blow a hole in a wall that’s optimized for armor piercing rounds.

Is this dumb? Of course. Your best recourse is probably to work around the anti-LLM blocks by having an agent read and summarize the page for you.

Are there any decent cloud based web renderers yet? Something like vnc or rdp backed by a shared 4090 could solve the problem. For static content, it’d only have to run proof of work once, then serve the result from cache.


That makes no sense. You require PoW before providing the information, not when the page is already sent. For a large code repository like GitHub for LLM training, it's also trivial to just git fetch to scrape. The simpler explanation is that modern web devs are frequently incompetent and can't get static text to appear on an idle page without pegging cores, can't get subsecond server page generation for simple CRUD, etc. This was the trend before LLMs were ever a thing.


I absolutely despise what they did to GitHub's website. It used to be smooth and slick. But now a lot of things like reviewing PRs or just searching through a repo are painfully slow.


Luanti (formerly Minetest) also uses this engine, btw.

https://www.luanti.org/


Kind of; they forked Irrlicht and took it in their own direction.

https://github.com/luanti-org/luanti/tree/master/irr

