
I'm surprised GPUs were the right call for this use case. Not to say GPUs aren't useful in DBs, but keeping the GPUs fed quickly becomes an I/O bottleneck. I'd assume the workloads must be non-trivial for this to beat general-purpose CPUs using SIMD + in-memory data.



I have what may be a completely stupid question, but my imagination just posited it to me, so I thought I'd ask:

Can you translate raytracing to DB relations?

You have a raytracing engine, and many proven approaches for determining what is visible from any given point.

What if the "point" was, in fact, "the query"? From that point you apply the idea of raytracing to see which pieces of information are the relevant fields in the space of a DB, pull in all those fields, and the shader translates to the "lens" or "answer" you want from that point's perspective.

Is this where a GPU would help? Why? Why not?


In theory, yes. Rays get bounced around from a light source until they hit a camera (or decay below a threshold). The net intensity is recorded for all the rays in the camera plane. Computing the net intensity is a reduction over all the rays hitting that pixel in the camera plane. Then you have NUM_BOUNCES many map steps to compute the position and intensity of the ray after each bounce. So in theory these map and reduce operations could be expressed in a database context.
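A minimal sketch of that structure (hypothetical toy schema and physics, Python rather than SQL): each map step advances every ray in a "rays" relation by one bounce, and the final reduce is effectively a GROUP BY pixel / SUM(intensity) over the camera plane.

```python
import random
from collections import defaultdict

NUM_BOUNCES = 4  # fixed number of map steps, as described above


def bounce(ray):
    """One 'map' step: advance a ray by a single bounce (toy physics)."""
    x, y, intensity, pixel = ray
    # hypothetical update: jitter the position, attenuate the intensity
    return (x + random.uniform(-1, 1),
            y + random.uniform(-1, 1),
            intensity * random.uniform(0.5, 0.9),
            pixel)


def render(rays):
    # NUM_BOUNCES map steps over the whole 'rays' relation
    for _ in range(NUM_BOUNCES):
        rays = [bounce(r) for r in rays]
    # reduce: group by pixel, sum intensity -> net intensity per camera pixel
    image = defaultdict(float)
    for _, _, intensity, pixel in rays:
        image[pixel] += intensity
    return dict(image)


# toy 'rays' table: rows of (x, y, intensity, target_pixel)
rays = [(0.0, 0.0, 1.0, (i % 4, i // 4)) for i in range(16)]
print(render(rays))
```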

In practice, does it make sense? Not really, since not every ray is the same amount of work. One ray can go straight to the camera. Another could bounce many, many times, passing through many objects, before hitting the camera or dying out altogether. GPUs are terrible at handling workloads with high thread divergence (some threads run much slower than others).
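To make the divergence point concrete, here is a rough back-of-envelope simulation (the bounce distribution is assumed, not from the source): rays are grouped into warps of 32 threads, and because SIMT lanes execute in lockstep, every lane in a warp effectively waits for the slowest ray in that warp.

```python
import random

WARP_SIZE = 32       # threads per warp on NVIDIA GPUs
NUM_RAYS = 1 << 16   # assumed workload size

# assumed distribution: most rays terminate quickly, a few bounce for a long time
bounces = [random.choice([1, 2, 2, 3, 3, 4, 40]) for _ in range(NUM_RAYS)]

useful_work = sum(bounces)
# each warp occupies all 32 lanes for as long as its slowest ray keeps bouncing
warp_lane_cycles = sum(
    max(bounces[i:i + WARP_SIZE]) * WARP_SIZE
    for i in range(0, NUM_RAYS, WARP_SIZE)
)

print(f"SIMT lane utilization: {useful_work / warp_lane_cycles:.1%}")
```

With a long tail of slow rays, the reported utilization drops to a small fraction, which is the cost the parent comment is describing.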



