Apparently there are drawbacks to this rendering approach, although this company has claimed to have solved them. I'd be interested to hear from anybody here what experience they have with this type of rendering.
I'm not too experienced with voxel rendering, but I'm familiar with the general problem.
The most visible way that Nvidia and ATI are tackling the LOD problem is on-GPU tessellation. The idea is to use a mathematically smooth surface (such as Béziers or NURBS) and approximate it with polygons at the appropriate level for each frame. If your lettuce is far away, it gets few polys (or gets culled); if it's 20% of your screen, it gets lots more. You can avoid popping artifacts because it's the same surface, just approximated differently, and it's not too tricky to get approximations that blend smoothly into each other.
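To make the LOD part concrete, the selection boils down to estimating how much of the screen an object covers and mapping that to a subdivision level. A toy sketch (the function name, thresholds, and constants are all made up for illustration):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Toy LOD pick: estimate the object's projected size on screen and map that
// to a tessellation factor. Real hull/tessellation shaders do this per patch,
// but the idea is the same. All constants here are arbitrary.
int tessellationLevel(float objectRadius, float distance,
                      float fovY, int screenHeight)
{
    // Rough pinhole-camera estimate of projected size in pixels.
    float pixels = (objectRadius / (distance * std::tan(fovY * 0.5f)))
                   * (screenHeight * 0.5f);

    float coverage = pixels / screenHeight;      // fraction of screen height
    int level = static_cast<int>(std::ceil(coverage * 64.0f));
    return std::clamp(level, 1, 64);             // 1 = far away, 64 = in your face
}

int main()
{
    // The same lettuce at three distances gets three very different budgets.
    for (float d : {2.0f, 20.0f, 200.0f})
        std::printf("distance %.0f -> tess level %d\n",
                    d, tessellationLevel(0.3f, d, 1.0f, 1080));
}
```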
If you need more detail that doesn't need to be animated, toss on a displacement map at relatively low memory cost.
True volumetric modeling works best for things that have lots of branching and void spaces, like dandelions.
Displaced surfaces work for lots of interesting things; see the Mudbox and ZBrush galleries.
There used to be a fantastic game on the PC called Outcast (http://en.wikipedia.org/wiki/Outcast_%28video_game%29) that used voxels too. The resolution was limited by the processing power required, but the graphics were impressive for the time.
Although Outcast is often cited as a forerunner of voxel technology, this is somewhat misleading. The game does not actually model three-dimensional volumes of voxels. Instead, it models the ground as a surface, which may be seen as being made up of voxels. The ground is decorated with objects that are modeled using texture-mapped polygons.
From what I see it was not a true point voxel system, but rather a ray tracing algorithm that was only used for the ground. Animated objects were still rendered using polygons.
Outcast's graphic system was particularly interesting because it wasn't just voxels: It was a voxel system for the landscape and polygons for actor-class things and structures (since most voxel implementations can only render heightmaps and thus are perfect for landmass features if little else).
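For anyone curious what that kind of ground renderer roughly looks like: the classic heightmap approach marches outward from the camera and draws vertical spans per screen column, with nearer slices occluding farther ones. A minimal sketch, assuming a flat height/color map layout; this is not Outcast's actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Minimal "voxel space" style terrain renderer: one ray per screen column,
// marching outward and drawing vertical spans, nearer slices occluding
// farther ones. Height/color maps are flat W*H arrays. Sketch only.
struct Camera { float x, y, height, angle, horizon, maxDist; };

void renderTerrain(const std::vector<uint8_t>& heightMap,
                   const std::vector<uint32_t>& colorMap,
                   int mapW, int mapH, const Camera& cam,
                   std::vector<uint32_t>& frame, int screenW, int screenH)
{
    // Highest (smallest y) filled row per column; screenH = nothing drawn yet.
    std::vector<int> yTop(screenW, screenH);

    for (float z = 1.0f; z < cam.maxDist; z += 1.0f) {
        // Endpoints of the sampling line at distance z across the view.
        float sinA = std::sin(cam.angle), cosA = std::cos(cam.angle);
        float lx = cam.x + (-cosA - sinA) * z, ly = cam.y + ( sinA - cosA) * z;
        float rx = cam.x + ( cosA - sinA) * z, ry = cam.y + (-sinA - cosA) * z;
        float dx = (rx - lx) / screenW, dy = (ry - ly) / screenW;

        for (int col = 0; col < screenW; ++col, lx += dx, ly += dy) {
            int mx = ((int)lx % mapW + mapW) % mapW;       // wrap the map
            int my = ((int)ly % mapH + mapH) % mapH;
            float h = heightMap[my * mapW + mx];

            // Project the terrain height to a screen row and fill the span.
            int screenY = (int)((cam.height - h) / z * 240.0f + cam.horizon);
            for (int y = std::max(screenY, 0); y < yTop[col]; ++y)
                frame[y * screenW + col] = colorMap[my * mapW + mx];
            if (screenY < yTop[col]) yTop[col] = std::max(screenY, 0);
        }
    }
}

int main()
{
    int mapW = 256, mapH = 256, screenW = 320, screenH = 200;
    std::vector<uint8_t>  heightMap(mapW * mapH, 0);          // flat ground
    std::vector<uint32_t> colorMap(mapW * mapH, 0x228B22);
    std::vector<uint32_t> frame(screenW * screenH, 0);
    Camera cam{128.0f, 128.0f, 80.0f, 0.0f, 100.0f, 300.0f};
    renderTerrain(heightMap, colorMap, mapW, mapH, cam, frame, screenW, screenH);
}
```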
"The result is a perfect pure bug free 3D engine that gives Unlimited Geometry running super fast, and it's all done in software."
Set off my bullshit detector immediately. I hope this is real, but I'd love to see maybe... some sort of OpenGL extension, or a demo app. I can understand wanting to be secretive, but he's got nothing to show at the moment except a lot of talk.
This seems really fishy to me. First of all, the detail simply cannot be 'unlimited': the point cloud data is necessarily limited by memory. And unless the search algorithm is O(1), there must be a limit to how much point cloud data can be searched in a 60th or a 30th of a second (pick whatever frame rate you like).
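To put rough numbers on that frame-budget point (the per-step costs below are invented purely for illustration):

```cpp
#include <cstdio>

int main()
{
    const double width = 1920, height = 1080, fps = 30;
    const double lookupsPerSecond = width * height * fps;   // ~62 million point queries/s

    // Suppose each query walks ~20 tree nodes at ~10 ns per node
    // (both numbers invented just to show the shape of the problem).
    const double nodesPerLookup = 20, nsPerNode = 10;
    const double frameBudgetNs = 1e9 / fps;
    const double frameCostNs   = width * height * nodesPerLookup * nsPerNode;

    std::printf("queries/s: %.0f  budget: %.1f ms/frame  naive single-core cost: %.1f ms/frame\n",
                lookupsPerSecond, frameBudgetNs / 1e6, frameCostNs / 1e6);
    // Even with a constant per-pixel cost the constants matter: here the naive
    // cost (~415 ms) blows way past the 33 ms budget, so the real question is
    // how small they can actually make that constant.
}
```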
It also seems like animation would complicate things greatly.
Although I don't quite believe the hype, it is a fucking impressive piece of software and should be praised as such.
Assuming this is legit, yes you can do animation with it. Probably with a minimal performance hit.
Keep all your sprite models separate from the main scene. For each sprite, rotate the camera so you're viewing it from the same relative position you would be in the game, render it, and put it in the right spot on the screen. As you do this for every object, you should be able to skip pixels you've already drawn. Once all the objects are up, draw the background, again skipping all the pixels you've already done.
So again, if this works it should be able to do animations in a single pass, with just a little per-sprite overhead. (and they could well have overcome that too.)
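In code, the "skip pixels you've already drawn" idea is basically a coverage mask checked before any expensive per-object lookup. A rough sketch, front to back; "Resolver" here is just a made-up stand-in for whatever per-object point query the engine would really do:

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Front-to-back compositing with a coverage mask: animated objects are drawn
// nearest first, and any pixel already filled is skipped, so the static
// background only pays for the pixels it actually owns.
using Resolver = std::function<bool(int x, int y, uint32_t& colorOut)>;

void composite(std::vector<Resolver>& objectsNearToFar, Resolver background,
               std::vector<uint32_t>& frame, int w, int h)
{
    std::vector<uint8_t> covered(w * h, 0);

    auto pass = [&](Resolver& r) {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int i = y * w + x;
                if (covered[i]) continue;              // already owned by a nearer object
                uint32_t c;
                if (r(x, y, c)) { frame[i] = c; covered[i] = 1; }
            }
    };

    for (auto& obj : objectsNearToFar) pass(obj);      // animated sprites first
    pass(background);                                  // static scene last
}

int main()
{
    int w = 8, h = 8;
    std::vector<uint32_t> frame(w * h, 0);
    std::vector<Resolver> objects = {
        [](int x, int y, uint32_t& c) { c = 0xFF0000; return x == y; }  // toy "sprite"
    };
    Resolver background = [](int, int, uint32_t& c) { c = 0x0000FF; return true; };
    composite(objects, background, frame, w, h);
}
```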
I was aiming more at not being able to do the deformation required for facial animation or skeletal animation beyond rigging some prims together. To my knowledge it can't be done with this rendering technique.
Every model created for this is, for all intents and purposes, a piece of rock.
As others have said, this looks like some sort of voxel raycaster. I wrote one a couple of years back that got to about 1 FPS without too much optimization, so it's pretty conceivable that with better optimization and faster hardware you could go much faster. The big problem with building scenes like this is that the datasets get really big really fast. There's a lot of interesting work happening in the area, in particular id's upcoming stuff and the GigaVoxels project.
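For reference, the completely unoptimized version of such a raycaster is just one ray per pixel stepped through a dense grid; the sparse hierarchies used in id's and GigaVoxels' work are what make the serious versions fast. A toy sketch, not anyone's actual engine:

```cpp
#include <cstdint>
#include <vector>

// Toy dense-grid raycaster: one ray per pixel, fixed-step march until a
// filled voxel is hit. Real systems (id's sparse voxel octree work,
// GigaVoxels) traverse a hierarchy instead of stepping a dense grid,
// which is where both the speed and the huge-dataset handling come from.
struct Vec3 { float x, y, z; };

uint32_t castRay(const std::vector<uint8_t>& grid, int n,   // n^3 occupancy grid
                 Vec3 origin, Vec3 dir, float maxDist)
{
    const float step = 0.5f;                                  // coarse fixed step
    for (float t = 0.0f; t < maxDist; t += step) {
        int x = (int)(origin.x + dir.x * t);
        int y = (int)(origin.y + dir.y * t);
        int z = (int)(origin.z + dir.z * t);
        if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) continue;
        if (grid[(z * n + y) * n + x])                        // hit: fake depth shading
            return 0xFF000000u | (uint32_t)(255 * (1.0f - t / maxDist));
    }
    return 0;                                                 // background
}

int main()
{
    int n = 64;
    std::vector<uint8_t> grid(n * n * n, 0);
    grid[(32 * n + 32) * n + 32] = 1;                         // one filled voxel

    // One ray per pixel in a real renderer; here, a single ray down +z.
    castRay(grid, n, {32.0f, 32.0f, -10.0f}, {0.0f, 0.0f, 1.0f}, 200.0f);
}
```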
So is it essentially "indexing" point data like Google indexes content?
As someone mentioned, this makes me think animation or changing stuff in real time would be difficult (since the indexing part of it is slow) but I don't have much background in this area.
No time to find a link, but I recently read an article about Google tweaking their index from a once-a-month task to something they can do in 10 seconds (though obviously they are also throwing hardware at that problem).
There is absolutely nothing new in this movie compared to the one which was submitted a couple days ago. Even the description that supposedly goes into more detail about how it works is cut and paste from the Unlimited Detail site.
This happens with polygonal renders, too - it's called occlusion culling.
The problem is that point clouds can't be easily animated, and forget about applying physics to them, which makes them fairly useless for realtime gaming.
Exactly: notice that no collision detection or animations were demonstrated. I also didn't see anything beyond some shader effects and a single point light.
Exactly. When he mentioned the pixels of a screen and only rendering for them, he made it sound new with his voice, but it's just culling and z-buffering. Point clouds still have to do that.
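For anyone unfamiliar, that per-pixel test is the same one every rasterizer already does: keep a depth per pixel and only accept nearer samples. A sketch (the names are mine):

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// The z-buffer test: keep a depth per pixel and only write a sample if it's
// nearer than what's already there. A point-cloud renderer ends up doing the
// same per-pixel bookkeeping, whatever the marketing says.
struct ZBuffer {
    int w, h;
    std::vector<float> depth;
    std::vector<uint32_t> color;

    ZBuffer(int w_, int h_)
        : w(w_), h(h_),
          depth(w_ * h_, std::numeric_limits<float>::infinity()),
          color(w_ * h_, 0) {}

    void writeSample(int x, int y, float z, uint32_t c) {
        int i = y * w + x;
        if (z < depth[i]) { depth[i] = z; color[i] = c; }   // nearer sample wins
    }
};

int main()
{
    ZBuffer zb(320, 240);
    zb.writeSample(10, 10, 5.0f, 0xFF0000);   // near sample lands
    zb.writeSample(10, 10, 9.0f, 0x00FF00);   // farther sample is discarded
}
```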
The reason we use polys is to eliminate the math on all those individual points and make physics, shadows, and texturing feasible. I can see voxels/point clouds being something in the future, and voxels will probably be it, but it's silly to present static, low-quality work to show off a new graphics technology.
I wonder if it isn't a jab at the tessellation and voxel directions the industry is taking, now that DX11 and OpenGL 4 have tessellation and Carmack is discussing voxels. It will happen, but these things move slowly.
Games and graphics look great nowadays; the need to do something just for the tech is not a real driver. Why would someone invest time in something this static when they could also add interactivity? Until it can do interactivity, real-time lighting, animation, collision detection/physics, etc. better, faster, and stronger, it will not take the graphics market.
Well one thing I was thinking about is this 'laser-scan' technology for modelling real world objects. Perhaps the appeal of a quick, light, easy method of rendering detailed environments (albeit static) will prove very popular. Usually when I see static 3D demos on television (for instance on Grand Designs, a new-build architecture show, or computerized replays in soccer), they meet the bare minimum in aesthetic appeal.
Why is it that these point clouds can't be easily animated? Are the data structures that allow for the fast searching too difficult to update once they've been built, or is there just too much data to process?
(Disclaimer: I'm not an expert.) They're probably using some kind of sparse octree for looking up points. That means they split the space with planes into regions with a high concentration of points and empty ones (then split the full ones further). These kinds of structures are not easily updatable as far as I know, or if you update them in a trivial way (move the boundary), they can lose their fast lookup properties. It's a bit like global illumination: you have to precalculate a lot of values to be able to render the scene in a reasonable time later on.
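To make the lookup side concrete: a point query in a sparse octree is just a short walk from the root, which is why reads are fast, while inserting or moving points means restructuring branches, which is why animation hurts. A simplified sketch, not how Unlimited Detail or GigaVoxels actually lay things out:

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Simplified sparse octree over a cube [0, size)^3: each node is either a
// colored leaf or holds up to eight children (null = empty octant). Lookup is
// O(depth); moving points means tearing down and rebuilding branches, which
// is the awkward part for animation.
struct Node {
    bool leaf = false;
    uint32_t color = 0;
    std::array<std::unique_ptr<Node>, 8> child;
};

const Node* lookup(const Node* node, int size, int x, int y, int z)
{
    while (node && !node->leaf && size > 1) {
        size /= 2;
        int octant = (x >= size ? 1 : 0) | (y >= size ? 2 : 0) | (z >= size ? 4 : 0);
        if (x >= size) x -= size;
        if (y >= size) y -= size;
        if (z >= size) z -= size;
        node = node->child[octant].get();
    }
    return node;                                   // nullptr means empty space
}

int main()
{
    // Hand-build a tiny tree with one colored leaf in the upper-corner octant.
    Node root;
    auto leaf = std::make_unique<Node>();
    leaf->leaf = true;
    leaf->color = 0xFFFFFF;
    root.child[7] = std::move(leaf);

    lookup(&root, 256, 200, 200, 200);             // walks into the leaf
    lookup(&root, 256, 10, 10, 10);                // empty octant -> nullptr
}
```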
It would be interesting to see if anyone can create a hybrid of point-cloud scene + polygon animation... should be enough for most shooters. (Outcast apparently had something like that, but that was pre-"nice graphics")