|
On August 02 2011 09:42 Techno wrote: Either way, everything is still going to be converted to triangles when it's sent to DirectX. No doubt this is just a way of only drawing the right amount of polygons at each magnification. Somebody hasn't been listening. He said that they currently only have a working software solution, which means no OpenGL or DirectX.
|
On August 02 2011 09:18 GranDim wrote: On August 02 2011 01:19 DeLoAdEr wrote: interesting results for a software renderer, but the video only shows static geometry. I wonder if they're even able to handle animated objects...
and they mention the scene consists of more than 21 trillion points; let's say they store 4 bytes for each point (color information, position, etc.), that would be ~84 TB of data, right? So there must be some good compression going on there. ;p The technique seems well suited to compression, however: lots of atoms will be identical, and positions can also be grouped. Compression would get completely screwed by animation though; one voxel moved or changed and you have to recompress the whole block of them (however many voxels you compress together). It seems they do use lots of instanced objects to save RAM though.
Since it doesn't seem like they can animate these voxels, they could use them for the environment and use good old polygons for all the animated things, but their shading sucks and there are no shadows.
I assume they can't do shadows because they have to process a cluster of voxels at a time, decompressing the cluster to work on it and then flushing it from RAM; to do shadows they would have to draw the scene from the light source's point of view, applying the lighting to each affected voxel.
For example, if you have X clusters of voxels in view and Y is the number of CPU operations to decompress each cluster, then drawing the scene without shadows would be X*Y operations. With shadows, where L is the number of lights affecting the scene, you need one extra full pass per light, so it would be something like X*Y*(L+1) operations. Big difference even with just 1 light.
|
On August 02 2011 09:47 GGTeMpLaR wrote: I'd like to see what actual game artists can do with this technology. It looked good, but it didn't feel like 100,000 good, although the elephant was pretty cool.
They should have picked something other than a rock for their example though, because a rock in itself isn't the most interesting object lol. Yea, when I read your post I was thinking they should've done a small animal, maybe a mouse, or even just a cell phone. But their lack of shaders would really hurt the look of basically anything except the most bland thing in the world (a rock).
|
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data assuming 4-byte floats
That is a pretty basic issue that I am not sure anyone can get around. Every point has to have its coordinates stored somewhere, and while you might be able to come up with a more efficient coordinate system you still need to put trillions of easily-accessible points into it.
|
I think in a couple of years, as CPU and GPU power keep rising, a high-end computer will be able to render all these frames.
As of now, I still don't think it could happen. It's like the transition from 2D to 3D: it takes some time and power, and eventually the 3D power will be so high that the number of polygons will start to reach the number of atoms.
|
gl finding an artist that is willing to go into detail down to atomic sizes
|
I think it's to do with their patents; the ones in Australia may work differently from, say, the USA. US patent law is completely bonkers: there's a patent on drawing removable sprites by inverting a bitmap using XOR, which is pretty much the defining property of the operator and something I first did at age 7, but to a US patent court it apparently sounds like a complicated act of invention... It's why we won't sell to the States; they'll be letting someone patent the use of a computer to add numbers together next. But I digress; in this case the silence is due to it all being made up BS.
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data assuming 4-byte floats
And that's without colour, lighting, or texture data. Even if things repeat as often as they clearly did in the island demo, which clearly only used a handful of repeated models, you're talking a sh*t tonne of data to render, and you still have to transform the vertices, so the current vertex count limits still apply. Every triangle you want to render requires its 3 vertices to be multiplied by a 4x4 projection matrix, so even with good object culling there are at least a few trillion multiplications per render cycle right there. I want their computer, because I'm fairly sure you couldn't run that at 20 FPS on current-generation supercomputers. (Except their computer's imaginary, so I'm keeping mine.)
I love how people are fervently defending this despite the fact that it's obviously nonsense. "I don't want to live on this planet anymore." -Professor Farnsworth, in response to the creationist's arguments.
|
Their "unlimited" atoms are comparable to phone companies' "unlimited" data plans. This is not going to work until a major hardware revolution.
|
I believe from the articles on the wiki page that the basis of the technology is a fast and effective occlusion culling algorithm that can work with large numbers of points, hence the "trillions of atoms" claim.
However not all of these points are stored. There is procedural generation involved ("unlimited detail"), at least duplication, which is obvious from the repetitiveness visible in the videos. For example if you are building a roof, you don't need to store all points of all tiles, it is enough to store one tile, and duplicate it should the need arise. Same with trees. It is also possible to store only the differences, for example a deformation of a tile, or missing leaves from a tree.
This culling algorithm might involve data structures (e.g. hash tables) that would make any kind of dynamic scenery expensive or even impossible. This is not as great a limitation as it would seem at first; similar limitations already exist in polygon-based games as well, where there is a distinction between the world (static objects) and actors/characters/deformables (dynamic objects). One of its causes is similar to this case: to provide adequate occlusion culling (BSP); the other is to make precalculated lighting possible (illumination, reflections, etc.), because dynamic occlusion culling and dynamic lights are more expensive in that case as well.
I suspect some interpolation is also going on; with the right kind it is very possible to generate smooth surfaces from a few points by making more, think of something similar to NURBS. And smooth surfaces seem like the author's primary goal.
The rendering does not seem the trivial kind we know from voxel engines. I believe point clouds are used in medical imaging, but I lack knowledge about their rendering process. The author claims that the culling algorithm works on a pixel-by-pixel basis, so rendering and occlusion culling might be the same algorithm.
This technology seems very interesting. Certainly, it has its limitations, but I believe it has its place, especially if it can be combined with polygon representations to create a hybrid engine.
It would be nice if the author dropped his UNLIMITED PR BULLCRAP though.
|
On August 02 2011 01:02 Morfildur wrote: ... (though who knows, when quantum computing is widely accessible... 2111 maybe...) Quantum computing does not apply here. This is not one of the very small class of tasks that will be aided by quantum computers; only a very select class of problems may really see any benefit. And aside from D-Wave's quantum annealing computer (which doesn't seem that useful, frankly), there's no real hope of a quantum computer of any scale anytime soon. Note that the D-Wave machine is $10 million and resides in a dilution refrigerator at 40 mK... not exactly practical for mass consumption.
Also, this tech seems like bunk. I have more hope for ray tracing, and am sad that Intel cancelled Larrabee.
|
Cool to look at, but the processing power needed to animate and re-render destructible terrain is probably 20 years off for consumers without some huge breakthrough. Will make for amazing pre-rendered, non-animated, non-interactive stuff though.
|
On August 02 2011 10:17 Frigo wrote: I believe from the articles on the wiki page that the basis of the technology is a fast and effective occlusion culling algorithm that can work with large numbers of points, hence the "trillions of atoms" claim. However not all of these points are stored. There is procedural generation involved ("unlimited detail"), at least duplication, which is obvious from the repetitiveness visible in the videos. For example if you are building a roof, you don't need to store all points of all tiles, it is enough to store one tile and duplicate it should the need arise. Same with trees. It is also possible to store only the differences, for example a deformation of a tile, or missing leaves from a tree. This culling algorithm might involve data structures (e.g. hash tables) that would make any kind of dynamic scenery expensive or even impossible. This is not as great a limitation as it would seem at first; similar limitations already exist in polygon-based games as well, where there is a distinction between the world (static objects) and actors/characters/deformables (dynamic objects). One of its causes is similar to this case: to provide adequate occlusion culling (BSP); the other is to make precalculated lighting possible (illumination, reflections, etc.), because dynamic occlusion culling and dynamic lights are more expensive in that case as well. I suspect some interpolation is also going on; with the right kind it is very possible to generate smooth surfaces from a few points by making more, think of something similar to NURBS. And smooth surfaces seem like the author's primary goal. The rendering does not seem the trivial kind we know from voxel engines. I believe point clouds are used in medical imaging, but I lack knowledge about their rendering process. The author claims that the culling algorithm works on a pixel-by-pixel basis, so rendering and occlusion culling might be the same algorithm. This technology seems very interesting.
Certainly, it has its limitations, but I believe it has its place, especially if it can be combined with polygon representations to create a hybrid engine. It would be nice if the author dropped his UNLIMITED PR BULLCRAP though.
Probably needs it to drum up financial support and funding, as well as to generate interest.
|
21 trillion, 62 billion, 352 million, 435,100 polygons ... in 480p, oh YouTube.
I kinda don't believe ANYTHING I see on YouTube (and similar sites; this is a rule EVERYONE should follow). If the OP can put up a legit link, like from CNET, where I can read about this, that would be cool.
|
On August 02 2011 10:07 mostevil wrote: I think it's to do with their patents; the ones in Australia may work differently from, say, the USA. US patent law is completely bonkers: there's a patent on drawing removable sprites by inverting a bitmap using XOR, which is pretty much the defining property of the operator and something I first did at age 7, but to a US patent court it apparently sounds like a complicated act of invention... It's why we won't sell to the States; they'll be letting someone patent the use of a computer to add numbers together next. But I digress; in this case the silence is due to it all being made up BS. On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data assuming 4-byte floats
And that's without colour, lighting, or texture data. Even if things repeat as often as they clearly did in the island demo, which clearly only used a handful of repeated models, you're talking a sh*t tonne of data to render, and you still have to transform the vertices, so the current vertex count limits still apply. Every triangle you want to render requires its 3 vertices to be multiplied by a 4x4 projection matrix, so even with good object culling there are at least a few trillion multiplications per render cycle right there. I want their computer, because I'm fairly sure you couldn't run that at 20 FPS on current-generation supercomputers. (Except their computer's imaginary, so I'm keeping mine.) I love how people are fervently defending this despite the fact that it's obviously nonsense. "I don't want to live on this planet anymore." -Professor Farnsworth, in response to the creationist's arguments.
Yeah, there are some ridiculous patents out there for seemingly basic stuff. Absolute value:
int const mask = v >> (sizeof(int) * CHAR_BIT - 1);  /* all 1 bits if v < 0, else all 0 */
r = (v ^ mask) - mask;  /* branchless abs(v) */
|
I always said, one day games will look better than real life. I guess we're getting there!
|
pretty sick
the guy's voice makes it sound like a satire though haha
|
On August 02 2011 01:30 Nacl(Draq) wrote: This is very interesting. I hope they are able to implement this in the gaming industry in the next 5 years. Would be nice to see a "nonfictional" human scanned in. Even if it is just static objects, think of the things they can do with games about the world. You could go into a game and visit scenery in Africa; once they come out with full animation this will be a big leap in the MMORPG world, where people could escape into a completely realistic fake world.
Good luck to them.
let the getting-fired-from-irl-job-cause-I-spent-too-much-time-on-WoW begin (on a mass scale)
|
Just saw it about an hour ago. It does look very impressive, but as the man said, the tech is far from done. I would imagine it would tax hardware much more to animate those atoms, but then again I don't really know anything about it. I'd find it hard to believe this tech company would invest so much in it if a few animations would make it flop.
|
Should get some funding from nVidia... see what comes from it. If nVidia or AMD decided to fund it and it actually worked, then the one that chose to do so would pretty much instantly monopolize the graphics market.
|
interesting, but that wasn't really proof, so we just get to wait and see.
|