|
Guys, please... Don't let yourselves be fooled.
This "technology" is almost 2 years old (maybe even way older), it has already had many investors, and they promised games using this technology by the start of 2011... Do you see any games using this?
No. If you ask me, the technology isn't working right now, and it will take many, many more years until it is actually viable. And by that time people will have found better solutions, as it is questionable whether the technology they invented works for real-time animation etc. (Remember: two years ago they didn't have any animation because "they are not artists".)
|
I don't really get most people's arguments against this (if it's true).
"This will demand insane hardware" - No it won't. The whole point of the technology is that it isn't all that demanding, it only renders the exact points you see on screen, not every point in every object you see.
"It can't be utilized with physics or animation" - Yes it can, polygons aren't what make animation and physics possible. All you need is to connect an action to an object. A game having twice the polygons of another game doesn't mean it has twice the physics or animation of the other game, it just looks better.
The only argument that makes sense is the one concerning memory. The data has to be stored somewhere: the shape of each object, and so on. You can procedurally generate dirt and all that stuff, but for a game, you won't have a very fun time if everything is generated.
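To make the procedural point concrete, here is a minimal sketch (my own illustration, nothing from their demo; all names are hypothetical) of why generated detail costs no storage: the detail is a pure function of position, so nothing is saved to disk, but by the same token it can't carry hand-authored content.

```cpp
#include <cstdint>
#include <cstdio>
#include <initializer_list>

// FNV-1a style hash of a 3D coordinate: the "detail" at a point is derived,
// not stored, so an entire terrain of dirt costs zero bytes on disk.
static uint32_t hash3(int32_t x, int32_t y, int32_t z) {
    uint32_t h = 2166136261u;
    for (uint32_t v : {static_cast<uint32_t>(x), static_cast<uint32_t>(y),
                       static_cast<uint32_t>(z)}) {
        h ^= v;
        h *= 16777619u;
    }
    return h;
}

// Density of procedural "dirt" at a point, computed on demand.
float dirtDensity(int32_t x, int32_t y, int32_t z) {
    return static_cast<float>(hash3(x, y, z) & 0xFFFF) / 65535.0f;
}

int main() {
    // Deterministic: the same coordinates always yield the same detail,
    // which is exactly why purely generated worlds feel samey and authorless.
    std::printf("%f\n", dirtDensity(10, 4, -3));
    return 0;
}
```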
|
WOWOWOW! This is really cool.
After finishing the video, though, some questions came to mind (as some of you have pointed out).
Namely: WTF are they going to do about a) animation (dynamic anything), b) dynamic lighting, c) model transformation and/or destruction, d) collision detection, e) other physics?
Not only are these undoubtedly issues on their own, but because the detail is so massive, the already terribly difficult job of getting accurate physics and lighting and destructible models becomes SO MANY MORE TIMES harder to do! Technically speaking, 10,000 times more detail would mean 10,000 times more difficulty in doing lighting and physics, etc. A system could probably be developed which simplifies the models for such purposes, but that would still be a really extreme challenge to overcome.
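That "simplified models for such purposes" idea is a real, standard technique, for what it's worth. A bare-bones sketch (my own, not anything from their engine; all names are mine) of collapsing a point cloud to a cheap collision proxy:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Axis-aligned bounding box: a drastically simplified stand-in for a
// point-cloud model, so collision tests don't touch billions of points.
struct AABB { Vec3 min, max; };

// Build the proxy in one pass over the points (assumes a non-empty cloud).
AABB buildProxy(const std::vector<Vec3>& points) {
    AABB box{points.front(), points.front()};
    for (const Vec3& p : points) {
        box.min = {std::min(box.min.x, p.x), std::min(box.min.y, p.y), std::min(box.min.z, p.z)};
        box.max = {std::max(box.max.x, p.x), std::max(box.max.y, p.y), std::max(box.max.z, p.z)};
    }
    return box;
}

// Physics then only needs cheap box-vs-box tests, independent of model detail.
bool overlaps(const AABB& a, const AABB& b) {
    return a.min.x <= b.max.x && a.max.x >= b.min.x &&
           a.min.y <= b.max.y && a.max.y >= b.min.y &&
           a.min.z <= b.max.z && a.max.z >= b.min.z;
}
```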
Until they solve those problems, it seems to me that this technology would be most useful for artists using 3D programs. If a modeling program could use this technology natively or pseudo-natively, it would save so much on processing power in workstation graphics, since most of what they deal with is static modeling. This use would still require the ability to modify these models in their voxel form, though, as opposed to just running them through a polygon converter.
|
To me, it seems like applying the concept of texture-tiling to meshes. The island and the weird pyramids are all made of repetitious objects. The concept of tiling bigger meshes out of "atomically" smaller meshes doesn't seem like it'd be beneficial if you want to make a world that looks appropriately random and natural.
The idea that you could actually have "infinite detail" in the sense that you could have as many different objects, each with its own different shape and textures, no two blades of grass the same, with physics all applied where needed, with lighting and shadows of perfect sharpness and geometry, blah I lost my train of thought. It's a really bad claim to be making, this "infinite detail".
Polygons, in the sense of rendering a picture with them, already allow us "infinite detail" in the sense that their demo showed us. You can smooth out polygon curves to perfect roundness, and you can see it in CGI all the time, sometimes even in games, all of which this demo conveniently never displays. The problem comes in putting these shapes into a "world" that you interact with. Even if their technology is more efficient than the simple concept of meshes, it is most certainly limited, and cannot be infinite. Yes, you only have to render what the camera shows (all games work that way to one degree or another; that's hardly a "new" thing), but infinite detail within the camera's frame would still require staggering (note: infinite) amounts of data and computation.
While I'm sure I don't understand the concept completely, I think I understand it enough to say it's not half as practical as it claims to be.
|
Funny how so many people in this thread are saying that this simply isn't possible in real time. This is true if you consider how graphics engines have traditionally worked, but we have NO IDEA how their engine works. So saying "bah, this can't be possible because other engines work like X" is kinda silly and close-minded.
I'm not saying that it isn't a hoax, but until you understand exactly how their engine works, don't be so skeptical.
|
They say they have invented insanely good graphics, but the video runs in 480p.
|
I believe the technology is called Sparse Voxel Octree (SVO) and it's not that new. id Software has been promoting it for the last couple of years, just not in the sensationalist way like these guys. You can see some of id's demos on YouTube.
The level of detail that you can achieve with this is indeed amazing, and one of the main features is that you don't have to keep all the data in memory to draw it. It is very easy to determine exactly which data block is required. This, together with a clever compression algorithm, means that it can be streamed even from a DVD/Blu-ray and still produce very good results. This allows for enormous models, on the order of tens of gigabytes.
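For reference, a bare-bones sketch of why locating the needed data block is cheap in an octree (my reconstruction of the general SVO idea, not id's or Euclideon's actual code; all names are mine): each level splits space in half per axis, so the child holding a point falls out of three comparisons.

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };

// One octree node: 8 children, each covering half the parent's extent per axis.
struct OctreeNode {
    OctreeNode* children[8];  // null where space is empty -- the "sparse" part
    uint8_t color[3];         // payload for leaf voxels
};

// Descend toward the voxel containing `p`. Each step is three comparisons,
// so finding the data block for any point is O(depth), not O(model size).
OctreeNode* locate(OctreeNode* node, Vec3 center, float halfSize, Vec3 p, int maxDepth) {
    for (int d = 0; d < maxDepth && node; ++d) {
        int idx = (p.x > center.x ? 1 : 0)
                | (p.y > center.y ? 2 : 0)
                | (p.z > center.z ? 4 : 0);
        halfSize *= 0.5f;
        center.x += (idx & 1) ? halfSize : -halfSize;
        center.y += (idx & 2) ? halfSize : -halfSize;
        center.z += (idx & 4) ? halfSize : -halfSize;
        node = node->children[idx];
    }
    return node;  // the block to draw (or stream in from disk if not resident)
}
```

Because each subtree maps to a contiguous chunk of the file, only the subtrees near the camera ever need to be resident, which is what makes the DVD/Blu-ray streaming plausible.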
However I think id said that they were going to use this only for landscapes and still use polygons for animated objects.
|
The lack of technical explanations and super hyping of their technology is fishy.
|
"It can't be utilized with physics or animation" - Yes it can, polygons aren't what make animation and physics possible. All you need is to connect an action to an object. A game having twice the polygons of another game doesn't mean it has twice the physics or animation of the other game, it just looks better.
No, in fact: polygons are what make animation possible. Polygons, meshes of vertices, are deformable. They're positions in space which are transformed by linear transformations. Because a triangle is always planar (unless the three points become collinear, at which point it becomes invisibly thin), triangle meshes can represent closed surfaces. So you can deform a mesh of triangle vertices and, as long as you don't break the mesh with your transforms, you will know that the mesh remains closed.
For "characters", it's even more complex. What you have is a number of positions over a "skeleton", where each "bone" in the skeleton is a transformation. Some of those positions are transformed by multiple transformations; this is what allows for smooth weighting across a complex, deformable mesh.
The technology they showed cannot do this. Indeed, it's not even clear if it can render multiple static objects in different positions each frame (ie: just sliding things around). Without deformation, you pretty much give up on humans, cloth, etc. And while some games could certainly get by without people, not all or most of them could.
b) dynamic lighting
Forget dynamic lighting; that's easy (assuming that each position has a normal and reasonable lighting parameters, and can have a user-defined shader program executed to generate its color). Shadows are hard. Notice how the shadows in their video are pretty much just slightly darkened patches under trees and such. There's nothing like actual shadow mapping going on here.
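To show why the lighting half really is easy under that assumption, per-point diffuse shading is a single dot product (a generic sketch, names mine):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Lambert diffuse: brightness is the cosine of the angle between the surface
// normal and the light direction (both assumed unit length). With a normal
// stored per point, dynamic lighting costs one dot product per visible point.
// Shadows are the hard part: they require knowing what geometry sits between
// the point and the light, which this per-point math knows nothing about.
float lambert(const Vec3& normal, const Vec3& lightDir) {
    return std::max(0.0f, dot(normal, lightDir));
}
```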
The shadows that they say they're working on look like pre-baked Quake-1 style shadows. Sure, they have more detail than Quake 1 shadows, but they're still pre-baked. Will there be proper shadowing for characters that pass under the shadow?
Oh that's right; this technology only works for static scenes.
Also, let's not forget anisotropic filtering and antialiasing. The YouTube compression hides many sins, but I seriously doubt their method can antialias very well. Anisotropic filtering only works with textures, so they're going to have to put together one hell of an antialiasing package to compensate.
but we have NO IDEA how their engine works.
We do have an idea how it works; their presentation says as much. It is essentially a complex query algorithm over a database of points that serves as a combination of frustum culling and LODing. This gets a field of visible points, which they write to an image.
Once you start moving points around, you now need to incorporate transforms. And since they have "infinite detail", that's a lot of transformation of points. You can't use the database query to cull points because until you've finished the transformation, you can't know which points might be visible. Your transform has to be done pre-culling. So you're going to waste a lot of time transforming points that aren't visible.
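A sketch of that ordering problem (hypothetical point-cloud code, assuming a plain plane-test frustum; all names are mine): the transform has to run over every point before visibility can be decided.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // frustum face: n.p + d >= 0 means inside

bool insideFrustum(const Vec3& p, const std::vector<Plane>& frustum) {
    for (const Plane& f : frustum)
        if (f.n.x*p.x + f.n.y*p.y + f.n.z*p.z + f.d < 0) return false;
    return true;
}

// The catch for animated point clouds: the static-scene database query can't
// be used, because visibility depends on the *transformed* position. So every
// point gets transformed first -- including all the ones that turn out to be
// off-screen.
std::vector<Vec3> animateAndCull(const std::vector<Vec3>& points,
                                 Vec3 (*boneTransform)(const Vec3&),
                                 const std::vector<Plane>& frustum) {
    std::vector<Vec3> visible;
    for (const Vec3& p : points) {
        Vec3 moved = boneTransform(p);      // paid for ALL points...
        if (insideFrustum(moved, frustum))  // ...before culling can reject any
            visible.push_back(moved);
    }
    return visible;
}
```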
|
On August 02 2011 13:24 whatthefat wrote:
On August 02 2011 09:55 nemo14 wrote:
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435,100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes ≈ 689 terabytes worth of geometry data, assuming 4-byte floats.
That is a pretty basic issue that I am not sure anyone can get around. Every point has to have its coordinates stored somewhere, and while you might be able to come up with a more efficient coordinate system, you still need to put trillions of easily-accessible points into it.

Well, not if there are certain constraints. For instance, the positions of the atoms that make up a rock can all be defined in terms of 6 parameters: the 3D coordinates of one atom in the rock, and the three angles of the rock's rotation. Only if the rock is broken into smaller pieces do more parameters need to be defined. The same goes for all contiguous objects, and it may be possible to do other clever compressions. For example, you could define whole regions of the island that are not currently being interacted with in terms of 6 coordinates. Or you could make approximations at different spatial scales, e.g., only visualize 1 in a million atoms for objects smaller than a certain scale. I don't know exactly what they're doing, but I think there are many such clever algorithms that could be used to achieve compression.
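As an aside, here is a minimal sketch of the instancing idea described above (my illustration, assuming rigid, unbroken objects; all names are hypothetical): the detailed data is stored once, and each placed copy costs six numbers.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// The detailed point data is stored ONCE per unique model...
struct PointModel {
    std::vector<Vec3> points;  // could be millions of "atoms"
};

// ...and every placed copy costs only six parameters, as described above:
// three for position, three for orientation (Euler angles here for brevity).
struct Instance {
    const PointModel* model;
    Vec3 position;
    Vec3 rotation;
};

// A million identical rocks: one model's worth of geometry plus
// 6 floats per rock, instead of a million full copies.
std::vector<Instance> scatterRocks(const PointModel& rock, int count) {
    return std::vector<Instance>(count, Instance{&rock, {0, 0, 0}, {0, 0, 0}});
}
```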
Yes, but if this were true, then it would eliminate the point of rendering the rock in "atoms" in the first place.
The whole idea is that if my player character fires a bullet that punches a hole in a leaf on one branch of one tree, then the hole persists, because it's not a polygon, it's a million little atoms.
If I damage the rock in some way, it needs to adjust in real time and persist that way, because it's made up of millions of little "atoms". The WHOLE PURPOSE of making a game with this ridiculous fictitious tech would be to do something not doable with polygons. If you're just going to make a rock and keep its model and orientation, then that can be done with current polygon technology.
|
Well, there's zero animation so it's difficult to gauge its true potential. And goddamn his reading voice / strange Australian accent freaked me out.
|
Why are they comparing their graphics with the graphics of 2006 games???? This sounds more and more fishy to me. Not much hope, but it would be nice if they were actually able to do this.
|
My guess is that, as a start, they won't replace the old polygonal way of creating things with this new way. I think they will somehow blend them together, keeping some things polygonal and making other stuff, such as environmental stuff, out of the kind of point clouds we see in the video.
|
Yeah it looks nice on paper but what happens when you try to move something with a billion particles? Yeah, nothing. In the video everything is static.
|
On August 02 2011 19:09 NicolBolas wrote:
[…] Once you start moving points around, you now need to incorporate transforms. And since they have "infinite detail", that's a lot of transformation of points. You can't use the database query to cull points because until you've finished the transformation, you can't know which points might be visible. Your transform has to be done pre-culling. So you're going to waste a lot of time transforming points that aren't visible.

The thing is, you're still thinking in terms of polygons etc. Moving things around should actually be insanely easy. Here's a rock object, move it x points to the west. Cool, easy as crap, done. That's not an animation though, it's just moving an object. So let's say we have a dude and we want to move his arm.
In polygon-world, you would do transforms, but here, you could have a completely different system. Say you have a skeleton in the arm, and you have a shoulder object and a biceps object. All you do is rotate and move the biceps and shoulder object to follow the skeleton, and use an algorithm to "fill in" areas with "atoms", similar to how many people do flash cartoons, using separate objects for movable parts.
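A sketch of that rigid-parts approach (my guess at one possible scheme, not anything Euclideon has shown; all names are mine): each body part is an undeformed point cloud posed by its bone's transform.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Each body part is a separate, rigid point-cloud object...
struct BodyPart {
    std::vector<Vec3> points;  // e.g. the biceps "object"
};

// ...posed by the transform of a single bone: rotate about the joint, then
// translate. No individual point is deformed; whole clouds slide and spin,
// like the separate movable parts of a flash cartoon.
Vec3 poseAroundJoint(const Vec3& p, const Vec3& joint, float angleY, const Vec3& offset) {
    float c = std::cos(angleY), s = std::sin(angleY);
    Vec3 r{p.x - joint.x, p.y - joint.y, p.z - joint.z};  // move joint to origin
    Vec3 rot{c*r.x + s*r.z, r.y, -s*r.x + c*r.z};         // rotate about the Y axis
    return {rot.x + joint.x + offset.x,
            rot.y + joint.y + offset.y,
            rot.z + joint.z + offset.z};
}
```

The visible cost of this scheme is the seams: without per-point weighting, something still has to "fill in" the elbow and shoulder gaps, which is the hard part the post glosses over.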
Don't go thinking that just because something is done a certain way in a polygon-based game, it has to be done exactly the same way in any system. Creating a 2D game and then a 3D game shows how massive the difference can be in creating even a minor effect.
|
It may not be completely bogus; they may indeed render VERY detailed environments in real time, maybe even animate them. What the video seems to imply is that you can make a game like Crysis and just add detail everywhere, while maintaining all the other effects like dynamic lighting, multiple different models etc; you can clearly see this when they juxtapose their life-like vines with the simple flat surfaces of standard game environments. There are limits to how much you can compress data without losing anything, and it clearly shows in their presentation: just a few models copied everywhere, structures that are detailed but formed from box-shaped elements, and they are back to square one in terms of rendering believable water, or any object with no coherent structure for that matter. What makes this interesting is that there are tons of people like Carmack who will find a way to gradually introduce similar systems into their engines, overcoming the mentioned obstacles in the process and limiting the objects rendered with this technique, in order to actually render something more than the same elephant over and over. It clearly won't be a revolution, and no engine will use just this method, but I'm looking forward to seeing this introduced on a small scale alongside standard polygons.
|
On August 02 2011 19:48 valaki wrote: Yeah it looks nice on paper but what happens when you try to move something with a billion particles? Yeah, nothing. In the video everything is static.
It is already being moved, though. How do you think the "camera" works to build a projection on your screen? It rotates the world around you. Adding another transformation step to support simple animation is pretty easy.
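In code terms (a generic sketch, not their engine; all names are mine), the per-frame "rotate the world around you" step and an extra per-object move are the same kind of operation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// The "camera" never really moves: each frame, every visible point is
// transformed into the camera's frame of reference (offset by the camera
// position, then rotate about Y by -yaw).
Vec3 worldToCamera(const Vec3& p, const Vec3& camPos, float camYaw) {
    Vec3 rel{p.x - camPos.x, p.y - camPos.y, p.z - camPos.z};
    float c = std::cos(-camYaw), s = std::sin(-camYaw);
    return {c*rel.x + s*rel.z, rel.y, -s*rel.x + c*rel.z};
}

// Supporting simple object movement is one more transform of the same kind,
// applied before worldToCamera.
Vec3 objectToWorld(const Vec3& p, const Vec3& objPos) {
    return {p.x + objPos.x, p.y + objPos.y, p.z + objPos.z};
}
```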
|
On August 02 2011 19:24 Crushgroove wrote:
[…] The WHOLE PURPOSE of making a game with this ridiculous fictitious tech would be to do something not doable with polygons. If you're just going to make a rock and keep its model and orientation, then that can be done with current polygon technology. Missing the point.com
No, the point is that it's faster. Current games get better looking by raising the polygon count; compare an old FPS to a new one. Character models used to be a few hundred polygons; now they are several thousand. This is straining as shit on computers and 3D cards. The point of this idea is that you can get FAR better detail than by multiplying the polygons hundreds of times, without it being more straining on the hardware.
Shooting a leaf would probably do nothing, just like in a polygon game, since there won't be enough memory to have thousands of leaf objects on every tree in a forest.
|
On August 02 2011 19:48 valaki wrote: Yeah it looks nice on paper but what happens when you try to move something with a billion particles? Yeah, nothing. In the video everything is static.

It's not harder for a computer to move a billion particles than one particle, since it isn't going to actually move every single particle one at a time; it just calculates the new position. A 2D analogy works: what's faster for the computer to render in a 2D game, a 1-pixel red box moving over the screen, or a 128x128 sprite of many colors? Answer: it doesn't matter, unless the game is specifically programmed to optimize either solution. The game will still redraw the whole scene with every sprite every frame; it doesn't matter if the big sprite is at position 0,0 in one frame and 453,621 in the next.
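The 2D analogy in code (a toy software blit of my own; bounds checks omitted): the work is proportional to the sprite's area, and the destination position is just an offset into the frame buffer.

```cpp
#include <cstdint>
#include <vector>

// A toy software blit: copy a sprite into a frame buffer. The destination
// position (dx, dy) only offsets the write address; the work done is
// spriteW * spriteH pixels no matter WHERE the sprite lands.
// (Assumes the sprite lies fully inside the frame.)
void blit(std::vector<uint32_t>& frame, int frameW,
          const std::vector<uint32_t>& sprite, int spriteW, int spriteH,
          int dx, int dy) {
    for (int y = 0; y < spriteH; ++y)
        for (int x = 0; x < spriteW; ++x)
            frame[(dy + y) * frameW + (dx + x)] = sprite[y * spriteW + x];
}
```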
|
On August 02 2011 18:45 tainted muffin wrote: They say they have invented insanely good graphics but the video runs in 480p
Here is the HD version. Just for you.
|