I stumbled on a YouTube video today and I was really impressed by what they explained and how they explained it. My knowledge of the subject is very limited, and I'd like to know if I'm getting fooled or if this really is the future of graphics in games. + Show Spoiler [youtube vid] +
What they say in the video is that game graphics are made of polygons, lots and lots of them: the more there are, the better the quality. The people at this little company claim to have discovered a way to use 'atoms' instead of polygons, making game environments way smoother and prettier. Normally this would use too much CPU, but apparently they found a way around that. A year ago they came out with this idea and it was dismissed as a hoax, but seeing that they've returned with it this year, is it actually true?
I think it would be a big step in video gaming if we could make graphics as realistic as that, especially for shooter games like CoD and Crysis. I don't really see other types of games (except perhaps racing games) using that kind of graphics.
What I don't understand though, is how they can 'import' rocks/cacti from reality. Do they just scan and figure out where the atoms are to recreate it in their simulation? It looked really impressive though.
It looks awesome. But everything in the demonstration was standing still. Wouldn't it tax the hardware a lot more when you try to animate stuff? I don't know too much about it myself, but I would imagine that rendering still objects is much easier than rendering moving ones.
The claims of 'unlimited' are just stupid, especially from someone who claims to work in the technology industry. Like WniO said, this could be used to create great pre-rendered locations, but it would have a lot of shortcomings when it comes to animation and physics. That pretty water? Just a flat surface. Those little balls of dirt or leaves on the palm tree? Ain't budging an inch. Polygons will always be needed, at the very least for collision detection, until processors get to the point of stupid where polygons are so small they might as well be atoms.
I also find it kind of disingenuous that one of its claims is that it only renders what is visible on the camera, considering that's exactly how current game engines work.
Can't listen to the video atm since I'm at work, but it reminds me of the old voxel technology (old meaning: DOS games used it) => http://en.wikipedia.org/wiki/Voxel . The first game that used it, IIRC, was "Comanche" (http://en.wikipedia.org/wiki/Comanche_series)
The problem with infinite detail is that it requires infinite disk/memory space, while polygon technology needs only three vertices per triangle, which reduces the required memory a lot. This is why we will still have polygon-based games for some time (though who knows, once quantum computing is widely accessible... 2111, maybe...)
I remember when the first vid was released a year ago. At the time, everyone was like "wtf no wai". Today, I mean, sure. The breakthrough had to happen sooner or later.
They used the word "unlimited" a little too often for me to buy it. Unless they expect people to have unlimited computer resources, there is no chance their technique can deliver unlimited anything.
If they built their 3D models from small particles (atoms), then the stored representation of those models would need to record the position of each particle. Unless we have unlimited storage space, we will quickly run into storage problems.
I also have a hard time imagining how the 3D engine can handle an unlimited number of objects with a CPU/GPU only able to perform a limited number of operations.
This looks good, but I can't help but wonder about the capability for realistic physics with this. How are they going to do collision detection for example, without requiring a supercomputer?
The amount of "atoms" they have in that still image is probably hard to render as it is; if it were put to use in a game, it would probably take a supercomputer to fully animate everything. However, it's great that such a technology exists, because one of these days domestic computers will be able to run games using this tech.
Interesting results for a software renderer, but the video only shows static geometry. I wonder if they're even able to handle animated objects...
And they mention the scene consists of more than 21 trillion points. Let's say they store 4 bytes for each point (color information, position, etc.); that would be around 84 TB of data, right? So there must be some good compression going on there. ;p
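DeLoAdEr's back-of-the-envelope figure is easy to sanity-check (the 4-bytes-per-point number is his guess, not anything the company has published):

```python
# Rough storage check for the claimed scene size. The 4 bytes/point
# is an assumed (and very optimistic) figure; a real point needs more
# than that for position plus color.
points = 21 * 10**12        # "more than 21 trillion points"
bytes_per_point = 4         # hypothetical lower bound
total_bytes = points * bytes_per_point

print(total_bytes / 10**12, "TB")   # 84.0 TB -- same ballpark as above
```

Even at this unrealistically small per-point cost, the uncompressed scene would dwarf any 2011-era disk, which is why heavy instancing and compression are the only plausible explanations.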
If they manage to get atom animation and real-time lighting effects, this is going to be really good. I wonder how this technology translates into CPU and memory usage.
On August 02 2011 01:07 Thrill wrote: Best proof that this is a hoax is the voice they used for the video, that alone is enough to tell you they're not serious about it.
Are you serious? I'm astonished. It's called an Australian accent. Not to mention, the company can't control what the voice of their CEO sounds like. If it were a hoax (and I'm not saying it is or it isn't), it certainly has got nothing to do with the voice.
This is very interesting. I hope they are able to bring this to the gaming industry in the next 5 years. It would be nice to see a real human scanned in. Even if it's just static objects, think of the things they could do with games about the world: you go into a game and visit scenery in Africa. Once they come out with full animation, this will be a big leap for the MMORPG world, where people could escape into a completely realistic fake world.
My knowledge of the subject is very limited and I'd like to know if I'm getting fooled or if this really is the future of graphics in games.
You are being fooled. Games don't have a limited number of polygons because it is hard for artists to draw polygons, but because it is hard to RENDER lots of polygons.
Their island has 21 trillion atoms... At 1 byte each, you'd need a 21TB graphics card...
These don't appear to be voxels, Gizmo, but point clouds.
That's beside the point though -- while skeptical, I will be thrilled if these guys turn out a new revolution in 3D rendering techniques and efficient algorithms.
As for Gyth: modern rendering techniques use something called "culling", which prevents the GPU from rendering anything that isn't on screen and within a certain draw distance.
That, coupled with an efficient way to draw these 'atoms' in batches to the world, could make this quite feasible.
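A minimal sketch of the culling idea (every name and number here is invented for illustration): keep a point only if it's within the draw distance and inside the camera's field of view.

```python
import math

# Toy view culling: a point is "visible" if it is in front of the
# camera, within the draw distance, and inside the field of view.
def visible(point, cam_pos, cam_dir, fov_deg=90.0, far=1000.0):
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    dist = math.sqrt(dx*dx + dy*dy + dz*dz)
    if dist == 0 or dist > far:        # at the camera, or beyond draw distance
        return False
    # cosine of the angle between camera forward and direction to the point
    dot = (dx*cam_dir[0] + dy*cam_dir[1] + dz*cam_dir[2]) / dist
    return dot >= math.cos(math.radians(fov_deg / 2))

cam = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, 1.0)
print(visible((0, 0, 10), cam, forward))    # straight ahead: True
print(visible((0, 0, -10), cam, forward))   # behind the camera: False
```

The catch the thread keeps pointing out: deciding visibility for trillions of candidate points is itself expensive unless there is some hierarchy over them.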
This video is obviously made for people who don't understand as much about computers as the average TL poster seems to. That said, it would still be awesome if this can actually work the way they say it can. It makes you wonder, though: how far could the "real" graphics go? Could it get to the point where they are shooting at unsuspecting actors inside of scanners to get a realistic death scene?
Also Oh my God every time the guy says artist or total I feel like punching a wall. I'm Australian. That may be an Australian accent but that doesn't change the fact that he's butchering his speech. What the fuck ARR TTISTTT
It's kinda silly stating something is unlimited in the IT industry. As stated above, it's fake; it just uses voxels. It's never unlimited.
In their example they stated their level consists of several billion of those voxels; they need to be stored and rendered somewhere. Obviously occlusion culling will make sure you only render what you see, but the occlusion culling process itself would take far longer with that many points.
On August 02 2011 01:42 Maxd11 wrote: This video is obviously made for people who don't understand as much about computers as the average TL poster seems to. That said, it would still be awesome if this can actually work the way they say it can. It makes you wonder, though: how far could the "real" graphics go? Could it get to the point where they are shooting at unsuspecting actors inside of scanners to get a realistic death scene?
Doesn't really seem probable. They have avoided doing that in movies for quite some time.
Taking it straight from the video, they are claiming that you don't need a supercomputer to run that level of graphics. Seems like a different engine, or even a different algorithm for doing graphics.
I'm a bit skeptical about this. To me, it just seems like replacing a few flat surfaces with a lot more smaller flat surfaces ("atoms"). Maybe I'm missing something, but in principle, it doesn't seem all that far from the way we actually do 3D graphics today apart from the higher level of detail which would probably take a lot more resources to create and a lot more power to render.
I could be wrong, but the entire direction of this seems contrary to where the gaming industry is looking to go in terms of graphics which seems to be finding ways to procedurally generate content rather than mapping out entire worlds in this kind of detail.
Honestly, I have big trouble believing what they claim to do. The figures are totally out of this world, and the engineers at Nvidia/ATI work really hard on related topics. It would be surprising if they didn't know anything at all about this technology.
One thing that's interesting to me, and lends some credence to their cause, is that they claim their technique is based on a 'search' algorithm.
If they've found a way to reduce their problem to a form where search algorithms do the heavy lifting... that's huge.
Put in layman's terms, the most efficient search algorithms have properties along the lines of: "Oh, you need me to look through a billion possibilities? Let me get you that answer in about 30 steps."
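That "billions of possibilities in a handful of steps" behavior is ordinary logarithmic search; a minimal, self-contained illustration:

```python
# A balanced search over N sorted items takes about log2(N) probes,
# so a billion candidates need only ~30 -- not a billion.
def binary_search_steps(items, target):
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1_000_000))               # a million sorted values
print(binary_search_steps(data, 999_999))   # 20 probes; log2(1e6) is ~20
```

Whether Unlimited Detail actually reduces point lookup to something like this (e.g. descending a spatial tree per pixel) is exactly the unverified claim, but it is the only reading under which the "search" talk makes sense.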
On August 02 2011 01:47 Slaytilost wrote: It's kinda silly stating something is unlimited in the IT industry. As stated above, it's fake; it just uses voxels. It's never unlimited.
In their example they stated their level consists of several billion of those voxels; they need to be stored and rendered somewhere. Obviously occlusion culling will make sure you only render what you see, but the occlusion culling process itself would take far longer with that many points.
From what I've understood, this is not voxel rendering but point-cloud rendering. I agree, though, that this whole "unlimited" thing is very silly.
I suppose (assuming it's not fake) they've come up with good hierarchization techniques to make the culling and rendering process fast, as well as some good compression and streaming methods to handle this insane amount of point data.
"The word you're looking for is Voxels, and its a technology that's as old as polygons. You've got nothing new here.
You're passing off basic fundamental well-known 3d rendering techniques as something new and fascinating by making up your own words for decades old technology and fluffing it up with... fluffy meaningless phrases like "UNLIMITED POWAR!!!".
Oh well, have fun fleecing your investors then disappearing with the money as all modern snake oil salesman do. NinjaSeg 2 months ago 20 "
I think this is a niche algorithm that is only advantageous for really fractal surfaces. For normal applications, especially indoor levels, I think normal ray tracing with textures would be much more memory efficient.
On August 02 2011 01:19 DeLoAdEr wrote: Interesting results for a software renderer, but the video only shows static geometry. I wonder if they're even able to handle animated objects...
And they mention the scene consists of more than 21 trillion points. Let's say they store 4 bytes for each point (color information, position, etc.); that would be around 84 TB of data, right? So there must be some good compression going on there. ;p
The objects are all referenced. All the trees are the same, for example, so you need the tree geometry once and then just one anchor point for every other tree. That limits the amount of information. But I don't quite see how they can render the scene, though, as that would actually require all the geometry, shader, etc. information to be taken into account. That amount of information just can't be handled by current computers. Not to mention collision detection, physics, adaptive lighting and so on.
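The referencing Aratak describes is what engines usually call instancing. A toy sketch (all sizes invented) of why a scene full of identical trees stays small:

```python
# Instancing sketch: heavy geometry is stored once, and each tree in
# the world is just an anchor position referencing the shared model.
tree_points = [(0.0, 0.0, 0.0)] * 100_000                        # one shared tree model
forest = [(10.0, 0.0, 5.0), (20.0, 0.0, 7.0), (35.0, 0.0, 2.0)]  # per-tree anchors

# Without instancing you re-store the full model per tree; with it,
# you pay for the model once plus one anchor per instance.
cost_without_instancing = len(forest) * len(tree_points)
cost_with_instancing = len(tree_points) + len(forest)
print(cost_without_instancing, cost_with_instancing)   # 300000 vs 100003
```

This is also why the demo's wall-to-wall clones matter: instancing explains the storage, but it says nothing about a world where every object is unique.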
It appears to be rendered in real time, judging by the way the camera moves. They also answered that you don't need a supercomputer to run it... There, trashed all the arguments in the thread.
On August 02 2011 00:54 WniO wrote: the problem with this is they can't render it in real time, or animate for that matter.
Wasn't that the whole point of the video? Weren't they saying that they CAN render it in real time?
no
Okay, I'm just going to put this out there:
NOTHING is going to come from this. You've already been given the technical reasons why; now you have to understand that Australia's game industry is virtually non-existent. We don't have the industry as a country to suddenly come out with amazing breakthrough technology. We just don't.
so if the only thing they can do with this is render epic-looking cinematics, then sweet, now we have better cinematics.... why do you assume nothing? are you the terminator back from the future to make sure it doesn't happen?
It looks very good. But there might be some problems ahead with lighting, animation, more than 5 models, and fps once all of these are implemented. I hope it turns out for the best.
On August 02 2011 01:07 Thrill wrote: This is bullcrap, sadly. It's even harder to render than tessellation.
Best proof that this is a hoax is the voice they used for the video, that alone is enough to tell you they're not serious about it.
Technology progresses forward. Whether or not this is real, you can't just look at everything better than the current and call it fake just because you want to believe it isn't real.
If I find a way to cure cancer that requires super advanced technology that doesn't exist yet, the cure is not a hoax. It just needs technological advances before it can be utilized. Kinda like this.
This is pretty amazing. Just how realistic it is, though... that's a different story.
To be perfectly honest, and as a few people above me have stated, I would think that animated objects with the atom style of graphics would be VERY CPU-intensive.
Many people seem to doubt the demonstration based on the supposed memory requirements the system would need. Is there a demonstration using polygons that can achieve detail like this? Even if this does take up absurd amounts of memory, it's still cool that you can render those scenes so quickly.
On August 02 2011 02:14 Keitzer wrote: so if the only thing they can do with this is render epic-looking cinematics, then sweet, now we have better cinematics.... why do you assume nothing? are you the terminator back from the future to make sure it doesn't happen?
Yes, I am the terminator, sent back in time to convince the public that this will never work, so that they cannot get funding to build their super atom computer, which has the power to control all the atoms the company holds the patent on. Unfortunately the entire world decided to adopt this atom graphical power, so these guys decided to make the world nuke itself with atom bombs.
On August 02 2011 01:57 Blyadischa wrote: I don't understand this...
Does anyone else think that their "atom"-based graphics look like graphics from games from like 7 years ago?
Like they said, they aren't artists.
A lot of what you see today is just artistic, from the high contrast to the exaggerated blurs. They don't seem to have very interesting lighting or specular mapping so it looks boring and bland, but the detail is a lot higher otherwise.
Also, he's throwing around unlimited and unlimited is, of course, technically impossible because even with compression and decompression and streaming, you still need to have space for it all.
In all likelihood, just like all graphics technology, you'll be able to utilize this in bits and pieces where it fits, not just render a whole world and every little granulation of it you can think of. I would expect world objects like trees or leaves to use point-cloud atoms while character models still use normal polygons (if it's an FPS, the gun would probably also use atoms). The dirt would probably still be a textured polygon surface done in the regular fashion, with certain (still) objects using atoms.
Maybe someday it can be utilized for everything. Maybe.
Still, I want to see a better tech demo than this. I'm skeptical, but the cries of "impossible" are as exaggerated as this promo video.
On August 02 2011 02:24 DeltaSigmaL wrote: Many people seem to doubt the demonstration based on the supposed memory requirements the system would need. Is there a demonstration using polygons that can achieve detail like this? Even if this does take up absurd amounts of memory, it's still cool that you can render those scenes so quickly.
A presentation could probably be done quite easily in MicroStation/AutoCAD with the point clouds they're using. Since most of the things they showed were exactly the same thing, they could've just attached a whole bunch of references. I've never made a presentation movie in MicroStation myself, but I saw one made of a road... It was about 2 minutes long, highly detailed and very fluid, at I think 20 fps. But they said that after they had the whole thing referenced in and had laid out the camera path, it took the computer something like 18 hours to create it.
On August 02 2011 02:34 MrCon wrote: Wasn't this "atom" thing used in Outcast? They used no polygons and all the rendering was done by the CPU.
Yep. Voxels. Used in Delta Force, Command and Conquer: Tiberian Sun and many others back in the 90s.
I think I'm with this guy and the quote he dug up:
On August 02 2011 02:00 arbitrageur wrote: A comment on the video:
"The word you're looking for is Voxels, and its a technology that's as old as polygons. You've got nothing new here.
You're passing off basic fundamental well-known 3d rendering techniques as something new and fascinating by making up your own words for decades old technology and fluffing it up with... fluffy meaningless phrases like "UNLIMITED POWAR!!!".
Oh well, have fun fleecing your investors then disappearing with the money as all modern snake oil salesman do. NinjaSeg 2 months ago 20 "
It just looks like voxels. Smaller voxels, thanks to more powerful processors, but voxels nevertheless. The limitations of voxels have already been explored, which is why the industry adopted polygons as the primary rendering technique. Hell, that's all GPUs do: render polygons by doing a metric fuckton of vector and matrix math in real time. So essentially the whole thing HAS to be rendered by the CPU, because the GPU isn't built for it. I don't find it a coincidence that the video is rendered at a piss-poor resolution in order to hide its shortcomings. Come up with 'IMPOSSIBLE DETAIL' and only render in 480p? Really?
The whole thing seems like a means to grab investor money and then bail.
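For context on the "CPU has to do it" point: the unavoidable per-point work of a software point renderer is a perspective projection like the sketch below (screen size and focal length are arbitrary made-up numbers). Doing this tens of millions of times per frame without GPU help is exactly where the skepticism comes from.

```python
# Toy perspective projection: map a 3D point to 2D screen coordinates.
# This is the core per-point operation of any software point renderer.
def project(point, screen_w=640, screen_h=480, focal=500.0):
    x, y, z = point
    if z <= 0:                          # behind the camera: nothing to draw
        return None
    sx = screen_w / 2 + focal * x / z   # perspective divide by depth
    sy = screen_h / 2 - focal * y / z   # flipped: screen y grows downward
    return (int(sx), int(sy))

print(project((0.0, 0.0, 10.0)))   # point dead ahead -> (320, 240)
print(project((1.0, 1.0, 10.0)))   # up and to the right -> (370, 190)
```

A real renderer also needs depth buffering and shading on top of this, which is why per-point cost, not just storage, is the other half of the argument.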
They seem to have no real claims beyond their UNLIMITED power, which is obviously bogus.
The only "technical" details do not explain how this is different from any voxel rendering system (perhaps with a sparse representation).
The video has very little actual content, and what it has is either clearly incorrect, meaningless, or not impressive. The video itself could easily be created with existing polygonal technology and may even be prerendered. We have no evidence that this video is or is not genuine, but even if genuine it does not seem impressive from a classical standpoint.
Just as with people claiming to have constructed perpetual motion machines, techniques for turning water into wine, or ways of creating gold from common metals, these claims are hard to refute because we are given very few details. From what little is given, it seems unlikely to work anything like advertised. Until a proper technical explanation is provided, there is no reason to assume this is anything spectacular.
Considering this was all static/unanimated, I would be tempted to believe that they CAN render it real time. Animation/physics? Unlikely.
The real issue I have with this is data transfer rate. EVERY big render they showed was full of clones, which makes loading and storing that data trivial: load the data for one object, render it 10,000 times. But what happens when you want graphics where everything is unique? Or at least WAY less repetitive than what they showed? There is no way you can stream data that quickly with current (affordable) technology. You can't pre-load it either, because the sheer amount of data would be too much.
So basically they say that companies spend millions of dollars on this, then proceed to explain the theory behind it and that it's already used in other industries. Then they expect us to believe they are the only ones who can adapt this to video games? Yeah right; there are probably very good technical reasons why this isn't used. If it were really groundbreaking like they say, they would not need to make public videos on YouTube to hype it, and major games companies would already be all over them in negotiations.
I've said it a lot before and I'll repeat it again: UE3 is one of the most powerful, beautiful and long-lasting engines out there. If I didn't know it was UE3, I'd say it was some kind of top-notch Hollywood CGI...
The first time I played Bulletstorm, I had to collect my jaw from the floor...
What I don't understand though, is how they can 'import' rocks/cacti from reality. Do they just scan and figure out where the atoms are to recreate it in their simulation? It looked really impressive though.
3D scanners have existed for a long, long time. For example, police/military use them to scan areas they don't want to enter before making sure they're safe, such as a collapsed building. They're also used to scan cars/objects: there are companies that scan in cars as a PRE-MODEL. You only get the dimensions; you have to rebuild it by hand to get a clean 3D model. A scanned 3D model is nothing you can directly import into a game. NO WAY. It's a piece of polygon bullshit; it's only for measuring.
Figuring out atoms? No, no, no... this is science fiction. We're not in Geneva letting fucking atoms and shit collide just for the fun of it.
This video is full of bullshit and has nothing to do with a breakthrough. And by the looks of the video and of the homepage, this is less than professional and purely a joke. Literally. A joke.
On August 02 2011 02:14 Keitzer wrote: so if the only thing they can do with this is render epic-looking cinematics, then sweet, now we have better cinematics.... why do you assume nothing? are you the terminator back from the future to make sure it doesn't happen?
Cinematics are pre-rendered anyway (which is why they always look so much better than actual in-game graphics) so this technology would be of no consequence on that end. You can already render absurdly good cinematics using current techniques and it's really only limited by the actual budget you have available. The only way this "new" (questionable) technology would matter is if it was somehow applicable to actual gameplay where we're currently bottle-necked by the capabilities of our computer components (CPU, graphics cards, RAM, etc.). Either that, or if this somehow drastically reduced the cost of creating content, which really doesn't seem to be the case so far as I can tell.
Obviously if this represents a leap forward in freeing up resources on any front, it's a welcome development, but I don't think you can fault the skeptics given the limited amount of actual information in that video.
I think they're doing something different than standard polygon tech. It's not just going to be something as simple as lots and lots of polygons, only smaller.
A computer won't be required to constantly keep the location of every atom in memory, or some other brute-force way of doing things. I don't know how they would go about it, but that's just the point: they're developing a new method.
If their method were obvious, someone would have done it already. So to all the skeptics: I doubt it's going to be anything like the method that jumps into your head when you see the video, like using terabytes of raw data or something.
On August 02 2011 00:54 WniO wrote: the problem with this is they can't render it in real time, or animate for that matter.
^^^ Animation, and how object interaction works for them, seems to have been glossed over.
Anyway, we already increase polygon count dynamically through tessellation, which is bound to be much more memory efficient. In fact, most rendering is done through streaming, increasing detail as you get closer to something, because it saves on memory and the cost to run the game.
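The streaming-by-distance idea mentioned above can be sketched as a simple level-of-detail table (the thresholds and tier names are invented for illustration):

```python
# Distance-based level of detail: closer objects get denser geometry,
# so memory and draw cost scale with what the player can actually see.
LODS = [(0, "high"), (50, "medium"), (200, "low")]   # (min distance, detail tier)

def pick_lod(distance):
    chosen = LODS[0][1]
    for min_dist, name in LODS:
        if distance >= min_dist:     # take the last tier we qualify for
            chosen = name
    return chosen

print(pick_lod(10.0))    # nearby -> "high"
print(pick_lod(120.0))   # mid-range -> "medium"
print(pick_lod(500.0))   # far away -> "low"
```

Tessellation does the same trick in the opposite direction, refining coarse polygons near the camera instead of storing maximum detail everywhere, which is why it is so much cheaper on memory than a uniform point cloud.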
On August 02 2011 03:02 THE_oldy wrote: I think they're doing something different than standard polygon tech. It's not just going to be something as simple as lots and lots of polygons, only smaller.
A computer won't be required to constantly keep the location of every atom in memory, or some other brute-force way of doing things. I don't know how they would go about it, but that's just the point: they're developing a new method.
If their method were obvious, someone would have done it already. So to all the skeptics: I doubt it's going to be anything like the method that jumps into your head when you see the video, like using terabytes of raw data or something.
You don't know what a polygon is. You have no clue what you're talking about.
I play the buzzkiller in this thread. I'm a dick. You're welcome.
I would love to see all the math they have behind this compared to the polygon math, but that would most likely give away something they don't want given away.
Which would explain why they have been silent for so long.
On August 02 2011 02:08 exog wrote: It appears to be rendered in real time, judging by the way the camera moves. They also answered that you don't need a supercomputer to run it... There, trashed all the arguments in the thread.
Will be interesting to see how it goes.
Because the camera moves, it is in real time? Right, movies like Toy Story and Independence Day are just stills, because they sure as hell aren't real time!
Santa Claus is real. There, just trashed all the arguments ever made.
If you look at their site they're quite open about it being voxels; the technology they're parading is processing efficiency... whether THAT actually works or not is the big question.
He seemed pretty elitist in the video. Guess we wait and see how this plays out? Looks good, but I don't think we'll be seeing this any time soon.
On August 02 2011 03:07 Cold-Blood wrote: I would love to see all the math they have behind this compared to the polygon math, but that would most likely give away something they don't want given away.
Which would explain why they have been silent for so long.
I think it's to do with their patents; the ones in Australia may work differently from, say, those in the USA.
You don't know what a polygon is. You have no clue what you're talking about.
A polygon is just 3 points that make up a triangle, right? What I mean is that they are not just going to be using the same methods as polygons but with 1 point instead of 3 or something.
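For what it's worth, the raw size difference between the two primitives is easy to see; the layouts below are hypothetical, just packed with Python's struct module:

```python
import struct

# Hypothetical on-disk layouts: a triangle as 3 vertices of 3 floats
# each, vs. a single point as xyz floats plus an RGB color triple.
triangle = struct.pack("<9f", *([0.0] * 9))                 # 3 x 3 x 4 bytes
point = struct.pack("<3f3B", 0.0, 0.0, 0.0, 255, 255, 255)  # 12 + 3 bytes

print(len(triangle), len(point))   # 36 bytes vs 15 bytes
```

A point is cheaper per primitive, but you need vastly more points than triangles to cover the same surface, which is the crux of the storage argument running through this thread.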
The real problem with computer graphics in gaming, imho, is that the spectacular advances in hardware and software technology, combined with marketing efforts, have made it acceptable, even commonplace, to produce games with shoddy mechanics and poor to nonexistent storytelling, just as long as the graphics look really, really shiny. Hope this is not too far OT.
A polygon is just 3 points that make up a triangle, right? What I mean is that they are not just going to be using the same methods as polygons but with 1 point instead of 3 or something.
Hopefully their atoms aren't 1 point polygons. Things in one dimension are usually hard to see. ^_^
On August 02 2011 03:18 Nycaloth wrote: The real problem with computer graphics in gaming, imho, is that the spectacular advances in hardware and software technology combined with marketing efforts have made it acceptable, yes even common place, to produce games with shoddy mechanics and poor to nonexistent storytelling, just as long as the graphics look really really shiny. hope this is not too far OT.
Made it acceptable? "Check out how shiny this new game I just made" has been part of it since the beginning. It's a driving force behind developing graphics, and it's the reason games built with solid mechanics in mind, like SC2, are more graphical than the most "check out how shiny this is! what's a gameplay mechanic?" games of 10 years ago.
This technology doesn't offer reflections, transparency, refractions or real shadows... animating these voxels is also very heavy.. this is useless ...
now the only difficulty is to make a game that's actually worth playing with all these new pretty graphics, rather than just making something that looks nice for the sake of it like 99% of games released in the last decade
On August 02 2011 03:24 Valashu wrote: Be true dammit! I want minecraft in infinite powah mode!
Isn't the charm of Minecraft the fact that it's blocky and it leaves a lot to the imagination? You know, the thing people pine for from video-games prior to the 2000s?
I get the idea he's trying to sell it, but I won't buy into it unless I see something other than camera pans of detailed environments. Plus I'm more interested in better games, not just better graphics.
On August 02 2011 03:37 keeblur wrote: I get the idea he's trying to sell it, but I won't buy into it unless I see something other than camera pans of detailed environments. Plus I'm more interested in better games, not just better graphics.
Well, I assume that no one will buy something like this without seeing that it actually works. And, as stated before, a video is no proof at all. And in my opinion, there is nothing that speaks against better graphics. Sure, good graphics won't make a bad game a good one, or the other way around. But better graphics can make a good game a better game, which is a good thing.
On August 02 2011 04:20 DannyJ wrote: Id be more interested if they showed anything moving, or have moving lights. I suspect some issues.
Dynamic lighting will be utterly impossible with this kind of thing until we have better computers and can get that amount of ray tracing done in real time. Before then, they'll have to make some kind of half-assed polygon-based approximation.
The thing is, we are shown all those beautiful engines, but game developers can't really use them because games would require high-end PCs, and how many people own one? The gaming industry is restricted by the specs of consoles and older PCs.
We need to wait for the next generation of consoles for some real graphics progress, so developers can actually afford to make their games and sell them.
If you have a background in the industry you know the above pictures are impossible. A computer can’t have unlimited power and it can’t process unlimited point cloud data because every time you process a point it must take up some processor time. But I assure you, it's real and it all works.
Unlimited Detail's method is very different to any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing polygons and point clouds/voxels, they all have strengths and weaknesses. Polygons run fast but have poor geometry, ray-tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a word document and you went to the search tool and typed in a word like 'money' the search tool quickly searches for every place that word appeared in the document. Google and Bing are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen it doesn’t touch any unneeded points, all it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: what objects are closest to the camera, what objects cover each other, how big should an object be as it gets further back. But all of this is done by a new sort of method that we call "mass connected processing". Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.
The result is a perfect pure bug free 3D engine that gives Unlimited Geometry running super fast, and it's all done in software.
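The "search algorithm" pitch above is vague, but the core claim, find exactly one point per pixel and never touch occluded or off-screen data, can be illustrated with a toy. To be clear, this is pure guesswork and not Unlimited Detail's actual method (which they have never published); a sparse voxel dictionary stands in for their point cloud, and a front-to-back march stands in for whatever search structure they really use:

```python
# Toy illustration of "one point per pixel" rendering; NOT Euclideon's real
# algorithm (unpublished). A sparse voxel dict stands in for the point cloud.

def render(voxels, width, height, depth):
    """Return a width x height grid of colours: for each pixel, the colour
    of the nearest occupied voxel along +z, or None on a miss."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            hit = None
            for z in range(depth):            # nearest z first
                colour = voxels.get((x, y, z))
                if colour is not None:
                    hit = colour              # first hit wins; anything
                    break                     # behind it is never touched
            row.append(hit)
        image.append(row)
    return image

# "blue" sits behind "red" on the same ray and is never returned.
scene = {(0, 0, 1): "red", (0, 0, 5): "blue", (1, 0, 3): "green"}
img = render(scene, width=2, height=1, depth=10)
```

A real engine would replace the linear march with a hierarchical structure (an octree, say) so the per-pixel search is logarithmic rather than linear, which is presumably where any claimed speed would have to come from.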
On August 02 2011 03:18 Nycaloth wrote: The real problem with computer graphics in gaming, imho, is that the spectacular advances in hardware and software technology combined with marketing efforts have made it acceptable, yes even common place, to produce games with shoddy mechanics and poor to nonexistent storytelling, just as long as the graphics look really really shiny. hope this is not too far OT.
I agree, this is an issue with today's market. The more mainstream games become, the less effort there is to produce something unique and of quality. I hope things start to change before it becomes too stale.
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data, assuming 4-byte floats.
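The arithmetic in the estimate above checks out; with the digit groups written out it is easy to verify (the 689 figure is binary terabytes, i.e. TiB):

```python
# Re-running the storage estimate above with readable digit groups.
points = 21_062_352_435_100    # "21 trillion 62 billion 352 million 435 thousand 100"
vertices = points * 3          # 3 vertices per triangle (no instancing)
floats = vertices * 3          # x, y, z per vertex
total_bytes = floats * 4       # 4-byte floats
print(total_bytes)             # 758244687663600
print(total_bytes // 2**40)    # 689 (TiB)
```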
Holy Jesus. On one hand, games will look a lot more realistic and we won't have weird problems that result from module lag (for example, if you try to fly a super fast plane in GTA 4, sometimes you will notice that a building just appeared from nowhere in front of your face, killing you).
However, the problem is that the average computer just can't handle this huge leap. If we think about Moore's law, we will notice that it will take decades for the average computer/gaming machine to adjust to this. As the video said, you can run 5 of these real objects, but a computer won't easily handle an entire game of these objects.
As the above posts said, you need polygons or else dealing with collisions would cause major lag; now just think about a sword penetrating armor...
On August 02 2011 04:59 lowfi( wrote: Some more information for all the people talking about the need of high end systems.
The company also claims that it doesn't require a super system to render all that. If you watched their first video they gave a short explanation.
Would be interesting to know about some more details on how it works.
The search thing may be good for casual gamers, but for pros who are constantly scrolling the camera with almost zero predictability (of course, we humans would know why he may have the camera face one way, but a machine cannot), this will cause lag for things that aren't yet rendered, because they forgot to account for the fact that when you switch from one view to another direction, you also have to flush the first view from physical memory.
Looks like mostly BS; they don't explain why their approach is more efficient than the polygon-based approach. I mean, look at movies: it's pretty easy to get ultra-realistic detail with current technologies; the problem is being able to do it in real time.
PS: the so-called "Moore's law" hasn't held true for a couple of years now.
lol it's so obvious that it's a scam that it's not even worth arguing. I pity the guys who put money in this if there are any.
I'd have to see a CPU/memory benchmark of that technology before believing it. Sure, it sounds pretty damn exciting, but they could have done it on a supercomputer, who knows.
We can only dream of a game with that technology though. I doubt we will see one while we're alive.
On August 02 2011 03:18 Nycaloth wrote: The real problem with computer graphics in gaming, imho, is that the spectacular advances in hardware and software technology combined with marketing efforts have made it acceptable, yes even common place, to produce games with shoddy mechanics and poor to nonexistent storytelling, just as long as the graphics look really really shiny. hope this is not too far OT.
Absolutely agree with this. A blog I follow (Shamus Young) constantly rants about these sorts of things. All of the AAA game companies are constantly pushing the edge of graphics, skyrocketing development prices, while putting far too little effort into mechanics, solid gameplay and something other than mind-numbingly stupid storylines. Even something like procedurally generated content has largely been abandoned.
And here comes yet another near photo-realistic fps game or zombie apocalypse with washed-out colours.
I'll admit this could be impressive, but graphics have sprung so far ahead that there's a lot more that could reasonably be explored to let everything else catch up, rather than the endless graphics breakthroughs that are even more realistic than the last round of more-realistic graphics. Maybe I'm just bitter.
I don't want to engage in the discussion about whether this is real or not; I don't know anything about these kinds of things.
I do want to state that people trying to calculate the amount of memory needed for this technology shouldn't oversimplify it.
In theory you only need the surface (-points, -voxels, whatever) to be stored (those are the only ones visible to the eye), and even those won't differ a lot from each other (the probability of a point next to a brown point being brown as well will be higher, for example).
This means A LOT of the points/voxels/data will be the same, which allows for some huge compression (even if the points/voxels in the body of an object are not void).
TL;DR: With proper coding the big amount of data can be reduced a ton.
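The compression argument above is easy to demonstrate. Run-length encoding is far cruder than whatever a real point-cloud engine would use (octrees, shared subtrees, etc.), but it shows how identical neighbours collapse to almost nothing:

```python
# Crude demonstration of the "neighbouring points are mostly identical"
# argument: run-length encoding collapses 20,000 colour entries into 2 runs.

def rle_encode(values):
    runs = []                          # list of [value, count] pairs
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1           # extend the current run
        else:
            runs.append([v, 1])        # start a new run
    return runs

row = ["brown"] * 10_000 + ["green"] * 10_000
encoded = rle_encode(row)              # [["brown", 10000], ["green", 10000]]
```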
The real issue is ANIMATING such objects. Take for example LA Noire, which had a lot of emphasis on facial animation - how do you do that with point-based objects? Animating a billion points is where the real hardware issues are going to pop up
Holy Jesus. On one hand, games will look a lot more realistic and we won't have weird problems that result from module lag (for example, if you try to fly a super fast plane in GTA 4, sometimes you will notice that a building just appeared from nowhere in front of your face, killing you). [...]
ummm, in the first video, around 11 seconds, you see some of the stuff switch from one module to the next...
It's a tech demo, so throughout the demo they turn the tessellation up and down; they also do things like turn off textures etc. to show off what's being done. It's probably best understood in a presentation with someone telling you when something is being done. http://www.geforce.com/GamesandApps/apps/endless-city-demo http://downloads.guru3d.com/Stone-Giant-Public-Demo-download-2526.html Tech demos usually show off hardware feature sets; most games aren't nearly so specialized in their programming, because they try to support legacy hardware from 4 years ago so their games can sell. After all, what good is a nice-looking game if you can't move the product because no one can buy it? And adding in more detail costs more money.
The Battlefield 3 character movement is looking promising for FPS games. I think it will be a long time before real-time graphics are on par with cinematics though.
On August 02 2011 05:19 DoraTheExploreHer wrote: YOU DO NOT UNDERSTAND, THERE WILL BE NO FUTURE! SEVERAL HIGH CLASS GOVERNMENT OPERATIVE HAVE ALREADY BEEN IDENTIFIED AS BEING REPTILIAN INCLUDING BUSH HIMSELF AS WELL AS VERY IMPORTANT MEMBERS OF THE OBAMA ADMINISTRATION. STAY INFORMED AND BE AWARE, THEIR TIME IS COMING.
Wasn't the thread recently closed? Why are you even posting this.
On August 02 2011 05:13 Jetaap wrote: Looks like mostly bs, they don't explain why their approach is more efficient than the polygon based approach. I mean, look at movies: it's pretty easy to have ultra realistic details with the current technologies, the problem is to be able to do it real time.
lol it's so obvious that it's a scam that it's not even worth arguing. I pity the guys who put money in this if there are any.
Well, if it is a scam, it should be pretty obvious once they run away with the $2 million they got from the Australian government.
I don't know about this. I think it would be a great step forward, but with games like Oblivion (minimal loading screens, great expanses of terrain) this would just be too much.
I THINK. I don't know for sure. I'd love to see it though.
I feel that this is probably centered around some way of compressing massive amounts of data down and pre-rendering everything. I noticed that one of the early demos had things like a Sierpinski pyramid.
If this is the case then animation won't be possible, but idk.
They explained the methodology in one of their previous videos. What they do is calculate what "atoms" are going to be viewed by the player, and render only those "atoms". It's a neat little trick that can make something like this possible. It still needs a bit more computing power and a video game or two to back the engine, but nothing has really come close to this geometric level of detail in scenes and whatnot.
Wasn't the thread recently closed? Why are you even posting this.
Edit that out of your quote
I actually remember this....still looking forward to what this might entail. It seems like it's fake, but what they've displayed looks amazing. This could get scary, though, if the game companies want to acquire this technology in due time.
On August 02 2011 08:16 hellsan631 wrote: What they do is calculate what "atoms" are going to be viewed by the player, and render only those "atoms". [...]
That trick is already done in normal games; if you play Crysis your computer only renders what you see and not the whole island.
I'm really sceptical and can't imagine how a computer would handle so much data.
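For reference, the "only render what you see" trick mentioned above is ordinary view-frustum culling. A minimal sketch, a simple cone test against a camera at the origin, nothing like Crysis's actual pipeline:

```python
import math

# Minimal view-frustum (cone) culling: keep only objects in front of a
# camera at the origin looking down +z, within the field of view.

def visible(objects, fov_deg=90.0):
    half = math.radians(fov_deg) / 2
    kept = []
    for name, (x, y, z) in objects:
        if z <= 0:
            continue                                  # behind the camera
        if math.atan2(math.hypot(x, y), z) <= half:   # angle off the view axis
            kept.append(name)
    return kept

scene = [("tree", (0, 0, 10)),   # dead ahead: drawn
         ("rock", (50, 0, 1)),   # far off to the side: culled
         ("hut", (0, 0, -5))]    # behind the camera: culled
# visible(scene) == ["tree"]
```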
On August 02 2011 06:06 iGrok wrote: The real issue is ANIMATING such objects. Take for example LA Noire, which had a lot of emphasis on facial animation - how do you do that with point-based objects? Animating a billion points is where the real hardware issues are going to pop up
Indeed. I don't know how they're going to handle shadows. Games usually use pretty poor techniques such as shadow mapping, but how are you supposed to calculate the shadows when you have trillions of thousands of billions of hundreds of thousands (c) polygons?
They seem to have clipping handled OK, and they argue that shadows are going OK, so they probably have something up their sleeve.
The last thing I don't get is why they couldn't get a few actual artists to work on this to provide us with what they claim to be stunning graphics. I've done some computer modeling, and while I don't call myself an artist by any means, I think they've been pretty lazy on this part. I mean, their lighting kinda blows it completely.
Several people in the thread have summed it up quite well; it's just a different way of drawing. Current technological (hardware) limitations are blatantly obvious. I don't see how it's unique from past methods of rendering... Each method has its advantages and disadvantages. The video feels like more of a marketing ploy for now, whether or not what he's saying "is the future".
I'm not sure either way, but I guess time will tell. Though I find it hard to believe that they went to all this trouble just for something fake (even getting funding from the AUS government, as someone mentioned). I know algorithms can do some crazy stuff, so maybe they stumbled upon something good.
Cost me 800 dollars for a desktop just to play SC2, mainly on low with a few settings on medium; I wonder how much a home desktop would cost to run a game with stuff like that in it, lol. I don't imagine anyone but the crazy rich could play games with this type of technology for a long time. Still very cool, but damn, I wonder what kind of graphics card you'd need to play SC2 if it ran this type of graphics along with physics and all that.
I won't say the technology is there; their claims are exaggerated and their optimism that this can be produced reliably in games with current rendering technology is misplaced.
Sounds like (procedurally generated? interpolated?) voxel models with level of detail and some fancy rendering. Doesn't sound too bad, though I'm concerned about physics (e.g. collision detection) and lighting (the relative lack of shadows in the demos for example).
It was kind of ironic that they claim they have unlimited detail, yet they never zoom into a rock or a piece of grass, or even the ground.
By the way, I believe computer graphics in games reached a satisfactory level at about 2002 (I'm thinking of Arx Fatalis), they should improve other aspects of games, with a heavy emphasis on gameplay and playtesting.
Looks nice, but it sounds like they still have a ways to go to get the frame rate to a more acceptable level if they are only at 20fps. They won't say what hardware they are running this platform on; probably don't want to scare investors if it's a Tesla. If it is a Tesla, I don't think this will be that successful if you've got to pay 3k for a proper GPU.
Seems too good to be true; how're you going to be able to animate this? The coordinates for the voxels have to be stored somewhere right? Although maybe my understanding of the process is poor it seems like he just found a way to speed up the "coordinate data" -> "screen render" rate but what happens when the coordinate data is changing due to animations?
On August 02 2011 01:19 DeLoAdEr wrote: interesting results for a software renderer, but the video only shows static geometry. I wonder if they're even able to handle animated objects..
and they mention the scene consists of more than 21 trillion points, lets say they store 4 byte for each point (color information, position, etc.) this would be ~100 TB of data right? so there must be some good compression going on there. ;p
The technique seems to be well fit for compression however, lots of atoms will be identical and position can also be grouped.
I have two problems with this: 1. They use the word "infinite" too much. It's obviously not possible to have "infinite" unique detail, simply because there is only a limited amount of memory available. You can produce an infinite scene in the Minecraft sense, i.e. there isn't actually an infinite scene that someone designed, just a deterministic rule to extend the detail procedurally whenever needed. So if you generate a world procedurally with a pseudo-random number generator you could call it "infinite" for practical purposes, since those generators have obscenely large cycle lengths, but that is more of a gimmick than actually useful for game design. What is interesting for game designers is the possibility of having a large amount of created unique content, not just repeating the same tree/texture over and over in the scene or having some fractal generate a lot of fake detail. This is a nice example of actually useful technology that tries to address this kind of problem (it also apparently runs on an Xbox 360).
2. They seem obsessed with polycounts when that isn't really much of an issue anyway. The raw power of modern graphics hardware pretty much allows you to plaster the whole screen with pixel-sized polygons. But what actually creates immersion isn't absurdly detailed models, it's effects and post-processing: lights, shadows, reflections, indirect illumination, particle systems, etc. How often do you actually feel the urge to examine single pebbles on the ground in a game? Is that relevant to gameplay? That just isn't the main contributor to the atmosphere. The reason why you need powerful hardware for modern games is that they run multiple passes of expensive shaders, not that there are so many polygons.
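The "infinite in the Minecraft sense" idea from point 1, a deterministic rule instead of stored data, fits in a few lines. The hash-based height function here is just an illustration of the principle, not how any particular game actually does it:

```python
import hashlib

# "Infinite" procedural detail: nothing is stored; every query recomputes
# the same value from a deterministic rule (here, a hash of seed and coords).

def height(seed, x, y):
    digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    return digest[0] % 64            # repeatable terrain height in 0..63

a = height(42, 1_000_000_000, -7)    # arbitrarily distant coordinate...
b = height(42, 1_000_000_000, -7)    # ...queried again later: identical
```

Because the rule is pure, the world can extend forever without any of it being stored, which is exactly why it is a gimmick rather than hand-authored unique content.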
Because all of our games exist without animations and with unbelievable amounts of instances, right? oh wait.
Too much hype talk, too little tech that's actually useful to games.
This looks like very nice tech for navigable data visualization, but using it in games is at most feasible in a hybrid approach that uses polygons for the animated objects and this stuff for some highly instanced static objects.
seems like this sort of thing might be good for animated movies and certain types of games? i was told about some friend of a friend who makes animated porn for a living, this guy is convinced that that industry is going to get big due to this sort of thing, they'll be able to do all sorts of weirdo fantasy stuff without taxing the viewer's suspension of disbelief too much
On August 02 2011 09:34 x2fst wrote: seems like this sort of thing might be good for animated movies and certain types of games? [...]
Have you seen any animation in their videos? They don't show it because that's nearly impossible. The only way they can draw so much geometry is that they have it nicely sorted in their data structure; if they were to animate stuff they'd have to re-sort their millions of moved points again. I'd love to see them do that in real time on a normal PC.
Either way, everything is still going to be converted to Triangles when its sent to Direct X. No doubt this is just a way of only drawing the right amount of polygons at each magnification.
I'd like to see what actual game artists can do with this technology. It looked good, but it didn't feel like 100,000 good, although the elephant was pretty cool.
They should have picked something other than a rock for their example though because a rock in itself isn't the most interesting object lol.
On August 02 2011 09:42 Techno wrote: Either way, everything is still going to be converted to Triangles when its sent to Direct X. No doubt this is just a way of only drawing the right amount of polygons at each magnification.
somebody hasn't been listening. He said that they currently only have a working software solution, that means no OpenGL or DirectX
The technique seems to be well fit for compression however, lots of atoms will be identical and position can also be grouped.
Compression would get completely screwed by animation though: one voxel moved or changed and you have to recompress the whole block of them (however many voxels you compress together). It seems they do use lots of instanced objects to save RAM though.
Since it doesn't seem like they can animate these voxels, they could use them for the environment and use good old polygons for all the animated things, but their shading sucks and there are no shadows.
I assume they can't do shadows because they have to process a cluster of voxels at a time, decompressing the cluster to work on it and then flushing it from RAM, and to do shadows they would have to draw the scene from each light source's point of view, applying the lighting to each affected voxel.
For example, if you have X clusters of voxels in view, and Y is the number of CPU operations to decompress a cluster, then without shadows drawing the scene would be X*Y operations. However, with shadows, where L is the number of lights in the entire scene, it would be something like (X^(L+1))*Y operations. Big difference even with just one light.
On August 02 2011 09:47 GGTeMpLaR wrote: I'd like to see what actual game artists can do with this technology. It looked good, but it didn't feel like 100,000 good, although the elephant was pretty cool.
They should have picked something other than a rock for their example though because a rock in itself isn't the most interesting object lol.
Yea, when I read your post I was thinking they should've done a small animal, maybe a mouse, or even just a cell phone. But their lack of shaders would really hurt the look of basically anything except the most bland thing in the world (a rock).
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data, assuming 4-byte floats
That is a pretty basic issue that I am not sure anyone can get around. Every point has to have its coordinates stored somewhere, and while you might be able to come up with a more efficient coordinate system you still need to put trillions of easily-accessible points into it.
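For what it's worth, the arithmetic above checks out; a quick sketch (the 36-bytes-per-triangle figure is the poster's naive-layout assumption of 3 vertices times 3 coordinates times 4-byte floats, and the helper names are made up):

```c
#include <stdint.h>

/* Rough geometry-size estimate for the quoted triangle count, assuming a
   naive layout: 3 vertices per triangle, 3 floats per vertex, 4-byte floats.
   Hypothetical helpers, not anything from the actual engine. */
uint64_t geometry_bytes(uint64_t triangles) {
    return triangles * 3u * 3u * 4u; /* 36 bytes per triangle */
}

uint64_t bytes_to_tib(uint64_t bytes) {
    return bytes / (1024ull * 1024 * 1024 * 1024); /* binary terabytes */
}
```

Plugging in the quoted 21,062,352,435,100 triangles reproduces the ~689 TB figure.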
I think in a couple of years, as CPU and GPU power keep rising, a high-end computer will be able to render all these frames.
As of now, I still don't think it could happen. It's like the transition from 2D to 3D: it takes time and processing power, and eventually computing power will be so high that the number of polygons will start to reach the number of atoms.
I think it's to do with their patents; the ones in Australia may work differently from, say, the USA.
US Patent Law is completely bonkers: there's a patent on drawing removable sprites by inverting a bitmap using XOR, pretty much the main use of the operator and something I first did at age 7, but to a US patent court it apparently sounds like a complicated act of invention... It's why we won't sell to the States; they'll be letting someone patent using a computer to add numbers together next. But I digress: in this case the silence is due to it all being made-up BS.
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data, assuming 4-byte floats
And that's without colour, lighting and/or texture data. Even if things repeat as often as they clearly did in the island demo, which only used a handful of repeated models, you're talking a sh*t tonne of data to render, and you still have to transform the vertices, so the current vertex count limits still apply. Every triangle you want to render requires its 3 vertices be multiplied by a 4x4 projection matrix, so even with good object culling there are at least a few trillion multiplications per render cycle right there. I want their computer, because I'm fairly sure you couldn't run that at 20 FPS on current-generation supercomputers. (Except their computer's imaginary, so I'm keeping mine.)
I love how people are fervently defending this despite the fact that it's obviously nonsense. "I don't want to live on this planet anymore." -Professor Farnsworth, in response to the creationist's arguments.
I believe from the articles on the wiki page that the basis of the technology is a fast and effective occlusion culling algorithm that can work with large numbers of points, hence the "trillions of atoms" claim.
However not all of these points are stored. There is procedural generation involved ("unlimited detail"), at least duplication, which is obvious from the repetitiveness visible in the videos. For example if you are building a roof, you don't need to store all points of all tiles, it is enough to store one tile, and duplicate it should the need arise. Same with trees. It is also possible to store only the differences, for example a deformation of a tile, or missing leaves from a tree.
This culling algorithm might involve data structures (e.g. hash tables) that would make any kind of dynamic scenery expensive or even impossible. This is not as great a limitation as it would seem at first; similar limitations already exist in polygon-based games as well, where there is a distinction between the world (static objects) and actors/characters/deformables (dynamic objects). One of its causes is similar to this case: to provide adequate occlusion culling (BSP); the other is to make precalculated lighting possible (illumination, reflections, etc.), because dynamic occlusion culling and dynamic lights are more expensive in that case as well.
I suspect some interpolation is also going on, with the right kind it is very possible to generate smooth surfaces from a few points by making more, think of something similar to NURBS. And smooth surfaces seems like the author's primary goal.
The rendering does not seem the trivial kind we know from voxel engines. I believe point clouds are used in medical stuff, but I lack knowledge about their rendering process. The author claims that the culling algorithms works on a pixel by pixel basis, so rendering and occlusion culling might be the same algorithm.
This technology seems very interesting. Certainly it has its limitations, but I believe it has its place, especially if it can be combined with polygon representations to create a hybrid engine.
It would be nice if the author dropped his UNLIMITED PR BULLCRAP though.
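The duplication idea described in that post (store one master tile's points once and reference them from many placements, optionally with per-instance differences) might look roughly like this; all names are illustrative, nothing from the actual engine:

```c
#include <stddef.h>

/* Sketch of point-cloud instancing: one shared master copy of a tile's
   points, plus a small per-instance record for each placement. */
typedef struct { float x, y, z; } Point;

typedef struct {
    const Point *master;  /* shared point data, stored only once */
    size_t count;         /* number of points in the master copy */
    float pos[3];         /* per-instance translation */
    float rot[3];         /* per-instance rotation (Euler angles) */
} Instance;

/* Memory for n placements of one m-point tile: one copy of the points
   plus a small record per instance... */
size_t instanced_bytes(size_t n, size_t m) {
    return m * sizeof(Point) + n * sizeof(Instance);
}

/* ...versus n full copies of the point data. */
size_t naive_bytes(size_t n, size_t m) {
    return n * m * sizeof(Point);
}
```

For, say, 1,000 placements of a 100,000-point tile, the instanced layout needs a bit over a megabyte where the naive one needs over a gigabyte, which is consistent with the repetitiveness visible in the videos.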
On August 02 2011 01:02 Morfildur wrote: ... (though who knows, when quantum computing is widely accessible... 2111 maybe...)
Quantum computing does not apply here: this is not one of the very small class of tasks that would be aided by quantum computers (only a very select class of problems may really see any benefit). And aside from D-Wave's quantum annealing computer (which doesn't seem that useful, frankly), there's no real hope of a quantum computer of any scale anytime soon. Note that the D-Wave machine is $10 million and resides in a dilution refrigerator at 40 mK... not exactly practical for mass consumption.
Also, this tech seems like bunk. I have more hope for ray tracing and am sad that Intel cancelled Larrabee.
Cool to look at, but the processing power needed to animate and re-render destructible terrain is probably 20 years off for consumers without some huge breakthrough. Will make for amazing pre-rendered, non-animated, non-interactive stuff though.
On August 02 2011 10:17 Frigo wrote: I believe from the articles on the wiki page that the basis of the technology is a fast and effective occlusion culling algorithm that can work with large numbers of points, hence the "trillions of atoms" claim.
[...] It would be nice if the author dropped his UNLIMITED PR BULLCRAP though.
Probably needs it to drum up finacial support and funding, as well as to generate interest.
I kinda don't believe ANYTHING i see on youtube (and like sites, (This is a rule EVERYONE should follow)). If the OP can put up a legit link like from Cnet where I can read about this, that would be cool.
I think it's to do with their patents; the ones in Australia may work differently from, say, the USA. US Patent Law is completely bonkers: there's a patent on drawing removable sprites by inverting a bitmap using XOR [...]
Yeah, there are some ridiculous patents out there for seemingly basic stuff, like this branch-free absolute value:
int const mask = v >> (sizeof(int) * CHAR_BIT - 1);  /* all ones if v is negative; CHAR_BIT is from <limits.h> */
int const r = (v ^ mask) - mask;                     /* r = |v| */
On August 02 2011 01:30 Nacl(Draq) wrote: This is very interesting. I hope they are able to implement this into the gaming industry in the next 5 years. Would be nice to see an "nonfictional" human scanned in. Even if it is just static objects think of the things they can do to games about the world. You go into a game and visit scenery in Africa, after they come out with full animation this will be a big leap in the mmorpg world where people could escape into a completely realistic fake world.
Good luck to them.
let the getting-fired-from-IRL-job-because-I-spent-too-much-time-on-WoW begin (on a mass scale)
Just saw it about an hour ago. It does look very impressive, but as the man said, the tech is far from done. I would imagine it would tax hardware more to animate those atoms, but then again I don't really know anything about it. I'd find it hard to believe this tech company would invest so much in it if a few animations would make it flop.
Should get some funding from nVidia... see what comes of it. If nVidia or AMD decided to fund it and it actually worked, then the one that chose to do so would pretty much instantly monopolize the graphics market.
It looks really nice, but I remember asking Run about graphics. I saw him adding a polygon to a model, and I thought that our computers rendered all the polygons. He basically said that that technology is years out, so I would be inclined to believe that as well.
Sorry if this has been posted already, but it says in the vid description that they can indeed do animation with these graphics (for whatever that's worth). Posting for all the people who're saying they can't.
On August 02 2011 01:01 Bibdy wrote: The claims of 'unlimited' are just stupid, especially for someone who claims to work in the technology industry. Like WniO said, this could be used to create great pre-rendered locations, but would have a lot of shortcomings when it comes to animation and physics. That pretty water? Just a flat surface. Those little balls of dirt or leaves on the palm tree? Ain't budging an inch. Polygons will always be needed at the very least to determine collision detection until the processors get to the point of stupid where polygons get so small they might as well be atoms.
I also find it kind of disingenuous that one of its claims is that it only renders what is visible on the camera, considering that's exactly how current game engines work.
Yeah I was thinking of that as well when I saw them show blades of grass sway in Crysis.
I guess some level of physics modelling should be possible but a lot of it would need to be written from scratch - probably can't leverage what's already out there for meshes. As for collision detection, not sure if a computationally cheap way is even really possible for "atoms".
On August 02 2011 11:30 iamahydralisk wrote: Sorry if this has been posted already, but it says in the vid description that they can indeed do animation with these graphics (for whatever that's worth). Posting for all the people who're saying they can't.
If you can model it and render it, you can animate it - at least by hand.
In a game though, not everything is animated by hand.
Not this shit again... I've seen this every year since '08 or earlier.
There is one big downside to this way of rendering: you can't have movement/animation. It's not currently possible because of the way the graphics are rendered.
They just show the same old tricks again and again while shit-talking every game company LOL! ...
By stumble on it, do you mean you got it from reddit? Not trying to be a dick here, but it's pretty coincidental that this was posted the very same day it got really popular on reddit. You should at least be honest about it.
Anyways, I am very impressed by what was shown in the video. Looking forward to future demonstrations of this technology.
On August 02 2011 08:35 Duka08 wrote: Several people in the thread have summed it up quite well; it's just a different way of drawing. Current technological (hardware) limitations are blatantly obvious. I don't see how it's unique from past methods of rendering... Each method has its advantages and disadvantages. The video feels like more of a marketing ploy for now, whether or not what he's saying "is the future".
Kind of like CRT, LCD, and Plasma.
CRT wins in display quality over the currently used LCD and Plasma, but most people use LCD and Plasma anyway because they're smaller and don't start fires when broken.
Yep, the advantage of LCD and Plasma is that they won't start a fire and/or implode like a CRT if they break >.>.
2 things... You don't see this being useful in RPG-type games? Really? Shooters have very little use for this, in all honesty. Second, do you really think that they go take a scan of the atoms in a real item and then translate that into a computer algorithm? I laughed when I read that, buddy.
I don't understand why game developers are so insanely desperate to have the best possible graphics in their games yet can't make improvements on game-play, longevity and other areas of the game which most people feel is more important.
I still find my PS1 and N64 games which have shockingly bad graphics very entertaining to play.
Stop worrying about the graphics and concentrate on the content.
On August 02 2011 05:03 9heart wrote: Quote: "21 trillion 62 billion 352 million 435100 polygons" [...] 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data, assuming 4-byte floats
That is a pretty basic issue that I am not sure anyone can get around. Every point has to have its coordinates stored somewhere, and while you might be able to come up with a more efficient coordinate system you still need to put trillions of easily-accessible points into it.
Well, not if there are certain constraints. For instance, the positions of the atoms that make up a rock can all be defined in terms of 6 parameters: the 3D coordinates of one atom in the rock, and the three angles of the rock's rotation. Only if the rock is broken into smaller pieces do more parameters need to be defined. The same goes for all rigid contiguous objects, and it may be possible to do other clever compressions. For example, you could define whole regions of the island that are not currently being interacted with in terms of 6 coordinates. Or you could make approximations at different spatial scales, e.g., only visualize 1 in a million atoms for objects smaller than a certain size on screen. I don't know exactly what they're doing, but I think there are many such clever algorithms that could be used to achieve compression.
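The "1 in a million atoms" approximation mentioned above is basically point decimation: keep only every k-th point of an object when it is small on screen. A minimal sketch (hypothetical helpers, not their algorithm):

```c
#include <stddef.h>

/* Number of points that survive keeping every `stride`-th point
   (i.e. ceil(n_points / stride)). */
size_t points_kept(size_t n_points, size_t stride) {
    return (n_points + stride - 1) / stride;
}

/* Copy every `stride`-th xyz triple from src to dst; returns how many
   points were kept. Caller must size dst for points_kept() triples. */
size_t decimate(const float *src, size_t n_points, size_t stride,
                float *dst) {
    size_t kept = 0;
    for (size_t i = 0; i < n_points; i += stride) {
        dst[kept * 3 + 0] = src[i * 3 + 0];
        dst[kept * 3 + 1] = src[i * 3 + 1];
        dst[kept * 3 + 2] = src[i * 3 + 2];
        kept++;
    }
    return kept;
}
```

With stride = 1,000,000 a million-atom object collapses to a single representative point, which is the kind of scale-dependent approximation the post is describing.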
On August 02 2011 12:22 Tektos wrote: I don't understand why game developers are so insanely desperate to have the best possible graphics in their games yet can't make improvements on game-play, longevity and other areas of the game which most people feel is more important.
Stop worrying about the graphics and concentrate on the content.
Truer words have never been spoken. Graphics are cool but there's so much more (as you said) that is important as well, and without that even the best graphics can make for a very poor game overall.
In my opinion, graphics mostly matter to me while playing an MMO. I really love looking at the landscape around me while auto-routing, or seeing a large monster burst through a wall and tower over me with debris and dust flying everywhere. Sadly, most MMOs don't have such wonderful graphics, nor are they well made. And why would you want graphics for RTS, fighting, or FPS games? Speaking from experience, I get so focused on the game that I could play in 8-bit and not notice anything; take one second to appreciate the beauty and you lose the game. On the other hand, I think improving graphics may be incredibly important if we want e-sports to grow. There is a difference between playing StarCraft on low settings to play better and watching someone play, and there is nothing more amazing than seeing a flashy battle with high-quality explosions and animations. But despite all that, graphics will never beat Brood War :D
Oh yeah, I would like to see how these graphics would do mixed with anime art since that is my favorite type of games to play.
On August 02 2011 13:42 RezChi wrote: I doubt it; if that thing launched and came out in a month, half the people that buy a game with it would realize their computer can't run it lol
I believe many companies will invest billions once they realize how much potential there is to make money in the upcoming new era in video games.
Ok, I read a few of the pages and apologize if this has already been mentioned. I first saw this on Kotaku and was impressed, so I looked further into it and found their first video, which does explain how it achieves this without a supercomputer...
From the description:
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine. It is best explained like this: if you had a Word document and you went to the SEARCH tool and typed in a word like MONEY, the search tool quickly finds every place that word appears in the document. Google and Yahoo are also search engines that go looking for things very quickly. Unlimited Detail is basically a point cloud search algorithm. We can build enormous worlds with huge numbers of points, then compress them down to be very small. The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen; it doesn't touch any unneeded points. All it wants is 1024*768 (if that is our resolution) points, one for each pixel of the screen. It has a few tricky things to work out, like: which objects are closest to the camera, which objects cover each other, and how big an object should be as it gets further back. But all of this is done by a new sort of method that we call MASS CONNECTED PROCESSING. Mass connected processing is where we have a way of processing masses of data at the same time and then applying the small changes to each part at the end.
From there, the only issue I see with this technology is the amount of space it would take on the hard drive.
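The "one point for each pixel" idea in that quoted description can be illustrated, very naively, as a nearest-point resolve per pixel; a real engine would use a hierarchical search rather than this brute-force scan, and all names here are made up:

```c
#include <float.h>
#include <stddef.h>

/* A point already projected to screen space: pixel coords, depth, color. */
typedef struct { int px, py; float depth; int color; } ProjPoint;

/* For each pixel keep only the nearest point that lands on it.
   depth[] and color_out[] have w*h entries; depth must be pre-filled
   with FLT_MAX. Returns how many pixels got covered by at least one point. */
size_t resolve(const ProjPoint *pts, size_t n, int w, int h,
               float *depth, int *color_out) {
    size_t drawn = 0;
    for (size_t i = 0; i < n; i++) {
        int x = pts[i].px, y = pts[i].py;
        if (x < 0 || x >= w || y < 0 || y >= h) continue;
        size_t idx = (size_t)y * w + x;
        if (pts[i].depth < depth[idx]) {        /* keep the nearest point */
            if (depth[idx] == FLT_MAX) drawn++; /* first hit on this pixel */
            depth[idx] = pts[i].depth;
            color_out[idx] = pts[i].color;
        }
    }
    return drawn;
}
```

The claimed trick is that their search finds the winning point per pixel without iterating over all n points, which is what would make "unlimited" point counts affordable; this sketch only shows what result that search has to produce.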
It seems like an amazing breakthrough, but I guess it'll take a good while before we see this in use due to hardware limitations. On the PC platform it might make an entrance sooner, due to the presence of hardcore gamers with their crazy-powerful set-ups, but console gaming will have to wait a good while I feel, unless the big players like Sony, MS and Nintendo (maybe not Nintendo lol) accommodate this technology in their next line of consoles. Also, since most developers are turning to the console platform, I'm not sure how many will want to adopt this so soon.
Nevertheless, it's looking great for the future and I eagerly await the days when this sort of tech is fully in use in the video gaming industry.
It is physically infeasible to maintain that number of points: we are talking 4 billion in the demo with elephants in pyramids, and 4,000,000,000 * 12 bytes minimum (probably more, because there is no way they can cram the position of a point on that scale into a 32-bit variable) = ~48 gigabytes of point data. I think in the other demo they talked about 100 billion. They do give us some clues about how they do what they do: almost all the objects are the same. They also mention that their algorithm simply applies changes to different objects.
Here is my conjecture about what they are doing: they maintain positions of objects, and possibly some planes for those objects, and then apply geometry based on formulas for curves. It is possible because they are not doing real "unlimited": they simply use a really small coordinate field, and after some formula magic the object appears.
So the trick is a really neat LOD system: knowing which level of detail to compute, and then which points to use when building the picture.
There are limitations. Primary one is physics. They can render dirt grains, but they will not be able to remember how they move due to interaction with the player (assuming that cannot be precomputed and put into some kind of fixed formula). It is possible to work around that by remembering positions of explosions/footsteps/fallen objects and then recompute geometry based on that.
The second is hardware support. The reason polygons are so entrenched is that video cards have been optimized to use them for decades. This means there will be quite a bit of momentum to overcome to get hardware support for this technology.
This may be a major upgrade to polygon technology, but it's not as versatile as ray tracing (they offered no way of doing complex reflections with their method). Also, with tessellation in DX11, they are a few years too late to really make the impact they were looking for.
All that said, I am looking forward to games produced with this technology. Hopefully they can adapt nVidia CUDA / ATI Stream to provide at least some hardware acceleration for their SDK.
Honestly what matters more to me than the texture resolutions themselves is the particle quantity, I cringe when I zoom into an object and see a rough edge. Anti-aliasing can only do so much, so I'm naturally very excited for this ^_^
Wouldn't it be possible to combine this (assuming it works) with traditional techniques in games? Seems like some people are way too focused on the demo.
You have to realize that these are not the same guys who would be working on gameplay. The fact that they are working on graphics for a game does not instantly make them the designers behind a game.
The graphics hardware required to run this at acceptable frame rates will cost thousands of dollars. Average consumers who spend a few hundred won't utilize this.
Never trust anything that's pre-rendered. I call BS on this one... (visual effects artist, 10+ years experience in games and film)
Making pretty pre-rendered fly-throughs is one thing, but this has shown zero application in the game industry. They haven't even touched things like animation, dynamic lighting, physics, shaders, etc.
Their "new and improved" lighting is just a rendered ambient occlusion still, which I could render out in about 20 seconds with my eyes closed... They may be on to something, but the future of game development? Absolutely not.
I'm going to have to call BS on this. One thing that annoys me to no end is the tech industry's need to frequently come up with buzzwords for slightly different things, or for the same things. Like... cloud computing: things you do on the Internet, such as uploading a picture, are now called cloud computing. For any old-schoolers: blast processing? LOLOLOL. Posting messages to Twitter gets its own cute little name: tweeting.
Polygons are nothing more than sets of points in the computer. What they seem to be describing is having many, many, many points... basically a buzzword for high poly counts... lol, "atoms".
The demo doesn't even look as good as the best CG in movies...
This looks really cool. Really wonder what'll happen to the video game industry/market/target audience. This might also overwhelm the RTS genre - the FPS genre might appear too advanced for everything else.
On August 02 2011 15:54 dignity wrote: You have to realize that these are not the same guys who would be working on gameplay. The fact that they are working on graphics for a game does not instantly make them the designers behind a game.
The guy that works on the gameplay is probably the same one that answers the phone, from what I've seen of the games made in the last 5 years...
This "technology" is almost 2 years old (maybe even way older), and it has already had many investors; it was promised that they would have games using this technology by the start of 2011... Do you see any games using it?
No. If you ask me, the technology isn't working right now, and it will take many, many more years until it is actually viable. And by that time people will have found better solutions, as it is questionable whether the technology they invented works for real-time animation etc. (Remember: two years ago they didn't have any animation because "they are no artists".)
I don't really get most people's arguments against this (if it's true).
"This will demand insane hardware" - No it won't. The whole point of the technology is that it isn't all that demanding, it only renders the exact points you see on screen, not every point in every object you see.
"It can't be utilized with physics or animation" - Yes it can, polygons aren't what make animation and physics possible. All you need is to connect an action to an object. A game having twice the polygons of another game doesn't mean it has twice the physics or animation of the other game, it just looks better.
The only argument which makes sense is the one concerning memory. The data has to be saved somewhere, the form of each object etc. You can procedurally generate dirt and all that stuff, but for a game, you won't have a very fun time if everything is generated.
After finishing watching the video though, my mind raised some questions (like some of you have pointed out)
Namely: WTF are they going to do about: a) animation (dynamic anything) b) dynamic lighting c) model transformation and/or destroying d) collision detection e) other physics
Not only are these undoubtedly issues on their own, but because the detail is so massive, the already terribly difficult job of getting accurate physics, lighting and destructible models becomes SO MANY TIMES harder! Technically speaking, 10,000 times more detail could mean 10,000 times more work in doing lighting and physics, etc. A system could probably be developed that simplifies the models for such purposes, but that would still be a really extreme challenge to overcome.
Until they solve those problems, it seems to me that this technology would be most useful for artists using 3D programs. If a modeling program could be used to use this technology natively or pseudo-natively, it would save so much on processing power in workstation graphics, since most of what they deal with is static modeling. This use would still require getting a hold of modifying these voxel models in their voxel form as opposed to just a polygon converter though.
To me, it seems like applying the concept of texture-tiling to meshes. The island and the weird pyramids are all made of repetitious objects. The concept of tiling bigger meshes out of "atomically" smaller meshes doesn't seem like it'd be beneficial if you want to make a world that looks appropriately random and natural.
The idea that you could actually have "infinite detail" in the sense that you could have as many different objects, each with its own different shape and textures, no two blades of grass the same, with physics all applied where needed, with lighting and shadows of perfect sharpness and geometry, blah I lost my train of thought. It's a really bad claim to be making, this "infinite detail".
Polygons, in the sense of rendering a picture with them, already allow us "infinite detail" in the sense that their demo showed. You can smooth out polygon curves to perfect roundness, and you can see it in CGI all the time, sometimes even in games, all of which this demo conveniently never displays. The problem comes in putting these shapes into a "world" that you interact with. Even if their technology is more efficient than the simple concept of meshes, it is most certainly limited, and cannot be infinite. Yes, you only have to render what the camera shows (all games work that way to one degree or another; that's hardly a "new" thing), but infinite detail within the camera's frame would still require staggering (note: infinite) amounts of data and computation.
While I'm sure I don't understand the concept completely, I think I understand it enough to say it's not half as practical as it claims to be.
Funny how many people in this thread are saying that this simply isn't possible in real time. That's true if you consider how graphics engines have traditionally worked, but we have NO IDEA how their engine works. So saying "bah, this can't be possible because other engines work like X" is kinda silly and close-minded.
I'm not saying that it isn't a hoax, but until you understand exactly how their engine works, don't be so skeptical.
I believe the technology is called Sparse Voxel Octree (SVO) and it's not that new. id Software has been promoting it for the last couple of years, just not in the sensationalist way like these guys. You can see some of id's demos on YouTube.
The level of detail that you can achieve with this is indeed amazing, and one of the main features is that you don't have to keep all the data in memory to draw it. It is very easy to determine exactly the required data block. This, together with a clever compression algorithm, means it can be streamed even from a DVD/Blu-ray and still produce very good results, which allows for enormous models on the order of tens of gigabytes.
However, I think id said that they were going to use this only for landscapes and still use polygons for animated objects.
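The octree part of SVO is easy to sketch. Here's a toy illustration (not id's or Euclideon's actual code) of the core property mentioned above: a sparse octree stores nothing for empty space, and locating the data block for any coordinate takes only O(depth) steps, which is what makes on-demand streaming of huge models plausible.

```python
# Toy sparse-voxel-octree lookup. Coordinates are integers in [0, 2**depth).

def child_index(x, y, z, level):
    """Which of the 8 octants contains (x, y, z) at a given level?"""
    bit = 1 << level
    return ((1 if x & bit else 0)
            | (2 if y & bit else 0)
            | (4 if z & bit else 0))

class Node:
    def __init__(self):
        self.children = {}   # sparse: only occupied octants exist
        self.payload = None  # color/normal data at the leaves

def insert(root, x, y, z, depth, payload):
    node = root
    for level in range(depth - 1, -1, -1):
        node = node.children.setdefault(child_index(x, y, z, level), Node())
    node.payload = payload

def lookup(root, x, y, z, depth):
    node = root
    for level in range(depth - 1, -1, -1):
        i = child_index(x, y, z, level)
        if i not in node.children:
            return None      # empty space: nothing stored, nothing fetched
        node = node.children[i]
    return node.payload

root = Node()
insert(root, 5, 3, 7, depth=3, payload="rock-voxel")
print(lookup(root, 5, 3, 7, 3))   # "rock-voxel"
print(lookup(root, 0, 0, 0, 3))   # None: empty octants cost no memory
```

Only the nodes along the path to occupied voxels ever exist, which is why the detail level can scale without keeping the whole model resident.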
"It can't be utilized with physics or animation" - Yes it can, polygons aren't what make animation and physics possible. All you need is to connect an action to an object. A game having twice the polygons of another game doesn't mean it has twice the physics or animation of the other game, it just looks better.
No, in fact: polygons are what makes animation possible. Polygons, meshes of vertices, are deformable. They're positions in space which are transformed by linear transformations. Because a triangle is always planar (unless the three points become colinear, at which point it becomes invisibly thin), triangles always represent closed surfaces. So you can deform a mesh of triangle vertices and, as long as you don't break the mesh with your transforms, you will know that the mesh remains closed.
For "characters", it's even more complex. What you have is a number of positions over a "skeleton", where each "bone" in the skeleton is a transformation. Some of those positions are transformed by multiple transformations; this is what allows for smooth weighting across a complex, deformable mesh.
The technology they showed cannot do this. Indeed, it's not even clear if it can render multiple static objects in different positions each frame (ie: just sliding things around). Without deformation, you pretty much give up on humans, cloth, etc. And while some games could certainly get by without people, not all or most of them could.
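The skinning scheme described above can be sketched in a few lines. This is a toy 2D version with made-up bones and weights, not any engine's real code: each vertex is transformed by every bone that influences it, and the results are blended.

```python
# Linear blend skinning, 2D toy version: a vertex influenced by two bones
# lands at the weighted blend of the two transformed positions.
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def apply(m, v):
    return (m[0][0]*v[0] + m[0][1]*v[1],
            m[1][0]*v[0] + m[1][1]*v[1])

def skin(vertex, bones, weights):
    """Blend the vertex position under each bone's transform."""
    x = sum(w * apply(b, vertex)[0] for b, w in zip(bones, weights))
    y = sum(w * apply(b, vertex)[1] for b, w in zip(bones, weights))
    return (x, y)

shoulder = rot(0.0)              # upper-arm bone: no rotation
elbow    = rot(math.pi / 2)      # forearm bone: bent 90 degrees

# A vertex near the elbow, weighted half by each bone, lands halfway
# between the two transforms: that's the smooth deformation.
print(skin((1.0, 0.0), [shoulder, elbow], [0.5, 0.5]))
```

The crucial part is that the mesh connectivity survives the blend: the vertices move, but the triangles between them stay closed.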
b) dynamic lighting
Forget dynamic lighting; that's easy (assuming that each position has a normal and reasonable lighting parameters, and can have a user-defined shader program executed to generate the color of it). Shadows are hard. Notice how their shadows in the video are pretty much just slightly darkened patches under trees and such. There's nothing like actual shadow mapping going on here.
The shadows that they say they're working on look like pre-baked Quake-1 style shadows. Sure, they have more detail than Quake 1 shadows, but they're still pre-baked. Will there be proper shadowing for characters that pass under the shadow?
Oh that's right; this technology only works for static scenes.
Also, let's not forget anisotropic filtering and antialiasing. The Youtube compression hides many sins, but I seriously doubt their method can antialias very well. Anisotropic filtering can only work with textures, so they're going to have to put together one hell of an antialiasing package to compensate.
but we have NO IDEA how their engine works.
We do have an idea how it works; their presentation says how it works. It is essentially a complex query algorithm over a database of points that serves as a combination of frustum culling and LODing. This gets a field of visible points, which they write to an image.
Once you start moving points around, you now need to incorporate transforms. And since they have "infinite detail", that's a lot of transformation of points. You can't use the database query to cull points because until you've finished the transformation, you can't know which points might be visible. Your transform has to be done pre-culling. So you're going to waste a lot of time transforming points that aren't visible.
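A toy illustration of that culling problem, with invented numbers: for a static world a spatial index hands back just the candidate points, but once everything moves each frame, the index over the old positions is stale, so every point must be transformed and tested.

```python
# Static vs. dynamic culling cost, toy numbers.
fmin, fmax = (0.0, 0.0, 0.0), (9.0, 99.0, 0.0)   # pretend view frustum

def visible(p):
    return all(lo <= c <= hi for c, lo, hi in zip(p, fmin, fmax))

world = [(float(x), float(y), 0.0) for x in range(100) for y in range(100)]

# Static case: a (pretend) spatial index returns a small candidate set,
# so the visibility test only touches ~1/10 of the points.
candidates = [p for p in world if p[0] < 10.0]
static_work = len(candidates)
static_visible = sum(visible(p) for p in candidates)

# Dynamic case: every point moved this frame, so all of them must be
# transformed and tested before we know what's visible.
moved = [(x + 0.5, y, z) for (x, y, z) in world]
dynamic_work = len(moved)
dynamic_visible = sum(visible(p) for p in moved)

print(static_work, dynamic_work)   # 1000 points touched vs. 10000
```

The transform itself is cheap per point; the problem is that "infinite detail" means the untransformed-point count you must touch before culling is enormous.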
On August 02 2011 05:03 9heart wrote: You should never feed trolls, but:
Quote: "21 trillion 62 billion 352 million 435100 polygons"
That would translate to at least 21,062,352,435,100 triangles without instancing, or 21,062,352,435,100 * 3 vertices, or 21,062,352,435,100 * 3 * 3 xyz-coordinates, or 21,062,352,435,100 * 3 * 3 * 4 = 758,244,687,663,600 bytes = 689 terabytes worth of geometry data, assuming 4-byte floats.
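For what it's worth, the arithmetic in that post checks out; here it is as a quick script (same numbers as the post, nothing new):

```python
# 21,062,352,435,100 triangles, each with 3 vertices of 3 float32
# coordinates, stored naively (no instancing, no index buffer).
polygons = 21_062_352_435_100
bytes_total = polygons * 3 * 3 * 4   # verts * coords * bytes per float
print(bytes_total)                   # 758,244,687,663,600 bytes
print(bytes_total // 2**40)          # 689 binary terabytes of raw geometry
```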
That is a pretty basic issue that I am not sure anyone can get around. Every point has to have its coordinates stored somewhere, and while you might be able to come up with a more efficient coordinate system you still need to put trillions of easily-accessible points into it.
Well, not if there are certain constraints. For instance, the positions of the atoms that make up a rock can all be defined in terms of 6 parameters: the 3D coordinates of one atom in the rock, and the 3D coordinates of the rock's angle of rotation. Only if the rock is broken into smaller pieces do more parameters need to be defined. The same goes for all contiguous objects, and it may be possible to do other clever compressions. For example, you could define whole regions of the island that are not currently being interacted with in terms of 6 coordinates. Or you could make approximations for different spatial scales, e.g., only visualize 1 in a million atoms for objects smaller than a certain scale. I don't know exactly what they're doing, but I think there are many such clever algorithms that could be used to achieve compression.
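A quick sketch of that instancing idea, with invented sizes: store the "rock" point cloud once as a template, plus a six-number (position, rotation) record per copy. The storage collapses from atoms-times-copies to atoms-plus-copies.

```python
# Instanced storage vs. naive copies, toy numbers.
ATOMS_PER_ROCK = 100_000
rock_template = [(i * 0.001, 0.0, 0.0) for i in range(ATOMS_PER_ROCK)]

# 1,000 rocks on the island, each described by just 6 floats
# (xyz position + xyz rotation):
instances = [((x * 5.0, 0.0, 0.0), (0.0, 0.0, 0.0)) for x in range(1000)]

atoms_if_copied = len(rock_template) * len(instances)        # 100,000,000
floats_stored = len(rock_template) * 3 + len(instances) * 6  # 306,000
print(atoms_if_copied, floats_stored)
```

Which is exactly the trade-off visible in the demo: huge apparent detail, but the same few source objects repeated everywhere.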
Yes, but if this were true, then it would eliminate the point of rendering the rock in "atoms" in the first place.
The whole idea is that if my player character fires a bullet that punches a hole in a leaf of one branch of one tree, then the hole persists, because it's not a polygon, it's a million little atoms.
If I damage the rock in some way, it needs to adjust in real time and persist that way because it's made up of millions of little "atoms". The WHOLE PURPOSE of making a game with this ridiculous fictitious tech would be to do something not doable with polygons. If you're just going to make a rock and keep its model and orientation, then that can be done with current polygon technology.
Why are they comparing their graphics with the graphics of 2006 games? This sounds more and more fishy to me. Not much hope, but it would be nice if they were actually able to do this.
My guess is that, as a start, they won't replace the old polygonal way of creating things with this new way. I think they will somehow blend them together, keeping some things polygonal and then making other stuff, such as environmental objects, out of the point clouds we see in the video.
Yeah it looks nice on paper but what happens when you try to move something with a billion particles? Yeah, nothing. In the video everything is static.
The thing is, you're still thinking in terms of polygons. Moving things around should actually be insanely easy. Here's a rock object; move it x points to the west. Cool, easy as crap, done. That's not an animation, though; it's just moving an object. So let's say we have a dude and we want to move his arm. In polygon-world you would do transforms, but here you could have a completely different system. Say you have a skeleton in the arm, and you have a shoulder object and a biceps object. All you do is rotate and move the biceps and shoulder objects to follow the skeleton, and use an algorithm to "fill in" areas with "atoms", similar to how many people do Flash cartoons, using separate objects for movable parts.
Don't go thinking that just because something is done a certain way in a polygon-based game, it has to be done exactly the same in any system. Creating a 2d and then a 3d game shows how massive the difference can be to create even a minor effect.
It may not be completely bogus; they may indeed render VERY detailed environments in real time, maybe even animate them. What the video seems to imply is that you can make a game like Crysis and just add detail everywhere while maintaining all other effects like dynamic lighting, multiple different models, etc.; you can clearly see this when they juxtapose their life-like vines with the simple flat surfaces of standard game environments. There are limitations to how much you can compress data without losing anything, and it clearly shows in their presentation: just a few models copied everywhere, structures that are detailed but formed from box-shaped elements, and they are back to square one in terms of rendering believable water or any object with no coherent structure, for that matter. What makes this interesting is that there are tons of people like Carmack who will find a way to gradually introduce similar systems into their engines, overcoming the mentioned obstacles in the process, limiting the objects rendered using this technique in order to actually render something more than the same elephant over and over. It clearly won't be a revolution, and no engine will use just this method, but I'm looking forward to seeing it introduced on a small scale alongside standard polygons.
On August 02 2011 19:48 valaki wrote: Yeah it looks nice on paper but what happens when you try to move something with a billion particles? Yeah, nothing. In the video everything is static.
It is already being moved, though. How do you think the "camera" works to build a projection on your screen? It rotates the world around you. Adding another transformation step to support simple animation is pretty easy.
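That point can be made concrete with a toy 2D example: the "camera" is already a transform applied to every point each frame, and per-object motion just composes one more matrix on top of it before the points are touched.

```python
# View and model transforms compose into one matrix per object,
# applied to each point as usual. Toy 2D rotation matrices.
import math

def rot(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def mul(a, b):   # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(m, p):
    return (m[0][0]*p[0] + m[0][1]*p[1], m[1][0]*p[0] + m[1][1]*p[1])

view = rot(math.pi / 4)       # camera: rotates the whole world
model = rot(-math.pi / 4)     # this object: rotated the other way

p = (1.0, 0.0)
mv = mul(view, model)         # composed once per object, not per point
x, y = apply(mv, p)
print(round(x, 6), round(y, 6))   # the two rotations cancel: (1.0, 0.0)
```

The composition itself is cheap; the open question (raised elsewhere in the thread) is how their visibility query copes once points no longer sit where the precomputed database says they do.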
Missing the point.com
No, the point is that it's faster. Current games get better looking by raising the polygon count, compare an old FPS to a new one. Character models used to be a few hundred polygons, now they are several thousand polygons. This is straining as shit on computers and 3d cards. The point of this idea is that you can get FAR better detail than by multiplying the polygons hundreds of times, without it being more straining on the hardware.
Shooting a leaf would probably do nothing, just like in a polygon game, since there won't be enough memory to have thousands of leaf objects on every tree in a forest.
It's not harder to move a billion particles than one particle for a computer, since it isn't going to actually move every single particle one at a time. It just calculates a point. An analogy in 2d works: what's faster for the computer to render in a 2d game? A 1 red pixel box moving over the screen, or a 128x128 sprite of many colors? Answer: Doesn't matter unless the game is specifically programmed to optimize either solution. The game will still rewrite the whole scene with every sprite every frame, doesn't matter if the big sprite is in position 0,0 in one shot and 453,621 in the next.
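That 2D analogy can be made concrete (toy code, invented sizes): a full-frame renderer repaints every pixel regardless of where the sprite sits, so moving the sprite costs nothing extra.

```python
# Full-frame blit: cost is the same wherever the sprite is drawn.
W, H = 64, 64

def render(sprite, sx, sy):
    """Repaint the whole frame, then stamp the sprite at (sx, sy)."""
    frame = [[0] * W for _ in range(H)]
    pixels_written = W * H
    for dy, row in enumerate(sprite):
        for dx, v in enumerate(row):
            frame[sy + dy][sx + dx] = v
            pixels_written += 1
    return frame, pixels_written

sprite = [[1] * 8 for _ in range(8)]
_, cost_a = render(sprite, 0, 0)
_, cost_b = render(sprite, 40, 20)
print(cost_a == cost_b)   # True: same work wherever the sprite moves
```

Note the analogy only covers *rigid* motion of whole objects; it says nothing about deforming an object's own points, which is the harder case raised elsewhere in the thread.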
Then how come they never show animated stuff? May I remind you that they already came up with this about two years ago? With a very, very similar video?
I'm telling you: they're trying to scam investors. They never showed anything to answer what people criticized years ago (or was it one year? I don't know exactly); nothing has changed, it's all the same.
As others have said, the camera moves, which is enough to prove that moving static objects is a non-issue. As for whether or not this is a hoax, I have no idea. It could very well be prerendered. I'm just speaking about the theoretical technology.
You do realize that animation and manipulating the world and projection matrices are not the same thing?
EDIT: It might work for statics, but that's it. Without proper animation possibilities and shadows, it won't work for games or anything productive.
I don't know why everyone seems to be complaining so much about where you store all the data; maximum capacity doubles every year. The maximum capacity of memory nowadays is 2^10, or 1024, times the maximum capacity of ten years ago. Following this logic, in ten years the maximum memory capacity should be 4TB, more than enough for any animation that would be needed by something developed now.
On the subject of it only having still objects and no real-time animation: any new technology takes time to develop. This style of rendering only came about a few years ago; it's gonna take a while to get up to speed.
Again, this technology came out 19 years ago. They only improved it, but the old constraints still are the same and were the reason why it was abandoned in favour of polygon technology.
Like I explained before, there can easily be systems which allow proper animation. If you can move one static object in relation to the camera, connect two objects and move them together: that's animation. Make a car object, put a wheel object on it, rotate the wheel object, move the car forward: a moving car animation.
It would be very hard to do certain things, definitely; for example, making a leaf bend in the wind. Rotating a wheel would take almost zero computing power, but dynamically calculating the movement of a few hundred thousand atoms based on a force? That would be insanely heavy. Overall, moving objects should be easy, and a big object can be animated by splitting it into many smaller objects. The problem is when the animation needs to actually change the objects themselves, since then actual work on individual atoms would be needed.
Who knows what smart programmers can come up with though.
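The car-and-wheel idea sketches out as a two-level transform hierarchy (hypothetical names, 2D for brevity): the wheel spins in its own local frame, is offset to its mount point, and is then carried along by the car's transform.

```python
# Two-level transform hierarchy: wheel local spin -> mount offset -> car.
import math

def apply(angle, offset, p):
    """Rotate p by angle, then translate by offset."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1] + offset[0],
            s * p[0] + c * p[1] + offset[1])

def wheel_point_world(car_pos, wheel_offset, wheel_spin, p):
    p = apply(wheel_spin, (0.0, 0.0), p)                    # spin about axle
    p = (p[0] + wheel_offset[0], p[1] + wheel_offset[1])    # mount on car
    return apply(0.0, car_pos, p)                           # carried by car

# A point on the wheel rim after the car drives 10 units forward and the
# wheel turns half a revolution: the rim point ends up on the far side.
print(wheel_point_world((10.0, 0.0), (2.0, -1.0), math.pi, (1.0, 0.0)))
```

This moves whole rigid pieces cheaply, as the post says; it still does nothing for the hard case the post admits to, where the points inside a piece must each move differently.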
Well, since I haven't worked with their technology I can't tell if that is true, but I imagine it is not possible. Why would they work on it for more than 12 months and then not show at least SOME animation?
Yes, in theory what you say is right, but this is why the standard consists of polygons and not atoms. If it were possible with their technology, why don't they just show it? Apparently they know how to use modelling programs, so they could easily make a few animations.
Well, the same thing happened with TV; it took years to become big. Of course any new technology is going to take a while; as the saying goes, 'Rome wasn't built in a day'.
Unfortunately I don't really know much about CPUs and GPUs over the past ten years, since there are so many different models for each new design, but today's graphics cards are a hell of a lot better than the old ones.
The lack of animation is an obvious concern. Like everyone has already posted, this is not a new thing. These guys have been around for ages and yet everything still looks the same.
It looks cool, but honestly, graphics don't make the game at all. However, I think it would help shooting games a lot, because it would help with immersion and creating an atmosphere.
On August 02 2011 03:24 Valashu wrote: Be true dammit! I want minecraft in infinite powah mode!
Isn't the charm of Minecraft the fact that it's blocky and it leaves a lot to the imagination? You know, the thing people pine for from video-games prior to the 2000s?
Re Euclideon, no chance of a game on current gen systems, but maybe several years from now. Production issues will be challenging.
High-poly models aren't everything in a video game; there's so much behind the scenes that all needs to work with this new technology, and a whole lot would need to be built up from scratch.
I have been working with both programming and graphics for a while now, and I see one limit with this.
It is fine that they have created an engine that can convert vast amounts of information on "atoms" into polygons for rendering, and that they are optimizing so the polygon count is higher for near objects.
But how do you handle an artist creating a 3D environment so detailed that it requires thousands of TB of data just to store?
I don't want to have my game delivered in a truck full of Blu-rays, or download it for 10 years via the internet.
It is OK to be able to render it fast, because you have an algorithm that can reduce the source material to a limited number of polygons for a given camera angle, but how can consumers be expected to have the hardware to handle and process the data from the source before rendering?
The reason why it works in the example they give, is that even though the details for the individual objects are really high, they are re-using the same source data for multiple objects in the world, as you can see in the video. This will reduce the source material size.
I am not saying that you cannot use this technology, but it is certainly not a new idea, and if they have some good new algorithms then that's fine.
But this is not ground breaking, nor a new idea in the academic world.
1. There's nothing that new about using atoms; the only reason they weren't always used is computational limitations. 2. As we get better graphics boards, polygons can get smaller and smaller. How small do you think a polygon will get? That's right, the minimum size will be similar to those atoms, but besides bringing the same quality it will bring more speed, because you don't need distinct atoms to describe some things; a polygon approximation will do without ever being spotted by a human eye.
Seems to me like there's little new and even less that's impressive in that vid.
On August 03 2011 00:48 Cyba wrote: It's a bit of bogus imo here's why.
1. There's nothing that new about using atoms, the only reason they weren't always used is because of computational limitations 2. As we get better graphics boards polygons can get smaller and smaller. How small do you think a polygon will get?
Yeah, but as I understood the video, they have found a way to dramatically decrease the computational work needed to render per atom, so that you can have more complex objects. I don't know how they do it, but it sounds interesting and I would like to know more =)
On August 03 2011 00:47 Soleron wrote: It's an investment scam as they've been around for over a year with nothing but this video to show.
They can't create art that hasn't been drawn, and the bottleneck to better graphics in games is the art budget not the hardware or engine.
The bottleneck is definitely the hardware and the computational methods! I don't know if this is a scam or not, though.
Why are people bashing? This is a first release; the guy said that the project is very far from being finished. This is just an update. And it doesn't matter if it's not usable for now; it's not like you invested in it or anything, you just watched a 7-minute video. Waiting for the next update.
On August 03 2011 01:13 _fool wrote: There's a famous saying in software: "Don't make the demo look done".
Looks like they're doing just that.
Nah, they mentioned how they didn't have proper lighting at that stage, etc. It's an update presentation about how far they've come since their original video a year ago. Reasonable IMO.
On August 02 2011 01:02 Morfildur wrote: Can't listen to the video atm since i'm at work, but it reminds me of the old Voxel Technology (Old meaning: DOS Games used it) => http://en.wikipedia.org/wiki/Voxel . The first game that used it iirc was "Comanche" (http://en.wikipedia.org/wiki/Comanche_series)
Voxel technology has started to be used in modern game engines as well, even commercial ones. The C4 Engine is well known for having the first voxel terrain editor in a commercial engine (I believe it even beat CryEngine 2 in the timeline). It also has an upgrade coming soon that will allow voxel blob particles (similar to the Portal 2 "gel" particles, which may be voxels too, I'm not sure). The lead developer of the engine released a video of it recently; those are all voxels with full lighting and great performance.
Back on topic, it's all marketing at this point. It's too early to be anything but skeptical until the SDK is actually public.
I'm pretty sure their first video addressed that the technology they were using wasn't new, but that they had developed some sort of search algorithm so that not all the individual atoms have to be rendered at the same time, which would increase performance a whole lot.
I do not understand graphics that much, so can anyone enlighten me on why this is so hard to run on modern or even next-generation CPUs? It looks like almost the exact same graphics as CryEngine 2 on max, maybe a BIT better?
Apart from the technical side, what really annoyed me was the: "We made the graphics 100,000x better."
There is no way to objectively judge graphics quality. I always loved the Quake art style, where you can have a few platforms floating in space and that's the whole level. All gameplay, no bullshit. All those "pretty" graphics create visual clutter and get in the way of actual gameplay.
I do not understand graphics that much, so can anyone enlighten me on why this is so hard to run on modern or even next-generation CPUs? It looks like almost the exact same graphics as CryEngine 2 on max, maybe a BIT better?
He explained it in the video: it runs slower because it's made of particles instead of polygons. A polygon is basically an "area" between a certain number of corners (think triangle/square), which draws very fast, whereas having very, very tiny atoms everywhere is really, really slow, because you have to substitute a lot of atoms (think 10,000+) for every single polygon. On the plus side, it looks a lot nicer and allows artists FAR greater detail (think real-life level of detail), but it works way, way slower.
Voxels are old news. There are pros and cons to using them. The video speaks of some pros and ignores the cons, because they want to create hype to get funding.
Until they solve those problems, it seems to me that this technology would be most useful for artists using 3D programs. If a modeling program could use this technology natively or pseudo-natively, it would save a lot of processing power in workstation graphics, since most of what they deal with is static modeling. This would still require a way to edit the models in their voxel form, though, rather than just a polygon converter.
We already have render proxies, don't worry; 3D artists aren't under stress finding ways to handle high polycounts.
Yesterday we posted a video from Euclideon – an Australian company that claims it can revolutionise video game graphics, increasing visual fidelity by a factor of 100,000. This morning we spoke to Euclideon’s CEO Bruce Dell – the man Markus Persson calls a “Snake Oil Salesman” – to ask a few questions regarding Euclideon’s ‘Infinite Detail’ technology. “I think what I would like to make clear is that this is not the finished product,” says Bruce Dell, CEO of Euclideon. “We feel like a mother who put cookies in the oven, and now everyone is surrounding the oven chanting ‘are they ready yet? Are they ready yet?’
“Give us time and the cookies will taste just fine!”
Instantly we recognise the voice — it’s the voice from that video. The voice that claimed Euclideon could revolutionise video game graphics, the voice that claimed a new technology called ‘Infinite Detail’ could increase visual fidelity by a factor of 100,000. The man Markus ‘Notch’ Persson, the creator of Minecraft, openly called a “Snake Oil Salesman”.
It’s 9am in Brisbane, and we’ve just woken said Snake Oil Salesman up.
“No! No, this isn’t a hoax,” Bruce Dell laughs, in response to our first, obvious question. “If this was a hoax then we’ve convinced the Australian government it was a hoax. We’ve convinced our board of directors and investors it’s a hoax! “We have a government grant – so no, it is not a hoax! We have real time demonstrations.”
The response to Euclideon’s demonstration video, which we posted yesterday, was instantaneous and fairly mixed. Some were cynical, some called it a hoax, others were more receptive – but it was hardly a convincing demonstration. Markus Persson, writing on his own personal blog, was perhaps the most scathing in his criticism.
“They’re hyping this as something new and revolutionary because they want funding,” wrote Persson. “It’s a scam.”
But if it’s a scam, then the Australian Government is the mark, having invested 2 million dollars into Euclideon and its technology.
LOOKING FOR SNOW WHITE We asked Bruce to explain the technology and how it worked.
“Well, basically anyone who is technical is going to say you can’t run that many polygons,” he began, “but in the past we were trying to explain it in simple terms so people could understand.
“A good analogy would be this: imagine you go to a library to find a book — say… Snow White. Imagine you go to a library and those books aren’t on the shelf; they’re all lying on the ground. At the moment systems that run point cloud data are doing that, they’re putting every point on the screen and there is no order to it. Now imagine you go to a library and all the books are on the shelf and in order – you go to the ‘S’ Section, then look for ‘SNO’ and it isn’t long before you’ve found the book you need.
“One system is looking at thousands of books,” he continues, “and the other system is looking at ten labels. That’s the basis of a search algorithm like Google or Yahoo – they sort through all the knowledge in the world really quickly because it’s categorised. “We made a search algorithm, but it’s a search algorithm that finds points, so it can quickly grab just one atom for every point on the screen.”
According to Bruce Dell, it’s all about efficiency.
“So think about the difference,” he says. “If you had all of the points you are seeing on the screen, like in our demo, it’s going to take forever. You’ll be waiting for a long time. But if you’re grabbing only one for every pixel on the screen, then you don’t have a trillion dots, you have… well, pick a resolution and do the maths!
“That’s the difference. In layman’s terms that’s how we’re doing what we’re doing. The workload is so small that at the moment we’re running software just fine with real time demonstrations and we’re still optimising, because we keep finding more efficient ways to do this.”
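The arithmetic behind Dell's claim can be sketched in a few lines. All numbers here are purely illustrative (a hypothetical trillion-point scene indexed by a plain octree); Euclideon hasn't published its actual algorithm:

```python
import math

# Illustrative numbers only: a hypothetical trillion-point scene
# rendered at 1024x768, fetching one point per pixel.
scene_points = 10**12
pixels = 1024 * 768  # 786,432

# Naive point-cloud rendering touches every point, every frame.
naive_work = scene_points

# With a spatial index (e.g. an octree), each pixel needs roughly one
# O(log n) descent to find the single point that covers it.
levels = math.ceil(math.log2(scene_points))  # ~40 levels deep
indexed_work = pixels * levels

print(f"naive:   {naive_work:.1e} point visits per frame")
print(f"indexed: {indexed_work:.1e} node visits per frame")
print(f"ratio:   ~{naive_work / indexed_work:,.0f}x")
```

This is the "library with labeled shelves" analogy in numbers: per-frame cost depends on screen resolution and index depth, not on the raw point count.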
That appears to be all well and good, but most criticism from the games industry has come from the detail Euclideon has been a little more coy on: animation, physics …
“[V]oxels are horrible for doing animation,” wrote Markus Persson in his aforementioned blog, “because there is no current fast algorithms for deforming a voxel cloud based on a skeletal mesh, and if you do keyframe animation, you end up with a LOT of data. It’s possible to rotate, scale and translate individual chunks of voxel data to do simple animation (imagine one chunk for the upper arm, one for the lower, one for the torso, and so on), but it’s not going to look as nice as polygon based animated characters do.”
According to Bruce Dell, the reason no animations have been shown is simple – Infinite Detail is still a work in progress.
“We have animation,” claims Bruce, confidently. “We’re certainly going to do a lot more work in that area. I have faith that you’ll find our animation quite satisfactory, but we have no intention of releasing anything in that department until it looks absolutely 100% because if we release it now, I assure you that no-one will take it as ‘that’s where we’re up to and we’re still working on it’, they’ll just scream ‘it’s not perfect yet! They can’t make it perfect! This can’t compare to polygons!’”
THE EMPIRE STRIKES BACK We spoke to an Australian physics engine developer with experience of Bruce Dell and Euclideon. His company dealt with Bruce Dell years ago, when Euclideon was seeking funding for the Infinite Detail project. Said company declined to fund the project, citing issues with memory management, particularly when it came to animations.
According to him any live demonstrations given by Euclideon featured poor art and assets, so it was difficult to gauge precisely how hardware intensive Infinite Detail actually was.
The developer in question asked not to be named, but his primary concern wasn’t with the ‘Infinite Detail’ tech itself, which he claimed could work with adjustments – the issue was the toolset and the investments required to move an entire industry across to a new standard. Currently every game developer in the world is using tools dedicated to polygons – convincing an entire industry to toss years of investment and research would be a difficult task indeed, especially with an unproven technology.
Bruce Dell disagrees with that assertion.
“I see comments from people saying the games industry will never use this,” he begins. “Well, this industry isn’t quite so old and stubborn. The games industry is actually quite open and we’re in contact with quite a lot of players in that industry.” According to Bruce, the sheer efficiency of his technology will win developers over.
“The present polygon system has got quite a few problems, but not in terms of graphics. Polygons are not really scalable between platforms – if I were to make a character on a PlayStation 3, I can’t put him on the Nintendo Wii because he uses too many polygons, so I have to completely rebuild him. Imagine we weren’t doing a polygon game, say we were doing a 2D game, if I drew a character on the PlayStation, he’s just a bitmap image – this can easily be rescaled. You could do it in Microsoft Paint! ‘Infinite Detail’ data is like a 2D bitmap image in that rescaling its size is easy, whereas polygons can’t scale like that. “The big thing is – if you make a game using the present polygon system, you have to rebuild it to rescale it. You don’t have to do that with Unlimited Detail.
“The industry’s response was, basically, what you have is really good, you do not understand that the industry is used to using polygons and our tools are very good. I took a look at those tools and thought yes, they are very good. We want to get things to the stage where the artists don’t have to change anything, just that now they’re using unlimited detail.”
Not all developers have openly dismissed Bruce Dell and his ‘Infinite Detail’ technology, but even the most optimistic have opted for a ‘wait and see’ approach. John Carmack, for example, mentioned Euclideon briefly on his Twitter account, claiming that “production issues would be challenging” but wondering if the tech might be viable “a couple of years from now”.
Even Bruce Dell himself admits that he needs time. Come back later, he says, perhaps sooner than we think, and we might get the final product.
“Basically we’re in the middle of a trilogy and this is like our Empire Strikes Back,” he explains. “We disappeared for so long that I think everyone thought ‘oh, they’re dead’. So we thought we’ll release a one year report, tell everyone we’re alive and then disappear again.
“The intention is to come out again, once we’ve finished, and then we’ll be releasing real time demonstrations.”
He explains a little more about how their technology works. Take what you want from it, I'm sticking with the wait-and-see philosophy until they start coming out with real time & animated presentations.
They asked the "Snake Oil Salesman" if it's a scam and he said no. How surprising :p
Well, I agree with you; wait and see is the best way to go. I highly doubt this technology, especially since they only give a basic description and no technical description for those who can actually understand the technology, but maybe they have a big secret that no one else knows.
"Hey, it doesn't have to search, it finds stuff immediately, no matter how big the dataset is"... uhm... well...
You know what also might be viable: an increased number of DX11 games featuring tessellation at high levels, because the hardware can handle it, along with commonplace texture and shading techniques. The so-called unlimited detail isn't that impressive, especially with the known hardware and animation issues of such a venture. The truth is, short of this being amazing beyond belief (which it's not), the industry isn't going to shift to make their product more viable, even if it actually came with everything needed, such as proper animation.
Ken Silverman (the guy who wrote the Build engine, used in Duke Nukem 3D) has been working on a voxel engine called Voxlap, which is the basis for Voxelstein 3d:
And there’s more:
Every implementation seems to be a memory/hard-drive hog. This could be somewhat alleviated with wavelet-based progressive volume data compression, as noted in an Intel article. The Atomontage engine demonstrates limited deformation in real time; another demo shows some kind of dynamic lighting and procedural generation. It is also possible to animate them.
This piece of technology has great promise, but that does not mean it completely supersedes current solutions. It has its advantages and disadvantages, and would be best used in conjunction with polygon-based and other kinds of rendering, where those produce suboptimal results: terrain, vegetation, LOD, repetitive scene elements.
It actually would be amazing as the basis of a strategy or role-playing game with user-editable maps and repetitive scenery and objects, using voxels instead of sprites and 3D instead of isometric view.
To quote a youtube comment:
zelexi: A good question along the same vein, which really gets to the heart of it, is: what can I do with SVO that I can't do with polygons? I think that SVO is a simplifying factor. It solves LOD, streaming texturing, streaming geometry, UV unwrapping, etc., all in *one* single algorithm. A giant hammer with which to make game development easier and higher quality. The end result of voxels is what many other algorithms struggle for, and what voxels achieve relatively effortlessly.
I honestly think it is just impossible. I don't know a lot about technology, but think about the amount of physics that would have to happen when you do something.
News flash: Graphics will continue to progress in ways you thought impossible until they can progress no further
Similar to this, Gordon Moore, co-founder of Intel, coined what's called "Moore's law": that CPUs will have twice the number of transistors roughly every two years, and it has held true for decades.
This looks really nice; we'll finally have games that look like real life, or close enough to it at least. I want to know how water physics and air will work with the "atoms", since the water/air should be made out of them too.
On August 02 2011 00:57 BansheeDK wrote: It looks awesome. But everything in the demonstration was standing still. Wouldn't it tax the hardware a lot more when you try to animate stuff? I don't know too much about it myself, but I would imagine that rendering still objects are much easier than when they are moving.
Yes it will. But the difference is that the computing power we will have in 5-10 years' time, when this technology is perhaps implemented in games (it's still in the development phase), will be more than enough to run these animations.
The average gamer won't have an up-to-date rig. You need to aim about two years behind with specs...
"Oh we couldn't possibly fool the government that it's a scam! Honest!"
Oh please, as if the Australian government (or any government) has a damned clue about the gaming industry and what is and isn't worth investing in. They just got lured into buying the same baloney this video shows.
His excuse for animation is pretty poor, too. If you want to silence the skeptics, just show some animation and physics. Claiming it has to be 100% perfect kind of shows very little understanding of how amenable the industry is. It really doesn't matter how bad it is. Just show it can be done and let people's imaginations run wild. So, why the secrecy there? It HAS to be 100% perfect? Please. What a loaded claim.
The industry is pretty harsh on unfinished products, to be honest. It's why very few game companies enjoy releasing alpha footage of gameplay (as it simply leads to bad publicity). Government grants usually come from specific tech sectors, e.g. the NIH (as I wrote in my blog explaining this sort of thing), so if you really want to bash the grant stuff, you need to look up where exactly the money came from and the processes behind acceptance.
On August 12 2011 00:22 Bodom wrote: [H]ardOCP did a follow up with Euclideon's founder and lead engineer Bruce Dell, also the voice on the original video. Euclideon follow-up
In the update interview he talks about the algorithm. I think it's true now. It uses a search style of graphics display so that there is less tax on the hardware involved. Very smart. Very real. Can't wait. In the next year or two this will be out, np.
Also, I have talked with a friend about the limitations of sparse voxel octrees. We have come to the conclusion that basic animation and transformation are possible even with static octrees (just like in the video), given some modifications to the renderer, and that for physics you can use a crude polygon model instead of the voxel data. Particle-level deformation might be possible as well on smaller voxel models, if you are willing to recalculate the entire model. At the moment I'm checking some videos and papers on lighting, illumination and reflection.
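The chunk-transform idea (rotate whole rigid pieces of a static model, as Persson describes) can be sketched in a few lines. This is nobody's actual implementation; the model data is made up, and the rotation is about the origin rather than a proper joint pivot:

```python
import math

# Sketch of chunk-based animation for static voxel/point data: the stored
# point clouds never change; each chunk just gets its own rigid transform
# applied at render time.

def rotate_z(point, angle):
    """Rigidly rotate a 3D point around the z-axis."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y, z)

# Hypothetical two-chunk model, each chunk a tiny static point cloud.
model = {
    "torso":     [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    "upper_arm": [(1.0, 1.0, 0.0), (2.0, 1.0, 0.0)],
}

def pose(model, angles):
    """Apply per-chunk rotations; the stored point data stays untouched."""
    return {
        name: [rotate_z(p, angles.get(name, 0.0)) for p in pts]
        for name, pts in model.items()
    }

# Swing the arm 90 degrees without rewriting a single stored point.
posed = pose(model, {"upper_arm": math.pi / 2})
print(posed["upper_arm"])
```

As Persson notes, this gives you robot-like articulation cheaply, but not the smooth skinned deformation polygon characters get.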
On August 12 2011 00:22 Bodom wrote: [H]ardOCP did a follow up with Euclideon's founder and lead engineer Bruce Dell, also the voice on the original video. Euclideon follow-up
In the update interview he talks about the algorithm. I think it's true now. It uses a search style of graphics display so that there is less tax on the hardware involved. Very smart. Very real. Can't wait. In the next year or two this will be out, np.
I also looked into it with some techie people, and the general consensus is that it's entirely feasible with the math we know, but complex things (i.e. constant animation + lighting + other stuff) are too taxing on processing power, hence Carmack's quote noting that it's simply not possible on the current hardware generation. At the moment, this tech would thus be in the "we can see it in 10-20 years" category. The big kicker is that if they've come up with some "crazy algorithm" that makes the calculation significantly more efficient, then this becomes "we can see it in the next few years." That big difference is what Carmack wanted to see demonstrated (via animation and such) but it was not addressed.
So the overarching conclusion is that it's viable, and they've hit an interesting concept, but "how soon" it becomes industrially viable is a different question that was not addressed.
Edit: Also, for people hanging on Notch's word: the impression I get is that he's not actually viewed particularly favorably for his tech skills. "Notch is an idiot who came up with a great idea for a game and then implemented it in... Java." You also hear nightmare stories about his coding.
On August 12 2011 00:22 Bodom wrote: [H]ardOCP did a follow up with Euclideon's founder and lead engineer Bruce Dell, also the voice on the original video. Euclideon follow-up
In the update interview he talks about the algorithm. I think it's true now. It uses a search style of graphics display so that there is less tax on the hardware involved. Very smart. Very real. Can't wait. In the next year or two this will be out, np.
Thanks a lot, this is an excellent interview that everyone should watch to form an educated opinion. I kinda like the CEO guy, i am hopeful.
I feel much more hopeful after this second and much more detailed interview. I'm waiting eagerly for their animated demo, whenever they have it ready. Good luck to them, their out-of-the-box innovating deserves to be rewarded.
My fear is that the memory consumption will be extremely high, and that interactivity and animations will be a problem.
It could well be possible to render this stuff fast, but each little "atom" has to have coordinates, a color, reflection parameters, etc. That's A LOT of data (the demos use only a few distinct assets, so they could just be "duplicating" them and saving memory that way). If you add the amount of calculation needed for animation (applying a transformation to each and every one of those atoms), physics or shadows, I don't think it will run on current hardware.
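To put rough numbers on that memory worry, here's a back-of-envelope estimate. The per-point cost and scene size are pure assumptions, chosen only to show the order of magnitude involved without instancing or compression:

```python
# Assumed per-point cost: 12 bytes for an xyz position (3 floats),
# 3 bytes of colour, 3 bytes for a packed normal.
bytes_per_point = 12 + 3 + 3  # 18 bytes

# Assumed scene: a 100 m x 100 m terrain surface scanned at 1 mm
# spacing, points on the surface only, nothing instanced or shared.
surface_points = 100_000 * 100_000  # 10 billion points

total_bytes = surface_points * bytes_per_point
print(f"{total_bytes / 2**30:.1f} GiB of raw point data")  # ~167.6 GiB
```

Hundreds of GiB for one flat, uncompressed surface is exactly why the demo's heavy asset reuse matters, and why compression or instancing isn't optional for this approach.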
^ Yeah, that's what Carmack was concerned about, and that's the concern most tech people currently have about it. Hence the "wouldn't work on current hardware" thing, and why the interview didn't really address the concern. We shall see. I'm just hoping people have a clearer understanding of why it works but has significant hurdles.
This engine can apparently scale very well. In the video many objects are the same; that's probably the reason why it runs well. I guess it will just introduce scaling and scanning of objects, while remaining taxing on the HDD and RAM. If you think about it further, it is not unrealistic to be able to run such graphics on a high-end PC in two to three years.
I'd just like to point out the analogy between voxels and polygons in 3D, and between raster images (JPEGs, PNGs) and SVGs in 2D. SVGs use math formulas to describe their pictures; JPEGs and PNGs use pixels. SVG file sizes remain constant at any resolution, while JPEG and PNG sizes grow with area. Voxel data size would then grow with the volume contained.
On August 13 2011 03:06 Perscienter wrote: This engine can apparently scale very well. In the video many objects are the same. That's probably the reason, why it runs well. I guess it will just introduce scaling and scanning of objects, while remaining taxing to the hdd and ram. If you further think about, it is not unrealistic to be able to run such graphics on a high-end pc in two to three years.
Yeah, static objects are easy at the moment, and presumably lighting effects and such aren't that far off; the main issue is calculating animation, especially combined with lighting. Roughly, the system works like this: with a 1024x768 screen, it calculates 786,432 pixels' worth of "data" from the objects on screen. For relatively static stuff this is straightforward, since you only have to look up the object map and work out what those pixels should be. But once you start adding effects, you have to calculate the motion of those objects and effects and factor that into the 786,432 pixels' worth of data. That's a lot of realtime calculation, and given that games tend to run at 30+ fps, it's REALLY a lot.
This is why people atm are skeptical about it running on current hardware (CPU/RAM/etc., not GPU), and why the engine needed to demonstrate efficient calculation to prove skeptics wrong. Unfortunately, as the demo showed only static objects, there was no indication of the higher-level real-time calculation load, so the techie concerns about industrial viability (ie. whether current hardware can actually pull it off) remain.
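The per-pixel arithmetic above can be restated in a few lines (the resolution and framerate are just the post's own example numbers):

```python
# One scene "search" per screen pixel, repeated every frame.
width, height, fps = 1024, 768, 30

rays_per_frame = width * height          # 786,432 lookups per frame
rays_per_second = rays_per_frame * fps   # ~23.6 million per second

print(rays_per_frame, rays_per_second)
```

At 23.6 million scene traversals per second before any animation, lighting, or physics, it is easy to see why the static-only demo left the real question open.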
For the people who cried scam, though, the video (if you can follow the technical concept) makes enough sense that you accept the theory as viable and turn your attention to the logistics.
Like many others, I'm interested in seeing this stuff with interaction, movement, and physics. It would suck to build this world only to see it crumble when you add gravity. =P
On August 02 2011 00:54 WniO wrote: the problem with this is they cant render it in real time, or animate for that matter.
yes they can? O_o
you obviously haven't done any voxel animation if you think you can do realtime animations in a world like that at ANY playable framerate. i've seen single character models barely break 40fps being animated.
well, do some research? also remember this is in the FUTURE, meaning everything will be different from today.
they show the game being played in realtime at over 50 fps in a video.
do some research? i've worked with 11 people that work with voxel animation and a lot more, what have you done? read this thread?
i went to expression arts school for 5 1/2 years man.. pretty damn sure about what i'm talking about
well, aren't you awesome?
also, one of the videos on notch's blog is just the same old polygon gfx we've got today with a little mod. anyone can do that today, except maybe valve, they're stuck with their VE bollocks
it's fairly simple to understand. to run a world like they show in that video, built from 100% pure atoms or voxels, there is no computer in the world that can stream data that fast, especially once any AI or animation is involved.
you say this is your "area of expertise", yet it seems like you talk out of the blue with whatever makes sense in your head. 20-50 videos on the internet prove it. and i'm gonna say it once again:
no mod, engine or whatever will launch this year.. next year.. or the year after that. by the time these engines come to power we'll have such good computers that this will look silly.
also, in a video and review i saw, they clearly said engines like this don't require much GPU or CPU power as of now. a gaming laptop could play it at 50 fps in realtime.
this isn't going anywhere, i've said what i wanted, i'm done. do some research before you open your mouth, son.
wtf are you even talking about? far from my area of expertise, but it looks like charlie brown wrote your post dude. womp womp womp.
ya, maybe in 5-10 years this could be feasible, depending on how hard drive and memory speeds increase. but for the near future, impossible. guaranteed.
edit: i also find it hilarious that your first reply to me was something about "believing some programmer making graphics from 1993" or some shit, yet you're arguing back saying you've watched 50 youtube videos on it. you're basically criticizing yourself.
since you've done your "research" and this is completely viable, show me a retail game that runs on ANY of the engines mentioned in that post. please. i'll wait.
On August 13 2011 05:54 Urnhardt wrote: its fairly simple to understand. to run a world like they show in that video, with 100% pure atoms or voxel makeup, there is no computer in the world that can stream data that fast, especially when any AI or animation is involved.
That's pretty much the point of Sparse Voxel Octrees: you don't need to load all the voxels to render a model. If an object is far from the camera and can be rendered at lower resolution, only a fraction of the data, a lower-resolution representation, has to be loaded. The engine then streams in the details from disk as you get closer. This kind of progressive rendering is inherent in the data structure.
Storage capacity is more of a concern than bandwidth, but for opaque models you only need to store the surface, and even there you can trade off quality for storage. I suspect it is possible to use some kind of wavelet-based, multiresolution compression to preserve both rough and fine detail.
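A minimal sketch of the level-of-detail idea behind sparse voxel octrees, assuming a toy Node class and a crude projected-size test (nothing here reflects the actual Unlimited Detail or Atomontage implementation): descend the tree only until a node would project to less than a pixel, so distant objects touch only a few coarse nodes.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    size: float                                   # edge length of this cube
    children: list = field(default_factory=list)  # up to 8 child Nodes
    color: tuple = (128, 128, 128)                # average color of subtree

def visible_nodes(node: Node, distance: float, pixels_per_unit: float):
    """Yield only the nodes needed for a view at `distance`."""
    projected = node.size * pixels_per_unit / distance
    if projected <= 1.0 or not node.children:
        yield node                 # coarse enough: use the averaged color
    else:
        for child in node.children:
            yield from visible_nodes(child, distance, pixels_per_unit)

def build(size: float, depth: int) -> Node:
    """Tiny dense test tree: each node splits into 8 half-size children."""
    node = Node(size)
    if depth > 0:
        node.children = [build(size / 2, depth - 1) for _ in range(8)]
    return node

root = build(size=8.0, depth=3)
near = sum(1 for _ in visible_nodes(root, distance=10.0, pixels_per_unit=100.0))
far = sum(1 for _ in visible_nodes(root, distance=10_000.0, pixels_per_unit=100.0))
print(near, far)  # near view touches all 512 leaves; far view touches 1 node
```

The far view loads one averaged node where the near view loads 512 leaves, which is exactly the "stream only what the camera can resolve" behavior described above.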
The Atomontage engine uses particle-based physics, so that is definitely possible as well, but nothing stops you from using a rough polygonal proxy instead of the voxel model for physics calculations.