I did a search on the topic and it didn't turn up anything, so I decided I'd just share this since I know some people would be interested.
I came across this a while ago on the WoW forums or something and didn't know what to think of it. It seemed kind of fishy to me for a number of reasons, but as unlikely as the story "da evil corporation dunt want deir product made obsolete" seems, there's a very small chance this guy might be legitimate.
tl;dr: guy says this technology can make polygons obsolete by applying a search algorithm to the otherwise horribly inefficient point-cloud system.
The creator's new website is http://www.euclideon.com/ if anyone is looking for *slightly* newer updates. Still, development seems to be either extremely private or just hell slow and sleepy.
Basically, he's made the complexity of displaying the 1024*1680 (or however many) pixels order 1, using some algorithm, instead of something like order n or n^2. I gathered this from him saying that there's an upper bound on the number of dots he needs to grab for the screen. Google probably uses hash tables to make their search lookups order 1.
I think they're doing something similar. Maybe I'll just read the comments and find out.
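To make the "order 1 per pixel" idea concrete, here's a toy sketch. This is definitely not Euclideon's actual algorithm - I'm just using a plain Python dict as a stand-in for whatever precomputed spatial index they really use:

```python
# Toy illustration of "order 1 per pixel": per-frame work is bounded by the
# number of screen pixels, not the number of points in the scene.
# The dict below is a made-up stand-in for a real spatial index.

WIDTH, HEIGHT = 320, 240  # scaled down; a real screen would be 1680x1050 or whatever

# Pretend preprocessing step: for every pixel, the index already knows which
# dot (just a colour here) is visible. Building this is the expensive part.
index = {(x, y): (x % 256, y % 256, 128)
         for x in range(WIDTH) for y in range(HEIGHT)}

def render_frame():
    # One O(1) hash lookup per pixel -> O(WIDTH * HEIGHT) per frame total,
    # independent of how many billions of points the scene contains.
    return [index[(x, y)] for y in range(HEIGHT) for x in range(WIDTH)]

frame = render_frame()
print(len(frame), "pixels shaded")  # -> 76800 pixels shaded
```

The whole point is that render_frame never touches the point data directly, so the per-frame cost depends only on the screen resolution.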
I remember seeing this and reading about it on Digg or reddit last year some time, or maybe it was earlier this year. From what I recall, this type of graphics can't be used to animate anything (except frame by frame, I suppose). So it could be used for cutscenes and static stuff, so you could have some beautiful scenery, but it wouldn't be interactive.
On December 22 2010 11:59 GogoKodo wrote: I remember seeing this and reading about it on Digg or reddit last year some time, or maybe it was earlier this year. From what I recall, this type of graphics can't be used to animate anything (except frame by frame, I suppose). So it could be used for cutscenes and static stuff, so you could have some beautiful scenery, but it wouldn't be interactive.
On December 22 2010 11:51 Hidden_MotiveS wrote: Oh my god! Wonton Soup! Holy... YES! I WANT!
Basically, he's made the complexity of displaying the 1024*1680 (or however many) pixels order 1, using some algorithm, instead of something like order n or n^2. I gathered this from him saying that there's an upper bound on the number of dots he needs to grab for the screen. Google probably uses hash tables to make their search lookups order 1.
I think they're doing something similar. Maybe I'll just read the comments and find out.
I think you'd be surprised at how ingenious their solution is.
I just watched the video and the follow-up demonstration videos.
Looks pretty legit. Admittedly I'm not extremely well versed in the world of graphics, but the concept seems sound.
It looks like this technology is going to be made, but part of me can't help wondering what unforeseen problems might arise once it's in wider use. Will it perhaps be incompatible (for some arcane reason) with the latest generation of graphics cards? Will it only be useful for static environments? Time will tell.
Is it not possible to integrate both unlimited detail and polygons in the same system? Because if you think about it, most things in most games are immovable: walls, floors, tables, the areas in the distance your character sees but can't reach due to the "invisible boundaries" at the edge of the map. Why not let the unlimited detail system render the entire environment pixel by pixel, and then overlay that with the polygon system?
Of course, what I know about this sort of thing is pretty much next to nothing, so I might just be spouting ignorant nonsense.
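For what it's worth, the hybrid idea isn't crazy - a standard way to merge two independent renderers is to have each pass output a colour buffer plus a depth buffer, then keep whichever sample is nearer to the camera at every pixel. A minimal sketch (the buffer names and toy values are made up):

```python
# Per-pixel depth test merging two independent render passes: the static
# environment from a point-cloud pass, movable objects from a polygon pass.

def composite(cloud_color, cloud_depth, poly_color, poly_depth):
    """Keep the nearer of the two samples at every pixel."""
    out = []
    for cc, cd, pc, pd in zip(cloud_color, cloud_depth, poly_color, poly_depth):
        out.append(cc if cd <= pd else pc)  # smaller depth = nearer
    return out

# Tiny 4-pixel example: the polygon object occludes the cloud in pixels 2 and 3.
cloud_color = ["c0", "c1", "c2", "c3"]
cloud_depth = [5.0, 5.0, 5.0, 5.0]
poly_color  = ["p0", "p1", "p2", "p3"]
poly_depth  = [9.0, 9.0, 1.0, 1.0]
print(composite(cloud_color, cloud_depth, poly_color, poly_depth))
# -> ['c0', 'c1', 'p2', 'p3']
```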
I'm not a graphics expert, but I know a good bit about the math that goes into modeling, texturing, animating etc. and about how graphics cards in general work. It's possible this could be some kind of hoax (they haven't released anything solid yet), but nothing really stands out as an indication. In fact, everything said in the videos is not only plausible but almost inevitable.
Polygons are the big primitives that the graphics industry works with. All the current 3d engines (that I know of) operate on and display polygons. All major software libraries for 3d (and even 2d) operate on polygons. Graphics artists (responsible for texturing etc.) work pretty much directly on polygons. I'm not completely sure about shader-type operations, but I'm willing to bet they are optimized for polygons as well. Everything is set up to manipulate polygons, even the hardware - and that's the most important part.
Polygons do have the problems described in the videos. If you make a model of the moon and stick it up in the sky, you still have to submit the entire moon to the pipeline, not just the "face" of it that the camera sees; back faces do get culled, but only after their geometry has been processed. Additionally, polygons are very unnatural to work with for 3d artists - and the best 3d artists are the ones clever enough to make something look natural with the minimum number of polygons.
The technique the video mentions is just drawing collections of dots in 3d space dense enough that they appear solid. If you tried to draw that using polygons, you'd run into memory trouble (because there would be so many polygons). But they have apparently come up with some clever algorithm to only draw the minimum to display 1 dot per pixel per frame. My guess is they store the dot locations directly in RAM (so that RAM/CPU is the new [faster] bottleneck).
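To illustrate what "1 dot per pixel" could mean mechanically, here's a brute-force version of the idea: splat every point to the screen and keep only the nearest one at each pixel. The real trick is presumably a spatial index that avoids scanning all the points, but the data flow would look something like this (all names invented):

```python
# Keep at most one dot per pixel: the nearest one wins, everything else is
# thrown away. This brute-force scan is O(points); the clever part of the
# real system would be replacing the scan with an indexed lookup.

def splat(points, width, height):
    """points: iterable of (screen_x, screen_y, depth, color) tuples."""
    depth_buf = {}  # pixel -> (depth, color); at most one dot kept per pixel
    for x, y, z, color in points:
        if 0 <= x < width and 0 <= y < height:
            key = (x, y)
            if key not in depth_buf or z < depth_buf[key][0]:
                depth_buf[key] = (z, color)
    return depth_buf

pts = [(10, 10, 3.0, "red"), (10, 10, 1.5, "blue"), (11, 10, 2.0, "green")]
buf = splat(pts, 640, 480)
print(buf[(10, 10)])  # -> (1.5, 'blue'): the nearer dot wins
```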
They haven't released any technical details because they're probably busy stamping their ownership (in a legal sense) all over their work on it in case it catches on.
As far as animation goes, I imagine it would be pretty easy but very different from the way we do things now. Define a bounding box over a region to specify a "piece" of something (e.g. a finger on a hand) and have a transformation on the bounding box apply to the dots inside the box. That's a very primitive sketch of what an animation algorithm might look like - something like the toy version below.
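Here's a crude Python version of that bounding-box idea - the "finger" box and the offset are invented for the example:

```python
# Apply a rigid transform (a translation, for simplicity) only to the dots
# that fall inside a named region; every other dot stays where it is.

def inside(p, box_min, box_max):
    return all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))

def animate(points, box_min, box_max, offset):
    """Translate every dot inside the box; leave the rest untouched."""
    return [tuple(c + d for c, d in zip(p, offset))
            if inside(p, box_min, box_max) else p
            for p in points]

hand = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (1.1, 2.1, 0.0)]
# Move the "finger" (the dots inside this box) up by 0.5 units.
moved = animate(hand, box_min=(0.9, 1.9, -1.0), box_max=(1.2, 2.2, 1.0),
                offset=(0.0, 0.5, 0.0))
print(moved)  # -> [(0.0, 0.0, 0.0), (1.0, 2.5, 0.0), (1.1, 2.6, 0.0)]
```

A real system would obviously need rotations and smooth weighting at the box edges, but the point is that nothing about dots makes animation impossible, just different.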
Thanks to the OP, I find this kind of stuff very interesting and will keep up with it. I can't see this catching on anytime soon since the whole industry (all of the mountains of mathematics and expertise) is built pretty solidly around the current model.