Socoder -> Off Topic -> HoloDeck?

Posted : Wednesday, 14 September 2016, 02:54
Jayenkai

View on YouTube
Posted : Wednesday, 14 September 2016, 02:54
steve_ancell
They speak as if polygons are taken for granted nowadays. When I was a kid, a polygon was a dead parrot.
Posted : Wednesday, 14 September 2016, 03:52
shroom_monk
Oh, these guys again. I'm genuinely surprised they still have the money to keep going with this...

It's still almost certainly complete rubbish, though. The video seems to be using this new holodeck stuff to legitimise the old point-cloud rubbish, but once again they offer only the same 'it's infinite, it's not polygons' 'explanation' rather than any actual technical detail about how it works. And if, after ten years, they still can't give any technical detail (especially given their claim that it all runs in software rather than on a GPU) as to how this supposedly works, then it is, almost certainly, still a fraud.

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!
Posted : Wednesday, 14 September 2016, 04:03
Jayenkai
[H]ardOCP have an interview with the devs.
Linkage

-=-=-
''Load, Next List!''
Posted : Wednesday, 14 September 2016, 04:12
steve_ancell
I still don't know what to make of all this either. Polygons or atoms, they still have to be rendered to the screen.
Posted : Wednesday, 14 September 2016, 05:24
rockford
If this is real (and let's face it, they've been touting this for 10 years, which is a long time to keep up a pretence), then it could be the best thing that's ever happened in gaming/computing.

You have to think that tech companies have a vested interest in doing things the old way, with polygons. That's how they make their money: by providing graphics tech and code that unlocks a bit more of that power each year.

Without the need for a GPU those companies won't be needed.

And why should these guys reveal how they're doing it? It will be their tech and their patent, which they can license to the world.

If what we've just watched is real, then we'll know about it soon enough, as everybody will want a bite of that cherry. Let's face it, who wouldn't want to have a play in a real holodeck? I'd certainly pay to play. Arcades could be back, big-time again.
Posted : Wednesday, 14 September 2016, 05:28
shroom_monk
An interview conducted by someone who works in 'community relations' at the same company? Seems legit.

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!
Posted : Wednesday, 14 September 2016, 05:55
Jayenkai
Whether or not the bollocksy atom-render, unlimited-detail crap is a real thing, the fact that there's an actual working (ish) holodeck is most intriguing!

Even if they do just use regular polygons to do it!!

-=-=-
''Load, Next List!''
Posted : Wednesday, 14 September 2016, 08:59
Andy_A
Well, if you take the guy's explanation at face value, he stated that they've developed an advanced 3D search algorithm to make it all work.

It's not too hard to imagine if you consider that all of their imagery is just dots on a screen. At 1024x768 screen res there are only 786K pixels to update per frame. So, with a sufficiently advanced search algo to retrieve only the pixels you can see (think z-buffer), it's entirely possible to stream the visible pixel info from disk. That amounts to 1024*768 * 24fps = 18,874,368 bytes per second. Well within the transfer rates of modern hard drive sub-systems. So, -IF- they have such a search algorithm, it's definitely within the realm of possibility, IMHO.
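As a quick sketch, the arithmetic above (note it streams exactly one visible point per screen pixel per frame, and counts each point as a single byte):

```python
# Back-of-envelope check of the streaming estimate: 1024x768 screen,
# 24 frames per second, one visible point streamed per pixel per frame.
width, height, fps = 1024, 768, 24

pixels_per_frame = width * height        # 786,432 pixels per frame
points_per_second = pixels_per_frame * fps

print(pixels_per_frame)    # 786432
print(points_per_second)   # 18874368 points per second
```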
Posted : Wednesday, 14 September 2016, 09:57
shroom_monk
Surely that's just 18,874,368 pixels to process per second, not bytes? Assuming such a search algorithm only requires the world coordinates to locate a point in the cloud, and that those coordinates are XYZ coordinates stored as 3 single-precision (i.e. 4 byte) floats, that's 12 bytes per point, so 226,492,416 bytes per second.

And that's ignoring the fact you also need to stream the colour of the pixel you've located from disk as well, which at 3 bytes for RGB (assuming no alpha term for transparency) brings us up to a total of 15 bytes per point, or 283,115,520 bytes per second.

The video includes an example of lighting on a model, which will require per point normals too, so you have to throw in an extra 12 bytes per point for those, taking us to 509,607,936 bytes per second.

And allll that assumes that this is a static scene, entirely on a hard drive (capable of shifting half a gig per second), with no animation or moving parts, because those need to be calculated at run time, so can't just be streamed straight from disk.
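Putting those per-point byte counts into numbers (this is just the arithmetic from this post, nothing more):

```python
# Bandwidth needed to stream one lit point per screen pixel at
# 1024x768, 24 fps, using the byte counts discussed above.
points_per_second = 1024 * 768 * 24    # 18,874,368 points per second

position_bytes = 3 * 4   # XYZ as single-precision (4-byte) floats
colour_bytes = 3         # RGB, assuming no alpha term
normal_bytes = 3 * 4     # per-point normal, needed for lighting

print(points_per_second * position_bytes)                 # 226492416 B/s
print(points_per_second * (position_bytes + colour_bytes))                 # 283115520 B/s
print(points_per_second * (position_bytes + colour_bytes + normal_bytes))  # 509607936 B/s, ~0.5 GB/s
```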

I could go on, but I won't, because as pointed out this topic is really meant to be about this 'holodeck' rather than retreading the same ground on the 'infinite detail' stuff. As far as I can see, though, they don't give any explanation of how these are supposed to work (regardless of whether they are powered by point-cloud or polygons) - in the video, it looks like stuff projected onto various walls that follows the perspective of the camera filming the person, rather than the person themselves. And from an optics perspective, there doesn't seem to be any explanation of how 'ordinary glasses' are supposed to achieve a 'hologram' effect. I assume they're using polarising glasses, as in 3D cinema, and the screens would track the movement of the user? That doesn't seem impossible, I suppose, but I have a hard time believing that gives a better head tracking experience than a VR headset would.

(Though of course, since I lack the means to try either, I have no way of actually verifying that... :/ Don't suppose we have anyone in Australia who is actually close by? )

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!
Posted : Wednesday, 14 September 2016, 10:07
rockford
Not in Oz, but if you're willing to pay airfare and hotel bills, I'll let you know how good/bad/indifferent it is.
Posted : Wednesday, 14 September 2016, 11:14
spinal
So this is similar to ye-olde ray tracing? And they can access the data required for infinite resolution without 'extra' slowdown?

Seems cool.

-=-=-
Check out my excellent homepage!
Posted : Wednesday, 14 September 2016, 11:57
Andy_A
The infinite resolution is definitely marketing hype.

But the data needed per second of animation is not as extreme as Shroom Monk would have you believe. He's basing most of his assumptions on existing 3D polygon-based rendering. With the right database model/data structure, most of the info he insists must be present and transferred is just a look-up away.

If they're just polarized glasses, wouldn't the illusion be interrupted by what you don't see with your peripheral vision?
Posted : Wednesday, 14 September 2016, 12:14
shroom_monk
He's basing most of his assumptions on existing 3D polygon-based rendering


No, they're based on the minimum data you need for lit 3D rendering in any kind of 3D system (e.g. ray-tracing). You need the position and the colour to draw anything, and you need normals for shading. The calculations above are the data transfer needed for the 'just a look-up'.

|edit| And that's per frame of rendering, not animation. The animation considerations are a whole separate issue. |edit|

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!
Posted : Wednesday, 14 September 2016, 14:22
Andy_A
What if the 'atoms' contain most of the pixel information needed in the database/structure? Surely you wouldn't want to re-load or re-calc that information. Maybe you could try to envision the majority of the needed rendering information as a matter of using the right pointers to the different parts of the data. The 'position' is the pixel position on screen, not necessarily calc'd at runtime to render the scene; only an elevation and atom index is needed at any given screen position.
Posted : Wednesday, 14 September 2016, 20:49
therevillsgames
Shame I'm not going to QLD anytime soon

https://holoverse.com.au/

Looking at the reviews, it seems they have some technical issues to work out:

https://www.tripadvisor.com.au/Attraction_Review-g495002-d10389488-Reviews-Holoverse-Southport_Gold_Coast_Queensland.html#REVIEWS
Posted : Wednesday, 14 September 2016, 23:25
steve_ancell
I remember when they first announced what they're doing. I think I remember something about a binary tree system, maybe that's how they got around the speed issue. That's assuming the whole thing ain't bullcrap!
Posted : Thursday, 15 September 2016, 04:26
shroom_monk
"Andy_A" What if the 'atoms' contain most of the pixel information needed in the data base/structure, surely you wouldn't want to re-load or re-calc that information. Maybe you could try to envision the majority of the needed rendering information as a matter of using the right pointers to the different parts of the data. The 'position' is the pixel position on screen, not necessarily calc'd at runtime to render the scene, only an elevation and atom index is needed at any given screen position.


Oh, you would certainly want to store it all in RAM, yes. But the reason we were doing streaming calculations from the hard-drive rather than assuming it's all loaded into RAM was because we made the assumption that you only need to load the atoms you are drawing this frame (thanks to this magical search algorithm), rather than any other atoms we don't need yet. If you loaded all the atoms into memory (or even a fraction of them), you would quickly run out of space, making RAM infeasible.

As a demonstration: in the video they claim to have '64 atoms per cubic millimetre'. So, by our previous estimate of 27 bytes per atom, that's 27*64*10^9 = 1,728,000,000,000 bytes per cubic metre. 1.7TB of RAM to store one cubic metre of geometry! Say you only loaded a tiny amount of the data, such as a single cubic metre into 4GB of RAM - that gives you about 0.15 atoms per cubic millimetre, or 150 per cubic centimetre. Now, that's not terrible on its own, even if it is less than a quarter of a percent of what they claim, but that's only for a single cubic metre of stuff. To make the kilometre-wide island they show, you'd have to be repeating the same cubic metre all over the place, or just have a handful of cubic metres of geometry to tile around - which is, indeed, exactly what you can see in the video.
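The RAM arithmetic in this demonstration, as a quick sketch:

```python
# RAM needed to hold the claimed point density, using the
# 27-bytes-per-atom estimate from earlier in the thread.
bytes_per_atom = 27          # 12 position + 3 colour + 12 normal
atoms_per_mm3 = 64           # their claimed '64 atoms per cubic millimetre'
mm3_per_m3 = 1000 ** 3       # a billion cubic millimetres per cubic metre

bytes_per_m3 = bytes_per_atom * atoms_per_mm3 * mm3_per_m3
print(bytes_per_m3)          # 1728000000000 bytes, i.e. ~1.7 TB per cubic metre

# And the other way round: the density you could fit into 4 GB of RAM
# if it had to cover one whole cubic metre.
ram_bytes = 4 * 10 ** 9
print(ram_bytes / bytes_per_atom / mm3_per_m3)   # ~0.148 atoms per mm^3
```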

My point is, no matter what your search algorithm is, at some point you have to load this data from somewhere. You either have your search algorithm detect its precise location on a (fairly massive, as we can see) hard drive, at which point our previous calculations on streaming rate become the bottleneck, or you preload it into RAM, except no feasible amount of RAM could hold that much data. Fundamentally, without some clever trickery - for instance, reusing the same tiny bit of geometry over and over, as can be seen in their demonstrations, which clearly isn't actually useful for making a game - what they are claiming is demonstrably impossible.

"steve_ancell" I think I remember something about a binary tree system, maybe that's how they got around the speed issue.


It's almost certainly something like that, yeah - though possibly an octree (eight children per node) rather than a binary tree (two children per node), since we're dealing with 3D space. Any search algorithm for 3D space can only be efficient with some kind of spatial partitioning of the data. Admittedly my experience with that kind of data structure is fairly limited, but I don't quite see how one can make it efficient with ray-tracing (which they must be using some variant of, given their claim that they only need to load precisely one atom per on-screen pixel).
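For what it's worth, the basic idea of that kind of spatial partition can be sketched in a few lines. This is just a toy octree of my own, not whatever Euclideon actually use (which they haven't published): each node covers a cube and subdivides into eight child cubes, so locating the points near a position takes a walk down the tree rather than a scan of the whole database.

```python
# Toy octree: each node covers a cube of side 2*half centred on `centre`,
# and subdivides lazily into eight child cubes as points are inserted.
class Octree:
    def __init__(self, centre, half, depth=6):
        self.centre, self.half, self.depth = centre, half, depth
        self.children = None   # eight sub-cubes, created on first insert
        self.points = []       # points stored once we hit the leaf level

    def _child_index(self, p):
        # One bit per axis: which side of the centre the point falls on.
        cx, cy, cz = self.centre
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.depth == 0:
            self.points.append(p)
            return
        if self.children is None:
            h = self.half / 2
            cx, cy, cz = self.centre
            self.children = [
                Octree((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h, self.depth - 1)
                for i in range(8)
            ]
        self.children[self._child_index(p)].insert(p)

    def query(self, p):
        # Descend to the leaf cube containing p and return its points.
        node = self
        while node.children is not None:
            node = node.children[node._child_index(p)]
        return node.points

tree = Octree((0.0, 0.0, 0.0), 512.0)
tree.insert((10.0, 20.0, 30.0))
print(tree.query((10.0, 20.0, 30.0)))   # [(10.0, 20.0, 30.0)]
```

Each query touches one node per level, so a deep tree over a huge point set still only visits a handful of nodes - which is presumably the sort of property any 'fast 3D search algorithm' would be built on.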

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!
Posted : Thursday, 15 September 2016, 12:16
Andy_A
I really don't know how to convey the concept to you in a way that you can accept.

It is your contention that 27 bytes of info need to be loaded for every atom in the entire database for one 1024x768 screen/frame.

What I'm saying is that only the screen res times two 64-bit ints (16 bytes) needs to be streamed per atom in view (on the screen). The rest of the required information is stored in RAM. Each atom has access to color (4 bytes), normals (12 bytes) and position (12 bytes) from data already in memory (envision massively indexed look-up tables). You don't need petabytes to store that kind of information in RAM. You only need to store what's necessary for 'X' number of frames at a given resolution, and only for visible atoms (1024x768 per frame), not for every atom available in the database. What's more, much of that information will be redundant: one shiny spot will have similar if not the same values no matter where it is on screen.
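Taken at face value, that streaming figure is easy to work out (a sketch of my own 16-bytes-per-atom estimate, not anything Euclideon have published):

```python
# Bandwidth implied by streaming two 64-bit ints per visible atom
# at 1024x768, 24 fps.
atoms_per_frame = 1024 * 768     # one atom per on-screen pixel
bytes_per_atom = 2 * 8           # two 64-bit integers
fps = 24

print(atoms_per_frame * bytes_per_atom * fps)   # 301989888 B/s, ~300 MB/s
```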

I think Euclideon will be very tight lipped about their search algo, it appears to be very fast for 3D data.
Posted : Thursday, 15 September 2016, 12:18
Jayenkai
I still reckon they're cheating, somewhere!
..
But given my lack of VR due to motion sickness, this seems like a somewhat viable solution for me.

Bring it on!!

-=-=-
''Load, Next List!''
Posted : Thursday, 15 September 2016, 12:35
shroom_monk
I really don't know how to convey the concept to you in a way that you can accept.


Right back atcha.

What I'm saying is that only the screen res times two 64-bit ints (16 bytes) needs to be streamed per atom in view (on the screen).


What are these two integers storing, exactly? You cannot project a 3D location in space onto a screen without the position data.

The rest of the required information is stored in RAM. Each atom has access to color (4 bytes), normals(12 bytes), position (12 bytes) from data already in memory (envision massively indexed look-up tables). You don't need peta-bytes to store that kind of information in RAM.


You are assuming that we're only going to preload into RAM precisely the 1024*768 pixels we need for a frame. But in reality, of course you're going to need more! There's no way you could compute which atoms you'll need, and move them from disk to memory, before you've computed which atoms you need! At which point yes, as per my calculation above, you do need far more memory space than is feasible. It's just simple maths.

What's more, much of that information will be redundant, one shiny spot will have similar if not the same values no matter where it is on screen.


That isn't at all how lighting works, but ok.

-=-=-
A mushroom a day keeps the doctor away...

Keep It Simple, Shroom!