depth map & over/under

Feb 18, 2014 at 7:36 PM
Do you think it would be possible, instead of a typical depth map over/under, to have a true stereo pair AND a corresponding depth map for each view, and be able to use them for subtle distortion from the neck-height model?

I could see this possibly being very useful in adding that little touch of realism to the parallax shifting you might see while naturally standing idle. Also, having seen the new Oculus headset with head tracking, it'll be more important to get some sort of motion in there on top of having the stereo pair.

thoughts? I could get an example image or two for you to review.
Mar 19, 2014 at 3:40 PM
Wow, the DK2 is being released, along with hardware that supports '6 degrees of freedom'.

I'm again curious whether we could combine left/right images with left/right depth maps to create that head-shift effect through a slight warp. Since the tools are partially there already (a depth map is already used to generate a second view), I wonder if this method is easily plausible. A rough sketch of what I have in mind is below.
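Roughly the idea, as a hypothetical Python sketch (the focal length, depth units and head offset are made-up placeholders, not anything from the existing tools): shift each pixel sideways in proportion to 1/depth, the same principle as generating a second view from a depth map, just with a much smaller offset.

import numpy as np

def head_shift_warp(image, depth, focal_px, head_offset_m):
    # image: H x W x 3 color frame, depth: H x W depths in metres (assumed inputs)
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Per-pixel horizontal shift: nearer pixels move more than distant ones.
    shift = focal_px * head_offset_m / np.maximum(depth, 1e-6)
    u_new = np.clip(np.round(u + shift).astype(int), 0, w - 1)

    # Forward-warp the colors; no hole filling or occlusion handling, just the idea.
    out = np.zeros_like(image)
    out[v, u_new] = image
    return out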

thanks
Coordinator
Mar 29, 2014 at 4:09 PM
Edited Mar 29, 2014 at 4:12 PM
Hi 3dandswe,

A depth map gives you information about the distance of each pixel acquired by a camera.
Technically, you can recreate a 3D scene as a point cloud where each pixel is a point in space.
When you combine this approach with positional tracking and you move your head sideways, you will notice blank spots, because you don't have texture and depth information behind each of these points. You only have access to "slices" of data facing the camera. See this example with a Kinect camera
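As a rough illustration (a hypothetical Python sketch assuming simple pinhole camera intrinsics fx, fy, cx, cy, not anything from the actual software): back-projecting the depth map to a point cloud and re-rendering it from a camera shifted sideways leaves pixels with no data behind them, which are exactly those blank spots.

import numpy as np

def reproject_with_head_shift(color, depth, fx, fy, cx, cy, shift_x):
    # color: H x W x 3, depth: H x W in metres; shift_x: sideways head movement in metres
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project every pixel to a 3D point in camera space (the point cloud).
    z = np.maximum(depth, 1e-6)          # guard against zero depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # Move the virtual camera sideways (positional tracking) and re-project.
    u2 = np.round(fx * (x - shift_x) / z + cx).astype(int)
    v2 = np.round(fy * y / z + cy).astype(int)

    out = np.zeros_like(color)           # pixels left black are the blank spots
    filled = np.zeros((h, w), dtype=bool)
    ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    out[v2[ok], u2[ok]] = color[ok]
    filled[v2[ok], u2[ok]] = True
    return out, ~filled                  # second output marks where data is missing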

I think that having a second depth map from a parallel camera located very near (at human IPD: about 6.5cm) will not give you a lot of new information.
Cameras would have to be spread farther apart and facing inward to capture the maximum color and depth data behind these "slices", and ultimately it would be possible to recreate a mesh, or a pseudo-3D scene, with just enough data for the viewer. It would not be perfect, and some software estimation would probably be necessary to fill some of the remaining blank spots. Also, one limitation is that long-distance objects outside of the depth cameras' field of view would appear flatter, which is not much of a problem since it's already the case in stereoscopic movies.

Another aspect to consider with two depth maps is that having four video channels (color left/right and depth left/right) would be harder to process in real time.

Finally, if you want to be able to really move around in a scene, you might want to consider other 3D scanning solutions (photogrammetry, lidar, etc.).

English is not my primary language, so I hope this makes sense :)
If you have any questions or comments, do not hesitate!

Regards,

-Stephane