There is a "Star Trek" quality about this invention: a conference call with a full-body 3D image of the remote person, who looks as if they have just beamed to your location.
Who needs holography for 3D?
It is becoming increasingly obvious that you don't need to master the difficult techniques of real-time holography to create 3D displays. The trick is to use Kinects, or any sort of depth camera, in sufficient numbers to map a full scene in 3D. From there it is just a matter of some 3D graphics: build wire-frame objects and map the live video onto their surfaces as a texture. All that remains is to render the result stereoscopically and let it interact with the user.
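The geometric step at the heart of this — turning a depth map into 3D points that a mesh can be built over — is easy to sketch. The following is a minimal illustration using the standard pinhole camera model; the intrinsics (fx, fy, cx, cy) and the tiny depth map are made-up values, not the Kinect's real calibration:

```python
def depth_to_points(depth, width, height, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of metres) into 3D camera-space points.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    """
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z <= 0:            # 0 means "no reading" on a depth camera
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A tiny 2x2 depth map with one missing reading, hypothetical intrinsics
pts = depth_to_points([1.0, 1.0, 0.0, 2.0], 2, 2, fx=1.0, fy=1.0, cx=1.0, cy=1.0)
print(pts)  # three valid 3D points; the zero-depth pixel is skipped
```

The resulting point cloud is what gets triangulated into the wire-frame surface that the live video is texture-mapped onto.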
This approach has the big advantage that the extra data needed to create the full 3D effect is just a depth map, which can be transmitted over low-bandwidth links.
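To get a feel for what "low bandwidth" means here, a back-of-the-envelope calculation helps. The figures below are assumptions based on the original Kinect's published depth stream (640×480, 11-bit depth padded to 16 bits, 30 fps), not measured TeleHuman traffic:

```python
width, height = 640, 480   # original Kinect depth resolution (assumed)
bits_per_pixel = 16        # 11-bit depth values padded to 16 bits
fps = 30

raw_bits_per_second = width * height * bits_per_pixel * fps
print(raw_bits_per_second / 1e6, "Mbit/s")  # ~147.5 Mbit/s uncompressed
```

Even completely uncompressed, a single depth stream is a small fraction of a gigabit link, and depth maps compress well because they are mostly smooth surfaces.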
To demonstrate that all this is not only possible but also desirable, a team at the Human Media Lab at Queen's University in Canada have created the TeleHuman - a life-size 3D video-conferencing pod. People stand in front of a pod and talk to 3D images of each other. See the video of it in action:
Six Kinects are positioned around the top of the pod and provide a full depth map of the person and his or her location. They also provide the video data used by the projector.
The cylinder is 1.7m tall and has a DepthQ stereoscopic projector in its base. This reflects off a hemispherical mirror which allows it to project an image onto the entire cylinder. Users can also wear shutter glasses to see images in full stereoscopy. Without the shutter glasses the image seems to be "trapped" on the surface of the cylinder; with the glasses the user gets a full 3D effect with the human appearing to be within the cylinder.
When a user walks up to the cylinder, the Kinects lock on and the video stream of the person is sent to the other pod via a 1Gbit network connection. The video is processed to remove the background and to provide an image centered in the tube.
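Background removal is straightforward once every pixel carries a depth reading: keep only pixels closer than some cut-off. A minimal sketch — the threshold and the mask-to-black convention are assumptions for illustration, since the paper's actual segmentation pipeline isn't spelled out here:

```python
def remove_background(pixels, depth, max_depth):
    """Black out any pixel whose depth reading is missing or beyond max_depth (metres)."""
    out = []
    for rgb, z in zip(pixels, depth):
        # keep the pixel only if the depth camera saw something close enough
        out.append(rgb if 0 < z <= max_depth else (0, 0, 0))
    return out

frame  = [(200, 180, 160), (90, 90, 90), (50, 120, 200)]
depths = [1.2, 4.5, 0.0]   # person, back wall, no reading
print(remove_background(frame, depths, max_depth=2.0))
# [(200, 180, 160), (0, 0, 0), (0, 0, 0)]
```

This is why depth cameras make green screens unnecessary: the segmentation falls out of data the system is capturing anyway.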
The software makes use of XNA to work with the Kinect data and to compute the texture mapping, including the distortion needed to correct for the projection via the hemispherical mirror. The rendering viewpoint is synchronized with the user's position to produce motion parallax. Currently the system can support two viewports and hence two simultaneous users. It also takes one PC for each pair of Kinects to keep the frame rate up.
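Motion parallax simply means the virtual camera follows the tracked viewer. A toy version of the idea, assuming the head position comes from the Kinect tracker (the function and numbers are illustrative; the actual TeleHuman renderer is written against XNA):

```python
def project(point, eye, focal=1.0):
    """Perspective-project a 3D point for a camera at `eye` looking down +Z."""
    x, y, z = (p - e for p, e in zip(point, eye))
    return (focal * x / z, focal * y / z)

scene_point = (0.0, 0.0, 3.0)   # a point "inside" the cylinder

# As the tracked head moves left to right, the projected x coordinate
# shifts the opposite way -- that apparent shift is motion parallax.
left  = project(scene_point, eye=(-0.2, 0.0, 0.0))
right = project(scene_point, eye=( 0.2, 0.0, 0.0))
print(left, right)
```

Because each tracked viewer needs their own viewpoint, the number of simultaneous users is limited by how many such viewports the renderer can drive - two, in the current system.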
The results are fairly impressive given the low-cost approach, but there are some problems: not all of the projector's resolution is used, and brightness falls off the further down the cylinder you go.
The research team have used the same sort of setup to create BodiPod, which provides a 3D view of human anatomy complete with gesture input that lets the viewer manipulate the display. They have also conducted experiments showing that 3D video conferencing is worth the extra effort - you can read the details in the research papers listed below.
What all this proves is that large 3D hologram-like displays are possible without the need for holography. It seems that the Kinect really is an enabling technology that is capable of magic.