In this paper, we propose a depth propagation scheme based on optical flow field rectification for more accurate depth reconstruction. In depth reconstruction, occlusions and low-texture regions easily produce errors in the optical flow field, which lead to ambiguous depth values or depth-free holes in the resulting depth map. In this work, a scheme is proposed to improve the precision of depth propagation and the quality of depth reconstruction for dynamic scenes. The proposed scheme first adaptively detects occluded or low-texture regions, and the affected vectors in the optical flow field are rectified accordingly. Subsequently, we process the occluded and ambiguous vectors for more precise depth propagation. We further leverage the boun...
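The pipeline described above can be sketched in code. This is a minimal illustration, not the paper's method: it uses a forward-backward flow consistency check as a stand-in for the paper's adaptive occlusion/low-texture detector, and simply leaves unreliable pixels as holes rather than rectifying them. All function names (`fb_consistency_mask`, `propagate_depth`) are illustrative.

```python
import numpy as np

def fb_consistency_mask(flow_fwd, flow_bwd, tol=1.0):
    """Flag pixels whose forward and backward flows disagree by more than tol.

    Such pixels are likely occluded or lie in low-texture regions where the
    flow estimate is unreliable (a common proxy; the paper's own detector is
    adaptive and not reproduced here).
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each pixel under the forward flow (nearest neighbour).
    xd = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    # Backward flow sampled at the destination should cancel the forward flow.
    diff = flow_fwd + flow_bwd[yd, xd]
    return np.linalg.norm(diff, axis=-1) > tol

def propagate_depth(depth_prev, flow_fwd, unreliable):
    """Splat depth from frame t to frame t+1 along reliable flow vectors.

    Unreliable (occluded or ambiguous) vectors are skipped, leaving NaN holes
    to be filled by a later step, e.g. inpainting or edge-aware filtering.
    """
    h, w = depth_prev.shape
    out = np.full((h, w), np.nan)  # NaN marks holes without depth
    ys, xs = np.mgrid[0:h, 0:w]
    ok = ~unreliable
    xd = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)[ok]
    yd = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)[ok]
    # When several sources land on one target, keep the nearest surface:
    # write far depths first so near depths overwrite them.
    order = np.argsort(-depth_prev[ok])
    out[yd[order], xd[order]] = depth_prev[ok][order]
    return out
```

With a uniform one-pixel rightward flow and consistent backward flow, the mask is empty and the depth map shifts right, leaving a NaN hole in the first column.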
© 2014. The copyright of this document resides with its authors. It may be distributed unchanged fr...