Translated Abstract
The 3D reconstruction of human bodies and scenes is increasingly used in 3D printing, biomedicine, AR, and other fields. However, traditional 3D reconstruction methods often rely on complicated equipment that is both difficult to operate and inefficient; such methods struggle to achieve real-time reconstruction and fusion and cannot meet future user requirements. It is therefore important to study a convenient and efficient multi-view real-time fusion method. In this paper, multi-view point cloud acquisition with the Intel R200 depth camera, rapid registration of multi-view point cloud data, and real-time fusion of the depth camera's model and color data are optimized. Finally, with GPU-parallel acceleration of the algorithms, real-time fusion of multi-view reconstructed point clouds from a single depth camera is realized. The main work is as follows:
To address the poor quality of the initial point cloud data acquired by the R200 depth camera, which hinders registration, the overall scheme of the system is designed.
By analyzing the depth calculation principle and the point cloud reconstruction process, the environment and camera parameters are optimized and the quality of the depth data is improved. Several filtering methods are compared, and bilateral filtering is finally adopted to improve the quality of the point cloud data.
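The bilateral filter mentioned above can be illustrated with a minimal sketch. The function below is not the thesis's implementation; it is a plain NumPy version of the standard bilateral filter applied to a depth map, where the parameter names (sigma_s, sigma_r) and window radius are illustrative assumptions:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Edge-preserving smoothing of a depth map.

    Each output pixel is a weighted average of its neighborhood, where the
    weight combines spatial closeness (sigma_s) with depth similarity
    (sigma_r), so smooth regions are denoised while depth discontinuities
    (object boundaries) are preserved.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Spatial Gaussian weights over the (2*radius+1)^2 window, computed once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad = np.pad(depth.astype(np.float64), radius, mode="edge")
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: large depth differences get near-zero weight,
            # which keeps edges sharp.
            rng = np.exp(-(window - depth[y, x]) ** 2 / (2.0 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * window).sum() / wgt.sum()
    return out
```

A production version would be vectorized or run on the GPU, but the weighting logic is the same.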
The traditional ICP registration method is optimized to improve the efficiency and stability of registration: a spatial projection method is used to search for corresponding points instead of closest-point search. A point-to-plane distance objective function, which accelerates the registration iterations compared with the traditional point-to-point distance objective function, is implemented, and mean-value hierarchical sampling is used to accelerate the convergence of data registration; the average time of a single registration iteration reaches 0.35 ms.
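The two ICP optimizations above can be sketched briefly. This is not the thesis's code: it assumes a pinhole camera model with hypothetical intrinsics (fx, fy, cx, cy) and shows (a) projective data association, which replaces an O(log n) closest-point search with an O(1) projection into the depth image, and (b) the point-to-plane residual minimized instead of the point-to-point distance:

```python
import numpy as np

def projective_correspondence(p_cam, depth, fx, fy, cx, cy):
    """Find the correspondence for a 3-D point (in camera coordinates)
    by projecting it into the depth image and reading back the surface
    point stored at that pixel -- no k-d tree or nearest-neighbor search."""
    u = int(round(fx * p_cam[0] / p_cam[2] + cx))
    v = int(round(fy * p_cam[1] / p_cam[2] + cy))
    h, w = depth.shape
    if not (0 <= u < w and 0 <= v < h) or depth[v, u] <= 0:
        return None  # point projects outside the image or onto missing data
    z = depth[v, u]
    # Back-project the pixel to a 3-D point on the observed surface.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def point_to_plane_residual(p, q, n):
    """Point-to-plane error: signed distance from source point p to the
    tangent plane at target point q with unit normal n. Minimizing this
    lets points slide along surfaces, which typically converges in fewer
    iterations than the point-to-point distance ||p - q||."""
    return float(np.dot(p - q, n))
```

In a full ICP loop these residuals are stacked into a linear system solved for an incremental rigid transform each iteration.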
To solve the problem of layered data after registration, and considering the characteristics of GPU parallel computing, a truncated signed distance function (TSDF) is used to represent the surface information digitally in a pre-divided voxel grid, and a linear weighted fusion method is used to process the point cloud data of adjacent frames. This realizes real-time fusion of the point cloud data, and the model surface extracted by the ray casting algorithm is smooth and uniform. In addition, a color data fusion scheme is designed and a colored model is output.
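The linear weighted TSDF fusion described above amounts to a per-voxel running average. The sketch below is an assumption-laden illustration, not the thesis's GPU kernel: it takes each voxel's signed distance for the current frame as precomputed input (np.nan marking unobserved voxels) and applies the classic Curless-Levoy weighted update, with the truncation band (trunc) and weight cap (w_max) as illustrative parameters:

```python
import numpy as np

def fuse_frame(tsdf, weight, sdf, trunc=50.0, w_max=64.0):
    """One TSDF integration step over a voxel grid.

    tsdf, weight : running truncated SDF and accumulated weight per voxel
    sdf          : this frame's signed distance per voxel (measured depth
                   minus voxel depth); np.nan where the voxel is unobserved
    Voxels far behind the observed surface (sdf <= -trunc) are occluded and
    left untouched; everything else is blended with weight 1 per frame.
    """
    valid = ~np.isnan(sdf) & (np.nan_to_num(sdf, nan=-np.inf) > -trunc)
    d = np.clip(np.nan_to_num(sdf, nan=0.0), -trunc, trunc) / trunc
    w_new = valid.astype(np.float64)
    denom = np.maximum(weight + w_new, 1e-9)  # avoid divide-by-zero
    # Linear weighted (running) average of old and new signed distances.
    tsdf_out = np.where(valid, (tsdf * weight + d * w_new) / denom, tsdf)
    # Cap the weight so old frames cannot dominate forever (allows updates).
    weight_out = np.minimum(weight + w_new, w_max)
    return tsdf_out, weight_out
```

The fused surface is then the zero level set of the grid, which ray casting extracts for rendering; because each voxel updates independently, the step maps directly onto one GPU thread per voxel.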
Based on the above research results, real-time fusion of multi-view reconstructed point clouds from the Intel R200 depth camera is realized. Precision tests show that the accuracy meets the needs of general human body and scene reconstruction. The reconstruction results show that this method requires only simple scanning equipment while efficiently reconstructing the 3D shape of the measured object, which is of significance for 3D reconstruction research.
Translated Keyword
[Data registration, Depth camera, Multiple views, Point cloud data optimization, Real-time integration]