How would you create a 3D model of an object from imagery and depth-sensor measurements taken at all angles around the object?

Tags: CV

Start with an RGB camera and a depth camera whose lenses are intrinsically and extrinsically calibrated. Using these parameters, generate a registered depth image, RGB image, and RGB point cloud in real time. Take multiple snapshots of this data from overlapping but varying viewpoints over time. Then:

1. Detect features in the RGB images over a sliding window of 3 or 4 frames.
2. Look up each feature's 3D coordinate and surface normal from the registered depth.
3. Find corresponding features with 3D RANSAC, using normal and distance thresholds, and set up a system of linear equations to solve for the relative pose between the cameras.
4. Refine that pose further with gradient descent, minimizing reprojection error.
5. Chain these pairwise poses, multiplying each onto the initial base-frame pose, and transform all collected point clouds into the base frame.
6. Apply voxel-grid downsampling, MLS (moving least squares) smoothing, and similar filters.
7. Convert the point cloud to a mesh by triangulation.
8. For each triangle, check which cameras could have observed it (normal-facing and depth-occlusion checks), compute its three UV coordinates in that camera's image, and save them.
9. Save the vertices, triangle indices, normals, and UV maps.

You now have your 3D model.
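The registered RGB point cloud comes from back-projecting each depth pixel through the calibrated intrinsics. A minimal numpy sketch, assuming a standard pinhole model and metric depth (the function name is mine):

```python
import numpy as np

def backproject(depth, K):
    """Back-project a registered depth image (meters) through pinhole
    intrinsics K (3x3) into an organized (H, W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

Pairing each such 3D point with the color at the same pixel (registration makes the images pixel-aligned) gives the RGB point cloud.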
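For the correspondence-and-pose step, one concrete choice is the closed-form Kabsch/SVD rigid fit inside a RANSAC loop over putative 3D-3D feature matches; the distance threshold separates inliers from outliers. A sketch with hypothetical names (the normal-agreement gate from the text is omitted for brevity):

```python
import numpy as np

def rigid_pose(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD):
    dst ~= src @ R.T + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_pose(src, dst, n_iters=300, dist_thresh=0.02, seed=0):
    """3-sample RANSAC over putative 3D-3D matches; dist_thresh (meters)
    rejects outlier correspondences before the final refit."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_pose(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_pose(src[best], dst[best]), best   # refit on all inliers
```

The Kabsch solve is the "system of linear equations" for pose: it minimizes the sum of squared point-to-point distances in closed form.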
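The refinement step can be sketched as plain gradient descent on mean squared reprojection error, using finite differences and a backtracking step size so the error never increases. This is a toy stand-in for a proper Gauss-Newton or Levenberg-Marquardt solver; the pose is parameterized as an axis-angle rotation plus translation:

```python
import numpy as np

def rodrigues(w):
    """Axis-angle vector -> rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * Kx @ Kx

def reproj_error(p, pts, px, K):
    """Mean squared pixel error of 3D points under pose p = (rvec, t)."""
    cam = pts @ rodrigues(p[:3]).T + p[3:]
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return np.mean(np.sum((uv - px) ** 2, axis=1))

def refine_pose(p0, pts, px, K, iters=100, eps=1e-7):
    """Refine a 6-DoF pose by finite-difference gradient descent with a
    backtracking line search on the reprojection error."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        base = reproj_error(p, pts, px, K)
        g = np.array([(reproj_error(p + eps * np.eye(6)[i], pts, px, K) - base) / eps
                      for i in range(6)])
        step = 1e-2
        while step > 1e-12:                 # shrink until the error decreases
            cand = p - step * g
            if reproj_error(cand, pts, px, K) < base:
                p = cand
                break
            step *= 0.5
    return p
```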
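Chaining the pairwise poses onto the base frame and transforming every cloud is a running product of 4x4 homogeneous transforms; a small sketch (names are mine):

```python
import numpy as np

def to_base_frame(clouds, rel_poses):
    """Accumulate pairwise 4x4 poses T_{i-1 <- i} into base-frame poses
    and transform each (N, 3) cloud into the frame of the first snapshot."""
    T = np.eye(4)                                 # cloud 0 defines the base frame
    out = []
    for i, cloud in enumerate(clouds):
        if i > 0:
            T = T @ rel_poses[i - 1]              # chain the next relative pose
        h = np.c_[cloud, np.ones(len(cloud))]     # homogeneous coordinates
        out.append((h @ T.T)[:, :3])
    return out
```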
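Voxel-grid downsampling replaces every point that falls in the same voxel cell with the cell centroid. In practice a library routine (e.g. PCL's or Open3D's voxel grid filter) would be used; a compact numpy equivalent:

```python
import numpy as np

def voxel_downsample(pts, voxel=0.01):
    """Replace all points sharing a voxel cell by their centroid."""
    keys = np.floor(pts / voxel).astype(np.int64)
    k = keys - keys.min(axis=0)                      # shift indices to non-negative
    dims = k.max(axis=0) + 1
    flat = (k[:, 0] * dims[1] + k[:, 1]) * dims[2] + k[:, 2]  # one int per cell
    _, inv = np.unique(flat, return_inverse=True)
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, pts)                        # sum points per cell
    return sums / np.bincount(inv)[:, None]          # centroid per cell
```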
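The per-triangle texturing check can be sketched as: project the triangle into each candidate camera, reject it if it lies behind the camera, faces away, or is occluded in that camera's depth map, and otherwise record normalized UVs. Assumptions in this sketch: the pose is a 4x4 world-to-camera matrix, the winding convention puts front-facing normals toward the camera, and the image spans (2*cx, 2*cy) pixels (principal point at the center):

```python
import numpy as np

def triangle_uvs(tri, K, T_cam, depth=None, tol=0.02):
    """Project one triangle (3 world-space vertices) into a camera.
    Returns per-vertex UVs in [0, 1], or None if the triangle is behind
    the camera, back-facing, or occluded in the camera's depth map."""
    p = (np.c_[tri, np.ones(3)] @ T_cam.T)[:, :3]   # camera-frame vertices
    if np.any(p[:, 2] <= 0):                        # behind the camera
        return None
    n = np.cross(p[1] - p[0], p[2] - p[0])
    if np.dot(n, p.mean(axis=0)) > 0:               # normal points away: back-facing
        return None
    uvw = p @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                   # pixel coordinates
    if depth is not None:                           # depth-occlusion check
        for (u, v), z in zip(uv, p[:, 2]):
            ui, vi = int(round(u)), int(round(v))
            if not (0 <= vi < depth.shape[0] and 0 <= ui < depth.shape[1]):
                return None
            if depth[vi, ui] + tol < z:             # a closer surface hides this vertex
                return None
    size = np.array([2 * K[0, 2], 2 * K[1, 2]])     # assumed image size
    return uv / size                                # normalized UVs
```

Running this for every candidate camera and keeping, say, the most frontal accepted view gives the per-triangle UV assignment to store with the mesh.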