
Point Cloud Stitching Summary (3 Essential Pieces)


Point Cloud Stitching Summary, Part 1


VoxelNet, proposed by Zhou et al., was the first approach to rasterize a point cloud into voxels and then use the voxelized data as preprocessed input for feature extraction.

【Contribution】

Basic idea: rasterize the point cloud into voxels and apply a PointNet-style network inside each voxel, converting the cloud into a 4D tensor; standard convolutional methods can then be applied.


【Method】

The 3D point cloud is divided into a fixed number of voxels. After random sampling and normalization of the points, each non-empty voxel is passed through several VFE (Voxel Feature Encoding) layers to extract local features, producing voxel-wise features. These features are further abstracted by 3D convolutional middle layers, which enlarge the receptive field and learn geometric spatial representations, and finally an RPN (Region Proposal Network) performs object classification and position regression. This is the complete VoxelNet pipeline.
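The voxel-grouping step described above can be summarized with a short sketch. This is not the authors' code: the crop range, voxel size, and per-voxel point cap (max_points = 35) are illustrative assumptions in the spirit of the paper, and the per-voxel VFE layers themselves are left out.

```python
# Minimal sketch of voxel grouping for a point cloud, assuming an
# illustrative crop range and voxel size (not VoxelNet's official code).
import numpy as np

def voxelize(points, voxel_size=(0.2, 0.2, 0.4),
             pc_range=(0, -40, -3, 70.4, 40, 1), max_points=35):
    """points: (N, 3) array of x, y, z coordinates."""
    vsize = np.asarray(voxel_size)
    lo = np.asarray(pc_range[:3])
    hi = np.asarray(pc_range[3:])

    # keep only points inside the crop range
    mask = np.all((points >= lo) & (points < hi), axis=1)
    pts = points[mask]

    # integer voxel coordinates for every remaining point
    coords = np.floor((pts - lo) / vsize).astype(np.int32)

    # group points by their voxel index
    voxels = {}
    for p, c in zip(pts, map(tuple, coords)):
        voxels.setdefault(c, []).append(p)

    # random sampling inside each non-empty voxel, as described above
    out = {}
    for c, plist in voxels.items():
        plist = np.asarray(plist)
        if len(plist) > max_points:
            idx = np.random.choice(len(plist), max_points, replace=False)
            plist = plist[idx]
        out[c] = plist
    return out  # dict: voxel index -> (<= max_points, 3) points

if __name__ == "__main__":
    cloud = np.random.uniform([0, -40, -3], [70.4, 40, 1], size=(10000, 3))
    vox = voxelize(cloud)
    print(len(vox), "non-empty voxels")
```

After the VFE layers, the per-voxel features are scattered back to their grid positions, yielding the 4D (C × D × H × W) tensor mentioned above, which the 3D convolutional middle layers then consume.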

Loss function: VoxelNet targets the region proposal task and does not perform category classification, so its loss consists mainly of a 3D box regression term and a foreground/background (objectness) classification term:
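A hedged sketch of this loss, following the formulation given in the VoxelNet paper (notation as in that paper; α and β are balancing weights):

```latex
L = \alpha \frac{1}{N_{\mathrm{pos}}} \sum_{i} L_{\mathrm{cls}}\left(p_i^{\mathrm{pos}}, 1\right)
  + \beta \frac{1}{N_{\mathrm{neg}}} \sum_{j} L_{\mathrm{cls}}\left(p_j^{\mathrm{neg}}, 0\right)
  + \frac{1}{N_{\mathrm{pos}}} \sum_{i} L_{\mathrm{reg}}\left(\mathbf{u}_i, \mathbf{u}_i^{*}\right)
```

Here L_cls is a binary cross-entropy on the objectness scores of positive and negative anchors, and L_reg is a Smooth-L1 loss on the regression residuals u_i against the ground-truth targets u_i*.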

Definition of the box parameters, covering the center coordinates, length, width, height, and yaw angle:
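As in the paper, each 3D box is parameterized as (x_c, y_c, z_c, l, w, h, θ), and the regression target is the residual between a ground-truth box (superscript g) and a matched anchor (superscript a); a hedged reconstruction of those residuals:

```latex
\Delta x = \frac{x_c^{g} - x_c^{a}}{d^{a}}, \quad
\Delta y = \frac{y_c^{g} - y_c^{a}}{d^{a}}, \quad
\Delta z = \frac{z_c^{g} - z_c^{a}}{h^{a}}, \quad
\Delta l = \log\frac{l^{g}}{l^{a}}, \quad
\Delta w = \log\frac{w^{g}}{w^{a}}, \quad
\Delta h = \log\frac{h^{g}}{h^{a}}, \quad
\Delta\theta = \theta^{g} - \theta^{a},
\qquad d^{a} = \sqrt{\left(l^{a}\right)^{2} + \left(w^{a}\right)^{2}}
```

d^a is the diagonal of the anchor's base rectangle, used to normalize the center offsets.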

【Experiment】

Evaluation is carried out mainly on the Car, Pedestrian, and Cyclist categories of the KITTI dataset, comparing VoxelNet against several other models.

Point Cloud Stitching Summary, Part 2

1. Feature-based registration: detect feature points shared across the point clouds, such as edges and corners, match them using feature descriptors, and compute the rigid transformation between the clouds to register and stitch them. These methods achieve high accuracy but require point clouds rich in features.

2. Neighbor-search-based registration: within the overlapping region, exploit the geometric properties of neighboring points and iteratively refine the rigid transformation between the clouds by minimizing the registration error. These methods are computationally simple but sensitive to point density and noise.

3. Translation-rotation search: within the overlapping region, try different combinations of translations and rotations and keep the rigid transformation with the smallest registration error. This approach is intuitive, but its computational cost is high, making it unsuitable for large point clouds.

4. ICP-based registration: the Iterative Closest Point (ICP) algorithm repeatedly selects corresponding points in the two clouds and computes and refines the rigid transformation that minimizes the distance between the correspondences. Given a reasonable initial alignment it converges quickly and scales well to large point clouds, which makes it the most commonly used registration algorithm; a minimal sketch follows this list.
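As a concrete illustration of item 4, here is a minimal point-to-point ICP sketch in NumPy/SciPy. It is a generic textbook formulation rather than any particular library's implementation; the correspondence step uses a k-d tree, the incremental rigid transform is solved in closed form via SVD, and the iteration count and tolerance are illustrative assumptions.

```python
# Minimal point-to-point ICP sketch (generic formulation, illustrative settings).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form R, t minimizing ||R @ src_i + t - dst_i|| (Kabsch/SVD)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, max_iter=50, tol=1e-6):
    """Align source (N, 3) onto target (M, 3); returns a 4x4 transform."""
    tree = cKDTree(target)
    T = np.eye(4)
    src = source.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        dist, idx = tree.query(src)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                  # apply the incremental transform
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                         # accumulate the overall transform
        err = dist.mean()
        if abs(prev_err - err) < tol:        # mean correspondence distance converged
            break
        prev_err = err
    return T
```

In practice the result depends strongly on the initial alignment, which is why ICP is often preceded by a coarse feature-based registration such as method 1 above.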

There are many methods for point cloud stitching and fusion. Deep learning approaches, with their strong feature learning capability, often achieve the best registration accuracy, but they require large numbers of high-quality registered point cloud samples for training, which in turn depends on professional point cloud data collection and annotation.

The data annotation platform and 3D point cloud annotation services developed by 伞云智慧 build on core technologies for point cloud data collection and labeling, and can provide high-quality point cloud sample data and technical services to support the construction of point cloud registration and stitching systems.

Point Cloud Stitching Summary, Part 3

[1] Hackel T, Savinov N, Ladicky L, et al. Semantic3D.net: A new large-scale point cloud classification benchmark [J]. arXiv preprint arXiv:1704.03847, 2017.

[2] Armeni I, Sener O, Zamir AR, et al. 3D semantic parsing of large-scale indoor spaces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:1534-1543.

[3] Dai A, Chang AX, Savva M, et al. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:5828-5839.

[4] Roynard X. Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification [J]. International Journal of Robotics Research, 2018,37(6):545-557.

[5] Behley J, Garbade M, Milioto A, et al. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.

[6] Qi CR, Su H, Mo K, et al. PointNet: Deep learning on point sets for 3D classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017:652-660.

[7] Qi CR, Yi L, Su H, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Advances in Neural Information Processing Systems, 2017:5099-5108.

[8] Jiang M, Wu Y, Zhao T, et al. PointSIFT: A SIFT-like network module for 3D point cloud semantic segmentation [J]. arXiv preprint arXiv:1807.00652, 2018.

[9] Zhao H, Jiang L, Fu C-W, et al. PointWeb: Enhancing local neighborhood features for point cloud processing. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:5565-5573.

[10] Zhang Z, Hua B-S, Yeung S-K. ShellNet: Efficient point cloud convolutional neural networks using concentric shells statistics. Proceedings of the IEEE International Conference on Computer Vision, 2019:1607-1616.

[11] Hu Q, Yang B, Xie L, et al. RandLA-Net: Efficient semantic segmentation of large-scale point clouds. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020:11108-11117.

[12] Yang J, Zhang Q, Ni B, et al. Modeling point clouds with self-attention and Gumbel subset sampling. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:3323-3332.

[13] Chen L-Z, Li X-Y, Fan D-P, et al. LSANet: Feature learning on point sets by local spatial aware layer [J]. arXiv preprint arXiv:1905.05442, 2019.

[14] Zhao C, Zhou W, Lu L, et al. Pooling scores of neighboring points for improved 3D point cloud segmentation. 2019 IEEE International Conference on Image Processing (ICIP), 2019:1475-1479.

[15] Zhao Y, Birdal T, Deng H, et al. 3D point capsule networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:1009-1018.

[16] Wang Y, Sun Y, Liu Z, et al. Dynamic graph CNN for learning on point clouds [J]. ACM Transactions on Graphics (TOG), 2019,38(5):1-12.

[17] Arandjelovic R, Gronat P, Torii A, et al. NetVLAD: CNN architecture for weakly supervised place recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016:5297-5307.

[18] Hua B-S, Tran M-K, Yeung S-K. Pointwise convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:984-993.

[19] Komarichev A, Zhong Z, Hua J. A-CNN: Annularly convolutional neural networks on point clouds. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:7421-7430.

[20] Wang S, Suo S, Ma W-C, et al. Deep parametric continuous convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:2589-2597.

[21] Thomas H, Qi CR, Deschaud J-E, et al. KPConv: Flexible and deformable convolution for point clouds. Proceedings of the IEEE International Conference on Computer Vision, 2019:6411-6420.

[22] Engelmann F, Kontogianni T, Leibe B. Dilated point convolutions: On the receptive field of point convolutions [J]. arXiv preprint arXiv:1907.12046, 2019.

[23] Engelmann F, Kontogianni T, Hermans A, et al. Exploring spatial context for 3D semantic segmentation of point clouds. Proceedings of the IEEE International Conference on Computer Vision Workshops, 2017:716-724.

[24] Huang Q, Wang W, Neumann U. Recurrent slice networks for 3D segmentation of point clouds. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:2626-2635.

[25] Ye X, Li J, Huang H, et al. 3D recurrent neural networks with context fusion for point cloud semantic segmentation. Proceedings of the European Conference on Computer Vision (ECCV), 2018:403-417.

[26] Zhao Z, Liu M, Ramani K. DAR-Net: Dynamic aggregation network for semantic scene segmentation [J]. arXiv preprint arXiv:1907.12022, 2019.

[27] Liu F, Li S, Zhang L, et al. 3DCNN-DQN-RNN: A deep reinforcement learning framework for semantic parsing of large-scale 3D point clouds. Proceedings of the IEEE International Conference on Computer Vision, 2017:5678-5687.

[28] Landrieu L, Simonovsky M. Large-scale point cloud semantic segmentation with superpoint graphs. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018:4558-4567.

[29] Landrieu L, Boussaha M. Point cloud oversegmentation with graph-structured deep metric learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:7440-7449.

[30] Zhiheng K, Ning L. PyramNet: Point cloud pyramid attention network and graph embedding module for classification and segmentation [J]. arXiv preprint arXiv:1906.03299, 2019.

[31] Wang L, Huang Y, Hou Y, et al. Graph attention convolution for point cloud semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019:10296-10305.