RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments
Abstract
RGB-D cameras (such as the Microsoft Kinect) are novel sensing systems that capture RGB images along with per-pixel depth information. In this paper we investigate how such cameras can be used for building dense 3D maps of indoor environments. Such maps have applications in robot navigation, manipulation, semantic mapping, and telepresence. We present RGB-D Mapping, a full 3D mapping system that utilizes a novel joint optimization algorithm combining visual features and shape-based alignment. Visual and depth information are also combined for view-based loop-closure detection, followed by pose optimization to achieve globally consistent maps. We evaluate RGB-D Mapping on two large indoor environments, and show that it effectively combines the visual and shape information available from RGB-D cameras.
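As a rough illustration of the joint optimization mentioned above, the objective below sketches how a rigid camera transform T between two frames could be estimated by mixing sparse visual-feature correspondences with dense shape-based (point-to-plane ICP) terms. The weight \alpha, the correspondence sets A_f and A_d, and the symbols for feature points f, dense depth points p, and target surface normals n are illustrative assumptions, not necessarily the paper's exact formulation:

E(T) = \alpha \, \frac{1}{|A_f|} \sum_{(i,j) \in A_f} \left\| T f_s^i - f_t^j \right\|^2 \;+\; (1 - \alpha) \, \frac{1}{|A_d|} \sum_{(k,l) \in A_d} \left( \left( T p_s^k - p_t^l \right) \cdot n_t^l \right)^2

Here the first term penalizes the 3D distance between matched visual features after applying T, and the second is a standard point-to-plane error over densely associated depth points; minimizing such a combined error for each frame pair yields the relative pose estimates that a mapping pipeline of this kind would chain together and later correct through loop closure and pose optimization.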
Keywords