The demand for autonomous robots is growing rapidly, and the implementation of Simultaneous Localisation and Mapping (SLAM) is receiving increasing attention. A major component of SLAM is 3D mapping of the environment, which enables autonomous robots to perceive their surroundings much as a human does; depth (RGB-D) cameras are particularly useful for this task. This paper proposes a continuous real-time 3D mapping system that tackles the long-standing problem of point cloud distortion induced by dynamic objects in the frame. Our method uses the
Microsoft Kinect V1 as the RGB-D camera and Robot Operating System (ROS) packages such as Real-Time Appearance-Based Mapping (RTAB-Map) for 3D reconstruction. A ROS-based method implements dynamic object elimination in real time. To detect dynamic objects in the frame, two algorithms are used: the deep-learning-based tiny YOLOv3 and a machine-learning-based Haar cascade classifier.
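As a rough illustration of the elimination step described above, the sketch below masks detected dynamic objects out of the depth image before it would be used to build a point cloud. It is a minimal sketch, not the paper's implementation: it assumes the Haar cascade variant of the detector, OpenCV's stock haarcascade_fullbody.xml person model, and a depth frame registered to the color frame; the YOLOv3 branch and the ROS node wiring are omitted.

```python
# Minimal sketch (assumed, not the paper's code): detect people in an RGB
# frame with OpenCV's stock full-body Haar cascade and zero the matching
# regions of the registered depth image, so those pixels never enter the
# point cloud used for mapping.
import cv2
import numpy as np

# Stock OpenCV model; the paper may train or use a different cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

def mask_dynamic_objects(rgb, depth):
    """Zero out depth pixels covered by detected people.

    rgb:   HxWx3 uint8 color frame, registered to the depth frame.
    depth: HxW uint16 depth frame; 0 is treated as 'no measurement'.
    """
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    masked = depth.copy()
    for (x, y, w, h) in boxes:
        # Zeroed depth is ignored when the point cloud is generated.
        masked[y:y + h, x:x + w] = 0
    return masked
```

In a ROS setup, a function like this could sit in a node that subscribes to the Kinect's synchronized RGB and depth topics and republishes the masked depth for RTAB-Map to consume; the specific topic names and node layout used in the paper are not reproduced here.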