Neural Scene Representation for Locomotion on Structured Terrain

We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments. Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the algorithm estimates the topography in the robot's vicinity. The raw measurements from these cameras are noisy and only provide partial, occluded observations that in many cases do not show the terrain the robot stands on. We therefore propose a 3D reconstruction model that faithfully reconstructs the scene despite the noisy measurements and the large amounts of data missing due to the blind spots of the camera arrangement. The model consists of a 4D fully convolutional network on point clouds that learns geometric priors to complete the scene from context, and an auto-regressive feedback loop that leverages spatio-temporal consistency to use evidence from the past. The network can be trained solely on synthetic data, and thanks to extensive augmentation it remains robust in the real world, as shown in the validation on the quadrupedal robot ANYmal traversing challenging settings. We run the pipeline on the robot's onboard low-power computer using an efficient sparse tensor implementation and show that the proposed method outperforms classical map representations.

Published at RA-L/IROS 2022.
Work by David Hoeller, Nikita Rudin, Christopher Choy, Animashree Anandkumar, and Marco Hutter.
Paper:
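
To make the architecture concrete, below is a minimal sketch (not the authors' code) of what a 4D sparse fully convolutional encoder over (x, y, z, t) point coordinates could look like, assuming the MinkowskiEngine sparse tensor library. Layer widths, the voxel size, and the input generation are illustrative assumptions; the real model also includes scene completion and the auto-regressive feedback described above.

```python
# Hypothetical sketch of a 4D sparse convolutional encoder; not the published model.
import torch
import MinkowskiEngine as ME


class Sparse4DEncoder(torch.nn.Module):
    """Two sparse 4D convolution blocks over (x, y, z, t) coordinates."""

    def __init__(self, in_ch=1, out_ch=32, D=4):
        super().__init__()
        self.block = torch.nn.Sequential(
            ME.MinkowskiConvolution(in_ch, 16, kernel_size=3, stride=1, dimension=D),
            ME.MinkowskiBatchNorm(16),
            ME.MinkowskiReLU(),
            # Stride 2 downsamples all four dimensions, including time (a simplification).
            ME.MinkowskiConvolution(16, out_ch, kernel_size=3, stride=2, dimension=D),
            ME.MinkowskiBatchNorm(out_ch),
            ME.MinkowskiReLU(),
        )

    def forward(self, x: ME.SparseTensor) -> ME.SparseTensor:
        return self.block(x)


# Illustrative usage: quantize a cloud of (x, y, z, t) points into a sparse grid.
points = torch.rand(1000, 4)                    # x, y, z in meters, t in seconds (toy data)
feats = torch.ones(1000, 1)                     # occupancy-style point features
coords, feats = ME.utils.sparse_quantize(points, feats, quantization_size=0.05)
coords = ME.utils.batched_coordinates([coords])  # prepend batch index
x = ME.SparseTensor(features=feats, coordinates=coords)
out = Sparse4DEncoder()(x)
```

Operating on sparse tensors rather than dense voxel grids is what makes this kind of network cheap enough to run on the robot's onboard low-power computer.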