Past projects and research have shown that deep learning is a potent technique for training robots to perform specific tasks. To name a few, we've seen OpenAI use neural networks to train its Dactyl robot hand to solve a Rubik's Cube, and an algorithm dubbed 6-DoF GraspNet that helps robots pick up arbitrary objects.
Continuing the trend, researchers at UC Berkeley have created the Berkeley Autonomous Driving Ground Robot (BADGR). BADGR is an end-to-end autonomous robot trained on self-supervised data. Unlike most traditional robots, which depend on geometric data to plan a collision-free path, BADGR relies on 'experience' to traverse terrain.
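Here, 'self-supervised' means the labels used to train the model come from the robot's own sensors rather than from human annotators. As a rough sketch (the function name, parameter names, and thresholds below are hypothetical, not taken from the paper), events such as a collision or a bumpy patch of terrain can be labeled automatically from LIDAR and IMU readings:

```python
import numpy as np

def label_timestep(lidar_ranges, imu_angular_velocity,
                   collision_dist=0.3, bumpiness_thresh=1.0):
    """Derive training labels for one timestep from raw sensor readings.
    The thresholds here are illustrative placeholders, not the paper's values."""
    # Flag a collision when any LIDAR return is closer than ~0.3 m.
    collided = bool(np.min(lidar_ranges) < collision_dist)
    # Rough terrain shakes the robot, so a large IMU angular velocity reads as bumpy.
    bumpy = bool(np.linalg.norm(imu_angular_velocity) > bumpiness_thresh)
    return {"collision": collided, "bumpy": bumpy}
```

Because every label is computed mechanically like this, the robot can collect and annotate its own training data simply by driving around.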
At the heart of BADGR is an Nvidia Jetson TX2, which processes data from the on-board camera, a six-degree-of-freedom inertial measurement unit (IMU), a 2D LIDAR sensor, and a GPS unit. Specifically, BADGR houses an artificial neural network that is fed real-time camera observations along with a sequence of planned future actions.
The neural network then predicts how well each candidate sequence of actions will fare, allowing BADGR to pick the best path to a goal. This offers one key advantage over traditional approaches that treat path traversal as a purely geometric problem: where traditional techniques would have steered clear of tall grass in the path, BADGR can navigate straight through it. Moreover, this approach allows BADGR to improve as it gathers more data. The researchers stated:
The key insight behind BADGR is that by autonomously learning from experience directly in the real world, BADGR can learn about navigational affordances, improve as it gathers more data, and generalize to unseen environments.
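To make that pipeline concrete, below is a minimal PyTorch sketch of this kind of action-conditioned prediction and planning. The class name, layer sizes, image resolution, two-dimensional action space, and the random-shooting planner are all illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    """Illustrative BADGR-style model: encode the current camera image,
    then unroll an LSTM over a sequence of planned actions to predict
    per-step events (e.g., collision, bumpiness)."""

    def __init__(self, action_dim=2, hidden_dim=128, num_events=2):
        super().__init__()
        # Small CNN encoder for a 3x64x64 camera image (size is an assumption).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(hidden_dim),
        )
        # Recurrent core: one step per planned action (e.g., steering, throttle).
        self.rnn = nn.LSTM(action_dim, hidden_dim, batch_first=True)
        self.event_head = nn.Linear(hidden_dim, num_events)

    def forward(self, image, actions):
        # image: (B, 3, 64, 64); actions: (B, horizon, action_dim)
        h0 = self.encoder(image).unsqueeze(0)  # initial hidden state from the image
        c0 = torch.zeros_like(h0)
        out, _ = self.rnn(actions, (h0, c0))   # (B, horizon, hidden_dim)
        return self.event_head(out)            # per-step event logits

def plan(model, image, horizon=8, num_candidates=256):
    """Random-shooting planner: sample candidate action sequences, score each
    by its predicted collision probability, and return the cheapest one."""
    candidates = 2 * torch.rand(num_candidates, horizon, 2) - 1  # actions in [-1, 1]
    with torch.no_grad():
        # image: (1, 3, 64, 64), broadcast to every candidate sequence
        events = torch.sigmoid(model(image.expand(num_candidates, -1, -1, -1), candidates))
    cost = events[..., 0].sum(dim=1)  # event 0 = collision, summed over the horizon
    return candidates[cost.argmin()]
```

The actual system predicts a richer set of events and uses a more sophisticated stochastic optimizer than plain random shooting, but the control loop has the same shape: predict outcomes for candidate action sequences, score them, execute the best one, and replan.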
Moving forward, the team noted that BADGR's success raises some open questions. Chiefly, how can the robot gather data safely in an unseen, perhaps even hostile, environment? And how can BADGR adapt to dynamic environments with moving obstacles such as walking humans?
If you are interested in finding out more, you can read the paper on arXiv. The researchers have also published their implementation in BADGR's GitHub repository.