General-Purpose Sim2Real Protocol for Learning Contact-Rich Manipulation With Marker-Based Visuotactile Sensors

TRO 2024

Weihang Chen*, Jing Xu*†, Fanbo Xiang, Xiaodi Yuan, Hao Su†, Rui Chen†





Abstract

Visuotactile sensors provide rich contact information and have great potential in contact-rich manipulation tasks with reinforcement learning (RL) policies. Sim2Real techniques address RL's reliance on large amounts of interaction data. However, most Sim2Real methods for manipulation tasks with visuotactile sensors rely on rigid-body physics simulation, which cannot precisely simulate the elastic deformation of real sensors. Moreover, these methods do not exploit the characteristics of tactile signals when designing the network architecture.

In this article, we build a general-purpose Sim2Real protocol for manipulation policy learning with marker-based visuotactile sensors. To improve the simulation fidelity, we employ an FEM-based physics simulator that can simulate the sensor deformation accurately and stably for arbitrary geometries.
We further propose a novel tactile feature extraction network that directly processes the set of pixel coordinates of tactile sensor markers, together with a self-supervised pretraining strategy, to improve the efficiency and generalizability of RL policies. We conduct extensive Sim2Real experiments on the peg-in-hole task to validate the effectiveness of our method, and we further show its generalizability on additional tasks including plug adjustment and lock opening.

Method

Accurate Physics Simulation

We present a physics simulation method for marker-based visuotactile sensors using Incremental Potential Contact (IPC), which is based on the finite element method (FEM). IPC accurately simulates large deformations and dynamic properties of elastomers, employing barrier energy for contact modeling and continuous collision detection to enable stable simulation at large time steps. Our method models robot actions as Dirichlet boundary conditions on the elastomer mesh, simulating sensor deformation by prescribing the positions and velocities of the constrained vertices. This approach enables accurate and efficient simulation of visuotactile sensors, leading to a small domain gap between simulation and the real sensor.
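As an illustration of how a robot action enters the simulation as a Dirichlet boundary condition, the sketch below prescribes target positions and velocities for the elastomer vertices attached to the rigid sensor base. This is a minimal sketch; the solver call at the end is a hypothetical stand-in, not the paper's IPC implementation.

# Sketch: impose a robot action as a Dirichlet boundary condition on the
# elastomer mesh (illustrative only; `ipc_solve_step` is a hypothetical
# stand-in for the FEM/IPC solver described in the paper).
import numpy as np

def apply_dirichlet_boundary(vertices, boundary_mask, delta_pose, dt):
    """Compute target positions and velocities for constrained vertices.

    vertices:      (N, 3) current elastomer vertex positions
    boundary_mask: (N,) bool, True for vertices glued to the rigid sensor base
    delta_pose:    (4, 4) rigid transform of the sensor base over this step
    dt:            simulation time step
    """
    targets = vertices.copy()
    moved = (delta_pose[:3, :3] @ vertices[boundary_mask].T).T + delta_pose[:3, 3]
    targets[boundary_mask] = moved
    velocities = np.zeros_like(vertices)
    velocities[boundary_mask] = (moved - vertices[boundary_mask]) / dt
    return targets, velocities

# The constrained positions/velocities are then handed to the solver, which
# computes the free vertices' motion under elasticity and contact:
# new_vertices = ipc_solve_step(vertices, targets, boundary_mask, dt)  # hypothetical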

Efficient Tactile Feature Extraction

In this work, we use marker flow as the tactile sensor signal and propose an efficient tactile feature extractor based on PointNet. The marker-based tactile representation and the point cloud learning architecture inherently handle the set of marker positions and extract both global and local tactile features. Randomization during training further enhances generalizability and improves Sim2Real performance.
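The sketch below shows a minimal PointNet-style encoder over the set of marker coordinates; only the global feature branch is shown, and the 4-D per-marker input [u0, v0, u, v] and the layer sizes are illustrative assumptions, not the paper's exact architecture.

# A minimal PointNet-style marker-flow encoder (sketch under assumptions:
# input layout and layer sizes are illustrative, not the paper's exact design).
import torch
import torch.nn as nn

class MarkerFlowEncoder(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared per-marker MLP applied to each marker independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, marker_flow: torch.Tensor) -> torch.Tensor:
        # marker_flow: (batch, num_markers, 4), each row = (u0, v0, u, v).
        per_marker = self.point_mlp(marker_flow)   # (B, N, feat_dim)
        global_feat, _ = per_marker.max(dim=1)     # permutation-invariant max pooling
        return global_feat

# Usage: a batch of 8 frames, 63 markers per frame.
encoder = MarkerFlowEncoder()
feat = encoder(torch.rand(8, 63, 4))  # -> (8, 128) global tactile feature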

Self-Supervised Pretraining

To enhance sample efficiency and training stability, we pretrain the tactile feature extractor with an autoencoder. We design a decoder that reconstructs the current marker positions from the initial marker positions and the latent feature.
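A sketch of this pretraining objective is given below, under the assumptions of a per-marker MSE reconstruction loss and an illustrative decoder structure; neither is claimed to match the paper's exact design.

# Self-supervised pretraining sketch: the decoder takes each marker's initial
# position plus the global latent feature and predicts its current position.
import torch
import torch.nn as nn

class MarkerDecoder(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),  # predicted current (u, v) per marker
        )

    def forward(self, init_pos: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        # init_pos: (B, N, 2), latent: (B, feat_dim)
        latent = latent.unsqueeze(1).expand(-1, init_pos.shape[1], -1)
        return self.mlp(torch.cat([init_pos, latent], dim=-1))

def pretrain_step(encoder, decoder, marker_flow, optimizer):
    """One autoencoder step: encode the marker flow, reconstruct current positions."""
    init_pos, cur_pos = marker_flow[..., :2], marker_flow[..., 2:]
    latent = encoder(marker_flow)
    recon = decoder(init_pos, latent)
    loss = nn.functional.mse_loss(recon, cur_pos)  # assumed MSE reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()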



Performance

Better Performance

We achieve zero-shot Sim2Real for high-precision contact-rich manipulation tasks.

Peg Insertion (Speed: 3X)

Lock Opening (Speed: 4X)

Comparison of Marker-Based and Image-Based Tactile Representations

We design an ablation study to compare our proposed marker-based tactile representation with the conventional image-based tactile representation. With identical randomization parameters, the marker-based representation demonstrates clear advantages over the image-based one.

Effectiveness of Pretraining in Enhancing Early Sim2Real Performance

Here we demonstrate that using the pretrained tactile encoder allows the policy to achieve a high Sim2Real success rate even at very early training stages.



Bibtex
@ARTICLE{chen2024tactilesim2real,
  author={Chen, Weihang and Xu, Jing and Xiang, Fanbo and Yuan, Xiaodi and Su, Hao and Chen, Rui},
  journal={IEEE Transactions on Robotics},
  title={General-Purpose Sim2Real Protocol for Learning Contact-Rich Manipulation With Marker-Based Visuotactile Sensors},
  year={2024},
  volume={40},
  number={},
  pages={1509-1526},
  doi={10.1109/TRO.2024.3352969}
}