Reducing Tactile Sim2Real Domain Gaps via Deep Texture Generation Networks

IEEE International Conference on Robotics and Automation (2022)

Abstract
Recently, simulation methods have been developed for optical tactile sensors to enable Sim2Real learning, i.e., first training models in simulation before deploying them on a real robot. However, some artefacts in real objects are unpredictable, such as imperfections caused by fabrication processes or scratches from natural wear and tear, and thus cannot be represented in the simulation, resulting in a significant gap between the simulated and real tactile images. To address this Sim2Real gap, we propose a novel texture generation network that maps simulated images into photorealistic tactile images resembling a real sensor contacting a real, imperfect object. Each simulated tactile image is first divided into two types of regions: areas that are in contact with the object and areas that are not. The former are given generated textures learned from real tactile images, whereas the latter maintain the appearance of the sensor when it is not in contact with any object. This ensures that the artefacts are applied only to the deformed regions of the sensor. Our extensive experiments show that the proposed texture generation network generates realistic artefacts on the deformed regions of the sensor while avoiding leakage of texture into areas of no contact. Quantitative experiments further reveal that when the adapted images generated by our network are used for a Sim2Real classification task, the drop in accuracy caused by the Sim2Real gap is reduced from 38.43% to merely 0.81%. As such, this work has the potential to accelerate Sim2Real learning for robotic tasks requiring tactile sensing.
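The abstract describes compositing generated textures onto only the contacted regions of the simulated tactile image, leaving non-contact regions untouched. Below is a minimal sketch of that masked blending step; the function name, the binary contact mask, and the generator output are assumptions for illustration, since the paper's actual network architecture and mask extraction are not specified here.

    import numpy as np

    def composite_textured_image(sim_image, contact_mask, textured_image):
        """Blend generator output into contact regions only (illustrative sketch).

        sim_image:      simulated tactile image, float array in [0, 1], shape (H, W, C)
        contact_mask:   binary mask of pixels in contact with the object, shape (H, W)
        textured_image: hypothetical generator output carrying learned real-sensor
                        textures, shape (H, W, C)

        Non-contact pixels keep the simulated no-contact appearance, so artefacts
        are not leaked outside the deformed region of the sensor.
        """
        mask = contact_mask[..., None].astype(sim_image.dtype)  # broadcast mask over channels
        return mask * textured_image + (1.0 - mask) * sim_image

This kind of mask-gated composite is one straightforward way to realise the constraint stated in the abstract; the actual learned mapping in the paper may differ.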
Keywords
deep texture generation