SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks

2017 IEEE International Conference on Robotics and Automation (ICRA) (2016)

Abstract
We introduce SE3-Nets, deep networks designed to model rigid body motion from raw point cloud data. Given only pairs of depth images along with an action vector and point-wise data associations, SE3-Nets learn to segment the affected object parts and predict their motion resulting from the applied force. Rather than learning point-wise flow vectors, SE3-Nets predict SE3 transformations for different parts of the scene. Using simulated depth data of a tabletop scene and a robot manipulator, we show that the structure underlying SE3-Nets enables them to generate far more consistent predictions of object motion than traditional flow-based networks.
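The core idea in the abstract — predicting one SE(3) transform per scene part and a segmentation that assigns points to parts, rather than a free-form flow vector per point — can be illustrated with a short sketch. This is not the paper's network; it only shows, with hypothetical inputs, how K predicted rigid transforms and K soft segmentation masks combine into a per-point motion prediction:

```python
import numpy as np

def apply_se3(points, R, t):
    """Apply a rigid SE(3) transform (rotation R, translation t) to (N, 3) points."""
    return points @ R.T + t

def blend_part_motions(points, masks, transforms):
    """
    Blend per-part rigid motions into a per-point prediction.
    points:     (N, 3) input point cloud
    masks:      (K, N) soft segmentation weights; each column sums to 1
    transforms: list of K (R, t) pairs, one SE(3) transform per part
    Returns the predicted (N, 3) point cloud after motion.
    """
    out = np.zeros_like(points)
    for k, (R, t) in enumerate(transforms):
        out += masks[k][:, None] * apply_se3(points, R, t)
    return out

# Toy example: two parts, one rotated 90 degrees about z, one static.
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
pts = np.array([[1.0, 0.0, 0.0],   # belongs to the moving part
                [0.0, 0.0, 1.0]])  # belongs to the static part
masks = np.array([[1.0, 0.0],      # part 0 owns point 0
                  [0.0, 1.0]])     # part 1 owns point 1
pred = blend_part_motions(pts, masks,
                          [(Rz, np.zeros(3)), (np.eye(3), np.zeros(3))])
# pred[0] is rotated to [0, 1, 0]; pred[1] stays at [0, 0, 1]
```

Because every point in a part shares one rigid transform, the predicted motion is consistent across the part by construction, which is the structural advantage the abstract claims over per-point flow networks.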
Keywords
SE3-Nets, rigid body motion learning, deep neural networks, point cloud data, depth image sequence, action vectors, point-wise data associations, tabletop scene, robot manipulator, depth camera, Baxter robot