Human Grasp Classification for Reactive Human-to-Robot Handovers

Wei Yang, Chris Paxton

IROS, pp. 11123-11130, 2020.


Abstract:

Transfer of objects between humans and robots is a critical capability for collaborative robots. Although there has been a recent surge of interest in human-robot handovers, most prior research focuses on robot-to-human handovers. Further, work on the equally critical human-to-robot handovers often assumes humans can place the object in the robot's gripper […]

Introduction
  • Transfer of objects between humans and robots is a critical capability for collaborative robots.
  • Most work focuses on transfer of objects from the robot to the human, assuming the human can just place the object in the robot’s gripper for the reverse direction.
  • The authors categorize human grasps into several categories (Fig. 1) and collect a dataset to learn a deep model that classifies a given human hand holding an object into one of those grasp categories.
  • This approach enables handovers that can adapt to the way that the human is presenting the object to the robot and meet them half way to take the object.
  • The authors compare the system with two baseline methods, one without inferring the human hand pose and the other relying on independent hand pose estimation.
Highlights
  • Transfer of objects between humans and robots is a critical capability for collaborative robots
  • Our approach enables handovers that can adapt to the way that the human is presenting the object to the robot and meet them half way to take the object
  • We compare our system with two baseline methods, one without inferring the human hand pose and the other relying on independent hand pose estimation
  • We report the performance of our human grasp classification model
  • Evaluation of Human Grasp Classification: we evaluate our hand grasp classification model on a validation set collected from a subject unseen during training
  • Attentive: we demonstrated the set of five human grasps shown in Fig. 3: pinch-top, pinch-bottom, pinch-side, lifting, and on-open-palm (a classifier sketch for these categories follows this list)
  • The main limitation of our approach is that it applies only to a single set of grasp types, so we plan to learn the correct grasp poses for different grasp types from data instead of using manually-specified rules
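To make the classification step concrete, below is a minimal sketch of a point-cloud grasp classifier for the five categories listed above. It is an illustrative PointNet-style network (per-point MLP plus symmetric max pooling), not the authors' actual model; the class name HandGraspClassifier, the layer sizes, and the training setup are assumptions. The paper cites PointNet++ [53], which would add hierarchical set abstraction on top of this flat pooling.

# Hypothetical sketch: a PointNet-style classifier that maps a segmented
# hand point cloud to one of the five grasp categories named in the paper.
# Layer sizes and architecture details are assumptions, not the authors' model.
import torch
import torch.nn as nn

GRASP_CLASSES = ["pinch-top", "pinch-bottom", "pinch-side", "lifting", "on-open-palm"]

class HandGraspClassifier(nn.Module):
    def __init__(self, num_classes=len(GRASP_CLASSES)):
        super().__init__()
        # Shared per-point MLP, applied to every 3D point independently.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        # Classification head over the max-pooled global feature.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, points):                  # points: (batch, num_points, 3)
        feats = self.point_mlp(points)          # (batch, num_points, 256)
        global_feat = feats.max(dim=1).values   # symmetric pooling over points
        return self.head(global_feat)           # (batch, num_classes) logits

if __name__ == "__main__":
    model = HandGraspClassifier()
    cloud = torch.randn(2, 1024, 3)             # two dummy hand point clouds
    logits = model(cloud)
    print([GRASP_CLASSES[i] for i in logits.argmax(dim=1).tolist()])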
Results
  • The human grasp model achieves a higher detection rate and is more robust than [4], especially under heavy occlusion (e.g., 87.5% vs. 6.8% for pinch-side and 94.4% vs. 11.9% for lifting).
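The detection rates quoted above are per-class frequencies of producing a valid grasp estimate. A small bookkeeping sketch with hypothetical inputs shows how such numbers are tallied:

# Hypothetical sketch of per-class detection-rate bookkeeping:
# detection rate = frames with a valid estimate / all frames of that grasp class.
# The input format and the demo counts are illustrative, not the paper's data.
from collections import defaultdict

def detection_rate_per_class(frames):
    """frames: iterable of (grasp_class, detected) pairs, where detected is a bool."""
    seen = defaultdict(int)
    hits = defaultdict(int)
    for grasp_class, detected in frames:
        seen[grasp_class] += 1
        hits[grasp_class] += int(detected)
    return {c: hits[c] / seen[c] for c in seen}

if __name__ == "__main__":
    demo = [("pinch-side", True)] * 7 + [("pinch-side", False)] \
         + [("lifting", True)] * 17 + [("lifting", False)]
    print(detection_rate_per_class(demo))   # e.g. {'pinch-side': 0.875, 'lifting': 0.944...}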
Conclusion
  • The authors described a system that enables fluid human-robot handovers by classifying different types of human grasps.
  • The authors will make the planning system more flexible and support more grasp types.
  • The authors believe the same approach could be applied to many other types of human-robot collaboration.
  • The main limitation of the approach is that it applies only to a single set of grasp types, so the authors plan to learn the correct grasp poses for different grasp types from data instead of using manually-specified rules.
  • The authors plan to make robot motions more legible and friendly.
Tables
  • Table 1: Operators and their corresponding preconditions LP for task execution and reactive execution. Operators are listed in descending order of priority; if all of an operator's preconditions are true, that operator is executed regardless of which operator was executed previously (a sketch of this selection rule follows this table list).
  • Table 2: Handover performance on our quantitative metrics. Planning success rate reflects how often the system needed to replan its approach, while grasp success rate counts how often the system successfully took the object.
  • Table 3: Quantitative results from the user study. Users were able to complete tasks quickly even when they were distracted and had to concentrate on a different task.
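To make Table 1's execution rule concrete, the following is a minimal sketch of a priority-ordered operator selector, assuming each operator carries a list of precondition predicates evaluated against the current world state. The operator names and predicates are hypothetical placeholders, not the authors' actual task plan.

# Minimal sketch of the execution rule described for Table 1: operators are kept
# in descending priority, and on every tick we run the highest-priority operator
# whose preconditions all hold, regardless of what ran before.
# Operator and predicate names below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Operator:
    name: str
    preconditions: List[Callable[[Dict], bool]]  # predicates over the world state

def select_operator(operators: List[Operator], state: Dict) -> Operator:
    """Return the first (highest-priority) operator whose preconditions all hold."""
    for op in operators:  # list is assumed to be sorted by descending priority
        if all(p(state) for p in op.preconditions):
            return op
    raise RuntimeError("no operator is applicable in the current state")

# Illustrative priority list for a human-to-robot handover.
OPERATORS = [
    Operator("retreat", [lambda s: s["collision_risk"]]),
    Operator("close_gripper", [lambda s: s["object_in_gripper_region"],
                               lambda s: s["grasp_classified"]]),
    Operator("approach_object", [lambda s: s["grasp_classified"]]),
    Operator("wait_for_human", []),  # lowest priority: always applicable
]

if __name__ == "__main__":
    state = {"collision_risk": False, "object_in_gripper_region": False, "grasp_classified": True}
    print(select_operator(OPERATORS, state).name)  # -> "approach_object"

Because every precondition is re-evaluated on every tick, the selector can drop back to a lower-priority operator or jump to a higher-priority one (such as retreating) at any time, which is what makes the execution reactive rather than a fixed sequence.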
Related work
  • Human-robot handovers have recently become a popular topic within human-robot collaboration [9] across a multitude of application areas from collaborative manufacturing [10], [11], [12] to assistance in the home [13], [14], [15], [16]. A large majority of this work focuses on robot-to-human handovers in which the robot starts with an object in hand and transfers it to the human. A key challenge is choosing parameters of the robot’s actions to optimize for a fluent handover. This includes the choice of object pose and robot’s grasp on the object, taking into account user comfort [17], preferences based on subjective feedback [18], affordances and intended use of the objects after the handover [19], [20], [21], [22], [23], motion constraints of the human [13], social role of the human [24], and configuration of the object when being grasped before the handover [25]. Other work emphasizes parameters of the trajectory to reach the handover pose, exploring the approach angle [11], starting pose of trajectory in contrast to the handover pose [15], motion smoothness [26], object release time [27], estimated human wrist pose [28], [29], relative timing of handover phases [30], and ergonomic preferences of humans [31]. While some work focuses on offline computation of handover parameters, most recent work involves perception of the human to enable reactive handovers [32], [28], [33], [34].
References
  • L. Ge, Z. Ren, and J. Yuan, “Point-to-point regression PointNet for 3D hand pose estimation,” in ECCV, 2018, pp. 475–491.
  • U. Iqbal, P. Molchanov, T. Breuel, J. Gall, and J. Kautz, “Hand pose estimation via latent 2.5D heatmap regression,” in ECCV, 2018, pp. 118–134.
  • L. Ge, Z. Ren, Y. Li, Z. Xue, Y. Wang, J. Cai, and J. Yuan, “3D hand shape and pose estimation from a single RGB image,” in CVPR, 2019, pp. 10833–10842.
  • Y. Xiang, T. Schmidt, V. Narayanan, and D. Fox, “PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes,” in Robotics: Science and Systems, 2017.
  • Y. Hasson, G. Varol, D. Tzionas, I. Kalevatykh, M. J. Black, I. Laptev, and C. Schmid, “Learning joint reconstruction of hands and manipulated objects,” in CVPR, 2019, pp. 11807–11816.
  • C. Zimmermann, D. Ceylan, J. Yang, B. Russell, M. Argus, and T. Brox, “FreiHAND: A dataset for markerless capture of hand pose and shape from single RGB images,” in ICCV, 2019, pp. 813–822.
  • S. Hampali, M. Rad, M. Oberweger, and V. Lepetit, “HOnnotate: A method for 3D annotation of hand and object poses,” 2019.
  • C. Paxton, N. Ratliff, C. Eppner, and D. Fox, “Representing robot task plans as robust logical-dynamical systems,” in IROS, 2019.
  • A. Bauer, D. Wollherr, and M. Buss, “Human–robot collaboration: A survey,” International Journal of Humanoid Robotics, vol. 5, no. 1, pp. 47–66, 2008.
  • A. Koene, A. Remazeilles, M. Prada, A. Garzo, M. Puerto, S. Endo, and A. M. Wing, “Relative importance of spatial and temporal precision for user satisfaction in human-robot object handover interactions,” in Third International Symposium on New Frontiers in Human-Robot Interaction, 2014.
  • V. V. Unhelkar, H. C. Siu, and J. A. Shah, “Comparative performance of human and mobile robotic assistants in collaborative fetch-and-deliver tasks,” in HRI. IEEE, 2014, pp. 82–89.
  • W. Wang, R. Li, Z. M. Diekel, Y. Chen, Z. Zhang, and Y. Jia, “Controlling object hand-over in human–robot collaboration via natural wearable sensing,” IEEE Transactions on Human-Machine Systems, vol. 49, no. 1, pp. 59–71, 2018.
  • J. Mainprice, M. Gharbi, T. Simeon, and R. Alami, “Sharing effort in planning human-robot handover tasks,” in RO-MAN. IEEE, 2012, pp. 764–770.
  • C.-M. Huang, M. Cakmak, and B. Mutlu, “Adaptive coordination strategies for human-robot handovers,” in Robotics: Science and Systems, 2015.
  • M. Cakmak, S. S. Srinivasa, M. K. Lee, S. Kiesler, and J. Forlizzi, “Using spatial and temporal contrast for fluent robot-human handovers,” in HRI, 2011, pp. 489–496.
  • E. C. Grigore, K. Eder, A. G. Pipe, C. Melhuish, and U. Leonards, “Joint action understanding improves robot-to-human object handover,” in IROS. IEEE, 2013, pp. 4622–4629.
  • J. Aleotti, V. Micelli, and S. Caselli, “Comfortable robot to human object hand-over,” in RO-MAN. IEEE, 2012, pp. 771–776.
  • M. Cakmak, S. S. Srinivasa, M. K. Lee, J. Forlizzi, and S. Kiesler, “Human preferences for robot-human hand-over configurations,” in IROS. IEEE, 2011, pp. 1986–1993.
  • J. Aleotti, V. Micelli, and S. Caselli, “An affordance sensitive system for robot to human object handover,” International Journal of Social Robotics, vol. 6, no. 4, pp. 653–666, 2014.
  • W. P. Chan, Y. Kakiuchi, K. Okada, and M. Inaba, “Determining proper grasp configurations for handovers through observation of object movement patterns and inter-object interactions during usage,” in IROS. IEEE, 2014, pp. 1355–1360.
  • A. Bestick, R. Bajcsy, and A. D. Dragan, “Implicitly assisting humans to choose good grasps in robot to human handovers,” in International Symposium on Experimental Robotics. Springer, 2016, pp. 341–354.
  • W. P. Chan, M. K. Pan, E. A. Croft, and M. Inaba, “An affordance and distance minimization based method for computing object orientations for robot human handovers,” International Journal of Social Robotics, pp. 1–20, 2019.
  • F. Cini, V. Ortenzi, P. Corke, and M. Controzzi, “On the choice of grasp type and location when handing over an object,” Science Robotics, vol. 4, no. 27, p. eaau9757, 2019.
  • S. Kato, N. Yamanobe, G. Venture, E. Yoshida, and G. Ganesh, “The where of handovers by humans: Effect of partner characteristics, distance and visual feedback,” PLoS ONE, vol. 14, no. 6, 2019.
  • P. Ardon, E. Pairet, S. Ramamoorthy, and K. S. Lohan, “Towards robust grasps: Using the environment semantics for robotic object affordances,” in Proceedings of the AAAI Fall Symposium on Reasoning and Learning in Real-World Systems for Long-Term Autonomy, 2018, pp. 5–12.
  • E. De Momi, L. Kranendonk, M. Valenti, N. Enayati, and G. Ferrigno, “A neural network-based approach for trajectory planning in robot–human handover tasks,” Frontiers in Robotics and AI, vol. 3, p. 34, 2016.
  • Z. Han and H. Yanco, “The effects of proactive release behaviors during human-robot handovers,” in HRI. IEEE, 2019, pp. 440–448.
  • G. J. Maeda, G. Neumann, M. Ewerton, R. Lioutikov, O. Kroemer, and J. Peters, “Probabilistic movement primitives for coordination of multiple human–robot collaborative tasks,” Autonomous Robots, vol. 41, no. 3, pp. 593–612, 2017.
  • A. Sidiropoulos, E. Psomopoulou, and Z. Doulgeri, “A human inspired handover policy using Gaussian mixture models and haptic cues,” Autonomous Robots, vol. 43, no. 6, pp. 1327–1342, 2019.
  • A. Kshirsagar, H. Kress-Gazit, and G. Hoffman, “Specifying and synthesizing human-robot handovers,” in IROS. IEEE, 2019, pp. 5930–5936.
  • S. Parastegari, B. Abbasi, E. Noohi, and M. Zefran, “Modeling human reaching phase in human-human object handover with application in robot-human handover,” in IROS. IEEE, 2017, pp. 3597–3602.
  • L. Peternel, W. Kim, J. Babic, and A. Ajoudani, “Towards ergonomic control of human-robot co-manipulation and handover,” in IROS. IEEE, 2017, pp. 55–60.
  • A. Kupcsik, D. Hsu, and W. S. Lee, “Learning dynamic robot-to-human object handover from human feedback,” in Robotics Research. Springer, 2018, pp. 161–176.
  • T. Zhou and J. P. Wachs, “Early prediction for physical human robot collaboration in the operating room,” Autonomous Robots, vol. 42, no. 5, pp. 977–995, 2018.
  • H. Admoni, A. Dragan, S. S. Srinivasa, and B. Scassellati, “Deliberate delays during robot-to-human handovers improve compliance with gaze communication,” in HRI, 2014, pp. 49–56.
  • A. Moon, D. M. Troniak, B. Gleeson, M. K. Pan, M. Zheng, B. A. Blumer, K. MacLean, and E. A. Croft, “Meet me where I’m gazing: How shared attention gaze affects human-robot handover timing,” in HRI, 2014, pp. 334–341.
  • S. M. zu Borgsen, J. Bernotat, and S. Wachsmuth, “Hand in hand with robots: Differences between experienced and naive users in human-robot handover scenarios,” in International Conference on Social Robotics. Springer, 2017, pp. 587–596.
  • C. Becchio, L. Sartori, and U. Castiello, “Toward you: The social side of actions,” Current Directions in Psychological Science, vol. 19, no. 3, pp. 183–188, 2010.
  • M. Huber, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer, “Human-robot interaction in handing-over tasks,” in RO-MAN. IEEE, 2008, pp. 107–112.
  • S. Shibata, B. M. Sahbi, K. Tanaka, and A. Shimizu, “An analysis of the process of handing over an object and its application to robot motions,” in IEEE International Conference on Systems, Man, and Cybernetics: Computational Cybernetics and Simulation, vol. 1. IEEE, 1997, pp. 64–69.
  • W. P. Chan, C. A. Parker, H. M. Van der Loos, and E. A. Croft, “Grip forces and load forces in handovers: Implications for designing human-robot handover controllers,” in HRI, 2012, pp. 9–16.
  • C. Shi, M. Shiomi, C. Smith, T. Kanda, and H. Ishiguro, “A model of distributional handing interaction for a mobile robot,” in Robotics: Science and Systems, 2013, pp. 24–28.
  • S. Parastegari, E. Noohi, B. Abbasi, and M. Zefran, “Failure recovery in robot–human object handover,” IEEE Transactions on Robotics, vol. 34, no. 3, pp. 660–673, 2018.
  • A. Edsinger and C. C. Kemp, “Human-robot interaction for cooperative manipulation: Handing objects to one another,” in RO-MAN. IEEE, 2007, pp. 1167–1172.
  • M. K. Pan, V. Skjervøy, W. P. Chan, M. Inaba, and E. A. Croft, “Automated detection of handovers using kinematic features,” IJRR, vol. 36, no. 5-7, pp. 721–738, 2017.
  • D. Vogt, S. Stepputtis, B. Jung, and H. B. Amor, “One-shot learning of human–robot handovers with triadic interaction meshes,” Autonomous Robots, vol. 42, no. 5, pp. 1053–1065, 2018.
  • N. Marturi, M. Kopicki, A. Rastegarpanah, V. Rajasekaran, M. Adjigble, R. Stolkin, A. Leonardis, and Y. Bekiroglu, “Dynamic grasp and trajectory planning for moving objects,” Autonomous Robots, vol. 43, no. 5, pp. 1241–1256, 2019.
  • A. Handa, K. Van Wyk, W. Yang, J. Liang, Y.-W. Chao, Q. Wan, S. Birchfield, N. Ratliff, and D. Fox, “DexPilot: Vision-based teleoperation of dexterous robotic hand-arm system,” in ICRA, 2020, to appear.
  • C. R. Garrett, C. Paxton, T. Lozano-Perez, L. P. Kaelbling, and D. Fox, “Online replanning in belief space for partially observable task and motion problems,” in ICRA, 2019.
  • M. Colledanchise and P. Ogren, “Behavior trees in robotics and AI: An introduction,” 2018.
  • C. Paxton, A. Hundt, F. Jonathan, K. Guerin, and G. D. Hager, “CoSTAR: Instructing collaborative robots with behavior trees and vision,” in ICRA. IEEE, 2017, pp. 564–571.
  • C. Paxton, F. Jonathan, A. Hundt, B. Mutlu, and G. D. Hager, “Evaluating methods for end-user creation of robot task plans,” in IROS. IEEE, 2018, pp. 6086–6092.
  • C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “PointNet++: Deep hierarchical feature learning on point sets in a metric space,” in NeurIPS, 2017, pp. 5099–5108.
  • “Azure Kinect DK,” https://docs.microsoft.com/en-us/azure/kinect-dk/, accessed 2020-02-28.
  • T. Feix, J. Romero, H.-B. Schmiedmayer, A. M. Dollar, and D. Kragic, “The GRASP taxonomy of human grasp types,” IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 66–77, 2015.
  • A. Murali, A. Mousavian, C. Eppner, C. Paxton, and D. Fox, “6-DOF grasping for target-driven object manipulation in clutter,” in ICRA, 2020, to appear.
  • K. Kase, C. Paxton, H. Mazhar, T. Ogata, and D. Fox, “Transferable task execution from pixels through deep planning domain learning,” in ICRA. IEEE, 2020, to appear.
  • J. J. Kuffner and S. M. LaValle, “RRT-Connect: An efficient approach to single-query path planning,” in ICRA, vol. 2. IEEE, 2000, pp. 995–1001.
  • A. D. Dragan, K. C. Lee, and S. S. Srinivasa, “Legibility and predictability of robot motion,” in HRI. IEEE, 2013, pp. 301–308.