Abstract

One basic action in automatic clothing handling is to grasp, with one hand, a specific part of a clothing item that is held by the other hand. To carry out this action, the 3D position and posture of the target part are indispensable. However, it is quite difficult to find a specific part of a clothing item in an arbitrary shape. Here, we propose a model-driven approach using a deformable clothing model. First, possible 3D shapes are predicted by simulating the physical deformation of the clothing. Then, after observing the target item with a stereo camera system, each predicted shape is compared with the observed 3D data to select the most consistent one. During this comparison, each candidate shape is deformed by a virtual force that attracts it toward the observed data. The clothing state is determined by selecting the shape that fits the observed data best.
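The following is a minimal sketch (not the implementation used in the papers) of this selection step: each candidate mesh predicted by the deformation simulation is pulled toward the observed 3D points by a simple virtual attraction force, and the candidate with the smallest remaining residual is chosen. The function names and the nearest-point force law are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_candidate(vertices, observed, iterations=50, step=0.1):
    """Deform one candidate shape toward the observed point cloud.

    vertices : (N, 3) candidate mesh vertex positions (assumed input)
    observed : (M, 3) observed 3D points from the stereo system
    Returns the deformed vertices and the mean residual distance (fit score).
    """
    tree = cKDTree(observed)
    v = vertices.copy()
    for _ in range(iterations):
        # Virtual force: each vertex is attracted to its nearest observed point.
        dist, idx = tree.query(v)
        force = observed[idx] - v
        v += step * force               # move a fraction of the way each step
    residual = tree.query(v)[0].mean()  # how well the deformed model fits
    return v, residual

def select_best_state(candidates, observed):
    """Pick the predicted clothing state most consistent with the observation."""
    results = [fit_candidate(c, observed) for c in candidates]
    best = min(range(len(results)), key=lambda i: results[i][1])
    return best, results[best][0]
```

In the actual method the deformation also respects the mesh structure of the clothing model; the sketch above only illustrates the attract-and-score idea.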


Once the clothing state is recognized in this model-driven way, the position and pose of the target part (e.g. the shoulder of a pullover) can be calculated from the 3D information of the model segment corresponding to that part. This, in turn, determines the origin and the three axes of the hand coordinate frame best suited for grasping the part.
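Below is a hedged sketch of how such a grasp frame could be derived from the fitted model segment. The particular choice of axes (the dominant direction of the segment and its approximate surface normal) is an illustrative assumption, not the exact convention used in the papers.

```python
import numpy as np

def grasp_frame(segment_vertices):
    """segment_vertices : (N, 3) points of the fitted model segment (assumed input).

    Returns (origin, R) where the columns of R are the x, y, z axes of the
    hand coordinate frame expressed in the world frame.
    """
    origin = segment_vertices.mean(axis=0)
    # PCA of the segment: the two largest components span the local surface,
    # the smallest one approximates the surface normal (approach direction).
    centered = segment_vertices - origin
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    x_axis = vt[0]                      # dominant direction along the part
    z_axis = vt[2]                      # approximate surface normal
    y_axis = np.cross(z_axis, x_axis)   # completes a right-handed frame
    y_axis /= np.linalg.norm(y_axis)
    R = np.column_stack([x_axis, y_axis, z_axis])
    return origin, R
```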

We applied the proposed method to a toddler sweater while changing the holding positions. The mesh clothing models superimposed on the observed images show the recognition results. Recognition succeeded in five of the six cases. Based on these results, the actions of grasping the target part were successfully carried out.

Demo


Publications

1) Y. Kita, F. Saito, N. Kita: "A deformable model driven visual method for handling clothes", Proc. of Int. Conf. on Robotics and Automation (ICRA04), Vol. 4, pp. 3889-3895, 2004/4.

2) Y. Kita, T. Ueshiba, E. S. Neo, N. Kita: "Clothes state recognition using 3D observed data", Proc. of Int. Conf. on Robotics and Automation (ICRA09), pp. 1220-1225, 2009/5.

3) Y. Kita, T. Ueshiba, E. S. Neo, N. Kita: "A method for handling a specific part of clothing by dual arms", Proc. of Int. Conf. on Intelligent Robotics and Systems (IROS09), pp. 4180-4185, 2009/10.