Human Interface Research
- Facial Expression Communication
- Gesture Recognition
Facial Expression Communication
This work focuses on facial expression as a means of achieving smooth human
tele-communication. In human communication, "gestures" and "facial
expressions" are considered among the most significant factors.
First, to realize a real-time facial expression communication system,
we proposed the facial expression space (FES), constructed with principal
component analysis (PCA), which drastically reduces the dimensionality of
face images to the FES without requiring any explicit facial features,
such as eyebrows. Secondly, a correspondence technique between personal
facial expression spaces was proposed using an affine transformation.
Finally, a real-time facial expression transportation system was developed,
which transmits the facial expression rather than the image itself.
This system can synthesize the same facial expression on another person's
face, and even on cartoon characters.
The experimental results show the validity of these criteria.
Fig: Facial Expression Transportation
Fig: Real-Time Facial Expression Trans.
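The pipeline above can be sketched in a few lines: flatten face images, build a low-dimensional expression space with PCA, then fit a least-squares affine map between two people's spaces so one person's expression coordinates can drive another's face. This is a minimal illustrative sketch, not the authors' implementation; the image sizes, component count, and random toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_fes(images, n_components=3):
    """PCA: project flattened face images into a low-dimensional FES."""
    X = images.reshape(len(images), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Principal axes via SVD; rows of Vt are eigenvectors of the covariance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components]
    return mean, basis, Xc @ basis.T  # coordinates in the FES

def fit_affine(src, dst):
    """Least-squares affine map src -> dst between two personal FESs."""
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

# Toy data (an assumption): two "people", 20 frames of 8x8 face images each.
faces_a = rng.random((20, 8, 8))
faces_b = rng.random((20, 8, 8))

mean_a, basis_a, fes_a = build_fes(faces_a)
mean_b, basis_b, fes_b = build_fes(faces_b)

# Correspondence: carry A's expression coordinates into B's space, then
# back-project to synthesize B's face showing A's expression.
M = fit_affine(fes_a, fes_b)
mapped = np.hstack([fes_a, np.ones((len(fes_a), 1))]) @ M
synthesized = mapped @ basis_b + mean_b
print(synthesized.shape)  # one synthesized face image per input frame
```

Because only the few FES coordinates (plus the fixed affine map) need to cross the network, this kind of scheme transmits far less data than sending the face images themselves.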
- Kohtaro Ohba, Takehito Tsukada, Tetsuo Kotoku and Kazuo Tanie,
- Facial Expression Space for Smooth Tele-Communications
- Proc. Third IEEE International Conference on Automatic Face and Gesture Recognition (FG'98),
- pp. 378-383,
- 14-16 April, 1998, Nara (Japan),
- Kohtaro Ohba, Guillaume Clary, Takehito Tsukada, Tetsuo Kotoku and Kazuo Tanie,
- Facial Expression Communication with FES
- Proc. 14th International Conference on Pattern Recognition (ICPR'98),
- pp. 1376-1378,
- 16-20 Aug., 1998, Brisbane (Australia),
- Kohtaro Ohba, Guillaume Clary, Yoshihito Hiratsuka, Takehito Tsukada,
- Tetsuo Kotoku and Kazuo Tanie,
- Gesture and Facial Expression on Tele-Robotics
- Proc. IEEE/RSJ International Conference on Intelligent Robots and System (IROS'98),
- pp. 378-383,
- 13-17 Oct., 1998, Victoria (Canada),
This paper describes a real-time system for classifying several human
motions with principal component analysis, i.e. the eigen space method.
First, to overcome drawbacks of the original eigen space technique,
such as sensitivity to background noise, silhouette images are used instead
of the original images. Secondly, a curvature in eigen space is derived
from a sequence of silhouette images. Finally, a correspondence technique
between dictionary curvatures and an input curvature is discussed to
classify human motions.
Experimental results show the validity of the proposed method.
Fig: Making a curvature in eigen space.
Fig: Gesture identification in eigen space.
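The three steps above can be sketched as follows: build an eigen space from silhouette frames, project each motion sequence into it to obtain its curvature (the curve it traces in eigen space), and classify an input by finding the nearest dictionary curvature. This is an illustrative sketch under assumptions, not the published implementation; the frame sizes, random toy silhouettes, gesture names, and the mean point-wise distance measure are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def eigen_space(frames, k=3):
    """Mean and top-k principal axes of flattened silhouette frames."""
    X = frames.reshape(len(frames), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def curvature(frames, mean, basis):
    """Curve traced in eigen space by a silhouette sequence."""
    X = frames.reshape(len(frames), -1).astype(float)
    return (X - mean) @ basis.T

def classify(curve, dictionary):
    """Nearest dictionary curve by mean point-wise distance (an assumption)."""
    dists = {name: np.linalg.norm(curve - ref, axis=1).mean()
             for name, ref in dictionary.items()}
    return min(dists, key=dists.get)

# Toy binary silhouettes (assumed): 10 frames of 16x16 per gesture.
wave = (rng.random((10, 16, 16)) > 0.5).astype(float)
bow = (rng.random((10, 16, 16)) > 0.5).astype(float)

mean, basis = eigen_space(np.concatenate([wave, bow]))
dictionary = {"wave": curvature(wave, mean, basis),
              "bow": curvature(bow, mean, basis)}

# An input sequence identical to "wave" matches the "wave" curve exactly.
label = classify(curvature(wave, mean, basis), dictionary)
print(label)  # → wave
```

Using silhouettes rather than raw images is what makes the eigen-space projection insensitive to background clutter: only the body outline contributes to the curve.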
- Yoshihito Hiratsuka, Kohtaro Ohba, Shinya Kajikawa and Hikaru Inooka,
- Real-Time Human Motion Classification with Visual Information,
- Trans. Japanese Soc. Mech. Eng., (to appear)