This project was proposed and researched by Masataka Goto.
Interaction between a human player and a lifelike computer character through musical performance was reported previously [Goto, 1993], but interaction between human players was not considered in that work. In real jam sessions, musical interaction between players is essential and is achieved by exchanging not only auditory information but also auxiliary visual information, such as gestures that indicate a repetition or the end of a song.
To enhance visual interaction in a jam session and give the players a more expressive way to communicate, we introduce a virtual dancer called Cindy who is visible to both players. Two players, a drummer and a guitarist (or pianist), can choreograph Cindy together through their real-time musical improvisation. The players can use three kinds of interaction at the same time: 1) interaction between the players through their musical sounds, 2) interaction between each player and Cindy, and 3) interaction between the players through Cindy. The first kind is the musical interaction of a conventional jam session. The second kind is both direct and indirect: in the direct interaction, the drummer determines the timing of Cindy's motions, since each drum sound is directly mapped to one of Cindy's motions. There are six predefined sets of mappings between seven kinds of drum sounds and Cindy's motions, each corresponding to a different mood of dance. In the indirect interaction, the guitarist switches between these mapping sets by changing the mood of his musical improvisation. Even while the guitarist is playing, Cindy does not move unless the drummer determines the motion timing; information from both players is necessary to choreograph the dance as a whole. The third kind of interaction, cooperation between the two players, is therefore important in creating a complete performance.
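The division of control described above can be sketched in code. The following is an illustrative sketch only, not the original implementation: the mood names, motion labels, and class names are all hypothetical, and the mapping sets are generated schematically rather than hand-designed as in the actual system.

```python
# Hypothetical sketch of the mood-dependent mapping mechanism:
# six mapping sets, each mapping seven drum sounds to dance motions.
# All names and motion labels are invented for illustration.

DRUM_SOUNDS = ["bass", "snare", "hihat", "tom_hi", "tom_lo", "ride", "crash"]
MOODS = ["calm", "groovy", "wild", "jazzy", "funky", "latin"]

# One mapping set per mood; every drum sound maps to some motion.
MAPPING_SETS = {
    mood: {sound: f"{mood}_motion_{i}" for i, sound in enumerate(DRUM_SOUNDS)}
    for mood in MOODS
}

class Dancer:
    """Motion dispatcher: the guitarist's mood selects the mapping set
    (indirect control); each drum sound then triggers the mapped motion
    at the moment it is played (direct control of timing)."""

    def __init__(self):
        self.mood = "calm"

    def set_mood(self, mood):
        # Indirect interaction: the guitarist's improvisation mood
        # switches the whole mapping set.
        self.mood = mood

    def on_drum_sound(self, sound):
        # Direct interaction: the drummer's sound determines both
        # which motion fires and when it fires.
        return MAPPING_SETS[self.mood][sound]

cindy = Dancer()
print(cindy.on_drum_sound("snare"))   # → calm_motion_1
cindy.set_mood("wild")
print(cindy.on_drum_sound("snare"))   # → wild_motion_1 (same sound, new mood)
```

Note how neither player alone produces a dance: without drum sounds nothing fires, and without the guitarist's mood changes the motion vocabulary never varies.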
This lifelike computer character system has been implemented on distributed workstations connected by Ethernet. MIDI data from the two players' performances is sent both to a graphics workstation that displays Cindy and to a workstation that controls a MIDI instrument producing the sounds. In our experimental setup, all three kinds of interaction were performed on the system, and the multimodal interaction produced very satisfactory results.
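The routing just described amounts to fanning out each MIDI message to two consumers. A minimal sketch under assumed design (the class name, consumer interface, and example message bytes are hypothetical; the real system sends over Ethernet rather than in-process callbacks):

```python
# Hypothetical fan-out of incoming MIDI messages to both hosts:
# the graphics workstation animating Cindy and the workstation
# driving the MIDI tone generator.

class MidiFanout:
    def __init__(self, *consumers):
        # Each consumer stands in for a network sender to one workstation.
        self.consumers = consumers

    def on_message(self, msg):
        # The same MIDI data reaches every host, so animation and
        # sound production stay consistent with the live performance.
        for consumer in self.consumers:
            consumer(msg)

graphics_log, sound_log = [], []
fanout = MidiFanout(graphics_log.append, sound_log.append)
fanout.on_message((0x99, 38, 100))   # note-on, channel 10, snare-like hit
```

Both logs end up with the identical message, mirroring how both workstations observe the same performance stream.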