This project is proposed and researched by
Sora Murofushi¹, Tomoyasu Nakano², Masataka Goto², and Shigeo Morishima¹
¹ Waseda University, Japan
² National Institute of Advanced Industrial Science and Technology (AIST), Japan
We propose DanceReProducer, a dance video authoring system that automatically generates dance video appropriate to a given piece of music by segmenting and concatenating existing dance video sequences. The system provides an environment in which users not only listen to music in the conventional way but can also enjoy it visually, directing dance video content according to their own tastes. In this paper, we focus on reusing the huge quantity of user-generated dance video content already posted on the Web. Because this content was created independently by many people, various kinds of relationships exist between the music and the image sequences. Although previous work has addressed automatic music video creation, existing methods have not exploited such relationships. In our system, the image sequence that best matches a music segment is generated automatically by a mapping model trained through analysis of the videos. In the training process, in addition to simple linear regression from music parameters to video feature vectors, we use the Web view count of each video as a weight, since the view count reflects content quality. Moreover, DanceReProducer provides assistance that lets a user direct each scene interactively according to his or her preferences. A trial application of the system showed that users found it a useful tool, and their comments attested to its effectiveness.
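As a rough illustration of the training step described above, the sketch below shows how a view-count-weighted linear regression from music features to video feature vectors could be set up. This is not the authors' implementation: the feature dimensions, the placeholder data, the log-scaling of view counts, and the use of scikit-learn are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the authors' code): learn a linear map
# from music features to video features, weighting each training video by
# its Web view count as a proxy for content quality.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical training data: one row per video segment.
X_music = rng.normal(size=(500, 24))   # e.g., beat/chroma-style music features
Y_video = rng.normal(size=(500, 16))   # e.g., motion/color video features
views = rng.integers(100, 1_000_000, size=500)

# Log-compress view counts so a few very popular videos do not dominate
# (an assumption about how the weighting might be scaled).
weights = np.log1p(views)

# Multi-output weighted linear regression: music features -> video features.
model = LinearRegression().fit(X_music, Y_video, sample_weight=weights)

# At generation time, predict the target video feature vector for a new
# music segment; the existing video segment closest to this target in
# feature space would then be chosen for concatenation.
target = model.predict(rng.normal(size=(1, 24)))
```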
DanceReProducer Demonstration