In real life, the locations of sound sources are rarely static. When listeners
turn their heads to the left or right in a real-world environment, the
flow of sound waves from their viewpoint changes instantly. Is it possible to
transmit this kind of sensation online?
Our answer is, 'Yes, it is.'
On this site, we introduce our technology for the online transmission of full spherical spatial sound information. The technology is named 'SOPA', which stands for 'Stream Of Panoramic Audio', and its features can be summarized as follows.
Please check several examples of panoramic sound in "Scenes with panoramic sounds".
A unique feature of this technology is that it allows for interactive manipulation of panning.
In real-world situations, the locations of sound sources vary according to the movement of the listener's head. If a listener directly facing a sound source, for instance a loudspeaker, turns his or her head to the left or right, the sound no longer arrives from the front. Head movements can therefore be interpreted as movements of the world from the listener's viewpoint. This never occurs when listening to music through ordinary headphones, since the headphones rotate along with the head.
SOPA makes it possible to control the panning of the sound during its reproduction.
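The idea can be sketched in a few lines of Java. This is a minimal illustration, not the SOPA implementation: we assume the spatial property assigns each sound component an azimuth in degrees (0 = front, positive = to the listener's right), and that the renderer compensates for the listener's head yaw; all names here are ours.

```java
// Sketch: compensating a sound component's azimuth for listener head
// rotation, the basis of interactive panning. Conventions are assumptions.
public class PanningControl {

    /** Returns the azimuth relative to the listener after a head turn. */
    public static double relativeAzimuth(double sourceAzimuthDeg, double headYawDeg) {
        double rel = sourceAzimuthDeg - headYawDeg;
        // Wrap into (-180, 180] so directions stay single-valued.
        rel = ((rel % 360.0) + 360.0) % 360.0;
        if (rel > 180.0) rel -= 360.0;
        return rel;
    }

    public static void main(String[] args) {
        // A source straight ahead appears 30 degrees to the left
        // after the listener turns 30 degrees to the right.
        System.out.println(relativeAzimuth(0.0, 30.0)); // -30.0
    }
}
```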
SOPA data contain not only an audio data stream but also its spatial property. The spatial property can be used to form a beam.
For more information, see Panoramic beamformer.
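To convey the concept, here is a minimal sketch of a direction-selective gain, the idea behind beamforming with a spatial property: components whose recorded azimuth lies inside the beam are kept, and others are attenuated. The cosine-tapered window and all names are our assumptions, not the published panoramic beamformer algorithm.

```java
// Sketch: keep sound components arriving from within a chosen beam and
// suppress the rest, using the per-component azimuth from the spatial
// property. Window shape and names are assumptions.
public class BeamGain {

    /** Cosine-tapered gain for a component at the given azimuth (degrees). */
    public static double gain(double componentAzimuthDeg, double beamAzimuthDeg,
                              double beamWidthDeg) {
        double d = Math.abs(componentAzimuthDeg - beamAzimuthDeg);
        if (d > 180.0) d = 360.0 - d;           // shortest angular distance
        if (d >= beamWidthDeg) return 0.0;      // outside the beam
        return 0.5 * (1.0 + Math.cos(Math.PI * d / beamWidthDeg));
    }

    public static void main(String[] args) {
        System.out.println(gain(0.0, 0.0, 60.0));  // on-axis: 1.0
        System.out.println(gain(90.0, 0.0, 60.0)); // outside the beam: 0.0
    }
}
```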
When there are two or more observation points, panoramic sound can be generated not only at the observation points but also at any point between them. This allows the listener to walk seamlessly between the observation points while listening to panoramic sound.
For more information, see Panoramic sound morphing.
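As a rough illustration only, interpolation between two observation points might look like the sketch below. We assume each point supplies a level and an azimuth per component; the linear blend and short-arc angle interpolation are our simplifications, not the published morphing algorithm.

```java
// Sketch: blending the panoramic sound of two observation points A and B.
// t = 0 gives point A, t = 1 gives point B, values between give positions
// between them. Assumed representation: per-component level and azimuth.
public class SoundMorph {

    /** Linearly blends a component's level between the two points. */
    public static double morphLevel(double levelA, double levelB, double t) {
        return (1.0 - t) * levelA + t * levelB;
    }

    /** Blends azimuth along the shorter arc between the two directions. */
    public static double morphAzimuth(double azA, double azB, double t) {
        double d = azB - azA;
        if (d > 180.0) d -= 360.0;
        if (d < -180.0) d += 360.0;
        double az = azA + t * d;
        return ((az % 360.0) + 360.0) % 360.0;
    }

    public static void main(String[] args) {
        // Halfway between 350 and 10 degrees is 0, via the short arc.
        System.out.println(morphAzimuth(350.0, 10.0, 0.5)); // 0.0
    }
}
```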
A SOPA file contains only a monaural audio data stream and the spatial property of the sounds. How to reproduce the data is not specified anywhere in the SOPA file; how to acoustically reconstruct the space from a SOPA file is left open to developers.
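Since reproduction is left to developers, one trivial renderer sketch is shown below: a mono sample with an azimuth is mapped to stereo gains by constant-power amplitude panning. A serious renderer would use head-related transfer functions instead; the gain law and names here are our assumptions.

```java
// Sketch of one possible (very crude) SOPA renderer back end: map a
// component azimuth to left/right gains with a constant-power pan law.
public class StereoRenderer {

    /** Maps azimuth (-90..90 deg, 0 = front) to {left, right} gains. */
    public static double[] pan(double azimuthDeg) {
        double clamped = Math.max(-90.0, Math.min(90.0, azimuthDeg));
        // -90 deg (full left) -> angle 0; +90 deg (full right) -> angle pi/2.
        double angle = (clamped + 90.0) / 180.0 * (Math.PI / 2.0);
        return new double[] { Math.cos(angle), Math.sin(angle) };
    }

    public static void main(String[] args) {
        double[] g = pan(0.0); // front: equal left and right gains
        System.out.println(g[0] + " " + g[1]);
    }
}
```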
Although all demonstrations presented on this site require stereo headphones, spatial sound information can also be reconstructed from a SOPA file for multi-channel loudspeaker systems.
For instance, we constructed the 6-channel loudspeaker system illustrated in the figure below to reproduce SOPA data. The system is named the 'Panoramic Sound Spot System.'
Although this 6-channel loudspeaker system is rather small and intended for personal use, it could be scaled up for theatrical performances.
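A multi-channel renderer must distribute each component across the loudspeakers. As a sketch under stated assumptions, suppose the 6 loudspeakers form a ring at 60-degree spacing (the actual Panoramic Sound Spot System layout is shown in the figure); a component can then be panned between the adjacent pair:

```java
// Sketch: pan a component azimuth onto a ring of 6 equally spaced
// loudspeakers by a constant-power crossfade between the adjacent pair.
// The 60-degree ring layout is an assumption for illustration.
public class SpeakerRing {

    /** Returns {index of first speaker, gainFirst, gainSecond}. */
    public static double[] panToPair(double azimuthDeg) {
        double az = ((azimuthDeg % 360.0) + 360.0) % 360.0;
        int first = (int) (az / 60.0) % 6;        // speaker at first*60 degrees
        double frac = (az - first * 60.0) / 60.0; // position within the pair
        double angle = frac * (Math.PI / 2.0);    // constant-power crossfade
        return new double[] { first, Math.cos(angle), Math.sin(angle) };
    }

    public static void main(String[] args) {
        // 90 degrees lies halfway between speakers 1 (60 deg) and 2 (120 deg).
        double[] r = panToPair(90.0);
        System.out.println((int) r[0] + " " + r[1] + " " + r[2]);
    }
}
```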
A Java program for the 6-channel Panoramic Sound Spot System 'pss6tr' is available. You can download a jar file from pss6tr.jar (SOPA archives).
A Java program for the 4-channel Panoramic Sound Spot System 'pss4tr' is also available. You can download a jar file from pss4tr.jar (SOPA archives).
As mentioned above, a tetrahedral microphone system consists of 4 omnidirectional microphones. By replacing them with hydrophones, panoramic sound can also be recorded underwater. See underwater panoramic sound for further information.
Here, we provide samples demonstrating the capabilities of SOPA.
Stereo headphones are required for the demonstrations.
A demo can be started by clicking the thumbnail below.
Once the image is loaded and the page says "Click to start" or "Tap to start," you can start reproducing the sound by clicking (or tapping) the image.
If an instruction saying "Tap on the image" appears, tap the screen and the browser will start loading data. When the necessary data have been loaded, the page will say "Tap to start" and you can start reproducing panoramic sound by tapping the image.
During reproduction, the panning of the sound changes according to the motion of the mouse pointer.
On a mobile device, pan and tilt can be controlled by rotating the device.
Thanks to Three.js (http://threejs.org/)
Kaoru Ashihara, "Tetrahedral microphone system for a virtual cardioid microphone." IEICE Tech. Rep., 115, 98 (PRMU2015-37), 31-36, 2015
Reika Uchimi, Takashi Miyaura, Kouta Nakagawa (TCU), Kaoru Ashihara (AIST), Shogo Kiryu (TCU), "Directionality control by using spatial property of sounds." IEICE Tech. Rep., 118, 190 (EA2018-32), 25-30, 2018
Takashi Miyaura, Reika Uchimi, Kaoru Ashihara, Shogo Kiryu, "Spatial sound morphing by using a miniature head simulator array." IEICE Tech. Rep., 117, 328 (EA2017-68), 55-60, 2017
Reika Uchimi, Hiroaki Sato, Kaoru Ashihara, Shogo Kiryu, "Sound sensing by using panoramic sound morphing and panoramic beamformer." Proc. Auditory Res. Meeting, The Acoustical Society of Japan, 49, 8, H-2019-107, 2019
Motoki Kawano, Kaoru Ashihara, Kosuke Taki, Kazuki Azuma, Shogo Kiryu, "Demonstration of the panoramic sound field system." Proc. Auditory Res. Meeting, H-2014-83, 2014