Introduction to My Practical Project

Term two is over, and with it the taught syllabus of my postgraduate degree. Now we begin our final projects. I have chosen:

Localisation in binaural-based Ambisonics using individualised HRTFs.

I understand this title includes some technical terms, so let me break it down…

Localisation: The human ability to determine the direction of a sound source.
Binaural-Based Ambisonics (BBAs): Creating a virtual 3D speaker array over headphones that allows sounds to be placed anywhere around the listener, rather than the left/right panning offered by conventional stereo techniques (see the encoding sketch after these definitions).
Individualised HRTFs (Head-Related Transfer Functions): A set of data relating to the shape of the head and ears of an individual listener. They can be thought of as the filter our own heads apply to sound entering the ear.
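
To make the Ambisonics side more concrete, here is a minimal sketch (in Python with NumPy, my own illustration rather than any particular library's API) of the classic first-order B-format encoding equations, which place a mono signal at a chosen azimuth and elevation:

```python
import numpy as np

def encode_first_order(mono, azimuth_deg, elevation_deg):
    """Encode a mono signal into first-order B-format (W, X, Y, Z).

    Azimuth is measured anticlockwise from straight ahead,
    elevation upwards from the horizontal plane.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = mono * (1.0 / np.sqrt(2.0))      # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front-back
    y = mono * np.sin(az) * np.cos(el)   # left-right
    z = mono * np.sin(el)                # up-down
    return w, x, y, z

# Example: place a 1 kHz tone 45 degrees to the left, slightly raised
fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
w, x, y, z = encode_first_order(tone, azimuth_deg=45, elevation_deg=10)
```

Rendering these W/X/Y/Z channels binaurally then means feeding them through virtual speakers, each filtered with an HRTF, which is where the third definition comes in.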

A simple example of binaural audio involves recording a sound with two microphones placed in the ears of a person or a dummy head. When played back over headphones, the listener will (ideally) hear the audio as though they were in the original environment.

The step up from that is to capture a set of HRTFs. These let us apply a direction-specific filter to a sound so that, when played back through headphones, it appears to come from that direction. A full set of HRTFs allows a sound to be placed anywhere on a virtual sphere around the listener.
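
In signal-processing terms, applying an HRTF for one direction amounts to convolving the source with the measured impulse response for each ear. A minimal sketch, assuming hypothetical HRIR files hrir_left.wav and hrir_right.wav for a single measured direction:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical HRIR files (one impulse response per ear) for a single
# measured direction; a full HRTF set has one such pair per direction.
_, hrir_left = wavfile.read("hrir_left.wav")
_, hrir_right = wavfile.read("hrir_right.wav")
fs, mono = wavfile.read("source.wav")   # mono source to be spatialised

mono = mono.astype(np.float64)  # avoid integer overflow in convolution

# Convolving with each ear's impulse response applies the filtering the
# listener's head and ears would have imposed on a real source there.
left = fftconvolve(mono, hrir_left.astype(np.float64))
right = fftconvolve(mono, hrir_right.astype(np.float64))

binaural = np.stack([left, right], axis=1)
binaural /= np.max(np.abs(binaural))    # normalise to avoid clipping
wavfile.write("binaural.wav", fs, binaural.astype(np.float32))
```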

The field of Ambisonics has been around since the 1970s but, thanks to the recent rise of virtual-reality gaming and video, is now more relevant than ever. BBAs are currently used on several platforms, from 360° videos on Facebook and YouTube to VR gaming on headsets such as the Oculus Rift, which means there is real demand to improve the technology, particularly its localisation accuracy and its availability.

With this project, I aim to determine whether using our own HRTFs (rather than generic ones) improves our ability to localise sounds in a BBA environment. My instinct says yes. However, capturing a set of HRTFs is currently difficult, as it involves a human participant sitting very still for a long period, so the resulting HRTFs can be of poorer quality than generic ones. One part of my research will therefore involve developing a method to shorten the HRTF capture process.

My results will be determined through a series of listening tests in which participants try to pinpoint the location of a sound played in a BBA environment, once using their own HRTFs and once using generic HRTFs.
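
For scoring those tests, one plausible metric (a sketch of my own, not a settled part of the methodology) is the great-circle angle between the true and reported source directions:

```python
import numpy as np

def angular_error_deg(true_az, true_el, resp_az, resp_el):
    """Great-circle angle (degrees) between true and reported directions.

    Azimuth/elevation are given in degrees; both directions are treated
    as unit vectors, so the error is simply the angle between them.
    """
    def to_vec(az, el):
        az, el = np.radians(az), np.radians(el)
        return np.array([np.cos(az) * np.cos(el),
                         np.sin(az) * np.cos(el),
                         np.sin(el)])
    cos_angle = np.clip(np.dot(to_vec(true_az, true_el),
                               to_vec(resp_az, resp_el)), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Example: source at 30° left, participant reported 45° left
print(angular_error_deg(30, 0, 45, 0))  # ~15.0
```

Comparing the distribution of this error under individualised versus generic HRTFs would then show whether personalisation actually helps.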

 

TL;DR
In a virtual 3D environment, does using a personalised set of filters, based on the shape of our own heads, improve our ability to determine where a sound is coming from?
