The goal of this project is to develop a novel system that we call the Vocal Joystick (VJ). This device will enable individuals with motor impairments to use vocal parameters to control objects on a computer screen (buttons, sliders, etc.) and ultimately electro-mechanical instruments (e.g., robotic arms, wireless home automation devices).
Standard spoken language can be quite inefficient for such continuous control tasks and is often recognized poorly by automatic speech recognizers. The VJ system, in contrast, will allow users to exploit a large and varied set of vocalizations for both continuous and discrete motion control, and this set will be selected to maximize discrimination ability while keeping communication bandwidth low. These vocalizations may include regular speech sounds, such as vowels and consonants, but the primary focus will be on the variation of individual acoustic-phonetic parameters, such as pitch, energy, vowel quality, and voice quality. Furthermore, users will receive visual feedback from the system, enabling them to adjust their vocalizations on the fly.
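To make the idea of continuous control via acoustic-phonetic parameters concrete, the following is a minimal illustrative sketch, not the VJ implementation itself: it estimates energy (RMS) and pitch (crude autocorrelation) from a single audio frame and maps them to a cursor velocity. All function names, the reference pitch, and the pitch-to-vertical / energy-to-horizontal mapping are assumptions made for illustration.

```python
import numpy as np

def frame_energy(frame):
    """Root-mean-square energy of one audio frame (assumed float samples)."""
    return float(np.sqrt(np.mean(frame ** 2)))

def frame_pitch(frame, sr, fmin=80.0, fmax=400.0):
    """Crude autocorrelation pitch estimate in Hz; returns 0.0 if no
    plausible lag is found (e.g., silence or an unvoiced frame)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag search range
    if hi >= len(ac) or ac[0] <= 0:
        return 0.0
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def control_step(frame, sr, ref_pitch=150.0, gain=200.0):
    """Hypothetical mapping: pitch deviation from a reference (in octaves)
    drives vertical speed; frame energy drives horizontal speed."""
    e = frame_energy(frame)
    f0 = frame_pitch(frame, sr)
    dy = 0.0 if f0 == 0.0 else gain * np.log2(f0 / ref_pitch)
    dx = gain * e
    return dx, dy
```

In a real system these per-frame estimates would be smoothed over time and combined with vowel-quality features (e.g., formants) for two-dimensional direction control; this sketch only shows the basic parameter-to-motion pipeline.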