I have been serving as co-PI on a collaborative NSF-funded project that applies accent conversion algorithms to computer-assisted pronunciation training (CAPT). Research shows that it is easier for learners to imitate a voice similar to their own. With accent conversion, instead of trying to imitate a native speaker (whose voice quality may be very different from the learner's own), learners can listen to their own voice, computationally modified (re-synthesized) so that it carries a native U.S. accent. We call this re-synthesized voice a Golden Speaker. While in my other projects I have taken the lead on conceptualizing and developing technology, on this project I focus on the design and implementation of learner-centered research studies.
This project is a collaboration with John Levis' Pronunciation Research Group at Iowa State University and Ricardo Gutierrez-Osuna's PSI Lab at Texas A&M University.