Pronunciation Research
Adaptive high-variability phonetic training.
Accent conversion for pronunciation learning.
Pronunciation has recently regained attention in second-language pedagogy after a long period of relative neglect. Together with my colleagues and students, I have been working on new types of computer-assisted pronunciation teaching (CAPT) technology.
Research Findings
My work on CAPT has focused on combining adaptive learner modeling with the high-variability approach to pronunciation training. This work has resulted in a system for segmental perceptual training that automatically generates discrimination and identification perceptual exercises customized for individual learners. Paper [1] presents an experiment with a pre-/post-test design involving 32 adult Russian-speaking learners of English as a foreign language. The participants' perceptual gains transferred to novel voices but not to untrained words; the paper discusses potential factors underlying the absence of word-level transfer.
1. Qian, M.*, Chukharev-Hudilainen, E., & Levis, J. (forthcoming). A system for adaptive high variability segmental perceptual training: implementation, effectiveness, and transfer. Language Learning & Technology.
In her doctoral dissertation (forthcoming), my student Manman (Mandy) Qian has optimized the parameters of the algorithm that generates these exercises. The improved version of the system has been demonstrated to foster word-level and sentence-level transfer of perceptual gains.
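To make the idea of adaptive high-variability exercise generation concrete, here is a deliberately simplified sketch. All names, word lists, and the error-count heuristic below are illustrative assumptions for exposition only, not the actual Linguatorium algorithm or the parameters optimized in the dissertation:

```python
import random

# Hypothetical minimal pairs for contrasts that are often difficult for
# L1-Russian learners of English (illustrative examples only).
MINIMAL_PAIRS = {
    ("i", "ɪ"): [("sheep", "ship"), ("leave", "live")],
    ("æ", "e"): [("bad", "bed"), ("man", "men")],
    ("v", "w"): [("vest", "west"), ("vine", "wine")],
}

# Multiple talkers per item: the "high-variability" component.
TALKERS = ["talker_1", "talker_2", "talker_3", "talker_4"]


def select_exercise(error_counts, rng=random):
    """Pick the contrast with the most recorded errors (adaptive part),
    then a random minimal pair and a random talker (variability part)."""
    contrast = max(MINIMAL_PAIRS, key=lambda c: error_counts.get(c, 0))
    pair = rng.choice(MINIMAL_PAIRS[contrast])
    talker = rng.choice(TALKERS)  # vary the voice from trial to trial
    return {"contrast": contrast, "pair": pair, "talker": talker}


def update_model(error_counts, contrast, correct):
    """Increment the error count when the learner misidentifies a word."""
    if not correct:
        error_counts[contrast] = error_counts.get(contrast, 0) + 1
    return error_counts


# Simulated session: the learner keeps confusing /i/ and /ɪ/,
# so the next exercise targets that contrast.
errors = {("i", "ɪ"): 3, ("æ", "e"): 1}
exercise = select_exercise(errors)
```

The real system adapts along more dimensions than a single error count, but the sketch shows the two ingredients named above: learner-specific item selection and talker variability within the training set.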

AAAL 2017.
Left to right: Mandy Qian, Dr. John Levis, Dr. Evgeny Chukharev-Hudilainen
Current Funding
I have been serving as a co-PI on a collaborative NSF-funded project that seeks to apply accent conversion algorithms to CAPT. Research shows that it is easier for learners to imitate a voice that is similar to their own. With accent conversion, instead of trying to imitate a native speaker (whose voice quality might be very different from the learner's own), learners can listen to their own voice, computationally modified (re-synthesized) so that it carries a native U.S. accent. We call this re-synthesized voice a Golden Speaker. While in my other projects I have taken the lead on conceptualizing and developing technology, on this project I focus on the design and implementation of learner-focused research studies.

This project is in collaboration with John Levis' Pronunciation Research Group at Iowa State University and Ricardo Gutierrez-Osuna's PSI Lab at Texas A&M University.
Gutierrez-Osuna, R. (PI), Levis, J. (PI), & Chukharev-Hudilainen, E. (Co-PI) (2016–2019). EXP: Collaborative Research: Perception and production in second language: the roles of voice variability and familiarity. National Science Foundation award Nos. 1623622 & 1623750, $565,647.
The Golden Speaker project team, 2016 (PSI Lab, Texas A&M University, College Station, TX).

Left to right: Shaojin Ding, Guanlong Zhao, Chris Liberatore, Dr. John Levis, Dr. Evgeny Chukharev-Hudilainen, Dr. Ricardo Gutierrez-Osuna.

Software
The outcomes of my research on adaptive high-variability phonetic training are implemented in a web-based tool, Linguatorium Auris, which is available through the A. A. Hudyakov Center for Linguistic Research.

The Golden Speaker software is under development and not available to the public.