Recognition of speech sounds is accomplished through the use of adjacent sounds in time, in what is termed acoustic context. The frequency and temporal properties of these contextual sounds play a large role in the recognition of human speech. Historically, most research on speech perception, and on sound perception in general, has examined sounds out of context, i.e., presented individually. Further, these studies have been conducted independently of one another, with little connection across laboratories or across sounds. These approaches slow progress in understanding how listeners with hearing difficulties use context to recognize speech and how their hearing aids and/or cochlear implants might be modified to improve their perception. This research has three main goals. First, the investigators predict that performance in speech sound recognition experiments will be related when testing the same speech frequencies or the same moments in time, but that performance will not be related in comparisons across different speech frequencies or different moments in time. Second, the investigators predict that adding background noise will make this contextual speech perception more difficult, and that these difficulties will be more severe for listeners with hearing loss. Third, the investigators predict that cochlear implant users will also use surrounding sounds in their speech recognition, but with key differences from listeners with healthy hearing owing to the sound processing performed by their implants. In tandem with these goals, the investigators will use computer models to simulate how neurons respond to speech sounds individually and when surrounded by other sounds.
Hearing, Hearing Loss
Perception of Speech in Context by Listeners With Healthy and Impaired Hearing
University of Louisville, Louisville, Kentucky, United States, 40292
University of Minnesota, Minneapolis, Minnesota, United States, 55455
Ages Eligible for Study: 18 Years to 65 Years
Sexes Eligible for Study: All
Accepts Healthy Volunteers: Yes
University of Louisville
Christian Stilp, PhD, Principal Investigator, University of Louisville
2027-07-31