Stork: Individual differences in cochlear implant users' audiovisual integration and links to speech proficiency

ILIZA MYERS BUTERA (2016-09-01 to 2019-08-31) Individual differences in cochlear implant users' audiovisual integration and links to speech proficiency. Amount: $86136



Cochlear implants (CIs) allow those with profound hearing loss to experience sound, some of them for the first time. This highly successful neuroprosthetic device can drastically improve speech comprehension for some individuals; however, postoperative speech proficiency remains highly variable and difficult to predict. Although visual orofacial articulations play a crucial role in verbal communication both before and after cochlear implantation, clinical measures assessing implant candidacy and monitoring postoperative performance are currently limited to auditory-only speech measures. As a result, current assessments may provide only a partial picture of aural rehabilitation with a CI. This proposal asserts that the degraded sound provided by a CI presents a unique computational challenge for the central nervous system and that, for speech in particular, visual information is almost certainly recruited to increase comprehension. We believe that extending performance assessments to include the visual domain increases the ecological validity of speech intelligibility measures and may also reveal an additional variable in successful outcomes: the integration of sensory streams to achieve multisensory enhancement. Our first aim is to characterize audiovisual integration in a cohort of postlingually deafened CI users. This includes unisensory and multisensory speech perception at the phoneme and word level. Doing so enables us to relate illusory tasks (e.g., the McGurk effect) to proficiency at comprehending words embedded in speech-like noise, a challenging listening environment for nearly all CI users and one where effective sensory integration is key. Our second aim is to investigate the neural basis of the variability in audiovisual integration seen in both our preliminary work and the literature. Functional near-infrared spectroscopy (fNIRS) is a noninvasive imaging technique that is safe for all CI users. Hemodynamic responses measured with this optical imaging technique will allow us to determine whether auditory, visual, and audiovisual stimuli activate the temporal lobe differently between 1) CI users and normal-hearing (NH) controls, 2) proficient and non-proficient CI users, and 3) McGurk illusion perceivers and non-perceivers. The goal of this proposal is to better understand atypical audiovisual integration and how it relates to variability in both neural processing and speech comprehension in CI users. This knowledge is essential for our understanding of speech proficiency with a CI and, most importantly, for how users can best utilize all sensory information to enhance intelligibility and improve quality of life.


Grant: F31DC015956 (NIDCD).