Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility.

Jacie R McHaney, Kenneth E Hancock, Daniel B Polley, Aravindakshan Parthasarathy
Author Information
  1. Jacie R McHaney: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA.
  2. Kenneth E Hancock: Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, 02115, USA.
  3. Daniel B Polley: Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA, 02115, USA.
  4. Aravindakshan Parthasarathy: Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA. aravind_partha@pitt.edu.

Abstract

Multi-talker speech intelligibility requires successful separation of target speech from background speech. Successful speech segregation relies on the fidelity of bottom-up neural coding of sensory information and on top-down effortful listening. Here, we studied how temporal processing, measured using envelope following responses (EFRs) to amplitude-modulated tones, interacts with pupil-indexed listening effort in relation to performance on the Quick Speech-in-Noise (QuickSIN) test in normal-hearing adults. Listening effort increased at the more difficult signal-to-noise ratios, but speech intelligibility decreased only at the hardest signal-to-noise ratio. Neither pupil-indexed listening effort nor EFRs independently related to QuickSIN performance; however, the combined effects of EFRs and listening effort explained significant variance in QuickSIN performance. Our results suggest a synergistic interaction between sensory coding and listening effort in multi-talker speech intelligibility. These findings can inform the development of next-generation, multi-dimensional approaches for testing speech intelligibility deficits in listeners with normal hearing.
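
The pattern summarized above (neither measure alone predicts QuickSIN performance, but the two together do) amounts to comparing single-predictor regressions against a combined model that includes their interaction. The sketch below illustrates that model-comparison logic on synthetic data; the variable names (efr, pupil, quicksin), sample size, and effect sizes are hypothetical, and this is not the authors' actual analysis pipeline.

    # Minimal sketch of the nested-model comparison implied by the abstract.
    # Synthetic data and hypothetical variable names; not the authors' pipeline.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 40                                  # hypothetical sample size
    efr = rng.standard_normal(n)            # EFR amplitude (standardized)
    pupil = rng.standard_normal(n)          # pupil-indexed effort (standardized)
    # Simulate a purely synergistic outcome: neither predictor carries a
    # marginal effect, but their product does.
    quicksin = 0.6 * efr * pupil + rng.standard_normal(n)

    # Single-predictor models ("EFRs alone", "effort alone")
    m_efr = sm.OLS(quicksin, sm.add_constant(efr)).fit()
    m_pupil = sm.OLS(quicksin, sm.add_constant(pupil)).fit()

    # Combined model: both predictors plus their interaction
    X_full = sm.add_constant(np.column_stack([efr, pupil, efr * pupil]))
    m_full = sm.OLS(quicksin, X_full).fit()

    print(f"EFR alone:   R^2 = {m_efr.rsquared:.3f}, p = {m_efr.f_pvalue:.3g}")
    print(f"Pupil alone: R^2 = {m_pupil.rsquared:.3f}, p = {m_pupil.f_pvalue:.3g}")
    print(f"Combined:    R^2 = {m_full.rsquared:.3f}, p = {m_full.f_pvalue:.3g}")

On most random seeds the two single-predictor models show near-zero R-squared while the combined model reaches significance, mirroring the reported pattern; exact values depend on the synthetic data.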

Keywords

Cognitive load; Frequency following responses; Pupillometry; Speech-in-noise

Grants

  1. R21 DC018882/NIDCD NIH HHS
  2. T32 DC011499/NIDCD NIH HHS
  3. F31 DC020085/NIDCD NIH HHS
  4. P50 DC015817/NIH HHS

MeSH Terms

Humans
Speech Intelligibility
Female
Male
Adult
Speech Perception
Young Adult
Pupil
Noise
Signal-To-Noise Ratio
Acoustic Stimulation
