Decoding of Envelope vs. Fundamental Frequency During Complex Auditory Stream Segregation.

Keelin M Greenlaw, Sebastian Puschmann, Emily B J Coffey
Author Information
  1. Keelin M Greenlaw: Department of Psychology, Concordia University, Montreal, QC, Canada.
  2. Sebastian Puschmann: Institute of Psychology, University of Lübeck, Lübeck, Germany.
  3. Emily B J Coffey: Department of Psychology, Concordia University, Montreal, QC, Canada.

Abstract

Hearing-in-noise perception is a challenging task that is critical to human function, but how the brain accomplishes it is not well understood. A candidate mechanism proposes that the neural representation of an attended auditory stream is enhanced relative to background sound via a combination of bottom-up and top-down mechanisms. To date, few studies have compared neural representation and its task-related enhancement across frequency bands that carry different auditory information, such as a sound's amplitude envelope (i.e., syllabic rate or rhythm; 1-9 Hz) and the fundamental frequency of periodic stimuli (i.e., pitch; >40 Hz). Furthermore, hearing-in-noise in the real world is frequently both messier and richer than the majority of tasks used in its study. In the present study, we use continuous sound excerpts that simultaneously offer predictive, visual, and spatial cues to help listeners separate the target from four acoustically similar, simultaneously presented sound streams. We show that while both lower- and higher-frequency information about the entire sound stream is represented in the brain's response, the to-be-attended sound stream is strongly enhanced only in the slower, lower-frequency sound representations. These results are consistent with the hypothesis that attended sound representations are strengthened progressively at higher-level, later processing stages, and that the interaction of multiple brain systems can aid in this process. Our findings contribute to our understanding of auditory stream separation in difficult, naturalistic listening conditions and demonstrate that pitch and envelope information can be decoded from single-channel EEG data.
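For orientation, the sketch below illustrates the kind of band-specific stimulus-reconstruction (backward-model) analysis the abstract describes, using numpy/scipy and synthetic signals in place of real recordings. It is not the authors' pipeline: the simulated data, the 80-300 Hz stand-in for the >40 Hz fundamental-frequency range, the 100 ms neural delay, the lag range, and the ridge parameter are all illustrative assumptions.

```python
# Minimal sketch: band-specific stimulus reconstruction from single-channel EEG.
# Synthetic data stand in for real recordings; all parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 1000                                # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)             # 60 s of simulated signal
rng = np.random.default_rng(0)

# Simulated stimulus: a 150 Hz "fundamental" carried under a slow (~4 Hz) envelope.
envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)
stimulus = envelope * np.sin(2 * np.pi * 150 * t)

# Simulated EEG: weak, delayed copies of both features buried in noise.
delay = int(0.1 * fs)                    # 100 ms neural delay (assumed)
eeg = (0.3 * np.roll(envelope, delay)
       + 0.2 * np.roll(stimulus, delay)
       + rng.standard_normal(t.size))

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass (second-order sections for stability)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def reconstruction_r(eeg_band, target, fs, max_lag_s=0.15, ridge=1e3):
    """Backward (decoding) model: ridge-regress the stimulus feature onto
    time-lagged copies of the EEG and return the reconstruction's Pearson r."""
    lags = np.arange(0, int(max_lag_s * fs) + 1, 2)       # 0-150 ms, 2 ms steps
    X = np.column_stack([np.roll(eeg_band, -lag) for lag in lags])
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ target)
    return np.corrcoef(X @ w, target)[0, 1]

# Envelope band (1-9 Hz): reconstruct the stimulus amplitude envelope.
stim_env = np.abs(hilbert(stimulus))     # Hilbert envelope of the stimulus
r_env = reconstruction_r(bandpass(eeg, 1, 9, fs), bandpass(stim_env, 1, 9, fs), fs)

# F0 band (>40 Hz; 80-300 Hz here): reconstruct the stimulus waveform itself.
r_f0 = reconstruction_r(bandpass(eeg, 80, 300, fs), bandpass(stimulus, 80, 300, fs), fs)

print(f"envelope-band reconstruction r = {r_env:.3f}")
print(f"F0-band reconstruction r       = {r_f0:.3f}")
```

In analyses of this kind, attentional enhancement is indexed by reconstructing the attended and ignored streams separately, within each frequency band, and comparing the resulting correlations.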

Keywords

decoding; reconstruction; segregation; speech-in-noise