Unsupervised repetition enables rapid perceptual learning.

Vahid Montazeri, Michelle R Kapolowicz, Peter F Assmann
Author Information
  1. Vahid Montazeri: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080, USA.
  2. Michelle R Kapolowicz: Center for Hearing Research, University of California, Irvine, California 92697, USA.
  3. Peter F Assmann: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080, USA.

Abstract

This study examined how listeners disambiguate an auditory scene comprising multiple competing unknown sources and identify a salient source. Experiment 1 replicated findings from McDermott, Wrobleski, and Oxenham [(2011). Proc. Natl. Acad. Sci. U.S.A. 108(3), 1188-1193], using a multivariate Gaussian model to generate mixtures of two novel sounds. The results showed that listeners were unable to identify either sound in the mixture despite repeated exposure, unless one sound was repeated several times while being mixed with a different distractor each time. The results support the idea that repetition provides a basis for segregating a single source from competing novel sounds. In subsequent experiments, the identification task was extended to a recognition task and the results were modeled. To confirm the repetition benefit, experiment 2 asked listeners to recognize a temporal ramp in either a repeating sound or non-repeating sounds. The results showed that the perceptual salience of the repeating sound allowed robust recognition of its temporal ramp, whereas similar features were ignored in the non-repeating sounds. The responses of two neural models of learning, generalized Hebbian learning and anti-Hebbian learning, were compared with the human listener results from experiment 2. The Hebbian network showed a response pattern similar to the listener results, whereas the opposite pattern was observed for the anti-Hebbian output.
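The abstract does not give implementation details, but its two methodological ingredients can be illustrated with a minimal sketch: novel sounds drawn from a multivariate Gaussian and mixed so that one target repeats across mixtures with a changing distractor, and generalized Hebbian versus anti-Hebbian weight updates driven by those mixtures. Everything below (feature dimensionality, covariance shape, learning rates, number of mixtures) is an illustrative assumption, not the authors' code or parameters.

```python
# Minimal sketch (illustrative assumptions throughout, not the study's code).
import numpy as np

rng = np.random.default_rng(0)

n_features = 64    # assumed size of a spectro-temporal feature vector
n_mixtures = 200   # assumed number of repeating-target mixtures
eta = 1e-3         # assumed learning rate

# (1) Novel sounds as draws from a multivariate Gaussian with a smooth
#     (correlated) covariance, so neighboring features covary as in spectra.
idx = np.arange(n_features)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 8.0)
target = rng.multivariate_normal(np.zeros(n_features), cov)            # repeats
distractors = rng.multivariate_normal(np.zeros(n_features), cov,
                                      size=n_mixtures)                  # change each time
mixtures = target + distractors                                          # sum of two sources

# (2a) Generalized Hebbian (Oja-style) update: the weight vector grows toward
#      structure that recurs across mixtures -- here, the repeated target --
#      while the normalization term keeps it bounded.
w_hebb = rng.standard_normal(n_features) * 0.01
for _ in range(20):
    for x in mixtures:
        y = w_hebb @ x
        w_hebb += eta * y * (x - y * w_hebb)

# (2b) Anti-Hebbian update: the correlational term enters with opposite sign,
#      so consistently co-occurring (repeated) structure is suppressed rather
#      than strengthened.
w_anti = rng.standard_normal(n_features) * 0.01
for _ in range(20):
    for x in mixtures:
        y = w_anti @ x
        w_anti -= eta * y * x
        w_anti /= np.linalg.norm(w_anti)   # renormalize to avoid collapse

def alignment(w, s):
    """Absolute cosine similarity between a weight vector and a source."""
    return abs(w @ s) / (np.linalg.norm(w) * np.linalg.norm(s))

print("Hebbian alignment with repeated target:      %.2f" % alignment(w_hebb, target))
print("anti-Hebbian alignment with repeated target: %.2f" % alignment(w_anti, target))
```

Under these assumptions, the Hebbian weights end up aligned with the repeated target while the anti-Hebbian weights do not, mirroring the qualitative contrast the abstract reports between the two model outputs.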

MeSH Terms

Acoustic Stimulation
Auditory Perception
Humans
Learning
Sound
Sound Spectrography

