Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

H E Harley, H L Roitblat, P E Nachtigall
Author Information
  1. H E Harley: Social Sciences Division, New College, University of South Florida, Sarasota 34243. harley@virtu.sar.usf.edu.

Abstract

A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

MeSH Terms

Animals
Dolphins
Echolocation
Female
Learning
Photic Stimulation
Visual Pathways
Visual Perception
