Authors
Amir Amedi 1,
William Stern 2,
Lotfi Merabet 2,
Ella Striem 1,
Uri Hertz 1,
Peter Meijer 3,
and Alvaro Pascual-Leone 2.
1 Department of Physiology and Program of Cognitive Science, Hebrew University, Jerusalem, Israel.
2 Berenson-Allen Center for Noninvasive Brain Stimulation, Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, USA.
3 Developer of The vOICe, Eindhoven, The Netherlands.
Audio-visual integration for objects, location and low-level dynamic stimuli:
novel insights from studying sensory substitution and topographical mapping
Abstract
The talk will present fMRI and behavioral experiments on auditory-visual
integration in humans. It will focus on integration in sighted
individuals but also in a sight-restoration setting, examining the
effects of learning, brain development, and brain plasticity. New
findings regarding the nature of sensory representations for dynamic
stimuli, ranging from pure tones to complex, natural object sounds, will
be presented. I will highlight the use of sensory substitution devices
(SSDs) in the context of blindness.
In SSDs, visual information captured by an artificial receptor is
conveyed to the brain through a non-visual sensory modality.
visual-to-auditory SSD called "The vOICe", we find that blind users
achieve successful performance on object recognition tasks, accompanied
by specific recruitment of ventral and dorsal 'visual' structures.
Comparable recruitment was also observed in sighted subjects learning to
use this device, but not in sighted subjects learning arbitrary
associations between sounds and object identity.
Using phase-locked Fourier techniques, we also find an array of
topographic maps that can serve as a basis for such audio-visual
integration. Finally, these results suggest that "The vOICe" can be
useful in blind individuals' daily activities, and that it may also
serve to 'guide' visual cortex in interpreting visual information
arriving from a prosthesis.
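As a rough illustration of the visual-to-auditory principle described above, the sketch below encodes a grayscale image as sound using a vOICe-style mapping: columns are scanned left to right over time, vertical position sets pitch (top rows higher), and pixel brightness sets loudness. The function name, frequency range, and sample rate are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=8000, f_lo=500.0, f_hi=5000.0):
    """Map a 2-D grayscale image (rows x cols, values in 0..1) to a mono
    waveform: columns are scanned left to right over `duration` seconds,
    row index sets sinusoid frequency (top = highest), and brightness
    sets amplitude. Parameter values are illustrative only."""
    rows, cols = image.shape
    freqs = np.geomspace(f_hi, f_lo, rows)      # top row -> highest pitch
    samples_per_col = int(duration * sr / cols)
    t = np.arange(samples_per_col) / sr
    chunks = []
    for c in range(cols):
        col = image[:, c]                       # brightness per row
        # superpose one sinusoid per row, weighted by that row's brightness
        chunk = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(chunk)
    wave = np.concatenate(chunks)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave    # normalize to [-1, 1]

# Example: a single bright dot near the upper-left of an 8x8 image
# produces a brief high-pitched tone early in the sound sweep.
img = np.zeros((8, 8))
img[1, 1] = 1.0
wave = image_to_sound(img)
```

A listener hears the dot's horizontal position as timing within the sweep and its vertical position as pitch, which is the core intuition behind the audio-visual correspondences studied in the talk.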