COVOPRIM
COVOPRIM is a research project entitled “Comparative Studies of Voice Perception in Primates”, funded in 2018 by an Advanced Grant of the European Research Council.
Vocalizations produced by members of our own species (conspecific vocalizations) are the most important sounds in the auditory landscape of primates, human or not.
We have evolved sophisticated neural mechanisms to extract and exploit, at expert level, the wealth of information they carry in order to optimize our social interactions.
Project COVOPRIM aims to reconstruct the recent evolutionary history of voice perception by comparing its perceptual and neuronal mechanisms in human and non-human primates.
Similarities in voice processing between species could be evidence of cerebral mechanisms inherited from their last common ancestor, several tens of millions of years ago. Differences could help us understand how speech emerged in the last few hundred thousand years.

We investigate the cerebral mechanisms of voice perception in human and non-human primates

- Humans are the only speaking species, but many animals, particularly other primates, rely on vocalizations in their social interactions and appear expert at analyzing voice. Like us, they can recognize that a sound is a voice.
- Despite the uniqueness of speech, do other primates share with us cerebral mechanisms for processing voice information? And if so, what are those common mechanisms?
- Answering this question is not only important for a better understanding of the evolutionary history of the human brain and of how speech and language emerged; it will also contribute to better treatments for communication disorders and to the next generation of brain-computer interfaces.
In project COVOPRIM we investigate the perceptual and cerebral mechanisms of voice perception in humans and three other primate species: baboons, macaques, and marmosets.
Our scientific approach

- Most research so far on the cerebral processing of auditory information in humans has focused on speech, the very thing that makes us different from other species. And for understandable reasons: our unique use of vocalizations to convey meaning has been a major tool in our conquest of the planet, and is crucial in our everyday social interactions.
- But this focus on speech has made it hard to understand on what cerebral basis speech perception evolved in our recent ancestors when they started speaking a few hundred thousand years ago. What cerebral mechanisms do we actually share with other primates when it comes to extracting and analyzing voice information?
- In project COVOPRIM we take a comparative approach: we compare the perceptual and neural mechanisms of voice perception in humans and non-human primates using the same methods. Similarities between species could indicate that these mechanisms were already present, in some rudimentary form, in the last common ancestor of humans and macaques or marmosets, several tens of millions of years ago.
- The overarching hypothesis behind project COVOPRIM is that the primate brain contains a ‘voice patch system’: a set of interconnected groups of neurons tuned to vocalizations that support progressively more abstract representations of the vocal input, similar to the face patch system of the visual cortex.
- Project COVOPRIM is organized into three work packages:
- In WP1, participants perform behavioural experiments using automated testing systems (electronic games for monkeys) that they can access at will. This allows participants to remain in their social group while performing thousands of trials of voice perception experiments, for which they are rewarded with food (wheat grains for macaques and baboons, gum arabic for marmosets). The same experiments are conducted across species for comparison (see the sketch after this list).
- In WP2, participants are scanned using functional magnetic resonance imaging (fMRI), a non-invasive method for measuring brain activity with high spatial precision. Human and monkey participants are scanned using the same scanner and protocol.
- In WP3, a few selected individuals are surgically implanted with multi-electrode arrays in the voice patches of their brain, to analyse voice information processing at the single-neuron scale.
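
To give a concrete flavour of WP1, here is a minimal sketch of what a single self-paced trial could look like, written as a go/no-go voice detection task. Everything in it is a hypothetical illustration, not the actual COVOPRIM software: the stimulus names and the functions `play_sound`, `wait_for_touch` and `deliver_reward` are placeholders, simulated here so the script runs as-is.

```python
import random

# Hypothetical sketch of one go/no-go trial: touch the screen after a
# voice, withhold after a non-voice. Hardware calls are simulated.

VOICE = ["conspecific_call_01.wav", "conspecific_call_02.wav"]
NON_VOICE = ["rain.wav", "branch_snap.wav"]

def play_sound(filename):
    print(f"playing {filename}")

def wait_for_touch(timeout_s=3.0):
    # Simulated subject: touches the screen on ~70% of trials.
    return random.random() < 0.7

def deliver_reward():
    print("reward delivered (e.g. a few wheat grains)")

def run_trial():
    is_voice = random.random() < 0.5
    play_sound(random.choice(VOICE if is_voice else NON_VOICE))
    touched = wait_for_touch()
    correct = touched == is_voice      # go on voice, no-go on non-voice
    if is_voice and correct:
        deliver_reward()               # only hits are rewarded here
    return is_voice, touched, correct

if __name__ == "__main__":
    results = [run_trial() for _ in range(10)]
    accuracy = sum(c for _, _, c in results) / len(results)
    print(f"accuracy over {len(results)} trials: {accuracy:.0%}")
```

In the real testing systems, `wait_for_touch` and `deliver_reward` would talk to a touch screen and a food dispenser; the point of the sketch is simply that each trial is short, self-paced and automatically logged, which is what lets animals accumulate thousands of trials without leaving their group.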
