It is now possible to predict an individual’s perceptual ability in just a few seconds. The approach combines brain imaging and artificial intelligence techniques to model how the brains of facial recognition experts work. The resulting models could have applications in fields such as security and health. This initiative is pushing back the frontiers between human intelligence, deep learning and the brain.
Do you have a knack for recognizing the faces of people around you? Or, on the contrary, do you find it difficult to follow the story of a movie because you get the actors mixed up? Facial recognition ability varies considerably from one person to another. Some individuals, known as super-recognizers, can recognize a face they saw on the street only once, years ago. Others have prosopagnosia, or “face blindness,” meaning they are unable—even if they have perfect eyesight and typical intelligence—to recognize the faces of their colleagues, friends and even relatives.
My research project seeks to identify the cerebral mechanisms behind these extreme variations in perceptual ability, using artificial intelligence (AI) and brain imaging. Our team, including the labs headed by Professors Frédéric Gosselin and Ian Charest, as well as international collaborators, started off with a simple question: Can an individual’s facial recognition ability be determined from their brain alone? The answer could improve quality of life for prosopagnosics, for example by enabling rapid, objective diagnosis, and could make it easier to identify facial recognition experts for certain security jobs (police forces, border control and the like).
To find answers, I recruited super-recognizers, those rare individuals who make up the top 2% of the population for facial recognition ability, in the U.K. and Switzerland. To probe their brains, I used high-density electroencephalography, or hdEEG, a non-invasive imaging technique that produces real-time maps of brain activation. I recorded more than 100,000 cerebral activation maps from super-recognizers and “typical” individuals while they viewed various images (e.g., expressive or neutral faces, animals, everyday objects). I then used those maps to build machine learning models that predict, with 80% accuracy, whether someone is a super-recognizer from a brain recording lasting just one second!
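For readers curious about the mechanics of this decoding step, here is a minimal sketch of the general idea. It uses simulated data and a simple logistic regression classifier standing in for our actual models; the data shapes (128 channels, one-second epochs) and all variable names are illustrative assumptions, not our real pipeline.

```python
# A minimal sketch (not the study's actual pipeline) of decoding group
# membership from one-second hdEEG epochs with scikit-learn.
# Assumed data: X has shape (n_trials, n_channels, n_timepoints),
# e.g. 1 s of 128-channel EEG; y marks super-recognizers (1) vs typical (0).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_timepoints = 200, 128, 256  # hypothetical sizes
X = rng.standard_normal((n_trials, n_channels, n_timepoints))
y = rng.integers(0, 2, n_trials)  # 1 = super-recognizer, 0 = typical

# Flatten each one-second epoch into a feature vector, standardize the
# features, then classify and estimate accuracy with cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X.reshape(n_trials, -1), y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real EEG recordings in place of the random arrays, the cross-validated accuracy is what tells you how reliably a one-second brain recording reveals whether its owner is a super-recognizer.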
I then asked myself how this ability manifests itself in a “superbrain.” To answer that question, I took advantage of recent advances in the fields of AI and neuroscience. We already know that deep artificial neural networks can perform complex human tasks, like recognizing objects or faces, as well as or better than humans. Recent research in cognitive and computational neuroscience has even shown empirically that the operations these networks perform correspond to those of the visual areas of the human brain. The link between that brain-AI correspondence and human perceptual behaviour, however, had never been convincingly demonstrated.
This research suggested that parallels could be drawn between the operations performed by these artificial networks and significant variations in human perceptual behaviour, such as those observed in super-recognizers. Given that super-recognizers perform complex visual tasks more efficiently, their brains may work more like these high-performing artificial neural networks. To test that idea, I compared human brain function (using my 100,000 cerebral activation maps) to the workings of vision-based artificial neural networks, using a technique called representational similarity analysis, developed by Kriegeskorte et al. (2008). I performed that brain-AI comparison separately for the super-recognizer participants and the typical participants. The result: at an early stage of brain processing, the operations performed by super-recognizers’ brains are indeed more like those of vision-based artificial neural networks than are the operations performed by typical brains.
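To make representational similarity analysis concrete, here is a minimal sketch of its core computation. The data are simulated and the sizes (49 images, 128 EEG channels, 4,096 network units) are hypothetical placeholders; the technique itself, comparing pairwise dissimilarity structures, is the one Kriegeskorte et al. (2008) describe.

```python
# A minimal sketch of representational similarity analysis (RSA) on
# hypothetical data: one activation pattern per image for the brain
# (e.g., hdEEG topographies at a given latency) and one per image for
# a layer of a vision-based artificial neural network.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 49
brain_patterns = rng.standard_normal((n_images, 128))    # images x channels
dnn_activations = rng.standard_normal((n_images, 4096))  # images x units

# Build representational dissimilarity matrices (RDMs): pairwise
# distances between the responses to every pair of images.
brain_rdm = pdist(brain_patterns, metric="correlation")
dnn_rdm = pdist(dnn_activations, metric="correlation")

# Brain-AI similarity: rank-correlate the two RDMs. Computing this
# separately for each group tests whether super-recognizers' brains
# match the network's representations better than typical brains do.
rho, _ = spearmanr(brain_rdm, dnn_rdm)
print(f"brain-DNN representational similarity: rho = {rho:.2f}")
```

The appeal of this method is that it never forces brain signals and network activations into the same space: each system is summarized by how it distinguishes the same set of images, and only those summaries are compared.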
I then tested a more daring hypothesis: that the semantic information that implicitly emerges when we look at an image (e.g., a tiger is a ferocious animal) may be richer in the brain of a facial recognition expert. To test this, I compared my participants’ cerebral maps to the representations of another, language-based artificial network, the Universal Sentence Encoder, which can predict semantic links among descriptions of images. It would predict, for example, that the description “a ferocious animal” is more similar to “a giraffe on the savannah” than to “an office building downtown.” That comparison led to an exciting discovery: at a late stage of brain processing, the operations performed by the brains of super-recognizers are more similar to those of this semantic artificial neural network than are the operations of typical brains. So if you are a super-recognizer, your brain probably encodes richer visual and semantic information than a typical brain does.
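The example from the previous paragraph can be checked directly. The sketch below loads the publicly released Universal Sentence Encoder from TensorFlow Hub and compares the three descriptions with cosine similarity; the `cosine` helper is ours, and building description RDMs to correlate with brain RDMs (as in the RSA sketch above) is left implicit.

```python
# A sketch of the semantic-similarity idea using the Universal Sentence
# Encoder, loaded from TensorFlow Hub (downloads the model on first run).
import numpy as np
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
sentences = ["a ferocious animal",
             "a giraffe on the savannah",
             "an office building downtown"]
vecs = np.asarray(embed(sentences))  # one 512-dim embedding per sentence

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The article's example: the first description should sit closer to the
# giraffe than to the office building in semantic space.
print("animal vs giraffe: ", cosine(vecs[0], vecs[1]))
print("animal vs building:", cosine(vecs[0], vecs[2]))
```

Embedding every image description this way yields a semantic dissimilarity matrix that can be correlated with cerebral activation maps, exactly as the vision-based networks were compared earlier.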
The models we have developed have several potential applications outside the laboratory. Beyond security, our team is looking to develop neuro-perceptual training systems, delivered through brain-machine interfaces, based on these AI models. Such training could attenuate the incapacitating perceptual disorders that affect everyday life for many people, including those with prosopagnosia, schizophrenia or an autism spectrum disorder.
This article was produced by Simon Faghel-Soubeyrand, doctoral student in cognitive neuroscience in the Department of Psychology at the Université de Montréal, with the guidance of Marie-Paule Primeau, science communication advisor, as part of our “My research project in 800 words” initiative.