The next frontier in health rehabilitation is to adapt interventions and therapies to meet specific individual needs, leading to more personalized treatments and improved quality of life. Data-driven algorithms could improve clinical outcomes for deaf patients undergoing cochlear implantation.
Cochlear implants have allowed the medical community to restore hearing in profoundly deaf individuals; they are the most successful neural prosthesis to date. Unfortunately, current technology does not enable cochlear implants to interface completely with the auditory nervous system, so the signal transmitted from a cochlear implant to the brain is severely degraded compared to natural hearing. Although the technology continues to improve, patient outcomes remain highly variable, and there is often a gap between desired and measured outcomes. By using data-driven approaches in my research, I aim to significantly reduce this critical gap by better characterizing individual differences at the brain and behavioral levels. My research will provide insight into the type of cochlear rehabilitation strategy best suited to a given individual by leveraging the power of artificial intelligence to identify key patterns in patients’ data.
Even after months of intensive rehabilitation, the ability to understand speech varies significantly between cochlear implant users. It is not currently possible to predict before implantation how good a patient’s speech comprehension will be, and for many, comprehension remains limited. Although a large number of implant users can follow a conversation in a quiet room, and some are also able to communicate over the telephone, speech perception in general remains problematic. This includes understanding speech in noisy environments (such as a restaurant or shopping mall), identifying a person from their voice, and perceiving emotions in the voice (e.g., when someone is angry, happy, or sarcastic).
Perceiving emotions from sounds is a particularly complex process. Emotions are wide-ranging and expressed differently each time they are produced, with considerable variation in how they sound. Whereas recognizing auditory emotions comes naturally to most people with normal hearing, it is very challenging and sometimes impossible for cochlear implant users, even those with good overall speech comprehension. In everyday life, failing to perceive specific emotional expressions may lead to inappropriate behavior and, in turn, impair social relationships. In young children, it may additionally affect coping strategies, that is, how they manage their own emotions. Profound adverse effects therefore extend beyond the realm of hearing, impacting an individual’s development, socio-economic prospects, and general quality of life. A critical step toward addressing the obstacles that cochlear implant users face daily is understanding how brain adaptation to implant technology, together with patient capabilities and strategies, affects real-life outcomes.
My research will help identify the brain mechanisms underlying the impaired auditory processes of cochlear implant users, with an initial focus on the perception of auditory emotions. The first step will be to use a data-driven approach to integrate neuroimaging data (electroencephalography) and behavioral data, such as the users’ emotion perception abilities, with other relevant information that patients have chosen to share from their medical charts. From these combined datasets, carefully selected machine learning algorithms will help identify the features that most accurately predict a patient’s outcomes.
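To give a concrete sense of what this kind of analysis involves, the short sketch below illustrates one way such a feature-ranking step could be set up. It is a minimal, illustrative example only: the feature names, the synthetic patient data, and the choice of a random-forest model are assumptions made for the sketch and do not describe the project’s actual pipeline.

```python
# Illustrative sketch of a feature-ranking analysis (assumptions only:
# feature names, synthetic data, and model choice are placeholders).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 120  # hypothetical cohort size

# Hypothetical per-patient features: EEG-derived measures, behavioral scores,
# and variables patients agreed to share from their medical charts.
X = pd.DataFrame({
    "eeg_n100_amplitude": rng.normal(0, 1, n_patients),
    "eeg_p300_latency": rng.normal(0, 1, n_patients),
    "pitch_discrimination": rng.normal(0, 1, n_patients),
    "duration_of_deafness_years": rng.uniform(0, 30, n_patients),
    "age_at_implantation": rng.uniform(1, 70, n_patients),
})

# Hypothetical outcome: emotion-recognition accuracy (% correct),
# simulated here so the example runs on its own.
y = (
    60
    + 8 * X["eeg_n100_amplitude"]
    + 5 * X["pitch_discrimination"]
    - 0.4 * X["duration_of_deafness_years"]
    + rng.normal(0, 5, n_patients)
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Permutation importance on held-out data indicates which features carry
# the most predictive information about the outcome.
result = permutation_importance(model, X_test, y_test, n_repeats=30, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking)
```

The idea is simply that, once neural, behavioral, and chart-derived features sit in a single table, standard tools can rank which of them carry the most predictive information about an outcome such as emotion-recognition accuracy.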
Eventually, the hope is that predictive models will help identify the brain responses associated with particular emotions and the specific acoustic information that cochlear implant users rely on to differentiate emotions successfully. In time, similar models will help establish which combinations of features best predict clinical outcomes beyond those specific to emotion perception (the initial focus of this project). This new understanding of the features responsible for the variability in perception abilities between cochlear implant users will help maximize the potential of implantation and inform personalized interventions. In most cases, even a marginal improvement can translate into tremendous benefits for a patient’s quality of life and socio-professional prospects.
This computational neuroscience project is only a stepping stone toward the implementation of a large-scale data repository for a new cochlear implantation unit that will study many aspects of auditory perception and quality of life in cochlear implant users. It will help establish the basic research protocols needed to support large-scale data processing across all clinical outcomes. Ultimately, this project will lead to the development of custom-built software, providing an easy-to-use tool for measuring these predictive features in the clinic.
This article was produced by Sébastien Paquette, Postdoctoral Intern, Département de Psychologie (Université de Montréal), with the guidance of Marie-Paule Primeau, science communication advisor, as part of our “My research project in 800 words” initiative.