IVADO brings together a world-class research community in the fields of artificial intelligence, life sciences, natural sciences and social sciences, paving the way for robust, reasoning and responsible artificial intelligence developed for the benefit of society and the planet.

It is composed of nearly 1,500 scientists, 14 academic members and over twenty partners actively involved in the R³AI project.

Laureates of our different programs

  • 2024
  • 2023
  • 2022
  • 2021
  • 2020
  • 2019
  • 2018
  • 2017
  • Postdoc-entrepreneur program

    Annie Desmarais

    Supervised by: Arnaud Saj

    Université de Montréal

    Flow is developing a SaaS platform specialized in mental health diagnostic assessment powered by artificial intelligence (AI). The product, the result of doctoral research in neuropsychology, is an intelligent supervisor that helps professionals sort signs and symptoms in the context of differential diagnosis. It is the first conversational assistant powered by cutting-edge algorithms connected to the scientific mental health literature and the patient’s electronic medical record, supporting professionals in their clinical reasoning.

    The project will be developed in the research laboratory of Arnaud Saj, professor of neuropsychology at the Université de Montréal.

    Arthur Ouaknine

    Supervised by: David Rolnick

    McGill University

    Rubisco AI is developing a disruptive technology that combines drone imagery with artificial intelligence to monitor plantations hundreds of times faster and more accurately than manual methods. The technology can detect and segment individual trees, identify their species, and estimate the carbon each one stores, enabling large-scale monitoring and the transparency needed to eliminate fraud and demonstrate the positive impact of forest restoration.

    Postdoctoral research funding

    Faustin Armel Etindele Sosso

    Supervised by: Martin Cousineau

    HEC Montréal

    Perceptions and intentions of Quebec health and social service professionals about artificial intelligence

    The integration of artificial intelligence (AI) into the medical and paramedical professions seems inevitable. However, the perceptions and intentions of Canadian health and social service professionals regarding the technological and organizational improvements resulting from this integration remain poorly understood and very little documented in the scientific literature. This major project will analyze and document the socioeconomic, behavioral and environmental determinants of the future integration of AI into clinical practice, from the perspective of clinicians themselves. It will also explore conceptual differences in AI among health and social service professionals in Quebec.

    Hugo Cossette-Lefebvre

    Supervised by: Jocelyn Maclure

    McGill

    When is AI discriminatory?

    My research project will focus on two questions:

    1. What are the ethical issues related to the use of predictive algorithms in decision-making processes?
    2. What measures need to be put in place to ensure that the rights and freedoms of all are respected despite the use of these technologies?

    My research project will identify how it is possible to ensure the responsible adoption of AI, and refine our understanding of the links between AI and equity, diversity, and inclusion by studying how these new technologies should be regulated to ensure that their use is not discriminatory.

    Ke Sun

    Supervised by: Yichuan Ding

    McGill

    AI-Driven Resilience in Agrifood: Optimizing Operations in Quebec’s Online Grocery Platforms

    Our project focuses on various quality grades of farm products, aiming to address the challenge of substitutions when exact quality requirements are unmet. To model the order fulfillment process, we plan to develop a bipartite queueing system and adapt fluid approximation techniques for analyzing the system dynamics. By utilizing extensive operational data, we aim to develop an AI-driven decision support system dedicated to optimizing service capacity rationing—specifically, farmland allocation—and inventory replenishment strategies.
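
    To make the substitution mechanism concrete, here is a minimal sketch of order fulfillment with grade substitution. The grades, inventories, and fallback rule are invented for illustration and are far simpler than the bipartite queueing model described above.

    ```python
    import random

    # Illustrative quality grades, ordered best to worst; an order for one grade
    # may be substituted with an adjacent grade when its inventory runs out.
    GRADES = ["premium", "standard", "economy"]
    inventory = {"premium": 30, "standard": 50, "economy": 40}
    substitute = {"premium": "standard", "standard": "economy", "economy": None}

    def fulfill(order_grade):
        """Serve an order, falling back to the substitute grade if needed."""
        g = order_grade
        while g is not None:
            if inventory[g] > 0:
                inventory[g] -= 1
                return g
            g = substitute[g]
        return None  # lost sale: no acceptable grade in stock

    random.seed(0)
    orders = [random.choice(GRADES) for _ in range(150)]
    served = [fulfill(g) for g in orders]
    print("exact matches :", sum(s == g for s, g in zip(served, orders)))
    print("substitutions :", sum(s is not None and s != g for s, g in zip(served, orders)))
    print("lost sales    :", served.count(None))
    ```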

    Francisco Berumen Murillo

    Supervised by: Jean-François Carrier

    Centre Hospitalier de l'Université de Montréal, Université de Montréal

    Artificial Intelligence in Brachytherapy: Reliable, Personalized and Enhanced Radiation Therapy

    Radiation medicine has significantly advanced, particularly in precision medicine treatments such as brachytherapy, which offers highly focused radiation doses to tumours while sparing healthy tissue. This research aims to enhance brachytherapy with Artificial Intelligence (AI) algorithms by developing applications that improve treatment accuracy and safety. The initiative focuses on developing a framework for personalized adaptive treatments following the R³AI principles, emphasizing the development of AI that is not only technically advanced but also ethically sound and responsible. This effort actively involves clinical medical physicists, the end-users of the technology, in the prototyping and testing phases.

    Wenmin Zhang

    Supervised by: Guillaume Lettre

    Université de Montréal

    Developing Responsible AI for Genetics-Guided Drug Target Discovery in Cardiovascular Diseases across Diverse Ancestries

    The complex nature of cardiovascular diseases necessitates continuous innovation in the development of novel therapeutic strategies. Human genetics has emerged as a transformative tool for drug target discovery. However, non-European ancestry populations may not benefit from these discoveries due to underrepresentation in existing data resources. Therefore, I aim to develop and apply novel responsible AI methods for identifying causal genetic variants, based on multi-ancestral genome-wide association studies for cardiovascular diseases, and conduct genetics-guided causal inference to identify drug targets for cardiovascular diseases at the transcript, protein, or metabolite levels, focusing on populations of diverse ancestries.

    Michael Catchen

    Supervised by: Timothée Poisot

    Université de Montréal

    Deep learning for species distribution modeling and monitoring

    My postdoc project will apply deep learning methods to answer one of the most fundamental questions about life on Earth: where are species? Species Distribution Models (SDMs) are widely used by ecologists to make decisions about biodiversity conservation, agricultural pest control, and the management of invasive species. My work will focus on several challenges in this domain, including (1) applying modern computer vision tools to SDMs, (2) methods for species representation and transfer learning to fill in gaps for data-deficient species, and (3) semi-supervised methods to optimize the spatial design of biodiversity monitoring programs.
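
    As a minimal illustration of what an SDM is, the sketch below fits a classifier that predicts presence or absence from environmental covariates. The covariates, response, and data are all synthetic assumptions; real pipelines work from occurrence records and raster layers.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Synthetic environmental covariates per site: temperature, precipitation, elevation.
    X = rng.normal(size=(2000, 3))
    # Assumed response: the species prefers warm, wet, low-elevation sites.
    p = 1 / (1 + np.exp(-(1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2])))
    y = rng.binomial(1, p)  # presence (1) / absence (0)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    sdm = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out accuracy:", sdm.score(X_te, y_te))
    # Predicted occurrence probabilities for new sites feed directly into range maps.
    print("P(presence):", sdm.predict_proba(X_te[:3])[:, 1])
    ```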

    Andreea Musulan

    Supervised by: Jean-François Godbout

    Université de Montréal

    In Defense of Electoral Integrity: Detecting Inauthentic Activity

    As inauthentic communication has become an increasingly challenging problem, my research project aims to produce insights into its evolution and detection. Using cutting-edge machine learning, network-based, and temporal approaches to analyzing cross-platform communication, and collaborating with a team of computer and political scientists at the Complex Data Lab, I will contribute to the development of techniques and methods for detecting coordinated online activity. My goal is to help develop tools for policymakers and social media platforms that can help defend electoral integrity in democracies like Canada.

    Antoine Boudreau LeBlanc

    Supervised by: Blake Richards, Evelyne de Leeuw, Anne-Sophie Hulin

    McGill University, Université de Montréal, Université de Sherbrooke

    Ecosystem of Ethics, Governance and Responsibility: Towards a Future Artificial Intelligence for Mental Health

    The adoption of artificial intelligence (AI) in mental health requires the development of trustworthy and responsible governance ecosystems. This project demystifies the interactions between ethics, law and politics to uncover reflexive, adaptive and collaborative modes of management and framing capable of responding to the complexities of AI in research and society. Drawing on comparative scholarship and practical experience in data management, it seeks to advance the concept of governance in order to overcome the challenges surrounding the management of the commons. It will then transfer these lessons to a new legal model that integrates science and ethics with law by default, in the case of mental health.

    Ori Ernst

    Supervised by: Jackie Cheung

    McGill

    Heterogeneous Multi-Document Summarization: Summarizing Implicitly Related Documents

    The increasing abundance of textual information necessitates the development of effective methods for aggregating and utilizing data from multiple sources. While traditional approaches to multi-source setups assumed the presence of predefined collections of related and redundant documents, the reality is that humans often encounter document sets lacking a clear common narrative. In such cases, a preliminary step of document-relation identification becomes essential. To allow research in this area, we propose establishing the “heterogeneous multi-document” task with a dedicated multi-document summarization dataset where the document relation is unclear. We will also release a specific dataset for the document-relation identification task. The availability of these datasets along with new baseline models will extend the summarization task to a more realistic framework.
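
    As a toy illustration of the document-relation identification step, the sketch below groups documents by textual similarity so that each cluster becomes a candidate related set to summarize. The corpus is invented, and TF-IDF stands in for whatever representation the actual task would use.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import AgglomerativeClustering

    docs = [
        "The central bank raised interest rates to curb inflation.",
        "Rate hikes announced by the central bank surprised markets.",
        "A new exoplanet was discovered orbiting a nearby star.",
        "Astronomers confirm the detection of a nearby exoplanet.",
    ]

    # Embed documents (TF-IDF here; a neural encoder is a drop-in replacement)
    # and cluster them so that each cluster is a candidate "related set".
    X = TfidfVectorizer(stop_words="english").fit_transform(docs).toarray()
    labels = AgglomerativeClustering(
        n_clusters=2, metric="cosine", linkage="average"
    ).fit_predict(X)
    for label, doc in zip(labels, docs):
        print(label, doc[:55])
    ```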

    Aude-Marie Marcoux

    Supervised by: Lyse Langlois

    Université Laval

    Collection and Sharing of Personal Data via Smartphones: Results of Citizen Deliberation

    What is acceptable (or not) when AI systems collect and share personal data gathered from individuals’ smartphones? To address this insufficiently investigated question, citizen deliberation will be mobilized as an epistemological and methodological choice in this qualitative, inductive research conducted with a diverse group of citizens. The citizen recommendations arising from the project aim to legitimize, within the societal context of Quebec, political and legal provisions and private-sector guidelines related to the principle of protecting citizens’ privacy in the face of AI intrusion.

    Joy Hutchinson

    Supervised by: Benoît Lamarche

    Université Laval

    Harnessing the potential of machine learning to help promote sustainable dietary patterns

    Improving the sustainability of population dietary patterns has the potential to simultaneously improve human and planetary health. However, many known and unknown factors contribute to whether someone consumes a sustainable diet.
    This project will apply machine learning models to identify the key factors that help people follow sustainable dietary patterns. This project will draw upon an existing cohort of adults in Quebec who participate in the NutriQuébec project.

    This research will provide the insights needed to reduce barriers to sustainable eating and thereby improve human and planetary health.

    Postdoc-entrepreneur program

    Roseline Olory

    Supervised by: Samuel Kadoury

    Polytechnique Montréal

    Project: BrainInnov

    BrainInnov Inc. develops an intelligent early detection system for lung cancer. Lung cancer is the most commonly diagnosed cancer in both men and women and one of the deadliest in Canada (22% five-year survival rate) and worldwide. Around half of all lung cancers are diagnosed too late, considerably reducing the chances of a cure, partly due to a strong unmet need for early detection. An early detection system like ours could increase lung cancer patients’ chances of survival over 5 years by more than 63%.

    Roberto Felipe Salamanca Girón

    Supervised by: Marco Bonizzato

    Polytechnique Montréal

    Project: PhantasiAI

    PhantasiAI develops non-invasive neuroprosthetics that use multiple AI architectures to deliver “symphonic” electrical stimulation capable of influencing neuroplasticity. In this way, the technology aims to provide optimal neuromodulation which, thanks to unprecedented control of arbitrary biomimetic waveforms, can dramatically improve learning and rehabilitation in healthy people and neurological patients.

    Raul Rodriguez Cruces

    Supervised by: Boris Bernhardt

    McGill University

    Project: Brainscores

    Brainscores is an AI-powered collaborative imaging platform dedicated to enhancing the diagnosis and management of epilepsy. While anti-seizure medications are the first line of defense, they fail in one-third of patients, leaving surgery as the last resort. Current presurgical evaluation to identify the seizure source requires EEG monitoring and the acquisition of several brain images (e.g., MRI), as well as feedback from the multiple experts involved in the epilepsy care team. However, negative MRI results occur in 50% of cases due to subtle brain lesions undetectable by human experts, which may affect the accuracy of surgical planning, lead to suboptimal outcomes, and even hinder patients’ chances of undergoing surgery. Brainscores aims to provide a more precise evaluation of epilepsy cases by leveraging AI for increased lesion detection rates, multimodal imaging for more comprehensive patient data representations, and collaboration for multidisciplinary consensus-based decision-making.

    Strategic Research Funding Program

    Topic 1 – Integrated Machine Learning and Optimization for Decision Making under Uncertainty: Towards Robust and Sustainable Supply Chains

    Lead researchers: Erick Delage (HEC Montréal, GERAD), Yossiri Adulyasak (HEC Montréal, GERAD), Emma Frejinger (Université de Montréal, CIRRELT)

    Nearly all decision problems involve some form of uncertainty. This is especially true in supply chains, where high variability in demand, cost, capacity, and travel time considerably complicates the planning of procurement, production, distribution, and service activities. Because environments constantly evolve and data arrive at high frequency, the classical pipeline of training models, validating them, and finally optimizing decisions no longer suffices. This research program aims at developing new methods for making the most effective and adaptive use of data in decision-making. It is founded on modern optimization and machine learning perspectives that encompass developments in deep reinforcement/end-to-end learning, risk-averse decision theory, and contextual/distributionally robust optimization. Its mission is three-fold: (i) develop the next generation of methods for dealing with uncertainty in data-driven, risk-aware optimization models by integrating machine learning; (ii) identify scientifically challenging and high-impact opportunities for improving robustness in supply chains; and (iii) stimulate the adoption of stochastic optimization models among our partners while defining use cases that will guide future methodological advances. Overall, this program envisions a virtuous cycle of scientific discoveries that are both fueled by and transformative for an important sector of the Canadian economy.
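
    The following sketch illustrates, on invented numbers, the kind of integration the program studies: a contextual newsvendor in which a quantile regressor learns the cost-optimal order quantity directly from context, rather than separating model training from decision optimization.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    n = 3000
    X = rng.normal(size=(n, 2))                    # context: e.g., weather, weekday
    demand = np.exp(0.5 * X[:, 0]) * 50 + rng.gamma(2.0, 10.0, n)

    # Newsvendor economics: understocking costs more than overstocking here,
    # so the optimal order is the q-th conditional quantile of demand.
    cu, co = 4.0, 1.0                              # underage / overage costs
    q = cu / (cu + co)                             # critical ratio = 0.8

    # Quantile regression estimates that conditional quantile directly,
    # integrating learning and optimization in one step.
    model = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, demand)
    orders = model.predict(X)                      # in-sample, for brevity

    cost = np.where(demand > orders, cu * (demand - orders), co * (orders - demand))
    print("mean cost with contextual orders:", cost.mean().round(2))
    fixed = np.quantile(demand, q)                 # best non-contextual order
    cost0 = np.where(demand > fixed, cu * (demand - fixed), co * (fixed - demand))
    print("mean cost with a single fixed order:", cost0.mean().round(2))
    ```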

    Topic 2 – AI, Biodiversity and Climate Change

    Lead researchers: Etienne Laliberté (Université de Montréal, IRBV), Christopher Pal (Polytechnique Montréal, Mila), David Rolnick (McGill University, Mila), Oliver Sonnentag (Université de Montréal), Anne Bruneau (Université de Montréal, IRBV)

    Climate change is altering plant biodiversity, with potentially catastrophic consequences for the resilience and functioning of terrestrial ecosystems. A major source of uncertainty in the global terrestrial carbon budget, and thus for future climate projections, is how plant species differ in their phenological responses to seasonal climate fluctuations. In addition, climate change reshuffles plant species distributions across entire landscapes, but we are unable to keep track of those changes in biodiversity using classical field-based sampling. Remote sensing technologies such as phenocams or drones offer the potential to study plant phenology and biodiversity in great detail across spatial scales. These new approaches could revolutionise biodiversity science and conservation, and help guide the design of nature-based solutions essential to mitigating the effects of climate change. New AI algorithms are needed to unlock the full potential of this transformative technology and its links to more traditional data streams and products. This program will develop these new algorithms, building on the most recent developments in computer vision and meta-learning to map plant species and their phenological signatures. Algorithms will be put directly into the hands of scientific and non-scientific end-users via the development of an active learning platform. This AI research will empower researchers and practitioners to turn imagery into actionable data about plant biodiversity and phenology, providing them with tools to help fight biodiversity loss and the effects of climate change.

    Topic 3 – Human health and secondary use of data

    Lead researchers: Michaël Chassé (Université de Montréal, CRCHUM), Nadia Lahrichi (Polytechnique Montréal, CIRRELT), An Tang (Université de Montréal, CRCHUM)

    Artificial intelligence (AI) technologies hold the potential to transform healthcare. These technologies are emergent in logistics and imaging, and hundreds of algorithms are now being developed to help support care delivery. Many challenges remain, however, when it comes to scaling these up for use in the field. One such challenge is ensuring the generalizability of such algorithms. How can we guarantee the effectiveness of a model on a data set whose characteristics differ from those of the data the algorithm was trained on? For example, an algorithm trained using data from a specific population may not perform as well when applied to a different population.

    This program therefore aims to study new methods for improving generalization, and pursues four objectives. First, set up a research environment enabling the study of methods likely to improve generalization in real-world contexts. Second, optimize data flows obtained in real-world healthcare settings to serve algorithm research. Next, investigate specific issues related to algorithm generalization and secondary use of medical data. Lastly, create an open data set that can be used to build upon the research program findings.

    Topic 4 – AI for the discovery of materials and molecules

    Lead researchers: Yoshua Bengio (Université de Montréal, Mila), Michael Tyers (Université de Montréal, IRIC), Mickaël Dollé (Université de Montréal), Lena Simine (McGill University)

    Designing molecules with desired properties is a fundamental problem in drug, vaccine, and material discovery. Traditional approaches to designing a new drug can take over 10 years and cost a billion US dollars. Materials, meanwhile, have historically been developed solely on the basis of their performance characteristics, leading to materials composed of rare, often toxic elements that can inflict significant environmental damage. Artificial intelligence (AI) has the potential to revolutionize drug and material discovery by analyzing the evidence in large amounts of accumulated data and learning how to search the compositional space of molecules, significantly accelerating and improving the process.

    This program aims to build an efficient and effective machine learning framework for searching molecules with designed properties. It will be crucial to build upon, and extend, ongoing collaborations (i) between Mila and IRIC, aimed at optimizing the algorithms to discover new antibiotics and (ii) between Mila and materials experts at McGill and Université de Montréal, on the development of materials with environmental applications like fighting climate change. This multidisciplinary project also raises exciting fundamental challenges in AI regarding learning to search, modeling and sampling complex data structures like graphs, and may have applications to scientific discovery more broadly.

    Topic 5 – Human-centered AI: From Responsible Algorithm Development to Human Adoption of AI

    Lead researchers: Pierre-Majorique Léger (HEC Montréal, Tech3lab), Sylvain Sénécal (HEC Montréal, Tech3lab)

    Human-AI interactions are common nowadays. We interact with artificial intelligence daily in performing many professional and personal tasks. Humans’ adoption of AI, however, is far from automatic, successful or satisfactory. Whether we are citizens, employees or consumers, issues such as bias, lack of trust and even low user satisfaction affect our likelihood of adopting AI in various contexts. To foster adoption, a holistic approach to AI is therefore needed. This multidisciplinary research program is investigating the full cycle of responsible AI development, from inception to adoption by users, putting people at the heart of the process. The goal is to map out guidelines for human-centred AI design using an iterative, multimethod methodological approach led by a multidisciplinary research team.

    Masters excellence scholarships

    Berk Bozkurt

    Supervised by: Aditya Mahajan

    McGill University

    Model-Based Reinforcement Learning for Constrained Markov Decision Processes

    Despite the significant amount of research in the literature on the changes needed to ensure safe exploration in model-based reinforcement learning, research gaps remain that arise from the underlying assumptions and weak performance measures of the proposed methods. Because these gaps prevent the proposed methods from being implemented as real-life engineering solutions, they should be thoroughly investigated.

    Our project aims to study these research gaps closely in order to obtain methods that asymptotically converge to the true model while ensuring safe exploration, and to characterize the regret of these methods, ultimately arriving at models that are closer to real-life implementation.
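
    For intuition, here is a minimal sketch of one standard device for constrained problems, Lagrangian relaxation with dual ascent, on an invented two-action example. The safe model-based methods discussed above are considerably more sophisticated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Invented two-action problem: action 1 earns more reward but incurs more cost.
    reward = {0: 1.0, 1: 2.0}
    cost = {0: 0.1, 1: 0.9}
    budget = 0.5           # constraint: long-run average cost <= 0.5
    lam, eta = 0.0, 0.05   # Lagrange multiplier and its step size

    for _ in range(2000):
        # Act greedily on the Lagrangian objective r(a) - lam * c(a),
        # with a little epsilon-exploration.
        a = max((0, 1), key=lambda x: reward[x] - lam * cost[x])
        if rng.random() < 0.1:
            a = int(rng.integers(2))
        # Dual ascent: raise lam when the constraint is violated, relax it otherwise.
        lam = max(0.0, lam + eta * (cost[a] - budget))

    # lam settles near the indifference point between the two actions, the
    # signature of constrained problems whose optimal policy mixes actions.
    print("final multiplier:", round(lam, 2))
    ```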

    Simon Chamorro

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Sim-to-real transfer for robotic control learning algorithms

    Machine learning is a growing field that is very well suited for the development of robotic control algorithms. More precisely, reinforcement learning is especially relevant for this type of problem, where the robot must perform actions and interact with its environment. However, deep learning algorithms require phenomenal amounts of data as well as a lot of computational resources. These requirements make their training on robotic platforms very difficult. For this reason, the use of simulation platforms for the development and training of neural networks is very important, especially in the field of robotics. Therefore, it becomes essential that the learning performed in simulation be transferable to reality. This represents a great generalization challenge for the algorithms used, and it is an active research topic. The subject of the proposed research is to explore different generalization methods in order to develop algorithms to transfer the knowledge acquired in simulation to a real context. This could have a big impact and push the limits of the intelligence of today’s robots. Robotics can be applied to almost any industry and can automate, accelerate, and democratize services as well as help alleviate labor shortages. Possible applications are autonomous vehicles, assistance robots for the elderly or in hospitals, delivery drones, and many more.
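
    One widely used technique in this space is domain randomization; the sketch below shows the basic pattern of resampling simulator physics every episode, with invented parameter ranges and a stubbed-out rollout.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class SimParams:
        mass: float      # kg
        friction: float  # coefficient
        latency: float   # actuator delay, seconds

    def sample_randomized_params() -> SimParams:
        """Resample simulator physics each episode (domain randomization)."""
        return SimParams(
            mass=random.uniform(0.8, 1.2),
            friction=random.uniform(0.4, 1.0),
            latency=random.uniform(0.0, 0.05),
        )

    def run_episode(params: SimParams):
        ...  # roll out the policy in a simulator configured with `params`

    for _ in range(10):
        # Each episode runs in a differently configured simulator; a policy
        # trained across this family of dynamics transfers better to reality.
        params = sample_randomized_params()
        print(params)  # here we would call run_episode(params)
    ```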

    David Chemaly

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    Measuring the mass of distant black holes using gravitational lensing and machine learning

    Because of the time light takes to travel, observing a distant object means looking into the past, when the target was far more unstable and energetic. Studying a distant black hole would therefore teach us a great deal about the beginnings of our universe. The goal of this project is to develop, using neural-network machine learning, an algorithm capable of quickly and easily resolving the kinematics of distant supermassive black holes observed through gravitational lenses. The superior resolution of these data will deepen our understanding of young black holes. This will make it possible, for the first time, to measure the mass of distant supermassive black holes, a monumental challenge that is impossible with traditional astronomical methods and requires the innovative techniques of gravitational lensing combined with machine learning tools. The resulting data will then be subjected to an in-depth physical study during a summer internship at the University of Cambridge with the team of Prof. Roberto Maiolino, director of the Kavli Institute for Cosmology. Given their revolutionary resolution, these data will certainly lead to new discoveries. Given the large number of known gravitational lenses, this project will make it possible to distinguish never-before-seen structures in a significant number of black holes, while potentially enriching the scientific community’s knowledge of the primordial state of our universe. The project, co-supervised by Prof. Hezaveh and Prof. Hlavacek-Larrondo, both Canada Research Chair holders, will conclude with the publication of a first-author article.

    Olivier Denis

    Supervised by: Jean-François Arguin

    Université de Montréal

    Identifying LHC electrons using convolutional neural networks

    Research in particle physics relies on experiments that attempt to recreate the conditions under which the fundamental constituents of our universe are created. The experiment that currently allows us to accomplish this feat is a particle accelerator located near Geneva, Switzerland: the Large Hadron Collider (LHC). It is an immense underground ring nearly 27 km in circumference, in which two beams of protons are accelerated in opposite directions to speeds very close to the speed of light and then collide, producing the particles we wish to study.

    Our team uses data from the ATLAS detector (A Toroidal LHC ApparatuS), one of the four main experiments at the LHC. It consists of many layers of instruments that measure the trajectories and energies of the various particles produced in each collision.

    When running, the LHC generates a staggering amount of data to analyze, in which the identification of electrons plays a key role for many analyses and, among other things, enabled the discovery of the famous Higgs boson.

    With this in mind, our research group is developing a classifier that can identify which particles are real electrons and which are merely background noise. The classifier is a neural network of the kind typically used for image recognition. We feed the network images corresponding to the trajectories and energy deposits of the particles detected in ATLAS, and the network can then identify electrons with an efficiency 10 times better than that of the algorithm currently in use.
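
    As an illustration of this kind of classifier, here is a minimal PyTorch sketch that maps a multi-channel detector "image" to an electron-vs-background score. The input shape, channel count, and layer sizes are invented for the example and do not reflect the group's actual architecture.

    ```python
    import torch
    from torch import nn

    # Assumed input: 7 calorimeter/tracker layers rendered as a 7-channel
    # 32x32 "image" per particle candidate.
    class ElectronNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(7, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
                nn.Linear(64, 1),  # logit: electron vs. background
            )

        def forward(self, x):
            return self.head(self.features(x))

    net = ElectronNet()
    batch = torch.randn(4, 7, 32, 32)           # four fake candidates
    prob = torch.sigmoid(net(batch)).squeeze(1)
    print(prob)  # trained with nn.BCEWithLogitsLoss on labelled collisions
    ```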

    Clara El Khantour

    Supervised by: Karim Jerbi

    Université de Montréal

    Studying social cognition in laughter perception: comparing MEG brain-activity modulations between neurotypical individuals and individuals on the autism spectrum using machine learning methods

    Laughter is a form of non-verbal communication that we often use in our everyday interactions, but it has long been neglected in the neuroscientific literature. Yet a better understanding of our ability to decode an interlocutor’s laughter would have important implications for the study of social interactions and of certain neurodevelopmental disorders associated with deficits in social cognition, as is the case for people with autism spectrum disorder (ASD).

    This project aims to determine the neural correlates and the social-cognition processes underlying the recognition of social and spontaneous laughter, using magnetoencephalography (MEG) in neurotypical individuals and individuals with ASD, together with supervised machine learning methods.

    We will apply supervised machine learning methods to the MEG brain data in order to identify the parameters that discriminate between neurotypical individuals and individuals with ASD during the perception and recognition of social and spontaneous laughter.

    In the longer term, this project will provide new tools for screening mental health conditions, such as autism, in which social abilities are affected.

    Jean-Simon Fortin

    Supervised by: Sébastien Hétu

    Université de Montréal

    The habenula in a gambling context: a functional magnetic resonance imaging study

    Pathological gambling is a mental disorder characterized by a persistent pattern of gambling despite negative physical, psychological and social consequences. The habenula, a small brain structure located in the midbrain, appears to be involved in this pathology. It has been proposed that a dysfunction in the processing of negative-feedback information prevents the habenula from playing its role in the extinction of behaviours and in the brain’s use of negative feedback to adapt its strategy from one trial to the next. This would be the source of pathological gamblers’ inability to use negative feedback to adapt their responses and, ultimately, to stop betting (Zack et al., 2020). Given its possible role in pathological gambling, it is essential to better understand the role of the habenula system in a decision-making task similar to those in which gamblers take part. However, the habenula system remains poorly understood (Yoshino et al., 2020), owing to the difficulty of acquiring fMRI data from midbrain regions, in part because of their small size and their proximity to major brain arteries (Lawson et al., 2013). Using a gambling task, the present study aims to verify whether the habenula is part of a distributed network (Hétu et al., 2016) whose activity predicts whether the participant made a good decision and obtained a reward versus a bad decision and suffered a loss. To do so, we will use multi-voxel pattern analyses (MVPA). MVPA takes into account activity patterns spanning sets of voxels in order to decode the information they collectively contain (Cohen et al., 2017). One way of using MVPA is with classifiers drawn from machine learning: classifiers learn to weight the activity of each voxel in order to identify a decision boundary for classifying Condition X versus Condition Y. Our hypothesis is that the brain activity of the habenula system will make it possible to classify whether the participant won or lost, suggesting a role for the habenula in processing information about the outcome of a decision (gain vs. loss) in a gambling context. By leveraging machine-learning-based analyses, this project will provide new knowledge about the role of the habenula system in a gambling context, which could ultimately improve our understanding of pathological gambling and lead to treatments.
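
    The MVPA logic in miniature, on simulated voxel data: a cross-validated linear classifier whose above-chance decoding accuracy would be the evidence that a region carries outcome information. Trial counts, voxel counts, and effect sizes are invented.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 120, 200   # trials x voxels in a simulated ROI
    y = rng.integers(0, 2, n_trials)             # 0 = loss, 1 = win
    X = rng.normal(size=(n_trials, n_voxels))
    X[y == 1, :20] += 0.4                        # weak multivoxel signal in 20 voxels

    # Linear classifiers weight each voxel; above-chance decoding accuracy is
    # the evidence that the region carries outcome information.
    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = cross_val_score(clf, X, y, cv=5)
    print("decoding accuracy per fold:", scores.round(2))
    ```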

    Oumayma Gharbi

    Supervised by: Elie Bou Assi

    Université de Montréal

    AI and data mining EEG signals to classify depression and anxiety

    According to the Mental Health Commission of Canada, mental illness affects 1 in every 5 Canadians each year. Unfortunately, the screening of patients with psychiatric conditions remains descriptive and based on subjective questionnaires. The aim of this research project is to develop quantitative, artificial-intelligence-based methods capable of automatically screening patients with psychiatric conditions from electroencephalography (EEG) recordings of the brain’s electrical activity. Our specific objective is to use computational methods to implement probabilistic classification algorithms able to provide anxiety and depression scores based on EEG recordings. This research will be deployed at the Centre Hospitalier de l’Université de Montréal (CHUM), where more than 1,200 patients are admitted for a routine EEG per year. The ability to identify anxiety and depression precursors from routine EEG recordings would significantly improve patient care and accelerate the referral of patients before their condition worsens.
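
    A minimal sketch of the probabilistic-classification idea, under the assumption that band-power features have been extracted from each EEG recording: a logistic model outputs a probability that can be read as a screening score. Features and labels here are simulated.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Assumed features: band power (delta, theta, alpha, beta, gamma),
    # averaged over channels into 5 features per recording.
    X = rng.normal(size=(400, 5))
    y = (X[:, 2] - X[:, 3] + rng.normal(scale=1.0, size=400) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)

    # predict_proba yields a score in [0, 1] rather than a hard label, which
    # is what a screening tool would report to the clinician.
    scores = clf.predict_proba(X_te)[:, 1]
    print("example screening scores:", scores[:5].round(2))
    ```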

    Rose Guay Hottin

    Supervised by: Marco Bonizzato

    Polytechnique Montréal

    Clinical transfer learning for integrating expert knowledge into the automated optimization of neurostimulation

    Neuromodulation, the electrical stimulation of the nervous system, is increasingly used for several clinical indications. For example, deep brain stimulation can be used to reduce the motor symptoms of Parkinson’s disease, while epidural spinal stimulation can be indicated for treating chronic pain. The effectiveness of these techniques depends greatly on the stimulation parameters (spatial target, intensity, pulse shape). Clinicians therefore face a very costly process when programming optimal neurostimulation, owing to the enormous number of possible combinations.

    We have developed automated, AI-based systems to address this problem. Our algorithms outperform human programming by quickly finding the optimal combination in several real-time neuromodulation settings. Notably, this result is achieved even when no prior knowledge about the shape of the data is provided. Yet researchers and clinicians often have prior knowledge or hypotheses about the effectiveness of certain parameters. This knowledge comes from varied sources, such as group data, the clinician’s experience, or prior patient-specific evidence. Transferring this knowledge to the algorithmic system is highly relevant to its clinical use.

    We propose a clinical transfer learning (CTL) system allowing the flexible integration of prior clinical evidence into neurostimulation programming. Such a system should be able to take advantage of the knowledge provided, converging more quickly on an optimal neurostimulation. It should update existing knowledge by identifying differences with the collected data. The CTL system should also be robust enough that its performance is not negatively affected if the supplied information is inaccurate. Our new CTL algorithmic system will be developed and evaluated using already available animal stimulation databases, as well as clinical data from patients treated with deep brain stimulation for Parkinson’s disease and essential tremor.

    In line with IVADO’s goal of transforming healthcare through digital intelligence, this project aims to bring about a paradigm shift in the clinical programming of neurostimulation by developing an optimization system that can jointly leverage AI and clinical knowledge.
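
    One plausible way to express the CTL idea in code is Bayesian optimization whose surrogate's prior mean encodes the clinician's belief, so the search starts from that belief but is corrected by measurements. Everything below, including the response curve and the prior, is a toy stand-in for the lab's actual system.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    grid = np.linspace(0, 10, 101)[:, None]      # candidate stimulation amplitudes

    def true_response(a):                        # unknown patient response (toy)
        return np.exp(-0.5 * ((a - 6.2) / 1.5) ** 2)

    def prior_mean(a):                           # clinician's belief: optimum near 5
        return np.exp(-0.5 * ((a - 5.0) / 2.0) ** 2)

    X, y = [], []
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
    for _ in range(15):
        if X:
            Xa = np.array(X)
            # Model the residual from the clinical prior; predictions add it
            # back, so data gradually override an inaccurate prior.
            gp.fit(Xa, np.array(y) - prior_mean(Xa[:, 0]))
            mu, sd = gp.predict(grid, return_std=True)
            mu = mu + prior_mean(grid[:, 0])
        else:
            mu, sd = prior_mean(grid[:, 0]), np.ones(len(grid))
        a = grid[np.argmax(mu + sd), 0]          # upper-confidence-bound choice
        X.append([a])
        y.append(true_response(a) + rng.normal(scale=0.05))

    print("best amplitude found:", round(X[int(np.argmax(y))][0], 2))
    ```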

    Nanda Harishankar Krishna

    Supervised by: Guillaume Lajoie

    Université de Montréal

    Probing learning dynamics in recurrent neural networks and the brain

    Artificial Intelligence powered by neural networks is the driving force behind numerous applications in today’s world. While neural network models are capable of surprisingly good performance on many tasks, they are brittle and susceptible to adversarial attacks. Such challenges raise questions on their reliability and applicability to safety-critical domains, and thus motivate the study of mechanisms to improve their robustness.

    Studying the mechanisms that underpin learning in the brain, which is capable of robust generalisation, could provide us insight on making neural networks more robust. Key to this study is understanding the dynamics of learning in the brain. We propose to develop techniques for the analysis of learning dynamics from activations observed during training, based on experiments with RNNs. We then aim to utilise these tools to analyse learning dynamics in a proxy parameter space extracted from neural activity, for a brain-machine interface task.

    This would help us gain valuable insight into the nature of parameter updates in the brain, and also into the nature of the objectives that drive learning. Furthermore, it would spur the development of superior biologically-inspired algorithms for deep learning models, with the goal of improving their reliability.

    Nizar Islah

    Supervised by: Eilif Muller

    Université de Montréal

    Neocortical Continual Learning

    Long-term activity-dependent changes in synaptic strength are thought to underlie learning in the brain, but the mechanisms by which these synaptic changes are coordinated to achieve human-level learning remain a mystery. While deep learning in artificial neural networks provides a concrete example of how learning might be coordinated, contrasting and understanding the key differences from learning in brains could both improve our understanding of the brain’s learning mechanisms and inspire new deep learning approaches. The ability to learn new things without forgetting what was previously learned, known as continual learning, is one property of the human neocortex that deep learning does not reproduce. This project aims to understand how this ability emerges, using modeling approaches that unify deep learning with the architecture and constraints of the neocortex.

    Katia Jodogne-del Litto

    Supervised by: Guillaume-Alexandre Bilodeau

    Polytechnique Montréal

    Real-time object instance detection and segmentation by mask approximation

    The development of intelligent vehicles is a major technological advance, one that relies in part on computer vision techniques to observe the environment, whether for autonomous driving or driver assistance. Road users must be detected accurately and in real time in dense scenes. Using approximations greatly increases detection speed without losing much information.

    In computer vision, one of the essential building blocks of many tasks is detecting objects in an image: the system must be able to produce a rectangle around an instance of an object while identifying it. One can then indicate, for each pixel, whether or not it belongs to the detected object; this is instance segmentation, which produces a mask for each detected instance. It is a much more precise method, but also much more computationally expensive. The intermediate approach proposed in this project is a compromise between the simple rectangle and the mask: for each instance, a polygon is traced around the object, detected from its center. The method applies mainly to road users in dense urban settings, using the Cityscapes, KITTI and IDD datasets. The project builds on the CenterPoly detection method to propose a more robust approach, focusing in particular on the impact of the loss function and of the representation scheme for the approximating polygons.
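
    The polygon approximation at the core of this family of methods can be sketched in a few lines: cast a fixed number of rays from an instance's center and keep one boundary distance per angular sector, turning a dense mask into a fixed-size vertex vector that a network can regress, for example with an L1 loss. The geometry below is purely illustrative.

    ```python
    import numpy as np

    def mask_to_polygon(mask, k=16):
        """Approximate a binary mask by k polygon vertices around its centroid."""
        ys, xs = np.nonzero(mask)
        cy, cx = ys.mean(), xs.mean()
        angles = np.arctan2(ys - cy, xs - cx)
        radii = np.hypot(ys - cy, xs - cx)
        verts = []
        bins = np.linspace(-np.pi, np.pi, k + 1)
        for lo, hi in zip(bins[:-1], bins[1:]):
            sel = (angles >= lo) & (angles < hi)
            r = radii[sel].max() if sel.any() else 0.0   # farthest pixel per sector
            a = (lo + hi) / 2
            verts.append((cy + r * np.sin(a), cx + r * np.cos(a)))
        return np.array(verts)  # shape (k, 2): a compact stand-in for the mask

    mask = np.zeros((64, 64), dtype=bool)
    mask[20:45, 15:50] = True      # a rectangular "car"
    poly = mask_to_polygon(mask)
    print(poly.shape)              # (16, 2) vertices a network could regress
    ```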

    Ilitea Kina

    Supervised by: Louis-Martin Rousseau

    Polytechnique Montréal

    Using artificial intelligence to predict the logistical delays that lead to emergency department congestion

    Emergency department congestion is a genuine scourge in Quebec, with an impact on the quality of patient care and the well-being of care staff. Although numerous studies have shown increased mortality and morbidity among patients who experience hospital congestion, it remains difficult to predict with current tools and measures. In a context of staff shortages, this problem is all the more pressing.

    Emergency congestion stems from the logistical delays associated with radiology and biochemistry exams, with consultations, and with in-hospital patient transfers. By analyzing data on these various delays along the patient trajectory, we want to use an artificial intelligence tool to predict the delay thresholds that put the emergency department at risk of congestion. These thresholds will be dynamic, varying with the state of the emergency department in real time. With thresholds that predict congestion in real time, or a combination of several thresholds, emergency departments will be able to quickly detect the departments for which solutions should be undertaken. Acting more quickly on these delays could make it possible to avoid emergency congestion.

    Our analysis will be based on the emergency department of the Cité-de-la-Santé hospital in Laval, one of the busiest in Quebec, and will include data from the past three years. We will build machine learning algorithms, focusing on optimal classification trees, to predict emergency congestion, and we will evaluate their predictive performance to determine the techniques best suited to the Cité-de-la-Santé emergency department.
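
    A reduced sketch of the tree-based prediction step on simulated delays: a shallow classification tree maps a day's delay measurements to a congestion flag and exposes explicit, clinician-readable thresholds. Optimal classification trees fit the tree globally rather than greedily, but the interface is the same.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Simulated delays (minutes): radiology, lab, consultation, transfer.
    X = rng.gamma(shape=2.0, scale=30.0, size=(500, 4))
    congested = (X[:, 0] > 70) | (X[:, 3] > 90)          # assumed ground truth
    y = congested.astype(int)

    # A shallow tree yields explicit thresholds ("radiology delay > 70 min")
    # that staff can act on, which is the appeal of tree models here.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["radiology", "lab", "consult", "transfer"]))
    ```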

    Élodie Labrecque Langlais

    Supervised by: Frédéric Lesage

    Polytechnique Montréal

    Developing an algorithm for automated prediction of the success of the Transcatheter Aortic Valve Implantation (TAVI) surgical procedure from CT images

    Heart valve disease affects 8 to 13% of the population over age 65 worldwide, and this burden is growing as the population ages. Long treated with open-heart surgery, percutaneous aortic valve replacement (Transcatheter Aortic Valve Implantation, TAVI) has revolutionized the treatment of aortic stenosis. It consists of replacing the diseased aortic valve using a catheter introduced through a puncture of the femoral artery. Initially adopted only for patients ineligible for surgery or at high surgical risk, it is now validated for patients of all risk profiles. The continued expansion of TAVI nevertheless depends on its clinical success, namely procedural success (avoiding death and valve migration) and early safety (minimizing complications such as leaks around the valve and conduction disorders, which affect long-term survival). The clinical success of TAVI relies heavily on the characterization of the aortic valve and the adjacent anatomical structures observed on cardiac CT scans. No clinical score for predicting TAVI success currently exists; only technological advances such as artificial intelligence, and more specifically convolutional neural networks, can predict TAVI success from cardiac CT scans. The main objective of this project is therefore to develop an algorithm that automatically predicts the success of a TAVI procedure from cardiac CT scans.

    Myriam Lizotte

    Supervised by: Guy Wolf

    Université de Montréal

    Diffusion Geometry & Topology Approach to Data Fusion and Mitigating Batch Effects

    It is becoming crucial to combine datasets collected in different circumstances, but this task is challenging due to various inconsistencies introduced by data collection artifacts and inherent biases. This gives rise to a set of challenges often referred to as data fusion or batch effect removal, which can be divided into two main tasks that we aim to tackle in this MSc research. The first is combining data of the same system from different sensors, each with its own calibration, scale, and level of noise. The second is combining datasets that measure the same variables but in different conditions (e.g., subjects, locations, or time of day). This creates a problem called batch effects, where confounding variables hide the effect of true variables of interest. Even small batch effects, or their incomplete removal, can significantly bias statistical conclusions and detract from the ability to provide reliable insights from biomedical data.

    Significant efforts have been recently invested in unsupervised data fusion and mitigation of batch effects. Several diffusion-geometry approaches have been developed, including integrated diffusion and harmonic alignment. The former aims to combine multimodal data while denoising and adjusting for discrepancies in data capture resolutions. The latter aims to rigidly align or fit datasets together so that their information is comparable, in other words alleviating or removing batch effects. In this project, we will build upon and combine ideas from previous diffusion-based approaches, while aiming to relax the required assumptions. We will leverage the recent diffusion condensation framework, which captures data representations at different scales or resolutions, to identify which local regions or resolutions are most appropriate for aligning data, based on their intrinsic topological “shape”. Another goal is to pinpoint meaningful local differences between datasets, as opposed to global deformations due to technical artifacts.

    On a fundamental level, this research will further explore the intersection between diffusion geometry and topological data analysis, merging the two dominant approaches to manifold learning in data exploration. On the application side, it will address critical challenges in biomedical data exploration, especially encountered in high-throughput multi-modal data from multi-sample cohorts, to enable new research avenues and significantly advance the frontiers of AI and health.
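
    For readers unfamiliar with diffusion geometry, the sketch below shows the machinery in its simplest form, plain diffusion maps on a toy dataset: build kernel affinities, normalize them into a Markov operator, and embed the data with its leading non-trivial eigenvectors. The project's condensation and alignment algorithms build on, but go well beyond, this construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Toy data: a noisy circle, a 1-D manifold embedded in 2-D.
    t = rng.uniform(0, 2 * np.pi, 300)
    X = np.c_[np.cos(t), np.sin(t)] + rng.normal(scale=0.05, size=(300, 2))

    # Gaussian kernel affinities between all pairs of points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / 0.1)

    # Row-normalize into a Markov (diffusion) operator and take its spectrum.
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Eigenvectors 1 and 2 (skipping the trivial constant one) give the
    # diffusion-map coordinates in which the circle's geometry is exposed.
    embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]
    print(embedding.shape)  # (300, 2)
    ```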

    Aristides Milios

    Supervised by: Siva Reddy

    McGill University

    Language Models Enhanced with Hyperlink Graphs

    Pre-trained language models have achieved remarkable progress in the field of Natural Language Processing in the past few years. These models are trained on massive amounts of text extracted from the internet in order to learn linguistic patterns and factual information about the world, and they then act as general-purpose models that can be customized for a large number of distinct tasks. The text extracted from the internet is hypertext (e.g., formatting and links), which is currently stripped of all markup and used as plain text in these models’ pre-training regimes. However, this markup contains information that could be useful for the models to learn. Models do learn some limited factual information from the plain text they consume, but certain things remain unclear: how exactly the facts are encoded in the models’ parameters, why they learn some facts but not others, and how to prevent “hallucinations” (models making up incorrect information).

    One potentially useful aspect of this hypertext is hyperlinks, which form a web of interconnected links between pages. This hyperlink graph could help models learn factual information about the concepts and entities expressed in the free-form text. The main question this proposal seeks to answer is: how can we best incorporate hyperlink graph information into the pre-training process of language models to enhance the knowledge they learn about the world? One option is to incorporate the linked content into the model as text along with the regular input, using a control mechanism to ensure the model isn’t overloaded with potentially irrelevant information from the linked content. Intuitively, this is similar to a human looking up an unfamiliar term in an encyclopedia when they encounter it in a text. An alternative approach is to model the hyperlinks as mentions of real-world entities, and the text between two hyperlinks in a given sentence as a relation between them, and to train the model to encode this information directly.

    Creating more factually reliable models opens up language models to new uses, for example as a knowledge engine that can be queried for information in natural language.
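
    A tiny sketch of the first option: inlining a capped snippet of each linked page next to its anchor text before pre-training. The bracket-style link markup, toy pages, and length cap are all invented stand-ins for a real hypertext corpus and control mechanism.

    ```python
    import re

    # Toy "web" of pages; anchors are written as [anchor](page_id).
    PAGES = {
        "higgs": "The Higgs boson is an elementary particle produced by the "
                 "quantum excitation of the Higgs field.",
        "lhc": "The Large Hadron Collider is the world's largest particle "
               "accelerator, located near Geneva.",
    }

    def inline_links(text, max_snippet_words=12):
        """Replace each hyperlink with its anchor plus a capped definition snippet."""
        def expand(match):
            anchor, page_id = match.group(1), match.group(2)
            snippet = " ".join(PAGES.get(page_id, "").split()[:max_snippet_words])
            return f"{anchor} ({snippet})" if snippet else anchor
        return re.sub(r"\[([^\]]+)\]\(([^)]+)\)", expand, text)

    doc = "The [Higgs boson](higgs) was discovered at the [LHC](lhc) in 2012."
    print(inline_links(doc))  # pre-training example enriched with linked knowledge
    ```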

    Camille Rondeau Saint-Jean

    Supervised by: Timothée Poisot

    Université de Montréal

    Recognizing microdialects and individual songs of Savannah Sparrows with a deep neural network

    My project explores the capacity of deep learning to distinguish the song of individuals or of different social groups within a single bird species. I have a repertoire of several hundred hours of recordings of Savannah Sparrows from the Kent Island population in New Brunswick. They contain songs belonging to six microdialects, akin to local accents, that distinguish the birds living in different parts of the island.

    I plan to train a deep neural network to distinguish these microdialects and to acoustically identify the males that researchers have observed visually in the field. It is difficult for a human observer to tell different Savannah Sparrows apart by ear, because their song consists of a very rapid succession of short, high-pitched syllables. By analyzing the song features on which the neural network bases its classification, we will better understand how the birds communicate their identity and their membership in a local group.

    The development of automatic tools for recognizing individuals or dialects will help accelerate research on behaviour and on population structure. Once refined, these tools will make it possible to exploit, at an entirely different level of detail and on a very large scale, the recordings of bird sounds that are now easy and inexpensive to obtain in the field. Since cultural and genetic diversity within a single species is important for the survival of populations, the information we can then acquire will help better target conservation efforts aimed at slowing the biodiversity crisis.

    Myriam Sahraoui

    Supervised by: Bruno Gauthier

    Université de Montréal

    Studying the assessment of children’s cognitive profiles by combining high-density EEG, mobile EEG and artificial intelligence

    The assessment of children’s cognitive abilities relies essentially on neuropsychological tests. For example, figural fluency tests highlight various aptitudes such as cognitive flexibility, planning and creativity. To date, the brain processes associated with these abilities in children remain poorly understood, a limitation due in part to the constraints of an examination context that requires electroencephalography (EEG) recording in the laboratory. Over the past fifteen years or so, low-cost portable versions of these devices have been developed for commercial use, for example to support more effective meditation. These devices are increasingly used in research and open the way to an entirely new way of studying child cognition, particularly in natural settings outside the laboratory. To our knowledge, however, no study has examined their use for studying normal cognition in children. Moreover, the development of artificial intelligence algorithms now allows better exploration of neuroimaging data.

    The first objective of this project is to collect brain data from children while they perform a neuropsychological test, then to use learning algorithms to compare the results of a mobile EEG with those of a laboratory EEG. First, 40 participants aged 6 to 12 will be recruited and assigned to a mobile or laboratory EEG condition. An EEG recording will then be made while participants complete a digital version of the Five-Point Test, a test measuring figural fluency. The groups will be compared on the quality of the brain recordings. Finally, machine learning will be used to identify subgroups of participants with distinct cognitive profiles. Unsupervised learning will allow an exploration of the data without priors in child neuropsychology, a field where this type of approach remains underexploited.

    The second part of the project will develop prediction models to improve the spatial resolution of mobile EEG and bring it closer to that of laboratory EEG. These predictive models will be built from a larger dataset using algorithms based on artificial neural networks. Beyond the value of using EEG in the field, providing evidence of the usefulness of mobile EEG could encourage greater participation of vulnerable and remote populations in cognitive neuroscience research.
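
    The unsupervised step in miniature: cluster per-child feature vectors and let an internal criterion such as the silhouette score guide the number of subgroups. The features below are simulated; the real analysis would use test performance and EEG-derived markers.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Simulated per-child features: test score, completion time, two EEG markers.
    X = np.vstack([
        rng.normal([10, 60, 0.5, 0.2], 0.8, size=(20, 4)),   # profile A
        rng.normal([14, 45, 0.2, 0.6], 0.8, size=(20, 4)),   # profile B
    ])
    Xs = StandardScaler().fit_transform(X)

    # Unsupervised search for subgroups: no diagnostic labels are used.
    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
        print(k, "clusters, silhouette =", round(silhouette_score(Xs, labels), 2))
    ```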

    Rebecca Salganik

    Supervised by: Golnoosh Farnadi

    HEC Montréal

    Exposure Fairness in Music Recommendation

    Given the expansive growth of musical databases, streaming companies have employed recommendation systems to streamline the process of discovering new music. Unfortunately, recent years have brought the discovery of a wide range of biases that can seep from within these models into the lives of users. One such discovery is popularity bias: algorithmic reliance on pre-existing data causes a feedback loop in which previously popular items (with many ratings) are recommended instead of new or less well-known items with fewer ratings. Such a loop can force the algorithm to avoid new or niche items when it guides a listener’s discovery process. This is deeply concerning, as such a bias can have very serious consequences for the financial prospects of artists, the musical experiences of listeners, and, on a broader scale, our cultural values related to art. Our goal is therefore to mitigate popularity bias, or, equivalently, to promote exposure fairness in music recommendation. In our work, we develop a novel multimodal musical dataset containing the necessary social, cultural, and musical information. We use this dataset to train a state-of-the-art graph-neural-network-based recommender system, PinSage, to develop robust representations for musical items. Finally, we use in-processing methods to develop a fairness loss which, when combined with the training loss, enables PinSage to make recommendations that are both fair and relevant.
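
    The in-processing idea reduced to its core, with invented tensors: one scalar penalty measures the exposure gap between popular and niche items and is added to the relevance loss, so a single weight trades off the two objectives. This is a stand-in for, not a reproduction of, the PinSage-based system.

    ```python
    import torch

    def exposure_penalty(scores, is_popular):
        """Squared gap between mean predicted exposure of popular and niche items."""
        gap = scores[is_popular].mean() - scores[~is_popular].mean()
        return gap ** 2

    torch.manual_seed(0)
    scores = torch.randn(1000, requires_grad=True)   # model's item scores
    is_popular = torch.rand(1000) < 0.2              # 20% of items are "popular"
    relevance_loss = ((scores - torch.randn(1000)) ** 2).mean()  # stand-in loss

    lam = 0.5  # fairness weight: higher values push exposure toward parity
    loss = relevance_loss + lam * exposure_penalty(scores, is_popular)
    loss.backward()                                  # gradients reflect both goals
    print(float(loss))
    ```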

    Hugo Schérer

    Supervised by: Yashar Hezaveh

    Université de Montréal

    Characterization of the distribution of dark matter on small scales using strong gravitational lensing

    “Astrophysical observations indicate that there exists an unknown, exotic type of matter called dark matter, which actually constitutes the vast majority of matter in the Universe. Discovering dark matter and its properties is one of the most important priorities in modern physics. One powerful way to characterize dark matter is through a phenomenon called strong gravitational lensing. This phenomenon happens when there are two galaxies, one directly behind the other from the Earth’s perspective. The closer one is the lensing galaxy, and the one further away is the source galaxy. Because massive objects curve spacetime, the lensing galaxy distorts the image of the source galaxy, very much like an actual lens distorts images. Studying these distortions makes it possible to learn about the distribution of dark matter inside the lensing galaxy, and provides invaluable information about this unknown form of matter.

    My research will contribute to this study of dark matter using data from strong gravitational lenses. One major challenge is the analysis of this data, which is difficult and takes a lot of computing time with traditional analysis methods. Furthermore, very large amounts of data will become available in the coming years, which makes this challenge even more pressing. Neural networks have been shown to be extremely promising for dramatically reducing the computing time needed to analyze such data. Neural networks are computing algorithms that fall under the category of machine learning, also sometimes referred to as artificial intelligence. My research will contribute to developing machine learning methods to analyze data from strong gravitational lenses. I also plan to apply the methods that I develop to real data, which will further advance our understanding of dark matter in particular and of the Universe in general.”

    Mohammad-Hadi Sotoudeh

    Supervised by: Laurence Perreault-Levasseur

    Université de Montréal

    Reconstructing the Initial Conditions of the Universe Using Deep Learning

    “We humans have pondered the old question “where do we stand in the universe?” for centuries. We are curious to understand our origin, our place in the cosmos, and how our universe evolves. Nowadays, these questions are investigated in modern cosmology by analyzing astronomical observations through physical models. An essential step in this direction is reconstructing the universe’s initial conditions (i.e., its contents, their initial spatial distribution, and a handful of parameters underlying their evolution through time), contributing to our understanding of the universe and constraining fundamental physics models.

    Modern telescopes such as the James Webb Space Telescope, Euclid, and the Vera Rubin Observatory are significantly enhancing the volume and resolution of available cosmological data. This wealth of data provides remarkable opportunities for discoveries; however, it requires innovative inference techniques. This project will utilize advances in deep learning and data science to develop novel tools for high-dimensional inference in cosmology. These tools will not only pave the way for cosmologists to discover the initial conditions of the universe but also hold the promise of bringing new ideas and methods that can be fruitful for inference in other realms of science.”

    Internship grants: Data to tell

    Clémence Delfils

    Université de Montréal

    Internship at La Presse

    Ève Ménard

    Université de Montréal

    Internship at Le Devoir

    Baptiste Pauletto

    Polytechnique Montréal

    Internship at Le Devoir

    Sahar Ramazan Ali

    Université de Montréal

    Internship at CIRANO

    Thi Sopha Son

    Université du Québec en Outaouais

    Internship at CIRANO

    Rokhaya Yade

    Université du Québec à Trois-Rivières

    Internship at La Presse

    Postdoc-entrepreneur program

    Masoud Ali

    Supervised by: Pooneh Maghoul

    Polytechnique Montréal

    Project: Scient Analytics

    Scient empowers geologists and mining companies by identifying minerals that are indistinguishable to the human eye. Scient’s hardware-enabled software solution uses hyperspectral imaging and artificial intelligence to remove the latency between drilling and core characterization, providing virtually real-time feedback for operational decision support. Our mission is to maximize efficiency and minimize the environmental footprint of natural resource exploration, assessment, and extraction.

    Elham Kheradmand

    Supervised by: Manuel Morales

    Université de Montréal

    Project: Lucid Axon

    Lucid Axon is dedicated to advancing transparency, consistency, and comparability in an ESG-evolving world. Through the use of advanced AI-powered technology, we collect ESG data from a range of sources including financial disclosures, news, company data, and social media, and present it in a user-friendly web application. Our solutions are designed to be customizable to meet the unique needs of our clients, offering flexibility in the integration of ESG metrics for informed decision-making, investments, capital allocation, audits, and other related activities.

    Claudie Ratté-Fortin

    Supervised by: Jean-François Plante

    HEC Montréal

    Project: Clean Nature

    Accelerating the transition to sustainable, intelligent and connected winter practices

    Although essential for maintaining public safety, the application of de-icing salt creates major environmental and economic issues in Canada. For public administrators and private contractors alike, optimal de-icing salt application is the key to better management. The tools currently available to determine the type and quantity of salt to spread are limited to tables presenting ranges of quantities according to temperature ranges and other road weather descriptors. A technological catch-up is needed, especially as interpreting these tables can be complex and often rests on subjective judgment calls made during the decision-making process. Current practices favor the massive spreading of salt to ensure road safety.

    The aim of the project is to set up a pilot project with a municipality to carry out the proof-of-concept of an innovative decision support system (GuiA) for optimizing the spreading of de-icing salts and abrasives for winter road maintenance. This innovative tool, based on artificial intelligence, will propose de-icing salt and abrasive doses according to actual road and weather conditions. Road safety will be assessed using a pavement friction sensor, which will validate that the recommendations made by the tool are safe. An awareness-raising and outreach component will also be put forward to ensure the social acceptability of the AI tool among citizens and blue-collar workers. Water quality sampling will be carried out on a river running through the municipality, to monitor salinity over time.

    Postdoctoral research funding

    Saad Akhtar

    Supervised by: Charles Audet

    Polytechnique Montréal

    Topology Optimization for Conformal Cooling in Molds and Dies

    “The manufacturing of aluminum parts is plagued by problems related to the optimization of manufacturing molds. The quality, reliability, and production efficiency of these parts are directly related to the ability of the molds to efficiently transfer heat to facilitate cooling. The industrial technique traditionally used to dissipate heat is the drilling of holes in the mold die. These holes are generally not optimally placed, resulting in thermal stresses and microstructural variations in the manufactured parts. In this context, the availability of molds with complex cooling channels could remedy this limitation.

    In this project, we propose to design an open-source topological optimization platform based on the tools of Finite Element Modeling (FEM), Artificial Intelligence (AI), and Black Box Optimization (BBO) for thermofluidic problems, to specify the optimal location and shape of cooling channels. The project will leverage the strengths of the fellow, the host supervisors (Professors Charles Audet and Bruno Blais) at Polytechnique Montreal, and R&D collaborators at the National Research Council (NRC) Canada, allowing for significant synergy between the proposed research and institutional strategic priorities. Furthermore, the project will contribute to strengthening the energy-efficient manufacturing expertise of the local Quebec aluminum manufacturing industry.”

    Yacine Bareche

    Supervised by: John Stagg

    Université de Montréal

    DeepPredictIO: A Pan-Cancer Deep Learning Framework to Predict Response to Immune Checkpoint Inhibitors

    Cancer is the second deadliest disease worldwide. In the last decade, several agents known as immune checkpoint inhibitors (CPI), aimed at boosting the patient’s immune system to fight the tumor, have led to exciting results with long-lasting responses for some patients. However, CPI remains extremely costly for patients and public health care (>$10,000 per patient per month), and a large portion of CPI-treated patients (60-80%) do not derive benefit from this therapy. Thus, the identification of a robust biomarker of response to treatment that is easily applicable in routine clinical practice is currently one of the most active fields in immuno-oncology. In previous work, we developed a robust and powerful RNA-sequencing-based biomarker of response to CPI therapy, called PredictIO. Yet, despite continuous efforts to decrease cost and processing time, RNA-sequencing remains ill-suited to routine clinical practice. Hematoxylin & Eosin (HE) stained Whole-Slide Images (WSIs) are the current gold standard for solid tumor diagnosis, with cheap and fast protocols used worldwide. We hypothesize that a model able to efficiently predict CPI response from HE WSIs would greatly improve clinical decisions, with more robust patient stratification through a tool directly applicable in routine clinical practice.

    Edoardo Maria Ponti

    Supervised by: Siva Reddy

    McGill University

    Skill Discovery in Language Models

    One of the key goals of natural language processing is devising models that can use language creatively and in unforeseen circumstances. In humans, this is possible by virtue of the fact that each linguistic “task” (e.g., answering a question) results from the combination of different sets of skills, which are autonomous and reusable facets of knowledge. The goal of my project is to integrate such a modular design into neural architectures. In particular, I will focus on language understanding grounded in a simulated environment. Upon receiving a new instruction, an agent will execute a sequence of actions conditioned on a specific subset of skills it has learned (navigating a room, picking up objects, opening doors, etc.). Similarly, latent skills can be discovered for language generation, thus controlling the text a model outputs and making it more diverse. My project therefore holds promise to better align machine learning with human creativity in language usage.

    Denahin Hinnoutondji Toffa

    Supervised by: Dang Khoa Nguyen

    Université de Montréal

    Adapting a multimodal classifier for the electroencephalographic diagnosis of epilepsy

    Epilepsy is a brain disease affecting roughly 140,000 Canadians and 50-60 million people worldwide. In 2020, the diagnosis of epilepsy still depends on identifying characteristic visual details on the electroencephalogram (EEG): epileptiform spikes. However, these spikes are absent or of uncertain significance in roughly 30-70% of cases, leading to over-diagnosis or to diagnostic delays that are sometimes serious. To improve the chances of an early diagnosis even in the absence of spikes, we propose an algorithm for automated multimodal analysis targeting both the spikes and EEG details invisible to the eye. Likewise, where a neurologist would subjectively draw on 2-30 years of experience, our AI algorithm will be able to objectively resolve diagnostic situations by drawing on the equivalent of hundreds of years of career hindsight. In total, more than 25,000 anonymous EEG samples with correlated diagnoses will be selected at the CHUM to train the base version of the algorithm to recognize EEGs correlated with epilepsy. This tool will be an innovative solution whose optimization in a clinical context will significantly improve the diagnostic threshold of the EEG. It could also radically change the management strategy for epilepsy, and even for other neurological conditions.

    Ankur Mali

    Supervised by: Eilif Muller

    Université de Montréal

    Surprising you learn, not surprising you don’t: A model of neocortical perception and learning based on prediction

    Here we aim to develop a deep learning network architecture that combines the strengths of predictive coding and self-supervised learning approaches and is constrained by the neocortical architecture, to account for the role of integrated contextual priors and surprise in both inference and learning. We will leverage an initial prototype bio-inspired deep learning model of a mechanism for combining contextual priors with sensory inference at one cortical region that was recently developed in the lab of Dr. Eilif Muller. We will augment this prototype, and 1) incorporate a model of surprise accounting for known neocortical perceptual dynamics under violation of contextual priors, and 2) develop a local learning algorithm consistent with synaptic plasticity dynamics between pyramidal integration zones for sensory input and contextual priors (basal and apical dendritic compartments, respectively). The latter will be performed in close collaboration with computational neuroscientists working in the lab and other experimental groups studying such plasticity dynamics. We will develop the network architecture to scale to learning in a deep hierarchy on tasks and datasets consisting of natural images.

    Aaron Berk

    Supervised by: Tim Hoheisel

    McGill University

    Realistic sampling strategies for deep generative inverse problems in medical imaging

    Generative neural networks (GNNs) have shown impressive performance in capturing intrinsic low-dimensional structure in natural images. For instance, some produce realistic-looking images of human faces. This makes them promising candidates for modelling complex structured data. Recently, GNNs are being developed as structural proxies for inverse problems in medical imaging, such as magnetic resonance imaging and computed tomography. Robustness and interpretability are mandates in medical imaging. However, both are major open challenges for neural network-based reconstruction methods, and a solid theoretical understanding is required to address them. In this research program, we propose a new theoretical analysis aimed at improving the reliability of GNNs in crucial applications. In particular, we will investigate the use of GNNs as structural proxies for inverse problems by elucidating optimal sampling strategies for realistic measurement processes. This work will develop new probabilistic machinery, which we expect to be useful for analyzing other open questions about neural networks.
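
    To make the “structural proxy” idea concrete, here is a minimal sketch of reconstruction with a generative prior, in which a latent code is optimized so the generator’s output matches the measurements; the generator, measurement operator and dimensions are stand-ins, not the project’s models.

        import torch

        latent_dim, signal_dim, m = 16, 256, 64

        # Stand-in generator G: latent code -> signal (a trained network in practice).
        G = torch.nn.Sequential(torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, signal_dim))
        for p in G.parameters():
            p.requires_grad_(False)  # the prior is fixed; only the code is optimized

        A = torch.randn(m, signal_dim) / m ** 0.5  # toy measurement operator
        x_true = G(torch.randn(latent_dim))
        y = A @ x_true                             # observed measurements

        # Solve min_z || A G(z) - y ||^2 by gradient descent on the latent code z.
        z = torch.zeros(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=1e-2)
        for _ in range(500):
            opt.zero_grad()
            loss = ((A @ G(z) - y) ** 2).sum()
            loss.backward()
            opt.step()

        print("relative error:", ((G(z) - x_true).norm() / x_true.norm()).item())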

    Véronique Brousseau-Couture

    Supervised by: Normand Mousseau

    Université de Montréal

    Optimizing the training set for atomistic modeling of solid-state batteries

    Faced with the threat of climate change, the need to develop efficient, safe and inexpensive energy storage methods is considerable, since such devices are necessary to support the migration to clean, renewable energy. Solid-state batteries are among the most promising avenues. To identify promising materials, however, a very large number of candidates must be investigated, which is far too computationally expensive with current state-of-the-art methods. Using machine learning to develop the interatomic potentials that will let us perform these calculations at lower cost is therefore a preferred avenue. Yet the calculations needed to build an effective training dataset remain particularly heavy, since thousands of atoms must be simulated. This project therefore proposes to use the latent space concept developed in encoder-decoder networks to identify a set of simpler atomic configurations, containing fewer atoms, that would nevertheless give the machine learning model information equivalent to very complex configurations. By optimizing the construction of our training dataset in this way, we can improve the predictive capabilities of our machine learning models while considerably reducing the computational cost of the calculations.
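
    A minimal sketch of the selection idea, assuming configurations have already been embedded in a latent space by an encoder-decoder network: farthest-point sampling picks a small, diverse subset of candidates for expensive labeling. The shapes and the random embeddings are placeholders.

        import numpy as np

        def farthest_point_sampling(Z, k, seed=0):
            # Greedily pick k latent codes that are mutually far apart, so the
            # training set covers the latent space with few configurations.
            rng = np.random.default_rng(seed)
            chosen = [int(rng.integers(len(Z)))]
            dists = np.linalg.norm(Z - Z[chosen[0]], axis=1)
            for _ in range(k - 1):
                nxt = int(np.argmax(dists))
                chosen.append(nxt)
                dists = np.minimum(dists, np.linalg.norm(Z - Z[nxt], axis=1))
            return chosen

        Z = np.random.default_rng(1).normal(size=(5000, 32))  # latent codes of candidate configurations
        subset = farthest_point_sampling(Z, k=100)
        print(len(subset), "configurations kept for expensive reference calculations")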

    Arthur Chatton

    Supervised by: Mireille Schnitzer

    Université de Montréal

    Clustered super learning for optimal decision rules

    The super learner is a powerful supervised ensemble learning method that optimizes predictions by combining several machine learning approaches. However, several contexts in the health or social sciences involve repeated measures for each unit, requiring specific loss functions and cross-validation schemes due to autocorrelation. Our main goal is to extend the super learner to such clustered data in order to predict which statistical units are most at risk of poor outcomes. In a second step, we plan to develop a dynamic weighted outcome linear regression estimator for clustered data. We will then use this estimator to identify effect-modifying variables and build rules for optimally setting exposures to improve patient outcomes. We will apply these methodological developments in nephrology to optimize online dialysis sessions according to the patient’s and the session’s characteristics.
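
    A minimal sketch of the clustered cross-validation ingredient, using scikit-learn’s GroupKFold so that repeated measures from the same unit never straddle a train/validation split; the stacking ensemble is a generic stand-in for a super learner, and the data are simulated.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor, StackingRegressor
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import GroupKFold, cross_val_score

        rng = np.random.default_rng(0)
        n_units, visits = 50, 8
        groups = np.repeat(np.arange(n_units), visits)  # one cluster per patient
        X = rng.normal(size=(n_units * visits, 5))
        y = X[:, 0] + rng.normal(size=len(X))

        # Stack two base learners with a linear meta-learner (super-learner-like).
        # For brevity the stack's internal folds are left at their defaults; a full
        # clustered super learner would make those group-aware as well.
        ensemble = StackingRegressor(
            estimators=[("ridge", Ridge()), ("rf", RandomForestRegressor(n_estimators=50))],
            final_estimator=Ridge(),
        )

        scores = cross_val_score(ensemble, X, y, cv=GroupKFold(n_splits=5), groups=groups)
        print("cluster-respecting CV R^2:", float(scores.mean()))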

    Eduard Gorbunov

    Supervised by: Gauthier Gidel

    Université de Montréal

    Improving the Theory of Numerical Methods for Solving Variational Inequalities and Distributed and Stochastic Optimization Problems

    The goal of the project is to improve theoretical understanding of existing methods for solving variational inequalities, distributed and stochastic optimization problems, and to design new efficient methods with better convergence guarantees in comparison to existing ones. In particular, we plan to push further the theory of stochastic methods for variational inequalities and min-max problems motivated by machine learning applications, obtain new theoretical results on stochastic optimization with heavy-tailed noise in the gradients, develop new efficient distributed methods robust to Byzantine attacks, and new communication-efficient distributed methods with compression. Theoretical challenges we consider are motivated by the various applications including training of GANs, federated learning, collaborative learning, and training complicated deep learning models on NLP tasks.
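
    For intuition, here is the extragradient method, one of the workhorse algorithms for variational inequalities and min-max problems, on the toy bilinear game min_x max_y xy, where plain simultaneous gradient descent-ascent spirals away from the equilibrium; the step size and problem are purely illustrative.

        def extragradient(x, y, lr=0.3, steps=200):
            # Extragradient for min_x max_y f(x, y) = x * y:
            # 1) extrapolate to a midpoint, 2) update with the midpoint gradients.
            for _ in range(steps):
                gx, gy = y, x                          # grad_x f = y, grad_y f = x
                x_half, y_half = x - lr * gx, y + lr * gy
                gx, gy = y_half, x_half                # gradients at the extrapolated point
                x, y = x - lr * gx, y + lr * gy
            return x, y

        x, y = extragradient(1.0, 1.0)
        print(f"converges toward the equilibrium (0, 0): ({x:.4f}, {y:.4f})")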

    Hélène Verhaeghe

    Supervised by: Gilles Pesant

    Polytechnique Montréal

    Improving the solving of scheduling problems using machine learning

    “The goal of this project is to use machine learning algorithms to help improve the solving of scheduling problems such as the Resource-Constrained Project Scheduling Problem (RCPSP). Constraint programming (CP) has proven effective at solving such scheduling problems. However, instances with more than 500 tasks are still hard to solve by any method. The target of this project is to be able to solve instances with up to 2000 tasks.

    Combining ML and CP has already proven successful on multiple occasions.

    Here, ML algorithms would principally be used to find clusters of hard-to-schedule tasks in order to schedule them first, as a sub-problem.”
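
    A minimal sketch of the constraint programming side, using OR-Tools CP-SAT as an illustrative solver (not necessarily the project’s): three tasks sharing one cumulative resource, one precedence, minimizing the makespan. An ML-detected cluster of hard tasks would be scheduled as such a sub-problem first.

        from ortools.sat.python import cp_model

        durations = [3, 2, 4]  # illustrative task durations
        demands = [2, 1, 2]    # resource demand of each task
        capacity = 3           # resource availability
        horizon = sum(durations)

        model = cp_model.CpModel()
        starts, ends, intervals = [], [], []
        for i, d in enumerate(durations):
            s = model.NewIntVar(0, horizon, f"s{i}")
            e = model.NewIntVar(0, horizon, f"e{i}")
            intervals.append(model.NewIntervalVar(s, d, e, f"i{i}"))
            starts.append(s)
            ends.append(e)

        model.AddCumulative(intervals, demands, capacity)  # shared renewable resource
        model.Add(starts[2] >= ends[0])                    # precedence: task 0 before task 2

        makespan = model.NewIntVar(0, horizon, "makespan")
        model.AddMaxEquality(makespan, ends)
        model.Minimize(makespan)

        solver = cp_model.CpSolver()
        if solver.Solve(model) == cp_model.OPTIMAL:
            print("makespan:", solver.Value(makespan))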

    Hao-Ting Wang

    Supervised by: Pierre Bellec

    Université de Montréal

    Impact of age and sex on transdiagnostic brain biomarkers amongst neurodegenerative conditions

    “With the ageing of the Canadian population, neurodegenerative diseases are reaching epidemic levels. At present, it is not possible for clinicians to predict accurately if and when a patient with no symptoms or mild cognitive impairment will start experiencing debilitating symptoms of dementia given basic information such as age and sex. An early, reliable diagnosis could however dramatically increase the effectiveness of current and future interventions.

    Magnetic resonance imaging is a promising technology to assist clinicians in making a precise diagnosis by providing a non-invasive window into the structure and function of the brain. With the innovation of AI combined with the availability of multisite datasets, we can bring the rich information found in these brain images to clinical practice. In order to understand the impact of age and sex on diagnostic brain markers, we need to see how brain markers vary across a large number of contexts and individuals.”

    Marta Zagorowska

    Supervised by: Moncef Chioua

    Polytechnique Montréal

    Robust and Data-Efficient Learning for Industrial Control

    “Increasing energy and resource efficiency in industrial systems is key to decreasing harmful emissions by 90% by 2050. Reaching the environmental targets requires a holistic approach to how resources and energy are delivered to industry by means of distribution networks, such as heat networks, electricity networks, or gas transport networks. I will devise new control strategies that ensure robust operation of distribution networks while ensuring safety and the satisfaction of environmental objectives.

    The environmental performance of the whole system hinges on the performance of distribution networks. Optimal control of such networks is complex due to multiple timescales (from milliseconds, to ensure safe operation of pumps or generators, to the days or months over which environmental goals apply), spatial complexity, uncertainty related to varying operating conditions, incomplete information, and limited computational power. Existing control frameworks are usually application-specific and of limited use in large-scale systems.

    In the project, I will use advanced theory in data analytics and optimisation and build on my industrial experience to develop operating strategies for distribution networks that enable safe implementation and the achievement of the environmental targets.”

    Wentao Zhang

    Supervised by: Jian Tang

    HEC Montréal

    Graph Data Mining 

    “I will continue to focus on graph data, graph models, and graph systems. Concretely, my future research will include:

    1. AutoML on Graph: model and system
      Recently, the combination of AutoML and graph data mining has attracted a great deal of attention. I will focus my research on auto-knowledge graphs, auto-network embedding and graph neural architecture search, and then build a system to make better use of them.
    2. Data-centric Graph Mining
      Many state-of-the-art graph models are data-driven. Building data-driven graph models (e.g., models robust to data noise, weakly supervised learning, self-supervised learning and federated learning under limited graph data) is another line of my future work.
    3. Machine Learning for Graph Data
      Data is playing an increasingly central role in creating ML solutions. So, I hope to build a system that can deal with the data challenges (e.g., data annotation, data cleaning, and data augmentation) in graphs.”

    Undergraduate research initiation grants

    Anna Andrienko

    Supervised by: Margarida Carvalho

    Université de Montréal

    A binary decision diagram-based approach for interdiction games: Critical Node Problem

    Vanessa Bellegarde

    Supervised by: Julie Hussin

    Institut de cardiologie de Montréal (ICM)

    Predicting ethnicity proportions using the Diet Network

    Geneviève Bistodeau-Gagnon

    Supervised by: Guy Wolf

    Université de Montréal

    Integrated data-driven approaches for understanding immunological data

    Marise Bonenfant

    Supervised by: Lubna Daraz

    Université de Montréal

    Ontology Assessing the Reliability of Mental Health Tools and Information on the Internet

    Mégan Brien

    Supervised by: Frédéric Gosselin

    Université de Montréal

    Decoding real-world visual recognition abilities in the human brain

    Ariane Brucher

    Supervised by: Phaedra Royle

    Statistical analyses of event-related potentials during language processing in neurotypical French-speaking adolescents

    Sara-Ivana Calce

    Supervised by: An Tang

    Centre hospitalier de l’Université de Montréal (CHUM)

    Applying deep learning techniques to classify diffuse liver diseases from ultrasound imaging

    Marianne Chevalier

    Supervised by: Éric Lacourse

    Université de Montréal

    Exploring computational approaches for analyzing and comparing survey data

    Jonathan Couture

    Supervised by: Alexandre Dumais

    Centre intégré universitaire de santé et de services sociaux de l’Est-de-l’Île-de-Montréal (CIUSSS-EMTL, Hôpital Maisonneuve-Rosemont et Institut universitaire en santé mentale de Montréal)

    Avatar Therapy for treatment-resistant schizophrenia: modeling the therapeutic process using Natural Language Processing

    Simon Del Testa

    Supervised by: Vincent-Philippe Lavallée

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Comparative genomic analysis of leukemias at diagnosis and relapse

    Guillaume Dubé

    Supervised by: Éric Lacourse

    Université de Montréal

    Democratizing the concept of regularization for social science research

    Clara El Khantour

    Supervised by: Karim Jerbi

    Université de Montréal

    Analysis of MEG brain data combining spectral analyses and machine learning

    Gaspar Faure

    Supervised by: Guillaume-Alexandre Bilodeau

    Simultaneous object detection and tracking

    Louis-Simon Guité

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    Evidence of Massive Runaway Gas Cooling in High-Redshift Clusters of Galaxies

    Armelle Jézéquel

    Supervised by: Michel Gagnon

    Polytechnique Montréal

    Web scraping of Québec cultural events

    Rose Jutras

    Supervised by: Frédéric Gosselin

    Université de Montréal

    Individual differences in semantic and conscious processing of natural scenes.

    Jin Kwon

    Supervised by: Aude Motulsky

    Centre hospitalier de l’Université de Montréal (CHUM)

    Developing tools for standardized collection of patient prescriptome data

    Yassine Lamrani

    Supervised by: Elie Bou Assi

    Centre hospitalier de l’Université de Montréal (CHUM)

    Connected devices and AI: detecting epileptic seizures from breathing

    Audrey Lamy-Proulx

    Supervised by: Sébastien Hétu

    Université de Montréal

    Processing of social hierarchy and its influence on decision-making

    Julie Lanthier

    Supervised by: Jean Provost

    Implementing software for functional ultrasound brain imaging in several animal models

    Justine Le Blanc-Brillon

    Supervised by: Sébastien Hétu

    Université de Montréal

    Studying social influence using a neuroimaging and data science approach

    Dragos Cristian Manta

    Supervised by: Adam Oberman

    Generalization for Semi-Supervised Learning

    Nadine Mohamed

    Supervised by: Roberto Araya

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Digital morphological analysis and modeling of neuronal dendrites and dendritic spines in neurodegeneration

    Manuel Pellerin

    Supervised by: Michel Desmarais

    Polytechnique Montréal

    Learning analytics in Moodle for building engagement and learning dashboards

    Félix Pellerin

    Supervised by: Antoine Lesage-Landry

    Polytechnique Montréal

    Optimal policies to mitigate power system-ignited wildfires and peak demand

    Sophie Rodrigues-Coutlée

    Supervised by: Alexandre Dumais

    Centre intégré universitaire de santé et de services sociaux de l’Est-de-l’Île-de-Montréal (CIUSSS-EMTL, Hôpital Maisonneuve-Rosemont et Institut universitaire en santé mentale de Montréal)

    Innovative Virtual Reality and Artificial Intelligence based psychotherapy to reduce cannabis use in patients with psychotic disorders

    Emily Tam

    Supervised by: Blake Richards

    Exploring the role of Dale’s Law in Artificial Neural Networks

    Nicole Tebchrany

    Supervised by: Jean Provost

    Optical ultrasound sensor

    Jacqueline Nguyen Phuong Trieu

    Supervised by: Anne Gallagher

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Optimization of data analysis strategies for the ELAN Project: a multimodal approach

    Konstantinos Tsiolis

    Supervised by: Adam Oberman

    McGill University

    Statistical Learning Theory Applied to Word Embeddings

    Élise Vaillancourt

    Supervised by: Louis Doray

    Université de Montréal

    Calculating individual life annuities and insurance premiums under Covid-19

    Michael Vladovsky

    Supervised by: Vincent-Philippe Lavallée

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Comparative analysis of transcriptome and exome sequencing for the diagnosis of acute myeloid leukemias

    Anton Volniansky

    Supervised by: Alexandre Dumais

    Centre intégré universitaire de santé et de services sociaux de l’Est-de-l’Île-de-Montréal (CIUSSS-EMTL, Hôpital Maisonneuve-Rosemont et Institut universitaire en santé mentale de Montréal)

    Natural Language Processing as a tool to predict response to Avatar Therapy for hallucinations in treatment-resistant schizophrenia

    Masters excellence scholarships

    Benjamin Akera

    Supervised by: David Rolnick

    McGill University

    Learning Global Embeddings for social good – A case study of Remote Sensing and Citizen Science

    “Despite the growth of initiatives for monitoring life on earth, there continue to be vast gaps in data and knowledge across modalities.

    To bridge this gap, we learn a global ecosystem embedding to approximate terrestrial biodiversity at different regions on a geographical scale. This embedding will utilize data from different sources, including remote sensing and citizen science observations of animals and plants. The combination of unlabelled satellite imagery and sparse information on biodiversity will allow us to establish a framework for learning global embeddings in ecology which can be extended to other societal challenges such as climate change and food security in regions where little or no data might exist.”

    Ève Campeau-Poirier

    Supervised by: Laurence Perreault Levasseur

    Université de Montréal

    Modeling gravitational lenses with a recurrent inference machine (RIM)

    “The two methods used so far to determine the expansion rate of the universe yield different results. One relies on the redshift of light from local stars, that is, on the recent state of the universe. The other uses the assertions of the standard model of cosmology together with the temperature variations of the cosmic microwave background, the electromagnetic radiation left over from the primordial universe. Understanding the origin of this disagreement is important, among other things, to verify the validity of our current knowledge of the evolution and structure of the universe. A third, independent measurement method could confirm one of the values and thus tell us about the error that causes the gap.

    One such method requires observing a large sample of gravitational lenses. These are systems of two galaxies, one called the source and the other the lens. The lens galaxy lies between the Earth and the source galaxy. Through gravitational effects, the lens deflects the light emitted by the source, distorting the image of the source that reaches the Earth. To estimate the expansion rate of the universe with these systems, the mass distribution of the lens must be modeled. However, no known relation yields the lens’s mass distribution directly from the distorted image of the source. The current procedure is to simulate distorted images from many different mass distributions and keep the one whose simulated image most resembles the real one. This process is slow and computationally expensive.

    Our project aims to discover the relation giving the mass distribution as a function of the distorted image. To do so, we will use a recurrent inference machine (RIM), a machine learning method that produces an inference algorithm. By training a neural network with this method, it will learn the process of extracting the mass distribution from the distorted image. This process can then be applied quickly and efficiently to any new lens system. Given the large volume of images expected from the new generation of telescopes, this will make it possible to estimate the expansion rate of the universe within a reasonable time frame. Moreover, RIMs have never been tested on nonlinear problems such as this one, so the project could open up a range of possibilities for their future applications.”
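
    Schematically, a recurrent inference machine iterates learned updates of the form (standard RIM notation, given here for orientation rather than taken from the project):

        x_{t+1} = x_t + g_\phi\left(\nabla_x \log p(y \mid x_t),\, x_t,\, s_t\right), \qquad s_{t+1} = h_\phi\left(\nabla_x \log p(y \mid x_t),\, x_t,\, s_t\right)

    where y is the distorted image, x_t the current estimate of the mass distribution, s_t a recurrent memory state, and g_\phi, h_\phi trained neural networks; training on simulated (mass distribution, image) pairs teaches the network the inference procedure itself.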

    Clémentine Courdi

    Supervised by: Éric Lacourse

    Université de Montréal

    Predicting the physical and mental health profiles of Canadian mothers from their vulnerability-factor profiles using latent class analysis

    “The objective of this research is to determine whether profiles of risk factors, including for example age, education level, income and immigration status, affect the health profile of Canadian mothers, including physical and mental health. Maternal health is at the heart of this project, defined as including all aspects (physical as well as mental) of a woman’s health before and during pregnancy, through delivery and the postpartum period in the following months. By investigating which factors, and more precisely which risk profiles, are associated with physical and mental health problems in mothers, we hope to find public health strategies that promote better health in mothers and, consequently, in children. To study the link between risk profiles and mothers’ health profiles, analysis techniques inspired by artificial intelligence will be used. Previous research has studied specific risk factors, such as being a teenage mother, and their consequences for maternal health. This project stands out in that a large number of risk factors will be considered simultaneously to examine how they act in synergy to affect maternal health. Likewise, no particular health problem is singled out; it is the overall state of physical and mental health before, during and after pregnancy that will be analyzed. The analysis will be conducted on data from the Canadian Community Health Survey (CCHS), specifically the 2010, 2014 and 2018 waves. The project therefore also includes a comparative component assessing how the situation has evolved.”

    Ali Fakhri

    Supervised by: James Goulet

    Polytechnique Montréal

    Bayesian Dynamic Linear Models for Structural Health Monitoring

    “Roads, bridges, and dams are critical components of the modern infrastructure system. Their deterioration from ageing, usage, and environmental exposure may require costly repairs or even result in catastrophic failure. This can negatively impact the economy or even lead to loss of life. A large portion of Canadian infrastructure is old or in bad condition, which poses risks to the Canadian economy and public safety. Research in Structural Health Monitoring (SHM) offers a promising path towards addressing this issue. Namely, we can detect changes in structural behaviour using the data captured by sensors placed on the structure. This will allow us to monitor the structure’s condition in real time, and any unusual behaviour can trigger preventative actions (e.g., field inspections and repairs).

    There have been considerable advances in SHM methods in recent years. In particular, Bayesian Dynamic Linear Models (BDLMs) were shown to be very effective in detecting anomalies in the context of SHM. Nonetheless, much remains to be done before a BDLM framework can be deployed to a large population of different structures. Specifically, the existing methods rely on expert judgement and prior data analysis to prepare the model. Thus, suboptimal selection of parameters can greatly hinder the performance. Alternative parameter selection methods must be investigated to improve the performance and make the models applicable to a wide range of structures. The existing model capabilities also need to be extended to include and interpret multiple data sources and account for non-periodic patterns in data.

    This project aims to investigate the presented research gaps and further advance the BDLMs for SHM, getting us closer towards managing the ageing civil infrastructure via a large-scale deployment of a generic low-cost SHM system.”

    Alexandre Hudon

    Supervised by: Alexandre Dumais

    Centre intégré universitaire de santé et de services sociaux de l’Est-de-l’Île-de-Montréal (CIUSSS-EMTL, Hôpital Maisonneuve-Rosemont et Institut universitaire en santé mentale de Montréal)

    Natural Language Processing as a tool to support the therapist in tracking a patient’s clinical progress during Avatar Therapy for residual hallucinations in treatment-resistant schizophrenia

    “In this era of precision medicine, one of the fundamental challenges of psychiatric research is to improve our models for predicting patients’ clinical course. In this project, we aim to set up an automated procedure for analyzing the speech of patients with a mental disorder in order to predict treatment response following a therapy session and to adapt the next session accordingly.

    The project aims to develop an automated speech-analysis procedure (Natural Language Processing, NLP) for patients engaged in Avatar Therapy, a dialogic therapy in which, with the help of virtual reality, the patient enters into dialogue with a negative hallucination represented by an avatar.

    Operationally, the automated procedure will also transform the audio recordings of therapy sessions into written text, automatically code the discourse units with the algorithm, and then apply statistical models drawn from machine learning to identify features that discriminate good from poor responders, and thus adapt the therapy to follow the trajectory of good responders.

    The relevance of this innovative, forward-looking project is that it opens the door to adequately predicting a patient’s clinical response to a therapy from a lexical field, that is, from the dialogues with their therapist. This concept could be used not only for Avatar Therapy but could be extrapolated to a wide range of psychotherapies.”

    Pascal Laferriere-Langlois

    Supervised by: Nadia Lahrichi

    Polytechnique Montréal

    Predicting blood pressure variations in patients undergoing surgery

    “This project aims to improve the monitoring tools available to physicians administering anesthesia to a patient undergoing surgery. We want to integrate all available information to predict how the patient’s blood pressure will evolve over the next few minutes. We already know that brief periods of excessively high blood pressure (hypertension) or excessively low blood pressure (hypotension) are harmful to the patient and increase the risk of postoperative complications.

    By predicting these hypotension or hypertension episodes in advance, we will help the clinician keep blood pressure within a safe range. To achieve this, we will develop algorithms built on past patients who experienced blood pressure variations. Through a collaboration with the University of California, Los Angeles (UCLA), we will have access to large databases of surgical patients and will be able to analyze medical records to identify which medical histories influence blood pressure dynamics. We will also analyze how these patients’ vital signs and physiological waveforms evolve over time, to identify which characteristics can help predict blood pressure variations. Using a sequence of supervised and unsupervised learning analyses, we will determine which parameters matter, build our algorithms, and then test the quality of their predictions on patient databases.

    By integrating these algorithms into the monitoring of surgical patients, we can make it easier to maintain blood pressure and possibly reduce patient complications.”

    Robin Legault

    Supervised by: Emma Frejinger

    Université de Montréal

    Flow capture models and stochastic simulation for congested networks with heterogeneous users

    “The transition to electric cars is a flagship axis of the climate change plans put forward by governments around the world. Its large-scale feasibility, however, is conditional on greater access to charging facilities. This consideration can be formalized with flow capture models, a family of optimization problems that consists of maximizing the coverage, by a decision-maker’s facilities, of the entities moving through a given system.

    The flow capture problem as studied in the literature, however, rests on simplifying assumptions that prevent its adequate application to large road problems, namely the fluidity of the network and the uniform behavior of users.

    Recent work has formalized the more realistic assumption that travelers’ choices are driven by varied, random preferences, but these models require simulating a large number of scenarios, a computationally demanding task that limits the size of the problems that can be considered.

    The project thus consists of developing a flow capture model that is realistic, efficiently solvable and applicable to the problem of locating electric charging stations on real-size networks.

    Three axes will be studied to achieve this objective: variance reduction techniques allowing the identification of an optimal solution in the presence of heterogeneous users while simulating a minimal number of scenarios; the combination of the main solution approaches explored separately in the flow capture literature; and the application of these methods to the more general setting of congested transportation networks.

    The resulting model will be applied to real data describing the road networks of large North American cities.”

    Xing Han Lu

    Supervised by: Siva Reddy

    McGill University

    Explainable and Faithful Models for Question Answering

    In the past few years, various deep learning architectures have shown a great ability to automatically answer free-form questions. In fact, they are able to match human-level performance when evaluated on popular benchmark datasets. Unfortunately, those datasets take a lot of time and money to collect, since they require a team of annotators to manually write questions and find the correct answers. Furthermore, deep learning models trained on such datasets cannot be improved once they are online. Additionally, the models can find an answer that is likely correct, but they cannot explain why they chose it. To overcome those issues, we propose a framework that lets those models continuously improve their ability to answer questions even after they are online, and learn from user feedback to explain why a certain answer is relevant to a given question. The proposed framework will be useful in emergency situations (such as the COVID-19 pandemic), since we can quickly build a question answering system that is capable of improving itself and learning to explain why its answers are correct, both of which would not be possible using traditional approaches.

    Sarthak Mittal

    Supervised by: Guillaume Lajoie

    Université de Montréal

    Multiple Faces of Modularity

    There has been a lot of research incorporating different flavors of modularity and sparsity into typical Neural Network models to endow them with inductive biases inspired by human cognition. We analyze one such type of model, Recurrent Independent Mechanisms (RIMs), which aims to perform both semantic and episodic factorization. We investigate the key properties utilized by these modules in an effort to understand exactly which properties are essential for their generalization capacity. In particular, we perform ablations relating to the following properties – independence, de-centralized organization, and communication – and try to understand to what extent the different modules in RIMs specialize based on semantic information. We believe that understanding the limitations of these models will pave the way for future research on improving and scaling them up.

    Sacha Morin

    Supervised by: Guy Wolf

    Université de Montréal

    Geometry preserving deep networks

    Artificial intelligence models learn by minimizing an objective function on a given data set. Objective functions are tailored to specific tasks, such as classifying samples, generating new images or reducing the dimensionality of the data. Their design is critical to model performance, especially when considering how well a model generalizes to previously unseen data points that were not used for training. This project concerns the design of objective functions for deep learning algorithms. Specifically, we aim to study geometric objective functions for deep learning, i.e. learning objectives that consider some measure of distance between data points to preserve the intrinsic geometry of the data. Deep learning is known for dramatically changing the structure of the data when learning new representations of it. For example, data points forming a circle may be embedded as a closed curve with many self-intersections. Previous work has shown that encouraging deep neural networks to preserve data geometry can be beneficial for dimensionality reduction and, to some degree, classification. We aim to show similar benefits for new tasks, such as the generation of synthetic data points, and to further explore the mathematical theory that could explain the superior performance of deep neural networks with geometry-preserving properties.
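
    A minimal sketch of one such geometric objective, an illustrative distance-preservation penalty rather than the project’s exact formulation: it penalizes mismatch between pairwise distances in the input batch and in the learned representation, and would be added to a task loss.

        import torch

        def geometry_preserving_penalty(x, z):
            # Compare pairwise distance matrices of inputs and representations.
            dx = torch.cdist(x.flatten(1), x.flatten(1))
            dz = torch.cdist(z.flatten(1), z.flatten(1))
            return ((dx - dz) ** 2).mean()

        encoder = torch.nn.Sequential(torch.nn.Linear(100, 32), torch.nn.ReLU(),
                                      torch.nn.Linear(32, 2))
        x = torch.randn(64, 100)
        z = encoder(x)
        penalty = geometry_preserving_penalty(x, z)  # total loss = task loss + lam * penalty
        penalty.backward()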


    Justine Pepin

    Supervised by: Margarida Carvalho

    Université de Montréal

    Integer programming games: approaches for selecting correlated equilibria

    “To optimize their individual payoff, the players of an integer programming game each try to play the combination of strategies that benefits them most, anticipating the behavior of the others. Consequently, the solution of the game lies at an equilibrium point from which no player would gain by unilaterally deviating from the strategy combination they currently play.

    An equilibrium can be correlated, meaning that a coordinator guides the choice of the players’ strategy combination while ensuring that each player has an interest in following its recommendations. In many games of practical interest, multiple equilibria can exist. It follows that such coordination can be crucial for the players to agree on an efficient equilibrium, for example the one that maximizes social welfare.

    However, how can we make sure that a correlated equilibrium optimizes the chosen objective? For that, we need an algorithm certifying that we have the best equilibrium point. We will propose an algorithmic approach inspired by classical decomposition methodologies in optimization.”
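
    For a finite game, picking the correlated equilibrium that maximizes social welfare can be written as a linear program over the joint distribution p (the textbook formulation, in generic notation; the difficulty the project addresses arises when strategy sets are described by integer programs):

        \max_{p \ge 0} \sum_{s} p(s) \sum_{i} u_i(s) \quad \text{s.t.} \quad \sum_{s} p(s) = 1, \qquad \sum_{s_{-i}} p(s_i, s_{-i}) \left[ u_i(s_i, s_{-i}) - u_i(s'_i, s_{-i}) \right] \ge 0 \quad \forall\, i,\ s_i,\ s'_i,

    where the last constraints say that no player i who is recommended s_i would prefer to deviate to any s'_i.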

    Emmanuelle Richer

    Supervised by: Farida Cheriet

    Polytechnique Montréal

    Deep learning segmentation of volumetric retinal images

    “Eye diseases such as age-related macular degeneration, glaucoma and diabetic retinopathy are the leading cause of blindness in the working-age population. With the aging of the population and the growing prevalence of diabetes, it is projected that by 2025, 333 million diabetic patients worldwide will need an ophthalmological examination every year.

    Deep learning has already proven itself in the automatic diagnosis of pathologies from medical images. Convolutional neural network architectures have already been proposed for, among other things, segmenting retinal vessels and the optic disc, as well as detecting these three diseases, from databases of fundus images. Although the fundus image is the modality most used in the clinic, volumetric images of the retina acquired with optical coherence tomography (OCT) are often required to confirm a diagnosis. One drawback of using neural networks for disease prediction and diagnosis is the interpretability of the results by clinicians. Prior segmentation of biomarkers is therefore an asset when training such networks.

    In this project, we will develop a deep learning model to automatically segment the biomarkers associated with the various eye diseases from OCT images. This model will be constrained by a model previously trained on fundus images, so that the segmentation of OCT images is guided by the deep-learning-based segmentation of lesions in fundus images. The two segmentation maps obtained from the two modalities will then be fed to a classifier to establish the final diagnosis.

    The proposed project exploits recent advances in deep learning for automatic analysis of volumetric retinal images. Automatic detection of these three diseases will make it possible to prioritize high-risk patients and avoid progression to an irreversible stage of pathology. Timely care of these patients will benefit our health system by increasing the effectiveness of preventive protocols and reducing treatment costs.”

    Maria Sadikov

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    Understanding the universe’s largest black holes using innovative machine learning techniques

    “Advances in machine learning in recent years are revolutionizing research in a multitude of fields, whether in physics, biology or economics. These algorithms make it possible to analyze and interpret very large datasets faster and more efficiently than before, giving us access to a new understanding of the world. Our objective is to use innovative machine learning methods to push further our understanding of the most extreme objects in the universe: the largest supermassive black holes, located at the center of galaxy clusters.

    These galaxy clusters contain an intracluster gas that emits enormous amounts of energy in the form of X-rays. One would expect this energy loss to result in rapid cooling of the cluster, but the process is counterbalanced by the relativistic jets ejected by the supermassive black hole, which perturb the intracluster medium. To better characterize these supermassive black holes, a good understanding of the properties of the intracluster medium and of the evolution of these systems is imperative. These properties have already been studied for nearby galaxy clusters with traditional methods. However, the development of extremely powerful new observational instruments is giving us, for the first time, large samples of distant clusters. The size of these samples, together with the large number of parameters to consider, makes advanced machine learning methods necessary. The objective of the project is therefore to develop machine learning algorithms for studying these galaxy clusters. The models will be trained and validated on sets of images obtained from cosmological simulations, before being applied to a sample of galaxy clusters already analyzed with traditional methods. The goal is to reproduce and improve these results using image classification algorithms that will spot additional structures. We thus seek to uncover the additional information machine learning methods can provide, in particular indicators of the supermassive black hole’s impact on the surrounding gas.”

    Jérôme St-Jean

    Supervised by: Dang Khoa Nguyen

    Centre hospitalier de l’Université de Montréal (CHUM)

    Hexoskin smart garment and artificial intelligence: detecting epileptic seizures

    The goal of this research project is to develop epileptic seizure detection methods based on multimodal signals recorded with smart shirts provided by our industrial partner Hexoskin. The specific objectives are: 1) characterize the physiological signals by comparing ictal periods (during a seizure) and interictal periods (between seizures); 2) develop an algorithm to detect ictal periods using artificial intelligence (AI) techniques; 3) develop an algorithm for real-time detection of an epileptic seizure.
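
    A minimal sketch of the shape of objective 2, with invented window features and a generic classifier (the actual signals, features and model remain to be determined by the project): label fixed-length windows of multimodal signals as ictal or interictal.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Hypothetical 30-second windows: mean heart rate, breathing rate and
        # accelerometer energy; label 1 = ictal, 0 = interictal.
        X = rng.normal(size=(1000, 3))
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        clf = GradientBoostingClassifier().fit(X_tr, y_tr)
        print("held-out accuracy:", round(clf.score(X_te, y_te), 2))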

    Yuanyuan Tao

    Supervised by: Derek Nowrouzezahrai

    McGill University

    Physics-aware deep learning for cellular dynamics

    “A physics-aware deep learning system is developed to almost instantaneously infer cellular dynamics with minimal experimental procedures.

    People pay a heavy price for diseases that could otherwise be diagnosed and treated much more easily and cheaply by employing cell mechanics. Cellular forces dictate cellular processes and the onset and progression of diseases such as cancer and asthma. The use of cellular dynamics as biomarkers and modulators of cell behaviour indicates the potential of cell mechanics in diagnosis, treatment, drug development, and the study of disease mechanisms. However, the application of cell mechanics is hindered by the costly, time-consuming, and complex procedures needed to measure cellular forces. Current methods examining cellular forces measure the deformation that results directly from the forces and then calculate the forces back from the deformation. However, those methods depend entirely on in-vivo experiments, so they cannot go beyond the parameter space covered by the experiments, and they often require complex and tedious procedures. Sometimes, the accuracy is limited by the need to solve inverse problems.

    This project develops a physics-aware deep learning system to apply cell mechanics to diagnosis, treatment, and drug development with no additional procedures, time, or cost. To integrate digital intelligence into biology and medicine, this project searches for the best way to model the physical interaction between the cell and its environment with deep learning. To overcome the limits of current methods and to revolutionize the methodology in the field of cell mechanics, a comprehensive and generalizable physics-aware deep learning system with adjustable parameters is developed to accurately and almost instantaneously infer and simulate the dynamics in cells and tissues under different conditions from merely the time series of cell morphology. ConvLSTMs and transformers can be used to capture the features and spatiotemporal relations in the morphology. Physical equations (for example, from solid mechanics) and biological parameters are embedded into neural networks optimized against the physical laws and biological models. Our approach is also much more sustainable, as minimal lab waste is produced. This system provides revolutionary insights and utility in drug development, diagnosis, treatment, and research. Moreover, the applications of this project extend beyond cellular dynamics, since its foundation is the modelling of physical interactions and problems with deep learning.
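
    To make the idea of embedding physical equations into a network concrete, here is a minimal, generic physics-informed-loss sketch in PyTorch; it is not the project's model, and the governing equation (du/dx = cos x with u(0) = 0) is an assumption chosen purely for illustration:

    ```python
    import math
    import torch

    # Assumed toy governing law: du/dx = cos(x), u(0) = 0, so the true u = sin(x).
    net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        x = torch.rand(128, 1, requires_grad=True) * 2 * math.pi
        u = net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]  # du/dx via autograd
        physics = ((du - torch.cos(x)) ** 2).mean()    # residual of the embedded law
        boundary = net(torch.zeros(1, 1)).pow(2).mean()              # enforce u(0) = 0
        loss = physics + boundary      # a data term on observed morphology would be added here
        opt.zero_grad(); loss.backward(); opt.step()

    print(net(torch.tensor([[math.pi / 2]])).item())  # should approach sin(pi/2) = 1
    ```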

    Doctoral excellence scholarships

    Nicolas Cabrera Malik

    Supervised by: Mendoza-Gimenez Jorge

    HEC Montréal

    Multi-modal vehicle routing problems


    Veronica Chelu

    Supervised by: Doina Precup

    McGill University

    Retrospective Causal Models for Credit Assignment

    Learning fast from a few samples of interaction is a fundamental skill for AI systems capable of making robust, good decisions. This ability is a core component of human intelligence and of autonomous agents adaptable to change. In many societal applications in areas such as healthcare and education, AI can help guide interventions by quickly searching through the space of effective strategies for the best decision policy to support human health, learning or performance. To this effect, the paradigm of reinforcement learning proposes a general solution framework by which agents learn through experience to make good choices. These methods are, however, limited in how quickly they can learn, typically requiring millions of trials of experience to learn to make adequate decisions. One core aspect of cognition is using knowledge of how the world works to explain observations, to imagine what could have happened that did not, or what could be true that is not. Humans are remarkably apt at inferring (or fabricating) causes for events, and at retrospectively updating their beliefs about the world in hindsight of experience. These mechanisms stand at the basis of counterfactual reasoning and are crucial components in tackling real-world problems with long-delayed feedback. Many questions in everyday life and scientific inquiry are causal in nature, particularly in areas such as healthcare — e.g. “What if I had administered the patient a different drug?”. In this project, we address the problem of efficient learning by interaction with an environment through trial and error. To tackle this problem we propose learning algorithms that build causal models to represent the fundamental workings of the world and apply those models to retrospectively infer the potential causes of events, so that agents can efficiently re-evaluate their beliefs.

    Pierluca D’Oro

    Supervised by: Pierre-Luc Bacon

    Université de Montréal

    Sample-Efficient Reinforcement Learning via Metacognition

    Global transformations such as new emerging epidemics and climate change are generating unprecedented challenges for humanity, making a clear call for effective artificial intelligence and control methods. Recent progress in reinforcement learning, the study of how an agent can learn to maximize a utility function while interacting with a system, makes it particularly promising, but with some caveats: while most previous successes needed the collection of considerable amounts of experience to solve a task, the nature of these new challenges requires very sample-efficient algorithms, able to rapidly learn from few interactions with the world.

    One of the most effective tools employed by humans during learning is metacognition, or the ability to think about thinking. The goal of the project is to create a new generation of reinforcement learning algorithms that, by injecting into the agent the ability to reason in a deeper way about its own training process, are able to increase their efficiency. This can happen by arming the agent both with curiosity towards the experiences that are maximally useful for improving its performance and with an understanding of how to employ the collected experience in a way that is efficient according to its learning aptitudes.

    Dirk Douwes-Schultz

    Supervised by: Alexandra Schmidt

    McGill University

    Coupled Markov Switching Count Models for Spatio-temporal Infectious Disease Counts

    Accurate epidemic forecasting of infectious diseases remains a major challenge. A state-of-the-art class of statistical models known as “Markov switching models” has shown promise in this area. This approach breaks epidemic forecasting into two components. Firstly, there is a component to predict epidemic occurrence, i.e. when an epidemic will begin in an area. Since an infectious disease will often sit at low levels of incidence, or be completely absent, for extended periods, being able to predict when an epidemic will begin is the first vital step in epidemic forecasting. The second component of the Markov switching model forecasts the resulting cases in the epidemic period. This component can predict when the epidemic will peak, how many cases are expected, how long it will last and other important metrics for the epidemic period. To summarize, Markov switching models predict when an epidemic will occur and then forecast the resulting trajectory of the cases.

    We offer some novel extensions to these models in order to improve forecasting performance. Firstly, we will allow seasonal, meteorological, socioeconomic and other factors to impact our predictions of epidemic occurrence. In contrast, previous approaches assumed a constant probability of epidemic occurrence. This assumption is not very appropriate as epidemics of an infectious disease are known to be highly seasonal and influenced by a wide range of factors. For example, an epidemic of dengue fever will almost never occur in the winter when temperature is too low for prolonged mosquito survival.

    A model not accounting for temperature in this case would give poor predictions of epidemic occurrence. Incorporating space-time varying predictions of epidemic occurrence into our modeling framework should improve forecasting performance considerably. Secondly, we will incorporate realistic human movement into our forecasts, even using cell phone data where available. Essentially, our framework allows epidemics in an area to spread into other areas connected by high movement flows. Human movement to and from epidemic areas has been shown to be a major risk factor for the development of epidemics and we view this as an essential component of our modeling strategy. Our extensions create statistical challenges compared to the more traditional Markov switching models that have been used by others. To overcome this, we will develop new, fast and efficient algorithms for fitting the model. All model-fitting software will be made publicly available and we plan on writing subsequent, less technical papers to introduce policy makers to our methods.
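
    Schematically, and with all notation assumed for illustration rather than taken from the project, the model class described above couples a hidden epidemic indicator with a count distribution whose parameters depend on covariates and on neighbouring areas:

    ```latex
    % Illustrative two-state Markov switching count model (notation assumed).
    % S_{it} = 1 if area i is in an epidemic state at time t, 0 otherwise.
    \begin{align*}
      \operatorname{logit} \Pr(S_{it} = 1 \mid S_{i,t-1} = 0)
          &= \gamma_0 + \gamma^{\top} z_{it}, \\
      y_{it} \mid S_{it} = 0 &\sim \text{a low- or zero-count (absence) distribution}, \\
      y_{it} \mid S_{it} = 1 &\sim \operatorname{NegBin}(\mu_{it}, r), \qquad
          \log \mu_{it} = \beta^{\top} x_{it}
            + \sum_{j \neq i} w_{ij} \log(1 + y_{j,t-1}).
    \end{align*}
    ```

    Here z_{it} would carry the seasonal and meteorological covariates driving epidemic occurrence (so the transition probability varies in space and time instead of being constant), and the weights w_{ij} would encode human movement flows letting an epidemic in one area raise expected counts in connected areas.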

    Andreas Enzenhoefer

    Supervised by: James Richard Forbes

    McGill University

    Robot learning using accurate dynamics simulation with frictional contact

    The objective of the proposed research is to develop an accurate dynamics simulation framework that will be used to generate control policies for autonomous robots. Discrepancies between the virtual environment and the real world will be reduced through accurate simulations and domain randomization of material parameters and initial conditions. An analytically differentiable contact model will be developed to allow for sample-efficient model-based reinforcement learning. This contact model will be extended to include anisotropic and asymmetric friction, critical for the accurate and realistic simulation of many surfaces and material types. Novel generative models for the synthesis of simulation scenarios for robot training and learning will be created. This also includes a simulation plausibility verification to prevent unrealistic or unphysical effects in the training data. The proposed research will significantly decrease the computational cost of controller optimization and improve the transferability of learned control policies to the real robot (“simulation to reality”) through more accurate and validated simulations. This will contribute to accelerating the technological readiness of autonomous robots such as self-driving vehicles.

    El Mehdi ER RAQABI

    Supervised by: Issmaïl EL HALLAOUI

    Polytechnique Montréal

    Decomposition Learning: An Intelligent Framework For Large Scale Optimization Problems

    With the growing human population, demand is increasing for several needs such as food, transport, and healthcare. With such a trend, many public, private, for-profit, and non-profit organizations are investing energy and time to serve our population. Several of them face large-scale problems in operations scheduling, budget allocation, resource assignment, etc. Given their size, it is not possible to solve these problems manually. Furthermore, sometimes, even using computers, no feasible solutions are found quickly. By developing algorithms that decompose the large-scale problem into smaller problems and solve them repeatedly, there is an opportunity to tackle it efficiently. Building on this observation, the proposed research project aims to develop more efficient mathematical optimization algorithms that smartly decompose the large-scale problem after automatically selecting a suitable decomposition.

    For this purpose, in a world-class lab where researchers and entrepreneurs gather, I plan to bring the fields of Mathematical Optimization (MO) and Artificial Intelligence (AI) together to develop systems able to solve large-scale problems. The AI component will analyze the data of the problem as well as the solutions of similar instances before providing useful information that speeds up the decision algorithms. This research project will foster Canada’s research and industrial clusters. Furthermore, I believe in the potential of commercializing the proposed solution worldwide through a spin-off that could create dozens of high-quality jobs in Canada.

    The framework can tackle various large-scale problems in industrial production, supply chain management, airline planning, public transportation, urban planning, retail, agricultural planning, and healthcare management. By pushing boundaries in MO and AI, I seek to reinforce Canada’s leadership in these two fields while contributing to human wellbeing anytime and anywhere on earth.

    Simon Faghel-Soubeyrand

    Supervised by: Frédéric Gosselin

    Université de Montréal

    Decoding variations in perceptual ability in the population using brain imaging and machine learning

    The expertise with which humans extract information about the identity, emotional state and gender of their peers’ faces is crucial for anyone living in society. The typical human brain performs these operations seemingly effortlessly, on the scale of a tenth of a second. However, face recognition ability turns out not to be as homogeneous as once believed: individual performance varies considerably across the population, with some individuals, known as “developmental prosopagnosics”, having no brain lesion yet being unable to recognize their colleagues or loved ones. Others, known as “super-recognisers”, can on the contrary remember a face seen only once, years earlier. Surprisingly, the brain mechanisms underlying these variations in ability have been under-explored and remain largely unknown. Understanding the mechanisms that cause these variations is central to implementing training programs that can improve the abilities of individuals with perceptual and social disorders (including those on the autism and schizophrenia spectra). The objective of this doctoral project is to use machine learning and brain imaging to model the brains of individuals who are extraordinarily skilled, average, or impaired at face recognition. We will first reveal how (e.g. the geometry of the representations, the computations performed at different moments) an optimal perceptual brain should behave to recognize the faces of the people around it. We will then develop models that mimic the behavior of the optimal and of the impaired visual systems for face recognition. These models will inform us about precise mechanisms to act on in order to reduce the impact of the perceptual disorders of prosopagnosic individuals (e.g. reinforcing the brain activity of a specific region to better recognize identity). Ultimately, we wish to create training models specific to different perceptual disorders, such as those on the schizophrenia or autism spectra. This formal characterization of the brain code behind exceptional visual ability could also inspire new deep learning models, which currently perform well at object/face recognition but are not robust to minimal visual changes in images.

    Jose Gallego-Posada

    Supervised by: Simon Lacoste-Julien

    Université de Montréal

    Towards a Geometric Theory of Information

    Information theory is a highly developed and active research area and has been of paramount importance in the development of modern machine learning techniques. However, this field was originally developed in the framework of random variables taking values on mere discrete sets of symbols. This austerity results in a blindness to additional structure amongst the symbols, which limits the power and applicability of the theory. My long-term vision is to build a generalization of the theory of information developed by Shannon in a way that directly incorporates the geometric structure in the domains of the random variables.

    Tools from information theory have been used in the context of representation learning to understand the “surprising” generalization properties of deep learning systems. Expanding our understanding of why deep neural networks perform well on unseen examples, and of the potential role that their learned representations play in this process, is a key step towards the deployment of deep learning-based systems in applications for which performance guarantees are critical. However, one of the challenges faced by these approaches arises from the invariance of the mutual information between two random variables with respect to smooth invertible transformations of their sample spaces.

    My proposal aims at providing machine learning researchers with theoretical tools to tackle these challenges. In previous work, we have imported a notion of similarity-sensitive entropy originally developed in theoretical ecology to the machine learning community. Based on this definition, we propose geometry-aware counterparts for several concepts and results in standard information theory, as well as a novel notion of divergence which incorporates the geometry of the space when comparing probability distributions while avoiding the computational challenges of optimal transport distances.

    In the future, my research will focus on the theoretical and practical implications of these ideas: 1) can we obtain an axiomatic characterization of geometry-sensitive entropy? 2) are geometric mutual information objectives better behaved for representation learning? 3) what are the connections between our proposed divergence and rate-distortion theory, in particular regarding deep learning-based compression techniques? 4) can we improve the entropic regularization used in reinforcement learning to encourage exploration by considering similarities on the action space?

    Dóra Jámbor

    Supervised by: Siva Reddy

    McGill University

    Zero-shot Natural Language to SQL translation

    Much of the world’s knowledge is stored in relational databases. To access this knowledge, however, users have to express their questions in Structured Query Language, i.e., SQL programs. This causes a severe bottleneck in efficiency across many organizations, as the vast majority of key decision-makers are not fluent in SQL. Natural language to SQL models (Text2SQL) have emerged to automatically translate questions expressed in plain English to SQL programs that can then be readily executed against a given database. Although there has been great progress in recent years with Transformer-based Text2SQL models, it is still challenging to generalize in the zero-shot setting where models have to generate SQL programs for previously unseen databases.

    In this work, we propose a novel procedure to fuse structural and semantic signals to find better alignment between a given question and database pair. Specifically, our hypothesis is that by explicitly modeling how information flows through a database graph, we can better capture the semantics of the database entities. We believe that these richer semantics can consequently improve how we map more complex lexical and structural references in a given question to databases. We believe that our structure-aware alignment can help models generate semantically valid SQL programs not only for known databases but also for previously unseen databases, thereby helping zero-shot generalization.

    Sékou-Oumar Kaba

    Supervised by: Siamak Ravanbakhsh

    McGill University

    Equivariant Deep Models for Materials Property Prediction

    Materials discovery is a key driver of technological innovation, especially now that environmental and sustainability constraints are core priorities. To answer this challenge, materials informatics is emerging as a new field making use of the increasing availability of experimental and computational data on materials. A fundamental problem in this area is to create algorithms that can be trained on a large number of already known materials to predict the properties of previously unseen materials. A trained algorithm would have the advantage of making estimates of desired properties much faster than the currently available methods based on physical simulation.

    The goal of the proposed project is to design an algorithm able to perform such predictions using deep learning. Although deep learning methods have demonstrated their power in multiple fields, they have seen less use in materials modeling. One crucial reason is that, seen at the atomic level, a solid-state material is an extremely large structure that is difficult to input to a deep learning model. To tackle this problem, we will leverage the symmetry properties of crystals. These materials are composed of atoms arranged in ordered structures, a feature we can use to build efficient models. We will assess the performance of our architecture on the Materials Project dataset, a collection of more than one hundred thousand materials for which properties have been computed with quantum mechanical methods. If the model achieves satisfactory results, it will be made available for practical applications.

    Sanaz Kaviani

    Supervised by: Jean-François Carrier

    Université de Montréal

    Enhancement of quantitative estimation of metabolism and vascularization with positron emission tomography (PET) and Ultrafast Ultrasound Localization Microscopy (UULM) Using Deep Learning

    1 Problem and context
    Structural and functional imaging of tissue vasculature has been studied using various imaging modalities such as positron emission tomography (PET), magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound. Among the molecular imaging modalities, no single one is perfect and sufficient to obtain all the necessary information for every question of interest. A recent technique inspired by super-resolved fluorescence microscopy, called ultrasound localization microscopy (ULM), has improved the spatial resolution of vascular images from hundreds of microns to a few microns in vivo by detecting, at thousands of frames per second, millions of individual microbubbles injected into the bloodstream. Our group (led by my co-supervisor J. Provost, IVADO member) has recently shown the feasibility of extending ULM to dynamic acquisitions in three dimensions using novel imaging sequences and reconstruction algorithms.
    With the introduction of deep learning algorithms, research on multimodality medical imaging has increased rapidly, in tasks such as image segmentation, denoising, and image reconstruction. In this work, we propose to combine ultra-resolution ULM images with dynamic PET imaging to estimate parametric images of dynamic PET using deep learning, based on compartment pharmacokinetic modeling. Moreover, I aim to enhance the quality of dynamic PET images and the extraction of perfusion and vasculature parameters, through denoising and precise segmentation, to obtain a precise model of tissue behavior.

    2 Methodology
    In this project, we aim to combine the microvascular information from 3D ULM with the molecular information of dynamic PET imaging in order to enhance the resolution and quantification of PET dynamic acquisitions. The first step is an iterative parametric image reconstruction using a deep neural network. I will not use prior training pairs, but only the corresponding ULM image. I will use the ULM image of the same subject as an anatomical prior (blood content in every voxel) to guide the parametric image reconstruction through the neural network. The neural network will be inserted into the iterative parametric image reconstruction framework and the pharmacokinetic modeling to achieve more precise kinetic parameters, rather than being used as a post-processing tool.
    The second step of the project will be the denoising of dynamic PET images, which suffer from high noise levels. Generally, deep learning with convolutional neural networks (CNN) requires the preparation of large training image datasets. This presents a challenge in a clinical setting because it would be very difficult to prepare large, high-quality datasets. Recently, the deep image prior (DIP) approach has suggested that CNN structures have an intrinsic ability to solve inverse problems such as denoising without any pre-training. The DIP approach iteratively fits a network to a pair consisting of a random-noise input and the corrupted image, and a denoised image is obtained from the network output after a moderate number of iterations. The third step will be the segmentation of PET images, namely organ detection based on unsupervised learning. The main idea here is that a better image representation yields better clustering, and better clustering results in turn help obtain a better image representation.
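
    A minimal sketch of the deep image prior idea described above, in 2-D and with generic shapes (the project's networks are 3-D U-nets; everything here is illustrative): a randomly initialized CNN is fitted to reproduce the corrupted image from a fixed random input, and stopping after a moderate number of iterations yields the denoised estimate:

    ```python
    import torch

    img_noisy = torch.rand(1, 1, 64, 64)   # stand-in for one noisy PET frame
    z = torch.randn(1, 16, 64, 64)         # fixed random input, never changed

    net = torch.nn.Sequential(
        torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(32, 32, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(32, 1, 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(400):                # "moderate iterations" acts as early stopping
        loss = ((net(z) - img_noisy) ** 2).mean()   # fit only the corrupted image
        opt.zero_grad(); loss.backward(); opt.step()

    denoised = net(z).detach()             # the network output is the denoised image
    ```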

    3 Results
    For each step, a modified network structure is developed based on the 3D U-net, consisting of an “encoder” part and a “decoder” part. The encoder consists of repeated applications of 3D convolution layers, each followed by batch normalization (BN) and a leaky rectified linear unit (LReLU), with convolutional layers for downsampling. The decoder consists of deconvolution layers followed by BN and LReLU, transposed convolutional layers for upsampling, and skip connections with the corresponding feature maps from the encoder. In each step, the network structure and the loss function are adapted to obtain the best performance for that step.
    We will extract perfusion and vasculature parameters to obtain a precise model of tissue behavior. The extracted parametric maps non-invasively depict valuable information about tissue microvasculature and are used as input in the PET compartment pharmacokinetic model. The technique will be applied to pre-clinical dynamic data from small-animal microPET and to numerical phantoms for dynamic quantitative analysis, validation and sensitivity studies. Applications for human PET in nuclear medicine – including tumor microenvironment parametrization – will be developed.

    Jordan Lei

    Supervised by: Eilif Muller

    Unraveling the Mystery of Neocortical Learning with Deep Neural Networks

    The human brain manifests a remarkable capacity for learning across a broad range of modalities and contexts. This feat is made possible in large part by the neocortex, a brain structure composed of a repeating layered neuronal circuit motif shared across modalities, and even across mammalian species. The neocortex is capable of learning in ways that modern deep learning systems struggle with. For example, the neocortex can learn from only a few training examples, generalize to different task conditions, and learn representations and associations without explicit instruction, all of which are open problems in the field of machine learning and artificial intelligence. Deep neural networks, originally inspired by the architecture of the neocortex, provide a powerful framework for modeling neocortical learning algorithms. In this project, we will build on differential target propagation (DTP), a promising family of neocortical learning models, to develop refined learning paradigms that account for recent neuroscientific insights into neocortical anatomy, synaptic plasticity and perceptual phenomena. Our work aims to unravel the mystery of neocortical learning, and will have direct implications for the fields of neuroscience, artificial intelligence, data science, and mental health.

    Bruna Pascual Dias

    Supervised by: Jean-François Arguin

    Université de Montréal

    Electron identification using generative adversarial networks (GANs) in the ATLAS experiment

    Located on the ring of the Large Hadron Collider (LHC) at CERN, the ATLAS experiment is designed to record the signals produced by a billion proton collisions per second. The algorithms used to identify the particles produced by these collisions are also involved in the detector simulations, which are responsible for estimating its efficiency and for comparing our current knowledge of the laws of physics with experimental measurements. However, imperfections in this simulation introduce large systematic uncertainties into the experimental measurements and therefore limit the quality of the results obtained by the detector. In this context, this project aims to develop and apply generative adversarial networks (GANs) to build an electron identification algorithm that is insensitive to the parameter perturbations introduced by these imperfections. This will allow us to minimize the systematic uncertainties arising from the difference in performance of the electron identification algorithm between simulations and experimental measurements, which promises to have a positive impact on the future of the ATLAS experiment.

    Kellin Pelrine

    Supervised by: Reihaneh Rabbany

    McGill University

    Broad Data Systems for Society: Leveraging Heterogeneous Data for Social Good

    Today’s world is increasingly technological and interconnected, and we are in the era of big data. But although big data has enabled many breakthroughs, there are also many challenges we face as a society that have proven too complex to be solved with large quantities of data alone. For example, to counter misinformation or reduce political polarization, a dataset of millions of tweets is not enough – we need to leverage the full spectrum of our social interactions and more. We need systems that use not just big data but broad data.

    This project aims to develop such systems and apply them for social good. In work so far, with J. Danovitch and R. Rabbany, I laid a thorough empirical foundation by showing that current benchmarking of misinformation detection algorithms is flawed, and leads to models that fail to extract the potential of the broad data they try to use. In current work, I am analyzing political polarization with the full breadth of social media interactions, to understand causes, changes over time, and how to unite people for common good. I am also working to develop large scale models that combine text, images, and social interactions to power a wide range of social network research and applications. In future work, I will write a dissertation on broad data in general, aiming to go beyond existing tools and individual applications and show how to use it to solve the challenges of tomorrow.

    Brice Rauby

    Supervised by: Jean Provost

    Polytechnique Montréal

    Quantitative myocardial angiography by ultrasound localization

    In 2015, 110 million people suffered from coronary artery disease (CAD) and nearly 9 million died from it, making it the leading cause of death worldwide. Cardiac imaging is often the first step in diagnosing and planning the treatment of this group of diseases. However, conventional angiography does not provide a direct, non-invasive and widely available measurement of blood flow. Meanwhile, ultrasound localization microscopy (ULM) methods have been developed that enable high-resolution, non-invasive mapping of blood flow in static organs. In this context, the application of machine learning methods has shown significant improvements in temporal and spatial resolution. The objective of this project is to transfer ULM methods from static organs to the heart muscle, using motion-correction methods that have shown promising results, and by adapting machine learning methods to benefit from improved resolution.

    Sima Rishmawi

    Supervised by: Frederick Gosselin

    Polytechnique Montréal

    Digital Twin of a Rotating Machine: Model Order Reduction and Artificial Intelligence for Hydroelectricity Production

    NASA defines a Digital Twin as “an integrated multiphysics, multiscale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin.”

    This project aims at implementing a digital twin of a Vertical Axis Rotating Machine (VARM) exhibiting physical characteristics similar to those of a hydroelectric machine. Some of these characteristics are reflected in the dynamic behavior of both machines, such as vibrations. Also, similar parameters can be measured and monitored, and similar loading conditions can be applied to both machines. This means that studying the dynamic behavior of a VARM can simplify the process of understanding the dynamic behavior of an industrial hydroelectric unit.

    To create the digital twin, two numerical methods will be used. The first is model order reduction using Proper Generalized Decomposition (PGD), which can be thought of as a method that transforms a complicated multi-dimensional system into a group of simple one-dimensional systems that are easier to work with. The second is the use of Physics-Informed Neural Networks (PINNs) to predict, through Artificial Intelligence (AI), the parameters needed to create the system model.

    Camille Rochefort-Boulanger

    Supervised by: Julie Hussin

    Deep learning in genomics for the prediction of complex phenotypes

    Several human traits, such as height, as well as various diseases such as schizophrenia and cardiovascular disease, are described as complex: their manifestation is modulated by the interaction of environmental factors and numerous genetic variants. Recent advances in human genomics have raised the hope of predicting these complex traits from the vast amounts of data collected. Genetic risk scores have been developed to predict individuals’ predisposition to complex traits from their genetic variants. However, these scores have limited predictive power, especially when computed for individuals from different populations, because they are biased by the genetic ancestry of the individuals used to build them in the first place. Thanks to their ability to account for interactions between multiple factors, deep learning methods, a branch of artificial intelligence, are a promising way to make progress in this field. My project consists of developing a new deep learning method that exploits genomic data to predict complex traits. However, the sheer number of genetic variants identified in humans, together with the variable quality of genomic datasets, poses several challenges for current deep learning methods. As part of this project, I will develop approaches adapted to the particularities of genomic data and robust to individuals’ ancestry, in order to reduce the bias that ethnic diversity introduces into complex trait predictions. This project will have an important impact in the field of deep learning, as it aims to develop algorithms adapted to genomic data, as well as in genomics, since it aims to improve the prediction of complex traits while taking genetic ancestry diversity into account.

    Julien Roy

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Weakly Supervised Behavioral Modeling for Sequential Decision Making

    Reinforcement Learning (RL) is a popular approach to sequential decision-making problems, and is a core component of many recent successes in Artificial Intelligence, ranging from predicting the 3D structure of proteins to mastering the game of Go. However, most modern approaches focus on learning a single optimal policy for a given task. This property makes applying RL in industry cumbersome, as any desired change in the agent’s behavior requires tuning the reward function and training the policy all over again. In this project, we present and study a novel framework to train multi-modal, conditional policies that capture a variety of near-optimal behaviors. Crucially, our method would give practitioners the capability to tune and adjust the policy at test time, without re-training the agent. We hope that this contribution will take us closer to the deployment of more practical RL algorithms that tackle the real-life problems of today and better support the fast-changing requirements of industry.

    Ludovic Salomon

    Supervised by: Sébastien Le Digabel

    Polytechnique Montréal

    Multiobjective blackbox optimization under general constraints

    With advances in computing and the growing complexity of the models used in industry, many engineering problems can no longer be approached with classical optimization methods. The functions defining these types of problems are blackboxes, that is, numerical simulations or computer codes with user-adjustable inputs that return one or more outputs. Evaluating these functions for given input parameters can be costly, or even approximate, and derivatives are not available. Classical optimization techniques (for example, gradient-based methods) therefore cannot be applied.

    Many software packages have been developed to solve this type of problem. Among them, the NOMAD software implements a state-of-the-art algorithm, MADS. This method is efficient and has good convergence properties. The general objective of this project is to develop new multiobjective optimization algorithms, in which several conflicting objectives must be taken into account when modeling a given problem, based on the MADS method, with similar convergence properties and shorter computation time on a given problem than existing state-of-the-art methods.

    The methods developed will be integrated into the open-source NOMAD software and tested on concrete simulation models (engine sizing problems). This research has important applications in engineering; in chemistry (network design or the optimization of chemical reactions); and in machine learning, with classification and data clustering problems.
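
    For intuition only — this is neither the MADS algorithm nor NOMAD's API, just the simplest member of the same direct-search family, with an assumed toy objective — a blackbox can be optimized without derivatives by polling mesh points around the incumbent and refining the mesh on failure:

    ```python
    import numpy as np

    def blackbox(x):
        """Stand-in for an expensive simulation (assumed, for illustration)."""
        return (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 0.5) ** 2

    def pattern_search(f, x, step=1.0, tol=1e-6, max_eval=10_000):
        fx, evals = f(x), 1
        while step > tol and evals < max_eval:
            improved = False
            # Poll the 2n coordinate directions on the current mesh.
            for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
                y = x + step * d
                fy = f(y); evals += 1
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if not improved:
                step *= 0.5          # refine the mesh after a failed poll
        return x, fx

    print(pattern_search(blackbox, np.array([5.0, 5.0])))
    ```

    MADS refines this basic scheme with a rigorous mesh-and-poll structure, which is what gives it the convergence guarantees mentioned above.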

    Harsh Satija

    Supervised by: Joelle Pineau

    McGill University

    Batch Policy Improvement with Multiple Objectives and Safety Constraints

    My current goal is to design RL algorithms that can improve the agent’s performance from a batch of data collected under some behaviour policy, while still ensuring safety guarantees. I’m interested in the setting where there are multiple feedback signals (reward functions), and the algorithm’s user (an ML practitioner) can control the trade-offs between them.
    This setting is also reflective of real-world scenarios where there is an abundance of recorded data, collected via experts or suboptimal agents, but training the agent directly through interaction and experimentation with the real environment is expensive and risky, such as in healthcare, hiring, or finance.
    The goal is to build algorithms that provide practical high-probability guarantees about the undesirable behaviour that might be caused by deploying the policy returned by the learning algorithm in the real world.

    Beheshteh Tolouei Rakhshan

    Supervised by: Guillaume Rabusseau

    Université de Montréal

    Randomized numerical linear algebra approaches with tensor methods

    My research is focused on algorithms and complexity results for problems arising in machine learning and data science. This broadly includes contributions in tensor factorization, randomized matrix computation methods, and theoretical computer science. In past and ongoing research, I have been specifically interested in developing algorithms with provable guarantees that offer accurate and fast alternatives to computationally expensive methods by leveraging dimensionality reduction and tensor decomposition techniques. This is broadly applicable to machine learning and artificial intelligence problems in areas ranging from human diseases to climate change to recent technological developments. My goal as a researcher is to contribute both to the theoretical understanding of computationally challenging problems and to the design of efficient techniques for large-scale, high-dimensional data. In doing so I plan to help bridge the gap between the best theoretical results and the most practical algorithms for complex problems in machine learning. The main aim of my PhD project is to explore and apply both randomized algorithms and tensor decomposition techniques to large-scale data sets. Significant expected outcomes include computationally fast methods for big-data problems that currently cannot be solved in real time.
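
    As one concrete, standard instance of the randomized numerical linear algebra toolbox (not the project's specific contribution), the randomized SVD approximates the dominant singular subspace of a large matrix from a small Gaussian sketch:

    ```python
    import numpy as np

    def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
        """Approximate rank-k SVD from a Gaussian sketch (Halko et al., 2011)."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + oversample))  # random test matrix
        Y = A @ Omega
        for _ in range(n_iter):          # power iterations sharpen the range basis
            Y = A @ (A.T @ Y)
        Q, _ = np.linalg.qr(Y)           # orthonormal basis for the (approximate) range
        Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ Ub)[:, :k], s[:k], Vt[:k]

    A = np.random.default_rng(1).standard_normal((2000, 500))
    U, s, Vt = randomized_svd(A, k=20)
    err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
    print("relative error of the rank-20 approximation:", err)
    ```

    With k plus a small oversampling parameter much smaller than the matrix dimensions, the cost is dominated by a few passes over A, which is what makes this family of methods attractive at scale.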

    Baptiste Toussaint

    Supervised by: Maxime Raison

    Polytechnique Montréal

    Development of an autonomous robotic upper limb for collaboration in rehabilitation

    Robotics is a fast-growing global market worth about 40 million dollars in 2020. Coupled with artificial intelligence and the Internet of Things, robotization will affect every sector of the economy. The Technopole in pediatric rehabilitation is one of the leaders in the field, with a new building (2019) whose therapeutic infrastructure is mainly composed of robotic systems and 3D printing for several applications, including interaction and intelligent (rehabilitation) play with the patient.

    However, for this type of application, the underlying problem is similar. Although the usefulness of robotic arms in rehabilitation has already been demonstrated, their use has not become widespread. The commercially available arms are expensive and not adaptable. Moreover, these robots often lack dexterity, speed and precision.

    The objective of this project is therefore to develop an intelligent, autonomous and interactive humanoid upper-limb robot for rehabilitation. Data and artificial intelligence will play a central role in the three sub-objectives:
    1. Increase the robot’s dexterity in accomplishing upper-limb tasks, using machine learning on databases of ball and robot movements;
    2. Increase the robot’s speed and precision, using our iterative linear quadratic regulator (iLQR) reinforced by a neural network (NNiLQR);
    3. Validate the first two objectives through applied demonstrations unique in the world, such as a world record for ping-pong rallies between a human and a robot, a task that combines dexterity, speed and precision.

    Internship grants: Data to tell

    Ali Akbar Sabzi Dizajyekan

    Polytechnique Montréal

    Internship at Wikimédia Canada, specializing in data science

    Katharine O’Brien

    Concordia

    Internship at Synapse-C, specializing in communication

    Laurence Taschereau

    UQAM

    Internship at Wikimédia Canada, specializing in communication

    Simon-Olivier Laperrière

    Université de Montréal

    Internship at Le Devoir, specializing in data science

    Khaoula Chehbouni

    HEC Montréal

    Internship at La Presse, specializing in data science

    Clara Gepner

    Concordia

    Internship at La Presse, specializing in communication

    Paul Fontaine

    Université Laval

    Internship at Le Devoir, specializing in communication

    Isabelle Bouchard

    Polytechnique Montréal

    Internship at Radio-Canada, specializing in data science

    Postdoc-entrepreneur program

    Daniel Pereira

    Supervised by: Professor Louis-Martin Rousseau

    Polytechnique Montréal

    Startup: Matrius Technologie

    Matrius has developed a revolutionary alternative to the current approach to scheduled infrastructure maintenance shutdowns: the first ultrasonic non-destructive testing probes that operate fully exposed at temperatures up to 600°C (1112°F) and enable continuous monitoring of active infrastructure.

    The funding will enable the development of AI-based software to analyze the data stream generated by the sensors. The goal is to create AI-based predictive models that identify accelerated corrosion, improve maintenance planning, and aid in long-term decision making.

    Ehsan Moradi

    Supervised by: Luis Miranda-Moreno

    McGill University

    Project: CarboRate

    Stepping beyond conventional energy and emissions assessment tools, “CarboRate” takes a multi-modal, agent-based approach to estimating, tracking, and evaluating the energy, carbon, and air-pollution footprint of transportation, targeting both commercial and non-commercial markets.

    Postdoctoral research funding

    Homa Arab

    Supervised by: Steven Dufour

    Polytechnique Montréal

    A Deep-Learning Method for Arrhythmia Detection Using a Millimeter-Wave MIMO Radar

    Wireless multiple-input multiple-output (MIMO) radar sensors are attracting increased attention because of their capability to measure the angle of arrival (AoA), their larger signal-to-noise ratio (SNR), and the lower minimum detectable speeds they allow. They can be used in a wide range of day-to-day applications, such as contactless vital sign detection, sleep monitoring, human fall detection, and smart surveillance and security systems. These systems can receive long-term data from various receivers to detect tiny movements of one or multiple objects. Due to the noise and interference found in RX signals, extracting the desired information from signals received from nonstationary targets, multiple targets, and multiple RX antennas is an important and challenging task. The focus of this research is to remove noise and random body movements from measured heartbeat and respiration signals by using a convolutional neural network (CNN) encoder and multilayer LSTM decoders. An LSTM network will also be applied to detect and cut out interference in the spectrograms of the signals. Furthermore, a coarse-to-fine generative adversarial network (GAN) and a CNN-LSTM encoder-decoder will be used to restore the part of the spectrogram that is affected by the interference. Denoised signals will be fed into a simple CNN network to carry out the final classification.

    Natasha Clarke

    Supervised by: AmanPreet Badhwar

    Institut universitaire de gériatrie de Montréal (IUGM)

    Machine learning based insights into the relationship between cerebrovascular pathology and brain functional connectivity in Alzheimer’s disease

    Alzheimer’s disease (AD) is a devastating disorder of the brain. The hallmark pathology is a build-up of abnormal forms of two proteins, amyloid and tau. Despite efforts to develop drugs that clear these deposits from the brain, there is currently no treatment for AD. One reason for this is that most patients also have damage to other brain components, such as the blood vessels, and that this damage also contributes to cognitive impairment. Our project will explore the interplay between (1) damage to the brain vascular system, and (2) brain connectivity, a measure of how well different parts of the brain communicate. Both can be assessed using magnetic resonance imaging (MRI). We will use machine learning to analyse MRI scans from a large study (UK Biobank) to identify subgroups of people with similar connectivity patterns, and then determine which of these subgroups are associated with damage to the brain vascular system. Then, we will determine which of these vascular damage-associated subgroups are related to clinical features of AD, and whether these multi-pathology MRI markers better predict future decline. Our findings will enable more precise treatments for people with AD, and better patient selection for clinical trials.

    Jonathan Cornford

    Supervised by: Blake Richards

    McGill University

    Exploring learning in neural networks with brain-inspired geometries

    There are many differences between artificial and biological neural networks. However, based on the fundamental assumption that biological neural networks have been optimized by evolution, the two fields have long shared a synergistic relationship. A simple question we might therefore ask is: “How similar are the parameter update rules that govern learning in biological and artificial networks?”. In AI, neural networks are generally trained to minimize empirical risk via stochastic gradient descent (SGD). As such, network parameters are updated additively with the negative gradient of the loss at every training iteration. In contrast, recent biological experiments have shown that synaptic weight updates in the brain are predominantly multiplicative in nature. In this research proposal we consider how these two forms of update can arise from the choice of distance-generating function in mirror descent, and propose to leverage our recent work building networks with sign-constrained weights to explore the use of multiplicative updates and non-Euclidean distance functions for training artificial neural networks.
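
    A toy illustration of the contrast described above (all details assumed for illustration): gradient descent is mirror descent with the squared-Euclidean distance-generating function and updates weights additively, whereas exponentiated gradient, the negative-entropy choice, updates them multiplicatively and keeps them positive, as sign-constrained synaptic weights would be:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    w_true = rng.dirichlet(np.ones(10))        # positive ground-truth weights
    b = A @ w_true

    def grad(w):
        return A.T @ (A @ w - b) / len(b)      # least-squares gradient

    w_gd = np.full(10, 0.1)   # additive update (squared-Euclidean DGF)
    w_eg = np.full(10, 0.1)   # multiplicative update (negative-entropy DGF)

    for _ in range(2000):
        w_gd -= 0.1 * grad(w_gd)
        w_eg *= np.exp(-0.1 * grad(w_eg))      # stays positive by construction

    print(np.linalg.norm(w_gd - w_true), np.linalg.norm(w_eg - w_true))
    ```

    Both iterates approach the same solution here, but the multiplicative one can never change sign, mirroring the biological constraint.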

    Istvan David

    Supervised by: Eugene Syriani

    Université de Montréal

    Inference of simulation models in Digital Twins by reinforcement learning

    Digital Twins are virtual representations of physical assets, providing a proxy for applications that need to access data on the physical asset. Simulators are one such application, extensively employed in Digital Twins to support real-time decision making. Due to the enormous complexity of the systems subject to digital twinning, constructing simulators by hand is an error-prone and costly endeavor that automation can significantly improve. This project provides a framework for the inference of simulation models in Digital Twins by reinforcement learning. Specifically, we aim at the inference of Discrete Event System Specification (DEVS) models. DEVS is a versatile simulation formalism that has been shown to be the common denominator of other simulation formalisms. The algorithm starts from system-specific architectural templates of DEVS models and learns their dynamic elements: state-transition and timing functions. The learning mechanism is supported by a priori formalized feedback provided by the environment. We will use a data set of 130,000 real sensor records, comprising 7 distinct metrics, and validate the scalability of the developed technique by applying it to a larger dataset of 250M–1.5B records from 100 sensors.

    Thang Doan

    Supervised by: Joelle Pineau

    McGill University

    Enabling Zero-Shot transfer in Reinforcement Learning through dynamic augmentation

    Reinforcement Learning (RL) has made great strides in recent years, facilitating the creation of artificial agents that can learn to solve a wide array of complex tasks, including robotic manipulation and challenging games (Atari, Go), purely from pixel inputs. However, RL agents are still fragile when deployed in environments they were not directly trained on: small changes to the observations or environment dynamics often result in dramatic drops in performance. This project aims to develop a new method that will create RL agents that are robust to novel environment dynamics. As a first step, we learn a latent space of environment dynamics. We then apply data augmentation in the latent subspace, which trains the network to extrapolate to nearby environments. In effect, we train the agent on a space of “imaginary” environments close to the training environment, which should make the agent robust when faced with novel environments with unseen dynamics during evaluation.

    Ali Falaki

    Supervised by: Numa Dancause

    Université de Montréal

    Using artificial intelligence to personalize transcranial magnetic stimulation parameters

    In recent years, repetitive transcranial magnetic stimulation (rTMS) has shown great promise as a potential therapy for neurological and psychiatric disorders. rTMS is a safe non-invasive neurostimulation method to induce long-lasting changes in the brain via a wire coil that generates magnetic fields passing through the scalp. However, its clinical application is limited mainly because of the variability of induced responses across individuals. To address this variability, clinicians should be able to rapidly select the parameters of the stimulation, such as the frequency or the intensity, based on the responses evoked in a given individual. But these factors are too complex to study one by one manually. Our general goal is to develop an intelligent machine learning algorithm that effectively selects the optimal parameters of the stimulation based on each subject’s response. After this step, we will adapt this approach to create a user-friendly and flexible interface that can be used clinically.

    Jean-Pierre Glouzon

    Supervised by: Martin Smith

    Université de Montréal

    Clustering of human transcriptome for real-time gene expression profiling

    Identifying expression profiles from deep RNA sequencing (RNA-seq) analysis is a promising tool for precision medicine that can refine disease aetiology, improve risk stratification and increase diagnostic precision. However, the lengthy turnaround time required to generate and analyze data using next-generation sequencing technologies severely limits the diagnostic potential of RNA-seq. There is thus a need for efficient and accurate RNA-seq technologies combined with gene expression pipelines and models to accelerate clinical decisions and improve patient management. We propose an approach to gene expression profiling in real time based on an efficient representation and online clustering of transcripts sequenced via Nanopore RNA-seq. Our model leverages the real-time raw direct RNA signals generated by Nanopore RNA-seq to build accurate and efficient gene expression and transcriptome profiles, avoiding error-prone base-called data analysis.

    Alex Hernandez-Garcia

    Supervised by: David Rolnick

    McGill University

    Deep learning for material discovery to fight climate change

    The current and expected consequences of climate change driven by anthropogenic greenhouse gas emissions are a major threat for humanity and, more generally, for the biodiversity and stability of the planet. Developing strategies of adaptation to and mitigation of these effects is thus of utmost importance. One of the ways in which artificial intelligence can help fight climate change is by accelerating scientific discoveries. In this multidisciplinary project, we propose to combine machine learning and chemistry to discover new materials to improve energy storage, optimize the energy from renewable sources or capture carbon dioxide, by leveraging the potential of deep neural networks to find the most promising materials among large sets of candidates. Progress in this direction will not only help reduce carbon emissions but also advance fundamental aspects of machine learning science.

    Xu Ji

    Supervised by: Yoshua Bengio

    Université de Montréal

    Generalization in Neural Networks

    Why is it that humans can switch tasks and apply knowledge learned in the first whilst learning the second, accurately and without forgetting, whereas neural network models cannot?

    Why do networks require orders of magnitude more training data to learn what a human can be taught with a single experience?
    Why is it that networks will still misclassify an image of a dog as a lawnmower, despite seeing thousands of examples of both?

    These questions relate to the fundamental problem of generalization, which is perhaps the most pressing problem currently facing the development of human-level intelligence. Without an effective solution, neural networks cannot learn in real time from naturally sequential, real-world data as effectively as humans do, which limits their performance, deployment options, and training efficiency.

    Generalization is therefore a problem with immense practical implications, as well as being interesting theoretically and from the perspective of biologically-inspired learning. This project seeks to understand and solve the questions posed above by finding new neural network training procedures.

    Katarzyna Jurewicz

    Supervised by: Becket Ebitz

    Université de Montréal

    Data-driven discovery of continual learning algorithms from neural populations

    Natural environments change unpredictably, and require continual learning, a skill that remains challenging even in cutting-edge artificial intelligence (AI). However, biological decision-makers have solved this problem. They have evolved algorithms that balance the reliable exploitation of well-tried options with the need to continue to explore and learn about alternatives. This is called the stability-flexibility dilemma and is a major open question in artificial intelligence and neuroscience. The goal of this project is to leverage large-scale neural datasets and machine learning methods to identify the biological algorithms for resolving the stability-flexibility dilemma. Understanding these algorithms could reveal basic neural mechanisms for cognitive flexibility, impact our understanding of diseases in which flexibility is compromised (e.g. addiction, depression, and obsessive compulsive disorder), and inform next-generation AI.

    Jacob Miller

    Supervised by: Guillaume Rabusseau

    Université de Montréal

    Structured language modeling with recurrent tensor networks

    Recent deep learning architectures have enabled massive improvements in the quality of neural language models, to the point that state-of-the-art models are now capable of generating essays, short computer programs, and even poetry with seemingly near-human quality. While this progress is impressive, further research has revealed important limitations in the high-level understanding of such models, largely arising from a difficulty in capturing certain syntactic and semantic structures present in natural language.

    To circumvent these limitations, this project proposes to make use of the novel capabilities of a new family of generative models built on quantum-inspired tensor networks. Small tensor network (TN) language models have already been shown capable of efficiently generating text which conforms to the structure of any user-specified grammar, and we aim to assess this capability in real-world settings using large-scale TN models trained on large language corpora. Beyond the immediate utility of these methods for source code generation, this will facilitate the development of similar methods for the automatic extraction of syntactic structure present in pretrained TN models. Such methods would open the door to transparent methods for capturing high-level semantic information, bringing such language models closer to genuine language understanding.

    Ashraf Uz Zaman Patwary

    Supervised by: Francesco Ciari

    Polytechnique Montréal

    Development of An Efficient Gradient Estimation Technique for Large-scale Traffic Assignment Model Optimization

    Intelligent transportation engineering is moving towards the implementation of sensor-embedded, connected vehicles and infrastructure, generating a large variety of complementary datasets in the process. While the machine learning literature provides different methods for using individual datasets to make accurate predictions, large-scale, network-wide traffic assignment (TA) models can truly exploit the complementarity of these datasets while preserving the interpretability and structure of the underlying physical process. However, optimization of large-scale TA models is severely hindered by the curse of dimensionality, undesirable mathematical properties, and expensive function evaluations. To alleviate this problem, we propose to develop an efficient gradient estimation algorithm named iterative backpropagation (IB) for solving multi-source, high-dimensional, large-scale, agent-based TA calibration and optimization problems. IB is inspired by the popular backpropagation through time (BPTT) algorithm used in recurrent neural network (RNN) training. It exploits the iterative structure of the TA solution procedure and calculates the gradients simultaneously as the TA process converges. IB requires no additional function evaluations and consequently scales well to higher dimensions. Sharing the structure of BPTT, IB is highly parallelizable and can benefit from the existing machine learning literature and implementations, ensuring the long-run sustainability and efficiency of the algorithm.
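
    The core trick behind IB, accumulating gradients along the iterations of a converging solver instead of storing and unrolling the whole trajectory, can be illustrated on a toy scalar fixed-point map; the map g and parameter theta below are invented stand-ins for the TA operator and its calibration parameter, a minimal sketch rather than the project's algorithm:

        import numpy as np

        def g(x, theta):
            """One iteration of a toy assignment map (a contraction in x)."""
            return 0.5 * x + 0.1 * theta ** 2

        def g_dx(x, theta):
            return 0.5

        def g_dtheta(x, theta):
            return 0.2 * theta

        def solve_with_gradient(theta, n_iter=50):
            """Iterate x <- g(x, theta) to a fixed point while accumulating
            J = dx/dtheta alongside, instead of unrolling and storing the
            whole trajectory as plain BPTT would."""
            x, J = 0.0, 0.0
            for _ in range(n_iter):
                J = g_dx(x, theta) * J + g_dtheta(x, theta)  # chain rule, one step at a time
                x = g(x, theta)
            return x, J

        theta = 3.0
        x_star, dx_dtheta = solve_with_gradient(theta)
        # The fixed point of x = 0.5 x + 0.1 theta^2 is x* = 0.2 theta^2,
        # so dx*/dtheta = 0.4 theta; the accumulated J converges to the same value.
        print(x_star, dx_dtheta)   # ~1.8 and ~1.2
        # A calibration loss L(x*) can then be minimized using dL/dtheta = L'(x*) * dx_dtheta.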

    Janarthanan Rajendran

    Supervised by: Sarath Chandar Anbil Parthipan

    Polytechnique Montréal

    Towards Lifelong Reinforcement Learning Agents

    Lifelong learning agents continually learn throughout their lifetime, accumulating knowledge to achieve their current goals efficiently and to prepare for future ones. Their wide-ranging applications include dialog systems, autonomous vehicles, and household robots. In this project, we focus on lifelong Reinforcement Learning (RL) agents, which have long lifetimes and act in vast, complex, and non-stationary environments with very sparse and delayed rewards. Unlike most current RL agents, lifelong RL agents have a single start to their lifetime without any resets. They can visit only a small fraction of all the states in their environment, and their actions can have irreversible effects. Most current solution methods, which focus on simpler settings, are not effective here. We propose to address some of the key challenges lifelong learning poses to RL: 1) discovering what to learn, by learning General Value Functions (GVFs); 2) gathering the relevant experience needed to learn, by learning options guided by learned intrinsic motivation; 3) learning and accumulating knowledge effectively, by using modular and memory-augmented networks for GVF learning; and 4) using the learned knowledge to infer more knowledge and achieve the agent's goals, by building approximate and abstract models.
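
    For challenge 1), a GVF is simply a value function whose prediction target is an arbitrary signal (a "cumulant") rather than the task reward. A minimal tabular TD(0) sketch, with an invented cumulant and a random-walk behavior policy (assumptions for illustration, not the project's design), might look like this:

        import numpy as np

        rng = np.random.default_rng(2)

        n_states, n_features = 10, 10
        phi = np.eye(n_states)          # tabular features for simplicity

        w = np.zeros(n_features)        # GVF weights
        alpha, gamma = 0.1, 0.9         # step size and continuation factor

        def cumulant(s):
            """The GVF's prediction target; here a hypothetical signal that is
            1 in state 0 (say, 'battery charging') rather than the task reward."""
            return 1.0 if s == 0 else 0.0

        s = rng.integers(n_states)
        for _ in range(5000):
            s_next = (s + rng.choice([-1, 1])) % n_states   # random-walk behavior policy
            c = cumulant(s_next)
            # TD(0): move v(s) toward c + gamma * v(s').
            td_error = c + gamma * phi[s_next] @ w - phi[s] @ w
            w += alpha * td_error * phi[s]
            s = s_next

        print(np.round(phi @ w, 2))     # predicted discounted future cumulant per state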

    Jimin Rhim

    Supervised by: Derek Nowrouzezahrai

    McGill University

    Building human-AI trust for the long term: in the context of frictionless retail

    In April 2021, Canada proposed a federal budget of $443.8 million for the nation's artificial intelligence (AI) economy, of which $185 million will support the commercialization of AI innovations. In contrast to this huge investment, only 44% of Canadians trust AI and robotics, making Canada one of the least trusting nations for the AI industry. If left unaddressed, this lack of user trust poses a major threat to Canada's future prosperity with AI. For Canadians to take full advantage of the potential benefits of AI, it is essential to build human-AI trust. Consequently, the goal of the proposed research is to develop a temporal dynamic model of trust between humans and AI by exploring humans' behavioral patterns and expectations during repeated interactions with an AI system. The newly established McGill Retail Innovation Lab, a living lab featuring a frictionless convenience store, will be used to observe how trust is formed, maintained, and lost during human-AI interaction in the wild. In this project, I will model the temporal dynamics of user trust in AI based on the accumulated data (e.g., navigation patterns, purchased products, visiting times, interaction duration, drop-out rates, interactions with smart agents). The effectiveness of the resulting temporal trust dynamic model will then be tested in the frictionless retail setting. AI will only deliver its full potential once public trust in it is established. A new temporal model of human-AI trust developed through this longitudinal investigation will challenge the current rudimentary model of trust. The empirically validated model will subsequently inform how automation systems should be designed with a long-term view of user trust in mind. This, in turn, will directly provide industry leaders and policymakers with practical means to help lift the distrust in AI and robotics in Canada and abroad.

    Hajime Shimao

    Supervised by: Maxime Cohen

    McGill University

    Implications of “Fairness” in Fair Machine Learning

    The issues surrounding fairness in prediction results generated by machine learning (ML) algorithms have attracted enormous interest from researchers in recent years. While numerous algorithms have been proposed that can produce predictions that are fair under certain notions, the current literature lacks an understanding of the potential impacts of these predictions on the behavior of prediction subjects. Their behavior in turn influences the data-generating process for future predictive tasks; the iterative dynamics must therefore be investigated to fully understand the social implications of fair ML. The purpose of this research project is to examine the implications of fairness when a fair ML algorithm is used in realistic settings. The project leverages a unique synergy of cross-disciplinary expertise in ML, economics, operations research, and information systems, all strongly relevant to the problem at hand. The overarching goal is to design a fair ML algorithm that takes into account the behavior of prediction subjects and the welfare of relevant stakeholders when the algorithm is used in realistic settings.
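
    To ground one of the fairness notions at issue, the sketch below measures a demographic parity gap on synthetic scores and applies per-group thresholds as one simple (and much-debated) mitigation; the data, the protected attribute and the threshold rule are illustrative assumptions, not the project's method:

        import numpy as np

        def demographic_parity_gap(y_pred, group):
            """Difference in positive-prediction rates between two groups."""
            rates = [y_pred[group == g].mean() for g in (0, 1)]
            return abs(rates[0] - rates[1])

        # Toy scores and a hypothetical binary protected attribute.
        rng = np.random.default_rng(3)
        group = rng.integers(0, 2, size=1000)
        scores = rng.uniform(size=1000) + 0.1 * group     # slightly biased scores
        y_pred = (scores > 0.55).astype(int)

        print("gap at a single threshold:", demographic_parity_gap(y_pred, group))

        # One simple mitigation: group-specific thresholds chosen so that the
        # two positive-prediction rates match.
        t0 = np.quantile(scores[group == 0], 0.5)
        t1 = np.quantile(scores[group == 1], 0.5)
        y_adj = np.where(group == 0, scores > t0, scores > t1).astype(int)
        print("gap with per-group thresholds:", demographic_parity_gap(y_adj, group))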

    Bénédicte L. Tremblay

    Supervised by: Julie Hussin

    Université de Montréal

    Deep learning and omics sciences in the fight against myocardial infarction

    Myocardial infarction (MI) results from the blockage of an artery of the heart, which destroys part of the heart muscle. The omics sciences, which study the complex set of molecules that make up the body, provide a better understanding of diseases, including MI. Moreover, models based on artificial intelligence (AI) have the potential to predict the risk of certain diseases, including MI. However, these models are difficult to interpret because they operate as "black boxes": they generate predictions or recommendations without providing explanations or justifications. Yet the biological interpretability of a model is essential for the development of clinical applications. A few studies have examined the biological interpretability of AI methods, but very few concern health.

    The objective of this project is to evaluate the biological interpretability of a new AI method that uses omics data to predict MI risk. The project includes 2,000 participants, 1,000 with a history of MI and 1,000 without.

    This project will improve MI risk prediction and, ultimately, help in the development of prevention and treatment programs. The biological interpretability of the method will strengthen the medical community's confidence in AI and promote its clinical application.

    Yuan Yang

    Supervised by: Jerome Le Ny

    Polytechnique Montréal

    Learning for Haptic Shared Control in Human-Robot Teams

    The rapid advancements of decision support systems and autonomous systems are making robots more involved in physical collaboration with human teammates. In order to work alongside people safely and efficiently, robots need to correctly comprehend the time-varying states of their human partners and to appropriately adapt their behaviors in response. A promising solution to this challenge lies in haptic shared control systems, which organize haptic interactions and share control authority between human and robot teammates by transferring suitable resistive/assistive forces to human operators according to their psychophysical status. For optimal haptic shared control, this project proposes to develop data-driven strategies to estimate quantitative models of human operators and to develop plug-and-play algorithms to coordinate a dynamic network of mobile robots in collaboration with human teammates. Specifically, we plan to leverage reinforcement learning techniques to identify certain passivity properties of human operators from our collected experimental data, and then synthesize the learned models into passivity-based human-robot teaming algorithms to improve the navigation and motion performance of robots. This project will contribute data-driven approaches to the control of transportation and logistics systems with humans in the loop.

    Undergraduate research initiation grants

    1st edition

    Samuel Arseneault

    Supervised by: Chantal Labbé

    HEC Montréal

    A data generator to support learning in data science.

    Kristina Atanasova

    Supervised by: Alexandre Prat

    Université de Montréal

    Identifying cellular interactions of the blood-brain barrier using machine learning.

    Sol’Abraham Castaneda Ouellet

    Supervised by: Didier Jutras-Aswad

    Université de Montréal

    The effects of cannabidiol on cognition in people with cocaine dependence.

    Simon Chasles

    Supervised by: François Major

    Université de Montréal

    Characterizing overrepresented structural motifs in RNA.

    Laurence Beauregard

    Supervised by: Dang Khoa Nguyen

    Université de Montréal

    Detecting epileptic seizures by combining artificial intelligence techniques with multimodal, non-invasive physiological signals.

    David Chemaly

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    State-of-the-art radio images of the Coma Cluster of Galaxies.

    Omar Chikhar

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    A novel machine learning approach to identifying cool core clusters.

    Léo Choinière

    Supervised by: Numa Dancause

    Université de Montréal

    Hierarchical Bayesian Optimization for Stimulation Protocols in Cortical Neuroprostheses.

    Céline Boegler

    Supervised by: Dominique Orban

    Polytechnique Montréal

    The conjugate residual method for unconstrained optimization.

    Karl-Étienne Bolduc

    Supervised by: Normand Mousseau

    Université de Montréal

    Developing interatomic potentials with machine learning.

    Ariane Brucher

    Supervised by: Phaedra Royle

    Université de Montréal

    Statistical analysis of event-related potentials using linear mixed models.

    Hugo Cordeau

    Supervised by: Vasia Panousi

    Université de Montréal

    Using satellite and census data to characterize urban sprawl.

    Olivier Denis

    Supervised by: Jean-François Arguin

    Université de Montréal

    Machine learning for LHC data analysis.

    Samuel Desmarais

    Supervised by: Jean Provost

    Polytechnique Montréal

    Designing a deep learning ultrasound image reconstruction algorithm to reduce the number of channels needed to obtain an image of equivalent contrast.

    Cauderic Deroy

    Supervised by: Sébastien Hétu

    Université de Montréal

    Identifying physiological signatures of social norm violations.

    Guillaume Dupuis

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Project on 5G and smart cities.

    Charlie Gauthier

    Supervised by: Liam Paull

    Université de Montréal

    Enabling on-the-fly machine learning in Duckietown.

    Victor Geadah

    Supervised by: Guillaume Lajoie

    Université de Montréal

    Impact of nonlinear activation functions on learning dynamics of recurrent networks.

    Rose-Marie Gervais

    Supervised by: Frédéric Gosselin

    Université de Montréal

    Generating visual metamers using MEG and deep learning to assess how specific visual properties are encoded in long-term memory.

    Élodie Labrecque Langlais

    Supervised by: Jean Provost

    Polytechnique Montréal

    Dynamic ultrasound localization pulsatility imaging using artificial intelligence.

    Simon-Olivier Laperrière

    Supervised by: Pierre L'Écuyer

    Université de Montréal

    Tools for measuring the equidistribution of pseudorandom generators based on recurrences modulo 2.

    Geoffroy Leconte

    Supervised by: Dominique Orban

    Polytechnique Montréal

    A multi-precision predictor-corrector method for convex quadratic optimization.

    Florence Ménard

    Supervised by: Anne Gallagher

    Université de Montréal

    Optimization of data analysis strategies for the ELAN Project: a multimodal approach.

    Marco Mendoza

    Supervised by: Vincent Arel-Bundock

    Université de Montréal

    Developing a machine-learning algorithm to predict citizens’ fiscal preferences.

    Neshma Metri

    Supervised by: Pierre Majorique Léger

    HEC Montréal

    Source-level EEG connectivity correlates of immersion during high-fidelity vibrokinetically-enhanced cinema viewing.

    Andrei Mircea Romascanu

    Supervised by: Jackie Cheung

    McGill University

    Reinforcement Learning Rewards for Text Generation.

    Olivier Parent

    Supervised by: Roberto Araya

    Université de Montréal

    Computational model of dendritic nonlinearities in layer 5 pyramidal neurons.

    Maria Sadikov

    Supervised by: Michel Côté

    Université de Montréal

    Using transfer learning to characterize graphene.

    Christopher Scarvelis

    Supervised by: Prakash Panangaden

    McGill University

    Convex Relaxations for Neural Network Training.

    Joey St-Arnault

    Supervised by: Marina Martinez

    Université de Montréal

    Multivariate kinematic analysis to link observed behavioral movements with those generated by cortical neurostimulation.

    Danny Tran

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    5G and smart cities project.

    Anton Volniansky

    Supervised by: Jean-François Tanguay

    Université de Montréal

    A database of short- and long-term clinical outcomes of bioresorbable vascular scaffolds compared with second-generation drug-eluting stents.

    Charles Wilson

    Supervised by: Paul Charbonneau

    Université de Montréal

    Predicting the solar cycle through data assimilation.

    2nd edition

    Adrien Adam

    Supervised by: Benjamin De Leener

    Polytechnique Montréal

    Developing a super-resolution method for neonatal quantitative susceptibility mapping.

    Rodrigo Chavez Zavaleta

    Supervised by: Sarath Chandar Anbil Parthipan

    Polytechnique Montréal

    Understanding the Dynamics of Non-saturating Recurrent Units.

    Cheng Chen

    Supervised by: Aditya Mahajan

    McGill University

    Regret in learning the optimal linear quadratic regulator: empirical comparison of Thompson sampling and adaptive control algorithms.

    Feng Yang Chen

    Supervised by: David Alexandre Saussié

    Polytechnique Montréal

    Learning-based visual waypoint detection for agile drone flight.

    Ghassen Cherni

    Supervised by: Sofiane Achiche

    Polytechnique Montréal

    Developing a chatbot to better engage patients in managing barriers to good antiretroviral adherence.

    Fanny Beltran

    Supervised by: Benjamin De Leener

    Polytechnique Montréal

    Developing a machine learning-based method for reconstructing sparse brain MRI images in newborns.

    Valérie Bibeau

    Supervised by: Bruno Blais

    Polytechnique Montréal

    Designing a neural network to predict agitator power consumption from massive simulation data.

    Ludovic Bilodeau-Laflamme

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Statistical models of the distribution of access delays in a mobile network.

    Geneviève Bock

    Supervised by: Sofiane Achiche

    Polytechnique Montréal

    Designing and building an intelligent conversational agent (chatbot) to help people living with HIV better manage their HIV treatments.

    Félix-Antoine Constantin

    Supervised by: Samuel-Jean Bassetto

    Polytechnique Montréal

    A satisficing health agenda.

    Vivienne Crowe

    Supervised by: Julie Hussin

    Université de Montréal

    Investigation of SARS-CoV-2 infection using a quantitative approach.

    Andjela Dimitrijevic

    Supervised by: Benjamin De Leener

    Polytechnique Montréal

    An algorithm for validating brain segmentation in children (2-8 years) from MRI images using generative adversarial networks (GANs).

    Parfait Djimefo

    Supervised by: Samuel Pierre

    Polytechnique Montréal

    A facial recognition model for visible minorities and population groups.

    Roxanne Drainville

    Supervised by: Marina Martinez

    Université de Montréal

    Combining the effects of cortical and spinal stimulation to improve walking after spinal cord injury.

    Marilou Farmer

    Supervised by: Jonathan Jalbert

    Polytechnique Montréal

    Programming and dissemination of precipitation Intensity-Duration-Frequency curves.

    Sam Finestone

    Supervised by: Sarath Chandar Anbil Parthipan

    Polytechnique Montréal

    Modular Neural Networks for Lifelong Learning.

    Victor Gaudreau-Blouin

    Supervised by: François Leduc-Primeau

    Polytechnique Montréal

    A simulator for deep neural networks implemented on unreliable hardware.

    Sarah Hafez

    Supervised by: Christian Dorion

    HEC Montréal

    Deep Learning Methods for Factor Investing.

    Tamara Herrera Fortin

    Supervised by: Dang Khoa Nguyen

    Université de Montréal

    Identifying Patients’ and Caregivers’ Needs and Preferences: the Key to Developing Successful Seizure Detectors.

    Alexander Iannantuono

    Supervised by: Adam Oberman

    McGill University

    Accelerated algorithm for SGD, applied to deep neural networks and reinforcement learning.

    Guillaume Jones

    Supervised by: Mario Jolicoeur

    Polytechnique Montréal

    Generating a genome-scale model of ovarian cancer cells.

    Hugues Martin

    Supervised by: Mario Jolicoeur

    Polytechnique Montréal

    Characterizing chemoresistance in ovarian cancer cells through machine learning on their transcriptome.

    Gabriela Moisescu

    Supervised by: Doina Precup

    McGill University

    Temporal Abstraction in Reinforcement Learning.

    Sacha Morin

    Supervised by: Guy Wolf

    Université de Montréal

    PHATE-NET.

    Stéfan Nguyen

    Supervised by: Philippe Dixon

    Université de Montréal

    Biomechanical analysis of walking in outdoor environments using wearable sensors.

    Mathilde Ricard

    Supervised by: Sébastien Le Digabel

    Polytechnique Montréal

    CHPO: Constrained Hyperparameter Optimization.

    Patrice Rollin

    Supervised by: Iwan Meier

    HEC Montréal

    Designing a financial database for mutual fund analysis.

    Myriam Sahraoui

    Supervised by: Karim Jerbi

    Université de Montréal

    Analyzing MEG brain data by combining spectral analysis and machine learning.

    Monssaf Toukal

    Supervised by: Dominique Orban

    Polytechnique Montréal

    Online Automatic Optimization of Software for Big Data.

    Alina Weinberger

    Supervised by: Karim Jerbi

    Université de Montréal

    Oscillatory brain dynamics under light and deep anesthesia: Predicting states of consciousness using machine learning techniques.

    Paul Xing

    Supervised by: Jean Provost

    Polytechnique Montréal

    Aberration correction in ultrasound localization microscopy using deep learning.

    Xin Yuan Zhang

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    5G, IoT and Smart cities project.

    Masters excellence scholarships

    Alexandre Adam

    Supervised by: Laurence Perreault Levasseur

    Université de Montréal

    Measuring the expansion of the Universe with machine learning

    The expansion rate of the Universe is a key observable for constraining the cosmological models that trace the evolution of the Universe since the Big Bang. Recently (2018), the Planck satellite team published a value derived from measurements of the cosmic microwave background, emitted when the Universe was only 300,000 years old. The value found contradicts local measurements of the parameter, made from the recession velocity of Type Ia supernovae and of Cepheids located near the Milky Way. We propose to investigate this problem through a third measurement method, whose precision has so far been limited by the small number of known quasars located behind a galaxy along our line of sight, such that the quasar's image is multiplied by gravitational lensing. The precision of this method is limited largely by the reconstruction of the mass distribution of the lensing galaxy. Recent advances in machine learning algorithms have demonstrated that a convolutional neural network (CNN) can perform the lens reconstruction 10 million times faster than conventional algorithms. This proof of concept arrives just in time to enable the analysis of the phenomenal quantity of data that wide-field telescopes will produce over the next decade. We will also adapt architectures such as recurrent inference machines (RIMs) to automate the reconstruction process. The scientific requirements of our mission will demand adapting our model architectures for uncertainty estimation.

    Hatim Belgharbi

    Supervised by: Jean Provost

    Polytechnique Montréal

    3D functional ultrasound localization microscopy (fULM)

    Functional brain imaging helps us understand which regions of the brain are involved in different types of tasks. This type of analysis can be performed with, for example, magnetic resonance imaging, but at a limited spatiotemporal resolution (on the order of a millimeter and a second). More recently, another technique, 2D localization microscopy, has drastically increased the spatial resolution of ultrasound (5 thousandths of a millimeter), but because it requires detecting individual injected microbubbles (clinically approved), its temporal resolution was insufficient to detect brain activation (on the order of minutes). Jean Provost's laboratory recently developed a new imaging technique called 3D Dynamic Ultrasound Localization Microscopy (dMLU-3D), which achieves the same spatial resolution in three dimensions rather than two, along with high temporal resolution for periodic phenomena (on the order of a millisecond). The technique enables visualization of the cerebral microvasculature (morphology), but visualization of brain activity (function) has not yet been developed. Modeling what characterizes a brain activation depends on several nonlinear parameters for which no ground truth exists at the scale of the in vivo microvasculature, so a convolutional neural network (CNN) is well suited to this application. This project aims to show that functional imaging (detecting brain activity or its absence) is possible throughout the rodent brain using the dMLU-3D approach, at a spatiotemporal resolution never before achieved by comparable methods. Experiments will be conducted to reveal and correlate the activity of thalamic and cortical visual regions of the mouse brain following the presentation of visual stimuli. These results will then be compared with those obtained in animal models of schizophrenia (developmental, pharmacological, lesional or genetic) to test the hypothesis that this disorder is characterized by altered connections between the visual cortex and the thalamus. This project would be the very first demonstration of the feasibility of functional brain imaging with super-resolved ultrasound in 2D and 3D, enabling the mapping of brain activation across the whole brain of rodents or other small animals, such as cats, for preclinical studies that could ultimately lead to a better understanding of certain pathologies and potentially to better diagnosis or even treatment. This is all the more promising given that no other imaging modality can achieve such fine resolution, with sufficient imaging depth, non-invasively.

    Marie-Hélène Bourget

    Supervised by: Julien Cohen-Adad

    Polytechnique Montréal

    Automatic segmentation of histology images using deep learning

    White matter axons are the extensions of neurons and form the highways of the central nervous system. A lipid sheath, myelin, surrounds these axons and allows faster conduction of nerve impulses. Neurodegenerative diseases such as multiple sclerosis, as well as trauma, threaten the integrity of myelinated axons, which can lead to sensory or motor deficits such as pain or paraplegia. To develop new treatments, neuroscience researchers need to precisely quantify the morphometry of these axons (size, myelin thickness, etc.). My host laboratory, NeuroPoly, has developed AxonDeepSeg, a software package for the automatic segmentation of neurons in histology images using deep learning algorithms. However, AxonDeepSeg lacks robustness to the variability that can exist across acquisition parameters, image quality and species. This project therefore aims to develop robust neuron segmentation models by adapting and implementing innovative deep learning segmentation methods (domain adaptation, MixUp, FiLM). The generalization potential of the developed algorithms will be validated using microscopy databases spanning various imaging modalities (optical, scanning electron, transmission electron), species, organs and pathologies. In addition, the developed models and generated data will be made publicly available and documented so that many neuroscience researchers and clinicians can use them. This tool will also enable the validation of other imaging modalities essential to research on neurodegenerative diseases, such as non-invasive quantitative magnetic resonance imaging, thereby increasing the amount of data usable by researchers.

    Joëlle Cormier

    Supervised by: Valérie Bélanger

    HEC Montréal

    Analysis of emergency air transport in remote regions of Quebec

    To provide specialized care to its entire population, Quebec relies on the government's aeromedical evacuation program, Évacuations aéromédicales du Québec (EVAQ). This service transfers patients from Quebec's various regions to specialized centers in Quebec City and Montreal so they can receive the necessary care, surrounded by a medical team suited to their condition and level of urgency. Demand for several of EVAQ's services has grown over the past decade. This research aims to build a simulation tool that will make it possible to evaluate different uses of the available resources. The analysis of different scenarios will support recommendations to EVAQ on the actions to take to offer the best possible level of service to regional populations. There is much to learn from the model established in Quebec, both in the strategic planning of aircraft and routes and in day-to-day coordination and operations. Population density, the distances to be covered and difficult weather conditions are determining factors that must be considered in their uniqueness.

    Edward Hallé-Hannan

    Supervised by: Sébastien Le Digabel

    Polytechnique Montréal

    Optimizing the training of deep neural networks with extensions of the MADS algorithm to categorical hyperparameters

    This master's project aims to optimize the training of deep neural networks through extensions of the MADS algorithm to categorical hyperparameters. These hyperparameters are generally chosen arbitrarily or heuristically, while most existing optimization algorithms solve problems whose variables are continuous or integer. In other words, few optimization methods can efficiently handle categorical variables. However, since these variables are discrete, it is possible to build and explore a discretized variable space with direct-search optimization methods. The research project aims to adapt recent developments of the MADS ("Mesh Adaptive Direct Search") algorithm to categorical variables, notably constraint handling and the integration of a dynamic anisotropic mesh. More precisely, we are interested in optimizing the hyperparameters of deep neural networks more rigorously, in order to train artificial intelligence models more intelligently. The hyperparameters studied will be: the loss function; the extensions and modifications of the backpropagation algorithm (ADAM, RMSProp, etc.); and the regularizers (LASSO, ridge regression, etc.). The mechanisms developed could also be used to model network topology (number of layers, number of neurons, etc.). Indeed, within the MADS framework, the treatment of categorical variables could extend to discrete variables whose value changes the dimension of the problem. In practice, the resulting system will make it possible, for the first time, to simultaneously optimize the hyperparameters related to training and those related to topology.
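
    To illustrate what direct search over categorical hyperparameters looks like in its simplest form, here is a first-improvement poll over one-coordinate swaps: a deliberate caricature of the idea, not MADS itself, with an invented hyperparameter space and a toy objective standing in for a real validation loss:

        import random

        # Hypothetical categorical hyperparameters of the kind the project mentions.
        SPACE = {
            "loss": ["cross_entropy", "mse", "huber"],
            "optimizer": ["sgd", "adam", "rmsprop"],
            "regularizer": ["none", "l1", "l2"],
        }

        def objective(cfg):
            """Stand-in for a validation loss after training; deterministic toy score."""
            score = {"cross_entropy": 0.0, "mse": 0.3, "huber": 0.1}[cfg["loss"]]
            score += {"sgd": 0.2, "adam": 0.0, "rmsprop": 0.05}[cfg["optimizer"]]
            score += {"none": 0.1, "l1": 0.05, "l2": 0.0}[cfg["regularizer"]]
            return score

        def neighbors(cfg):
            """Poll set: change exactly one categorical coordinate at a time."""
            for name, choices in SPACE.items():
                for value in choices:
                    if value != cfg[name]:
                        yield {**cfg, name: value}

        random.seed(0)
        current = {name: random.choice(vals) for name, vals in SPACE.items()}
        improved = True
        while improved:
            improved = False
            for cand in neighbors(current):
                if objective(cand) < objective(current):
                    current, improved = cand, True
                    break   # accept the first improvement, then re-poll
        print(current, objective(current))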

    Dongyan Lin

    Supervised by: Blake Richards

    McGill University

    Analyzing mouse hippocampal “time cell” activity during memory tasks with machine learning approaches

    Previous studies have identified hippocampal “time cells” in CA1 that bridge the temporal gap between discontiguous events by firing in tiling patterns during the delay period of memory tasks, such as alternating maze (Pastalkova et al., 2008) and object-odor pairing tasks (MacDonald et al., 2011). However, recent findings have argued that this tiling might be an analysis artifact of cell sorting, because it also appears in tasks with no memory load (Salz et al., 2016). To address this discrepancy, our collaborators have collected calcium recordings in the mouse hippocampal CA1 region during the trial-unique, nonmatch-to-location (TUNL) task (Talpos et al., 2010) and observed tiling patterns. Our objective is to use computational methods to determine whether these patterns are meaningful. To do this, we will first train decoders on the calcium recordings to decode the sample for each trial, with temporal sequences preserved (i.e., sorted tiling columns) or shuffled (i.e., randomized columns). If the tiling patterns are indeed meaningful, we would expect higher decoder accuracy on the preserved sequences. Our next step is to train a simulated reinforcement learning agent on a simulated TUNL task to see whether a consistent tiling pattern exists in the activity of the agent's neural networks. If so, it would suggest that these patterns play a role in preserving information about the sample location during the delay period as a solution to the task. If not, it would suggest that the tiling patterns previously observed in memory tasks could merely be a ubiquitous artifact. Our findings would have a significant impact on the current view of hippocampal “time cells” as well as the functional segregation of the brain.
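
    The proposed preserved-versus-shuffled decoding comparison can be sketched on synthetic data as follows; the tiling-sequence generator and the choice of decoder are assumptions made for illustration, and real calcium recordings would replace them:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        n_trials, n_cells, n_bins = 200, 20, 20

        # Synthetic stand-in for delay-period calcium activity: each sample type
        # triggers a different cell-by-time tiling sequence, plus noise.
        labels = rng.integers(0, 2, size=n_trials)
        X = rng.standard_normal((n_trials, n_cells, n_bins)) * 0.5
        for i, y in enumerate(labels):
            order = np.arange(n_cells) if y == 0 else np.arange(n_cells)[::-1]
            for rank, cell in enumerate(order):
                X[i, cell, rank] += 2.0     # each cell fires at its slot in the sequence

        def accuracy(X3d):
            flat = X3d.reshape(n_trials, -1)
            return cross_val_score(LogisticRegression(max_iter=1000), flat, labels, cv=5).mean()

        # Shuffle the time bins independently per trial to destroy the sequence.
        X_shuf = X.copy()
        for i in range(n_trials):
            X_shuf[i] = X_shuf[i][:, rng.permutation(n_bins)]

        print("preserved order:", accuracy(X))
        print("shuffled order: ", accuracy(X_shuf))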

    Yiqun (Arlene) Lu

    Supervised by: Guillaume-Alexandre Bilodeau

    Polytechnique Montréal

    Jumpy, Hierarchical and Adversarial Variational Video Prediction

    This project is in the context of intelligent transportation systems. To improve road user detection and tracking, we want to predict their positions in future frames using video prediction. However, predicting high-fidelity video over long time scales is notoriously difficult. Current video prediction models either diverge from real samples after a few frames or fail to capture the stochasticity in videos, resulting in poor prediction performance for long videos. To overcome this difficulty, the AI community has proposed new models able to perform jumpy or hierarchical video prediction. In this proposal, we propose to further develop these ideas and explore new models for stochastic video prediction that can make jumpy predictions in a hierarchical manner. We mainly want to explore two research problems: (1) how to make stochastic jumpy video predictions, and (2) how to combine jumpy prediction with temporal abstraction.

    Andrei Lupu

    Supervised by: Doina Precup

    McGill University

    Emergent Behaviour in Multi-Agent Reinforcement Learning

    This project aims for the investigation of intricate emergent behaviours in large scale multi-agent reinforcement learning (MARL). Of particular concern are the behaviours of agents in settings where they are tightly interdependent to the point of nearly composing a single entity. Such settings will draw strong inspiration from biological systems, and be achieved either through a shared common reward or through complex and necessary interactions. Because large interconnected populations of agents present a novel collection of settings complete with new challenges, this project will force a rethinking of well-established reinforcement learning practices, all while probing the limits of their scalability. Furthermore, enabling MARL systems that simultaneously achieve large population scales and appropriate complexity will allow for better modelling of intricate phenomena that have been out of reach of previous artificial intelligence methods. This would potentially result in far-reaching benefits in other scientific disciplines, thus broadening the range of applications of reinforcement learning and simultaneously opening it to easier idea cross-pollination from other fields. These settings will be studied empirically by analyzing the behaviour of existing MARL algorithms, and by comparing and contrasting them to new approaches that allow for more complex interactions between agents. The analysis of the results will be performed quantitatively on the basis of standard reinforcement learning and game theoretic methodology, and qualitatively in light of the principles of behavioural biology. The implementation of the environments and the MARL models will be done with modularity and concurrency in mind and the code-base will then be openly released.

    Nicholas Meade

    Supervised by: Siva Reddy

    McGill University

    Stylistic Controls for Neural Text Generation

    Deep learning-based approaches to text generation have proven effective in recent years, with many models able to generate realistic text, often exhibiting higher-order structure. While these models produce high-quality samples, there is usually little control provided over what is specifically generated. Recently, work has begun in this area, but much remains to be explored. This application proposes research towards controllable text generation by implementing a variety of stylistic controls that can be used to influence what is sampled from a neural language model. In my previous work, we developed a conditional generative model for music. We demonstrated that we could control for a variety of characteristics during generation by providing the model with an additional externally-specified input called the control signal. For instance, in this work, we trained a model using a composer-based control signal. This signal identified the composer of each piece on which the model was trained. After training, we used the control signal to produce samples of music in the style of specific composers, for instance, Bach and Beethoven. Based on my previous work with music, we are now interested in implementing a similar set of controls for generating text. Such a set of stylistic controls would extend the practical utility of text generated from neural language models. We plan to explore generation methods involving supervised controls and latent (disentangled) controls.
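
    The control-signal idea can be illustrated with a deliberately tiny counting model: a control token is prepended to each training sequence, and generation is conditioned on it. The corpus and tokens below are invented for the example; in practice a neural language model would play the role of this bigram table:

        from collections import Counter, defaultdict
        import random

        # Toy corpus with style labels (a hypothetical control signal).
        corpus = [
            ("<formal>", "we regret to inform you that the meeting is postponed"),
            ("<formal>", "please find attached the requested report"),
            ("<casual>", "hey the meeting got pushed lol"),
            ("<casual>", "yo check out the report i sent"),
        ]

        # Control-conditioned bigram counts, keyed by (control token, previous word):
        # the simplest analogue of feeding a control signal to a language model.
        counts = defaultdict(Counter)
        for ctrl, text in corpus:
            words = [ctrl] + text.split() + ["<eos>"]
            for prev, nxt in zip(words, words[1:]):
                counts[(ctrl, prev)][nxt] += 1

        def sample(ctrl, max_len=12):
            random.seed(0)
            out, prev = [], ctrl
            for _ in range(max_len):
                options = counts[(ctrl, prev)]
                if not options:
                    break
                nxt = random.choices(list(options), weights=list(options.values()))[0]
                if nxt == "<eos>":
                    break
                out.append(nxt)
                prev = nxt
            return " ".join(out)

        print(sample("<formal>"))
        print(sample("<casual>"))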

    Marie-Eve Picard

    Supervised by: Pierre Rainville

    Université de Montréal

    Using machine learning approaches to identify a brain signature of the facial expression of pain

    Facial expression is an important tool for communicating various kinds of information, including the manifestation of a state of pain, the presence of an immediate threat in the environment, and a possible need for help. The sensory (intensity) and affective (unpleasantness) dimensions of pain can be encoded in facial movements. The analysis techniques used so far to examine the relationship between facial expression and brain activity during the experience of pain have several statistical limitations with respect to assessing spatially distributed brain activity. The main objective of the proposed project is to better understand the neural mechanisms underlying the facial expression of pain. Functional magnetic resonance imaging (fMRI) data will be used to analyze changes in brain activity in response to painful (but not harmful) stimuli. More specifically, this project aims to use machine learning approaches (i.e., multivariate pattern analysis) to develop a brain signature of the facial expression of pain, in order to predict facial changes in response to painful stimuli in different contexts: phasic pain (short stimulation), tonic pain (long stimulation), and modulation of the sensory and affective dimensions of pain. In short, this project will address some shortcomings of the previously used univariate analyses, determine with better precision the neural bases of the facial expression of pain, and significantly advance our understanding of the brain mechanisms underlying non-verbal communication.

    Myriam Prasow-Émond

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    The first images of exoplanets orbiting white dwarfs, neutron stars and black holes

    X-ray binaries, consisting of a star orbiting a stellar-mass compact object (a white dwarf, neutron star or black hole), are fantastic laboratories for understanding physics under extreme conditions. Over the past decades, X-ray binaries have been the subject of a multitude of studies at various wavelengths, leading to remarkable advances in the physics of accretion and in the understanding of how jets of relativistic particles form in powerful magnetic fields. X-ray binaries are also excellent laboratories for understanding supernova explosions and the effects of these explosions on the system and its environment. Indeed, the presence of a neutron star or black hole in these systems directly implies that the star (and its potential planets) survived such an explosion. Several studies show that planets and brown dwarfs can exist in a multitude of environments, such as those orbiting very close to their host star (hot Jupiters) or those orbiting at distances of hundreds of astronomical units from the star. These discoveries show that planet formation and survival are poorly understood. This project therefore brings a new point of view: that of extreme conditions. Several X-ray binaries will be studied; data from the NIRC2/Keck (visible) and NOEMA (millimeter) telescopes were already acquired in 2018, and further time requests are under way. According to a preliminary analysis, the presence of astrophysical objects is confirmed, so this project promises surprising results for the astrophysics community.

    Chence Shi

    Supervised by: Jian Tang

    HEC Montréal

    Addressing the retrosynthesis problem using a graph-to-graph translation network

    Retrosynthesis analysis, which aims to identify a set of reactant graphs to synthesize a target molecule, is a fundamental problem in computational chemistry and is of central importance to organic synthesis planning as well as drug discovery. The problem is challenging because the search space of all possible transformations is huge. For decades, researchers have sought to assist chemists in retrosynthesis analysis with modern computing algorithms. Most existing machine learning work on this task relies on reaction templates that define the subgraph patterns of a set of chemical reactions; these require expensive graph isomorphism checks and generalize poorly to unseen molecular structures.

    To address the above limitations, in this project we formulate retrosynthesis prediction as a graph-to-graph translation task, i.e., translating a product graph into a set of reactant graphs, and propose a novel template-free approach to tackle the problem. We will show that our method eliminates the need for domain knowledge and scales well to large datasets. We will also empirically verify the superiority of our method on benchmark datasets.

    Shi Tianyu

    Supervised by: Luis Miranda-Moreno

    McGill University

    A Multi-agent Decision and Control Framework for Mixed-autonomy Transportation System

    As autonomous vehicles become more popular, there has been new emphasis on traffic control in the context of mixed autonomy, where only a fraction of vehicles are connected autonomous vehicles interacting with human-driven vehicles. A mixed-autonomy system raises several challenges. The first is how to encourage cooperation among different agents so as to maximize the total returns of the whole system. For example, when a gap opens in the lane adjacent to an autonomous vehicle, cutting in immediately forces the surrounding vehicles in that lane to brake sharply, which ends up producing a shock wave in the traffic flow. If the autonomous vehicle instead learns to cooperate with other agents, it will adjust its speed steadily and mitigate the negative impact on the whole system. The second challenge is how to improve communication efficiency in the multi-agent system. Autonomous vehicles have different characteristics from human-driven vehicles; for example, their reaction times and actions may differ. How to formulate a personalized policy for each agent is therefore also worth exploring. The third challenge is how to exploit expert knowledge from the transportation domain (e.g., green waves, max pressure, actuated control) to improve training efficiency and performance. The overall goal of this project is to design an effective decision and control framework for an efficient and safe mixed-autonomy system, mitigating shock waves and improving transportation efficiency. To address these problems, we will develop a novel multi-agent decision framework based on deep reinforcement learning to improve the decision-making and control performance of agents in a mixed-autonomy system.

    Rey Wiyatno

    Supervised by: Liam Paull

    Université de Montréal

    Exploiting Experiences and Priors in Semantic Visual Navigation

    Robotics has always been anticipated to revolutionize the world. However, despite the significant progress over the past few decades, robots have yet to be able to reliably navigate within an unstructured indoor environment. Semantic visual navigation is the task of navigating within a possibly unknown environment using only visual sensors, such as asking a household robot agent to “go to the kitchen”. Traditional “modular” methods combine a Simultaneous Localization and Mapping (SLAM) component with separate search, planning, and control modules. However, these methods do not scale well to large environments, and require significant engineering efforts. Alternatively, end-to-end “learning” solutions produce agent policies that directly infer actions from camera frames, by applying Deep Reinforcement Learning (DRL) techniques on large-scale datasets. Nevertheless, these policies tend to be reactive, do not explicitly exploit scene geometry, and are not data efficient. Furthermore, both modular and learning-based approaches do not sufficiently exploit knowledge from past task instances to improve subsequent search performance in both repeated environments as well as unseen yet similar environments. Our project explores the learning and use of spatial-semantic priors for more efficient semantic visual navigation. We aim to devise a framework that learns, updates, and exploits a topological-semantic map between discovered locations and objects within. We hypothesize that these advances will result in agents that generalize better to unseen similar environments, as well as becoming increasingly more efficient during repeated search queries within the same environment.

    Chengyuan Zhang

    Supervised by: Lijun Sun

    McGill University

    Statistical Modeling Framework to Understand Dynamic Traffic Patterns from Video Data

    Video-based traffic monitoring systems, the backbone of modern Intelligent Transportation Systems (ITS), play an essential role in sensing traffic conditions and detecting abnormal events and incidents. Semantically understanding traffic scenes and automatically mining traffic patterns from the video feed of a static camera can support traffic situation analysis and early warning of anomalous events. Given a video of a dynamic traffic scene with several different behaviors happening simultaneously, we want the ITS to learn and understand: How many typical traffic patterns are in the video? How can these patterns be interpreted semantically? What rules govern the transitions between these patterns? In this project, we will focus on traffic pattern recognition and anomaly detection from video data. We will: (i) construct a representation learning model to extract efficient features; and (ii) develop an unsupervised learning framework based on Bayesian nonparametrics to automatically learn the traffic patterns.

    Doctoral excellence scholarships

    Md Rifat Arefin

    Supervised by: Irina Rish

    Université de Montréal

    Developing Biologically Inspired Deep Neural Networks for Continual Lifelong Learning

    Humans can learn continually throughout their lifetimes, an ability called lifelong learning. This capability is also crucial for computational systems that interact with the real world and process continuous streams of data. However, current deep learning systems struggle to incrementally acquire new information over time from non-stationary data distributions: they tend to forget previously acquired knowledge when learning something new, a problem called catastrophic forgetting. In this project, we will study the biological factors underlying lifelong learning and their implications for the design of biologically motivated neural network architectures that improve the lifelong learning capability of computational systems by mitigating catastrophic forgetting.
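
    For orientation, one well-known approach to catastrophic forgetting, elastic weight consolidation (Kirkpatrick et al.), anchors parameters that were important for earlier tasks. The PyTorch sketch below shows the penalty on a toy model; the unit Fisher estimate is a stand-in, and this illustrates the problem area rather than the project's proposed architecture:

        import torch

        def ewc_penalty(model, fisher, old_params, lam=100.0):
            """Quadratic penalty anchoring parameters important to a previous
            task (importance estimated by the diagonal Fisher information)."""
            loss = 0.0
            for n, p in model.named_parameters():
                loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
            return lam / 2.0 * loss

        # Tiny demo: a linear model trained on "task B" while anchored to task A.
        torch.manual_seed(0)
        model = torch.nn.Linear(4, 2)
        old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
        fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # stand-in estimate

        x, y = torch.randn(32, 4), torch.randint(0, 2, (32,))
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        for _ in range(100):
            opt.zero_grad()
            task_loss = torch.nn.functional.cross_entropy(model(x), y)
            loss = task_loss + ewc_penalty(model, fisher, old_params)
            loss.backward()
            opt.step()
        print(round(loss.item(), 4))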

    Sumana Basu

    Supervised by: Doina Precup

    McGill University

    Off-Policy Batch Reinforcement Learning for Healthcare

    Artificial Intelligence (AI) has an increasing impact on our everyday life, including in health care. Today, most successful applications of AI in healthcare concern diagnosis or prediction, not treatment. But AI agents also have potential for sequential decision-making, such as assisting doctors in reassessing treatment options, as well as in surgery. The branch of AI that is a natural fit for such sequential decision-making problems is Reinforcement Learning (RL). So far, most successful applications of RL have been in video game environments; there are comparatively few applications of RL in healthcare. One reason is that, unlike in games, RL agents in healthcare cannot interact with the environment to explore new possibilities and learn the optimal treatment policy. Trying new treatment options on patients without knowing their consequences is not only unethical but potentially fatal. The agent therefore has to learn retrospectively from previously collected batches of data; in the RL literature, this is called off-policy learning. Challenges in off-policy evaluation, sparse rewards, non-stationary data, and sample inefficiency are some of the roadblocks to using RL safely and successfully in healthcare. During my Ph.D., I aim to tackle some of these challenges in the context of healthcare.
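
    For intuition about off-policy evaluation, here is a minimal ordinary importance sampling estimator on a simulated logged dataset; the two policies and the reward are invented for the example, whereas in the healthcare setting the behavior policy would be the clinicians' logged decisions:

        import numpy as np

        rng = np.random.default_rng(5)
        n_actions = 2

        def behavior_pi(s):
            return np.array([0.5, 0.5])     # logging (e.g., clinician) policy

        def target_pi(s):
            return np.array([0.2, 0.8])     # policy we want to evaluate

        def rollout(horizon=5):
            """Simulate one logged trajectory under the behavior policy."""
            traj = []
            for t in range(horizon):
                p = behavior_pi(t)
                a = rng.choice(n_actions, p=p)
                r = float(a)                # action 1 yields reward 1
                traj.append((t, a, r, p[a]))
            return traj

        # Ordinary importance sampling: reweight each trajectory's return by the
        # product of target/behavior action probabilities along it.
        values = []
        for _ in range(2000):
            traj = rollout()
            w = np.prod([target_pi(s)[a] / pb for s, a, r, pb in traj])
            g = sum(r for _, _, r, _ in traj)
            values.append(w * g)

        print("IS estimate:", np.mean(values))   # true target value is 0.8 * 5 = 4.0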

    Christopher Beckham

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Unsupervised representation learning

    Unsupervised representation learning is concerned with using deep learning algorithms to extract 'useful' features (latent variables) from data without any external labels or supervision. This addresses one of the issues with supervised learning, namely the cost and lack of scalability of obtaining labeled data. The techniques developed in this field have broad applicability, especially for training smart 'AI agents' and in domains where obtaining labeled data is difficult.

    'Mixup' (Zhang et al.) is a recently proposed class of data augmentation techniques that augment a training set with extra 'virtual' examples by constructing 'mixes' between random pairs of training examples and optimizing some objective on those mixed examples. While the original mixup algorithm simply performed these mixes in input space (which comes with a few limitations) for supervised classification, recent work (Verma et al., Yaguchi et al.) proposed performing these mixes in the latent space of the classifier instead, achieving superior results to the original work.

    One intuitive way to think about 'latent space mixing' is to imagine that the original data is generated by *many* latent variables, whose possible configurations increase exponentially with the number of latent variables. Because of this, we only see a *very small* subset of those configurations in our training set. Mixup can therefore be seen as allowing the network to explore *novel* combinations of the latent variables it has inferred (which may not already be present in the training set), making the network more robust to novel configurations of latent states (i.e., novel examples) at test time. Empirical results from the works cited corroborate this hypothesis.

The first stage of my PhD was exploring mixup in the context of unsupervised representation learning (building on the work of Verma et al, which I also co-authored), in which the goal is to learn useful latent variables from unlabeled data. This was done by leveraging ideas from adversarial learning and devising an algorithm which is able to mix between encoded states of real inputs and decode them into realistic-looking inputs indistinguishable from the real data. We showed promising results both qualitatively and quantitatively, and recently published our findings at the NeurIPS 2019 conference.

    Some preliminary experiments suggest that one of our proposed variants of ‘unsupervised mixup’ has a connection to ‘disentangled learning’, which explores the inference of latent variables which are conceptually ‘atomic’ but can be arbitrarily composed together to produce more abstract concepts (which is similar to how we as humans structure information in the brain). This lays the groundwork for some more exciting research to pursue during my PhD.

    Xinyu Chen

Supervised by: Nicolas Saunier

    Polytechnique Montréal

    City-Scale Traffic Data Imputation and Forecasting with Tensor Learning

With recent advances in sensing technologies, large-scale and multidimensional urban traffic data are collected on a continuous basis from both traditional fixed traffic sensing systems (e.g., loop detectors and video cameras) and emerging crowdsourcing/floating sensing systems (e.g., GPS trajectories from taxis/buses and Google Waze). These data sets provide unprecedented opportunities for sensing and understanding urban traffic dynamics and for developing efficient and reliable smart transportation solutions. For example, forecasting the demand and states (e.g., speed, volume) of urban traffic is essential to a wide range of intelligent transportation system (ITS) applications such as trip planning, travel time estimation, route planning, and traffic signal control, to name just a few. However, two critical issues undermine the use of these data sets in real-world applications: (1) missing and noisy data make it difficult to recover the true signal, and (2) it is computationally expensive to process large-scale data sets for online applications (e.g., traffic prediction). The goal of this project is to develop a new framework to better model local consistencies in spatiotemporal traffic data, such as the sensor dependencies and temporal dependencies resulting from traffic flow dynamics. The scientific objectives are to: (1) develop nonconvex low-rank matrix/tensor completion models that account for spatiotemporal dependencies/correlations (e.g., graph Laplacian [spatial] and time series [temporal]) and traffic domain knowledge (e.g., fundamental diagram, traffic equilibrium, and network flow conservation); (2) incorporate Gaussian process kernels and neural network structure
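To make the core idea concrete, the sketch below imputes missing entries of a synthetic sensor-by-time matrix with a plain rank-constrained factorization fit by gradient descent; the project's nonconvex tensor models with spatiotemporal regularizers go well beyond this. All dimensions, the rank and the learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank "traffic speed" matrix with ~40% missing entries.
n_sensors, n_times, rank = 30, 100, 3
U_true = rng.normal(size=(n_sensors, rank))
V_true = rng.normal(size=(n_times, rank))
X = U_true @ V_true.T
mask = rng.random(X.shape) > 0.4            # True where the entry is observed

# Factorized completion: fit U V^T to the observed entries only.
U = rng.normal(scale=0.1, size=(n_sensors, rank))
V = rng.normal(scale=0.1, size=(n_times, rank))
lr, lam = 0.01, 0.1
for _ in range(2000):
    R = mask * (U @ V.T - X)                # residual restricted to observed entries
    U -= lr * (R @ V + lam * U)             # gradient step with L2 regularization
    V -= lr * (R.T @ U + lam * V)

rmse = np.sqrt(((~mask * (U @ V.T - X)) ** 2).sum() / (~mask).sum())
print("imputation RMSE on missing entries:", rmse)
```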

    Abhilash Chenreddy

Supervised by: Erick Delage

    HEC Montréal

    Inverse Reinforcement Learning with Robust Risk Preference

RL/IRL methods provide powerful tools for solving a wide class of sequential decision-making problems under uncertainty. However, the practical use of these techniques has historically been limited by several factors: the high-dimensional continuous state and action spaces of many real-world decision problems, the stochastic and noisy nature of real-world systems compared to simulated environments, and the indifference of traditional reward and utility functions to the risk preference of the agent. One typical modeling premise in RL/IRL is to optimize expected utility (i.e., to assume that humans are risk-neutral), which deviates from actual human behavior under ambiguity. I am excited about the possibility of directing my future research towards building risk-aware MDP models, as they would provide stronger reliability guarantees than their risk-neutral counterparts; recent work suggests such an effort can provide stable solutions for high-dimensional state-space problems, making them more applicable to practical use cases. As an effort in this direction, under the guidance of Prof. Erick Delage, I am working towards developing risk-aware IRL/RL algorithms for portfolio selection problems. Applications I am interested in include, but are not limited to, i) learning the agent’s risk profile using inverse learning methods and ii) risk-sensitive exploration in the RL setting. Our work formulates the inverse learning model from a distributionally robust optimization (DRO) point of view, where the agent performs at least as well as the expert in terms of the risk-sensitive objective. We plan to achieve this by building an ambiguity set for the expert’s risk preference and training the agent with a worst-case approach, thus shielding the agent from ambiguity in the underlying risk distribution.

    Chloé Bourquin

Supervised by: Jean Provost

    Polytechnique Montréal

Measuring cerebral pulsatility and its impact on cognition in vascularly compromised mice using ultrasound imaging

Cardiovascular diseases can cause accelerated brain aging. Arteries such as the aorta and the carotids are rich in elastic fibres, which dampen the fluctuations of blood pressure (pulsatility) over the cardiac cycle in the downstream cerebral vessels. With age and disease, arteries stiffen, increasing downstream pulsatility and leading to microvascular damage. Mapping pulsatility across the entire cerebral vascular network could therefore become a biomarker for diagnosing neurodegenerative diseases. Until recently, following the pulse wave through the cerebral vascular network was not possible: optical microscopy can only measure microvessels at the surface of the brain, while high-field MRI can image the whole brain but lacks the spatiotemporal resolution and sensitivity to measure small vessels. A new ultrasound technique could meet this challenge: Ultrasound Localization Microscopy (ULM). Based on localizing and tracking microbubbles injected as contrast agents, it can map vessels across the whole brain with a resolution on the order of 5 µm. However, this method requires tracking microbubbles for 10 minutes to ultimately obtain a single image of the cerebral vasculature. Our objective is to make this method dynamic by synchronizing it with the ECG and respiration, so as to obtain not a single image but a movie spanning at least one cardiac pulse, in order to observe blood-flow velocity variations over the cycle and derive the pulsatility. This new method will demonstrate, for the first time, pulsatility variation across the whole brain; establish a correlation between increased pulsatility, cognitive decline and brain damage; and establish pulsatility measurement as a biomarker to track the progression of cardiovascular and/or neurodegenerative diseases.

    Theophile Demazure

Supervised by: Pierre-Majorique Léger

    HEC Montréal

Deep learning and classification of cognitive states for real-time modulation of human-machine interactions in automated environments.

The world of work is being profoundly transformed. Technologies such as robotics and applications of artificial intelligence are increasingly integrated into work tasks. The objective of this research is to take the human into account in an environment composed of machines. These machines cannot perceive that the employee they collaborate with is tired, mentally absent, or simply distracted. A colleague, in that situation, would adjust or warn the employee so that they regain their focus. The machine, on the other hand, would carry on without adjusting, increasing the risk of accident or error. To address this problem, this project develops a system that adapts to the cognitive state of its user, such as fatigue or mental workload. Brain-computer interfaces use neurophysiological measurements of the human to monitor, adapt, or be controlled. Inside them, machine learning algorithms classify the cognitive state from data captured in real time. Using the electrical signals emitted by the brain and the dilation of the pupil, it is possible to discriminate between several operator states over time.

The prototype developed will thus be able to instruct other machines to slow down, or to issue a warning, when the employee they collaborate with appears tired or inattentive. This prototype will be developed and evaluated in the laboratory in a controlled environment, as a proof of concept for industry. Brain-computer interfaces are today mainly used in medicine, for prostheses, speech-assistance systems and wheelchairs. The expected benefits are mainly in workplace safety (transportation, manufacturing) and in the optimization of human-machine interaction (human-machine collaboration).

    Sébastien Henwood

Supervised by: François Leduc-Primeau

    Polytechnique Montréal

    Coded Neural Network

Deep neural networks are enjoying widespread enthusiasm at the start of this decade. However, progress in the field comes with growing computational requirements that outpace Moore's law. In this context, we aim to propose a set of methods to optimize the energy requirements of deep neural networks by taking into account the physical characteristics (memory, processor, etc.) of the system hosting the network for its end use.

The objective is a method general enough to adapt to the varied tasks and networks that designers may wish to deploy in their applications, while reducing the energy footprint according to a controllable network-capacity/energy trade-off.
This work would, on the one hand, save energy on user devices (for example, smartphones), thereby facilitating offline use. On the other hand, it targets data-center workloads, which are highly energy-intensive.
In the long run, this research project will make the best use of the resources allocated to machine learning at inference time, helping to ensure its social acceptability as well as its technical and economic viability.

    Jad Kabbara

Supervised by: Jackie Cheung

    McGill University

    Computational Investigations of Pragmatic Effects in Language

This thesis focuses on natural language processing (NLP), specifically computational pragmatics, using deep learning methods. While most NLP research today focuses on semantics (the literal meaning of words and sentences), my research takes a different approach: I focus on pragmatics, which deals with the intended meaning of sentences, one that is context-dependent. Correctly performing pragmatic reasoning is at the core of many NLP tasks including information extraction, summarization, machine translation, and sentiment/stance analysis. My goal is to develop computational models where pragmatics is a first-class citizen, both in natural language understanding and in generation. I have already made strong progress toward this goal: I developed a neural model for definiteness prediction [COLING 2016] — the task of determining whether a noun phrase should be definite or indefinite — in contrast to prior work relying on heavily-engineered linguistic features. This has applications in summarization, machine translation and grammatical error correction. I also introduced the new task of presupposition triggering detection [ACL 2018 — best paper award], which focuses on detecting contexts where adverbs (e.g. “again”) trigger presuppositions (e.g., “John came again” presupposes “he came before”). This work is important because it is a first step towards language technology systems capable of understanding and using presuppositions, and because it constitutes an interesting testbed for pragmatic reasoning. Moving forward, I propose to examine the role of pragmatics, particularly presuppositions, in language understanding and generation. I will develop computational models and corpora that incorporate this understanding to improve: (1) summarization systems, e.g. in a text rewriting step to learn how to appropriately allocate adverbs in generated sentences to make them more coherent, and (2) reading comprehension systems, where pragmatic effects are crucial for the proper understanding of texts and where systems can answer questions of a pragmatic nature whose answers are not found explicitly in the text. By the end, the thesis would present the first study on presuppositional effects in language to enable pragmatically-empowered natural language understanding and generation systems.

    Caroline Labelle

Supervised by: Sébastien Lemieux

    Université de Montréal

Enhancing the Drug Discovery Process: Bayesian Inference to Evaluate Efficacy Characteristics of Potential Drugs Under Uncertainty

During the multi-phase drug-discovery process, many compounds are tested in various assays, generating a great deal of data from which efficacy metrics (EM) can be estimated. Compounds are selected with the aim of identifying at least one sufficiently potent and efficacious to proceed to preclinical testing. This selection is based on an EM meeting a specific threshold or on comparison with other compounds.

Current analysis methods provide point estimates of EM and hardly consider the inevitable noise present in experimental observations, thus failing to report the uncertainty on the EM and precluding its use during compound selection. We propose to extend our previously introduced statistical methods (EM inference and pairwise comparison) to the ranking of a panel of compounds and to combinatorial analysis (multiple compounds tested simultaneously). Given an EM threshold, we aim to identify the compounds with the highest probability of meeting that criterion.

We use a hierarchical Bayesian model to infer EM from dose-response assays (single- and multi-dose), yielding empirical distributions for the EM of interest rather than single point estimates. The assay’s uncertainty can thus be propagated to the EM inference and to compound selection. We are thereby able to identify all compounds of an experimental dose-response dataset with at least a 1% chance of being amongst the best for various given EM, and to characterize the effects of each compound in a combinatorial assay.
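The sketch below illustrates the core idea at toy scale: instead of a single fitted IC50, a posterior distribution over the metric is computed (here by brute-force grid evaluation under a one-parameter Hill curve), so assay noise propagates into selection probabilities. The model, flat prior and data are illustrative assumptions; the project's hierarchical model is substantially richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dose-response assay with a known IC50 and Gaussian noise.
doses = np.logspace(-3, 2, 10)                    # tested concentrations
ic50_true, noise = 0.5, 0.05
resp = 1 / (1 + doses / ic50_true) + rng.normal(0, noise, doses.size)

# Posterior over candidate IC50 values on a grid (flat prior).
ic50_grid = np.logspace(-3, 2, 500)
pred = 1 / (1 + doses[None, :] / ic50_grid[:, None])
loglik = -0.5 * ((resp - pred) ** 2).sum(axis=1) / noise**2
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Uncertainty-aware summaries rather than a single point estimate.
mean_ic50 = (post * ic50_grid).sum()
prob_potent = post[ic50_grid < 1.0].sum()         # e.g. P(IC50 below a threshold)
print(mean_ic50, prob_potent)
```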

    This novel methodology is developed and applied to the identification of novel compounds able to inhibit cellular growth of leukemic cells.

    Sébastien Lachapelle

Supervised by: Simon Lacoste-Julien

    Université de Montréal

    Uncertainty in Operations Research, Causality and Out-of-Distribution Generalization

My research focuses on two main directions: widening the operations research toolbox using recent advances in deep learning, and learning causal structures. Both have the potential to be useful in various applications, for example the optimization of railway operations, gene expression studies, and the understanding of protein interactions in human cells. Together with Emma Frejinger and her team at the CN Chair, we developed a methodology that predicts tactical solutions given only partial knowledge of the problem, using deep neural networks. We demonstrated the efficiency of the approach on the problem of booking intermodal containers on double-stack trains. Moreover, we are currently applying machine learning techniques to standard operations research problems such as the knapsack and travelling salesman problems, in the hope of gaining insight into classical algorithms for solving them.

More recently, I have been interested in the nature of causal reasoning and how machines could acquire it. Typical machine learning systems are good at finding statistical dependencies in data, but often lack the causal understanding which is necessary to predict the effect of an intervention (e.g. the effect of a drug on the human body). Together with my co-authors, we developed “Gradient-Based Neural DAG Learning”, a causal discovery algorithm which aims at going beyond simple statistical dependencies. We showed the algorithm was capable of finding known causal relationships between multiple proteins in human cells.
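The toy example below shows the gap between statistical dependence and causal structure that motivates such algorithms: observational correlation is symmetric in X and Y, while an intervention is not. It is a two-variable illustration only; the algorithm above learns a full DAG over many variables with neural networks under an acyclicity constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# True model: X -> Y. Observational dependence is symmetric in X and Y,
# but an intervention on Y does not move X, revealing the direction.
n = 100_000
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)     # mechanism: X causes Y
print(np.corrcoef(x, y)[0, 1])       # strong, direction-free dependence

# Intervention do(Y = 3): Y is set by hand, so under X -> Y the
# distribution of X is unchanged (its mean stays near 0).
x_do = rng.normal(size=n)            # X unaffected by do(Y)
print(x_do.mean())
```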

    In the future, I will work to make machine learning more adaptive and able to reuse past knowledge in order to learn new patterns faster. This is something humans do all the time, but which is hard for current algorithms. I believe causality is part of the answer, but other frameworks like meta-learning, transfer learning and reinforcement learning are going to be necessary. Apart from bringing us closer to human-level intelligence, making progress in this direction would benefit many applications. For instance, if a machine learning system is used to predict tactical solutions to a railway optimization problem, the distribution of problems it faces might shift due to changes in trade legislation, hence rendering the predicted solutions far from optimal. We should aim to build systems which can adapt to a changing world quickly.

    Antoine Boudreau LeBlanc

Supervised by: Bryn Williams-Jones

    Université de Montréal

Ecosystemic bioethics and big data: health, agriculture and ecology

Today’s problems are global, linking society, the economy and the environment to health. Antibiotic resistance, for example, stems from the misuse of antibiotics in healthcare and agriculture, which reduces their effectiveness. Attacking this problem requires broad collaborations between physicians, farmers and ecologists, but these remain limited by a good number of technical challenges (e.g., data sharing) and ethical ones (consent, security) that arise as soon as data and knowledge are integrated for concerted action. The objective of this thesis is to study the issues affecting the circulation of data between health, agriculture and ecology, in order to propose a data-governance model that maximizes both data access and data protection to support research, surveillance and intervention while maintaining the trust of data providers.

This project will base its ethical analysis on a mapping of the relationships between the key stakeholders who could support a data-sharing network spanning health, agriculture and ecology. Four case studies are under way, describing how this network is being constituted at the interministerial, intersectoral, interprofessional and interpersonal levels (ethics certification obtained). The ethnographic design, carried out in close collaboration with these four host settings, will support the writing of a governance framework through grounded theory. It will then be compared with international initiatives (Denmark, England, United States). This thesis will support the implementation of structuring intersectoral data-sharing networks in veterinary medicine in Quebec and lay the foundations of a governance framework for interconnecting databases across organizations and sectors.

    Maude Lizaire

Supervised by: Guillaume Rabusseau

    Université de Montréal

Connections between recurrent networks, weighted automata and tensor networks for learning with sequential data

Several times in history, discoveries have been made in parallel by different scientists. One need only think of infinitesimal calculus, developed independently by Newton, influenced by his work on the universal laws of motion, and by Leibniz, inspired by the philosophical principle of the infinitely small. At the intersection of several disciplines, such discoveries only reach their full potential through the contribution of different kinds of expertise. In this spirit, many equivalences can be drawn between formalisms developed in physics and in artificial intelligence. In particular, tensor networks, a pillar of the modern formulation of quantum physics, can be related to recurrent networks, one of the main families of deep learning models suited to structured data. The latter are in turn connected to weighted automata, models at the heart of formal methods and verification in theoretical computer science. Exploring the links between these three methods (tensor networks, recurrent networks and weighted automata) makes it possible to leverage the theoretical guarantees offered by formal methods and the expressiveness and many applications of recurrent networks, while building bridges to the applications of tensor networks in quantum materials and quantum computing. The project thus aims to create bridges between these disciplines and to exploit the progress made in each for the benefit of the others.

    Elena Massai

Supervised by: Marina Martinez

    Université de Montréal

    Neuroprosthesis development to recover the gait after spinal cord injury in rats

Spinal Cord Injury (SCI) interrupts the communication between the brain and the spinal locomotor networks, causing leg paralysis. When SCI is incomplete (iSCI), some nerve fibers survive the lesion and patients with iSCI can eventually regain some motor abilities. The goal of this study is to assess, in the rat model, whether combined brain and spinal stimulation can lead to superior locomotion recovery after spinal cord injury. Artificial Intelligence (AI) techniques will be employed to track motor activity, drive the stimulation and optimize the strategy in real time. By refining the spatiotemporal stimulation parameters, the intelligent algorithm will help the rat’s brain generate leg trajectories featuring better ground clearance during swing, stronger leg extension and a higher posture during stance. We expect that optimized neuroprosthetic stimulation will produce locomotor patterns closer to those of intact rats and will facilitate the recovery of voluntary control of locomotion. The results will provide a framework for the future development of efficient neuromodulation interfaces and prosthetic approaches for rehabilitation.

    Antoine Moevus

Supervised by: Benjamin De Leener

    Polytechnique Montréal

    Quantitative susceptibility mapping framework for assessing cortical development in neonates after severe deoxygenation at birth

Hypoxic ischemic encephalopathy (HIE) is a newborn brain pathology that is common but, unfortunately, not well understood. HIE affects 1.5 per 1000 live births in developed countries and is the leading cause of death and of devastating cognitive, behavioural, and physical disabilities in neonates. The most effective clinical treatment, therapeutic hypothermia, improves the survival rate; however, the repercussions of HIE remain unclear for survivors. To date, the understanding of altered cortical growth mechanisms after HIE is incomplete, but a promising non-invasive magnetic resonance imaging (MRI) technique called quantitative susceptibility mapping (QSM) provides new brain biomarkers that can help us understand how HIE affects brain development. Yet, because the cortical development of neonates is rapid and sophisticated, standard clinical neuroimaging tools, such as MRI templates, are not suited for neurodevelopmental analysis in neonates.

    Therefore, we propose to implement new methods for solving the QSM reconstruction problem and improve the common MRI template by developing adaptive age-based longitudinal templates. We will adopt a data-driven strategy with deep learning in order to create a new framework for the pediatric and neurology communities.

    Alexis Montoison

Supervised by: Dominique Orban

    Polytechnique Montréal

Multi-precision methods for optimization and linear algebra

The goal of this research project is to develop methods that can switch from one machine precision to another while solving large-scale optimization problems, performing the bulk of the operations in low precision, where they are cheap and require little energy.

Our preliminary results indicate energy savings of up to 90% on some problems.

These methods apply in particular to systems biology, which requires solutions in quadruple precision, and to machine learning, where half precision is increasingly popular. On emerging specialized platforms that natively support these new precisions, such as Nvidia’s Turing graphics cards, which implement half precision, or the IBM Power9 processor, which implements quadruple precision, these methods will be able to extract the full benefit of multi-precision computing.
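A classic instance of this idea is mixed-precision iterative refinement: factor and solve cheaply in low precision, then correct the solution with residuals computed in high precision. The sketch below, with an arbitrary well-conditioned test system, is only meant to convey the mechanism; the project targets large-scale optimization methods, not this toy linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Well-conditioned test system (diagonally dominant by construction).
n = 200
A = rng.normal(size=(n, n)) + n * np.eye(n)
b = rng.normal(size=n)

# Solve in float32 (cheap, low energy), then refine in float64.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(3):                       # a few refinement steps
    r = b - A @ x                        # residual computed in high precision
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d                               # correction from the cheap solver

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # near float64 accuracy
```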

In the era of big data and the explosion of information, algorithms that deliver significant energy savings on suitable platforms are an investment in Canada’s future, in terms of both the volume of data processed and the environment.

    Amine Natik

Supervised by: Guillaume Lajoie

    Université de Montréal

    Decomposition of information encoded in learned representations of recurrent neural networks

The human brain contains billions of neurons that communicate with each other through trillions of synapses, enabling us to learn new skills, solve complex tasks and understand intricate concepts. Everything we do, such as walking, eating, communicating, and learning, is a function of these neurons firing in certain patterns, in specific locations. This sophisticated biological neural network is the outcome of millions of years of evolution. Recent advances in deep learning have proposed several artificial neural network architectures for solving complex learning tasks, taking simplified inspiration from the neural circuits in our brains. Examples include convolutional neural networks for image and audio processing, recurrent neural networks for sequence learning, and autoencoders for dimensionality reduction. Both biological and artificial networks rely on efficient calibration of synapses (or connection weights) to match desired behaviours. This adjustment is how a network “learns”, but it is a complicated process that is not well understood. An important substrate of networks after learning is the internal low-dimensional representation found in the joint activity of neural populations, which emerges when performing a learned task. The present research aims to explore these internal representations and to address the question of how structural properties of network connectivity impact the geometry, dimensionality and learning mechanisms encoded by these internal features. We plan to answer this question by leveraging multidisciplinary data exploration tools from graph signal processing, dimensionality reduction, representation learning and dynamical systems. We expect this project to yield a better understanding of how natural and artificial neural networks solve complicated tasks, which in turn will help us improve existing architectures and build new models from deeper understanding rather than trial and error.
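One simple entry point to such analyses, sketched below under purely illustrative assumptions (a random vanilla RNN driven by noise), is to collect hidden states and measure how much of their variance a few principal components explain, a common proxy for the dimensionality of the internal representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random recurrent network; weights scaled to keep dynamics stable.
n_hidden, n_steps = 100, 500
W = rng.normal(scale=1.0 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
h = np.zeros(n_hidden)
states = []
for t in range(n_steps):
    u = 0.1 * rng.normal(size=n_hidden)   # toy input drive
    h = np.tanh(W @ h + u)                # vanilla RNN update
    states.append(h)

# PCA of the hidden states via SVD of the centered state matrix.
S = np.array(states)
X = S - S.mean(axis=0)
_, s, _ = np.linalg.svd(X, full_matrices=False)
var = s**2 / (s**2).sum()
print("variance explained by top 5 PCs:", var[:5].sum())
```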

    Cédric Poutré

Supervised by: Manuel Morales

    Université de Montréal

    Statistical Arbitrage of Internationally Interlisted Stocks

In this project, we will investigate a novel form of statistical arbitrage that combines artificially created financial instruments in a high-frequency world, meaning that we will operate on the millisecond timescale. These instruments will be constructed in such a way that they offer very interesting statistical properties, enabling us to exploit violations of the law of one price in the Canadian and American markets. This arbitrage activity is essential, since it makes markets more efficient by eliminating mispricing in equities that are quoted on both markets. The novel strategy will be tested on a large basket of equities across three trading venues in North America; given that we are working in high frequency, millions of market observations are ingested and analyzed daily by our trading algorithms. In order to be proactive in the markets, to make extremely fast and accurate predictions, and because of the complex nature and abundance of financial data, we will rely on machine learning algorithms to guide our trading decisions.
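As a stylized illustration of a law-of-one-price signal, the sketch below simulates one equity quoted on two venues and flags moments where the z-scored cross-venue spread becomes large. Everything here (the price process, window length and threshold) is an illustrative assumption, far simpler than the project's millisecond-scale machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same equity on two venues: both quotes track a latent fair value.
n = 10_000
fair = 100 + np.cumsum(rng.normal(0, 0.02, n))
px_ca = fair + rng.normal(0, 0.05, n)          # Canadian venue quote
px_us = fair + rng.normal(0, 0.05, n)          # US venue quote (FX-adjusted)

# Rolling z-score of the cross-venue spread.
spread = px_ca - px_us
window = 200
mu = np.convolve(spread, np.ones(window) / window, mode="valid")
sd = np.array([spread[i:i + window].std() for i in range(n - window + 1)])
z = (spread[window - 1:] - mu) / sd

# Trade signal: sell the rich leg, buy the cheap one, when |z| > 2.
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))
print("fraction of time a trade is signalled:", np.mean(signal != 0))
```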

    Carter Rhea

Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    A Novel Deep Learning Approach to High-Energy Astrophysics

Despite machine learning’s recent rise to stardom in the applied sciences, the astronomy community has been reluctant to accept it. We propose to gently introduce several forms of machine learning to the community through the study of the hot gas pervasive in galaxy clusters. Currently, emission spectra from galaxy clusters are studied by fitting physical models to them and using those models to extract relevant physical parameters. Unfortunately, this method has several inherent pitfalls. We plan to train different algorithms — from a random forest classifier to a convolutional neural network — to parse the necessary thermodynamic variables from the emission spectra. The fundamental goal of this project is to create an open-source pipeline and suite of tutorials which integrate machine learning into the study of galaxy clusters.

    Charly Robinson La Rocca

Supervised by: Emma Frejinger

    Université de Montréal

    Learning solutions to the locomotive scheduling problem

    Given a set of demands on a railway network, how should one assign locomotives to trains in order to minimize total costs and satisfy operational constraints? This question is critical for Canada’s largest railway company: Canadian National Railways. Given the size of their network, even a small relative gain in efficiency would produce significant savings. The goal of this research is to explore recent advances in machine learning in order to efficiently solve the locomotive assignment problem. The idea is to train a neural network on precomputed solutions of the problem with the aim of learning the correct configuration of locomotives for a given train. By combining both integer programming and deep learning, the computational time can be reduced by at least an order of magnitude compared to integer programming alone. This is a solution that is significantly more efficient and practical for train operators.
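The sketch below conveys the imitation idea on a made-up stand-in problem: label each "train" with the cheapest feasible locomotive configuration found by exhaustive search, then train a classifier to reproduce those choices from simple features. The features, configurations and cost model are all illustrative assumptions, not CN data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical configurations: (number of big locos, number of small locos).
configs = np.array([[1, 0], [1, 1], [2, 1]])
power = configs @ np.array([4000.0, 2000.0])   # horsepower per configuration
cost = configs @ np.array([3.0, 1.0])          # operating cost per configuration

# Toy train features and a made-up power requirement.
tonnage = rng.uniform(1000, 12000, size=5000)
grade = rng.uniform(0, 2, size=5000)
required = 0.3 * tonnage + 1000.0 * grade

# "Precomputed solutions": cheapest feasible configuration for each train.
feasible = power[None, :] >= required[:, None]
labels = np.where(feasible, cost[None, :], np.inf).argmin(axis=1)

# Imitate the optimizer with a classifier; report held-out accuracy.
X = np.column_stack([tonnage, grade])
clf = LogisticRegression(max_iter=1000).fit(X[:4000], labels[:4000])
print("held-out accuracy:", clf.score(X[4000:], labels[4000:]))
```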

    Davood Wadi

Supervised by: Sylvain Sénécal

    HEC Montréal

    Cognition-Based Auto-Adaptive Website User Interface in Real Time

A personal message designed specifically for the needs and tastes of consumers has always been the goal of media outlets, retailers, and social activists. Here at Tech3Lab, we are launching a massive study of personalization in an unprecedented way: by analyzing neurophysiological and psychophysiological signals of the body to determine the best possible look and feel of websites, so as to improve user experience and best convey the intended message.

Previously, auto-adaptive website personalization was carried out mostly by guesswork and theory, with no real evidence for the parameters used. Thanks to the equipment in Tech3Lab, such as EEG, fNIRS, physiological measurement instruments, and eye tracking, we are able to base our adaptive system on direct signals from the body.

    This interdisciplinary study of cognitive neuroscience, marketing, and data science has the potential to revolutionize the approach of designers, developers, and editors to website design by studying auto-adaptive websites using direct body measures.

    Zichao Yan

Supervised by: William Hamilton

    McGill University

    Bridging the gap between structures and functions: learning interpretable graph neural representation of RNA secondary structures for functional characterization

Cells are the basic units of life and their activity is regulated by many delicate subcellular processes that are crucial to their survival. It is therefore important to gain more insight into the complex control mechanisms at play, both to obtain a better fundamental understanding of biology and to help understand diseases caused by defects in these mechanisms. We are particularly interested in the regulatory roles played by RNA molecules in the post-transcriptional phase, such as subcellular localization and RNA-protein interactions. RNA secondary structures, a representation of how RNA sequences fold onto themselves, can have a significant impact on a molecule’s regulatory functions through its interaction with various mediating agents such as proteins, RNAs and small molecules. Therefore, to fully exploit RNA secondary structures and better understand their functions, we propose a novel framework of interpretable graph neural representations of RNAs. This may ultimately lead to the design of RNA-based therapeutics for diseases such as neurodegenerative disorders and cancers, whose success would crucially depend on our ability to understand the relations between RNA structures and functions.

    Internship grants: Data to tell

    Anaïs Babio

    Université de Montréal

Internship at Synapse-C, specializing in data science

    Marc Boulanger

    Université de Montréal

Internship at Radio-Canada, specializing in communication

    Stephanie Cairns

    McGill University

Internship at CIRANO, specializing in data science

    Ève Campeau-Poirier

    Université de Montréal

Internship at Synapse-C, specializing in data science

    André-Anne Côté

    HEC Montréal

Internship at Synapse-C, specializing in communication

    Ambre Giovanni

    Concordia

Internship at Le Devoir, specializing in data science

    Philippe Robitaille-Grou

    Université de Montréal

Internship at Le Devoir, specializing in data science

    Catherine Soum

    Université de Montréal

Internship at CIRANO, specializing in communication

    Jérémie Tousignant

    Université de Montréal

Internship at Radio-Canada, specializing in data science

    Postdoc-entrepreneur program

    Marco Bonizzato

    Supervised by: Marina Martinez

    Université de Montréal

    Company: NeuralDrive

    NeuralDrive is a medical-device start-up with a revolutionary view on AI-based neurostimulation therapy for people with spinal cord injury. The device applies neurostimulation of the brain, nerves and muscles to improve the efficiency of motor training, reverse paralysis and enable walking again.

    Yann-Seing Law-Kam Cio

Supervised by: Sofiane Achiche

    Polytechnique Montréal

    Company: DesignBot inc.

    The project goal is to develop a software solution aimed at alleviating the burden on engineers and designers moving from product ideas to functional proof of concept. The software guides engineers via a methodology, tested using research and artificial intelligence (AI) algorithms, that allows them to identify the key functionalities, properties and means needed to ensure their technology product is functional.

    Postdoctoral research funding

    Ammar Alsheghri

Supervised by: François Guilbault

    Polytechnique Montréal

Deep Learning Approach to Generate Patient-Specific Replacement Teeth.

Dental offices face hundreds of thousands of dental reconstructions per year. Each dental reconstruction typically requires a dental professional to manually design and input the characteristics of the tooth to be reconstructed. This time-consuming process is difficult to reproduce between professionals and hence leads to great variability in quality. This project will use deep learning approaches to develop a new methodology that automatically designs patient-specific teeth. Using a dataset of roughly 5,000 digitized arches as a gold standard, neural networks will be trained to generate and/or deform mesh models to yield a volumetric surface representing the tooth to be reconstructed in its spatial context. The resulting integrated system will be designed to learn continuously: teeth generated by the system can be modified by the dental professional making a restoration, and the resulting modification will be used to retrain the network and increase its effectiveness. This project aims to generate dental restorations in a few seconds through artificial intelligence, replacing the current manual process that can take between 30 and 90 minutes.

    Kartik Ahuja

Supervised by: Ioannis Mitliagkas

    Université de Montréal

    Theory and Methods for OOD Generalization and Robust Learning.

Existing machine learning models are trained using empirical risk minimization (ERM). These models are known to generalize well when the test and train distributions are similar. In many real-life applications, we expect models to be robust to scenarios where the train and test distributions differ, i.e., out-of-distribution (OOD) generalization. ERM-based models often have poor OOD generalization. In this project, we aim to build theory and methods for OOD generalization.

In recent works, it has been shown that incorporating principles of causality into traditional machine learning is key to addressing OOD generalization. In our recent work, we showed that incorporating causality into the ERM formulation transforms a standard optimization problem into solving for the Nash equilibrium of a special game. Building on these recent works, we will explore the following problems in this project (a toy sketch of one related training objective follows the list).

    i. Translate the standard PAC learnability notions from standard generalization to OOD generalization.
ii. Develop methods using game theory and robust optimization that provably learn predictors that exhibit OOD generalization.
iii. Incorporate causality into other areas, e.g., reinforcement learning, adversarial learning and unsupervised learning.
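For concreteness, here is a hedged sketch of one well-known objective in this family, an IRMv1-style invariance penalty (Arjovsky et al.) that biases ERM toward predictors working across environments. It is related to, but distinct from, the game-theoretic formulation mentioned above; the model, data and penalty weight are illustrative assumptions.

```python
import torch

def irm_penalty(logits, y):
    """Squared gradient of the loss w.r.t. a dummy scale on the classifier."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad ** 2

torch.manual_seed(0)
model = torch.nn.Linear(2, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Two toy "environments" (random placeholders); in real uses each would
# carry a different spurious correlation the penalty discourages.
envs = [(torch.randn(256, 2), torch.randint(0, 2, (256, 1)).float())
        for _ in range(2)]

for _ in range(100):
    erm, pen = 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        erm = erm + torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
        pen = pen + irm_penalty(logits, y)
    loss = erm + 10.0 * pen     # penalty weight is a hyperparameter
    opt.zero_grad()
    loss.backward()
    opt.step()
```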

    Taoli Cheng

Supervised by: Aaron Courville

    Université de Montréal

    Physics-inspired Deep Generative Modeling and New Physics Search for High Energy Physics.

After the discovery of the Higgs boson in 2012, the Large Hadron Collider (LHC) at CERN, which explores the frontier of elementary particle physics, has focused on searching for new physics signals. However, it is challenging to find rare signals in the large amount of data produced at the LHC, and machine-learning-assisted methods offer a modern way forward. This project will focus on physics-inspired generative modeling. Taking advantage of domain knowledge of the underlying physics theories, physics-aware neural nets learn powerful representations by implementing physics laws within the architecture. A physics-aware generative model built on a set of fundamental physics laws will conserve these laws by construction and produce realistic modeling of physics events. At the same time, the generative models will be usable for model-agnostic novelty detection, assisting the search for new physics at the LHC.

    Nehme El-Hachem

Supervised by: Vincent-Philippe Lavallée

    Université de Montréal

    Targeting leukemia stem cells using computational systems biology approaches.

My research project at the CHU Sainte-Justine Research Center will primarily exploit single-cell RNA sequencing analytical tools to gain first insights into the functional impact of mutations driving stem cell self-renewal, a critical step in the development of acute myeloid leukemia (AML), a type of leukemia originating in the bone marrow from immature myeloid progenitors and affecting both children and adults. Our novel computational approaches will efficiently integrate sequencing data with large pharmaco-genomic databases to identify drug repurposing opportunities that can target specific molecular hits required for leukemic transformation.

    Jessie Galasso-Carbonnel

Supervised by: Houari Sahraoui

    Université de Montréal

Assisting software development in the age of big data.

The era of cloud computing and big data has increased the need for complex software in many domains, such as health and energy. In this context, automating as much as possible the creation, correction and, more generally, the manipulation of code has become an important challenge. In recent years, much work has explored optimization techniques such as evolutionary algorithms and artificial intelligence for automatic bug fixing and code generation. However, this work focuses on very specific cases and offers little possibility of generalization. The main idea of our project is to view the tasks of writing, correcting, refining and generating code as a continuum in which it is possible to substantially reduce the knowledge supplied by the developer, and to compensate for this reduction by abstracting knowledge from massive software data repositories. We propose to explore several optimization techniques (multi-objective genetic programming) and machine learning techniques (code embeddings and deep neural networks), as well as their combination, to abstract this knowledge and deliver it according to the current development task.

    Emma Glennon

Supervised by: Timothée Poisot

    Université de Montréal

    Algorithmic outbreak detection for low-resource settings.

Infectious disease outbreaks are most easily controlled when detected quickly, but a lack of access to diagnostic resources can make early detection difficult. This problem is compounded in low-resource settings and those with underfunded public health infrastructure, which face challenges in both detecting and controlling infectious disease.

This project aims to develop algorithms and simple tools for outbreak detection and identification. Symptom data is already cheaply and routinely collected in many low-resource settings. We propose to use data science techniques to automatically process this data, and to create accessible software tools that help public health agencies access these quantitative insights. Taken together, this work will help public health agencies in low-resource settings identify and control emerging outbreaks.
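A minimal sketch of the kind of rule such tools can automate is shown below: an EARS-C1-style alert that flags a day whose symptom count exceeds the recent 7-day baseline mean by three standard deviations. The simulated counts and injected outbreak are illustrative assumptions; real systems must also handle seasonality, reporting delays and multiple data streams.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily symptom reports with an outbreak injected at day 100.
days = 120
counts = rng.poisson(10, days).astype(float)
counts[100:] += np.arange(20) * 1.5

# EARS-C1-style rule: alert when today's count exceeds the mean of the
# previous 7 days by 3 standard deviations.
alerts = []
for t in range(7, days):
    window = counts[t - 7:t]
    thresh = window.mean() + 3 * window.std(ddof=1)
    if counts[t] > thresh:
        alerts.append(t)

print("alert days:", alerts)
```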

    Kevin Kovalchik

Supervised by: Etienne Caron

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Machine learning-assisted identification of SARS-CoV-2 epitopes from mass spectrometry data.

The current COVID-19 pandemic highlights the importance of rapid vaccine development platforms. A key step in vaccine development is the identification of viral peptide antigens presented by major histocompatibility complexes (MHC) on cell surfaces, termed MHC-associated peptides (MAPs). Currently, mass spectrometry (MS) is the only platform which allows for the direct, systematic and unbiased identification of MAPs from clinical samples (MS-immunopeptidomics). Using MS data from the SysteMHC Atlas, the largest public repository of MS-immunopeptidomics data, we will build machine learning models for the accurate prediction of MS fragmentation patterns and retention times of MAPs, key features used in peptide identification. After validation, these models will be integrated as core components of a complete peptide identification workflow for MS data. This workflow will be used to identify SARS-CoV-2 epitopes from MS data of infected cells and tissues as part of a collaboration with several academic (IRIC, CHUSJ), private (Nexelis and Trans-Hit Bio) and federal organizations (NRC) to support validation of COVID-19 vaccine efficacy. Beyond the immediate application, the proposed workflow will be applicable to any peptide-centric MS data and will find broad applications in immuno-oncology and systems biology.

    Hiroshi Mamiya

Supervised by: Erica Moodie

    McGill University

    Building precision retail grocery strategies to promote healthy food purchasing using massive consumer panel data.

Diets consisting of high-calorie, nutrition-poor foods are one of the leading causes of morbidity and mortality, increasing the risk of obesity, type 2 diabetes, cardiovascular disease, infectious diseases and more. We aim to leverage the idea of precision medicine to build precision retail strategies: geographically, or even individually, targeted interventions that promote healthy food purchasing choices. Doing so requires massive data and, consequently, efficient data storage and analytic capacities. In an unprecedented approach, I will devote my postdoctoral research to implementing precision retail in health, using a decade of individual consumer data from a large, population-based sample to develop efficient analytic methods and approaches that motivate healthy diets in vulnerable populations.

    Alexandre Payeur

Supervised by: Guillaume Lajoie

    Université de Montréal

    Identifying and guiding learning dynamics in the brain using brain-machine interfaces.

    Brain-machine interfaces (BMIs) are an emerging technology with great potential for helping patients with paralysis or motor disabilities. They rely on the brain’s learning capabilities and machine learning to enable brain circuits to control devices such as a computer cursor or a prosthetic arm. Beyond their clinical benefits, implanted BMIs offer unique access to the brain’s learning process itself, because the recorded neural activity exclusively controls the output. The situation is thus similar to artificial networks, i.e. relevant network states are observed and learning rules can be studied in an end-to-end fashion. As a tool for basic neuroscience, BMIs could provide deep insights into the principles of cortical learning. Adopting an approach at the interface of neuroscience and machine learning, we propose to exploit knowledge about the training and optimization of artificial neural networks to better understand learning in the motor cortex, and to develop algorithms that interact more seamlessly with the brain for robust BMIs.

    Ramesh Ramasamy Pandi

Supervised by: Yossiri Adulyasak

    HEC Montréal

    GPU-based Data-driven Framework for Real-time Dispatching of Autonomous Mobility-on-Demand.

This IVADO project is primarily motivated by the emerging concept of autonomous mobility-on-demand (AMoD). AMoD is a transformative and rapidly developing mode of mobility service that offers smart and efficient passenger transportation using self-driving vehicles while reducing negative externalities such as congestion and pollution. The main goal of this project is to develop a generic framework for AMoD that integrates novel GPU-based data-driven algorithms to perform non-myopic real-time dispatching of large-scale ride-sharing systems, which must handle tens of thousands of customers per hour. Specifically, we plan to develop GPU-accelerated optimization approaches, deep learning (DL) algorithms for efficient demand prediction, and DL-based predictive fleet control policies for systems with multiple modes of vehicles. We will conduct simulations on real-world transportation networks to fully analyze the feasibility and effectiveness of the proposed framework.

    Wenshuo Wang

Supervised by: Lijun Sun

    McGill University

    Interaction-Aware Decision-Making for Autonomous Driving in Urban Environment.

Autonomous driving will soon be sufficiently reliable and affordable to replace most human driving, providing independent mobility to non-drivers, reducing driver stress, and alleviating many urban problems. With recent advances in autonomous driving technology, prototype vehicles are already running on highways. However, given the enormous complexity of driving tasks in cities, it is widely acknowledged that fully autonomous driving will take decades to achieve in urban environments. Mixed traffic, where autonomous vehicles share road space with human drivers, is inevitable before full autonomy is achieved. It is therefore critical to develop autonomous driving decision-making frameworks that can effectively learn and understand the intent of human drivers and adapt to their driving styles on public roads. The overall objective of this project is to develop a closed-loop interaction-aware decision-making framework and algorithms for autonomous driving in complex urban environments, with a particular focus on urban intersections, by leveraging human intent prediction. This project will achieve four sub-objectives progressively: (1) process video-based sequential data for complex scenarios, (2) learn multi-agent spatial interaction representations, (3) predict human driver intents in space and time to support decision making, and (4) integrate a predictable decision-maker to form a closed-loop interaction-aware framework.

    Funding of fundamental research projects

    Charles Audet (chercheur principal)

Team: Sébastien Le Digabel, Michael Kokkolaras, Miguel Diage Martinez

    Polytechnique Montréal

    Combining machine learning and blackbox optimization for engineering design

The efficiency of machine learning (ML) techniques relies on many mathematical foundations, one of which is optimization and its algorithms. Some aspects of ML can be approached using the simplex method, dynamic programming, line search, or Newton and quasi-Newton descent techniques. But many ML problems do not possess the exploitable structure necessary for the application of these methods. The objective of the present proposal is to merge, import, specialize and develop blackbox optimization (BBO) techniques in the context of ML. BBO considers problems in which the analytical expressions of the objective function and/or of the constraints defining an optimization problem are unavailable. The most frequent situation is when these functions are computed through a time-consuming simulation. These functions are often nonsmooth, contaminated by numerical noise, and can fail to produce a usable output. Research in BBO has grown constantly over the last 20 years and has seen a variety of applications in many fields. The research will be bidirectional: we plan to use and develop BBO techniques to improve the performance of ML algorithms, and conversely, to deploy ML strategies to improve the efficiency of BBO algorithms.
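The sketch below shows the simplest member of the direct-search family that BBO builds on: a compass search that only ever evaluates the objective pointwise, never its gradient. The quadratic stand-in objective is an illustrative assumption; practical BBO methods (e.g. the MADS family) are far more elaborate.

```python
import numpy as np

def blackbox(x):
    """Stand-in for an expensive simulation; the solver never sees this formula."""
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

x = np.zeros(2)
fx = blackbox(x)
step = 1.0
while step > 1e-6:
    improved = False
    for d in np.vstack([np.eye(2), -np.eye(2)]):   # poll the 4 compass directions
        cand = x + step * d
        f = blackbox(cand)
        if f < fx:                                  # accept the first improvement
            x, fx, improved = cand, f, True
            break
    if not improved:
        step /= 2.0                                 # shrink the mesh on failure

print(x, fx)   # converges near the minimizer (1.0, -0.5)
```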

    Julien Cohen-Adad, Polytechnique Montréal

Team: Yoshua Bengio, Joseph Cohen, Nicolas Guizard, Kawin Setsompop, Anne Kerbrat, David Cadotte

    Physics-informed deep learning architecture to generalize medical imaging tasks

The field of AI has flourished in recent years; in particular, deep learning has shown unprecedented performance on image analysis tasks, such as segmentation and labeling of anatomical and pathological features. Unfortunately, while dozens of deep learning papers applied to medical imaging get published every year, most methods are tested in a single center: in the rare cases where the code is publicly available, the algorithm usually fails when applied to other centers, which is the “real-world” scenario. This happens because images from different centers have different features than the images used to train the algorithm (contrast, resolution, etc.). Another issue limiting the performance potential of deep learning in medical imaging is that little data and few manual labels are available, and the labels are themselves highly variable across experts. The main objective of this project is to push the generalization capabilities of medical imaging tasks by incorporating prior information from MRI physics and from inter-rater variability into deep learning architectures. A secondary objective will be to disseminate the developed methods to research and hospital institutions via open-source software (www.ivadomed.org), in-situ training and workshops.

    Patricia Conrod, Université de Montréal

Team: Irina Rish, Sean Spinney

    A neurodevelopmentally-informed computational model of flexible human learning and decision making

The adolescent period is characterized by significant neurodevelopmental changes that impact reinforcement learning and the efficiency with which such learning occurs. Our team has modelled passive-avoidance learning using a Bayesian reinforcement learning framework. Results indicated that parameters estimating individual differences in impulsivity, reward sensitivity, punishment sensitivity and working memory best predicted human behaviour on the task. The model was also sensitive to year-to-year changes in performance (cognitive development), with individual components of the learning model showing different developmental growth patterns and relationships to health-risk behaviours. This project aims to expand and validate this computational model of human cognition to: 1) better measure neuropsychological age/delay; 2) understand how learning parameters contribute to human decision-making processes on more complex learning tasks; 3) simulate better learning scenarios to inform the development of targeted interventions that boost human learning and decision making; and 4) inform next-generation artificial intelligence models of lifelong learning.

    Numa Dancause, Université de Montréal

Team: Guillaume Lajoie, Marco Bonizzato

    Novel AI driven neuroprosthetics to shape stroke recovery

Stroke is the leading cause of disability in Western countries. After stroke, patients often have abnormally low activity in the part of the brain that controls movements, the motor cortex. However, the malfunctioning motor cortex receives connections from multiple spared brain regions. Our general hypothesis is that neuroprostheses interfacing with the brain can exploit these connections to help restore adequate motor cortex activation after stroke. In theory, brain connections can be targeted using new electrode technologies, but this problem is highly complex and cannot be solved by hand, one patient at a time. We need automated stimulation strategies to harness this potential for recovery. Our main objective is thus to develop an algorithm that efficiently finds the best residual connections to restore adequate excitation of the motor cortex after stroke. In animals, we will implant hundreds of electrodes in the diverse areas connected with the motor cortex. The algorithm will learn the pattern of stimulation that is most effective at increasing activity in the motor cortex. For the first time, machine learning will become a structural part of neuroprosthetic design. We will use these algorithms to create a new generation of neuroprostheses that act as rehabilitation catalysts.

    Michel Denault, HEC Montréal

Team: Dominique Orban, Pierre-Olivier Pineau

    Paths to a cleaner Northeast energy system through approximate dynamic programming

Our main research question is the design of greener energy systems for the American Northeast (Canada and USA). Some of the sub-questions are as follows. How can renewable energy penetrate the markets? Are supplementary power transmission lines necessary? Can energy storage mitigate the intermittency of wind and solar power? Which greenhouse gas (GHG) reductions are achievable? What is the cost of such changes? Crucially, what is the path to a better system? To support the transition to this new energy system, our proposition is: 1. to model the evolution of the Northeast power system as a Markov decision process (MDP), including crucial uncertainties, e.g. on technological advances and renewable energy costs; 2. to solve this decision process with dynamic programming and reinforcement learning techniques; 3. to derive energy/environmental policy intelligence from our computational results. Our methodological approach relies on two building blocks: an inter-regional energy model and a set of algorithmic tools to solve the model as an MDP.
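As a reminder of the dynamic programming building block involved, the sketch below runs value iteration on a tiny random MDP; the states, transitions and rewards are placeholders, not the energy model, which is far too large for exact tabular methods (hence the "approximate" in the title).

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
n_states, n_actions, gamma = 4, 2, 0.95
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

# Value iteration: V <- max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ].
V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V            # Q[s, a]; matmul sums over s'
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=1)            # greedy policy w.r.t. the converged values
print(V, policy)
```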

    Vincent Grégoire, HEC Montréal

    Team: Christian Dorion, Manuel Morales, Thomas Hurtut

    Learning the Dynamics of the Limit Order Book

    Modern financial markets are increasingly complex. A particular topic of interest is how this complexity affects how easily investors can buy or sell securities at a fair price. Many have also raised concerns that algorithms trading at high frequency could create excess volatility and crash risk. The central objective of our research agenda is to better understand the fundamental forces at play in those markets where trading speed is now measured in nanoseconds. Our project seeks to lay the groundwork, using big data, visualization, and machine learning, to answer some of the most fundamental questions in the literature on market structure. Ultimately, we envision an environment in which we could learn the behavior of the various types of agents in a given market. Once such an environment is obtained, it would allow us to better understand, for instance, the main drivers of major market disruptions. More importantly, it could allow us to guide regulators in the design of new regulations, by testing them in a highly realistic simulation setup, thereby avoiding the unintended consequences associated with potential flaws in the proposed regulation.

    Mehmet Gumus, McGill University

    Team: Erick Delage, Arcan Nalca, Angelos Georghiou

    Data-driven Demand Learning and Sharing Strategies for Two-Sided Online Marketplaces

    The proliferation of two-sided online platforms managed by a provider is disrupting the global retail industry by enabling consumers (on one side) and sellers (on the other side) to interact in unprecedented ways. Evolving technologies such as artificial intelligence, big data analytics, distributed ledger technology, and machine learning pose both challenges and opportunities for platform providers with regard to understanding the behaviors of the stakeholders – consumers and third-party sellers. In this proposed research project, we will focus on two-sided platforms for which the demand-price relationship is unknown upfront and has to be learned from accumulating purchase data, thus highlighting the importance of the information-sharing environment. In order to address this problem, we will focus on the following closely connected research objectives: 1. Identify the willingness-to-pay and purchase decisions (i.e., conversion rate) of online customers based on how they respond to the design of product listing pages, online price and promotion information posted on the page, shipping and handling prices, and stock availability information. 2. Determine how much of the consumer data is shared with the sellers and quantify the value of different information-sharing configurations – given the sellers’ optimal pricing, inventory (product availability), and product assortment (variety) decisions within a setting.
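
    As a stylized illustration of the demand-learning problem in objective 1 (the hidden demand curve and all numbers are our own assumptions), a seller can experiment with prices while trading exploration against revenue:

        # Epsilon-greedy price experimentation against a hidden logistic demand model.
        import numpy as np

        rng = np.random.default_rng(3)
        prices = np.linspace(5, 25, 9)

        def true_conversion(p):                 # hidden willingness-to-pay model (synthetic)
            return 1 / (1 + np.exp(0.4 * (p - 15)))

        sales = np.zeros_like(prices)
        trials = np.zeros_like(prices)
        for t in range(5000):
            if rng.random() < 0.1:              # explore a random price
                i = int(rng.integers(len(prices)))
            else:                               # exploit current revenue estimate
                conv = sales / np.maximum(trials, 1)
                i = int(np.argmax(conv * prices))
            buy = rng.random() < true_conversion(prices[i])
            trials[i] += 1
            sales[i] += buy

        conv = sales / np.maximum(trials, 1)
        print("estimated best price:", prices[int(np.argmax(conv * prices))])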

    Julie Hussin, Université de Montréal

    Team: Sébastien Lemieux, Matthieu Ruiz, Yoshua Bengio, Ahmad Pesaranghader

    Interpretability of Deep Learning Approaches Applied to Omics Datasets

    The high-throughput generation of molecular data (omics data) nowadays permits researchers to glance deeply into the biological variation that exists among individuals. This variation underlies differences in the risk of human diseases, as well as in the efficacy of their treatment. Exploiting it requires combining multiple biological levels (multi-omics) through flexible computational strategies, including machine learning (ML) approaches, which are becoming highly popular in biology and medicine, with particular enthusiasm for deep neural networks (DNNs). While these appear to be a natural way to analyze complex multi-omics datasets, applying such techniques to biomedical data poses an important challenge: the black-box problem. Once a model is trained, it can be difficult to understand why it gives a particular response to a set of data inputs. In this project, our goal is to train and apply state-of-the-art ML models to extract accurate predictive signatures from multi-omics datasets while focusing on biological interpretability. This will contribute to building the trust of the medical community in the use of these algorithms and will lead to deeper insights into the biological mechanisms underlying disease risk, pathogenesis and response to therapy.
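
    As a toy illustration of one simple interpretability diagnostic (our assumption, not the project's method), permutation importance asks how much a trained model's predictions degrade when a single omics feature is shuffled:

        # Permutation importance on synthetic "omics" features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(8)
        X = rng.standard_normal((300, 20))               # 20 synthetic molecular features
        y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)    # only features 3 and 7 matter

        model = RandomForestClassifier(n_estimators=100).fit(X, y)
        result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
        print("top features:", np.argsort(result.importances_mean)[::-1][:3])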

    Jonathan Jalbert, Polytechnique Montréal

    Team: Françoise Bichai, Sarah Dorner, Christian Genest

    Modelling precipitation-induced sewer overflows and developing tools adapted to the needs of the Ville de Montréal

    Fecal contamination of surface waters is one of the leading causes of waterborne disease in both industrialized and developing countries. In urban areas, fecal contamination comes mostly from combined sewer overflows. During precipitation, stormwater enters the sewer network and mixes with sanitary wastewater on its way to the treatment plant. If the intensity of the precipitation exceeds the network’s transport capacity, the mixture of stormwater and wastewater is discharged directly into the receiving body of water without passing through the treatment plant. These overflows constitute an environmental risk and a public health issue. At present, the characteristics of the rainfall events that cause overflows are uncertain. This research project aims to take advantage of the overflow data recently made public by the Ville de Montréal to characterize the precipitation events that cause overflows on its territory. This characterization will make it possible, on the one hand, to estimate the number of overflows expected under the projected climate of the coming decades; on the other hand, it will be used to size mitigation measures such as retention basins and rain gardens.
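
    A minimal sketch of the characterization task on synthetic data (the rainfall distributions and coefficients are invented): a logistic regression relating overflow occurrence to event intensity and duration:

        # Classify which rainfall events trigger an overflow (synthetic example).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(4)
        intensity = rng.gamma(2.0, 5.0, size=1000)      # mm/h, synthetic
        duration = rng.gamma(2.0, 2.0, size=1000)       # h, synthetic
        logit = 0.15 * intensity + 0.4 * duration - 4.0
        overflow = rng.random(1000) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([intensity, duration])
        model = LogisticRegression().fit(X, overflow)
        print("coefficients:", model.coef_, "intercept:", model.intercept_)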

    Nadia Lahrichi, Polytechnique Montréal

    Team: Sébastien Le Digabel, Andrea Matta, Nicolas Zufferey, Andrea Lodi, Chunlong Yu

    Reactive/learning/self-adaptive metaheuristics for healthcare resource scheduling

    The goal of this research proposal is to develop state-of-the-art decision support tools to address the fundamental challenges of accessible, high-quality health services. The challenges to meeting this mandate are real, and efficient resource management is a key factor in achieving this goal. This proposal will specifically focus on applications related to patient flow. Analysis of the literature shows that most research focuses on single-resource scheduling and assumes that demand is known; patient and resource scheduling problems are often solved sequentially and independently. The research goal is to develop efficient metaheuristic algorithms to solve integrated patient and resource scheduling problems under uncertainty (e.g., demand, profile, and availability of resources). This research will be divided into three main themes, each of them investigating a different avenue to more efficient metaheuristics: A) learning approaches to better explore the search space; B) blackbox optimization for parameter tuning; and C) simulation-inspired approaches to control the noise induced by uncertainty.
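
    For context, the skeleton below shows the kind of plain metaheuristic these themes would improve (a generic simulated-annealing loop on an invented single-machine scheduling objective, not the project's algorithms); theme A, for instance, would replace the random neighbourhood move with a learned one:

        # Simulated annealing on a toy schedule minimizing total weighted completion time.
        import math, random

        random.seed(0)
        times = [random.randint(1, 9) for _ in range(12)]
        weights = [random.randint(1, 5) for _ in range(12)]

        def cost(order):
            t, total = 0, 0
            for j in order:
                t += times[j]
                total += weights[j] * t
            return total

        order = list(range(12))
        current = cost(order)
        temp = 100.0
        while temp > 0.01:
            i, j = random.sample(range(12), 2)           # random neighbourhood move
            order[i], order[j] = order[j], order[i]
            candidate = cost(order)
            if candidate < current or random.random() < math.exp((current - candidate) / temp):
                current = candidate                      # accept the move
            else:
                order[i], order[j] = order[j], order[i]  # undo the move
            temp *= 0.995
        print("weighted completion time found:", current)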

    Eric Lécuyer, Université de Montréal

    Team: Mathieu Blanchette, Jérôme Waldispühl, William Hamilton

    Deciphering RNA regulatory codes and their disease-associated alterations using machine learning

    The human DNA genome serves as an instruction guide to allow the formation of all the cells and organs that make up our body over the course of our lives. Much of this genome is transcribed into RNA, termed the ‘transcriptome’, that serves as a key conveyor of genetic information and provides the template for the synthesis of proteins. The transcriptome is itself subject to many regulatory steps for which the basic rules are still poorly understood. Importantly, when these steps are improperly executed, this can lead to disease. This project aims to utilize machine learning approaches to decipher the complex regulatory code that controls the human transcriptome and to predict how these processes may go awry in different disease settings.

    Gregory Lodygensky, Université de Montréal

    Team: Jose Dolz, Josée Dubois, Jessica Wisnowski

    Next generation neonatal brain segmentation built on HyperDense-Net, a fully automated real-world tool

    There is growing recognition that major breakthroughs in healthcare will result from the combination of databanks and artificial intelligence (AI) tools. This would be very helpful in the study of the neonatal brain and its alterations. For instance, the neonatal brain is extremely vulnerable to the biological consequences of prematurity or birth asphyxia, resulting in cognitive, motor, language and behavioural disorders. A key difference with adults is that key aspects of brain-related functions can only be tested several years later, hindering greatly the advancement of neonatal neuroprotection. Researchers and clinicians need objective tools to immediately assess the effectiveness of a therapy that is given to protect the brain without waiting five years to see if it succeeded. Neonatal brain magnetic resonance imaging can bridge this gap. However, it represents a real challenge as this period of life represents a unique period of intense brain growth (e.g. myelination and gyrification) and brain maturation. Thus, we plan to improve our existing neonatal brain segmentation tools (i.e. HyperDense-Net) using the latest iterations of AI tools. We will also develop a validated tool to determine objective brain maturation in newborns.

    Adam Oberman, McGill University

    Team: Michael Rabbat, Chris Finlay, Levon Nurbekyan

    Robustness and generalization guarantees for Deep Neural Networks in security and safety critical applications

    Despite impressive human-like performance on many tasks, deep neural networks are surprisingly brittle in scenarios outside their previous experience, often failing when new experiences do not closely match their previous experiences. This ‘failure to generalize’ is a major hurdle impeding the adoption of an otherwise powerful tool in security- and safety-critical applications, such as medical image classification. The issue is in part due to a lack of our theoretical understanding of why neural networks work so well. They are powerful tools but less interpretable than traditional machine learning methods which have performance guarantees but do not work as well in practice. This research program will aim to address this ‘failure to generalize’, by developing guarantees of generalization, using notions of the complexity of a regularized model, corresponding to model averaging. This approach will be tested in computer vision applications, and will have near-term applications to medical health research, through medical image classification and segmentation. More broadly, the data science methods developed under this project will be applicable to a wide variety of fields and applications, notably wherever reliability and safety are paramount.

    Liam Paull, Université de Montréal

    Team: Derek Nowrouzezahrai, James Forbes

    Differentiable perception, graphics, and optimization for weakly supervised 3D perception

    An ability to perceive and understand the world is a prerequisite for almost any embodied agent to achieve almost any task in the world. Typically, world representations are hand-constructed because it is difficult to learn them directly from sensor signals. In this work, we propose to build the components so that this map-building procedure is differentiable. Specifically, we will focus on the perception (grad-SLAM) and the optimization (meta-LS) components. This will allow us to backpropagate error signals from the 3D world back to the sensor inputs. This enables us to do many things, such as regularize sensor data with 3D geometry. Finally, by also building a differentiable rendering component (grad-Sim), we can leverage self-supervision through cycle consistency to learn representations with no or sparse hand-annotated labels. Combining all of these components together gives us the first method of world representation building that is completely differentiable and self-supervised.
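
    A toy illustration of the differentiability idea (our own minimal example, not grad-SLAM, meta-LS or grad-Sim): because every stage is differentiable, an error measured on the output can be backpropagated to an upstream parameter, here an unknown 2D sensor offset:

        # Recover a sensor offset by backpropagating through a "map building" step.
        import torch

        target_points = torch.tensor([[1.0, 2.0], [2.0, 3.0], [3.0, 1.0]])
        shift = torch.zeros(2, requires_grad=True)                 # unknown offset
        opt = torch.optim.Adam([shift], lr=0.1)

        source_points = target_points - torch.tensor([0.5, -0.8])  # shifted observations
        for _ in range(200):
            opt.zero_grad()
            rendered = source_points + shift                       # differentiable alignment
            loss = ((rendered - target_points) ** 2).mean()
            loss.backward()                                        # gradients flow upstream
            opt.step()
        print("recovered shift:", shift.detach())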

    Gilles Pesant, Polytechnique Montréal

    Team: Siva Reddy, Sarath Chandar Anbil Parthipan

    Investigating Combinations of Neural Networks and Constraint Programming for Structured Prediction

    Artificial intelligence plays an ever-greater role in many spheres of activity and in our daily lives. In particular, neural networks can now learn and then carry out tasks previously reserved for humans. However, when a task requires compliance with complex structuring rules, a neural network sometimes has great difficulty learning those rules. Another branch of artificial intelligence, constraint programming, was designed precisely to find solutions that satisfy such rules. The goal of this project is therefore to study combinations of these two approaches to artificial intelligence in order to more easily learn to carry out constrained tasks. Within this project we will focus on natural language processing, but our work could also apply to tasks in other domains.
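
    One simple way to picture such a combination (an illustrative assumption, not the project's design): a constraint checker masks the network's outputs so that decoding can only select labels that respect the rules:

        # Constraint-respecting decoding by masking infeasible labels.
        import torch

        logits = torch.randn(5, 4)                      # 5 positions, 4 labels each

        def feasible(position, label):                  # stand-in for a CP propagator
            return not (position > 0 and label == 3)    # toy rule: label 3 only in first position

        mask = torch.tensor([[feasible(i, j) for j in range(4)] for i in range(5)])
        masked = logits.masked_fill(~mask, float("-inf"))
        prediction = masked.argmax(dim=1)               # never violates the rule
        print(prediction)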

    Jean-François Plante, HEC Montréal

    Team: Patrick Brown, Thierry Duchesne, Nancy Reid, Luc Villandré

    Statistical inference and modelling for distributed systems

    Statistical inference requires a large toolbox of models and algorithms that can accommodate complex data structures. Modern datasets are often so large that they need to be stored on distributed systems, with the data spread across a number of nodes with limited bandwidth between them. Many complex statistical models cannot be used in this setting, as they rely on the complete data being accessible. In this project, we will advance statistical modeling contributions to data science by creating solutions that are ideally suited for analysis on distributed systems. More specifically, we will develop spatio-temporal models as well as accurate and efficient approximations of general statistical models that are suitable for distributed data and, as such, scalable to massive data.
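
    A minimal sketch of the distributed constraint (synthetic regression data, our own toy example): each node fits its shard locally and communicates only a short summary, here a coefficient vector that is then averaged, the simple baseline such methods improve upon:

        # One-shot averaging of node-local least-squares estimates.
        import numpy as np

        rng = np.random.default_rng(5)
        beta_true = np.array([2.0, -1.0])

        local_estimates = []
        for node in range(10):                       # 10 nodes, each with a local shard
            X = rng.standard_normal((500, 2))
            y = X @ beta_true + 0.5 * rng.standard_normal(500)
            beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
            local_estimates.append(beta_hat)         # only this vector crosses the network

        print("averaged estimate:", np.mean(local_estimates, axis=0))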

    Wei Qi, McGill University

    Team: Xue (Steve) Liu, Max Shen, Michelle Lu

    Deals on Wheels: Advancing Joint ML/OR Methodologies for Enabling City-Wide, Personalized and Mobile Retail

    Moving toward a smart-city future, cities in Canada and around the world are embracing the emergence of new retail paradigms. That is, retail channels can diversify further beyond the traditional online and offline boundaries, combining the best of both. In this project, we focus on an emerging mobile retail paradigm in which retailers run their stores on mobile vehicles or self-driving cars. Our mission is to develop cross-disciplinary models, algorithms and data-verified insights for enabling mobile retail. We will achieve this mission by focusing on three interrelated research themes: Theme 1 – Formulating novel optimization problems of citywide siting and inventory replenishment for mobile stores. Theme 2 – Developing novel learning models for personalized demand estimation. Theme 3 – Integrating Themes 1 and 2 by proposing a holistic algorithmic framework for joint and dynamic demand learning and retail operations, and for discovering managerial insights. The long-term goal is thereby to advance the synergy of operations and machine learning methodologies in the broad contexts of new retail and smart-city analytics.

    Marie-Ève Rancourt, HEC Montréal

    Team: Gilbert Laporte, Aurélie Labbe, Daniel Aloise, Valérie Bélanger, Joann de Zegher, Burcu Balçik, Marilène Cherkesly, Jessica Rodríguez Pereira

    Humanitarian Supply Chain Analytics

    Network design problems lie at the heart of the most important issues faced in the humanitarian sector. However, given their complex nature, humanitarian supply chains involve the solution of difficult analytics problems. The main research question of this project is “how to better analyze imperfect information and address uncertainty to support decision making in humanitarian supply chains?”. To this end, we propose a methodological framework combining data analysis and optimization, which will be validated through real-life applications using multiple sources of data. First, we propose to build robust relief networks under uncertainty in demand and transportation accessibility, due to weather shocks and vulnerable infrastructures. We will consider two contexts: shelter location in Haiti and food aid distribution planning in Southeastern Asia. Second, we propose to embed fair cost sharing mechanisms into a collaborative prepositioning network design problem arising in the Caribbean. Classic economics methods will be adapted to solve large-scale stochastic optimization problems, and novel models based on catastrophic insurance theory will be proposed. Finally, a simulation will be developed to disguise data collection as a serious game and gather real-time information on the behavior of decision makers during disasters to extrapolate the best management strategies.

    Saibal Ray, McGill University

    Team: Maxime Cohen, James Clark, AJung Moon

    Retail Innovation Lab: Data Science for Socially Responsible Food Choices

    In this research program, we propose to investigate the use of artificial intelligence techniques, involving data, models, behavioral analysis, and decision-making algorithms, to efficiently provide higher convenience for retail customers while being socially responsible. In particular, the research objective of the multidisciplinary team is to study, implement, and validate systems for guiding customers to make healthy food choices in a convenience store setting, while being cognizant of privacy concerns, both online and in a brick-and-mortar store environment. The creation of the digital infrastructure and decision support systems that encourage people and organizations to make health-promoting choices should hopefully result in a healthier population and reduce the costs of chronic diseases to the healthcare system. These systems should also foster the competitiveness of organizations operating in the agri-food and digital technology sectors. A distinguishing feature of this research program is that it will make use of a unique asset – a new “living-lab”, the McGill Retail Innovation Lab (MRIL). It will house a fully functioning retail store operated by a retail partner with extensive sensing, data access, and customer monitoring. The MRIL will be an invaluable source of data to use in developing and validating our approaches as well as a perfect site for running field experiments.

    Léo Raymond-Belzile, HEC Montréal

    Team: Johanna Nešlehová, Alexis Hannart, Jennifer Wadsworth

    Combining extreme value theory and causal inference for data-driven flood hazard assessment

    The IPCC reports highlight an increase in mean precipitation, but the impact of climate change on streamflow is not as certain and the existing methodology is ill-equipped to predict changes in flood extremes. Our project looks into climate drivers impacting flood hazard and proposes methodological advances based on extreme value theory and causal inference in order to simulate realistic streamflow extremes at high resolution. The project will also investigate how climate drivers impact the hydrological balance using tools from machine learning for causal discovery to enhance risk assessment of flood hazard.
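
    For a flavour of the extreme-value machinery involved (synthetic flows and an illustrative threshold, not the project's data): the peaks-over-threshold recipe fits a generalized Pareto distribution to exceedances over a high threshold and reads off a high return level:

        # Peaks-over-threshold fit with a generalized Pareto distribution.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        streamflow = rng.gamma(2.0, 50.0, size=20_000)     # synthetic daily flows
        threshold = np.quantile(streamflow, 0.98)
        exceedances = streamflow[streamflow > threshold] - threshold

        shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
        level = threshold + stats.genpareto.ppf(0.999, shape, loc=0, scale=scale)
        print(f"fitted shape={shape:.3f}, scale={scale:.1f}, ~1-in-1000 exceedance level={level:.0f}")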

    Nicolas Saunier, Polytechnique Montréal

    Team: Francesco Ciari, Catherine Morency, Martin Trépanier, Lijun Sun

    Bridging Data-Driven and Behavioural Models for Transportation

    Transportation data is traditionally collected through travel surveys and fixed sensors, mostly on the roadways: such data is expensive to collect and has limited spatial and temporal coverage. In recent years, more and more transportation data has become available on a continuous basis from multiple new sources, including users themselves. This has fed the rise of machine learning methods that can learn models directly from data. Yet, such models often lack robustness and may be difficult to transfer to a different region or period. This can be alleviated by taking advantage of domain knowledge stemming from the properties of the flow of people moving in transportation systems with daily activities. This project aims to develop hybrid methods relying on transportation and data-driven models to predict flows for all modes at different spatial and temporal scales using multiple sources of heterogeneous data. This results in two specific objectives: 1. to learn probabilistic flow models at the link level for several modes based on heterogeneous data; 2. to develop a method bridging the flow models (objective 1) with a dynamic multi-agent transportation model at the network level. These new models and methods will be developed and tested using real transportation data.

    Yvon Savaria, Polytechnique Montréal

    Team: François Leduc-Primeau, Elsa Dupraz, Jean-Pierre David, Mohamad Sawan

    Ultra-Low-Energy Reliable DNN Inference Using Memristive Circuits for Biomedical Applications (ULERIM)

    Recent advances in machine learning based on deep neural networks (DNNs) have brought powerful new capabilities for many signal processing tasks. These advances also hold great promise for several applications in healthcare. However, state-of-the-art DNN architectures may depend on hundreds of millions of parameters that must be stored and then retrieved, resulting in large energy usage. It is therefore essential to reduce their energy consumption to allow in-situ computation. One possible approach involves using memristor devices, a concept first proposed in 1971 but only recently put into practice. Memristors are a very promising way to implement compact and energy-efficient artificial neural networks. The aim of this research is to advance the state of the art in the energy-efficient implementation of deep neural networks using memristive circuits, introducing DNN-specific methods to better manage the uncertainty inherent in integrated circuit fabrication. These advances will benefit a large number of medical applications for which portable devices are required to perform a complex analysis of the state of the patient, and will also benefit the field of machine learning in general by reducing the amount of energy required to apply it. Within this project, the energy improvements will be exploited to improve the signal processing performance of an embedded biomedical device for the advanced detection of epileptic seizures.

    Alexandra M. Schmidt, McGill University

    Team: Jill Baumgartner, Brian Robinson, Marília Carvalho, Oswaldo Cruz, Hedibert Lopes

    Flexible multivariate spatio-temporal models for health and social sciences

    Health and socio-economic variables are commonly observed at different spatial scales of a region (e.g. districts of a city or provinces of a country) over a given period of time. Commonly, multiple variables are observed in a given spatial unit, resulting in high-dimensional data. The challenge in this case is to consider models that account for the possible correlation among variables across space, or space and time. This project aims at developing statistical methodology that accounts for this complex hierarchical structure of the observed data. The inference procedure follows the Bayesian paradigm, meaning that uncertainty about the unknowns in the model is naturally accounted for. The project is subdivided into four sub-projects that range from the estimation of a socio-economic vulnerability index for a given city to the spatio-temporal modelling of multiple vector-borne diseases. The statistical tools proposed here will help authorities understand the dynamics across space and time of multiple diseases, and will assist with the decision-making process of evaluating how urban policies and programmes impact the urban environment and population health, through a lens of health equity.

    David Stephens, McGill University

    Team: Yu Luo, Erica Moodie, David Buckeridge, Aman Verma

    Statistical modelling of health trajectories and interventions

    Large amounts of longitudinal health records are now collected in private and public healthcare systems. Data from sources such as electronic health records, healthcare administrative databases and mobile health applications are available to inform clinical and public health decision-making. In many situations, such data enable the dynamic monitoring of the underlying disease process that governs the observations. However, this process is not observed directly, and so inferential methods are needed to ascertain progression. The objective of the project is to build a comprehensive Bayesian computational framework for performing inference on large-scale health data. In particular, the project will focus on the analysis of records that arise in primary and clinical care contexts to study patient health trajectories, that is, how the health status of a patient changes over time. Once we can infer the mechanisms that influence health trajectories, we will then be able to introduce treatment intervention policies that aim to improve patient outcomes.
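
    A deliberately small stand-in for this latent-process idea (all probabilities are invented, and the project's Bayesian machinery is far richer): a two-state hidden Markov model in which the unobserved disease state drives coded visit intensity, with the forward recursion filtering the state from the record:

        # Forward filtering of a latent disease state from observed visit codes.
        import numpy as np

        trans = np.array([[0.95, 0.05],        # healthy -> healthy/ill
                          [0.10, 0.90]])       # ill -> healthy/ill
        emit = np.array([[0.8, 0.2],           # P(obs | healthy): low vs high visit intensity
                         [0.3, 0.7]])          # P(obs | ill)
        obs = [0, 0, 1, 1, 1, 0, 1]            # coded visit intensity over time

        alpha = np.array([0.9, 0.1]) * emit[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ trans) * emit[:, o]
            alpha /= alpha.sum()               # filtered state probabilities
        print("P(ill | record so far) =", alpha[1])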

    An Tang, Université de Montréal

    Team: Irina Rish, Guy Wolf, Guy Cloutier, Samuel Kadoury, Eugene Belilovsky, Michaël Chassé, Bich Nguyen

    Ultrasound classification of chronic liver disease with deep learning

    Chronic liver disease is one of the top ten leading causes of death in North America. The most common form is nonalcoholic fatty liver disease which may evolve to nonalcoholic steatohepatitis and cirrhosis if left untreated. In many cases, the liver may be damaged without any symptoms. A liver biopsy is currently required to evaluate the severity of chronic liver disease. This procedure requires the insertion of a needle inside the liver to remove a small piece of tissue for examination under a microscope. Liver biopsy is an invasive procedure with a risk of major complications such as bleeding. Ultrasound is ideal for screening patients because it is a safe and widely available technology to image the whole liver. Our multi-disciplinary team is proposing the use of novel artificial intelligence techniques to assess the severity of chronic liver disease from ultrasound images and determine the severity of liver fat, inflammation, and fibrosis without the need for liver biopsy. This study is timely because chronic liver disease is on the rise which means that complications and mortality will continue to rise if there is no alternative technique for early detection and monitoring of disease severity.

    Guy Wolf, Université de Montréal

    Team: Will Hamilton, Jian Tang

    Unified approach to graph structure utilization in data science

    While deep neural networks are at the frontier of machine learning and data science research, their most impressive results come from data with clear spatial/temporal structure (e.g., images or audio signals) that informs network architectures to capture semantic information (e.g., textures, shapes, or phonemes). Recently, multiple attempts have been made to extend such architectures to non-Euclidean structures that typically exist in data, and in particular to graphs that model data geometry or interaction between data elements. However, so far, such attempts have been separately conducted by largely independent communities, leveraging specific tools from traditional/spectral graph theory, graph signal processing, or applied harmonic analysis. We propose a multidisciplinary unified approach (combining computer science, applied mathematics, and decision science perspectives) for understanding deep graph processing. In particular, we will establish connections between spectral and traditional graph theory applied for this task, introduce rich notions of intrinsic graph regularity (e.g., equivalent to image textures), and enable continuous-depth graph processing (i.e., treating depth as time) to capture multiresolution local structures. Our computational framework will unify the multitude of existing disparate attempts and establish rigorous foundations for the emerging field of geometric deep learning, a rapidly growing area of machine learning.
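
    As one concrete point in this design space (the standard graph-convolution update written from its usual formula; the graph and features below are toy data), a layer normalizes the adjacency, aggregates neighbours and applies a learned linear map:

        # Minimal graph convolution layer: relu(D^-1/2 (A+I) D^-1/2 X W).
        import torch

        def gcn_layer(A, X, W):
            A_hat = A + torch.eye(A.shape[0])              # add self-loops
            d = A_hat.sum(dim=1)
            D_inv_sqrt = torch.diag(d.pow(-0.5))
            return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

        A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # path graph
        X = torch.randn(3, 4)                              # node features
        W = torch.randn(4, 2)                              # learnable weights
        print(gcn_layer(A, X, W))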

    COVID-19: IVADO projects and initiatives

    Projects funded by IVADO

    Digital clinical trials to accelerate the evaluation of colchicine therapy

    Team: Jean-Claude Tardif (Director, Research Centre, Montréal Heart Institute and Professor, Université de Montréal) and Frédéric Lesage (Professor, Polytechnique Montréal)

    This project, which has already received approval from Health Canada, the Québec Ministry of Health and Social Services and the Montréal Heart Institute’s Ethics Committee, seeks to evaluate a colchicine-based treatment, including its impact on mortality rates and pulmonary complications. The scope of the task (recruitment of a cohort of 6,000 subjects) and the extremely tight deadline (as quickly as possible) for this type of project require the implementation of new digital recruitment and follow-up tools.

    Funding amounts:
    IVADO: $125,000
    scale ai: $100,000
    TransMedTech Institute: $50,000

    Colcorona clinical trial

    Polytechnique Montréal news release

    Accelerating the search for a drug for COVID-19

    Team: Yoshua Bengio (Scientific Director, Mila and IVADO, and Professor, Université de Montréal) and Mike Tyers (Principal Investigator, IRIC)

    In a prerequisite for drug development, this project seeks to identify molecules that may specifically associate with SARS-CoV-2. To do this, researchers from Mila and IRIC will first use neural networks to automatically generate billions of potential molecules. A reinforcement learning algorithm will then be used to select the most promising ones for biological evaluation and possible clinical trials.

    Funding amounts:
    IVADO: $100,000
    scale ai: $125,000
    Canada Excellence Research Chair in Data Science for Real-Time Decision-Making: $25,000

    Learn more

    News release

    Genomic genetic profiling of SARS-CoV-2 in Québec

    Julie Hussin (Assistant Professor, Université de Montréal)

    Like other viruses, the virus responsible for COVID-19 mutates and changes over time. These mutations can lead to changes in its spread, in the demographic impact of the disease, or even in the effectiveness of certain treatments. In order to adapt to this reality, this project aims at a real-time genomic analysis of the virus through molecular modelling, focusing primarily on the variants observed in Quebec. The results will provide information for both public health and healthcare, as well as facilitate the work of researchers developing new treatments.

    IVADO funding amount: $100,000

    Modelling of animal reservoirs of pathogens

    Team: Timothée Poisot (Assistant Professor, Université de Montréal) and Colin Carlson (Visiting Professor, Université de Montréal)

    The current COVID-19 pandemic, like others before it, originated in a host animal. However, the ecology, origin and development of these hosts and their viruses remain largely unknown. In order to address this shortcoming, this project aims to model animal populations that act as reservoirs for these pathogens in order to complete knowledge of the disease and to anticipate future resurgences, or outbreaks of new viruses.

    IVADO funding amount: $45,000

    Developing a new diagnostic tool for COVID-19

    Team: Frédéric Leblond (Full Professor, Polytechnique) and Dr. Dominique Trudel (Pathologist, CHUM)

    Methods of diagnosing COVID-19 require chemical reagents whose supplies are limited. One consequence is the potentially extensive spread of the virus by asymptomatic individuals. In order to more easily assess whether or not testing is needed, this project proposes to use Raman spectroscopy and artificial-intelligence algorithms to estimate an individual’s total viral load and, if necessary, then determine whether the coronavirus is present. Ultimately, this could make it possible to significantly reduce the number of tests to be carried out, a benefit for remote regions or regions with limited infrastructure.

    Funding amounts:
    TransMedTech Institute: $33,100
    IVADO: $11,000

    COVID-19 critical-care digital visualization board

    Team: Philippe Doyon-Poulin (IVADO Researcher and Assistant Professor, Polytechnique) and Philippe Jouvet (Pediatric Intensivist, CHU Sainte-Justine, and Clinical Full Professor, Université de Montréal)

    In a pandemic, the number of intensive care inpatients increases rapidly and the management of medical resources is critical to the success of care. The purpose of this project is to develop a digital board to visualize the health status of patients in intensive care units and the allocation of medical resources so as to respond in real time to the needs produced by the COVID-19 crisis. This digital tool will be transferred to the Pediatric Intensive Care Unit at CHU Sainte-Justine and the Intensive Care Unit at the Jewish General Hospital.

    IVADO funding amount: $30,600

    Polytechnique Montréal article

    Identifying the Achilles heel of SARS-CoV-2

    François Major (Principal Investigator, IRIC)

    Using an algorithm based on machine-learning techniques, this project seeks to develop a protocol for better understanding the structural components involved in the vital functions of SARS-CoV-2 or any other RNA virus. This technique will make it possible to produce a list of therapeutic targets to counter their replication and proliferation, thus offering new perspectives for the development of drugs to be used in current or future clinical studies.

    IVADO funding amount: $17,500

    News release

    Monitoring the emergence and expansion of SARS-CoV-2 on a large scale

    Team: David Stephens (Professor, McGill University) and Luc Villandré (Postdoctoral Researcher, HEC Montréal)

    Personalized tracking of COVID-19 cases allows for step-by-step monitoring of the spread of the disease and helps public health officials evaluate the effectiveness of the measures implemented. However, when the number of cases becomes too high, individual follow-up becomes impossible, making it very useful to track the virus at the genetic level. To this end, this project proposes a phylogenetic analysis of the virus, including the ability to link locally sampled cases to each other and to link them to cases in other countries. In this way, it is possible to estimate the virus’s movements and transmission speed. It will then be easier to determine its rate of introduction from outside the country and to assess the proportion of local or community transmission within populations.

    IVADO funding amount: $15,000

    Interconnecting COVID-19 data

    Team: David Ardia (Researcher, IVADO and Assistant Professor, HEC Montréal) and Emanuele Guidotti (PhD Student, Université de Neuchâtel)

    Numerous COVID-19-related databases exist, but no virtual platform currently incorporates a significant proportion of these sources. This makes it difficult to analyze them globally and to connect this often medical information with external factors, especially socio-political ones. With this in mind, this international project aims to develop a multifactorial open-source platform enabling the integration and continuous addition of new information.

    IVADO funding amount: $10,000

    Projects supported by IVADO

    Interactive therapeutic target-prediction portal

    Team: Tariq Daouda (Postdoctoral Researcher, Massachusetts General Hospital – Harvard Medical School) and Maude Dumont-Lagacé (Scientific Coordinator, ExCellThera)

    The goal of this project is to provide the scientific community with a platform to predict potential targets for a vaccine against COVID-19. This interactive platform uses an algorithm’s ability to predict which parts of the virus will be exposed on the surface of infected cells and thus generates a list of potential targets. This algorithm, developed by Tariq Daouda in the laboratories of Sébastien Lemieux and Claude Perreault, has already been used successfully, enabling the current situation to be approached from a different angle. Offered to researchers through a portal, it will make it possible to accelerate the development of vaccines against COVID-19, but also against other emerging viruses.

    News release

    Lightening the healthcare community’s load through dialogue systems

    Alexis Smirnov (CTO, Dialogue)

    Many telemedicine tasks (such as responding to 811) involve healthcare professionals. This project proposes to set up several standalone telephone assistance solutions to free up these experts who are currently in high demand, whether to answer citizens’ routine questions, do follow-ups, make appointments or help navigate through healthcare facilities.

    Funding amount: $500,000

    Find out more

    Improved prognosis using chest X-rays

    Joseph Paul Cohen (Postdoctoral Researcher, Université de Montréal)

    Chester is an existing prototype of a radiology assistant that can recognize certain pneumonia-related characteristics. During the current pandemic, this project aims to improve Chester’s disease predictions with the aim of enhancing the management of patient care. How will this be done? By combining artificial intelligence and image recognition, while widely disseminating a public database of clinical metadata for a large number of COVID-19 cases (as well as SARS and other pneumonia cases).

    Find out more

    Resources

    IVADO community in action

    Julie Hussin

    Senior Researcher, ICM

    “This project aims to analyze the viral sequences at different stages of the evolution of the virus and thus identify indicators associated with the geographical regions where patients have tested positive for COVID-19.”

    “Data-efficient deep learning to better model immune response: (…) building an open-source platform leveraging the latest AI technologies to model pathways in the immune system in order to better predict immune response. (…) we work on AI approaches that can contribute to the process of vaccine design(…)”.

    Find out more

    Michaël Chassé

    Researcher, CHUM

    • Creation of a biobank

    “The main objective of this Québec-wide infrastructure is to provide researchers with the samples and data they need for their work. This will facilitate the co-ordination of research and support efforts for the development of new disease biomarkers, with a view to creating vaccines and drugs.”

    Find out more

    Guy Wolf

    Assistant Professor, Université de Montréal

    • Omics profiling of COVID-19 progression mechanisms and specific analysis of immune responses in young patients

    “This project [will] provide a mechanistic understanding of SARS-CoV-2 virus progression to assess the risk of specific medical profiles and patients, as well as to help identify binding targets for potential antiviral agents and vaccines. (…) An example of an active research question is to understand the apparent resilience of young children to severe infection, which is somewhat atypical for such epidemics.”

    Find out more

    AlayaCare

    Creation of a new, free COVID-19 screening device with a self-administered questionnaire assessing healthcare workers’ symptoms prior to a client visit.

    Find out more

    EdLive

    EdLive makes its distance-learning technology available to schools and businesses. Thanks to this initiative and the collaboration of EdLive partners, several thousand students across Québec are taking courses online.

    Find out more

    IBM

    The Watson Assistant for Citizens is now available free of charge to help governments and healthcare institutions answer common questions about COVID-19.

    Find out more

    Institut national du sport du Québec

    Online mental health capsules: tips and strategies for coping better with this period of heightened stress.

    Find out more

    Streamscan

    Implementation of a free cybersecurity monitoring service to safeguard the security of companies’ and organizations’ IT equipment during this crisis.

    Find out more

    Thales

    Launch of a COVID-19 rapid response call by Thales and its artificial intelligence (AI) research centre cortAIx.

    Find out more

    Valital

    Free use of the Valital recruitment platform to more quickly find candidates or volunteers in the medical and research fields.

    Find out more

    Brainbox AI

    Creation of a free HVAC (heating, ventilation and air conditioning) optimization service in response to COVID-19, using a “zone by zone” approach supported by cloud computing technologies.

    Find out more

    CAI Global

    Establishment of an economic and industrial impact forecasting model that, with the help of private databases, evaluates each economic sector of a city, region, RCM or other, in order to describe the situation and its risk factors.

    Find out more

    Undergraduate research initiation grants

    Imene Abid

    Supervised by: Pierre-Majorique Léger

    HEC Montréal

    Synthetic data generator to improve learning in data science

    Simon Chamorro

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Navigational Assistant for the Visually Impaired (NAVI)

    Zyad Benameur

    Supervised by: Chahé Nerguizian

    Polytechnique Montréal

    Machine learning methods to support the development of intervention plans in education

    Omar Chikhar

    Supervised by: Marc Fredette

    HEC Montréal

    Automation of signal processing methods for feature construction on physiological signals

    Léo Choinière

    Supervised by: Julie Hussin

    Institut de cardiologie de Montréal (ICM)

    Processing genomic data with different neural network architectures

    Anas Bouziane

    Supervised by: Bram Adams

    Polytechnique Montréal

    Reverse-engineering of and migration towards scalable NoSQL data architecture

    Valérie Daigneault

    Supervised by: Frédéric Gosselin

    Université de Montréal

    Integration and temporal processing of vision in the brain during the recognition of facial attributes: machine learning modelling of MEG and behavioural data

    Etienne Denis

    Supervised by: William Hamilton

    McGill University

    Multi-Relational Link Prediction Using Graph Neural Networks (SEARL)

    David Teddy Diffo Nguemetsing

    Supervised by: Numa Dancause

    Université de Montréal

    Learning algorithms for functional cortical neurostimulation

    Sandra Ferland

    Supervised by: Paul Cisek

    Université de Montréal

    The neural mechanisms of decision-making

    Aude Forcione-Lambert

    Supervised by: Guy Wolf

    Université de Montréal

    Probing learned network structure in a multi-task setting

    Dominique Fournelle

    Supervised by: Julie Hussin

    Université de Montréal

    Machine learning annotation of the platypus sex chromosomes

    Martine Francoeur

    Supervised by: Olivier Bahn

    HEC Montréal

    Internship project on modelling Mexico’s energy sector

    Enora Georgeault

    Supervised by: Marie-Ève Rancourt

    HEC Montréal

    Predictive models of the allocation of Canadian Red Cross donations in response to wildfires

    William Glazer-Cavanagh

    Supervised by: Bram Adams

    Polytechnique Montréal

    Automatic integration and deployment of AI models

    Alexandre Gravel

    Supervised by: Bernard Gendron

    Université de Montréal

    Lagrangian methods for solving network design problems

    Rose Guay Hottin

    Supervised by: Marina Martinez

    Université de Montréal

    A learning agent for a corticospinal neuroprosthesis

    Alice-Marie Hamelin

    Supervised by: Michel Gamache

    Polytechnique Montréal

    Real-time planning tool for underground mines

    Jérémie Huppé

    Supervised by: Maleknaz Nayebi

    Polytechnique Montréal

    Automated communication analysis for software-aided emergency management

    Arnaud L’Heureux

    Supervised by: Alain Tapp

    Université de Montréal

    Using deep networks for automatic text simplification

    Julien Leissner-Martin

    Supervised by: Jean-François Arguin

    Université de Montréal

    Using Deep Learning to Identify Electrons at the Large Hadron Collider (LHC)

    Anthony Lemieux

    Supervised by: Serge McGraw

    Centre hospitalier universitaire Mère-Enfant (CHU Sainte-Justine)

    Computational investigation of heritable epigenetic dysregulation

    Rui Ze Ma

    Supervised by: Franz Bernd Lang

    Université de Montréal

    Investigation of systematic errors in genome assembly algorithms

    Mohammed Mahmoud

    Supervised by: Mohamed Ouali

    Polytechnique Montréal

    Prediction of Fiber Quantity and Quality in Forest Supply Chains Using Artificial Intelligence Methods

    Filip Milisav

    Supervised by: Karim Jerbi

    Université de Montréal

    Studying social influence using a neuroimaging and data science approach

    Alexandre Morinvil

    Supervised by: Giovanni Beltrame

    Polytechnique Montréal

    Safe AI in drone swarms: developing an approach that allows small drone swarms to follow humans safely

    Derek Ojeda Centeno

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Multilevel simulation for smart-city applications

    Pierrick Pascal

    Supervised by: Sébastien Le Digabel

    Polytechnique Montréal

    Creating a Julia interface to NOMAD for automatic tuning of the hyperparameters of optimization algorithms

    Justin Pelletier

    Supervised by: Julie Hussin

    Institut de cardiologie de Montréal (ICM)

    Evaluating polygenic risk scores by sex and population structure

    Pierre-Elie Personnaz

    Supervised by: Dominique Orban

    Polytechnique Montréal

    Handling degeneracy through regularization in continuous optimization

    Marie-Eve Picard

    Supervised by: Pierre Jolicoeur

    Université de Montréal

    Multivariate analyses of the interactions between different attentional processes (EEG): a data-driven approach

    Myriam Prasow-Émond

    Supervised by: Julie Hlavacek-Larrondo

    Université de Montréal

    Study of the supermassive galaxy cluster MACSJ1447.7+0827

    Zakaria Rayadh

    Supervised by: Jean-Francois Cordeau

    HEC Montréal

    Empirical evaluation of demand forecasting methods

    Khadija Rekik

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Visualization and data analysis for smart-city networks

    Adam Sigal

    Supervised by: Liam Paull

    Université de Montréal

    Duckietown AI Driving Olympics.

    Daniel Tomasso

    Supervised by: Dang Khoa Nguyen

    Centre hospitalier de l’Université de Montréal (CHUM)

    Epileptic seizure detection by combining smart wear monitoring and artificial intelligence techniques.

    Fama Tounkara

    Supervised by: Franco Lepore

    Université de Montréal

    Validation of a battery of visual tests as an aid in the diagnosis of neurological disorders

    Étienne Tremblay

    Supervised by: Réjean Plamondon

    Polytechnique Montréal

    Heuristic application of data science to the kinematic theory of human movements

    Anton Volniansky

    Supervised by: Jean-François Tanguay

    Institut de cardiologie de Montréal (ICM)

    Database of short- and long-term clinical outcomes of bioresorbable vascular scaffolds compared with second-generation drug-eluting stents

    Abdelkader Zobir

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Performance of micro-PMUs in smart cities

    Masters excellence scholarships

    Tiphaine Bonniot de Ruisselet

    Supervised by: Dominique Orban

    Polytechnique Montréal

    Accelerating optimization methods for large-scale problems through inexact evaluations

    We are interested in continuous, nonconvex, unconstrained optimization problems in which evaluating the objective and its gradient is the outcome of a costly process. We assume that approximations of the objective and its gradient can be obtained at lower cost, to any desired level of accuracy. We will examine the impact of these assumptions on the convergence and complexity of classical optimization methods, as well as the savings that can be achieved in computing time and energy consumption. This study is motivated, among other things, by seismic inversion problems, whose size can approach hundreds of millions of variables and whose function and gradient can be approximated by solving a linear least-squares problem. Saving computing time and energy is a major issue in the era of artificial intelligence and big data exploration, and this approach is new and promising in terms of economic and environmental benefits.
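
    A schematic of the inexact-evaluation idea on a synthetic least-squares problem (the precision schedule and noise model are invented for illustration): early iterations tolerate coarse, cheap gradients, and the tolerance tightens as the iterates converge:

        # Gradient descent with a tightening precision schedule.
        import numpy as np

        rng = np.random.default_rng(7)
        A = rng.standard_normal((100, 20))
        b = rng.standard_normal(100)

        def inexact_grad(x, noise_level):      # stands in for a cheap approximate evaluation
            g = A.T @ (A @ x - b)
            return g + noise_level * rng.standard_normal(g.shape)

        x = np.zeros(20)
        for k in range(200):
            tol = max(1e-6, 1.0 / (k + 1) ** 2)   # tighten precision as we converge
            x -= 1e-3 * inexact_grad(x, tol)
        print("final residual:", np.linalg.norm(A @ x - b))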

    Stephanie Cairns

    Supervised by: Adam Oberman

    McGill University

    Mathematical approaches to adversarial robustness and confidence in DNNs

    Deep convolutional neural networks are highly effective at image classification tasks, achieving higher accuracy than conventional machine learning methods but lacking the performance guarantees associated with those methods. Without additional performance guarantees, for example error bounds, they cannot be safely used in applications where errors can be costly. There is a consensus amongst researchers that greater interpretability and robustness are needed. Robustness can mean robustness to differences in the data distribution where the models are deployed, or even robustness to adversarial samples: perturbations of the data designed deliberately by an adversary to lead to a misclassification.

    In this project, we will study reliability in two contexts: (i) developing improved confidence in the prediction of the neural network, using modified losses to improve confidence measures (ii) modified losses which result in better robustness to adversarial examples. The overall goal of the project is to lead to more reliable deep learning models.
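
    To make the adversarial notion concrete, here is the classic fast-gradient-sign construction on a toy linear classifier (our own minimal example; the project's modified losses are not shown): a worst-case step of size eps on the input increases the loss of any differentiable model:

        # Fast gradient sign method (FGSM) on a toy linear classifier.
        import torch
        import torch.nn.functional as F

        model = torch.nn.Linear(10, 3)
        x = torch.randn(1, 10, requires_grad=True)
        y = torch.tensor([1])

        loss = F.cross_entropy(model(x), y)
        loss.backward()
        eps = 0.1
        x_adv = x + eps * x.grad.sign()            # adversarial perturbation

        print("clean loss:", loss.item(),
              "adversarial loss:", F.cross_entropy(model(x_adv), y).item())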

    Enora Georgeault

    Supervised by: Marie-Ève Rancourt

    HEC Montréal

    Predictive models of the allocation of Canadian Red Cross donations in response to wildfires

    In Canada, floods and wildfires are the natural disasters that cause the most damage. The efforts of the Canadian Red Cross (CRC) to mitigate the impacts of wildfires depend heavily on organizations’ ability to plan relief logistics operations in advance. The first objective of the project is to develop models to predict the allocation of monetary donations to beneficiaries, according to the socio-demographic characteristics of the region and of the beneficiary, as well as the characteristics of the fires (severity and type). The second objective is to understand the factors that significantly affect the CRC’s needs when responding to a wildfire, in order to facilitate the planning of logistics operations and funding appeals.

    Bhargav Kanuparthi

    Supervised by: Yoshua Bengio

    Université de Montréal

    h-detach: Modifying the LSTM Gradient Towards Better Optimization

    Recurrent neural networks are known for their notorious exploding and vanishing gradient problem (EVGP). This problem becomes more evident in tasks where the information needed to correctly solve them exists over long time scales, because EVGP prevents important gradient components from being back-propagated adequately over a large number of steps. We introduce a simple stochastic algorithm (h-detach) that is specific to LSTM optimization and targeted towards addressing this problem. Specifically, we show that when the LSTM weights are large, the gradient components through the linear path (cell state) in the LSTM computational graph get suppressed. Based on the hypothesis that these components carry information about long term dependencies (which we show empirically), their suppression can prevent LSTMs from capturing them. Our algorithm (code available at https://github.com/bhargav104/h-detach) prevents gradients flowing through this path from getting suppressed, thus allowing the LSTM to capture such dependencies better. We show significant improvements over vanilla LSTM gradient based training in terms of convergence speed, robustness to seed and learning rate, and generalization using our modification of LSTM gradient on various benchmark datasets.
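
    The core trick can be sketched in a few lines (simplified from the description above; the authors' exact implementation is in the linked repository): with some probability the gradient path through the previous hidden state is blocked with detach(), while the cell-state path keeps its full gradient:

        # Stochastically detaching the hidden-state gradient path of an LSTM.
        import torch

        cell = torch.nn.LSTMCell(8, 16)
        h, c = torch.zeros(1, 16), torch.zeros(1, 16)
        inputs = torch.randn(20, 1, 8)

        p_detach = 0.25                           # illustrative detach probability
        for x_t in inputs:
            h_in = h.detach() if torch.rand(1).item() < p_detach else h
            h, c = cell(x_t, (h_in, c))           # c keeps its gradient path intact

        loss = h.sum()
        loss.backward()                           # linear (cell-state) path preserved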

    Vincent Labonté

    Supervised by: Michel Gagnon

    Polytechnique Montréal

    Knowledge extraction from French texts based on translation into English combined with tools developed for English

    Plusieurs institutions gouvernementales rendent disponible sur leurs sites web un très grand volume de documents qui ne sont écrits que dans la langue officielle du pays. Or, de plus en plus ces institutions désirent transformer ces documents en une base de connaissances, déployée en un ensemble de données ouvertes intégrées au Web sémantique. C’est le cas notamment du ministère de la Culture et des Communications du Québec, qui met à la disposition du public un répertoire du patrimoine culturel du Québec, très riche en informations textuelles, mais qu’il est malheureusement difficile d’intégrer aux données des autres acteurs culturels du Québec, ou de lier à toutes les connaissances patrimoniales qui sont déjà présentes dans le réseau de données ouvertes Linked Open Data (LOD).

    Plusieurs travaux ont déjà été proposés pour soutenir l’effort d’extraction de connaissances à partir de textes : des annotateurs sémantiques, qui identifient dans un document les entités qui y sont citées (personnes, organisations, etc.) et les lient à leur représentation dans une base de connaissances du LOD; des extracteurs de relations, capables d’extraire du texte des relations entre deux entités (par exemple, « X est l’auteur du roman Y »); des extracteurs d’événements et d’informations temporelles. Dans la très grande majorité des cas, ces outils ont été développés pour l’anglais, ou offrent de piètres performances lorsqu’appliqués au français.

    Nous proposons donc d’explorer une approche qui consiste à produire, à partir d’un corpus de documents en français, une version équivalente traduite sur laquelle seront appliqués les outils déjà existants pour l’anglais (le service Syntaxnet de Google, par exemple). Cela implique qu’il faudra tenir compte des erreurs et inexactitudes qui résulteront de l’étape de traduction. Pour y arriver, des techniques de paraphrase et de simplification de texte seront explorées, l’hypothèse ici étant que des phrases simples sont plus faciles à traduire et que cette simplification n’aura pas d’impact majeur sur la résolution de la tâche si la sémantique est préservée lors de cette simplification. On notera aussi que certains aspects de la langue, comme l’anaphore, perturbent la traduction (le module de traduction aura du mal à choisir entre les pronoms « it » et « he » pour traduire le pronom « il »). Il faudra dans ces cas mesurer précisément leur impact et proposer des solutions de contournement.

    In short, the proposed project will determine to what extent currently available translation services preserve enough of a text's meaning to allow tools developed for another language to be exploited. The hypothesis we wish to validate is that their shortcomings can be compensated by certain preprocessing steps applied to the original text, and that this preprocessing can be implemented at low cost (in time and resources).

    Thomas MacDougall

    Supervised by: Sébastien Lemieux

    Université de Montréal

    Use of Deep Learning Approaches in the Activity Prediction and Design of Therapeutic Molecules

    The proposed research will employ deep learning and neural networks, both fields of machine learning, to more accurately predict the effectiveness, or "activity", of potential therapeutic molecules (potential drugs). We are primarily concerned with predicting a given molecule's ability to inhibit the growth of primary patient cancer cells (cells taken directly from a patient). The Leucegene project at the Institut de Recherche en Immunologie et Cancérologie (IRIC) has tested the activity of a large number of compounds in inhibiting the growth of cancer cells from patients afflicted with acute myeloid leukemia. The proposed research will use this activity data, along with several other data sources, to build an algorithm that can better predict the effectiveness a molecule will have in inhibiting cancer cell growth. This means that before a molecule is even synthesized in a chemistry lab, a good estimate of its effectiveness as a therapeutic compound can be made, almost instantly. The first approach is to use neural networks and "representation learning", in which the features of a molecule that matter for activity are identified automatically by the algorithm; this will be done by representing molecules as graphs and networks. Another approach is "multi-task learning", in which the prediction accuracy of an algorithm can be improved when the same algorithm is trained on multiple tasks over multiple datasets; here the tasks are multiple, but related, drug targets that are essential to cancer cell growth (see the sketch below). Moving beyond activity prediction alone, these machine learning architectures will be extended to design new chemical structures for potential drug molecules, based on what is learned from drug molecules with known activities. These approaches have the capacity to improve predictions of whether molecules will make effective drugs, and to design new molecules that are even more effective than known drugs. Research progress in this area will lower the cost, in both money and time, of the drug development process.
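    Since the paragraph above leans on multi-task learning, here is a minimal sketch of that idea (layer sizes and names are hypothetical, not from the project): a shared trunk learns one molecule representation, and a separate output head predicts activity against each related drug target.

        # Sketch only: shared representation with one prediction head per target.
        import torch
        import torch.nn as nn

        n_features, n_tasks = 128, 4  # e.g., descriptor size and number of targets

        shared = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        heads = nn.ModuleList([nn.Linear(64, 1) for _ in range(n_tasks)])

        x = torch.randn(32, n_features)          # a batch of molecule descriptors
        y = torch.randn(32, n_tasks)             # one activity label per task
        rep = shared(x)                          # representation shared by all tasks
        preds = torch.cat([head(rep) for head in heads], dim=1)
        nn.functional.mse_loss(preds, y).backward()  # joint loss over all tasks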

    Bhairav Mehta

    Supervised by: Liam Paull

    Université de Montréal

    Attacking the Reality Gap in Robotic Reinforcement Learning

    As Reinforcement Learning (RL) becomes an increasingly popular avenue of research, one area that stands to be revolutionized is robotics. However, one prominent downside of applying RL in robotics scenarios is the amount of experience today's RL algorithms require to learn. Since these data-intensive policies cannot be learned on real robots due to time constraints, researchers turn to fast, approximate simulators. Trading accuracy for speed can cause problems at test time: policies fall prey to the reality gap, the differences between the training simulation and the real-world robot, and fail to transfer. Our project focuses on theoretically analyzing this issue and provides practical algorithms to improve safety and robustness when transferring robotic policies out of simulation. We propose algorithms that use expert-collected robot data to learn a simulator, allowing for better modeling of the testing distribution and minimizing the reality gap upon transfer. In addition, we study the transfer problem using analysis tools from dynamical systems and continual learning research, looking for indicators in neural network dynamics and optimization that signal when the reality gap is likely to pose an issue. Lastly, we use this analysis to synthesize an algorithm that optimizes for the metrics signaling good, "transferable" policies, allowing safer and more robust sim-to-real transfer.

    Timothy Nest

    Supervised by: Karim Jerbi

    Université de Montréal

    Leveraging Machine Learning and Magnetoencephalography for the Study of Normal and Atypical States of Consciousness

    Understanding the neural processes and network dynamics underlying conscious perception is a complex yet important challenge that lies at the intersection of cognitive brain imaging, mental health, and data science. Magnetoencephalography (MEG) is a brain imaging technique with many qualities favorable to investigating conscious perception, thanks to its high temporal resolution and high signal-to-noise ratio. However, MEG analysis across space, time and frequency is challenging due to the extremely high dimensionality of the variables of interest and the susceptibility to overfitting. Furthermore, high computational complexity limits the ease with which investigators can approach, across the whole brain, some of the cross-frequency coupling metrics believed to be important for conscious perception and integration. To mitigate such challenges, researchers frequently rely on a variety of multivariate feature extraction and compression algorithms. However, these techniques still require substantial tuning and are limited in their application to the kinds of high-order tensor structures encountered in MEG. New methods for the study of conscious perception with MEG are thus needed.

    In this project, we will leverage very recent advances in computer science and machine learning that extend algorithms currently used in neuroimaging research to extremely high-dimensional spaces. Taken together, the proposed research will apply state-of-the-art techniques in machine learning and electrophysiological signal processing to overcome current obstacles in the study of the brain processes that mediate conscious perception. This work will constitute an important contribution to neuroimaging methodology, neuropharmacology, and psychiatry. Beyond expanding our understanding of healthy cognition, this research may ultimately provide novel paths to the study of psychiatric disorders that involve altered conscious perception, such as schizophrenia.

    Jacinthe Pilette

    Supervised by: Jean-François Arguin

    Université de Montréal

    Searching for new physics at the Large Hadron Collider (LHC) using deep learning

    The Large Hadron Collider (LHC) lies at the heart of fundamental physics research. With its 27 km circumference, it is the largest and most powerful particle accelerator in the world, making it the best tool for studying the infinitely small. It is at the LHC that the Higgs boson was discovered, leading to the 2013 Nobel Prize in Physics.

    However, the Standard Model, the reference that dictates the laws governing particles and their interactions, has several gaps that physicists have still not managed to fill. Several theories have been developed, but none of them has been observed at the LHC. Faced with this challenge, the particle physics community will have to adopt a new approach.

    The ATLAS group at the Université de Montréal has thus turned to artificial intelligence. The project developed by this collaboration, and the main objective of this research, is to build a deep learning algorithm capable of detecting anomalies in the data. The algorithm will then be applied to data from the ATLAS detector in the hope of discovering signals of new physics and improving our understanding of the universe.

    Léa Ricard

    Supervised by: Emma Frejinger

    Université de Montréal

    Modeling the probability that a route is accepted in a ridesharing context

    Ridesharing touches on the frequently studied problems of vehicle routing, pickup and delivery with time windows, and dynamic dial-a-ride. However, very few studies consider a setting where drivers and passengers can reject a proposed route. While rejection of a proposed route is rare when drivers are professionals, it is the norm in a ridesharing context. Modeling the probability that a route is accepted is therefore a central problem in developing a high-quality ridesharing mobile application.

    The machine learning model to be developed will estimate, given the characteristics of the user (notably whether they are a driver or a passenger) and the alternative routes proposed, the probability that a route is accepted. At first glance, this modeling poses two challenges:

    (1) The way acceptances and refusals are collected poses a logged-bandit problem. Several offers can be presented at the same time, and a user can accept more than one. Moreover, offers can be actively refused, simply ignored, or accepted. Since offers are displayed sequentially, those appearing first are more likely to attract the user's attention, so the order of the offers likely influences the acceptance probability (see the sketch after this list).
    (2) The behavior of new users, for whom very little information is available, will have to be inferred from similar long-standing customers. This is in itself a difficult problem.
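    A minimal sketch of such an acceptance model (all feature names and coefficients are hypothetical): a logistic model in which the display rank of each offer is an explicit feature, so position bias is estimated rather than absorbed into the other effects.

        # Sketch only: logistic acceptance model with an explicit position term.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000
        X = np.column_stack([
            rng.integers(0, 2, n),     # is_driver: 1 if the user is a driver
            rng.exponential(10.0, n),  # detour_minutes: extra travel time of the offer
            rng.integers(1, 6, n),     # display_rank: position of the offer in the list
        ])
        # Synthetic labels: acceptance decays with detour time and display rank.
        logits = 0.5 * X[:, 0] - 0.08 * X[:, 1] - 0.4 * X[:, 2] + 1.0
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

        model = LogisticRegression().fit(X, y)
        print(model.coef_)  # the display_rank coefficient captures position bias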

    Alexandre Riviello

    Supervised by: Jean-Pierre David

    Polytechnique Montréal

    Hardware Acceleration of Speech Recognition Algorithms

    Speech recognition has become prevalent in our lives in recent years; personal assistants such as Amazon's Alexa and Apple's Siri are familiar examples. With the rise of deep learning, speech recognition algorithms have gained a lot of precision, due mostly to the use of neural networks. These complex algorithms, used in the context of a classification task, can distinguish between different characters, phonemes or words. However, they require a great deal of computation, limiting their use in power-constrained devices such as smartphones. In my research, I will attempt to find hardware-friendly implementations of these networks. Deep learning algorithms are usually written in high-level languages using frameworks such as Torch or TensorFlow; to generate hardware-friendly representations, models will be adapted using these frameworks. For example, recent findings have shown that basic networks can use weights and activations represented with 1 or 2 bits and retain their accuracy. Reducing the precision of the network parameters is called quantization, and this concept will be one of several ways used to simplify the networks. Another aspect of this research will be to revisit methods of representing voice features. Traditionally, spoken utterances are converted to Mel Frequency Cepstrum Coefficients (MFCCs), which are essentially values representing signal power over a frequency axis. These coefficients are calculated roughly every 10 ms and are then fed to the network. A lower-precision representation can greatly reduce the computational cost of the network. The overall goal of the research is to improve the calculation speed and reduce the power consumption of speech recognition algorithms.
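    As an illustration of the 1-bit case mentioned above, here is a minimal sketch of binarization with a straight-through gradient estimator (a common technique in the quantization literature, not necessarily this project's exact method):

        # Sketch only: binarize values to {-1, +1} with a straight-through estimator.
        import torch

        class BinarizeSTE(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x):
                ctx.save_for_backward(x)
                return torch.sign(x)

            @staticmethod
            def backward(ctx, grad_out):
                (x,) = ctx.saved_tensors
                # Pass gradients through only where |x| <= 1.
                return grad_out * (x.abs() <= 1).float()

        x = torch.randn(4, requires_grad=True)
        y = BinarizeSTE.apply(x)
        y.sum().backward()
        print(y, x.grad)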

    Doctoral excellence scholarships

    Lluis E. Castrejon Subira

    Supervised by: Aaron Courville

    Université de Montréal

    Self-Supervised Learning of Visual Representations from Videos.

    Francis Banville

    Supervised by: Timothée Poisot

    Université de Montréal

    Ecological interaction networks and climate change: inference and modeling using machine learning techniques.

    Avishek Bose

    Supervised by: William Hamilton

    McGill University

    Domain Agnostic Adversarial Attacks for Security and Privacy.

    Elodie Deschaintres

    Supervised by: Catherine Morency

    Polytechnique Montréal

    Modeling interactions between transportation modes by integrating different data sources.

    Laura Gagliano

    Supervised by: Mohamad Sawan

    Polytechnique Montréal

    Artificial Neural Networks and Bispectrum for Epileptic Seizure Prediction.

    Ellen Jackson

    Supervised by: Hélène Carabin

    Université de Montréal

    Evaluation of a Directed Acyclic Graph for Cysticercosis using Multiple Methods.

    Mengying Lei

    Supervised by: Lijun Sun

    McGill University

    Spatial-Temporal Traffic Pattern Analysis and Urban Computation Applications based on Tensor Decomposition and Multi-scale Neural Networks.

    Tegan Maharaj

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Deep ecology: Bringing together theoretical ecology and deep learning.

    Antoine Prouvost

    Supervised by: Andrea Lodi

    Polytechnique Montréal

    Learning to Select Cutting Planes in Integer Programming.

    Matthew Schlegel

    Supervised by: Martha White

    University of Alberta

    Representing the World Through Predictions in Intelligent Machines.

    Jing Xu

    Supervised by: Guillaume-Alexandre Bilodeau

    Polytechnique Montréal

    Computer Vision for Safe Interactions between Humans and Intelligent Robots.

    Internship grants: Data to tell

    Olivia Gélinas

    Polytechnique Montréal

    Internship at Le Devoir, specializing in data science.

    Justine Pépin

    Polytechnique Montréal

    Internship at Le Devoir, specializing in data science.

    Sandrine Vieira

    UQAM

    Internship at Le Devoir, specializing in communication.

    Postdoc-entrepreneur program

    Selçuk Güven

    Supervised by: Philippe Langlais

    Université de Montréal

    AI-based screening and diagnosis of speech and language disorders in children

    Company: LinguAI

    The goal of this project is to create a web platform where clinicians are guided in differentiating speech and language disorders in children simply by answering a few background questions and by analyzing children's speech samples for speech and language errors. The proposed solution will use speech recognition that is sufficiently accurate on disordered speech, to be developed during this project, after which an algorithm will be deployed for detailed error analysis. Part of this project was the fellow's postdoctoral project. The long-term goal is to make this tool accessible to the parents of these children as well.

    Muhammad Sohail

    Supervised by: Sébastien Lemieux

    Université de Montréal

    Development of machine learning-based tools to support the therapeutic modulation of alternative splicing

    Company: BioBenchAI

    Human genes are made up of protein-coding sequences (exons) interrupted by non-coding sequences (introns). The coding sequences are joined in different combinations during an important step called alternative splicing (AS). In disease, AS is often dysregulated, but the functional impact of such dysregulation, and the strategies to correct it, remain largely unknown because of its complexity and the lack of computational tools for analyzing large AS datasets. We propose to develop a set of computational tools based on machine learning algorithms that will allow us to understand the impact of AS dysregulation in pathogenesis, and thus help develop new therapeutic strategies targeting AS mechanisms.

    Postdoctoral research funding

    Winter

    Jhelum Chakravorty

    Supervised by: Doina Precup

    McGill University

    Temporal abstraction in multi-agent environments

    Temporal abstraction refers to the ability of an intelligent agent to reason, act and plan at multiple time scales. The question of how to obtain and reason with temporally abstract representations has been extensively studied in classical planning and control theory, and more recently it has become an important topic in reinforcement learning, especially through the framework of options. The theoretical development of options is based on the framework of Semi-Markov Decision Processes (SMDPs), in which an agent interacts with its environment by observing states and taking actions. As a result of an action, the agent receives an immediate reward and transitions to a new state drawn from some distribution, after a period of time that is also drawn stochastically. Both the state and the dwell-time distribution depend only on the agent's state and action. However, in many cases of practical importance, an agent may face more general environments, in which the environment may be partially observable, or there may be multiple agents acting in the environment. For example, in energy markets or in transportation there may be many agents interacting with each other and making decisions without being able to observe relevant information except at specific time points.

    We propose to focus on establishing a mathematical framework for temporal abstraction that works in Decentralized Partially Observable Markov Decision Processes. In a multi-agent system, agents take decisions and exchange their information at designated decision epochs. In general, the decision epochs are given by the realizations of a random sequence. Instead of looking at every instant of time at which an action is taken by an agent, we are interested in Decentralized Semi-Markov Decision Processes (Dec-SMDPs), in which a Partially Observable Markov Decision Process (POMDP) corresponding to an agent is embedded between any two successive decision epochs. Between two such decision epochs, each agent chooses actions so as to maximize the total return over a finite or infinite horizon, i.e., it solves a POMDP problem. The optimal decision epochs are chosen based on a given criterion, e.g., exchanging information at goal states fixed a priori, or when the increase in reward since the last decision epoch falls below a threshold. The overall performance to be maximized through such sequential decision making consists of two rewards: the exchange of information is encouraged by an extrinsic reward, along with an intrinsic reward that is maximized between two consecutive decision epochs.

    We would like to investigate two aspects of this problem setup. First, we are interested in formally establishing the framework of Partially Observable Semi-Markov Decision Processes and its extension to decentralized (multi-agent) problems. We would like to investigate whether, under certain simplifying assumptions in the planning problem, the posterior beliefs (i.e., beliefs about the state of the environment based on past information and the current action) exhibit certain monotonicity and symmetry properties, so that we can infer the structure of optimal policies. We also want to establish the general Dec-SMDP framework for modeling this problem and characterize its properties in comparison with SMDPs. In the subsequent analysis, we would like to investigate learning algorithms for these families of problems. We will build on standard reinforcement learning algorithms for temporal abstraction, such as option-critic, and provide extensions for our case that are consistent with the theoretical characterization of these problems. We will also examine the performance of both value-function-based and policy-gradient-style algorithms in this context. We will compare the results obtained using our framework to results in which each agent ignores the others and only tries to optimize its own reward myopically. We will use both standard small simulated problems from the multi-agent literature, designed to emphasize specific aspects, and larger-scale domains that correspond to simulated transportation and energy markets, where multiple agents work in a cooperative setting to achieve a common task in a decentralized manner, e.g., self-driving cars and smart grids. In such applications the agents occasionally communicate among themselves, using common information to update a belief about the state of the world and local information to decide their individual policies and the termination of those policies.
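    For orientation, the SMDP setting sketched above is usually summarized by a Bellman equation in which the stochastic dwell time \(\tau\) enters through the discount (standard options-framework notation, assumed here rather than taken from the project description):

        Q^*(s, a) = R(s, a) + \sum_{s', \tau} \gamma^{\tau} \, P(s', \tau \mid s, a) \, \max_{a'} Q^*(s', a')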

    Eugene Belilovsky

    Supervised by: Aaron Courville

    Université de Montréal

    Towards Learning Language Based Navigation in Visually Rich 3-D Environments

    A long-term goal of artificial intelligence and robotics is a robot able to perform manual tasks by understanding language instructions or questions and using visual and other sensory input to navigate and interact in a complex environment to achieve its goals. Advances in machine learning have succeeded in important perceptual sub-tasks of this problem: object recognition, speech recognition, and natural language processing, among others. However, how to integrate these successes with sequential decision making and multi-modal reasoning across language, vision, and other modalities is an open question that has been difficult to study. Very recently, visually rich 3-D simulations and tasks have emerged, aimed at enabling the development of algorithms for learning language-directed navigation of robots. Even in these constrained simulations, straightforward application of existing machine learning and reinforcement learning techniques is unable to effectively tackle this new set of challenges. We aim to develop methods for these problems, focusing on visual relational reasoning and ideas from human learning. We also strive to advance the nascent evaluation methodology for these algorithms. Besides making steps towards our ambition of creating intelligent agents, methods developed to solve these tasks can be directly applied in household automation, robotic assistants, manufacturing, and autonomous driving.

    Glen Berseth

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Visual Imitation Learning With Partial Information

    For many control and decision-making tasks, it is difficult to describe the desired behaviour we hope to elicit from a robot. Many complex tasks that we want robots to perform depend on a skill that people acquire at a young age: imitation. The ability of animals to learn from demonstrations has triggered research across many disciplines. This work will push the possibilities of imitation learning by creating methods that allow robots to learn from diverse video demonstrations. Of particular interest are skills that involve interaction with objects in the real world. Imitation learning is a hard problem, but also a very important one: if we make enough progress, ordinary people could program robots by providing a few demonstrations of the desired task in the real world.

    Ricardo de Azambuja

    Supervised by: Giovanni Beltrame

    Polytechnique Montréal

    High Fidelity Data Collection for Precision Agriculture with Drone Swarms

    Projections from the United Nations show that by 2050 we will need to produce 70% more food. However, agriculture already takes up over 38% of the land and is the largest user of freshwater in the world. What can we do to improve the way food is produced? We propose high-precision agriculture: using big data to support decisions, increasing productivity and reducing the use of land, water, fertilizers, pesticides, herbicides and fungicides. More intelligent methods also benefit biodiversity, changing the way natural resources are managed from a one-size-fits-all approach to tailor-made solutions. Yet traditional data sources are known to have limited resolution, and even low-altitude remote sensing (e.g., airplanes or unmanned aerial vehicles, UAVs) can only see from a fixed perspective: above. Additionally, according to PwC there is a $32.4bn market for UAVs in the agriculture industry. This project proposes to improve productivity and sustainability by increasing the precision of the data collected, down to the individual plant level, using autonomous micro aerial vehicle swarms powered by artificial intelligence (deep convolutional neural networks) and capable of flying among crops (e.g., corn, soybean and oats). With the high-resolution data collected by a swarm of small, cost-effective drones, farmers will be able to take advantage of the machine learning technology already available to optimize food production, maximize yield and minimize environmental impact.

    Elias Khalil

    Supervised by: Andrea Lodi

    Polytechnique Montréal

    New Frontiers in Learning for Discrete Optimization

    In addressing current and future societal needs, both the public and private sectors are deploying increasingly complex information and decision systems at unprecedented scales. The algorithms underlying such systems must evolve and improve rapidly to keep up with the pace. This project focuses on algorithms for Discrete Optimization, a widely used tool for decision-making and planning in industrial applications. The goal is to devise Machine Learning (ML) methods that streamline the process of algorithm design for discrete optimization, particularly in new, uncharted domains where classical paradigms may not be effective.

    Kazuya Mochizuki

    Supervised by: Jean-François Arguin

    Université de Montréal

    Deep Learning to understand the LHC data

    What is our universe made of? To answer this question with current technology, the Large Hadron Collider (LHC) has been operating stably since 2009 and has collected data on an enormous number of proton-proton collisions, O(10^16) per year. The protons are accelerated to nearly the speed of light, and each collision reproduces the high-energy state our universe had right after the Big Bang. Research on fundamental particles under such high-energy conditions is very important for better understanding the laws of our universe, which might tell us the future of our cosmos. A single collision at the LHC produces O(1000) particles, whose data are collected via millions of readout channels from the detector. The data to be analyzed at the LHC therefore amount to O(30 PB/year) and are complex, making them a suitable target for applying and studying machine learning (ML) techniques. However, many areas of the analyses have yet to be improved using advanced ML algorithms such as Deep Learning (DL). This project will accelerate the application of ML/DL to several aspects of LHC data analysis, with particular focus on particle identification and data quality evaluation, in order to support a potential discovery of new particles.

    Jonathan Porée

    Supervised by: Jean Provost

    Polytechnique Montréal

    Super-resolved ultrasound myocardial angiography

    Cardiovascular diseases are responsible for more than 30% of deaths worldwide, of which more than 7 million each year are attributable to coronary artery disease. In patients with known or suspected coronary disease, imaging is often the first step in diagnosis. Unfortunately, no non-invasive technique currently makes it possible to map the anatomy and function of the intramyocardial vessels supplying the heart. The development of ultrafast ultrasound scanners has recently enabled a new super-resolution angiography method, based on the detection of injected microbubbles, capable of mapping blood vessels at the capillary scale (<10 µm). This technique cannot, however, be directly applied to the heart, since it still requires several minutes of acquisition and is very sensitive to motion. Our main objective is to develop an ultrasound system for super-resolved 3D mapping of the intramyocardial microvasculature using machine learning, aimed at the early diagnosis of coronary artery disease. The use of recurrent neural networks should make it possible to predict the structure and parameters of the vascular network, improving patient prognosis while minimizing the complexity of the examinations.

    Sharan Vaswani

    Supervised by: Simon Lacoste-Julien

    Université de Montréal

    Theoretical Understanding of Deep Neural Networks

    Deep neural networks have led to state-of-the-art results in a wide range of applications including object detection, speech recognition, machine translation and reinforcement learning. However, the optimization techniques for training such models are not well-understood theoretically. Furthermore, it is unclear how the optimization procedure affects the ability of these models to generalize to new data. In this project, we propose to design scalable theoretically-sound optimization algorithms exploiting the underlying structure of deep networks. We also plan to investigate the interplay between optimization and generalization for these models. We hope that this project will result in improved methods for training deep neural networks.

    Simon Verret

    Supervised by: Yoshua Bengio

    Université de Montréal

    Deep learning for the electronic properties of quantum materials

    Some materials have properties that can only be explained by the laws of quantum physics: these are quantum materials. It is often difficult to compute theoretical predictions of their properties, as is the case for high-critical-temperature superconductivity or topological phases of matter. This slows research on these materials, and hence the development of new technologies. In collaboration with the Institut Quantique (IQ) in Sherbrooke, this project uses state-of-the-art deep learning methods to improve our prediction tools for quantum materials. On the one hand, we seek to improve recent deep-learning-based advances in so-called ab initio calculations, which compute the electronic and chemical properties of molecules and crystals from their atomic configuration alone. On the other hand, we seek to integrate deep learning into state-of-the-art methods for strongly correlated electrons, that is, for materials where the configuration alone is not enough because the electrons interact strongly. This is the very first collaboration between IVADO and the IQ, and it will develop cutting-edge artificial intelligence expertise for modeling quantum materials.

    Marzieh Zare

    Supervised by: Karim Jerbi

    Université de Montréal

    AI-powered investigation of the complex neuronal determinants of cognitive capacities in health, aging, and mild cognitive impairment

    Cognitive abilities and mental performance evolve across the life-span and are affected by normal and pathological aging. Understanding how brain function changes with age and how its dynamics relate to cognitive capacities or impairments would greatly contribute to the general well-being of the population and reduction of the economic burden of neurodegenerative diseases. In particular, discovering neural markers of cognitive function and predictors of dysfunction is a particularly important research goal in societies with aging populations like Canada. By combining data analytics and state-of-the-art brain signal analyses, this project aims to reveal the link between complex neural dynamics and cognitive capacities and to assess this relationship in the context of normal and pathological aging. Metrics of neural complexity and non-linear brain dynamics will be probed in large data sets consisting of neuropsychological and electrophysiological (EEG and MEG) data, including sleep EEG data collected in elderly patients with mild cognitive impairment (MCI). In order to exploit putative basic and clinical applications, both shallow learning and deep learning will be used. Furthermore, by exploring new ways to embed realistic brain network properties into deep architectures this research may also lead to novel biologically-inspired artificial neural networks that may be useful outside neuroscience.

    Fall

    Valentina Borghesani

    Supervised by: Pierre Bellec

    Université de Montréal

    How do we know what we know: neuropsychology, neuroimaging, and machine learning unraveling the neuro-cognitive substrate of semantic knowledge.

    Human intelligence has two key components: the ability to learn and that of storing a representation of what has been learned. A deeper understanding of how semantic representations are instantiated in biological neural networks (BNNs) will have a two-fold beneficial impact on society. First, it will improve clinical practice providing better diagnostic and prognostic tools for patients with impaired semantic processing. Second, it will inform the development of human-like representations in artificial neural networks (ANNs), leading towards general artificial intelligence. Through a multidisciplinary approach that includes experimental psychology, cognitive neuroimaging, and machine learning, we will shed light on how semantic representations (1) vary across individuals – both healthy volunteers and neurodegenerative patients, (2) are encoded in the brain – thanks to functional magnetic resonance imaging and magnetoencephalography, and (3) can generalize across tasks and stimuli modalities – enabling human adaptive behaviors. The extensive multimodal dataset we will acquire and analyze with state-of-the-art analytical tools will thus pave the way to groundbreaking scientific discoveries for both BNNs and ANNs.

    Nicolas Loizou

    Supervised by: Ioannis Mitliagkas

    Université de Montréal

    Optimization Algorithms for Machine Learning and Deep Learning

    In this project, we are interested in the development of efficient algorithms for solving convex and non-convex optimization problems. Convex optimization lies at the heart of many classical machine learning tasks, and one of our goals is the development of provably convergent algorithms for solving structured convex optimization problems. Interesting directions: what are the weakest assumptions that guarantee convergence of optimization algorithms like Adam, Adagrad, and SGD with momentum? What is the optimal mini-batch size for these algorithms? What is the optimal choice of learning rate and momentum parameter? What is the optimal sampling? Deep neural networks (DNNs) are the state-of-the-art machine learning approach in many application areas, yet the optimization methods used for training such models are not well understood theoretically. We are therefore also interested in the design of novel optimization algorithms that exploit the underlying structure of DNNs. Interesting directions: can we theoretically explain the heuristics (stagewise step sizes, batch normalization, etc.) used in training DNNs? Is it possible to design methods that generalize well to new data by studying the loss landscape of DNNs? Can we design efficient distributed data-parallel algorithms that accelerate the training of DNNs?
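    For reference, the SGD-with-momentum iteration these questions concern can be written as follows (standard notation, with step size \(\gamma\), momentum \(\beta\), and a mini-batch \(i_t\) sampled at step \(t\); not taken from the project description):

        v_{t+1} = \beta v_t + \nabla f_{i_t}(w_t), \qquad w_{t+1} = w_t - \gamma \, v_{t+1}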

    Alexandra Luccioni

    Supervised by: Yoshua Bengio

    Université de Montréal

    Using Generative Adversarial Networks to Visualize the Impacts of Climate Change

    It is difficult to overstate the importance of fighting climate change. A recent report from the Intergovernmental Panel on Climate Change determined that dramatic and rapid changes to the global economy are required in order to avoid mounting climate-related risks for natural and human systems. However, public awareness of and concern about climate change often do not match the magnitude of the threat to humans and our environment. A primary reason for this is cognitive biases that downplay the importance of effects we don't see or experience personally. Therefore, making abstract predictions of climate change impacts understandable, relatable, and well communicated is vital to overcoming the barriers to public awareness and action with regard to climate change. To contribute to overcoming these challenges, we propose to use a Generative Adversarial Network (GAN) to simulate imagery of the impact that climate-change-induced flooding will have on buildings and houses in North America. Our GAN can then be hosted on the Web and used as a tool to help the public understand, both rationally and viscerally, the consequences of not taking sufficient action against climate change.

    Jiaxin Mao

    Supervised by: Jian-Yun Nie

    Université de Montréal

    User Behavior Modeling for Intelligent Information Systems

    Intelligent information systems such as search engines, recommender systems, digital assistants, and social chatbots are ubiquitous today. Machine learning algorithms are the core components of these systems. Therefore, the development of more sophisticated machine learning models for the next generation of intelligent information systems relies on the amount and quality of the training data. As a by-product of operating these systems, we can log a large amount of user interaction data and use it to train and optimize the machine learning models. For example, users’ clicks can be used as implicit relevance feedback to optimize Web search engines. However, optimizing the information system with observed user behavior logs is a non-trivial task as they only provide implicit and noisy signals and depend strongly on context. This project addresses this problem by first building reliable and generalizable user behavior models from the observed user behavior log and then utilizing them to optimize the intelligent information systems. This project will advance the research and development of intelligent information systems by solving the bottleneck of data availability.
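    One standard way to formalize the noisy click signal described above is the examination hypothesis from click modeling (a common assumption in this literature, not a claim about this project's exact model): a click requires that the result be examined, which depends on its rank r, and judged relevant, which depends on the document d:

        P(C = 1 \mid d, r) = P(E = 1 \mid r) \cdot P(R = 1 \mid d)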

    Tangi Migot

    Supervised by: Dominique Orban

    Polytechnique Montréal

    Large scale optimization solvers in Julia for data science

    Over the years, the study of algorithms for solving optimization problems has become the foundation of data science and, by extension, of its many applications in key sectors such as health, transportation, energy and finance. Today, new challenges involve an ever-growing quantity of data to process as well as increasingly difficult models. This project aims at new advances in numerical tools in Julia for solving very large-scale, complex optimization problems. In this study, we consider two examples: optimization problems with partial differential equation constraints, and optimization problems with degenerate constraints, which arise in particular in the study of game theory.

    Jeremy Nadal

    Supervised by: François Leduc-Primeau

    Polytechnique Montréal

    Machine learning for energy-efficient massive MIMO systems

    The year 2020 will mark the start of the large-scale deployment of the fifth generation of cellular networks. However, the question of the energy impact of this new generation of networks arises. Currently, more than 70% of operators' energy costs come from radio infrastructure, and this energy bill will grow with the introduction of millimeter-wave communications. Thanks to the use of a large number of antennas at the base station, it is theoretically possible to increase the energy efficiency of the system. However, the transmission chains must be duplicated, which in practice consumes an enormous amount of energy. Many solutions have been proposed in the literature, but they require high computational power, with no guarantee of optimal performance. Based on this observation, the objective of this project is to study and propose new energy-efficient, high-performance and implementable solutions for applying energy-reduction techniques to multi-antenna systems. Deep neural networks are a promising technology for solving such complex problems: one of their great strengths lies in their ability to learn the specificities of the real operating environment, and their use in telecommunications is facilitated by the ease of generating large training datasets.

    Sebastien Paquette

    Supervised by: Alexandre Lehmann

    Université de Montréal

    Decoding auditory perception in cochlear implants users with machine learning

    Predicting outcomes and personalizing care have long been significant challenges in health research. One area where little progress has been made concerns cochlear implants (CI), which can restore hearing in the deaf. However, clinical outcomes (speech and emotion perception) vary greatly across implantees, without a clear picture as to why. Thanks to progress in machine learning, it is believed that outcomes could be improved by identifying the specific neuro-functional markers of CI use. To address this issue, we aim to identify the neural mechanisms underlying impaired auditory processes in CI users, with an initial focus on emotion perception deficits. For this, machine learning will be used to integrate neuroimaging and acoustical data from empirical experiments into predictive models. An extensive EEG data set of brain responses elicited by emotional sounds in CI and normal-hearing participants will be analyzed. For each group, we will identify (1) the pattern of brain responses that can discriminate emotions and (2) the specific acoustic features (e.g., tempo, pitch) used for emotion perception. The identified neuro-markers will serve as a proof of concept toward the broader use of machine learning to improve the quality of life of CI users.

    Claudie Ratté-Fortin

    Supervised by: Jean-François Plante

    HEC Montréal

    Machine learning for modeling extreme events

    In a context of global warming, public administrations will have to maintain public safety and contain the socio-economic and environmental impacts of extreme natural events. The complexity of these events, as well as the imminent risks they pose to the population, requires increasingly sophisticated models to ensure adequate modeling of these phenomena. Using more advanced approaches such as machine learning algorithms would address this problem by increasing the precision of the estimates, while also handling the complexity of the problem, which grows with the dimensionality of the variables under study and the spatio-temporal dependence of the data. Predictive modeling is all the more crucial given that these phenomena are increasing in frequency, intensity and duration because of global warming. The objective of the project is to use machine learning algorithms to estimate the probabilities of occurrence of extreme events. Ultimately, management tools based on machine learning will be developed and tested for deployment. The main benefits include improved modeling of extreme events for better management of the economic, social and environmental risks associated with them.

    Wu Yuan-Kai

    Supervised by: Lijun Sun

    McGill University

    Deep Spatiotemporal Modeling for Urban Traffic Data

    Large volumes of spatiotemporal data are increasingly collected and studied in modern transportation systems. Spatiotemporal models for traffic data are critical components of a wide range of intelligent transportation systems (ITS), such as ride sharing, transit service scheduling, signal control, and disruption management. Spatiotemporal data exhibit complex attributes, which introduce numerous challenges that need to be dealt with. Despite the abundance of spatiotemporal modeling techniques developed in different domains, making full use of the characteristics of these datasets remains an open issue. The goal of this postdoc project is to develop new spatiotemporal models for urban traffic data based on deep learning and tensor learning. The specific objectives are to: (1) characterize the spatiotemporal propagation properties of traffic data with deep spatiotemporal neural networks; (2) decouple the interaction between external factors and traffic patterns with disentangled representations; (3) capture the strong regularity in collective travel behavior through low-rank tensor factorization; and (4) exploit cross-variable relationships with deep factor models. We will apply our models to large-scale, multivariate spatiotemporal data imputation and prediction. This project will lead to fundamental research advances in spatiotemporal modeling and urban intelligent transportation systems (ITS).
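    As a concrete instance of objective (3), a rank-R CP decomposition of, say, a (location × day × time-of-day) traffic tensor takes the form (notation assumed for illustration):

        \mathcal{X} \approx \sum_{r=1}^{R} u_r \circ v_r \circ w_r, \qquad x_{ijk} \approx \sum_{r=1}^{R} u_{ir} \, v_{jr} \, w_{kr}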

    Undergraduate research initiation grants

    Maxine Arcand-Lavigne

    Supervised by: Karim Jerbi

    Université de Montréal

    Data-mining sleep brain signals using machine-learning: effect of caffeine (EEG).

    Samuel Aguilar Lussier

    Supervised by: Éric Lécuyer

    Université de Montréal

    Development of machine learning approaches to predict the intracellular distribution of ribonucleic acids (RNAs).

    Viviane Aubin

    Supervised by: Miguel Anjos

    Polytechnique Montréal

    Optimization of hydroelectric resources for the integration of renewable energy.

    Olivier Caron-Grenier

    Supervised by: Numa Dancause

    Université de Montréal

    Adaptive cortical neuroprosthesis for neuromuscular control.

    Éliot Bankolé

    Supervised by: Olivier Bahn

    HEC Montréal

    Integrated assessment model "BaHaMa": development of a multiregional version.

    Guillaume Caza-Levert

    Supervised by: Jocelyn Dubuc

    Université de Montréal

    Automated feeding data to predict diseases in dairy calves.

    Karl Chemali

    Supervised by: Carole Fortin

    Université de Montréal

    Assessment of trunk movements in adolescents with and without idiopathic scoliosis.

    Léo Choinière

    Supervised by: Julie Hussin

    Université de Montréal

    Studying fine-scale population structure using neural networks.

    Gabriel Bisson-Grégoire

    Supervised by: Samuel Kadoury

    Polytechnique Montréal

    Classification of liver tumors using a convolutional neural network.

    Marise Bonenfant-Couture

    Supervised by: Michel Gagnon and Lyne Da Sylva

    Polytechnique Montréal & Université de Montréal

    A methodological and contextual analysis tool for scientific articles in mental health.

    Anas Bouziane

    Supervised by: Yann-Gaël Guéhéneuc

    Polytechnique Montréal

    Reclassification of logging systems: a machine learning approach.

    Florian Coustures

    Supervised by: Marc Fredette

    HEC Montréal

    Optimizing the calibration of neurophysiological measurements.

    Mathieu David-Babin

    Supervised by: Nicolas Vermeys

    Université de Montréal

    Searching for a predictive model of court decisions.

    Éric De Celles

    Supervised by: Marc Fredette

    McGill University

    Evaluating the information loss caused by visual or automated inspection of EEG data.

    Thomas Derennes

    Supervised by: An Tang

    Université de Montréal

    Predictive model of the response of colorectal cancer liver metastases to chemotherapy using artificial intelligence techniques.

    Andre Diler

    Supervised by: Samuel Kadoury

    Polytechnique Montréal

    Learning normalized inputs for iterative estimation on medical image segmentation.

    Paloma Fernandez-Mc Auley

    Supervised by: Christine Tappolet

    Université de Montréal

    Ethics and cognitive science of AI-manipulated attention.

    Jorge Luis Flores

    Supervised by: François Major

    Université de Montréal

    Discovering the RNA structural determinants of RNA-binding proteins.

    François Gauthier

    Supervised by: Marc Lavoie

    Université de Montréal

    Cerebral topography of the theta rhythm in emotion regulation: a pilot study in a population with schizophrenia.

    Roxanne Giorgi

    Supervised by: Marc Fredette

    HEC Montréal

    Modeling the periodicity of a raw signal from EEG data.

    Éric Girard

    Supervised by: Daniel Sinnett

    Université de Montréal

    Applying machine learning methods to improve treatments for pediatric cancers.

    Aurélie Guilbault

    Supervised by: Pascale Legault

    Université de Montréal

    RNA-protein interactions in microRNA maturation.

    Simon Guichandut

    Supervised by: Marina Martinez

    Université de Montréal

    Cortical control of motor recovery: a dynamical systems perspective.

    Simonne Harvey-Lavoie

    Supervised by: Annie-Claude Labbé

    Université de Montréal

    Lymphogranuloma venereum: risk factors and clinical presentation.

    Yikun Jiang

    Supervised by: Nathan Yang

    McGill University

    Machine Learning to Nudge Health Behaviours.

    Philippe Kavalec

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Mobility and communication performance in smart cities.

    Florence Landry Hould

    Supervised by: Philippe Jouvet

    Université de Montréal

    Validation of the automated Pediatric Logistic Organ Dysfunction-2 (aPELOD-2) multiple organ dysfunction score.

    Jean Laprés-Chartrand

    Supervised by: Fabian Bastin

    Université de Montréal

    Extensions of the stochastic gradient algorithm for estimating mixed logit models.

    Francis Leblanc

    Supervised by: Guillaume Lettre

    Université de Montréal

    Accessing chromatin interactions by high-resolution analyses of correlated regulatory element variation.

    Anthony Lemieux

    Supervised by: Anthony McGraw

    Université de Montréal

    Computational approaches to investigate heritable epigenetic dysregulation.

    Léa Lingaya

    Supervised by: Louis Doray

    Université de Montréal

    The occurrence of cyber risk incidents.

    Elizabeth Maurice-Elder

    Supervised by: Serge McGraw

    Université de Montréal

    Correcting heritable epigenetic dysregulation in embryonic cells through epigenome editing.

    Juliette Milner

    Supervised by: Marina Martinez

    Université de Montréal

    Multivariate optimization of a cortical neuroprosthesis for motor control.

    Justin Pelletier

    Supervised by: Julie Hussin

    Université de Montréal

    Characterization of the CYP4F family of pharmacogenes.

    Charles Piette

    Supervised by: Brunilde Sansò

    Polytechnique Montréal

    Project on IoT and cities.

    Man Qing Liang

    Supervised by: Aude Motulsky

    Université de Montréal

    Development of a syntactic parsing tool for structuring data related to electronic prescriptions.

    Marc Revol

    Supervised by: Emma Frejinger

    Université de Montréal

    Optimizing the use of CN's locomotive fleet.

    Alex Richard-St-Hilaire

    Supervised by: Julie Hussin

    Université de Montréal

    Detection of de novo mutations in cytochrome P450 genes.

    Masters excellence scholarships

    Larry Dong

    Supervised by: Erica Moodie

    McGill University

    When making decisions, medical professionals often rely on past experience and their own judgment. However, it is often the case that an individual decision-maker faces a situation that is unfamiliar to him or her. An adaptive treatment strategy (ATS) can help such biomedical experts in their decision-making, as it is a statistical representation of a decision algorithm for a given treatment that optimizes patient outcomes. ATSs are estimated from large amounts of data, but such data sources may be subject to unmeasured confounding, whereby important variables needed to ensure causal inference are missing. The idea behind this research project is to develop a sensitivity analysis to better understand and quantify the impact of unmeasured confounding on decision rules in ATSs.

    Jonathan Pilault

    Supervised by: Christopher Pal

    Polytechnique Montréal

    Language understanding and generation is a unique capacity of humans. Automatic summarization is an important task in natural (human) language processing: it consists in reducing the size of a discourse while preserving its information content. Abstractive summarization sets itself apart from other types of summarization since it most closely resembles how humans would summarize a book, a movie, an article or a conversation. From a research standpoint, automatic abstractive summarization is interesting since it requires models to both understand and generate human language. In the past year, research has improved the ability of neural networks to choose the most important parts of a discourse while beginning to address key pain points (e.g., repeated sentences, nonsensical formulations) during summary text generation. Recent techniques for image generation in computer vision have shown that image quality can be further improved using Generative Adversarial Networks (GANs). Our intuition is that the same holds for natural language processing tasks. We propose to incorporate the newest GAN architectures into some of the most novel abstractive summarization models to validate our hypothesis. The objective is to create a state-of-the-art summarization system that most closely mimics human summarizers. This outcome will also bring us closer to understanding GANs analytically.

    Alice Wu

    Supervised by: François Soumis

    Polytechnique Montréal

    Combining AI and OR to optimize monthly airline crew schedules. Our recent work focuses on the development of two new algorithms, Improved Primal Simplex (IPS) and Integral Simplex Using Decomposition (ISUD), which exploit a priori information about the expected solutions to reduce the number of variables and constraints handled simultaneously. Currently, this information comes from rules provided by planners. The research objective is to develop a system that uses artificial intelligence (AI) to estimate the probability that the variable linking two rotations belongs to the solution of a monthly airline crew scheduling problem. Learning will be performed on historical data spanning several months, several aircraft types and several airlines. The probabilities must be estimated from the characteristics of the rotations, not from their names, since a rotation does not recur from one airline to another or from one month to the next; the relevant features will have to be identified. Research on learning will be needed to take advantage of the problem's constraints, for instance the constraints between the personnel finishing rotations and those starting subsequent ones. The learning will be validated by feeding the optimizers with the estimated information and observing the quality of the solutions obtained and the computation times. Further research in the optimizers will be needed to make the best use of this new information.

    Doctoral excellence scholarships

    Chun Cheng

    Supervised by: Louis-Martin Rousseau

    Polytechnique Montréal

    Our project addresses uncertainty in drone routing for disaster response and relief operations. To tackle the uncertainties arising from disaster scenarios, such as uncertain locations and quantities of demand for relief supplies, we use a data-driven robust optimization (RO) method. This technique protects decision makers against parameter ambiguity and stochastic uncertainty by means of uncertainty sets. It is therefore essential to set proper parameters for the uncertainty set: a small set cannot accurately capture possible risks, while a larger one may lead to overly conservative solutions. To address this problem, we use machine learning (ML) techniques to extract information from historical data and real-time observations, and create the parameters with ML algorithms. After calibrating the uncertainty set, we will determine appropriate models for the problem by considering various theories in RO, such as static RO and multi-stage adjustable RO. These approaches will be measured against other applicable approaches such as stochastic programming.
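
    A minimal sketch of the calibration idea (the synthetic demand data and the 95% coverage level are our assumptions, not the project's): build a box uncertainty set for relief demand from historical scenarios.

    ```python
    import numpy as np

    # Hypothetical historical demand observations (rows = disaster scenarios,
    # columns = demand sites); real data would come from past operations.
    rng = np.random.default_rng(0)
    demand_history = rng.lognormal(mean=3.0, sigma=0.4, size=(500, 4))

    # Data-driven box uncertainty set: centered at the sample mean, with
    # half-widths set to an empirical quantile of the absolute deviations so
    # that the set covers ~95% of the historical scenarios per site.
    center = demand_history.mean(axis=0)
    half_width = np.quantile(np.abs(demand_history - center), 0.95, axis=0)

    lower, upper = center - half_width, center + half_width
    print("per-site demand intervals:", list(zip(lower.round(1), upper.round(1))))
    ```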

    Dominique Godin

    Supervised by: Jean-François Arguin

    Université de Montréal

    The objective of this research project is to develop and apply machine learning techniques to greatly improve electron identification in the ATLAS detector at the LHC, the largest particle accelerator ever built and one of the most ambitious scientific projects of all time. To carry out the ATLAS program, each particle must be identified and measured; particles are created there at a rate of 40 billion per second and generate an astronomical flow of data. Among them, electrons are of very great importance, but they are also exceedingly rare, representing only a tiny fraction of the total. Given the size and complexity of the available data, identifying particles as rare as electrons is an ideal application ground for machine learning methods. Current electron identification algorithms are very simple and make no use of these methods, so a breakthrough in this area would be a world first and could eventually pave the way for major discoveries in particle physics.

    Charley Gros

    Supervised by: Julien Cohen-Adad

    Polytechnique Montréal

    Multiple sclerosis (MS) is a disease with a high prevalence in Canada that leads to major sensory and motor problems. It affects neuronal signal transmission in both the brain and the spinal cord, creating lesions that are observable on images acquired with an MRI scanner. The count and volume of lesions on a patient's MRI scan are crucial indicators of disease status, commonly used by doctors for diagnosis, prognosis and therapeutic drug trials. However, detecting lesions is very challenging and time-consuming for radiologists, due to the high variability of their size and shape. This project aims at developing a new, automatic and fast method for MS lesion detection on spinal cord MRI data, based on newly developed machine learning algorithms. The new algorithm's performance will be tested on a large dataset involving patients from different hospitals around the world. Once optimized, the algorithm will be freely available as part of an open-source software package already widely used for spinal cord MRI processing and analysis. A fundamental goal of this project is the integration of this algorithm in hospitals to help radiologists in their daily work.

    Thomas Thiery

    Supervised by: Karim Jerbi

    Université de Montréal

    When we are walking through a crowd, or playing a sport, our brain continuously makes decisions about directions to go, obstacles to avoid and information to pay attention to. Fuelled by the successful combination of quantitative modeling and neural recordings in nonhuman primates, research into the temporal dynamics of decision-making has brought the study of decision-making to the fore within neuroscience and psychology, and has exemplified the benefits of convergent mathematical and biological approaches to understanding brain function. However, studies have yet to uncover the complex dynamics of the large-scale neural networks involved in dynamic decision-making in humans. The present research aims to use advanced data analytics to identify the neural features involved in tracking the state of sensory evidence and confirming the commitment to a choice during a dynamic decision-making task. To this end, we will use cutting-edge electrophysiological brain imaging (magnetoencephalography, MEG), combined with multivariate machine learning algorithms. This project will, for the first time, shed light on the whole-brain large-scale dynamics involved in dynamic decision-making, thus providing empirical evidence that can be generalized across subjects to test and refine computational models and neuroscientific accounts of decision-making. By providing a quantitative link between the behavioral and neural dynamics subserving how decisions are continuously formed in the brain, this project will help expose mechanisms that are likely to figure prominently in human cognition, in health and disease. Moreover, this research may provide neurobiologically inspired contributions to machine learning algorithms that implement computationally efficient gating functions capable of making decisions in a dynamically changing environment. In addition to advancing our knowledge of the way human brains come to a decision, we also foresee long-term health implications for disorders such as Parkinson's disease.

    Postdoc-entrepreneur program

    Asad Lesani

    Supervised by: Luis Miranda-Moreno

    McGill University

    Company: Bluecity.ai

    The goal of this project is to build a prototype capable of generating key metrics for intelligent transportation systems. The proposed solution will use 3D LiDAR technology to measure the distance of objects (stationary and moving), count them, measure their speed and identify their class in real time. These data will be available locally or through a cloud platform. The metrics can be used to optimize traffic signals in real time, carry out safety analyses and enable better management and planning of the transportation network.

    Marc-André Renaud

    Supervised by: Louis-Martin Rousseau

    Polytechnique Montréal

    Company: Gray Oncology

    The objective of this project is to add features and improve the performance of the current optimization models of the Gray platform. Developed during the candidate's doctoral thesis, the Gray platform lowers the barrier to entry for conducting research, even basic research, on cancer treatment planning. The goal is to offer a competitive product on the market that will leverage deep learning to support adaptive radiotherapy treatment planning.

    Postdoctoral research funding

    Winter

    Behrouz Babaki

    Supervised by: Gilles Pesant

    Polytechnique Montréal

    To turn the ever-increasing amounts of data into social and economic value, two tasks need to be performed: 1) extracting knowledge from the data, and 2) incorporating this knowledge into the operations that drive society. The machine learning community addresses the first task by extracting knowledge from data and capturing it in 'learned models'. The second task is studied by the operations research community under the label of 'optimization'. However, these techniques have been developed almost independently, which makes it less straightforward to integrate them and turn the knowledge obtained from a learned model into actionable decisions. In this project, we exploit the fundamental similarities between the two tasks to develop an integrated system that performs both together. We apply our system to problems in business and finance and demonstrate how this approach can help players in these sectors use their data to improve their operations.
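
    A minimal predict-then-optimize sketch of this integration (the demand model, the numbers and the toy LP are illustrative assumptions, not the project's system): a learned model forecasts demand, and the forecast becomes a coefficient of an optimization problem.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from scipy.optimize import linprog

    # Learn a demand model from hypothetical features (e.g., price, season).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    demand = 50 + X @ np.array([5.0, -3.0, 2.0]) + rng.normal(scale=2.0, size=200)
    model = LinearRegression().fit(X, demand)
    d_hat = model.predict(np.array([[0.5, -1.0, 0.2]]))[0]  # tomorrow's forecast

    # Embed the forecast in a production-planning LP: maximize profit over two
    # products subject to shared capacity and the predicted demand cap on
    # product 1 (linprog minimizes, hence the negated objective).
    profit = np.array([4.0, 3.0])
    res = linprog(-profit,
                  A_ub=[[1.0, 1.0],    # capacity: x1 + x2 <= 100
                        [1.0, 0.0]],   # demand cap: x1 <= d_hat
                  b_ub=[100.0, d_hat],
                  bounds=[(0, None), (0, None)])
    print("plan:", res.x, "profit:", -res.fun)
    ```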

    Maxime Laborde

    Supervised by: Adam Oberman

    McGill University

    This research focuses on using mathematical tools to accelerate the training of Deep Neural Networks (DNNs). DNNs are a powerful tool in Artificial Intelligence, behind applications in machine translation, image recognition, speech recognition and other areas. However, training DNNs requires huge computational resources, which is costly both financially and in the human effort required to implement them. This research will use advanced mathematical tools to reduce the time required to train DNNs.
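
    As one classical example of such a mathematical tool (our illustration; the project's actual methods may differ), Nesterov's accelerated gradient method uses a momentum "lookahead" to converge faster than plain gradient descent on ill-conditioned problems:

    ```python
    import numpy as np

    def nesterov(grad, x0, lr=0.05, momentum=0.9, steps=200):
        """Nesterov accelerated gradient descent on a differentiable objective."""
        x = np.array(x0, dtype=float)
        v = np.zeros_like(x)
        for _ in range(steps):
            lookahead = x + momentum * v      # gradient evaluated at lookahead
            v = momentum * v - lr * grad(lookahead)
            x = x + v
        return x

    # Toy ill-conditioned quadratic f(x) = 0.5 x^T A x with gradient A x;
    # momentum noticeably speeds up convergence here.
    A = np.diag([1.0, 10.0])
    print(nesterov(lambda x: A @ x, [5.0, 5.0]))  # approaches the origin
    ```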

    Tien Mai

    Supervised by: Teodor Gabriel Crainic

    Université de Montréal

    This project deals with the planning of intermodal rail transportation, integrating methodologies from operations research and machine learning in a new and innovative way. Intermodal container freight transportation is the backbone of international trade and supports a large part of Canadian and North American imports and exports. Canada has one of the largest rail networks in the world, and Canadian railway companies are both network and terminal operators. They face many large-scale optimization problems that are complex because of their sheer size and the uncertainty that affects planning and operations on a continuous basis. The project focuses on a tactical network load and block planning problem that involves decisions related to blocking and railcar fleet management. Assuming that the train schedule is given, the problem entails three consolidation processes: assignment of containers to railcars, of railcars to blocks and of blocks to trains. The project will be dedicated to designing a service network design model and an associated solution method that makes it possible to solve realistic, large-scale instances.

    Abbas Mehrabian

    Supervised by: Luc Devroye

    McGill University

    When designing a machine learning algorithm, it is crucial for the designer to understand the input data to which the algorithm will be applied. It is well known that real-world data for any task has a lot of structure, and exploiting this structure allows for faster learning and more accurate prediction. However, understanding this structure is a highly nontrivial task, given the high dimension of the data. In this project we propose to develop a mathematical framework for learning the structure hidden in the data, through the lens of probability theory. Assuming the data is generated by some stochastic process, we would like to infer its distributional properties. A natural question is then which distributions are harder to learn and which are easier. The aim of this project is to answer this question from statistical and computational perspectives, at least for a variety of commonly used classes of distributions, such as mixture models and graphical models.
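
    For instance, the simplest version of this learning task, fitting a two-component Gaussian mixture to synthetic one-dimensional data (the data and component count are our assumptions), can be sketched as:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Sample from a hidden two-component mixture; the learner must recover the
    # component parameters from the unlabeled data alone.
    rng = np.random.default_rng(2)
    data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                           rng.normal(3.0, 1.0, 700)]).reshape(-1, 1)

    gm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print("means:", gm.means_.ravel(), "weights:", gm.weights_.round(2))
    ```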

    Patrick Munroe

    Supervised by: François Soumis

    Polytechnique Montréal

    Real-time management of air cargo

    The medium-term project is the development of a cargo management system for airlines, starting with Air Canada. The system will handle strategic planning, tactical planning and real-time operations. The strategic level evaluates long-term scenarios concerning the organization of the network, the markets to develop and the alliances to conclude. The tactical level optimizes the choice of itineraries between each pair of cities for a typical week of a season. During operations, sales agents will be able to obtain online the best itinerary for routing a new order, along with its cost. At each decision level, demand must be estimated for the horizon under consideration, and the routing of that demand must be optimized through a transportation network comprising all-cargo aircraft, available space in the holds of passenger flights, and subcontracts with other air and road carriers. The research will focus on developing new methods for demand estimation and for routing optimization in a large network.

    Maria Isabel Restrepo Ruiz

    Supervised by: Nadia Lahrichi

    Polytechnique Montréal

    The main objective of using optimization approaches for demand and supply management in home healthcare is to match supply and demand by influencing patients' and caregivers' choices of service time slots and working shifts. Our aim with this project is to develop a decision support tool for demand and supply management in home healthcare. Specifically, we will implement stochastic models to forecast future demand and to predict caregiver absenteeism. Then, we will design and develop choice models to capture patient and caregiver choice behavior. These models will predict the probability of choosing a particular alternative from an offered set (e.g. visit time slots, working shifts) given historical choice data about an individual or a segment of similar individuals. These models will be embedded into an optimization approach that will compute a time slotting/scheduling plan or a pricing strategy to optimally balance the allocation of cost-effective schedules to caregivers and the improvement of service quality.
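
    A standard choice model for this prediction step is the multinomial logit; a minimal sketch (the utility values are hypothetical, standing in for coefficients estimated from historical choice data):

    ```python
    import numpy as np

    def mnl_probabilities(utilities):
        """Multinomial logit choice probabilities over an offered set."""
        expu = np.exp(utilities - utilities.max())  # numerically stable softmax
        return expu / expu.sum()

    # Deterministic utilities of three offered visit time slots for a patient.
    utilities = np.array([1.2, 0.4, -0.3])
    print(mnl_probabilities(utilities))  # probability each slot is chosen
    ```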

    Anne-Lise Saive

    Supervised by: Karim Jerbi

    Université de Montréal

    Every day, we experience thousands of situations, but we only remember a few of them. Episodic memory is the only memory system that allows people to consciously re-experience past events, and it is the most sensitive to age and neurodegenerative diseases. It is thus critical to better understand how to enhance learning and memory in both healthy and clinical populations. Emotions are known to robustly strengthen the formation of long-term memories. Characterizing the influence of positive emotions (joy, happiness) on memory could be pivotal in improving memory therapies, yet the underlying brain mechanisms are still surprisingly poorly understood. In this project, we will use a fully data-driven approach to identify the key neuronal processes strengthened by positive emotions that distinguish events we will durably remember from events we will forget. We will combine, for the first time, high spatial and temporal resolution brain imaging techniques and state-of-the-art machine-learning algorithms. This will be achieved by assessing the ability of multidimensional (across space, time and frequency) arrays of brain data to predict future memory accuracy.
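
    A minimal sketch of the decoding step (random synthetic features stand in for real MEG data here, so the score would hover near chance; the feature layout is our assumption):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical trials x features matrix, where features are flattened
    # (sensor, time, frequency) values, and a label per trial:
    # 1 = later remembered, 0 = later forgotten.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 500))
    y = rng.integers(0, 2, size=120)

    # Cross-validated decoding accuracy; scores reliably above chance would
    # indicate that the brain data predict future memory outcome.
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
    print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
    ```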

    Rabih Salhab

    Supervised by: Georges Zaccour

    HEC Montréal

    Ride-sharing services such as Uber, Lyft or Didi Chuxing match a group of drivers providing rides with customers through an online ride-sharing platform. This business model faces a number of fundamental challenges. Indeed, the drivers, considered independent contractors, choose the areas they wish to serve, whether they accept or reject rides, and when they start and stop working. With no direct control over the drivers, the ride-sharing platform can only use incentives and select the information it provides to drivers and customers in order to improve the quality of service and balance supply and demand. This project aims to develop a model that anticipates how drivers respond to the provided information, which is a combination of request statistics, prices at various locations and times, and an estimate of the state of the road network. Moreover, it intends to generate location- and time-dependent pricing schemes and optimal information filters in order to optimize the efficiency of the system. For example, the filters control the amount of information released to drivers about requests in order to balance supply and demand and prevent drivers from deserting some areas.

    Kristen Schell

    Supervised by: Miguel Anjos

    Polytechnique Montréal

    Hydro-Québec is geographically well positioned to make significant profits in neighboring electricity markets. Facing political mandates to retire coal and nuclear power plants, the markets of Ontario, New York and New England are under increasing stress to provide stable, baseload electricity production. We will utilize the vast historical data from these markets to model their future evolution. Using the insights obtained from this analysis, we will be able to determine optimal strategies for Hydro-Québec to maximize its profits through targeted investment decisions in market interconnections. The results will be generalizable to other provincial utilities in Canada and their participation in the relevant electricity markets.

    Jean-François Spinella

    Supervised by: Guy Sauvageau

    Université de Montréal

    Acute myeloid leukemia (AML) is the most common form of leukemia in adults. Despite advances in supportive care to treat therapy-related complications, the majority of AML patients will not exceed the two-year survival mark because of relapse. This dismal outcome reflects the sub-optimal treatment orientation of poorly understood subtypes of AML. To improve the treatment and outcome of patients, Dr. Guy Sauvageau and colleagues initiated the Leucegene project in 2009, which has become an internationally acknowledged leader in the genetic and biological characterization of AML. Exploiting the most innovative technologies, this program has already allowed the sequencing of 452 primary human AML specimens. While several types of genomic alterations have been explored in AML, some of them, such as modifications to chromosome structure, remain elusive despite their known importance in cancer. We are convinced that this is due to unsuitable analysis methods, and we propose here an innovative machine learning approach to efficiently identify these modifications. Tests will be carried out on our sequenced AML specimens. Ultimately, the method will be released to help the scientific community exploit its cancer data. From a biomedical point of view, it will allow for a better definition of AML subgroups, as well as an increase in the chances of identifying new markers for this disease. With the goal of accelerating the transfer of new knowledge from the laboratory to the bedside, this project will help ensure the correct classification and treatment of AML.

    Yu Zhang

    Supervised by: Pierre Bellec

    Université de Montréal

    Understanding the brain mechanisms of cognitive functions is the ultimate goal of neuroscience, and it also provides fundamental guidance for developing new techniques in artificial intelligence. Accumulating evidence in animals and humans suggests that functional dynamics reflect the non-stationary nature of cognitive processes. In this project, we aim to apply deep learning models to characterize the spatial and temporal dynamics of BOLD signals at rest and during cognitive tasks. To account for the temporal dependence of MRI signals, a convolutional recurrent neural network is first used to characterize the spatial and temporal dynamics of resting-state data, and then to map the dynamic somatotopic maps during movement of the tongue, hand and foot. The model is further adjusted for classification of functional dynamics among multiple task conditions. The derived characteristic functional dynamics, including sequential temporal response functions and corresponding activation patterns, reveal the dynamic process of human cognitive function and provide essential guidance for brain simulation. Furthermore, our proposed method could also be used in clinical applications, for instance searching for temporal and spatial biomarkers of Alzheimer's disease and evaluating the treatment effects of precision medicine.

    Summer

    Quentin Cappart

    Supervised by: Louis-Martin Rousseau

    Polytechnique Montréal

    Combinatorial optimization plays a prominent role in today's society. Whether in logistics, transportation or financial management, all of these fields face problems for which the best possible solution is sought. However, a large number of very complex problems remain out of reach of current optimization methods, which is why improving these techniques is a crucial topic. Among them, decision diagrams appear to have a promising future. A decision diagram is a structure that compactly represents a problem while preserving its characteristics. However, its efficiency depends heavily on the variable ordering used during construction. The objective of this project is to use recent machine learning methods to order the variables when building a decision diagram. The contributions of this project will enable the resolution of combinatorial problems that are more complex and larger than what state-of-the-art methods can handle. We will focus mainly on real-world problems related to transportation and logistics. This project will be carried out in partnership with Element-AI.

    Jonathan Binas

    Supervised by: Yoshua Bengio

    Université de Montréal

    Recent machine learning approaches have led to impressive demonstrations of machines solving a great variety of difficult tasks, which previously were thought to be restricted to humans. Applied to areas such as health care, environmental challenges, optimization of transport and logistics, or industrial processes, these advances will lead to improved living conditions and the creation of value. While being loosely inspired by biological neural systems, artificial neural networks starkly differ from their biological counterparts in almost every respect. In particular, brains can learn from very few examples, infer causal relationships, and seamlessly transfer skills to new tasks, whereas current machine learning models require enormous amounts of data to just master a single task. To overcome some of these limitations, we introduce new, brain-inspired models for learning and memory, which will allow for meaningful information to be extracted from data more efficiently. The resulting systems will lead to improved, more powerful machine learning systems, which can be applied in numerous contexts, including medical applications, automation, robotics, or forecasting.

    Marco Bonizzato

    Supervised by: Marina Martinez

    Université de Montréal

    A quarter of a million people are affected every year by spinal cord injury (SCI), which causes paraplegia. When the lesion is incomplete, some recovery can occur, and spinal cord stimulation can be applied to help people with SCI regain control of their paralyzed legs. In the last year, Prof. Martinez and I demonstrated a new neuroprosthetic concept whereby cortical stimulation is applied to improve walking. This novel strategy empowers the brain's own residual networks and increases voluntary control of leg movement, with long-lasting beneficial effects for recovery. “Fire together, wire together” is the established rule for neural repair. Here we propose to combine, for the first time, brain and spinal stimulation into a single combined neuroprosthesis. This approach is compelling, but it is complicated by the overwhelming number of stimulation parameters that need to be characterized. We propose to solve this problem with machine learning: the first intelligent neuroprosthesis will monitor changes in muscular activity to explore and learn an optimal set of stimulation parameters. Our results can be rapidly translated to clinical tests.

    Elie Bou Assi

    Supervised by: Dang K. Nguyen

    Université de Montréal

    Epilepsy is a chronic neurological condition that affects as many as 1 in every 100 Canadians. While the first line of treatment consists of long-term drug therapy, more than a third of patients suffer from seizures that are resistant to antiepileptic drugs. Due to their unpredictable nature, uncontrolled seizures represent a major personal handicap and a source of worry for patients. In addition, persistent seizures constitute a considerable public health burden due to the high use of health care resources, the high number of disability days or unemployment, and low annual income. Some of the difficulties and challenges faced by drug-refractory patients can be overcome by implementing algorithms able to anticipate seizures. With accurate seizure forecasting, one could improve refractory epilepsy management, improving social integration, productivity and quality of life. Our main objective is the development of a real-time seizure prediction system, based on deep learning, intended to warn patients or caretakers of an oncoming seizure and recommend advisory measures.

    Jasmin Coulombe-Huntington

    Supervised by: Michael Tyers

    Université de Montréal

    Drug combinations can simultaneously target redundant biological pathways and thus offer unique advantages for disease treatment. By growing human cancer cells, each with a specific gene deletion, in the presence of a drug, we identified gene deletions which make cells more sensitive or more resistant to the growth-inhibition effects of >230 different drugs. In this proposal, I outline a plan to develop software tools to exploit this resource in order to precisely characterize drug mechanisms and to predict useful drug combinations. Tumor growth relies on overactive biomass and energy production, and I found that close to 80% of the drugs we screened altered the sensitivity of cells to the deletion of metabolic genes. I will use a genome-scale mathematical model of cell metabolism to attribute these effects to the lowered activity of other metabolic genes, those whose activities we predict are directly affected by the drug. After modelling the effects of each drug on cell metabolism, we will simulate the effects of drug combinations to identify pairs which effectively block the generation of small molecules important to tumor growth. Using available data on the effectiveness of drug combinations, I will train a machine learning algorithm to use similar gene deletion data as well as drug molecular similarities to predict useful drug combinations for the treatment of cancer and potentially other diseases. I will also attempt to predict the direct molecular targets of each drug by modelling molecular signalling in cells, leveraging known signalling pathways, molecular interaction networks and pairs of gene deletions sensitive or resistant to similar sets of drugs.

    Pouria Dasmeh

    Supervised by: Adrian Serohijos

    Université de Montréal

    The rise of antibiotic resistance has put antimicrobials, once believed to be miracles of modern medicine, into jeopardy. The current death toll of antimicrobial resistance (AMR) is ~800,000 per year (i.e., ~100 per hour) and is expected to rise to ~16 million in 2050. In Canada alone, the financial burden of antibiotic resistance is ~$200 million annually. A key requirement in our battle against antibiotic resistance is the ability to predict the growth rate of bacteria at different concentrations of antibiotics. Recently, the response of bacterial strains to antibiotics was measured for all possible mutations in important enzymes that confer resistance to beta-lactam antibiotics (e.g., penicillins, ampicillins, etc.). In this project, I will employ the power of machine learning to develop predictive models of resistance at different antibiotic dosages from the available large-scale datasets. This approach would have immediate impacts on the design of antibiotic dosing regimens that prevent or delay the onset of resistance. In this integration of machine learning with biochemistry and molecular medicine, we will explore the potential of data science to aid decision-making in medicine.

    Benoit Delcroix

    Supervised by: Michel Bernier

    Polytechnique Montréal

    A major challenge in the building sector is the absence of continuous systems for monitoring performance and assessing the gaps between observed and desired performance. Non-optimal operation of Heating, Ventilation and Air Conditioning (HVAC) systems leads to energy losses and reduced occupant comfort. The building sector accounts for roughly one third of energy consumption in Quebec and Canada, so efficiency measures in this sector have major positive impacts. The idea of this project is to use deep learning methods to exploit the large datasets generated by HVAC systems. The end goal is to automate fault detection and diagnosis and to optimize equipment operation. The benefits include improved energy management and better consideration of occupant comfort. By the end of this project, deep-learning-based detection, diagnosis and control tools will have been developed and tested for deployment in real buildings.

    Golnoosh Farnadi

    Supervised by: Michel Gendreau

    Polytechnique Montréal

    The increasing use of algorithmic decision-making in domains that affect people's lives has raised concerns about possible biases and discrimination that such systems might introduce. Recent concerns about algorithmic discrimination have motivated the development of fairness-aware mechanisms in the machine learning (ML) community and the operations research (OR) community, independently. While in fairness-aware ML the focus is usually on ensuring that the inferences and predictions produced by a learned model are fair, the OR community has developed methods to ensure fairness in the solutions of an optimization problem. In this project, I plan to build on the complementary strengths of fairness methods in ML and OR to address these shortcomings in a fair data-driven decision-making system. I will apply this work to real-world problems in the areas of personalized education, employment hiring (business), social well-being (health), and network design (transportation). The advantages of my proposed system compared to existing work are that it: 1) incorporates domain knowledge with data-driven probabilistic models, 2) detects and describes complex discriminative patterns, 3) returns a fair decision/policy, and 4) breaks negative/positive feedback loops.

    Kuldeep Kumar

    Supervised by: Michel Gendreau

    Polytechnique Montréal

    The increasing use of algorithmic decision-making in domains that affect people's lives has raised concerns about possible biases and discrimination that such systems might introduce. Recent concerns about algorithmic discrimination have motivated the development of fairness-aware mechanisms in the machine learning (ML) community and the operations research (OR) community, independently. While in fairness-aware ML the focus is usually on ensuring that the inferences and predictions produced by a learned model are fair, the OR community has developed methods to ensure fairness in the solutions of an optimization problem. In this project, I plan to build on the complementary strengths of fairness methods in ML and OR to address these shortcomings in a fair data-driven decision-making system. I will apply this work to real-world problems in the areas of personalized education, employment hiring (business), social well-being (health), and network design (transportation). The advantages of my proposed system compared to existing work are that it: 1) incorporates domain knowledge with data-driven probabilistic models, 2) detects and describes complex discriminative patterns, 3) returns a fair decision/policy, and 4) breaks negative/positive feedback loops.

    Elizaveta Kuznetsova

    Supervised by: Miguel Anjos

    Polytechnique Montréal

    The cumulative solar and wind power capacity integrated mainly into low- and medium-voltage grids in Canada represented 9% of total available power capacity in 2015, and is expected to more than double by 2040. This reality will create not only opportunities for sustainable energy production, but also challenges for the system operator due to the uncertain power fluctuations of multiple prosumers (customers who can alternately behave as energy consumers or producers). This project addresses the question of how to involve prosumers in the energy management process for the provision of ancillary services in the grid (e.g. voltage control) while mitigating unsuitable emerging effects. The idea is to consider a three-layer optimization problem corresponding to different voltage levels (high, medium and low). Grid incentives will be optimized at the high-voltage level, while the lower levels will optimize the dispatch among grid prosumers to maximize their involvement. An agent-based modelling framework will provide a backbone for this multi-level optimization, enable bi-directional information flows, and make it possible to handle the challenges of high data volume and complexity.

    Tarek Lajnef

    Supervised by: Miguel Anjos

    Polytechnique Montréal

    The cumulative solar and wind power capacity integrated mainly into low- and medium-voltage grids in Canada represented 9% of total available power capacity in 2015, and is expected to more than double by 2040. This reality will create not only opportunities for sustainable energy production, but also challenges for the system operator due to the uncertain power fluctuations of multiple prosumers (customers who can alternately behave as energy consumers or producers). This project addresses the question of how to involve prosumers in the energy management process for the provision of ancillary services in the grid (e.g. voltage control) while mitigating unsuitable emerging effects. The idea is to consider a three-layer optimization problem corresponding to different voltage levels (high, medium and low). Grid incentives will be optimized at the high-voltage level, while the lower levels will optimize the dispatch among grid prosumers to maximize their involvement. An agent-based modelling framework will provide a backbone for this multi-level optimization, enable bi-directional information flows, and make it possible to handle the challenges of high data volume and complexity.

    Neda Navidi

    Supervised by: Nicolas Saunier

    Polytechnique Montréal

    Learning driving behavior from smartphone location and motion sensors

    Monitoring and tracking vehicles and driving behavior are of great interest to better assess safety and understand its relationship with potential factors related to the infrastructure, vehicles and users. This has been implemented in recent years by car insurers to better assess their customers' crash risk and offer usage-based premiums. Driver monitoring and analysis, or driver behavior profiling, is the process of automatically collecting driving data (e.g., location, speed, acceleration) and predicting crash risk. These systems are mainly based on the Global Positioning System (GPS), which suffers from accuracy issues, e.g. in urban canyons, and is insufficient to detect normal and risky driving events like steering and braking. To address this problem, researchers have proposed integrating GPS, an Inertial Navigation System (INS) with motion sensors, and map matching (MM) in a single hybrid system. The INS is fused with GPS and used during signal outages to provide continuous positioning (dead reckoning). Map matching is the process of estimating a user's position on a road segment, which provides more contextual information like road geometry and conditions, the historical risk of the segment and other drivers' behaviour. The objective of this work is to improve the understanding of driver behaviour and crash risk by integrating location and motion data, driving events and road attributes using different machine learning algorithms. The specific objectives are the following: 1) to detect risky driving events, namely hard acceleration/braking, compliance with signalization (e.g. speed limits), sharp steering, tailgating, improper passing and weaving, from location and motion data using machine learning (ML), as sketched below; 2) to apply map-matching algorithms to extract road-related attributes; 3) to cluster driver behaviour based on the time series of location and motion data, detected driving events and road-related attributes.
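
    As a toy illustration of the event-detection objective (the threshold, sampling rate and injected event are our assumptions, not the project's data):

    ```python
    import numpy as np

    # Hypothetical smartphone accelerometer trace along the driving axis
    # (m/s^2), sampled at 10 Hz; negative values indicate deceleration.
    rng = np.random.default_rng(4)
    accel = rng.normal(0.0, 0.5, size=600)
    accel[250:255] = -4.5                    # injected hard-braking episode

    HARD_BRAKE = -3.0                        # assumed threshold, tunable per study
    events = np.flatnonzero(accel < HARD_BRAKE)
    print("hard-braking samples at indices:", events)
    ```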

    Nurit Oliker

    Supervised by: Bernard Gendron

    Université de Montréal

    We study the context of a transportation network manager who must make decisions about the infrastructure, assets and resources to deploy in order to achieve its objectives. The network manager has to take into account that there are several classes of users, most of whom pursue their own objectives within the rules stated by the manager, while others have objectives antagonistic to those of the manager. Our goal is to develop a methodology to help the transportation network manager. The application that motivates this research project is the transportation network design problem faced by a vehicle inspection agency that wants to inspect a maximum number of vehicles on a given territory under a limited budget. In this application, it is important to take into account the fact that some users will react to the installation of new vehicle inspection stations by diverting from their usual path to avoid inspection. Other applications of interest include the design of transportation networks that are resilient to major accidents and terrorist attacks. In this context, the network manager must anticipate potential threats posed by hostile users.

    Camilo Ortiz Astorquiza

    Supervised by: Emma Frejinger

    Université de Montréal

    The railway industry represents one of the most important means of freight transportation; in Canada alone, more than 900,000 tons of goods are moved every day, and one of the major companies in the sector is Canadian National Railways (CN). An important component of its overall structure is locomotive fleet management. The high cost of each locomotive and the large number of locomotives required to satisfy train schedules make locomotive planning highly valuable, with environmental and macroeconomic effects of great importance. Although several variants of locomotive planning problems have been studied before, there is still a huge gap between the state of practice and the state of the art. Thus, we will first study an optimization model that is tailored to CN's requirements. Moreover, we will investigate the development of specialized solution methods that combine machine learning with operations research techniques to obtain optimal solutions within reasonable time. This will provide a tool for the partner company to better evaluate scenarios in locomotive planning and give value to the data, while representing an important scientific contribution to the optimization community.

    Musa Ozboyaci

    Supervised by: Sebastian Pechmann

    Université de Montréal

    Protein homeostasis describes the cell's capability to keep its proteins in their correct shape and function through a complex regulatory system that integrates protein synthesis, folding and degradation. How cells maintain protein homeostasis is a fundamental phenomenon, an understanding of which has direct implications for the prevention and treatment of severe human diseases such as Alzheimer's and Parkinson's. Protein quality control is regulated by specific enzymes called molecular chaperones that assist the (re)folding of proteins, thus managing a complex and varied proteome efficiently. Although the specificity of the interactions of these chaperones with their client proteins is known to be the key to the efficient allocation of protein quality control capacity, a significant yet unanswered question lies in rationalizing the principles of this specificity. This project aims to systematically define the principles of sequence specificity across the eukaryotic chaperone network through a combination of molecular modelling and machine learning methods. To this end, the peptide sequences that confer chaperone specificity will be identified systematically using a robust docking procedure accelerated by a Random Forest model. To capture the conditional interdependencies of the energetic contributions of the peptide residues binding to the chaperone receptor, probabilistic graphical models will be developed and deep learning methods will be applied to the large dataset obtained from the docking simulations. This project, through the unique and rich dataset we will construct and the sophisticated analyses we will apply, will not only unravel the sequence specificity of protein homeostasis interactions in health and disease, but also provide guidelines for how it can be re-engineered for rational therapeutic intervention.
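
    A minimal sketch of the surrogate idea behind Random-Forest-accelerated docking (the feature encoding, score convention and array shapes are placeholders): train a cheap regressor on past docking runs and use it to rank new peptides so that only the most promising ones are docked exactly.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training set: numeric residue descriptors per peptide and
    # docking scores from previous simulations (lower = stronger binding,
    # an assumed convention).
    rng = np.random.default_rng(5)
    peptide_features = rng.normal(size=(1000, 20))
    docking_scores = rng.normal(size=1000)

    surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
    surrogate.fit(peptide_features, docking_scores)

    candidates = rng.normal(size=(50, 20))
    ranked = np.argsort(surrogate.predict(candidates))   # best predicted first
    print("candidates to dock exactly:", ranked[:5])
    ```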

    Maximilian Puelma Touzel

    Supervised by: Guillaume Lajoie

    Université de Montréal

    Recurrent neural nets are neuroscience-inspired AI algorithms that are revolutionizing the machine learning of complex sequences. They help power a variety of widely used applications such as Google Translate and Apple's Siri. But they are also big, complicated models, and training them is a delicate process, up to now requiring much fine-tuning to keep the parameter adjustments from getting out of control. The human brain also faces this stability problem when it learns sequences, but it has a robust, working solution that we are only beginning to understand. Bringing together experts in neuroscience, applied math and artificial intelligence, we will adapt sophisticated methods for measuring stability from the mathematics of dynamical systems. We will develop learning algorithms that use this information to efficiently guide learning, and will employ them in a neuroscience study that compares artificial and brain solutions to learning complex task sequences. Our goal is to unify and extend our understanding of how natural and artificial recurrent neural nets learn complex sequences.
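
    One standard stability measure from dynamical systems is the largest Lyapunov exponent; a minimal sketch for a random tanh RNN (the network, gain and horizon are toy assumptions; a positive exponent signals unstable dynamics):

    ```python
    import numpy as np

    # Autonomous tanh RNN: h_{t+1} = tanh(W h_t). With a weight gain above 1,
    # the dynamics are typically chaotic and the exponent comes out positive.
    rng = np.random.default_rng(6)
    n = 100
    W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))

    h = rng.normal(size=n)
    v = rng.normal(size=n)
    v /= np.linalg.norm(v)

    log_growth, T = 0.0, 2000
    for _ in range(T):
        h = np.tanh(W @ h)
        J = (1.0 - h**2)[:, None] * W    # Jacobian of the update at the new h
        v = J @ v                        # propagate a perturbation direction
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                        # renormalize to avoid overflow

    print("largest Lyapunov exponent ~", log_growth / T)
    ```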

    Raphael Harry Frederico Ribeiro Kramer

    Supervised by: Guy Desaulniers

    Polytechnique Montréal

    Facility location is an important field of combinatorial optimization with applications to logistics and data mining. In facility location problems (FLPs), one seeks the location of some supply points and an assignment of customers to those supply points so as to optimize a certain measure of performance. In data mining, several FLPs can be used to model and solve clustering problems. The p-center problem (PCP) is an example of this type of problem, in which one seeks the location of p points (the centers) that minimizes the maximum dissimilarity between any customer and its closest center. This problem is extremely difficult in practice. In a recent article co-authored by the candidate, the most classical variant of the PCP (the vertex PCP) is solved by an iterative algorithm for problems containing up to a million data points within reasonable time limits, instances more than 200 times larger than those handled by previous algorithms. In this project we aim to extend some of the ideas used in that article to solve other classes of facility location problems on large datasets.
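
    As a worked illustration (the notation is ours, not the article's), the vertex p-center problem over a vertex set $V$ with pairwise dissimilarities $d_{ij}$ selects $p$ centers minimizing the worst-case distance from any customer to its closest center:

    ```latex
    \min_{S \subseteq V,\; |S| = p} \;\; \max_{i \in V} \; \min_{j \in S} \; d_{ij}
    ```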

    Joshua Stipancic

    Supervised by: Aurélie Labbe

    HEC Montréal

    Road traffic crashes are a serious concern. Typically, dangerous locations in the road network are identified based on historical crash data. However, using crashes is not ideal, as crash databases contain errors and omissions, and crashes are not perfect predictors of safety. Our earlier work demonstrates how mobile sensor data, such as GPS travel data collected from regular drivers, can substitute for crash data in the safety management process within Quebec City. However, advanced statistical models must be developed to convert the collected sensor data into predicted crash counts at sites throughout the network. This project proposes three advancements to the crash models developed in previous work. First, methods for imputing missing data will be proposed and explored, and the effect of these methods on the final predicted crash counts will be quantified. Second, techniques for expanding the analysis to an entire road network will be developed. Third, the developed models will be tested on additional datasets in Montreal and Toronto. The ability to predict levels of safety with mobile sensor data is a substantial contribution to the field of transportation.

    Funding of fundamental research projects

    Bram Adams, Polytechnique Montréal

    Team: Antoniol Giuliano, Jiang Zhen Ming & Sénécal Sylvain

    A Real-time, Data-driven Field Decision Framework for Large-scale Software Deployments

    As large e-commerce systems need to maximize their revenue while ensuring customer quality and minimizing IT costs, they constantly face major field decisions like “Would it be cost-effective for the company to deploy additional hardware resources for our premium users?” This project will build a real-time, data-driven field decision framework exploiting customer behaviour and quality-of-service models, release engineering and guided optimization search. It will benefit both the Canadian software industry and society by improving the quality of service experienced by Canadians.

    Jean-François Arguin, Université de Montréal

    Team: Tapp Alain, Golling Tobias, Ducu Otilia & Mochizuki Kazuya

    Machine learning for the analysis of the Large Hadron Collider Data at CERN

    The Large Hadron Collider (LHC) is one of the most ambitious experiments ever conducted. It collides protons together near the speed of light to reproduce the conditions of the Universe right after the Big Bang. It possesses all the features of Big Data: 10^16 collisions are produced each year, each producing 1,000 particles, and each of these particles leaves a complex signature in the 100 million electronic channels of the ATLAS detector. This project will initiate a collaboration between data scientists and physicists to develop the application of machine learning to the analysis of LHC data.

    Olivier Bahn, HEC Montréal

    Team: Caines Peter, Delage Erick, Malhamé Roland & Mousseau Normand

    Data valorization and robust optimization to guide the energy transition toward smart grids with a high renewable share (VORTEX)

    A multiscale modelling approach, consisting of a family of hierarchical models operating at increasing time scales (day / week-to-month / thirty-year horizon) together with suitable mathematical tools (repeated mean-field games, machine learning, convex and robust optimization), is proposed as a basis for the informed management of the transition toward smart electricity grids with a high renewable share. In particular, our project will provide tools to help manage energy demand in a regional context.

    Tolga Cenesizoglu, HEC Montréal

    Team: Grass Gunnar & Jena Sanjay

    Real-time Optimal Order Placement Strategies and Limit Order Trading Activity

    Our primary objective is to identify how institutional investors can reduce their risk and trading costs by optimizing when and how to execute their trades. Limit order trading activity is an important state variable for this optimization problem in today’s financial markets where most liquidity is provided by limit orders. We thus plan to first analyze how risk and trading costs are affected by limit order trading activity using a novel, large-scale, ultra-high-frequency trading data set. We will then use our findings to guide us in modeling these effects and devising real-time optimal order placement strategies.

    Laurent Charlin, HEC Montréal

    Team: Jena Sanjay Dominik

    Exploiting ML/OR Synergies for Assortment Optimization and Recommender Systems

    We propose to exploit synergies between assortment optimization and recommender systems on the application level, and the interplay between machine learning and mathematical programming on the methodological level. Rank-based choice models, estimated in a purely data-driven manner, will introduce diversity into recommender systems, and supervised learning methods will improve the scalability and efficiency of assortment optimization in retail.

    Yoshua Bengio, Université de Montréal

    Team: Cardinal Héloïse, Carvalho Margarida & Lodi Andrea

    Data-driven Transplantation Science

    End-stage kidney disease is a severe condition with a rising incidence, currently affecting over 40,000 Canadians.

    The decision to accept or refuse an organ for transplantation is an important one, as the donor’s characteristics are strongly associated with the long-term survival of the transplanted kidney. In partnership with their health care provider, the transplant candidates need to answer two questions: (1) How long is the kidney from this specific donor expected to last for me? (2) If I refuse this specific donor, how much longer am I expected to wait before getting a better kidney?

    We propose to use deep learning to predict the success of a possible matching. The results will contribute to build a clinical decision support tool answering the two questions above and helping transplant physicians and candidates to make the best decision. In addition, the quality of the matching can be the input of optimization algorithms designed to improve social welfare of organ allocations.

    Michel Bernier, Polytechnique Montréal

    Team: Kummert Michaël & Bahn Olivier

    Development of a methodology for using big data from smart meters to model a building stock

    The data made available by the widespread rollout of smart meters represent a great opportunity to improve building-stock models and more general energy-flow models, but fundamental knowledge on this subject is still limited. This project aims to address this gap by developing a methodology for using big data from smart electricity meters to characterize and calibrate, notably through inverse modelling, building archetypes that can be integrated into the TIMES model.

    Julien Cohen-Adad, Polytechnique Montréal

    Team: Kadoury Samuel, Pal Chris, Bengio Yoshua, Romero Soriano & Guilbert François

    Transformative adversarial networks for medical imaging applications

    Following the concept of Generative Adversarial Networks (GANs), we propose to explore transformative adversarial training techniques whose goal is to transform medical imaging data to a target reference space as a way of normalizing it for image intensity, patient anatomy and the many other parameters associated with the variability inherent to medical images. This approach will be investigated both as a data normalization and as a data augmentation strategy, and will be tested on several multi-center clinical datasets for lesion segmentation and/or classification (diagnosis).

    Guillaume-Alexandre Bilodeau, Polytechnique Montréal

    Team: Aloise Daniel, Pesant Gilles, Saunier Nicolas & St-Aubin Paul

    Road user tracking and trajectory clustering for intelligent transportation systems

    While traffic cameras are a mainstay of traffic management centers, video data is still most commonly watched by traffic operators for traffic monitoring and incident management. There are still few applications of computer vision in ITS, apart from integrated sensors for specific data extraction such as road user (RU) counts. One of the most useful types of data to extract from video is the trajectory of all RUs, including cars, trucks, bicycles and pedestrians. Since traffic videos include many RUs, finding their individual trajectories is challenging. Our first objective is therefore to track all individual RUs. The second objective is to interpret the very large number of trajectories that can be obtained. This can be done by clustering trajectories, which provides the main motions in the traffic scene corresponding to RU activities and behaviors.

    François Bouffard, McGill University

    Team: Anjos Miguel & Waaub Jean-Philippe

    The Electricity Demand Response Potential of the Montreal Metropolitan Community: Assessment of Potential Impacts and Options

    This project will develop a clear understanding of the potential benefits and trade-offs of key stakeholders for deploying significant electric power demand response (DR) in the Montreal Metropolitan Community (MMC) area. It is motivated primarily by the desire of Hydro-Québec to increase its export potential, and at the same time by the need to assess DR deployment scenarios and their impacts on the people and businesses of the MMC. Data science is at the heart of this work, which will need to discover knowledge about electricity consumption in order to learn how to leverage and control its flexibility.

    Patrick Cossette, Université de Montréal

    Team: Bengio Yoshua, Laviolette François & Girard Simon

    Towards personalized medicine in the management of epilepsy: a machine learning approach in the interpretation of large-scale genomic data

    To date, more than 150 epilepsy genes have been identified, explaining around 35% of cases. However, conventional genomics methods have failed to explain the full spectrum of epilepsy heritability, as well as antiepileptic drug resistance. In particular, conventional studies lack the ability to capture the full complexity of the human genome, such as interactions between genomic variations (epistasis). In this project, we will investigate how machine learning algorithms can be used in the analysis of genomic data to detect multivariate patterns, by taking advantage of our large dataset of individual epilepsy genomes. In this multi-disciplinary project, neurologists, geneticists, bioinformaticians and computational scientists will join forces to use machine learning algorithms to detect genomic variant signatures in patients with pharmaco-resistant epilepsy. The ability to predict pharmaco-resistance will ultimately reduce the burden of the disease.

    Benoit Coulombe, Université de Montréal

    Team: Lavallée-Adam Mathieu, Gauthier Marie-Soleil, Gaspar Vanessa, Pelletier Alexander, Wong Nora & Christian Poitras

    A machine learning approach to decipher protein-protein interactions in human plasma

    Proteins circulating in the human bloodstream make very useful and accessible clinical biomarkers for disease diagnostics, prognostics and theranostics. Typically, to perform their functions, proteins interact with other molecules, including other proteins. These protein-protein interactions provide valuable insights into a protein's role and function in humans; they can also lead to the discovery of novel biomarkers for diseases in which the protein of interest is involved. However, identifying such interactions in human plasma is highly challenging: biochemical controls are inherently noisy, and the lack of proper controls makes the confidence assessment of these interactions very difficult. We therefore propose to develop a novel machine learning approach that will extract the relevant signal from noisy controls to confidently decipher the interactome of clinically relevant proteins circulating in the human bloodstream, with the ultimate goal of identifying novel biomarkers.

    Michel Denault, HEC Montréal

    Team: Côté Pascal & Orban Dominique

    Simulation and regression approaches in hydropower optimization

    We develop optimization algorithms based on dynamic programming with simulations and regression, essentially Q-learning algorithms. Our main application area is hydropower optimization, a stochastic control problem where optimal releases of water are sought at each point in time.
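
    A minimal fitted-Q-iteration sketch in this simulation-and-regression spirit (the toy reservoir dynamics, rewards and hyperparameters are all illustrative assumptions, not the project's actual model):

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    # Toy reservoir: state = water level, action = release in {0, 1, 2},
    # reward = revenue from released water minus a spill penalty.
    rng = np.random.default_rng(7)

    def step(level, release):
        inflow = rng.uniform(0.0, 1.5)                        # stochastic inflow
        raw = level - release + inflow
        reward = 1.0 * release - 2.0 * max(raw - 10.0, 0.0)   # spill penalized
        return float(np.clip(raw, 0.0, 10.0)), reward

    # Simulate transitions under a random operating policy.
    S, A, R, S2, level = [], [], [], [], 5.0
    for _ in range(2000):
        a = int(rng.integers(0, 3))
        nxt, r = step(level, a)
        S.append(level); A.append(a); R.append(r); S2.append(nxt)
        level = nxt
    S, A, R, S2 = map(np.array, (S, A, R, S2))

    # Fitted Q-iteration: regress Q(s, a) on Bellman targets and iterate.
    gamma, actions = 0.95, [0, 1, 2]
    X = np.column_stack([S, A])
    q = ExtraTreesRegressor(n_estimators=25, random_state=0).fit(X, R)
    for _ in range(10):
        q_next = np.column_stack(
            [q.predict(np.column_stack([S2, np.full_like(S2, a)])) for a in actions])
        q = ExtraTreesRegressor(n_estimators=25, random_state=0).fit(
            X, R + gamma * q_next.max(axis=1))

    best = max(actions, key=lambda a: q.predict([[5.0, a]])[0])
    print("greedy release at level 5:", best)
    ```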

    Michel Desmarais, Polytechnique Montréal

    Équipe : Charlin Laurent & Cheung Jackie C. K

    Matching individuals to review tasks based on topical expertise level

    The task of selecting an expert to review a paper is an instance of the general problem of matching a human to an assignment based on the quality of the expertise alignment between the two. State-of-the-art approaches generally model reviewers as a distribution over topics, or as a set of keywords. Yet two experts can have the same relative topic distribution and differ widely in their depth of understanding; a similar argument can be made for papers. The objective of this proposal is to enhance the assignment approach with the notions of (1) reviewer mastery of a topic and (2) paper topic sophistication. Means to assess each aspect are proposed, along with assignment approaches based on this additional information, as in the small example below.
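
    To make the limitation concrete, here is a small hypothetical example: two reviewers with identical topic distributions are indistinguishable to a distribution-only matcher, while a per-topic mastery score (any depth proxy would do) separates them.

    import numpy as np

    paper = np.array([0.7, 0.1, 0.1, 0.1])       # paper's topic mixture
    reviewer_a = np.array([0.6, 0.2, 0.1, 0.1])  # identical distributions...
    reviewer_b = np.array([0.6, 0.2, 0.1, 0.1])

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # A distribution-only matcher scores both reviewers identically.
    assert cosine(reviewer_a, paper) == cosine(reviewer_b, paper)

    # ...but hypothetical per-topic mastery levels break the tie.
    depth_a = np.array([5.0, 1.0, 1.0, 1.0])  # deep expert on the paper's main topic
    depth_b = np.array([1.0, 1.0, 1.0, 1.0])  # shallow generalist
    match = lambda dist, depth: cosine(dist, paper) * (depth @ paper)
    print(match(reviewer_a, depth_a), match(reviewer_b, depth_b))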

    Georges Dionne, HEC Montréal

    Équipe : Morales Manuel, d’Astous Philippe, Yergeau Gabriel, Rémillard Bruno & Shore Stephen H.

    Asymmetric Information Tests with Dynamic Machine Learning and Panel Data

    To our knowledge, the econometric estimation of dynamic panel data models with machine learning is not very developed and tests for the presence of asymmetric information in this environment are lacking. Most often, researchers assume the presence of asymmetric information and propose models (sometimes dynamic) to reduce its effects but do not test for residual asymmetric information in final models. Potential non-optimal pricing of financial products may still be present. Moreover, it is often assumed that asymmetric information is exogenous and related to unobservable agent characteristics (adverse selection) without considering agents’ dynamic behavior over time (moral hazard). Our goal is to use machine learning models to develop new tests of asymmetric information in large panel data sets where the dynamic behavior of agents is observed. Applications in credit risk, high frequency trading, bank securitization, and insurance will be provided.

    Marc Fredette, HEC Montréal

    Équipe : Charlin Laurent, Léger Pierre-Majorique, Sénécal Sylvain, Courtemanche François, Labonté-Lemoyne Élise & Karran Alexander

    Improving the prediction of the emotional and cognitive experience of users (UX) in interaction with technology using deep learning.

    The objective of this research project is to leverage new advances in artificial intelligence, and more specifically deep learning approaches, to improve the prediction of the emotional and cognitive experience of users (UX) interacting with technology. What users experience emotionally and cognitively when interacting with an interface is a key determinant of the success or failure of digital products and services. Traditionally, user experience has been assessed with post hoc explicit measures such as questionnaires. However, these measures cannot capture users' states while they interact with technology. Researchers are therefore turning to implicit neuroscience measures that capture users' states through psychophysiological inference. Deep learning has recently enabled significant progress in fields such as image recognition, and we expect it to do the same for psychophysiological inference by allowing the automatic modeling of complex feature sets.

    Geneviève Gauthier, HEC Montréal

    Équipe : Amaya Diego, Bégin Jean-François, Cabeda Antonio & Malette-Campeau

    Using high-frequency financial data to estimate complex financial models

    Market models that reproduce the complexity of the interactions between an underlying asset and its options require a degree of sophistication that makes their estimation very difficult. This research project proposes to use high-frequency option data to better measure and manage the various market risks.

    Michel Gendreau, Polytechnique Montréal

    Équipe : Potvin Jean-Yves, Aloise Daniel & Vidal Thibaut

    New approaches for modeling and solving home-delivery problems

    This project focuses on developing new approaches to better address home-delivery problems, which have grown considerably over the past decade with the widespread adoption of e-commerce. Part of the work will address the modeling of these problems themselves, notably the objectives pursued by shippers. The rest of the project will develop state-of-the-art heuristics and metaheuristics for efficiently solving large-scale instances.

    Bernard Gendron, Université de Montréal

    Équipe : Crainic Teodor Gabriel, Jena Sanjay Dominik & Lacoste-Julien Simon

    Optimization and machine learning for fleet management of autonomous electric shuttles

    Recently, a Canada-France team of 11 researchers led by Bernard Gendron (DIRO-CIRRELT, UdeM) has submitted an NSERC-ANR strategic project "Trustworthy, Safe and Smart EcoMobility-on-Demand", supported by private and public partners on both sides of the Atlantic: in Canada, GIRO and the City of Montreal; in France, Navya and the City of Valenciennes. The objective of this project is to develop optimization models and methods for planning and managing a fleet of autonomous electric shuttle vehicles. As a significant and valuable additional contribution to this large-scale project, we plan to study the impact of combining optimization and machine learning to improve the performance of the proposed models and methods.

    Julie Hussin, Université de Montréal

    Équipe : Gravel Simon, Romero Adriana & Bengio Yoshua

    Deep Learning Methods in Biomedical Research: from Genomics to Multi-Omics Approaches

    Deep learning approaches represent a promising avenue for important advances in biomedical science. Here, we propose to develop, implement and use deep learning techniques to combine genomic data with multiple types of biomedical information (e.g., other omics datasets, clinical information) to obtain a more complete and actionable picture of a patient's risk profile. In this project, we will address the important problem of missing data and incomplete datasets, evaluate the potential of these approaches for the prediction of relevant medical phenotypes in population and clinical samples, and develop integration strategies for large heterogeneous datasets. The efficient and integrated use of multi-omic data could improve disease risk and treatment outcome predictions in the context of precision medicine.

    Sébastien Jacquemont, Université de Montréal

    Équipe : Labbe Aurélie, Bellec Pierre, Catherine Schramm, Chakravarty Mallar & Michaud Jacques

    Modeling and predicting the effect of genetic variants on brain structure and function

    Neurodevelopmental disorders (NDs) represent a significant health burden. The genetic contribution to NDs is approximately 80%. Whole-genome testing is now routine in pediatrics, and mutations contributing significantly to neurodevelopmental disorders are identified in over 400 patients every year at Sainte-Justine Hospital. The impact of these mutations on cognition and on brain structure and function, however, remains mostly unknown. Mounting evidence suggests that genes sharing similar characteristics produce similar effects on cognitive and neural systems.

    Our goal: Develop models to understand the effects of mutations, genome-wide, on cognition, brain structure and connectivity.

    Models will be developed using large cohorts of individuals for whom genetic, cognitive and neuroimaging data were collected.

    Deliverable: Algorithms allowing clinicians to understand the contribution of mutations to the neurodevelopmental symptoms observed in their patients.

    Karim Jerbi, Université de Montréal

    Équipe : Hjelm Devon, Plis Sergey, Carrier Julie, Lina Jean-Marc, Gagnon Jean-François & Dr Pierre Bellec

    From data science to brain science: AI-powered investigation of the neuronal determinants of cognitive capacities in health, aging and dementia

    Artificial intelligence is revolutionizing science, technology and almost all aspects of our society. Learning algorithms that have shown astonishing performances in computer vision and speech recognition are also expected to lead to qualitative leaps in biological and biomedical sciences. In this multi-disciplinary research program, we propose to investigate the possibility of boosting information yield in basic and clinical neuroscience research by applying data-driven approaches, including shallow and deep learning, to electroencephalography (EEG) and magnetoencephalography (MEG) data in (a) healthy adults, and aging populations (b) with or (c) without dementia. The proposal brings together several scientists with expertise in a wide range of domains, ranging from data science, mathematics and engineering to neuroimaging, systems, cognitive and clinical neuroscience.

    Pierre L’Ecuyer, Université de Montréal

    Équipe : Devroye Luc & Lacoste-Julien Simon

    Monte Carlo and Quasi-Monte Carlo Methods for Optimization and Machine Learning

    The use of Monte Carlo methods (also known as stochastic simulation) has grown tremendously in the last few decades. They are now a central ingredient in many areas, including computational statistics, machine learning and operations research. Our aim in this project is to study Monte Carlo methods and improve their efficiency, with a focus on applications to statistical modeling with big data, machine learning and optimization. We are particularly interested in developing methods whose error converges at a faster rate than straightforward Monte Carlo. We also plan to release free software implementing these methods.
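
    As a toy illustration of the faster convergence rates being targeted (using SciPy's quasi-Monte Carlo module, not the project's own software), the sketch below compares plain Monte Carlo with scrambled Sobol' points on an integrand whose integral is known:

    import numpy as np
    from scipy.stats import qmc

    # Integrand on [0,1]^2 with known integral: (e - 1)^2.
    f = lambda x: np.exp(x).prod(axis=1)
    exact = (np.e - 1) ** 2

    n = 2 ** 12  # power of two, as Sobol' sequences prefer
    rng = np.random.default_rng(42)

    # Plain Monte Carlo: error shrinks like O(n^-1/2).
    mc = f(rng.uniform(size=(n, 2))).mean()

    # Randomized quasi-Monte Carlo: scrambled Sobol' points, typically
    # close to O(n^-1) error for smooth integrands such as this one.
    sobol = qmc.Sobol(d=2, scramble=True, seed=42)
    rqmc = f(sobol.random(n)).mean()

    print(f"MC error:   {abs(mc - exact):.2e}")
    print(f"RQMC error: {abs(rqmc - exact):.2e}")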

    Aurélie Labbe, HEC Montréal

    Équipe : Larocque Denis, Charlin Laurent & Miranda-Moreno

    Data analytics methods for travel time estimation in transportation engineering

    Travel time is considered one of the most important performance measures in urban mobility. It is used by both network operators and drivers as an indicator of quality of service or as a metric influencing travel decisions. This proposal tackles the issue of travel time prediction from several angles: i) data pre-processing (map-matching), ii) short-term travel time prediction and iii) long-term travel time prediction. These tasks will require the development of new statistical and machine learning approaches to adequately model GPS trajectory data and to quantify the prediction error.
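
    A minimal sketch of the kind of short-term prediction task involved, regressing travel time on simple trajectory features with a random forest; the features and data below are synthetic placeholders, not the project's datasets:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 2000

    # Hypothetical trip-level features a map-matched GPS feed might yield:
    # segment length (km), hour of day, mean segment speed over the last 15 min (km/h).
    X = np.column_stack([
        rng.uniform(0.2, 5.0, n),   # length_km
        rng.integers(0, 24, n),     # hour
        rng.uniform(10, 90, n),     # recent_speed
    ])
    # Synthetic ground-truth travel time in minutes, with rush-hour slowdown and noise.
    rush = np.isin(X[:, 1], [7, 8, 16, 17]).astype(float)
    y = 60 * X[:, 0] / X[:, 2] * (1 + 0.5 * rush) + rng.normal(0, 0.5, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"held-out MAE: {np.abs(model.predict(X_te) - y_te).mean():.2f} min")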

    Frédéric Leblond, Polytechnique Montréal

    Équipe : Trudel Dominique, Ménard Cynthia, Saad Fred, Jermyn Michael & Grosset Andrée-Anne

    Machine learning technology applied to the discovery of new vibrational spectroscopy biomarkers for the prognostication of intermediate-risk prostate cancer patients

    Prostate cancer is the most frequent cancer among Canadian men, with approximately 25,000 diagnoses per year. Men with high-risk and low-risk disease almost always experience predictable disease evolution, allowing optimal treatment selection. However, none of the existing clinical tests, imaging techniques or histopathology methods can predict the fate of men with intermediate-risk disease. This is the source of a very important unmet clinical need: while some of these patients remain free of disease for several years, in others cancer recurs rapidly after treatment. Using biopsy samples in tissue microarrays from 104 intermediate-risk prostate cancer patients with known outcomes, we will use a newly developed Raman microspectroscopy technique along with machine learning technology to develop inexpensive prognostic tests that determine the risk of recurrence, allowing clinicians to consider more aggressive treatments for patients at high risk.

    Éric Lécuyer, Université de Montréal

    Équipe : Blanchette Mathieu & Waldispühl Jérôme

    Developing a machine learning framework to dissect gene expression control in subcellular space

    Our multidisciplinary team will develop and use an array of machine learning approaches to study a fundamental but poorly understood process in molecular biology, the subcellular localization of messenger RNAs, whereby the transcripts of different human genes are transported to various regions of the cell prior to translation. The project will entail the development of new learning approaches (learning from both RNA sequence and structure data, phylogenetically related training examples, batch active learning) combined with new biotechnologies (large-scale assays of both natural and synthetic RNA sequences) to yield mechanistic insights into the "localization code" and help understand its role in health and disease.

    Sébastien Lemieux, Université de Montréal

    Équipe : Bengio Yoshua, Sauvageau Guy & Cohen Joseph Paul

    Deep learning for precision medicine by joint analysis of gene expression profiles measured through RNA-Seq and microarrays

    This project aims at developing domain adaptation techniques to enable the joint analysis of gene expression profiles datasets acquired using different technologies, such as RNA-Seq and microarrays. Doing so will leverage the large number of gene expression profiles publicly available, avoiding the typical problems and limitations caused by working with small datasets. More specifically, methods developed will be continuously applied to datasets available for Acute Myeloid Leukemia in which the team has extensive expertise.

    Andrea Lodi, Polytechnique Montréal

    Équipe : Bengio Yoshua, Charlin Laurent, Frejinger Emma & Lacoste-Julien Simon

    Machine Learning for (Discrete) Optimization

    The interaction between Machine Learning and Mathematical Optimization is currently one of the most popular topics at the intersection of Computer Science and Applied Mathematics. While the role of Continuous Optimization within Machine Learning is well known, and, on the applied side, it is rather easy to name areas in which data-driven Optimization boosted by / paired with Machine Learning algorithms can have a game-changing impact, the relationship and the interaction between Machine Learning and Discrete Optimization is largely unexplored. This project concerns one aspect of it, namely the use of modern Machine Learning techniques within / for Discrete Optimization.

    Alejandro Murua, Université de Montréal

    Équipe : Quintana Fernando & Quinlan José

    Gibbs-repulsion and determinantal processes for statistical learning

    Non-parametric Bayesian models are very popular for density estimation and clustering. However, they have a tendency to use too many mixture components because they rely on independent parameter priors. Repulsive process priors, such as determinantal point processes, solve this issue by putting higher mass on parameter configurations in which the mixture components are well separated. We propose the use of Gibbs-like repulsion processes that are locally determinantal, or adaptive determinantal processes, as priors for density estimation, clustering, and the modeling of temporal and/or spatial data.
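
    For background, the determinantal mechanism can be stated in the finite case as follows (standard notation, not notation specific to this project): given a positive semidefinite kernel matrix $L$ over candidate component locations, a configuration $S$ receives probability

    \[
        P(S) \propto \det(L_S), \qquad (L_S)_{ij} = L_{ij} \ \text{for } i, j \in S,
    \]

    so configurations whose components are similar under the kernel have a nearly singular $L_S$ and hence near-zero prior mass, which is exactly the repulsion that keeps mixture components well separated.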

    Marcelo Vinhal Nepomuceno, HEC Montréal

    Équipe : Charlin Laurent, Dantas Danilo C., & Cenesizoglu Tolga

    Using machine learning to uncover how marketer-generated post content is associated with user-generated content and revenue

    This project proposes using machine learning to improve a company's communication with its customers in order to increase sales. To that end, we will identify how broadcaster-generated content is associated with user-generated content and revenue measures. In addition, we intend to automate the identification of post content, and to propose personalized recurrent neural networks that identify the writing styles of brands and companies and automate the creation of online content.

    Dang Khoa Nguyen, Université de Montréal

    Équipe : Sawan Mohamad, Lesage Frédéric, Zerouali Younes & Sirpal Parikshat

    Real-time detection and prediction of epileptic seizures using deep learning on sparse wavelet representations

    Epilepsy is a chronic neurological condition in which about 20% of patients do not benefit from any form of treatment. In order to diminish the impact of recurring seizures on their lives, we propose to exploit the potential of artificial intelligence techniques for predicting the occurrence of seizures and detecting their early onset, so as to issue warnings to patients. The aim of this project is thus to develop an efficient algorithm based on deep neural networks for performing real-time detection and prediction of seizures. This work will pave the way for the development of intelligent implantable sensors coupled with alert systems and on-site treatment delivery.
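
    A minimal sketch of the pipeline's shape, assuming PyWavelets for the wavelet representation and PyTorch for the network; the signal, scales and architecture below are illustrative placeholders, not the project's design:

    import numpy as np
    import pywt
    import torch
    import torch.nn as nn

    # Hypothetical single-channel EEG window (1 s at 256 Hz); synthetic noise here.
    sig = np.random.default_rng(0).standard_normal(256).astype(np.float32)

    # Time-frequency representation via the continuous wavelet transform.
    scales = np.arange(1, 33)                  # 32 scales
    coefs, _ = pywt.cwt(sig, scales, "morl")   # shape: (32, 256)
    x = torch.from_numpy(coefs.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (N, C, H, W)

    # Tiny CNN emitting a seizure-probability score for the window.
    net = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid(),
    )
    print(net(x))  # untrained score; a real detector would be trained on labeled EEG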

    Jian-Yun Nie, Université de Montréal

    Équipe : Langlais Philippe, Tang Jian & Tapp Alain

    Knowledge-based inference for question answering and information retrieval

    Question answering (QA) is a typical NLP/AI problem with wide applications. A typical approach first retrieves relevant text passages and then determines the answer from them. These steps are usually performed separately, undermining the quality of the answers. In this project, we aim to develop new methods for QA in which the two steps can benefit from each other. On the one hand, inference based on a knowledge graph will be used to enhance the passage retrieval step; on the other hand, the retrieved passages will be incorporated into the inference process that determines the answer.

    Jean-François Plante, HEC Montréal

    Équipe : Brown Patrick, Duchesne Thierry & Reid Nancy

    Statistical modelling with distributed systems

    Statistical inference requires a large toolbox of models and algorithms that can accommodate different structures in the data. Modern datasets are often stored on distributed systems where the data are scattered across a number of nodes with limited bandwidth between them. As a consequence, many complex statistical models cannot be computed natively on those clusters. In this project, we will advance statistical modeling contributions to data science by creating solutions that are ideally suited for analysis on distributed systems.
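
    One simple pattern in this space, sketched below with the nodes simulated in-process, is one-shot averaging: each node fits the model on its local data shard and only the low-dimensional estimates cross the network, never the raw data.

    import numpy as np

    rng = np.random.default_rng(1)
    p, n_nodes, n_per_node = 5, 8, 10_000
    beta_true = rng.normal(size=p)

    def local_ols(seed):
        # Each "node" sees only its own shard and returns its local estimate.
        r = np.random.default_rng(seed)
        X = r.normal(size=(n_per_node, p))
        y = X @ beta_true + r.normal(size=n_per_node)
        return np.linalg.lstsq(X, y, rcond=None)[0]

    # One-shot averaging: combine local estimates without moving raw data.
    beta_hat = np.mean([local_ols(s) for s in range(n_nodes)], axis=0)
    print(np.abs(beta_hat - beta_true).max())  # close to the full-data OLS answer here

    Averaging works well for this smooth, well-specified model; the project's point is precisely that many complex models do not decompose this cleanly, which is what calls for new distributed-native methodology.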

    Doina Precup, McGill University

    Équipe : Bengio Yoshua & Pineau Joelle

    Learning independently controllable features with application to robotics

    Learning good representations is key for intelligent systems. One intuition is that good features will disentangle distinct factors that explain variability in the data, thereby leading to the potential development of causal reasoning models. We propose to tackle this fundamental problem using deep learning and reinforcement learning. Specifically, a system will be trained to discover simultaneously features that can be controlled independently, as well as the policies that control them. We will validate the proposed methods in simulations, as well as by using a robotic wheelchair platform developed at McGill University.

    Marie-Ève Rancourt, HEC Montréal

    Équipe : Laporte Gilbert, Aloise Daniel, Cervone Guido, Silvestri Selene, Lang Stefan, Vedat Verter & Bélanger Valérie

    Analytics and optimization in a digital humanitarian context

    When responding to humanitarian crises, the lack of information increases the overall uncertainty. This hampers the efficiency of relief efforts and can amplify the damage. In this context, technological advances such as satellite imaging and social networks can support data gathering and processing to improve situational awareness. For example, volunteer technical communities leverage ingenious crowdsourcing solutions to make sense of vast volumes of data and virtually support relief efforts in real time. This research project builds on such digital humanitarianism initiatives through the development of innovative tools that allow evidence-based decision making. The aim is to test the proposed methodological framework and show how data analytics can be combined with optimization to process multiple sources of data, and thus provide timely and reliable decision support.

    Louis-Martin Rousseau, Polytechnique Montréal

    Équipe : Adulyasak Yossiri, Charlin Laurent, Dorion Christian, Jeanneret Alexandre & Roberge David

    Learning representations of uncertainty for decision making processes

    Decision support and optimization tools are playing an increasingly important role in today’s economy. The vast majority of such systems, however, assume the data is either deterministic or follows a certain form of theoretical probability functions. We aim to develop data driven representations of uncertainty, based on modern machine learning architectures such as probabilistic deep neural networks, to capture complex and nonlinear interactions. Such representations are then used in stochastic optimization and decision processes in the fields of cancer treatment, supply chain and finance.

    Nicolas Saunier, Polytechnique Montréal

    Équipe : Goulet James, Morency Catherine, Patterson Zachary & Trépanier Martin

    Fundamental Challenges for Big Data Fusion and Strategic Transportation Planning

    As more and more transportation data becomes continuously available, transportation engineers and planners are ill-equipped to make use of it in a systematic and integrated way. This project aims to develop new machine learning methods to combine transportation data streams of various nature, spatial and temporal definitions and pertaining to different populations. The resulting model will provide a more complete picture of the travel demand for all modes and help better evaluate transportation plans. This project will rely on several large transportation datasets.

    Yvon Savaria, Polytechnique Montréal

    Équipe : David Jean-Pierre, Cohen-Adad Julien & Bengio Yoshua

    Optimised Hardware-Architecture Synthesis for Deep Learning

    Deep learning requires considerable computing power, which can be improved significantly by designing application-specific computing engines dedicated to deep learning. The proposed project consists of designing and implementing a High-Level Synthesis tool that generates an RTL design from the code of an algorithm. This tool will optimize the architecture, the number of computing units, the length and representation of numbers, and the important parameters of the various memories generated.

    Mohamad Sawan, Polytechnique Montréal

    Équipe : Savaria Yvon & Bengio Yoshua

    Equilibrium Propagation Framework: Analog Implementation for Improved Performances (Equipe)

    The main aim of this project is to implement the Equilibrium Propagation (EP) algorithm in analog circuits, rather than digital building blocks, to take advantage of their higher computation speed and power efficiency. EP involves the minimization of an energy function, which requires a long relaxation phase that is costly (in terms of time) to simulate on digital hardware but can be accelerated through an analog circuit implementation. The two main implementation phases of this project are: (1) quick prototyping and proof of concept using an FPAA platform (RASP 3.0), and (2) a high-performance custom System-on-Chip (SoC) implementation using a standard CMOS process (e.g., 65 nm) to optimize area, speed and power consumption.
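
    For background, in Scellier and Bengio's formulation of EP (generic notation, not this chip's specification) the network relaxes to minima of a total energy

    \[
        F(\theta, \beta, s) = E(\theta, s) + \beta\, C(s),
    \]

    where $E$ is the internal energy over the state $s$, $C$ the task cost, and $\beta \ge 0$ a nudging factor. The long relaxation mentioned above is the settling of $s$ to equilibrium, and the weight update compares the free phase ($\beta = 0$) with a weakly clamped phase:

    \[
        \Delta \theta \propto \frac{1}{\beta} \left( \frac{\partial F}{\partial \theta}\Big|_{\beta} - \frac{\partial F}{\partial \theta}\Big|_{0} \right).
    \]

    Analog circuits can perform that settling in physical time rather than simulated time, which is the speed and power argument made above.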

    François Soumis, Polytechnique Montréal

    Équipe : Desrosiers Jacques, Desaulniers Guy, El Hallaoui Issmail, Lacoste-Julien Simon, Omer Jérémy & Mohammed Saddoune

    Combining machine learning and operations research to solve large airline crew scheduling problems faster

    Our recent work focuses on the development of exact optimization algorithms that exploit a priori information about the expected solutions to reduce the number of variables and constraints treated simultaneously. The objective is to develop a machine learning system that provides the information needed to accelerate these optimization algorithms as much as possible, in order to tackle larger airline crew scheduling problems. Beyond advances in operations research, this project will also produce advances in constrained and reinforcement learning.

    An Tang, Université de Montréal

    Équipe : Pal Christopher, Kadoury Samuel, Bengio Yoshua, Turcotte Simon, Nguyen Bich & Anne-Marie Mes-Masson

    Predictive model of colorectal cancer liver metastases response to chemotherapy

    Colorectal cancer is the second leading cause of cancer death in Canada. In patients with colorectal liver metastases, response to chemotherapy is the main determinant of patient survival. Our multidisciplinary team will develop models to predict response to chemotherapy and patient prognosis using the most recent innovations in deep learning architectures. We will train our model on data from an institutional biobank and validate it on independent provincial imaging and medico-administrative databases.

    Pierre Thibault, Université de Montréal

    Équipe : Lemieux Sébastien, Bengio Yoshua & Perreault Claude

    Matching MHC I-associated peptide spectra to sequencing reads using deep neural networks

    Identification of the MHC I-associated peptides (MAPs) unique to a patient or tumor is a key step in developing efficacious cancer immunotherapies. This project aims to develop a novel approach that exploits deep neural networks (DNNs) to identify MAPs from a combination of next-generation sequencing (RNA-Seq) and tandem mass spectrometry (MS/MS). The proposed developments will take advantage of a unique dataset of approximately 60,000 (MS/MS, sequence) pairs assembled by our team. The project will also bring together researchers from broad horizons: mass spectrometry, bioinformatics, machine learning and cancer immunology.