Carry out interdisciplinary research at the intersection of two pressing AI-related issues: AI governance and AI implementation.

Vision

Overcome the traditional opposition between innovation and regulation by creating transparent, reliable and secure business and governance models that meet the requirements of industry, public institutions and the public, including groups that artificial intelligence (AI) could exclude.

Objectives

  • Conduct fundamental and multidisciplinary scientific research.
  • Contribute to methods, tools and technologies that enable the development of trustworthy intelligent applications.
  • Promote the adoption of AI for the benefit of society.

Research Axes

Axis 1: AI governance

Axis 1 aims to fill gaps in AI governance and to help companies and organizations comply with it more effectively. It also addresses the ethical, legal and democratic issues raised by the widespread use of large language models (LLMs).

Axis 2: AI implementation

Axis 2 focuses on identifying and continuously updating the technological and professional capabilities required to integrate AI effectively into organizations. It also aims to democratize the development of AI applications.

Anticipated Impact

  • Participate in the development of policies governing the use of AI and robotics in Quebec and Canada.
  • Reduce misinformation and disinformation by developing a fact-checking system.
  • Stimulate the responsible integration of AI in healthcare organizations by measuring their organizational maturity.
  • Promote the continuous acquisition of new AI skills in various work contexts.
  • Contribute to the democratization of the responsible use of AI.

Challenges

Recent advances in AI are paving the way for major and beneficial changes in many sectors. However, the proliferation of AI systems amplifies misinformation and disinformation, raising significant ethical, democratic, legal and security concerns. It also creates challenges in the workplace: erosion of skills and quality of work, reduced autonomy and sense of purpose for employees, microsurveillance of the workforce, and excessive trust in AI tools.

In this context, how can we harness the benefits of AI without fueling its harmful effects? Striking this balance requires contributions from researchers in fields such as political science, robotics, law, ethics, computer science, communication, and human-machine interaction. Working together, they propose a multidisciplinary scientific approach aimed at reconciling these two imperatives.

Research Team

Co-leaders

Foutse Khomh
Polytechnique Montréal
Lyse Langlois
Université Laval
Pierre-Majorique Léger
HEC Montréal

Researchers

Students

  • Khayreddine Bouabida – Université de Montréal
  • Hugo Cossette-Lefebvre – McGill University
  • Pietro Cruciata – Polytechnique Montréal
  • Amélie Lévesque – HEC Montréal
  • Kellin Pelrine – McGill University
  • Thaddé Rolon-Merette – HEC Montréal
  • Peter Yu – McGill University

Research Advisor

Florence Lussier-Lejeune: florence.lussier-lejeune@ivado.ca