Price
$70–$980
The momentum toward AI adoption promises that in our work, our play, and our decision making we will soon be – in fact, already are – surrounded by an array of AI tools and agents. At the same time, our understanding of the uncertainty associated with much of this emerging infrastructure is in its infancy. That uncertainty takes many guises, ranging from the predictive accuracy of deep learning algorithms and hallucinations in generative AI to a still largely empirical understanding of what AI is – and is not – capable of doing.

This workshop will highlight principled ways in which statistical theory and methods are contributing to our emerging understanding of the uncertainty that accompanies AI and, simultaneously, how AI development is seeding innovation in statistics at a pace rarely seen before, particularly at the interface with similarly fast-evolving areas of applied mathematics. Topics range from Bayesian deep learning to the mathematics of transformers and their connections to interacting particle systems, and from the analysis of transfer learning via optimal transport and related ideas such as flow matching to emerging formal characterizations of chain-of-thought reasoning.

Bringing together statistical experts in AI from around Quebec, Canada, and the larger global AI community, this four-day workshop will provide a rare opportunity for this group to convene at a still relatively small scale to discuss and continue laying the foundations for statistical uncertainty quantification in AI.
Registration opens on February 2nd, 2026.

