Price
$70-$280
AI systems are moving into high-stakes domains – health, finance, and public services – where errors, disparities, and privacy breaches carry real social costs. Much current practice is benchmark-driven and metric-optimized, but deployment demands evidence: error control, calibrated uncertainty, fairness that survives distribution shift, and governance artifacts that regulators and clinicians can audit. Statistics provides the spine for this evidence – clear estimands and identifiability, finite-sample and non-asymptotic guarantees, uncertainty quantification, principled testing, and study design. This workshop convenes statisticians and AI/ML researchers to show how statistical principles translate into trustworthy AI systems – not just in theory, but in tools ready for practical use. Our aims are threefold: (1) bridge statistics and AI on shared deployment challenges – fairness, privacy, federated learning, distribution shift, and evaluation; (2) showcase methods with provable guarantees (coverage, error rates, privacy budgets, fairness tests) that are engineered for practice; and (3) catalyze collaborations between the statistics and AI communities, building a culture that treats trustworthy AI as an evidence-based, statistically grounded endeavor.
Registration opens on February 2nd, 2026.
Confirmed Speakers
Elvezio Ronchetti (University of Geneva), Huixia Judy Wang (Rice University), Ji Zhu (University of Michigan), Jian Huang (The Hong Kong Polytechnic University), Junhui Wang (The Chinese University of Hong Kong), Linjun Zhang (Rutgers University), Ricardo Silva (University College London), Weijie Su (University of Pennsylvania), Xiaotong Shen (University of Minnesota) and Yuekai Sun (University of Michigan).

