Price: $70-$980
AI systems are moving into high-stakes domains such as health, finance, and public services, where errors, disparities, and privacy breaches carry real social costs. Much current practice is benchmark-driven and metric-optimized, but deployment demands evidence: error control, calibrated uncertainty, fairness that survives distribution shift, and governance artifacts that regulators and clinicians can audit. Statistics provides the spine for this evidence through clear estimands and identifiability, finite-sample (non-asymptotic) guarantees, uncertainty quantification, principled testing, and study design. This workshop convenes statisticians and AI/ML researchers to show how statistical principles translate into trustworthy AI systems, not just in theory but in tools ready for practical use.

Our aims are threefold: (1) bridge statistics and AI on shared deployment challenges, including fairness, privacy, federated learning, distribution shift, and evaluation; (2) showcase methods with provable guarantees (coverage, error rates, privacy budgets, fairness tests) that are engineered for practice; and (3) catalyze collaborations between the statistics and AI communities, building a culture that treats trustworthy AI as an evidence-based, statistically grounded endeavor.
Registration opens on February 2nd, 2026.

