As AI systems advance at a rapid pace, privacy and security risks remain critically overlooked, even in privacy-preserving frameworks. Federated Learning (FL) was designed to enhance privacy by keeping data decentralised, yet it remains vulnerable to inference attacks that extract sensitive information from shared model updates. Differential Privacy (DP), with its mathematically rigorous guarantees, offers a principled defence against such threats, but integrating it into FL poses significant challenges to utility, efficiency, and robustness. This talk provides a comprehensive exploration of DP in collective intelligence, covering its fundamental principles, DP-SGD, and its application in FL, including Local DP, Central DP, and Shuffle DP. We then introduce our contributions to this field: Private Individual Computation (PIC) for Shuffle DP in FL, an analysis of macro-level inference attacks in Horizontal FL (HFL), and a Vertical FL-based framework for synthesising tabular data with privacy-utility trade-offs.
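As background for the DP-SGD component of the talk, the core recipe is per-example gradient clipping followed by calibrated Gaussian noise. The sketch below is a minimal pure-Python illustration on a toy logistic-regression step; the function name and parameter choices are illustrative, not from the talk itself.

```python
import math
import random

def dp_sgd_step(weights, X, y, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, seed=0):
    """One illustrative DP-SGD step on a toy logistic-regression loss.

    Each example's gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are summed, Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added, and the noisy average drives the
    update. Hyperparameter values here are placeholders.
    """
    rng = random.Random(seed)
    clipped_sum = [0.0] * len(weights)
    for xi, yi in zip(X, y):
        # Per-example gradient of the logistic loss.
        z = sum(w * x for w, x in zip(weights, xi))
        p = 1.0 / (1.0 + math.exp(-z))
        g = [(p - yi) * x for x in xi]
        # Clip the example's gradient to L2 norm <= clip_norm.
        norm = math.sqrt(sum(v * v for v in g))
        scale = 1.0 / max(1.0, norm / clip_norm)
        clipped_sum = [s + scale * v for s, v in zip(clipped_sum, g)]
    # Gaussian noise calibrated to the clipping bound.
    sigma = noise_multiplier * clip_norm
    noisy_mean = [(s + rng.gauss(0.0, sigma)) / len(y) for s in clipped_sum]
    return [w - lr * m for w, m in zip(weights, noisy_mean)]
```

In the FL settings discussed in the talk, the same clip-then-noise idea appears in different places: at each client (Local DP), at the server (Central DP), or after anonymising updates through a shuffler (Shuffle DP).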
https://www.southampton.ac.uk/people/65cgfc/doctor-han-wu