Dr. Annie Qu, UCSB
Dr. David Blei, Columbia University
Annie Qu is a
Professor in the Department of Statistics and Applied Probability
at the University of California, Santa Barbara, starting July 2025.
She received her Ph.D. in Statistics from the Pennsylvania
State University in 1998. Qu’s research addresses
fundamental problems in structured and unstructured
large-scale data, developing statistical
methods, theory, and machine learning algorithms for
personalized medicine, text mining, recommender systems,
medical imaging, and network data analysis with complex
heterogeneous data. Dr. Qu was a Data Science Founder
Professor of Statistics and Director of the Illinois
Statistics Office at the University of Illinois at
Urbana-Champaign from 2008 to 2019, and
Chancellor's Professor at UC Irvine from 2020 to 2025. She was a
recipient of the NSF CAREER Award from 2004 to 2009. She is a
Fellow of the Institute of Mathematical Statistics (IMS), the
American Statistical Association, and the American Association
for the Advancement of Science. She was an IMS Medallion
Award recipient and Lecturer in 2024. She serves as Theory and
Methods Co-Editor of the Journal of the American Statistical
Association from 2023 to 2025, IMS Program Secretary from 2021
to 2027, and Chair of the ASA Council of Sections Governing Board
in 2025. She is the recipient of the 2025 IMS Carver
Medal.
Abstract: In the era of big data, large-scale, multi-modal datasets are increasingly ubiquitous, offering unprecedented opportunities for predictive modeling and scientific discovery. However, these datasets often exhibit complex heterogeneity, such as covariate shift, posterior drift, and missing modalities, which can hinder the accuracy of existing prediction algorithms. To address these challenges, we propose a novel Representation Retrieval (R2) framework, which integrates a representation learning module (the representer) with a sparsity-induced machine learning model (the learner). Moreover, we introduce the notion of “integrativeness” for representers, characterized by the effective number of data sources used in learning a representer, and propose a Selective Integration Penalty (SIP) to explicitly improve this property. Theoretically, we demonstrate that the R2 framework relaxes the conventional full-sharing assumption in multi-task learning, allowing for partially shared structures, and that SIP can improve the convergence rate of the excess risk bound. Extensive simulation studies validate the empirical performance of our framework, and applications to two real-world datasets further confirm its superiority over existing approaches.
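To make the representer/learner split and the integrativeness penalty concrete, here is a minimal toy sketch in Python. It is not the paper's method: the PCA-based representers, the group-lasso learner, the proximal-gradient solver, and the weighting rule (penalizing less-integrative blocks more heavily) are all illustrative assumptions that only mirror the abstract's high-level description.

```python
import numpy as np

# Illustrative sketch only: representers are toy PCA maps, the learner is a
# group-sparse linear model, and the SIP-style weights are an assumed rule
# that shrinks low-integrativeness representer blocks harder.
rng = np.random.default_rng(0)
n, p, k = 60, 10, 4  # samples per source, features, representation dim

# Three heterogeneous data sources.
sources = [rng.normal(size=(n, p)) for _ in range(3)]

def fit_representer(X, k):
    """Toy representer: top-k principal directions of the pooled data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k].T  # p x k projection matrix

# Candidate representers fit on 1, 2, or all 3 sources; "integrativeness"
# counts how many sources effectively contributed to each representer.
representers = [fit_representer(np.vstack(sources[:m]), k) for m in (1, 2, 3)]
integrativeness = np.array([1.0, 2.0, 3.0])

# Target task on source 0; the signal lives in the most integrative block.
X = sources[0] - sources[0].mean(axis=0)
y = X @ representers[2] @ rng.normal(size=k) + 0.1 * rng.normal(size=n)

# Learner: group-sparse regression over representer blocks, solved by
# proximal gradient descent with group soft-thresholding.
Z = np.hstack([X @ R for R in representers])  # n x 3k stacked design
weights = 2.0 / integrativeness               # assumed SIP-style weighting

B = np.zeros(Z.shape[1])
lr = 0.01
for _ in range(500):
    B -= lr * Z.T @ (Z @ B - y) / n           # gradient step on squared loss
    for j, w in enumerate(weights):           # group soft-threshold per block
        blk = slice(j * k, (j + 1) * k)
        norm = np.linalg.norm(B[blk])
        if norm > 0:
            B[blk] *= max(0.0, 1.0 - lr * w / norm)

block_norms = [np.linalg.norm(B[j * k:(j + 1) * k]) for j in range(3)]
```

The fitted block norms indicate which representers the learner retains; because the most integrative block carries the signal and receives the lightest penalty, one would expect it to dominate the fit.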
David Blei
is the William B. Ransford Professor of Statistics and
Computer Science at Columbia University. He studies
probabilistic machine learning and Bayesian statistics,
including theory, algorithms, and applications. David has
received several awards for his research, including the ACM
Prize in Computing (2013), a Guggenheim fellowship (2017), a
Simons Investigator Award (2019), the AAAI John McCarthy award
(2024), and the ACM/AAAI Allen Newell Award (2024). He was the
co-editor-in-chief of the Journal of Machine Learning Research
from 2019 to 2024. He is a fellow of the Association for
Computing Machinery (ACM) and the Institute of Mathematical
Statistics (IMS).
Abstract: TBA