MATRIX Spring Seminar Series – Dr. Pradeep Ravikumar
February 5, 2021 • 11:00 am - 12:00 pm
On Friday, February 5th, MATRIX is pleased to host Dr. Pradeep Ravikumar from Carnegie Mellon University for our spring seminar series. This seminar will be held via Zoom. Additional information about the seminar may be found below –
Graceful Statistical Machine Learning
Dr. Pradeep Ravikumar
Machine Learning Department, School of Computer Science, Carnegie Mellon University
February 5, 2021, 11 AM – Noon CST
tinyurl.com/MATRIXSpringSeminar
Modern statistical machine learning systems have achieved tremendous empirical successes in recent years. But there is an increasing realization that these successes come largely in sanitized settings, where the training and test environments are similar and the desiderata for these systems are based on well-defined expected performance measures. When these conditions fail to hold, these systems frequently exhibit a lack of graceful behavior.
We provide two vignettes of our recent research into more graceful machine learning systems. In the first, we provide a new computationally efficient class of machine learning estimators that are robust under varied settings, including arbitrary training data contamination and heavy-tailed training data (so that the sample data is not “representative” of the true distribution). Our workhorse is a novel robust variant of gradient descent, and we provide conditions under which this variant yields accurate estimators in a general convex risk minimization problem. These results provide some of the first computationally tractable and provably robust estimators for general statistical models.
In the second vignette, we reconsider the standard workhorse of statistical machine learning, empirical risk minimization, which aims to minimize the expected risk with respect to some loss function for a given task. However, seminal results in behavioral economics have shown that human decision-making is based on different risk measures than the expectation of any given loss function. In contrast to minimizing expected loss, could we minimize a better, human-aligned risk measure? While this might not seem natural at first glance, we analyze the properties of such a revised “human-aligned” risk measure, and surprisingly show that it also better aligns with varied additional desiderata such as fairness.
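For readers curious what a "robust variant of gradient descent" can look like in practice, the sketch below is a minimal, hypothetical illustration only: it replaces the usual empirical-mean gradient with a median-of-means aggregate in a toy least-squares problem. The median-of-means choice, the function names, and all parameters are assumptions made for illustration, not the estimator or guarantees from Dr. Ravikumar's work.

```python
import numpy as np

def median_of_means(per_sample_grads, num_blocks=5):
    """Robustly aggregate per-sample gradients: average within blocks,
    then take the coordinate-wise median across the block means."""
    blocks = np.array_split(per_sample_grads, num_blocks)
    block_means = np.stack([b.mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

def robust_gradient_descent(X, y, steps=500, lr=0.05, num_blocks=5):
    """Least-squares regression where each descent step uses a robust
    aggregate of per-sample gradients instead of their plain mean."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        residuals = X @ theta - y                    # shape (n,)
        per_sample_grads = residuals[:, None] * X    # grad of 0.5*(x·theta - y)^2, shape (n, d)
        theta -= lr * median_of_means(per_sample_grads, num_blocks)
    return theta

# Toy usage (hypothetical data): a handful of grossly corrupted labels
# barely moves the robust estimate, unlike ordinary gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=500)
y[:10] += 100.0                                      # arbitrary contamination
print(robust_gradient_descent(X, y))
```

Because the corrupted samples land in only one block, the coordinate-wise median across block means discards their influence on each step, which is the general intuition behind robust gradient aggregation.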
Speaker Biography:
Pradeep Ravikumar is an Associate Professor in the Machine Learning Department, School of Computer Science at Carnegie Mellon University. His thesis received honorable mentions for the ACM SIGKDD Dissertation Award and the CMU School of Computer Science Distinguished Dissertation Award. He is a Sloan Fellow, a Siebel Scholar, and a recipient of the NSF CAREER Award. He is an Associate Editor-in-Chief for IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), an editor for the Machine Learning journal, and an action editor for the Journal of Machine Learning Research, and was Program Chair for the International Conference on Artificial Intelligence and Statistics (AISTATS) in 2013.