Join us at our next event.
MATRIX Spring Seminar Series – Dr. Joseph Houpt
On Friday, April 2nd, MATRIX is pleased to host Dr. Joseph Houpt of UTSA as part of our seminar series. All spring seminars will be hosted via Zoom. Full details regarding the seminar may be found below –
Geometric Perspective of Human and Machine Categorization Performance
Dr. Joseph Houpt
Department of Psychology
University of Texas at San Antonio
Friday, April 2, 2021
11 AM – 12 PM CST
http://tinyurl.com/MATRIXSpringSeminar
Abstract: For the foreseeable future, AI systems will be applied to tasks driven by human task requirements, and human interaction with the systems is crucial. This interaction between human operators and AI systems, when done well, can facilitate task completion and lead to more desirable outcomes. One feature that can lead to better interaction between humans and AI systems is a model of the human operator’s approach to the task that can be integrated with the AI. In this talk, I will highlight two of the more widely applied mathematical models of human categorization and their close relationship with standard machine learning approaches. Based on that foundation, I will give an overview of my current research goals of leveraging models of human categorization and machine categorization approaches to facilitate both human and AI performance.
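As a rough illustration of the kind of connection the abstract alludes to, the sketch below contrasts two classic categorization models with their machine-learning counterparts: a prototype model (essentially a nearest-centroid classifier) and an exemplar model in the style of the Generalized Context Model (closely related to kernel and nearest-neighbor methods). The choice of these two models is our assumption; the abstract does not name them.

```python
# Minimal sketch, assuming the two models in question are prototype and exemplar
# models (an assumption; the abstract does not name them).
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D category data: two Gaussian clusters.
X_a = rng.normal(loc=[0.0, 0.0], scale=0.7, size=(50, 2))
X_b = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(50, 2))

def prototype_classify(x, exemplars_a, exemplars_b):
    """Prototype model: compare the stimulus to each category's mean
    (essentially a nearest-centroid classifier)."""
    d_a = np.linalg.norm(x - exemplars_a.mean(axis=0))
    d_b = np.linalg.norm(x - exemplars_b.mean(axis=0))
    return "A" if d_a < d_b else "B"

def exemplar_classify(x, exemplars_a, exemplars_b, c=2.0):
    """Exemplar (GCM-style) model: sum exponentially decaying similarities to
    every stored exemplar (closely related to kernel / nearest-neighbor methods)."""
    s_a = np.exp(-c * np.linalg.norm(exemplars_a - x, axis=1)).sum()
    s_b = np.exp(-c * np.linalg.norm(exemplars_b - x, axis=1)).sum()
    return "A" if s_a > s_b else "B"

x_new = np.array([0.8, 1.1])
print("prototype:", prototype_classify(x_new, X_a, X_b))
print("exemplar: ", exemplar_classify(x_new, X_a, X_b))
```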
MATRIX Spring Seminar Series – Dr. Matthew Hirn
On Friday, March 26th, MATRIX is pleased to host Dr. Matthew Hirn of Michigan State University as part of our seminar series. More information about Dr. Hirn’s research may be found on his webpage. All spring seminars will be hosted via Zoom. Full details regarding the seminar may be found below –
Understanding convolutional neural networks through signal processing
Dr. Matthew Hirn
Department of Computational Mathematics, Science and Engineering; Department of Mathematics
Michigan State University
Friday, March 26, 2021
11 AM – 12 PM CST
tinyurl.com/MATRIXSpringSeminar
Abstract: Convolutional neural networks (CNNs) are the go-to tool for signal processing tasks in machine learning. But how and why do they work so well? Using the basic guiding principles of CNNs, namely their convolutional structure, invariance properties, and multi-scale nature, we will discuss how the CNN architecture arises as a natural by-product of these principles using the language of nonlinear signal processing. In doing so, we will extract some core ideas that allow us to apply these types of algorithms in various contexts, including the multi-reference alignment inverse problem, generative models for textures, and supervised machine learning for quantum many particle systems. Time permitting, we will also discuss how these core ideas can be used to generalize CNNs to manifolds and graphs, while still being able to provide mathematical guarantees on the nature of the representation provided by these tools.
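Purely as a reminder of the three guiding principles the abstract names (convolutional structure, invariance, and multi-scale processing), here is a small PyTorch sketch of a standard CNN in which those ingredients appear. It is an illustration only, not the scattering-style construction discussed in the talk.

```python
# Illustrative sketch only: a tiny CNN showing the three ingredients named in the
# abstract. It is NOT the construction from the talk, just a standard architecture.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Convolutional structure: weights shared across spatial positions.
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # coarser scale
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # coarser still: a multi-scale cascade
        )
        # Global average pooling makes the representation approximately
        # invariant to translations of the input.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        h = self.pool(h).flatten(1)
        return self.classifier(h)

model = TinyCNN()
print(model(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```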
MATRIX Spring Seminar Series – Dr. Amanda Fernandez
On Friday, March 19th, MATRIX is pleased to host Dr. Amanda Fernandez of UTSA as part of our seminar series. More information about Dr. Fernandez’s research may be found on her webpage. All spring seminars will be hosted via Zoom. Full details regarding the seminar may be found below –
Robust Visual Understanding through Segmentation and Saliency Estimation: Challenges in Deep Learning Research
Dr. Amanda Fernandez
Department of Computer Science – UTSA
Friday, March 19, 2021
11 AM – 12 PM CST
tinyurl.com/MATRIXSpringSeminar
Abstract: Computer vision aims to emulate the human visual system, providing artificial intelligence agents with the opportunity to learn from visual data. While the field has evolved significantly over the last 60+ years, more recently, deep learning architectures have enabled the transition from processing this visual data to learning how to interpret it. Key research can now focus on how explainable a model is, how well it can understand this input, and how vulnerable that understanding may be to an attack.
In this talk, I will outline some of the state-of-the-art in computer vision, identifying significant research milestones as well as open problems. This talk will take a deep learning perspective, focusing on neural network architectures and research challenges such as few-shot learning, semantic segmentation, and adversarial robustness. Finally, I will discuss some of our recent work applying our vision models to virtual reality/eye tracking, autonomous vehicles, and nuclear physics.
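For readers unfamiliar with the adversarial-robustness challenge mentioned above, the following is a minimal PyTorch sketch of the classic fast gradient sign method (FGSM) attack. It is a generic textbook illustration, not taken from Dr. Fernandez’s work, and the classifier and inputs are stand-ins.

```python
# Minimal FGSM sketch (generic illustration, not from the speaker's work):
# perturb an input in the direction of the sign of the loss gradient.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in image in [0, 1]
y = torch.tensor([3])                               # its true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1                                       # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```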
MATRIX Spring Seminar Series – Dr. Amey Kulkarni
On Friday, March 5th, MATRIX is pleased to host Dr. Amey Kulkarni from NVIDIA as part of our seminar series. All spring seminars will be hosted via Zoom. Full details regarding the seminar may be found below –
Demystifying Deep Learning Inference on Embedded Platforms
Dr. Amey Kulkarni
NVIDIA
Friday, March 5, 2021
11 AM – 12 PM CST
tinyurl.com/MATRIXSpringSeminar
Abstract: Implementing deep learning inference on resource-constrained embedded platforms is challenging. In recent years, we have seen artificial intelligence deployed on embedded platforms throughout everyday life, from keyword detection on voice-assistant devices and voice and image recognition on cell phones to robotic platforms performing food delivery and smart security systems. However, successful deployment of highly complex deep learning models requires optimization at both the hardware and the software levels. In this talk, I will share my experience enabling efficient inference on embedded systems using two widely adopted software techniques: quantization and pruning. Applying both quantization and pruning is challenging because of the accuracy vs. complexity trade-off. Additionally, we will review some of the NVIDIA tools that improve the end-to-end deep learning workflow, with examples and performance evaluations on NVIDIA’s Jetson embedded platform.
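As a framework-level illustration of the two techniques named above, the sketch below applies magnitude pruning and dynamic quantization to a small PyTorch model. This is generic PyTorch usage, not NVIDIA’s TensorRT/Jetson workflow discussed in the talk.

```python
# Generic PyTorch illustration of pruning and quantization. A sketch of the two
# techniques named in the abstract, not NVIDIA's TensorRT/Jetson workflow.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Pruning: zero out the 50% of weights with the smallest magnitude in each layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # make the sparsity permanent

# Quantization: store Linear weights as int8 and dequantize on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```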
MATRIX Spring Seminar Series – Dr. Pradeep Ravikumar
On Friday, February 5th, MATRIX is pleased to host Dr. Pradeep Ravikumar from Carnegie Mellon University for our spring seminar series. This seminar will be held via Zoom. Additional information about the seminar may be found below –
Graceful Statistical Machine Learning
Dr. Pradeep Ravikumar
Machine Learning Department, School of Computer Science, Carnegie Mellon University
February 5, 2021, 11 AM – Noon CST
tinyurl.com/MATRIXSpringSeminar
Abstract: Modern statistical machine learning systems have achieved tremendous empirical successes in recent years. But there is an increasing realization that these successes have come largely in sanitized settings, where the training and test environments are similar and the desiderata for these systems are based on well-defined expected performance measures. When these conditions fail to hold, these systems frequently exhibit a lack of graceful behavior.
We provide two vignettes of our recent research into more graceful machine learning systems. In the first, we provide a new, computationally efficient class of machine learning estimators that are robust under varied settings, including arbitrary contamination of the training data and heavy-tailed training data (so that the sample data is not “representative” of the true distribution). Our workhorse is a novel robust variant of gradient descent, and we provide conditions under which this variant yields accurate estimators in a general convex risk minimization problem. These results provide some of the first computationally tractable and provably robust estimators for general statistical models. In the second vignette, we reconsider the standard workhorse of statistical machine learning, empirical risk minimization, which minimizes an empirical estimate of the expected risk under some loss function for a given task. However, seminal results in behavioral economics have shown that human decision-making relies on different risk measures than the expectation of any given loss function. Rather than minimizing expected loss, could we minimize a better, human-aligned risk measure? While this might not seem natural at first glance, we analyze the properties of such a revised “human-aligned” risk measure and, surprisingly, show that it also better aligns with varied additional desiderata such as fairness.
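To make the “robust variant of gradient descent” idea concrete, here is a sketch of one simple instance: gradient descent that aggregates per-sample gradients with a coordinate-wise trimmed mean, applied to least-squares regression with contaminated responses. It illustrates the general idea only and is not the specific estimator described in the talk.

```python
# Sketch of one simple robust-gradient-descent variant (coordinate-wise
# trimmed-mean gradients for least squares). Illustrative only; not the specific
# estimator described in the talk.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(1)
n, d = 500, 5
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:25] += 50.0                       # contaminate 5% of the responses

w = np.zeros(d)
for _ in range(200):
    # Per-sample gradients of the squared loss: shape (n, d).
    per_sample_grads = 2.0 * (X @ w - y)[:, None] * X
    # Aggregate robustly: trim the top and bottom 10% in each coordinate.
    g = trim_mean(per_sample_grads, proportiontocut=0.1, axis=0)
    w -= 0.05 * g

print("estimation error:", np.linalg.norm(w - w_true))
```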
Speaker Biography:
Pradeep Ravikumar is an Associate Professor in the Machine Learning Department, School of Computer Science at Carnegie Mellon University. His thesis received honorable mentions for the ACM SIGKDD Dissertation Award and the CMU School of Computer Science Distinguished Dissertation Award. He is a Sloan Fellow, a Siebel Scholar, and a recipient of the NSF CAREER Award. He is Associate Editor-in-Chief of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), an editor of the Machine Learning journal, and an action editor of the Journal of Machine Learning Research, and was Program Chair for the International Conference on Artificial Intelligence and Statistics (AISTATS) in 2013.
MATRIX Spring Seminar Series – Matthew Mattina
On Friday, January 29th, MATRIX is pleased to host Matthew Mattina from ARM for our initial spring seminar. Mr. Mattina is an ARM Distinguished Engineer and Senior Director of its Machine Learning Research Lab. He is also a member of the MATRIX advisory board. This seminar will be held via Zoom at the link below. Additional information about the seminar may be found below –
Tiny but powerful: Hardware for High Performance, Low Power Machine Learning
Matthew Mattina
Arm
January 29, 2021, 11 AM – Noon CST
tinyurl.com/MATRIXSpringSeminar
Abstract: This talk will cover the emerging “TinyML” paradigm and give some example TinyML applications and benchmarks. We’ll look at what makes TinyML particularly challenging, and how neural network models and hardware can be co-designed to meet these challenges.
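As a back-of-the-envelope illustration of why TinyML is constrained, the sketch below counts the parameters of a small depthwise-separable, keyword-spotting-style model and compares its int8 weight footprint with a notional 256 KB microcontroller flash budget. The model, the class count, and the budget are assumed figures for illustration, not numbers from the talk.

```python
# Back-of-the-envelope sketch (assumed numbers, not from the talk): parameter
# count and int8 weight size of a small depthwise-separable CNN, compared with a
# notional 256 KB microcontroller budget.
import torch
import torch.nn as nn

def ds_block(c_in: int, c_out: int) -> nn.Sequential:
    """Depthwise-separable convolution block, a common TinyML building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, kernel_size=3, padding=1, groups=c_in),  # depthwise
        nn.Conv2d(c_in, c_out, kernel_size=1),                          # pointwise
        nn.ReLU(),
    )

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    ds_block(32, 64),
    ds_block(64, 64),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 12),          # e.g. 12 keyword classes (assumed)
)

n_params = sum(p.numel() for p in model.parameters())
int8_kb = n_params / 1024       # ~1 byte per weight after int8 quantization
print(f"parameters: {n_params:,}  |  int8 weights: {int8_kb:.1f} KB of a 256 KB budget")
```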
Speaker Biography:
Matthew Mattina is the Distinguished Engineer and Senior Director of Machine Learning Research at ARM, where he leads a global team of machine learning researchers developing advanced hardware, software, and algorithms. Prior to joining ARM in 2015, Matt was Chief Technology Officer at Tilera, where he was responsible for overall company technology, processor architecture, and strategy. Prior to Tilera, Matt was a CPU architect at Intel and invented and designed the Intel Ring Uncore Architecture. Matt has been granted over 30 patents relating to CPU design, multicore processors, on-chip interconnects, and cache coherence protocols. Matt holds a BS in Computer and Systems Engineering from Rensselaer Polytechnic Institute and an MS in Electrical Engineering from Princeton University.