Spring Seminar Series 2024 – Dr. Wenpin Tang
January 26, 2024 • 11:00 am – 12:00 pm CST
Contractive Diffusion Models and Score Matching by Continuous Reinforcement Learning
Wenpin Tang, Ph.D.
Assistant Professor
Department of Industrial Engineering and Operations Research
Columbia University, New York, NY
https://www.columbia.edu/~wt2319
Date: 01/26/2024
Time: 11:00 am – 12:00 pm CST
Location: San Pedro 1, Yotta Room 430
506 Dolorosa St, San Antonio, TX 78204
Zoom: https://utsa.zoom.us/j/94807623288
Abstract:
In this talk, I will link two different topics. The past decade has witnessed the success of generative modeling (e.g., GANs and VAEs) in creating high-quality samples across a wide variety of data modalities. The first part of the talk concerns the recently developed diffusion models, whose key idea is to reverse a certain stochastic dynamics. I will first take a continuous-time perspective and examine the performance of different SDE schemes, including VE (variance exploding) and VP (variance preserving). The discretization is more subtle, and our idea is to “contract” the reversed dynamics, leading to possible new diffusion model designs.

In the second part, I will talk about continuous reinforcement learning. Reinforcement learning (RL) has been successfully applied to wide-ranging domains over the past decade. In recent years, a fast-growing body of research has extended the frontiers of continuous RL to the design of model-free methods and algorithms. I will discuss the recently introduced “q-learning” and the closely related policy optimization. Finally, I will highlight a natural application of continuous RL: fine-tuning the score function in diffusion models.
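To illustrate the reverse-SDE idea mentioned in the abstract, the sketch below runs a VP (variance-preserving) diffusion in one dimension. Because the data distribution is taken to be Gaussian, its score has a closed form, which stands in for a learned score network; the constant noise schedule, time horizon, and step count are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Minimal 1-D sketch of a VP (variance-preserving) diffusion model.
# Forward SDE: dX = -0.5*beta*X dt + sqrt(beta) dW, which noises the data
# toward N(0, 1). Sampling reverses this SDE using the score of the
# time-t marginal; here the data is Gaussian, so the score is analytic.
# All numerical choices below are illustrative assumptions.

BETA = 1.0          # constant noise schedule beta(t) = 1
T = 5.0             # horizon; alpha(T) = exp(-T/2) ~ 0.08, so p_T ~ N(0, 1)
M0, S0 = 2.0, 0.5   # data distribution: N(M0, S0^2)

def alpha(t):
    # Signal decay factor of the VP forward SDE.
    return np.exp(-0.5 * BETA * t)

def score(x, t):
    # Exact score grad_x log p_t(x) for the Gaussian marginal
    # p_t = N(alpha_t * M0, alpha_t^2 * S0^2 + 1 - alpha_t^2).
    a = alpha(t)
    var = a**2 * S0**2 + 1.0 - a**2
    return -(x - a * M0) / var

def sample(n, steps, rng):
    # Euler-Maruyama integration of the reverse-time SDE from t = T to 0:
    # dX = [-0.5*beta*X - beta*score(X, t)] dt + sqrt(beta) dW (backward).
    dt = T / steps
    x = rng.standard_normal(n)  # start from the N(0, 1) prior (~ p_T)
    for k in range(steps, 0, -1):
        t = k * dt
        drift = 0.5 * BETA * x + BETA * score(x, t)
        x = x + drift * dt + np.sqrt(BETA * dt) * rng.standard_normal(n)
    return x

rng = np.random.default_rng(0)
samples = sample(20000, 500, rng)
print(samples.mean(), samples.std())  # should recover roughly M0 and S0
```

With the exact score, the reverse chain approximately recovers the data distribution; in an actual diffusion model the score is learned by score matching, and the contraction idea in the talk concerns how this reverse dynamics behaves under discretization.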