To enable large-scale adoption, AI needs to behave as expected while being adaptable, secure, and trustworthy.
A recent study from the Carnegie Endowment for International Peace, the AI Global Surveillance Index, shows that an estimated 75 countries are actively using AI technologies for surveillance purposes, including smart city/safe city platforms, facial recognition systems, and smart policing.
A central question, then, is: how can we design and develop AI technologies that interact with human users and earn their trust? In this research thrust, MATRIX researchers will focus on understanding:
- Adversarial machine learning models that resist security challenges (illustrated in the sketch after this list)
- The consequences of attacks on learning systems
- Bias and fairness in AI algorithms
- Quality evaluation of learning systems
- Explainable models
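As one concrete illustration of the adversarial machine learning item above, the sketch below implements the Fast Gradient Sign Method (FGSM), a classic evasion attack in which a small, targeted perturbation of the input can flip a model's prediction. The toy model, synthetic input, and epsilon value are placeholder assumptions for illustration only, not the thrust's actual methods or systems.

```python
# Minimal FGSM sketch: perturb an input in the direction that most
# increases the loss, using the sign of the input gradient.
import torch
import torch.nn as nn

# Toy classifier standing in for any differentiable model (an assumption;
# the thrust is not tied to a specific architecture).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return x perturbed by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # A small per-feature step, often enough to change the prediction
    # while leaving the input nearly unchanged to a human observer.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 4)   # one synthetic input
y = torch.tensor([0])   # its (assumed) true label
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```

Defenses studied in the literature, such as adversarial training, fold examples like `x_adv` back into the training loop so the model learns to resist them.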
To address these challenges, computer scientists and engineers will work alongside researchers in policy, ethics, economics, and psychology.
Thrust leads

- Gabriela Ciocarlie, Ph.D.
  Vice President for Securing Automation and Secure Manufacturing Architecture, CyManII, UTSA
  Associate Professor, Department of Electrical and Computer Engineering, UTSA
  Gabriela.Ciocarlie@cymanii.org
- Panagiotis (Panos) Markopoulos, Ph.D.
  Associate Professor, Klesse Endowed Professor, Department of Electrical and Computer Engineering, UTSA
  panagiotis.markopoulos@utsa.edu
  210-458-6482