Caltech Young Investigators Lecture Series
Formal Behavior Synthesis Applied to Managing a Team of UAVs for Fighting Fires and Specification Inference Applied to Human Prediction
Abstract: In the first part of the talk, we are interested in synthesizing controllers from linear temporal logic (LTL) specifications. We describe a team of unmanned aerial vehicles (UAVs) tasked with fighting a wildfire while satisfying mission requirements expressed as LTL specifications. In the literature, reactive synthesis has been used as a formal means of constructing controllers that guarantee the satisfaction of LTL specifications. However, the computational complexity of reactive synthesis grows with the number of agents, tasks, and environment behaviors. To reduce this complexity, we present a high-level mission planner and controller that manages a team of UAVs by combining reactive synthesis with dynamic allocation of the UAVs as firefighting resources. In the second part of the talk, we consider the inverse problem: inferring LTL specifications from observed behavior. We present an approach that leverages LTL specification inference from demonstrations to improve human motion prediction in pedestrian settings, where people typically follow social norms.
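For attendees unfamiliar with LTL, a small illustrative specification (a generic example, not one taken from the talk) for a single fire cell i might read:

    G( fire_i -> F serviced_i )  AND  G( NOT no_fly_zone )

that is, it is always the case that a detected fire at cell i is eventually serviced, and a UAV never enters a no-fly zone. Here G ("always") and F ("eventually") are the standard LTL temporal operators, and fire_i, serviced_i, and no_fly_zone are hypothetical atomic propositions introduced only for illustration.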
Bio: Estefany Carrillo received the B.S. and M.S. degrees in electrical engineering from the University of Maryland, College Park, MD, USA, in 2012 and 2017, respectively. She is currently pursuing a Ph.D. in aerospace engineering at the same institution, where she is a Research Assistant in the Department of Aerospace Engineering under the supervision of Dr. Huan Xu. She is the recipient of the Amazon Lab126 Fellowship for 2020-2021. Her research focuses on the use of formal methods and hybrid systems theory in the design of verifiable controllers for complex high-level tasks and the control of multiagent systems.
Trustworthy Machine Learning: On the Preservation of Individual Privacy and Fairness
Abstract: Machine learning (ML) techniques have seen significant advances over the last decade. While their social benefits are enormous, they can also inflict great harm if not used with care. In this talk, I will focus on two critical issues in ML systems: fairness and privacy. On the fairness front, although many fairness criteria have been proposed to measure and remedy biases in ML systems, their impact is often studied only in a static, one-shot setting. I will first present my work on evaluating the long-term impact of (fair) ML decisions on population groups that are repeatedly subject to such decisions. I will illustrate how imposing common fairness criteria intended to protect disadvantaged groups may lead to pernicious long-term consequences by exacerbating inequality. On the privacy front, when ML models are trained on individuals' personal data, it is critical to preserve individual privacy while maintaining a sufficient level of model accuracy. I will present two key ideas that can be used to balance an algorithm's privacy-accuracy tradeoff, and a privacy-preserving algorithm that leverages these ideas in the context of distributed learning.
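As a rough illustration of the privacy-accuracy tradeoff mentioned above, the sketch below shows the standard Gaussian mechanism from differential privacy; it is a generic textbook example under assumed parameters, not the speaker's algorithm. The function name gaussian_mechanism and all parameter values are hypothetical.

    import numpy as np

    def gaussian_mechanism(value, sensitivity, epsilon, delta):
        # (epsilon, delta)-differentially private release: the noise scale
        # grows with sensitivity and 1/epsilon, so stronger privacy
        # (smaller epsilon) directly costs accuracy (larger noise).
        sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
        return value + np.random.normal(0.0, sigma)

    # Privately release the mean of 1,000 per-individual values in [0, 1];
    # changing one individual's value moves the mean by at most 1/1000,
    # which bounds the sensitivity of the released statistic.
    data = np.random.rand(1000)
    private_mean = gaussian_mechanism(data.mean(), sensitivity=1.0 / len(data),
                                      epsilon=0.5, delta=1e-5)

Shrinking epsilon tightens the privacy guarantee but inflates sigma, which is exactly the tradeoff the talk's ideas aim to balance.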
Bio: Xueru Zhang is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science at the University of Michigan. She received her M.S. degree from the University of Michigan in 2016 and her B.S. degree in electronic and information engineering from Beihang University (BUAA), Beijing, China, in 2015. Her research lies at the intersection of machine learning, optimization, and economics, including topics such as data privacy, algorithmic fairness, and security economics. She is a recipient of the Rackham Predoctoral Fellowship and an invited participant at the Rising Stars in EECS workshop.
This talk is part of the Caltech Young Investigators Lecture Series, sponsored by the Division of Engineering and Applied Science.