Mechanical and Civil Engineering Seminar
PhD Thesis Defense
Abstract: Learning-based controllers have recently shown impressive results in well-defined environments, promising to enable a host of new capabilities for complex robotic systems. However, such controllers cannot be widely deployed in highly uncertain environments due to significant issues concerning the reliability and safety of learning.
The first half of the talk will focus on reinforcement learning and discuss why integrating model information into the reinforcement learning framework is crucial for ensuring reliability and safety. I will show how such model information can be leveraged to constrain and guide exploration and to provide explicit safety guarantees for uncertain systems.
The second half of the talk will discuss fundamental limitations that arise when machine learning is used to derive safety guarantees. In particular, I will show that widely used uncertainty models can be highly inaccurate when predicting rare events, and I will examine the implications of this for safe learning. To overcome some of these limitations, I will propose a novel approach based on assume-guarantee contracts for ensuring safety in human environments.
Please attend this thesis defense virtually:
Zoom Link: https://caltech.zoom.us/j/8798132143