PhD Thesis Defense
Reliability is crucial to the successful deployment of deep learning systems across domains such as generative modeling, control, and visual perception. In this thesis, we explore the reliability of inference dynamics in deep architectures including ResNets, neural Ordinary Differential Equations (ODEs), and diffusion models.
We begin by examining the inference dynamics of standard networks with a discrete sequence of hidden layers, applying self-consistency and local Lipschitz bounds to enhance robustness against input perturbations. Our exploration then extends to neural ODEs, where a neural network specifies a vector field that continuously transforms the state. We employ forward invariance to achieve robustness, marking the first instance of training neural ODE policies with non-vacuous certified guarantees. The focus then shifts to diffusion models and their inference processes, particularly their ability to adhere to symbolic constraints. For this, we introduce a novel sampling algorithm inspired by stochastic control principles, which enables diffusion models to generate content that follows non-differentiable rule specifications. Our work offers a cohesive understanding of inference dynamics across these deep learning architectures and proposes new algorithms that significantly improve their reliability.
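To make the neural ODE view above concrete, the sketch below is an illustrative example only, not the implementation developed in this thesis; the names VectorField and euler_integrate, the architecture, and the step count are all placeholder choices. It parameterizes a vector field with a small network and transforms the state with a fixed-step Euler scheme; each Euler step has the same residual form as a ResNet block, which is the standard connection between the discrete and continuous-depth views discussed here.

# Illustrative sketch (not the thesis's method): a neural ODE whose network
# f_theta defines a vector field dx/dt = f_theta(x, t), integrated with a
# fixed-step Euler scheme. Each step x <- x + h * f_theta(x, t) has the same
# form as a residual block.
import torch
import torch.nn as nn


class VectorField(nn.Module):
    """Parametric vector field f_theta(x, t); the architecture here is arbitrary."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: float) -> torch.Tensor:
        # Append the scalar time t as an extra input feature for every state.
        t_col = torch.full((x.shape[0], 1), float(t))
        return self.net(torch.cat([x, t_col], dim=-1))


def euler_integrate(f: nn.Module, x0: torch.Tensor,
                    t0: float = 0.0, t1: float = 1.0, steps: int = 20) -> torch.Tensor:
    """Transform the state by repeatedly applying x <- x + h * f(x, t)."""
    h = (t1 - t0) / steps
    x, t = x0, t0
    for _ in range(steps):
        x = x + h * f(x, t)
        t += h
    return x


if __name__ == "__main__":
    f = VectorField(dim=2)
    x0 = torch.randn(8, 2)       # a batch of initial states
    xT = euler_integrate(f, x0)  # terminal states after the continuous transformation
    print(xT.shape)              # torch.Size([8, 2])

In practice, adaptive ODE solvers are typically used instead of the fixed-step scheme shown here; the fixed-step version is kept only to make the residual-block correspondence explicit.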