H.B. Keller Colloquium
Predictive models deployed in social settings are often performative: the model's predictions, through their use in consequential downstream decisions, influence the very outcomes the model aims to predict. For example, travel time estimates influence routing decisions and thus realized travel times; stock price predictions influence trading activity and hence prices. Such feedback loops arise in a variety of domains, including public policy, trading, traffic prediction, and recommendation systems.

In this talk I will highlight several phenomena that arise when iteratively optimizing a predictive model in a performative context. When ignored, performativity surfaces as undesirable distribution shift and is routinely dealt with via retraining. First, I will discuss why solutions obtained via retraining can be suboptimal in terms of the learner's risk, and I will describe methods that outperform retraining by properly accounting for performative feedback. Then, I will focus on a setting where performativity arises from the aggregate behavior of strategic agents. I will argue that the rate at which the learner updates their model is crucial in shaping the agents' behavior, and I will discuss the different equilibria that can be reached by varying the learner's update frequency.
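The gap between retraining and performative optimization can be seen in a minimal sketch. The example below is illustrative and not from the talk: it assumes a prediction θ for a Bernoulli outcome whose mean shifts linearly with the deployed prediction, with a feedback strength ε chosen for illustration. Repeated retraining (refitting the squared-loss minimizer on the distribution the current model induces) converges to a fixed point, yet when the feedback is strong enough, a different model achieves strictly lower performative risk.

```python
# Stylized performative-prediction example (illustrative assumptions):
# we deploy a prediction theta in [0, 1] for a Bernoulli outcome whose
# true mean shifts with the deployed model: p(theta) = 0.5 + EPS*(theta - 0.5).

EPS = 0.7  # strength of performative feedback (illustrative choice)

def outcome_mean(theta):
    """Mean of the outcome distribution induced by deploying theta."""
    return 0.5 + EPS * (theta - 0.5)

def performative_risk(theta):
    """Expected squared loss E_{y ~ D(theta)}[(y - theta)^2]:
    variance of the induced Bernoulli plus squared bias."""
    p = outcome_mean(theta)
    return p * (1 - p) + (p - theta) ** 2

def retrain(theta, rounds=100):
    """Repeated retraining: each round, refit the squared-loss minimizer
    (the mean) on the distribution induced by the current model."""
    for _ in range(rounds):
        theta = outcome_mean(theta)
    return theta

theta_stable = retrain(theta=0.9)  # converges to the fixed point 0.5
# For EPS > 1/2, the boundary model theta = 1 has strictly lower
# performative risk than the point retraining converges to:
risk_stable, risk_boundary = performative_risk(theta_stable), performative_risk(1.0)
```

In this toy model the retraining dynamics contract toward θ = 0.5 regardless of the starting point, but deploying θ = 1 yields risk 0.15 versus 0.25 at the stable point: the retrained model is "stable" (optimal on the distribution it itself induces) without being optimal overall, which is the distinction the talk's first part addresses.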