Chen Institute Virtual Seminar CANCELED
This event has been canceled. We hope to reschedule this seminar in the New Year.
Abstract: Deep learning has yielded impressive performance over the last few years. However, it is no match for human perception and reasoning. Recurrent feedback in the human brain has been shown to be critical for robust perception, and it can correct potential errors using an internal generative model of the world. Inspired by this, we augment any existing neural network with feedback (NN-F) in a Bayes-consistent manner. We demonstrate inherent robustness in NN-F that is far superior to that of standard neural networks.
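The following is a minimal sketch of the general idea of generative feedback, not the speakers' NN-F method: a decoder reconstructs the input from the hidden code, and the code is refined for a few steps to reduce reconstruction error before the final prediction. All module names, dimensions, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeedbackClassifier(nn.Module):
    """Feed-forward classifier augmented with an iterative generative-feedback loop."""
    def __init__(self, in_dim=784, hid_dim=256, n_classes=10, steps=3, lr=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)   # generative feedback path
        self.head = nn.Linear(hid_dim, n_classes)
        self.steps, self.lr = steps, lr

    def forward(self, x):
        h = self.encoder(x)                          # standard feed-forward pass
        for _ in range(self.steps):                  # feedback refinement (inference-time only here)
            h = h.detach().requires_grad_(True)
            recon_err = ((self.decoder(h) - x) ** 2).mean()
            (grad,) = torch.autograd.grad(recon_err, h)
            h = h - self.lr * grad                   # nudge the code toward a better reconstruction
        return self.head(h)

model = FeedbackClassifier()
logits = model(torch.randn(4, 784))                  # toy batch of 4 flattened inputs
print(logits.shape)                                  # torch.Size([4, 10])
```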
Compositionality is another important hallmark of human intelligence. Humans are able to compose concepts to reason about entirely new scenarios. We have created a new dataset for few-shot learning, inspired by the Bongard challenge. We show that all existing meta-learning methods fall severely short of human performance. We argue that neuro-symbolic reasoning is critical for tackling such few-shot learning challenges and showcase some success stories.
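For context, the sketch below shows how one Bongard-style few-shot episode is typically evaluated: a handful of positive and negative support images define an unseen concept, and held-out queries must be classified. The prototype-based classifier here is a standard meta-learning baseline, not the neuro-symbolic approach discussed in the talk; the embedder and image sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64))   # toy image embedder

def classify_episode(support_pos, support_neg, queries):
    """Label each query as matching (1) or violating (0) the episode's concept."""
    proto_pos = embed(support_pos).mean(dim=0)                 # prototype of positive supports
    proto_neg = embed(support_neg).mean(dim=0)                 # prototype of negative supports
    q = embed(queries)
    d_pos = (q - proto_pos).pow(2).sum(dim=1)
    d_neg = (q - proto_neg).pow(2).sum(dim=1)
    return (d_pos < d_neg).long()                              # nearer prototype wins

# One toy episode: 6 positives, 6 negatives, 2 queries of 32x32 "images".
preds = classify_episode(torch.randn(6, 32, 32), torch.randn(6, 32, 32), torch.randn(2, 32, 32))
print(preds)                                                   # e.g. tensor([1, 0])
```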