Friday, September 4, 2020


Exploiting symmetry in structured data is a powerful way to improve the learning and generalization ability of AI systems and to extract more information, in applications ranging from vision and NLP to robotics. This is exemplified by convolutional neural nets, a ubiquitous architecture. Recently, there has been a great deal of progress in developing improved equivariant and invariant learning architectures, as well as improved data augmentation methods. There has also been progress on the theoretical foundations of the area, from the perspectives of statistics and optimization. The notion of adding data via augmentation also arises in problems such as adversarial robustness. This workshop will bring together leading researchers in the area to discuss the state of the art of the field. The activity is part of the Center for Foundations of Information Processing at Penn, supported by NSF TRIPODS.
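To make the symmetry notion concrete, here is a minimal sketch (assuming NumPy; not drawn from any of the talks) of the equivariance property that convolutional nets build in: circular convolution commutes with cyclic shifts of the input, so convolving a shifted signal gives the shifted output.

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1-D convolution: y[i] = sum_j k[j] * x[(i - j) mod n]."""
    n = len(x)
    return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # toy signal
k = np.array([0.5, 0.25, 0.25])           # toy filter
shift = lambda v, s: np.roll(v, s)        # cyclic shift by s positions

# Equivariance: conv(shift(x)) == shift(conv(x)) for every shift s.
lhs = circ_conv(shift(x, 2), k)
rhs = shift(circ_conv(x, k), 2)
assert np.allclose(lhs, rhs)
```

An invariant representation, by contrast, would return the *same* output for all shifts (e.g. taking the sum or max of the convolved signal); the tension between these two notions is a recurring theme in the talks below.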

Due to Covid-19, the workshop will be held online as a Zoom webinar. While this means less in-person interaction, it also means that the talks are freely accessible to everyone, without the cost of travel.

Recorded talks are available as a YouTube playlist; individual videos are also linked below. The link below opens the playlist in "play all" mode.


Schedule

9:00am- Haggai Maron, Leveraging permutation group symmetries for designing equivariant neural networks

9:30am- Taco Cohen, Equivariant Networks and Natural Graph Networks

10:00am- Danilo J. Rezende, Generative Models and Symmetries

10:30am- Kilian Q. Weinberger, Learning with Marginalized Augmentation

11:00am- Mark van der Wilk, Learning Invariances through Backprop with Bayesian Model Selection

11:20am- Break

11:40am- Alexander Robey, Model-based Robust Deep Learning

12:00pm- Alejandro Ribeiro, Algebraic Neural Networks: Symmetry and Stability

12:20pm- Pratik Chaudhari, Learning with few labeled data

12:40pm- Carlos Esteves, Spin-Weighted Spherical CNNs

12:55pm- Jane H. Lee, A group-theoretic framework for data augmentation

1:10pm- Panel

1:55pm- Lunch Break

2:30pm- Fabio Anselmi, Neurally plausible mechanisms for learning selective and invariant representations

3:00pm- Christine Allen-Blanchette, LagNetViP: A Lagrangian Neural Network for Video Prediction

3:30pm- Tess E. Smidt, Unintended features of Euclidean symmetry equivariant neural networks

4:00pm- Greg Valiant, Amplifying Datasets: A Theoretical Perspective

4:30pm- Chelsea Finn, Meta-Learning Symmetries

5:00pm- Hongyang R. Zhang, Generalization Effects of Linear Transformations in Data Augmentation


Speakers

Christine Allen-Blanchette

Princeton University

https://cablanc.github.io/

Fabio Anselmi

Baylor College of Medicine/MIT

https://www.bcm.edu/people-search/fabio-anselmi-48386

Pratik Chaudhari

University of Pennsylvania

https://pratikac.github.io/

Taco Cohen

Qualcomm AI Research

https://tacocohen.wordpress.com/

Carlos Esteves

University of Pennsylvania

https://machc.github.io/

Jane H. Lee

Twitter/Yale University

Chelsea Finn

Stanford University

https://ai.stanford.edu/~cbfinn/

Haggai Maron

NVIDIA Research

https://haggaim.github.io/

Danilo J. Rezende

Google DeepMind

https://danilorezende.com/

Alejandro Ribeiro

University of Pennsylvania

https://alelab.seas.upenn.edu/

Alexander Robey

University of Pennsylvania

https://scholar.google.com/citations?user=V5NWZc8AAAAJ&hl=en

Greg Valiant

Stanford University

https://theory.stanford.edu/~valiant/

Kilian Q. Weinberger

Cornell University

https://www.cs.cornell.edu/~kilian/

Mark van der Wilk

Imperial College London

https://markvdw.github.io/

Hongyang R. Zhang

Northeastern University

http://www.hongyangzhang.com/


Contacts

Please contact Edgar Dobriban or Kostas Daniilidis with any questions.