Have you seen something interesting or curious or mysterious while training a deep neural network?

Share these interesting and unusual deep learning phenomena here!

Workshop Abstract

Our understanding of modern neural networks lags behind their practical successes. This growing gap poses a challenge to the pace of progress in machine learning because fewer pillars of knowledge are available to designers of models and algorithms. This workshop aims to close this understanding gap. We solicit contributions that view the behavior of deep nets as natural phenomena, to be investigated with methods inspired by the natural sciences, such as physics, astronomy, and biology. We call for empirical work that isolates phenomena in deep nets, describes them quantitatively, and then replicates or falsifies them.

As a starting point for this effort, we focus on the interplay between data, network architecture, and training algorithms. We seek contributions that identify precise, reproducible phenomena, as well as studies of current beliefs such as "sharp local minima do not generalize well" or "SGD navigates out of local minima". Through the workshop, we hope to catalogue quantifiable versions of such statements and demonstrate whether they occur reliably.


Time Event
8:45 - 9:00 Opening Remarks
9:00 - 9:30 Nati Srebro: Optimization’s Untold Gift to Learning: Implicit Regularization
9:30 - 9:45 Shengchao Liu: Bad Global Minima Exist and SGD Can Reach Them
9:45 - 10:00 Hattie Zhou: Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
10:00 - 10:30 Chiyuan Zhang: Are all layers created equal? - Studies on how neural networks represent functions
10:30 - 11:00 Break and Posters
11:00 - 11:15 Niru Maheswaranathan: Line attractor dynamics in recurrent networks for sentiment classification
11:15 - 11:30 Karttikeya Mangalam: Do deep neural networks learn shallow learnable examples first?
11:30 - 12:00 Crowdsourcing Deep Learning Phenomena
12:00 - 1:30 Lunch and Posters
1:30 - 2:00 Aude Oliva: Reverse engineering neuroscience and cognitive science principles
2:00 - 2:15 Beidi Chen: Angular Visual Hardness
2:15 - 2:30 Lior Wolf: On the Convex Behavior of Deep Neural Networks in Relation to the Layers’ Width
2:30 - 3:00 Andrew Saxe: Intriguing phenomena in training and generalization dynamics of deep networks
3:00 - 4:00 Break and Posters
4:00 - 4:30 Olga Russakovsky: Strategies for mitigating social bias in deep learning systems
4:30 - 5:30 Panel Discussion: Kevin Murphy, Nati Srebro, Aude Oliva, Andrew Saxe, Olga Russakovsky. Moderator: Ali Rahimi

Accepted Papers

Call for Papers

We solicit experimental work that isolates, quantifies, and tests deep learning phenomena.

We specifically do not require the phenomenon to be novel. We value instead a formalization of the phenomenon, followed by reliable evidence to support it or a thorough refutation of it. We especially welcome work that carefully characterizes the limits of the phenomenon observed and shows that it occurs only under specific conditions and settings. We do not require an explanation of why a phenomenon occurs, only demonstrations that it does so reliably (or refutations). We hope that the catalogue of phenomena we accumulate will serve as a starting point for a better understanding of deep learning.

Submission Instructions

Submissions are closed.

Papers were submitted through OpenReview.

The main part of a submission should be at most four pages long. These first four pages should contain a definition of the phenomenon of interest and the main experimental results. There is no space limit for references, acknowledgements, and details included in appendices.

Papers should be formatted with at least a 10 pt font, standard line spacing, and 1 inch margins. We do not require a specific formatting style beyond these constraints.

We welcome both unpublished results and papers published in 2018 or later. Submissions must be anonymized.

Important Dates


Please email icml2019phenomena@gmail.com with any questions.