Building an understanding of complex systems from the pieces up.

Drawing on physics, machine learning, and information theory, with an emphasis on visualization.

University of Pennsylvania | Postdoc | Philadelphia, PA 2021-

Google Research | AI Resident | New York, NY 2019-2021
University of Chicago | PhD (Physics) | Chicago, IL 2013-2019
UC Berkeley | BA (Physics, Computer Science) | Berkeley, CA 2009-2013
Lawrence Berkeley National Lab | Research assistant | Berkeley, CA 2012-2013



[Interactive demo controls — Number of layers; Nonlinearity: tanh, relu, hard_sigmoid, elu, softsign]

You’re creating your very own random, never-before-seen neural network! The network maps a 2D input to a 2D output and the display shows how an input square gets warped. Try changing the number of layers (64 units each) of the fully connected network or the nonlinearity applied at each layer. Check out Visualizing random networks for more details.
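A minimal sketch of what the demo above might be doing, written in NumPy. The layer width (64 units) and the 2D-in, 2D-out shape come from the description; the Gaussian initialization scale, the number of layers, and the function names here are assumptions for illustration:

```python
import numpy as np

def random_network(n_layers=4, width=64, nonlinearity=np.tanh, seed=0):
    """Build a random fully connected network mapping 2D -> 2D.

    Each hidden layer has `width` units; weights are drawn i.i.d.
    Gaussian (scale 1/sqrt(fan_in) is an assumed choice). Returns a
    function f that maps an (N, 2) array of points to an (N, 2) array.
    """
    rng = np.random.default_rng(seed)
    dims = [2] + [width] * n_layers + [2]
    layers = [(rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_in, d_out)),
               rng.normal(0.0, 0.1, size=d_out))
              for d_in, d_out in zip(dims[:-1], dims[1:])]

    def f(x):
        for i, (W, b) in enumerate(layers):
            x = x @ W + b
            if i < len(layers) - 1:  # no nonlinearity on the output layer
                x = nonlinearity(x)
        return x

    return f

# Warp a 20x20 grid of points covering the input square [-1, 1]^2.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 20),
                            np.linspace(-1, 1, 20)), axis=-1).reshape(-1, 2)
warped = random_network()(grid)
print(warped.shape)  # (400, 2)
```

Plotting `warped` (e.g. with matplotlib's `scatter`) shows how the network folds and stretches the square; swapping `np.tanh` for another elementwise nonlinearity, or changing `n_layers`, changes the character of the warp, mirroring the controls in the demo.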