Visualizing random neural networks | March 2021

To try to build some intuition for the nature of neural networks as “universal function approximators”, we’ll explore how input gets warped to output. Specifically, we’ll restrict ourselves to input and output that can be easily plotted — namely, points in one and two dimensions.

The simplest case — $\mathbb{R}^1 \rightarrow \mathbb{R}^1$ — is also incredibly familiar: $y = f(x)$, where we parameterize a space of functions with a fully connected network of parameters $\{\phi\}$. Disclaimer: everything visual from here on out will be done with some beginner JavaScript and TensorFlow.js, so apologies if anything breaks or runs slowly.
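As a rough sketch of what's happening under the hood (assuming TensorFlow.js is available as `tf`; the layer width, depth, and weight scale here are illustrative rather than the exact values used on this page):

```js
// A small, randomly initialized fully connected network f: R -> R,
// evaluated on a grid of x values so it can be plotted as y = f(x).
const init = tf.initializers.randomNormal({mean: 0, stddev: 0.5});

const model = tf.sequential();
model.add(tf.layers.dense({
  units: 16, activation: 'tanh', inputShape: [1],   // 'relu' is the other option below
  kernelInitializer: init, biasInitializer: init,
}));
model.add(tf.layers.dense({
  units: 16, activation: 'tanh',
  kernelInitializer: init, biasInitializer: init,
}));
// Linear readout back down to a single output y.
model.add(tf.layers.dense({units: 1, kernelInitializer: init}));

// Sample y = f(x) on an evenly spaced grid over [-1, 1].
const xs = tf.linspace(-1, 1, 200).reshape([200, 1]);
const ys = model.predict(xs);
// xs.arraySync() / ys.arraySync() give plain arrays for whatever plotting library is handy.
```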


[Interactive visualization: the graph of $y = f(x)$ for one such randomly initialized network, with controls for the number of layers and the activation (tanh or relu).]

More to come!

Try the visualization generator below, which creates a fully connected network with 64 units per layer, mapping a 2D input to a 2D output. The visualization shows the warping of the square $[-1, 1]^2$ after being fed through one such network, randomly initialized with normally distributed weights (mean 0, stddev 0.5).
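The generator amounts to something like the sketch below (again assuming TensorFlow.js as `tf`; the grid resolution and the fixed choice of three hidden layers are illustrative, since the actual controls let you vary the depth and activation):

```js
// A random fully connected net mapping R^2 -> R^2, used to warp a grid of points
// covering [-1, 1]^2. 64 units per hidden layer, weights drawn from N(0, 0.5).
const init = tf.initializers.randomNormal({mean: 0, stddev: 0.5});

const model = tf.sequential();
model.add(tf.layers.dense({
  units: 64, activation: 'tanh', inputShape: [2],
  kernelInitializer: init, biasInitializer: init,
}));
for (let i = 0; i < 2; i++) {                       // two more hidden layers
  model.add(tf.layers.dense({
    units: 64, activation: 'tanh',
    kernelInitializer: init, biasInitializer: init,
  }));
}
// Project back down to 2D so each output is again a plottable point.
model.add(tf.layers.dense({units: 2, kernelInitializer: init}));

// A regular n x n grid of input points over [-1, 1]^2 ...
const n = 50;
const grid = [];
for (let i = 0; i < n; i++) {
  for (let j = 0; j < n; j++) {
    grid.push([-1 + (2 * i) / (n - 1), -1 + (2 * j) / (n - 1)]);
  }
}
// ... pushed through the network; connecting neighboring outputs shows the warping.
const warped = model.predict(tf.tensor2d(grid));    // shape [n * n, 2]
```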

Check out how the different non-linearities impart different characteristics, and see how adding more layers heaps on more twisting and distorting.


[Interactive visualization: the warped grid, with controls for the number of layers and the activation (tanh, relu, hard_sigmoid, elu, or softsign).]