Neural Net Playground
Train a small neural network on a 2D classification problem and watch the decision boundary form, frame by frame. Each hidden neuron also shows what shape it has individually learned — that’s usually the most interesting part.
What you’re seeing
The plot on the left shows a 2D dataset and a colored background — that background is the network’s prediction at every point in space. Orange regions are where it predicts class −1, blue regions are class +1. Soft colors mean the network is uncertain there; saturated colors mean it’s confident.
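The background is produced the obvious way: sample a grid over input space, run the network at each point, and color by the sign and magnitude of the output. Here is a minimal stdlib-only sketch of that loop; the tiny tanh network and its weights are hypothetical, not the playground’s actual model.

```python
import math

def predict(x, y):
    """Hypothetical 2-input, 2-hidden-unit, 1-output tanh network."""
    h1 = math.tanh(1.0 * x + 1.0 * y)       # hidden unit 1
    h2 = math.tanh(1.0 * x - 1.0 * y)       # hidden unit 2
    return math.tanh(1.5 * h1 - 1.5 * h2)   # output in (-1, 1)

# Sample a coarse grid over [-2, 2] x [-2, 2], like the background plot.
grid = []
for yi in range(5):
    row = []
    for xi in range(5):
        x, y = -2 + xi, -2 + yi
        out = predict(x, y)
        cls = 1 if out > 0 else -1    # sign -> blue vs orange
        row.append((cls, abs(out)))   # magnitude -> color saturation
    grid.append(row)
```

A real renderer would use a much finer grid and map `(cls, confidence)` pairs onto the orange–blue colormap, but the evaluation loop is the same.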
The diagram on the right is the network. Each circle is a neuron; each line is a weight. Line thickness encodes the weight’s magnitude, color encodes its sign. Inside each hidden neuron is a tiny heatmap of that neuron’s output across the input space — so you can see what shape that one neuron has learned to detect.
The point of stacking layers: a first-layer neuron usually learns a half-plane (something like “is the input above this line?”). A second-layer neuron combines those into more interesting shapes. Watching that build, layer by layer, is why depth matters.
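The half-plane claim is easy to verify directly: a first-layer tanh unit is positive on one side of a line and negative on the other, and a second-layer unit can combine two such units into a region no single line can carve out. The weights below are hypothetical, chosen only to illustrate the composition.

```python
import math

def half_plane(x, y, wx, wy, b):
    # A first-layer tanh unit: positive on one side of the line
    # wx*x + wy*y + b = 0, negative on the other side.
    return math.tanh(wx * x + wy * y + b)

def second_layer(x, y):
    # Combine two half-planes with an AND-like weighting: the output
    # is strongly positive only where BOTH first-layer units fire.
    a = half_plane(x, y, 1.0, 0.0, 0.0)  # "right of the y-axis?"
    b = half_plane(x, y, 0.0, 1.0, 0.0)  # "above the x-axis?"
    return math.tanh(2.0 * a + 2.0 * b - 2.0)
```

Evaluated over the plane, `second_layer` is positive only in the upper-right quadrant: a corner shape, built from two straight lines. Deeper layers repeat the same trick on these shapes.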