Phase 4 simulation

CNN Explorer.

See how convolutional networks perceive images. Filter Lab lets you build a 3×3 kernel by hand and watch what it does to a small image, one pixel at a time. Layer Explorer runs a real pretrained MobileNet on a preset image and shows what each of its layers is responding to.


What you’re seeing

A convolution slides a small grid of numbers (a kernel) across an image, multiplying and summing as it goes. Different kernels detect different things — edges, blurs, sharpenings — without anyone training them. The math is small. Filter Lab lets you write the kernel yourself and hover any output pixel to see the nine multiplications that produced it.
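The slide-multiply-sum loop above is small enough to write out directly. Here is a minimal sketch (not the simulation's actual code) of a valid 3×3 convolution over a tiny grayscale image, using a hand-written vertical-edge kernel as an example:

```python
# A minimal sketch of convolution: a 3x3 kernel slid across a 5x5 image,
# with the nine multiply-adds that produce each output pixel.

def convolve3x3(image, kernel):
    """Valid convolution (no padding): output shrinks by 2 per side."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            total = 0
            for ky in range(3):          # the nine multiplications
                for kx in range(3):
                    total += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(total)
        out.append(row)
    return out

# A hand-written vertical-edge kernel: dark-to-bright transitions light up.
edge_kernel = [[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]]

# A 5x5 image: dark left half, bright right half.
image = [[0, 0, 10, 10, 10]] * 5

result = convolve3x3(image, edge_kernel)
print(result[0])  # [40, 40, 0] -- strong response at the edge, zero in the flat region
```

Nobody trained `edge_kernel`; it detects vertical edges purely because of how its weights are arranged.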

A convolutional network stacks dozens of these, with the kernels learned from data instead of hand-written. Early layers tend to learn edge-like patterns. Middle layers combine those into textures and parts. Late layers respond to entire shapes — a face, a cat, a car. Layer Explorer shows you those activations at four representative depths in a real pretrained model.
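One reason depth buys whole-object responses is the receptive field: each stacked convolution lets an activation see a larger patch of the original input. A rough sketch of that arithmetic (a simplification — real MobileNet mixes kernel sizes, depthwise convolutions, and nonlinearities, but the growth rule is the same):

```python
# How the receptive field of one activation grows as convolutions stack.

def receptive_field(layers):
    """Input pixels visible to one activation after the given conv stack.

    `layers` is a list of (kernel, stride) pairs. Each layer adds
    (kernel - 1) * jump pixels to the field, where `jump` is the
    input-pixel spacing between neighboring activations at that depth.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# One 3x3 conv sees a 3x3 patch; stacking a second sees 5x5.
print(receptive_field([(3, 1)]))          # 3
print(receptive_field([(3, 1), (3, 1)])) # 5

# With stride-2 downsampling at each layer, the field grows much faster.
print(receptive_field([(3, 2)] * 4))     # 31
```

This is why early layers can only express edge-like patterns while late layers, seeing hundreds of pixels at once, can respond to entire shapes.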

The classification at the bottom is whatever MobileNet itself thinks your image is — including the surprises. Synthetic images often confuse it amusingly.