What this is
An interactive sandbox to study how repeated convolution + nonlinearity changes signals in image space.
Choose an input pattern, pick or edit a kernel, and watch the evolution over time.
Core update rule
x_{t+1} = act(conv(x_t, W))
You can switch activation, boundary handling, and per-step normalization to isolate different effects.
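A minimal sketch of one update step in NumPy/SciPy (a toy analogue, not the app's source; the function name, defaults, and the unit-std normalization choice are assumptions):

```python
import numpy as np
from scipy.ndimage import convolve

def step(x, W, act=np.tanh, boundary="wrap", normalize=True):
    """One iteration x <- act(conv(x, W)).
    boundary: "constant" (zero padding), "wrap", or "reflect"."""
    y = act(convolve(x, W, mode=boundary, cval=0.0))
    if normalize and y.std() > 0:
        y = y / y.std()  # unit std per step: separates the signal's shape from its scale
    return y
```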
What to look for
- Diffusion vs transport: does energy spread symmetrically or drift? (See the kernel sketch after this list.)
- Stability: does the signal explode, vanish, or converge?
- Bias from nonlinearity: how ReLU changes propagation compared with the identity activation.
- Boundary effects: zero vs wrap vs reflect.
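As a concrete example, two hypothetical 3×3 kernels that land in the two regimes from the first bullet (illustrative values, not the app's presets):

```python
import numpy as np

# Symmetric averaging kernel: energy spreads evenly in all directions (diffusion).
diffuse = np.array([[0.05, 0.10, 0.05],
                    [0.10, 0.40, 0.10],
                    [0.05, 0.10, 0.05]])

# Off-center weight: energy drifts sideways a little each step (transport).
drift = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.2, 0.8],
                  [0.0, 0.0, 0.0]])
```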
Notes
This is a single-channel toy model intended for intuition. It does not reproduce full CNN training dynamics,
but it helps visualize how kernel structure and nonlinearities shape signal flow over repeated layers/steps.
Controls
Iterate: x ← act(conv(x, W))
A higher step count is slower to simulate but yields smoother GIFs. 128 is a good default.
Tip: hit Simulate after editing the kernel, then Play.
GIF export captures all frames (0..T). Larger grids and more steps increase export time and file size.
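For intuition, a Python analogue of the full loop the export runs (everything here, from the impulse input to imageio, is an assumption; the app itself renders in-browser):

```python
import numpy as np
import imageio.v2 as imageio
from scipy.ndimage import convolve

x = np.zeros((64, 64)); x[32, 32] = 1.0           # impulse input pattern
W = np.full((3, 3), 1.0 / 9.0)                    # box-blur kernel: pure diffusion

frames = []
for t in range(129):                              # capture all frames 0..T, T = 128
    gray = (255 * np.clip(0.5 + 2.0 * x, 0, 1)).astype(np.uint8)
    frames.append(np.stack([gray] * 3, axis=-1))  # grayscale frame as RGB
    x = np.tanh(convolve(x, W, mode="wrap"))      # x <- act(conv(x, W))
imageio.mimsave("evolution.gif", frames)          # larger grids/T: slower, bigger file
```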
Viewer
Rendered as a signed heatmap (blue ↔ red). Live readouts show t (current step), μ (mean), and σ (standard deviation) of the field.
Higher gain = more contrast (good for diffusion tails).
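One plausible version of that mapping, with gain as a plain contrast multiplier (a sketch; the app's actual colormap may differ):

```python
import numpy as np

def to_heatmap(x, gain=1.0):
    """Signed field -> RGB in [0, 1]: blue for negative, white for zero, red for positive."""
    v = np.clip(gain * x, -1.0, 1.0)     # gain scales values before clipping -> contrast
    r = np.where(v > 0, 1.0, 1.0 + v)    # red fades out for negative values
    b = np.where(v < 0, 1.0, 1.0 - v)    # blue fades out for positive values
    g = 1.0 - np.abs(v)                  # green fades with magnitude on both sides
    return np.stack([r, g, b], axis=-1)
```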
Kernel editor
Edit symmetric/antisymmetric components or paste a matrix. Then click Simulate.
β² = 0 yields a purely symmetric kernel, β² = 1 a purely antisymmetric one; values in between blend the two.
Mixed kernel (derived)
Computed as √(β²)·antisym + √(1 − β²)·sym.
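A direct transcription of that formula (names assumed):

```python
import numpy as np

def mix_kernel(sym, antisym, beta_sq):
    # Square-root weights: if both components have unit Frobenius norm, the mix does too,
    # since symmetric and antisymmetric parts are orthogonal under that inner product.
    return np.sqrt(beta_sq) * antisym + np.sqrt(1.0 - beta_sq) * sym
```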
Paste full kernel
Paste a full 3×3 or 5×5 kernel and we'll decompose it into symmetric + antisymmetric components (sketched in code at the end of this section).
Note: convolution here is single-channel. This is meant for intuition about propagation/transport/diffusion.
Symmetric component
Symmetric entries are mirrored around the center.
Antisymmetric component
Antisymmetric entries flip sign across the center.
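In code, both components fall out of one point reflection through the center (a sketch with assumed names):

```python
import numpy as np

def decompose(K):
    """Split K = sym + antisym for any odd-sized square kernel (3x3, 5x5, ...)."""
    K_rot = K[::-1, ::-1]            # rotate 180 degrees about the center
    sym = 0.5 * (K + K_rot)          # mirrored around the center: sym[i,j] == sym[-i,-j]
    antisym = 0.5 * (K - K_rot)      # flips sign across the center: a[i,j] == -a[-i,-j]
    return sym, antisym
```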