Published 08/29/2025, 02:30 AM
I made a website that runs an interactive simulation of thousands of double pendulums. It uses WebGPU to simulate each pendulum in parallel. On my MacBook Pro M2 in Chrome it runs at 70 FPS and 5000 simulation ticks per second on battery power, without any meaningful simulation energy loss. WebGPU is still a relatively new API that works best in Chrome, but more browsers are starting to add support for it.
It lets you sample an individual pendulum, zoom in and out, switch visualization modes, pick a ColorCET color map, and adjust the simulation speed and accuracy.
You can explore the source code (and contribute!) on the GitHub page. This project was inspired by 2swap's YouTube video on the same subject.
Each pixel in this image represents a double pendulum's sensitivity to initial conditions. The stationary pendulum at the center has angles (0, 0). A small perturbation in its initial angles has no effect on its long-term behavior, so the pixel is dark. The lighter, noisier pixels correspond to initial pendulum states that are sensitive to a small perturbation.
Each pendulum's state can be described completely by four numbers: (theta1, theta2, omega1, omega2), representing the two angles of the pendulum arms and their angular velocities. At the beginning of the simulation, each pendulum's theta1 and theta2 are initialized to its pixel's location in angle space [-pi, pi], while omega1 and omega2 are set to 0. So at time = 0, each pixel is a stationary pendulum with unique initial angles.
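As a rough sketch of that mapping on the CPU (the grid layout and names here are my own; in the real project the states live in a GPU buffer):

```ts
// Hypothetical sketch: map an N×N pixel grid to initial pendulum states.
// The grid size and layout are assumptions, not the project's actual code.
type PendulumState = {
  theta1: number; // angle of the first arm (0 = straight down)
  theta2: number; // angle of the second arm
  omega1: number; // angular velocity of the first arm
  omega2: number; // angular velocity of the second arm
};

function initialState(px: number, py: number, gridSize: number): PendulumState {
  // Map pixel coordinates to angles in [-pi, pi] on both axes.
  const theta1 = ((px + 0.5) / gridSize) * 2 * Math.PI - Math.PI;
  const theta2 = ((py + 0.5) / gridSize) * 2 * Math.PI - Math.PI;
  return { theta1, theta2, omega1: 0, omega2: 0 }; // stationary at time = 0
}
```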
There are several islands of stable pendulums. They trace periodic paths and do not deviate from that path even after a long time. It's a ton of fun to zoom in to see their shapes evolve and the paths they trace.
Visualization modes
There are four visualization modes. The default mode, "Angle 2", shows the angle of each pendulum's second arm: 0 is straight down and pi is straight up.
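Coloring a pixel by angle presumably comes down to normalizing the angle into [0, 1] and indexing into the selected color map; a minimal sketch (the function name and lookup scheme are my guesses, not the project's code):

```ts
// Hypothetical sketch: turn an angle into an index into a color map
// with `mapSize` entries.
function angleToColorIndex(theta: number, mapSize: number): number {
  // Wrap into [-pi, pi] first, since angles grow unbounded during simulation.
  const wrapped = Math.atan2(Math.sin(theta), Math.cos(theta));
  const t = (wrapped + Math.PI) / (2 * Math.PI); // [-pi, pi] -> [0, 1]
  return Math.min(mapSize - 1, Math.floor(t * mapSize));
}
```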
The "Angle 1" mode looks very similar but the colors are less saturated because the first arm is more likely pointing downward.
The "Sensitivity" mode runs a second simulation for slightly perturbed pendulum per pixel, and the color represents the distance between the 2 pendulums. A dark pixel is a pendulum that is very close to its initially perturbed sibling.
"Energy loss" mode shows how simulation error accumulates for each pendulum. With a low time step, the error is negligible for most pendulums.
Technical details
The simulation uses a WebGPU compute shader and the fourth-order Runge-Kutta method, which is surprisingly fast and accurate.
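For reference, here is roughly what one RK4 step looks like for the double pendulum on the CPU; in the project, this logic runs per pendulum inside the compute shader. The unit masses and lengths, the value of g, and the function names are my assumptions, not the project's actual code:

```ts
// Hypothetical CPU sketch of one RK4 step for the double pendulum,
// assuming m1 = m2 = 1 and l1 = l2 = 1.
const G = 9.81; // assumed gravitational acceleration

// Equations of motion: returns d/dt of (theta1, theta2, omega1, omega2).
function derivatives(s: PendulumState): PendulumState {
  const { theta1, theta2, omega1, omega2 } = s;
  const delta = theta1 - theta2;
  // 2*m1 + m2 - m2*cos(2*delta), with unit masses.
  const denom = 3 - Math.cos(2 * delta);

  const alpha1 =
    (-3 * G * Math.sin(theta1) -
      G * Math.sin(theta1 - 2 * theta2) -
      2 * Math.sin(delta) *
        (omega2 * omega2 + omega1 * omega1 * Math.cos(delta))) /
    denom;

  const alpha2 =
    (2 * Math.sin(delta) *
      (2 * omega1 * omega1 +
        2 * G * Math.cos(theta1) +
        omega2 * omega2 * Math.cos(delta))) /
    denom;

  return { theta1: omega1, theta2: omega2, omega1: alpha1, omega2: alpha2 };
}

// Helper: componentwise s + k * h.
function add(s: PendulumState, k: PendulumState, h: number): PendulumState {
  return {
    theta1: s.theta1 + k.theta1 * h,
    theta2: s.theta2 + k.theta2 * h,
    omega1: s.omega1 + k.omega1 * h,
    omega2: s.omega2 + k.omega2 * h,
  };
}

// One classic fourth-order Runge-Kutta step with time step dt.
function rk4Step(s: PendulumState, dt: number): PendulumState {
  const k1 = derivatives(s);
  const k2 = derivatives(add(s, k1, dt / 2));
  const k3 = derivatives(add(s, k2, dt / 2));
  const k4 = derivatives(add(s, k3, dt));
  return {
    theta1: s.theta1 + (dt / 6) * (k1.theta1 + 2 * k2.theta1 + 2 * k3.theta1 + k4.theta1),
    theta2: s.theta2 + (dt / 6) * (k1.theta2 + 2 * k2.theta2 + 2 * k3.theta2 + k4.theta2),
    omega1: s.omega1 + (dt / 6) * (k1.omega1 + 2 * k2.omega1 + 2 * k3.omega1 + k4.omega1),
    omega2: s.omega2 + (dt / 6) * (k1.omega2 + 2 * k2.omega2 + 2 * k3.omega2 + k4.omega2),
  };
}
```

A smaller dt trades ticks per second for accuracy, which is presumably what the accuracy setting controls.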
I wrote this using TypeGPU, Svelte, Tailwind, daisyUI, and Catppuccin.
Kilo Code
As a software engineer, I felt I had to give AI a real shot and share my thoughts on it.
I had some help from the Kilo Code extension for VS Code, using the free Copilot GPT-4.1 model. It was my first time trying an AI coding agent, and I found it much more helpful than copying and pasting code to and from an AI chatbot. It was great for first attempts at new features: it was usually able to write code that compiled and ran, and it did a good job of taking my feedback and fixing runtime issues. I was also impressed by its ability to pull in context from the relevant files without exceeding the context limit.
I'm not a huge fan of the giant proprietary models because of how carelessly their training data is scraped, and I don't really want to support them with my money or data. Kilo Code saved me from some of the coding tedium, but it's not magic and it can't do everything.
Sometimes it got stuck in a loop and never recovered.
Sometimes it wrote too much and it couldn't backtrack.
Sometimes its code didn't work and neither of us could fix it.
I tried using Kilo Code with local models via Ollama and LM Studio, but the experience was buggy and disappointing. Today's consumer computer hardware is incredibly fast and capable, and I would love a future with AI that is local, accessible, open source, open weights, and trained on open data. I really hope local AI gets better, because I would much prefer cheap, widely distributed AI over paying for data center compute.
Thanks for reading! I hope you have fun with it.