# Introduction

Dynamical systems describe how a state evolves with time. For instance, given the position and velocity of a ball, it is possible to calculate how they will evolve with time. Some dynamical systems, like the Lorenz system, tend to evolve towards certain states regardless of their initial conditions; the set of states they converge to is called an attractor.

In this post I will show how I rendered and animated two 3D dynamical systems.

# The Clifford Attractor

The system that inspired me to generate a 3D attractor was the Clifford attractor, which is defined as:

\begin{align*}

x_{n+1} &= \sin(a y_n) + c \cos(a x_n)\\

y_{n+1} &= \sin(b x_n) + d \cos(b y_n)\\

\end{align*}

If the constants are set to \(a=1.5, \; b=-1.8, \; c=1.6, \; d=0.9\), \(x_0\) and \(y_0\) are set to random values, and the equations are iterated while incrementing the pixel value at each \((x_n, \; y_n)\), an image like this (after some post-processing) can be generated:
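The iteration above can be sketched as follows. This is a minimal, illustrative version (the starting point, image size, and the simple log tone-mapping are my choices, not necessarily those used in the post):

```python
import math
import numpy as np

# Parameters from the post; everything else here is an illustrative choice.
a, b, c, d = 1.5, -1.8, 1.6, 0.9
n_points = 500_000
width, height = 800, 800

x, y = 0.1, 0.1  # arbitrary starting point; the attractor forgets it quickly
xs = np.empty(n_points)
ys = np.empty(n_points)
for i in range(n_points):
    x, y = (math.sin(a * y) + c * math.cos(a * x),
            math.sin(b * x) + d * math.cos(b * y))
    xs[i], ys[i] = x, y

# Increment the pixel "hit count" at each (x_n, y_n).
# The iterates are bounded: |x| <= 1 + |c| = 2.6 and |y| <= 1 + |d| = 1.9.
hist, _, _ = np.histogram2d(xs, ys, bins=(width, height),
                            range=[[-2.6, 2.6], [-1.9, 1.9]])
image = np.log1p(hist)  # crude stand-in for the post-processing step
```

The histogram plays the role of the framebuffer: each bin counts how many iterates landed on that pixel, and the log scaling keeps dense regions from saturating.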

Using the Clifford attractor as a template, I created two 3D attractor equations.

# A1 Attractor

I defined the first attractor (which I arbitrarily called “A1”) as:

\begin{align*}

x_{n+1} &= z_n \sin(a + y_n) + y_n \cos(d + z_n) \\

y_{n+1} &= x_n \sin(b + z_n) + z_n \cos(e + x_n) \\

z_{n+1} &= y_n \sin(c + x_n) + x_n \cos(f + y_n) \\

\end{align*}

Depending on its parameters, this attractor can generate images like this:

To speed up the computation, instead of computing all the points from a single initial point, I started from multiple initial points and tracked how they evolved with time. The code to generate a single image can be summarized as the following sequence:

- Generate a point cloud of \(P\) points \((x_0,\; y_0,\; z_0)\) around a unit sphere centered at \((0,\; 0,\; 0)\).
- For every generated \((x_0,\; y_0,\; z_0)\) point, compute \(x_n,\; y_n,\; z_n\) for \(n < N\) and add those points to the point cloud.
- Remove points \((x_n,\; y_n,\; z_n)\) for \(n < S\) from the point cloud.
- 3D rotate the points.
- Perspective project 3D points onto 2D.
- Store in the framebuffer the number of times each 2D point “hits” a pixel.
- Colormap the framebuffer.
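The steps above can be sketched in vectorized form. All concrete values here (parameters, camera distance, resolution, and the grayscale log colormap) are illustrative assumptions, not the ones used in the post:

```python
import numpy as np

rng = np.random.default_rng(0)
P, N, S = 10_000, 60, 10          # initial points, iterations, skipped iterates
a, b, c, d, e, f = 1.1, -1.9, 0.8, 1.4, -0.7, 0.6  # made-up parameters
width, height = 640, 360

# 1. Point cloud on the unit sphere centered at the origin.
pts = rng.normal(size=(P, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# 2-3. Iterate the A1 map on all points at once, keeping only iterates n >= S.
x, y, z = pts.T
cloud = []
for n in range(N):
    x, y, z = (z * np.sin(a + y) + y * np.cos(d + z),
               x * np.sin(b + z) + z * np.cos(e + x),
               y * np.sin(c + x) + x * np.cos(f + y))
    if n >= S:
        cloud.append(np.stack([x, y, z], axis=1))
cloud = np.concatenate(cloud)

# 4. Rotate the points (here: around the y axis).
theta = 0.5
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
cloud = cloud @ R.T

# 5. Perspective-project onto 2D; drop points behind the camera.
camera_dist, focal = 6.0, 2.0
zc = cloud[:, 2] + camera_dist
front = zc > 1e-3
u = focal * cloud[front, 0] / zc[front]
v = focal * cloud[front, 1] / zc[front]

# 6. Framebuffer: count how many times each projected point hits a pixel.
fb, _, _ = np.histogram2d(u, v, bins=(width, height),
                          range=[[-1.2, 1.2], [-1.2, 1.2]])

# 7. Colormap: a simple log scale to 8-bit grayscale stands in here.
img = (255 * np.log1p(fb) / max(np.log1p(fb.max()), 1.0)).astype(np.uint8)
```

Iterating all \(P\) points in lockstep turns the per-point recurrence into a handful of array operations per iteration, which is what makes the multi-initial-point approach faster than tracing one long orbit.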

To generate a video sequence, I modified the \(a,\; b,\; c,\; d,\; e,\; f\) parameters and the rotation transformation at every frame of the video.
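One simple way to vary the parameters per frame is to modulate them smoothly so the sequence loops seamlessly. This is only an illustrative scheme (base values, amplitude, and phases are made up; the post does not say how the parameters were animated):

```python
import numpy as np

n_frames = 1800
base = np.array([1.1, -1.9, 0.8, 1.4, -0.7, 0.6])  # a, b, c, d, e, f (made up)
amp = 0.3

for frame in range(n_frames):
    t = 2 * np.pi * frame / n_frames
    # Each parameter follows its own phase-shifted sine, so frame 0 and
    # frame n_frames would coincide and the video loops.
    params = base + amp * np.sin(t + np.arange(6) * np.pi / 3)
    theta = t  # one full rotation over the whole sequence
    # render_frame(params, theta)  # hypothetical per-frame render call
```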

# A2 Attractor

I defined the second attractor (A2) as:

\begin{align*}

x_{n+1} &= z_n \sin(a + y_n x_n) + y_n \cos(d + z_n x_n) \\

y_{n+1} &= x_n \sin(b + z_n y_n) + z_n \cos(e + x_n y_n) \\

z_{n+1} &= y_n \sin(c + x_n z_n) + x_n \cos(f + y_n z_n) \\

\end{align*}
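Compared with A1, only the sin/cos arguments change: products of two coordinates instead of a single coordinate. A vectorized step of the map might look like this (the function name and batch layout are my own):

```python
import numpy as np

def a2_step(x, y, z, a, b, c, d, e, f):
    """One iteration of the A2 map, applied elementwise to arrays of points."""
    return (z * np.sin(a + y * x) + y * np.cos(d + z * x),
            x * np.sin(b + z * y) + z * np.cos(e + x * y),
            y * np.sin(c + x * z) + x * np.cos(f + y * z))

# One step applied to a small batch of points:
pts = np.full((3, 4), 0.5)                    # 4 points, all at (0.5, 0.5, 0.5)
x1, y1, z1 = a2_step(*pts, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6)
```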

Using the same algorithm as for A1, but computing 128 times more points, the attractor equations produced the following image:

Again, the video sequence was generated in the same way as the A1 video.

# Conclusion

Rendering synthetic images like these is relatively simple, but it takes some trial and error to find equations and parameters that generate interesting images.

Here I rendered all images at 4K (3840 × 2160). The A1 and A2 videos, as well as the A1 image (a single frame of the A1 video sequence), used 500 M points per image, and at that density most frames look “grainy”. To make the grain disappear (while keeping the same resolution), more points need to be computed; that is what I did for the A2 image, where I used 64 G points instead.

The tradeoff of computing more points is computational time. Rendering these images can get computationally expensive: both video sequences, each made of 1800 frames, took ~24 h to render using 16 threads on an i9-9900K @ 3.6 GHz. The A2 image, on the other hand, took ~1 h.

The code is available on GitHub.