Chaos for Prediction

When most people see a volatile time series, they assume it is random. But chaos and randomness are not the same: chaos has structure – it is sensitive to initial conditions, yet governed by deterministic laws. What if we could map time series data onto a chaotic system and let its internal dynamics do the forecasting?

In this post, I explore how to embed financial or physical signals into a multi-jointed pendulum model – a chaotic system – and simulate its motion forward as a predictive tool.

1. Chaos ≠ Randomness

Randomness implies no pattern, no predictability, no underlying structure.

Chaos, on the other hand:

  • Is deterministic
  • Has hidden structure
  • Is extremely sensitive to initial conditions

This distinction matters. If your time series shows signs of low-dimensional chaos, it means you don’t need to memorize every fluctuation – you need to understand the underlying dynamical attractor.

2. Mapping Time Series to a Chaotic Pendulum

Here’s the method, step by step:

2.1 Use Smoothed Log Returns as Target

We start with the log returns of the time series, smoothed (e.g. with a low-pass filter) to remove noise while retaining the underlying motion. This becomes our target signal – the movement we want the system to mimic. Conveniently, the log return oscillates around a center line, much like a pendulum swings around its resting position.
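
To make this step concrete, here is a minimal Python sketch of how the target signal could be built, using log returns and a zero-phase Butterworth low-pass filter from SciPy. The function name smoothed_log_returns, the cutoff, the filter order, and the synthetic price series are all illustrative assumptions – any smoother that keeps the underlying motion would do.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def smoothed_log_returns(prices, cutoff=0.05, order=3):
        """Low-pass-filtered log returns; cutoff is a fraction of Nyquist (illustrative default)."""
        log_returns = np.diff(np.log(prices))
        b, a = butter(order, cutoff, btype="low")
        # filtfilt runs the filter forward and backward, so the smoothed curve has no phase lag
        return filtfilt(b, a, log_returns)

    # Synthetic placeholder for a real price series
    prices = 100 * np.cumprod(1 + 0.01 * np.random.randn(500))
    r_hat = smoothed_log_returns(prices)   # the target signal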

2.2 Define a Multi-Joint Pendulum Model

We define a chaotic system – here a 3-joint pendulum (see the sketch after the list below) – whose full state vector is:

P = [θ1, θ2, θ3, ω1, ω2, ω3, L1, L2, L3]

Where:

  • θ = angles
  • ω = angular velocities
  • L = pendulum lengths (or masses – you can choose your degrees of freedom)
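
To keep the equations short, the sketch below implements the 2-joint (double) pendulum; the 3-joint case follows the same pattern with a longer derivation. Angles are measured from the vertical, the state holds angles and angular velocities, and the arm lengths and masses are passed in as arguments so they can be fitted later. The function name deriv and the parameter layout are my own choices, not part of any established API.

    import numpy as np

    G = 9.81  # gravitational acceleration

    def deriv(t, state, L1, L2, m1, m2):
        """Equations of motion for a 2-joint pendulum; state = [theta1, theta2, omega1, omega2]."""
        th1, th2, w1, w2 = state
        delta = th1 - th2
        den = 2 * m1 + m2 - m2 * np.cos(2 * th1 - 2 * th2)

        dw1 = (-G * (2 * m1 + m2) * np.sin(th1)
               - m2 * G * np.sin(th1 - 2 * th2)
               - 2 * np.sin(delta) * m2 * (w2**2 * L2 + w1**2 * L1 * np.cos(delta))
               ) / (L1 * den)
        dw2 = (2 * np.sin(delta)
               * (w1**2 * L1 * (m1 + m2)
                  + G * (m1 + m2) * np.cos(th1)
                  + w2**2 * L2 * m2 * np.cos(delta))
               ) / (L2 * den)

        return [w1, w2, dw1, dw2]
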
2.3 Fit Pendulum Parameters to the Target Signal

We focus on the horizontal motion of the final pendulum segment – the “tip” – as our system’s observable output. For a 3-joint pendulum, the horizontal position of the tip is:

x_tip(t) = L1 * sin(θ1(t)) + L2 * sin(θ2(t)) + L3 * sin(θ3(t))

Where:

  • L_i are the lengths of the pendulum arms (possibly learnable parameters)
  • θ_i(t) are the angles of each arm at time t (determined by simulation)

This x_tip(t) becomes our model’s prediction of the smoothed log return signal.
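
Continuing the 2-joint sketch from section 2.2, the observable is obtained by integrating the equations of motion with SciPy’s solve_ivp and projecting each arm onto the horizontal axis. simulate_x_tip and its parameter layout are hypothetical names of mine; the tight tolerances reflect how quickly chaotic dynamics amplify numerical error.

    import numpy as np
    from scipy.integrate import solve_ivp

    def simulate_x_tip(params, t_eval):
        """Integrate the pendulum (deriv from the sketch above) and return x_tip(t)."""
        th1_0, th2_0, w1_0, w2_0, L1, L2, m1, m2 = params
        sol = solve_ivp(
            deriv,
            t_span=(t_eval[0], t_eval[-1]),
            y0=[th1_0, th2_0, w1_0, w2_0],
            t_eval=t_eval,
            args=(L1, L2, m1, m2),
            rtol=1e-9, atol=1e-9,   # chaotic systems are sensitive to integration error
        )
        th1, th2 = sol.y[0], sol.y[1]
        # 2-joint version of the formula above: x_tip = L1*sin(theta1) + L2*sin(theta2)
        return L1 * np.sin(th1) + L2 * np.sin(th2)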

We fit the pendulum by adjusting the initial conditions and parameters (e.g., arm lengths, masses, friction) to minimize the difference between x_tip(t) and the smoothed log return curve, using a loss function like Mean Squared Error:

Loss = ∑_t (x_tip(t) – r̂(t))²

Where r̂(t) is the target signal (the smoothed log return).
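
A minimal fitting sketch, reusing simulate_x_tip from above and the smoothed target r_hat from section 2.1: the parameter vector (initial angles, initial velocities, lengths, masses) is adjusted to minimize the mean squared error. The initial guess and the mapping of sample indices onto “pendulum time” are arbitrary assumptions; a gradient-free method is used because gradients through a chaotic simulation are numerically unreliable.

    import numpy as np
    from scipy.optimize import minimize

    def loss(params, t_eval, r_hat):
        """Mean squared error between the simulated tip trajectory and the target signal."""
        x_tip = simulate_x_tip(params, t_eval)
        return np.mean((x_tip - r_hat) ** 2)

    # params = [theta1_0, theta2_0, omega1_0, omega2_0, L1, L2, m1, m2]
    p0 = np.array([0.1, -0.1, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
    t_eval = np.linspace(0.0, 10.0, len(r_hat))   # arbitrary time axis for the simulation

    result = minimize(loss, p0, args=(t_eval, r_hat), method="Nelder-Mead")
    fitted_params = result.x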

This transforms the chaotic system into a time series emulator – tuned not to memorize, but to mimic the dynamics.

Conclusion

This approach turns the tables on how we think about prediction. Instead of training a model to approximate a signal directly, we embed the signal into the state of a chaotic physical system – one that evolves naturally with rich, nonlinear behavior.