The [Hurst exponent](https://en.wikipedia.org/wiki/Hurst_exponent) is used as a measure of long-term memory of time series. It relates to the auto-correlations of the time series, and the rate at which these decrease as the lag between pairs of values increases.
It is a statistic which can be used to test whether a time series is mean reverting or trending.
The idea behind the Hurst exponent is that if the time series $x_t$ follows a Brownian motion (aka Weiner process), then the variance between two time points increases linearly with the time difference, that is to say $\text{Var}\left(x_{t+\tau} - x_t\right) = \sigma^2 \tau$.
Trending time series have a Hurst exponent H > 0.5, while mean reverting time series have H < 0.5. Knowing in which regime a time series sits can be useful for trading strategies.
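
More generally, the Hurst exponent H is the exponent in the power-law scaling of the variance of the increments over a lag $\tau$:

$$
\text{Var}\left(x_{t+\tau} - x_t\right) \propto \tau^{2H}
$$

so that $H = 0.5$ recovers the linear growth of Brownian motion, $H > 0.5$ corresponds to super-linear (persistent, trending) growth, and $H < 0.5$ to sub-linear (anti-persistent, mean reverting) growth.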
These are some references to understand the Hurst exponent and its applications:
* [Hurst Exponent for Algorithmic Trading](https://robotwealth.com/demystifying-the-hurst-exponent-part-1/)
* [Basics of Statistical Mean Reversion Testing](https://www.quantstart.com/articles/Basics-of-Statistical-Mean-Reversion-Testing/)
## Study with the Weiner Process
We want to construct a mechanism to estimate the Hurst exponent via OHLC data because it is widely available from data providers and easily constructed as an online signal during trading.
In order to evaluate results against known solutions, we consider the Weiner process as a generator of time series.
The Weiner process is a continuous-time stochastic process named in honor of Norbert Wiener. It is often also called Brownian motion due to its historical connection with the physical model of Brownian motion of particles in water, named after the botanist Robert Brown.
We use the **WeinerProcess** from the stochastic process library and sample one path over a time horizon of 1 (day) with a time step every second.
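
A minimal sketch of this sampling step could look as follows; the import path, the `sample` signature (mirroring the one used for the Vasicek process later on) and the **sigma** value are assumptions rather than values taken from this page.

```python
# Illustrative sketch only: the import path and sigma are assumed.
from quantflow.sp.weiner import WeinerProcess  # assumed module location

p = WeinerProcess(sigma=2.0)  # sigma assumed; the plot further down suggests 2.0
# one path over a time horizon of 1 (day), with one time step per second
paths = p.sample(n=1, time_horizon=1, time_steps=24 * 60 * 60)
```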

The value should be close to the **sigma** of the WeinerProcess defined above.

```{code-cell} ipython3
float(paths.paths_std(scaled=True)[0])
```
The evaluation of the Hurst exponent is done by calculating the variance for several time windows and fitting a line to the log-log plot of the variance versus the time window.
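
To make the idea concrete, here is a small self-contained sketch of that log-log regression in plain numpy; it is an illustration of the technique, not the quantflow implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
# a discrete random walk: its Hurst exponent should be close to 0.5
x = np.cumsum(rng.normal(size=100_000))

lags = np.array([2, 4, 8, 16, 32, 64, 128, 256])
# variance of the increments for each time window (lag)
variances = np.array([np.var(x[lag:] - x[:-lag]) for lag in lags])

# Var ~ lag^(2H), therefore log(Var) = 2H * log(lag) + const
slope, _intercept = np.polyfit(np.log(lags), np.log(variances), 1)
hurst = slope / 2
print(hurst)
```

The same calculation is available directly on the sampled paths: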
```{code-cell} ipython3
paths.hurst_exponent()
```
As expected, the Hurst exponent should be close to 0.5, since we have calculated the exponent from the paths of a Weiner process.
+++
### Range-based Variance Estimators
We now turn our attention to range-based variance estimators. These estimators depend on OHLC timeseries, which are widely available from data providers such as [FMP](https://site.financialmodelingprep.com/).
To analyze range-based variance estimators, we use the **quantflow.ta.OHLC** tool, which allows us to down-sample a timeseries to an OHLC series and estimate the variance with three different estimators:
* **Parkinson** (1980)
* **Garman-Klass** (1980)
* **Rogers-Satchell** (1991)
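
For reference, the standard forms of these estimators for a single bar with open $O$, high $H$, low $L$ and close $C$ (here $H$ and $L$ denote the bar's high and low, not the Hurst exponent) are:

$$
\begin{aligned}
\sigma^2_{P} &= \frac{1}{4\ln 2}\left(\ln\frac{H}{L}\right)^2 \\
\sigma^2_{GK} &= \frac{1}{2}\left(\ln\frac{H}{L}\right)^2 - \left(2\ln 2 - 1\right)\left(\ln\frac{C}{O}\right)^2 \\
\sigma^2_{RS} &= \ln\frac{H}{C}\,\ln\frac{H}{O} + \ln\frac{L}{C}\,\ln\frac{L}{O}
\end{aligned}
$$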
For this we build an OHLC estimator as a template and use it to create OHLC estimators with different periods.

The variances produced by these estimators differ from the realized variance because they are based on the range of the prices, not on the actual prices. The realized variance is a more direct measure of the volatility of the process, while the range-based estimators are more robust to market microstructure noise.
The Parkinson estimator is always higher than both the Garman-Klass and Rogers-Satchell estimators; the reason is that it uses only the high and low prices, which are always further apart than the open and close prices. The GK and RS estimators are similar to each other and more accurate than the Parkinson estimator, especially for longer periods.
```{code-cell} ipython3
pd.options.plotting.backend = "plotly"
fig = vdf.plot(markers=True, title="Weiner Standard Deviation from Range-based estimators - correct value is 2.0")
fig.show()
```
To estimate the Hurst exponent with the range-based estimators, we calculate the variance of the log of the range for different time windows and fit a line to the log-log plot of the variance vs the time window.
The Hurst exponent should be close to 0.5, since we have calculated the exponent from the paths of a Weiner process. But the Hurst exponent is not exactly 0.5 because the range-based estimators are not the same as the realized variance. Interestingly, the Parkinson estimator gives a Hurst exponent closer to 0.5 than the Garman-Klass and Rogers-Satchell estimators.
## Mean Reverting Time Series
We now turn our attention to mean reverting time series, where the Hurst exponent is less than 0.5.
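
As a reminder of what the kappa parameter used below controls, the Vasicek model is an Ornstein-Uhlenbeck process; in its standard form (quantflow's exact parameterization may differ in the details) it follows

$$
dx_t = \kappa\left(\theta - x_t\right)dt + \sigma\,dW_t
$$

where $\kappa$ is the speed of mean reversion: the larger $\kappa$, the stronger the pull back towards the long-run level $\theta$, and the lower the Hurst exponent we expect to measure.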
```{code-cell} ipython3
from quantflow.sp.ou import Vasicek
import pandas as pd
pd.options.plotting.backend = "plotly"
# sample mean-reverting paths for several mean-reversion speeds kappa
# (start_of_day below is assumed to come from quantflow's date utilities)
paths = {f"kappa={k}": Vasicek(kappa=k).sample(n=1, time_horizon=1, time_steps=24*60*6) for k in (1.0, 10.0, 50.0, 100.0, 500.0)}
pdf = pd.DataFrame({k: p.path(0) for k, p in paths.items()}, index=paths["kappa=1.0"].dates(start=start_of_day()))
pdf.plot()
```
We can now estimate the Hurst exponent from the realized variance. As we can see, the Hurst exponent decreases as we increase the mean reversion parameter.
```{code-cell} ipython3
pd.DataFrame({k: [p.hurst_exponent()] for k, p in paths.items()})
```
And we can also estimate the Hurst exponent from the range-based estimators. As we can see, the Hurst exponent decreases as we increase the mean reversion parameter, along the same lines as the realized variance.