- Recap
- Model Update
- Model Testing
- Model Results
- Conclusions
- Code

In the previous post I gave a basic "proof" of concept, where we designed a trading strategy using Sklearn's implementation of Gaussian mixture models. The strategy attempts to predict an asset's return distribution such that returns that fall outside the predicted distribution are considered *outliers* and likely to mean revert. It showed some promise but had many areas in need of improvement.

In this version I've refactored a lot of the code into a more object-oriented structure. The code now uses three classes:

- ModelRunner() class - Executes the model and returns our prediction dataframe and some key parameters.
- ResultEval() class - Takes the prediction dataframe and key parameters and outputs our strategy returns and summary information.
- ModelPlots() class - Takes our data and outputs key plots to help visualize the strategy's performance.

I did this for several reasons:

- Reduce the likelihood of input errors by creating objects that share parameters.
- Increase the ease of model testing.
- Increase interpretability.

In this version we are going to expand the analysis to include other actively traded ETFs, and test both the reproducibility of the results and the generalization ability of the model.

Here are the ETFs we will examine:

symbols = ['SPY', 'DIA', 'QQQ', 'GLD', 'TLT', 'EEM', 'ACWI']

Assuming the correct imports, the refactored code lets us run the model in the following fashion. We'll focus on the **TOO_LOW** events, although I encourage readers to experiment with both.

```
# Project Directory
DIR = 'YOUR/PROJECT/DIRECTORY/'

# get fed data
f1 = 'TEDRATE'  # TED spread
f2 = 'T10Y2Y'   # constant maturity 10yr - 2yr
f3 = 'T10Y3M'   # constant maturity 10yr - 3m
factors = [f1, f2, f3]
ft_cols = factors + ['lret']

start = pd.to_datetime('2002-01-01')
end = pd.to_datetime('2017-01-01')

symbols = ['SPY', 'DIA', 'QQQ', 'GLD', 'TLT', 'EEM', 'ACWI']

for mkt in symbols:
    data = get_mkt_data(mkt, start, end, factors)

    # Model Params
    # ------------
    a, b = (.2, .7)  # found via coarse parameter search
    alpha = 0.99
    max_iter = 100
    k = 2            # n_components
    init = 'random'  # or 'kmeans'
    nSamples = 2_000
    year = 2009      # cutoff
    lookback = 1     # years
    step_fwd = 5     # days

    MR = ModelRunner(data, ft_cols, k, init, max_iter)
    dct = MR.prediction_cycle(year, alpha, a, b, nSamples, lookback=lookback)

    res = ResultEval(dct, step_fwd=step_fwd)
    event_dict = res._get_event_states()
    event = list(event_dict.keys())[1]  # TOO_LOW
    post_events = res.get_post_events(event_dict[event])
    end_vals = res.get_end_vals(post_events)
    smry = res.create_summary(end_vals)

    p()
    p('*' * 25)
    p(mkt, event.upper())
    p(smry.T)

    mp = ModelPlots(mkt, post_events, event, DIR, year)
    mp.plot_pred_results(dct['pred'], dct['year'], dct['a'], dct['b'])
    mp.plot_equity_timeline()
```

In this post I'm going to skip to the results and conclusions, and provide the refactored code at the end.

First let's look at the model results using SPY.

The first thing I noticed was that the confidence intervals were less responsive to increases in return volatility. The difference shows up as a reduction in accuracy: in Part 1 the accuracy was ~71%, whereas in the updated model it has dipped to ~68%. Does that hurt our strategy?

Judging by the equity curve, our strategy is not noticeably impacted by the reduced model accuracy!

The plotted equity curve is the cumulative sum of each event's returns, assuming every event was a "trade". Note that this includes overlapping events.
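To make that aggregation concrete, here is a minimal sketch using two made-up post-event return series (hypothetical data, not the real model output): each event's returns are concatenated and cumulatively summed, so dates covered by overlapping "trades" are counted once per trade.

```python
import pandas as pd

# Two hypothetical post-event return series; the second overlaps the
# first in time. The equity curve concatenates every event's returns
# and takes a running sum, so each overlapping "trade" contributes.
idx = pd.date_range("2015-01-05", periods=5, freq="B")
events = {
    0: pd.Series([0.0, 0.010, -0.005, 0.002, 0.004], index=idx),
    1: pd.Series([0.0, 0.003, 0.001, -0.002, 0.006],
                 index=idx + pd.Timedelta(days=2)),
}

equity = pd.concat(list(events.values())).cumsum()
print(round(equity.iloc[-1], 4))  # prints 0.019 = 0.011 + 0.008
```

This is the same construction the `plot_equity_timeline()` method uses below, just stripped of the plotting.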

Let's look at the model results for the other ETFs.

The model output is interesting. Notice that model accuracy ranges from ~57% (TLT) to ~83% (EEM), yet both of those equity curves end positively. GLD is distinctly volatile and ends poorly, even though the model was ~75% accurate. DIA, QQQ, SPY, and ACWI all have stable, sharply positive equity curves.

This supports my initial findings that model accuracy seems loosely, if at all, related to the strategy's equity curve. These results do indicate that the strategy is worth further evaluation but I'm hesitant to declare success.

I need to test the strategy over a longer period of time, making sure to include 2008/9. I also need to drill down into how the strategy's results relate to the correlation of asset returns. For example, DIA, QQQ, and SPY are highly correlated, so we would expect the strategy to produce similar results for those ETFs, but what about negatively correlated and uncorrelated assets? TLT is generally negatively correlated with SPY, while GLD is likely uncorrelated. Is the strategy's performance on those two ETFs representative of other negatively correlated/uncorrelated ETFs?
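As a first step toward that correlation drill-down, a pairwise correlation matrix of the ETFs' log returns would group the symbols into the buckets described above. A minimal sketch, using synthetic stand-in series rather than the real downloaded `lret` columns:

```python
import numpy as np
import pandas as pd

# Sketch: group candidate ETFs by pairwise return correlation before
# comparing strategy results across them. Synthetic returns stand in
# for the real per-symbol 'lret' series here.
rng = np.random.default_rng(0)
base = rng.normal(0, 0.01, 500)                      # stand-in for SPY log returns
rets = pd.DataFrame({
    "SPY": base,
    "DIA": base + rng.normal(0, 0.003, 500),         # highly correlated proxy
    "TLT": -0.4 * base + rng.normal(0, 0.008, 500),  # negatively correlated proxy
    "GLD": rng.normal(0, 0.01, 500),                 # roughly uncorrelated proxy
})
corr = rets.corr()
print(corr.round(2))
```

Bucketing the real results by these correlation groups (high, negative, near-zero versus SPY) would show whether strategy performance clusters with correlation.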

```
%load_ext watermark
%watermark

import pandas as pd
import pandas_datareader.data as web
import numpy as np
import sklearn.mixture as mix
import scipy.stats as scs

import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import missingno as msno

from tqdm import tqdm

import warnings
warnings.filterwarnings("ignore")

sns.set(font_scale=1.25)
style_kwds = {'xtick.major.size': 3, 'ytick.major.size': 3,
              'font.family': u'courier prime code', 'legend.frameon': True}
sns.set_style('white', style_kwds)

p = print
p()
%watermark -p pandas,pandas_datareader,numpy,scipy,sklearn,matplotlib,seaborn
# **********************************************************************
def get_mkt_data(mkt, start, end, factors):
    """Function to get benchmark data from
    Yahoo and factor data from FRED

    Params:
        mkt : str(), symbol
        start : pd.DateTime()
        end : pd.DateTime()
        factors : list() of str()
    Returns:
        data : pd.DataFrame()
    """
    MKT = (web.DataReader([mkt], 'yahoo', start, end)['Adj Close']
           .assign(lret=lambda x: np.log(x[mkt] / x[mkt].shift(1)))
           .dropna())
    data = (web.DataReader(factors, 'fred', start, end)
            .join(MKT, how='inner')
            .dropna())
    return data
# **********************************************************************
class ModelRunner():
    def __init__(self, data, ft_cols, k, init, max_iter):
        """Class to run the mixture model

        Params:
            data : pd.DataFrame()
            ft_cols : list() of feature columns str()
            k : int(), n_components
            init : str() {random, kmeans}
            max_iter : int(), max iterations
        """
        self.data = data
        self.ft_cols = ft_cols
        self.k = k
        self.max_iter = max_iter
        self.init = init
        np.random.seed(123457)  # make results reproducible

    def _run_model(self, df, bgm=None, **kwargs):
        """Function to fit the mixture model

        Params:
            df : pd.DataFrame(), in-sample data to fit on
            bgm : bool, use BayesianGaussianMixture if True
        Returns:
            model : sklearn model object
            hidden_states : array-like, hidden states
        """
        X = df[self.ft_cols].values
        if bgm:
            model = mix.BayesianGaussianMixture(n_components=self.k,
                                                max_iter=self.max_iter,
                                                init_params=self.init,
                                                **kwargs,
                                                ).fit(X)
        else:
            model = mix.GaussianMixture(n_components=self.k,
                                        max_iter=self.max_iter,
                                        init_params=self.init,
                                        **kwargs,
                                        ).fit(X)
        hidden_states = model.predict(X)
        return model, hidden_states

    def _get_state_est(self, model, hidden_states):
        """Function to return estimated state mean and state variance

        Params:
            model : sklearn model object
            hidden_states : {array-like}
        Returns:
            mr_i : mean return of last estimated state
            mvar_i : model variance of last estimated state
        """
        # get last state
        last_state = hidden_states[-1]
        # last value is mean return for ith state
        mr_i = model.means_[last_state][-1]
        mvar_i = np.diag(model.covariances_[last_state])[-1]
        return mr_i, mvar_i

    def _get_ci(self, mr_i, mvar_i, alpha, a, b, nSamples):
        """Function to sample confidence intervals
        from the JohnsonSU distribution

        Params:
            mr_i : float()
            mvar_i : float()
            alpha : float()
            a : float()
            b : float()
            nSamples : int()
        Returns:
            ci : tuple(float(), float()), (low_ci, high_ci)
        """
        rvs_ = scs.johnsonsu.rvs(a, b, loc=mr_i, scale=mvar_i, size=nSamples)
        ci = scs.johnsonsu.interval(alpha=alpha, a=a, b=b,
                                    loc=np.mean(rvs_), scale=np.std(rvs_))
        return ci

    def prediction_cycle(self, year, alpha, a, b, nSamples, lookback=1, **kwargs):
        """Function to make walk-forward predictions from the cutoff year onwards

        Params:
            year : int(), cutoff year
            alpha : float()
            a : float()
            b : float()
            nSamples : int()
            lookback : int(), training window in years
        Returns:
            dict() :
                pred : pd.DataFrame()
                year : int()
                a, b : float(), float()
        """
        cutoff = year
        train_df = self.data.loc[str(cutoff - lookback):str(cutoff)].dropna()
        oos = self.data.loc[str(cutoff + 1):].dropna()
        # confirm that train_df end index differs from the oos start index
        assert train_df.index[-1] != oos.index[0]
        # create pred list to hold tuple rows
        preds = []
        for t in tqdm(oos.index):
            if t == oos.index[0]:
                insample = train_df
            # run model func to return model object and hidden states using params
            model, hstates = self._run_model(insample, **kwargs)
            # get hidden state mean and variance
            mr_i, mvar_i = self._get_state_est(model, hstates)
            # get confidence intervals from sampled distribution
            low_ci, high_ci = self._get_ci(mr_i, mvar_i, alpha, a, b, nSamples)
            # append tuple row to pred list
            preds.append((t, hstates[-1], mr_i, mvar_i, low_ci, high_ci))
            # increment insample dataframe
            insample = self.data.loc[:t]
        cols = ['ith_state', 'ith_ret', 'ith_var', 'low_ci', 'high_ci']
        pred = (pd.DataFrame(preds, columns=['Dates'] + cols)
                .set_index('Dates').assign(tgt=oos['lret']))
        # flag whether the target return falls inside the CIs
        pred_copy = pred.copy().reset_index()
        # identify indices where target return falls between CIs
        win = pred_copy.query("low_ci < tgt < high_ci").index
        # create list of binary variables representing in/out CI
        in_rng_list = [1 if i in win else 0 for i in pred_copy.index]
        # assign binary variable sequence to new column
        pred['in_rng'] = in_rng_list
        return {'pred': pred, 'year': year, 'a': a, 'b': b}
# **********************************************************************
class ResultEval():
    def __init__(self, data, step_fwd):
        """Class to evaluate prediction results

        Params:
            data : dict() containing results of ModelRunner()
            step_fwd : int(), number of days to evaluate post event
        """
        self.df = data['pred'].copy().reset_index()
        self.step_fwd = step_fwd

    def _get_event_states(self):
        """Function to get event indexes
        Index objects must be called 'too_high', 'too_low'

        Returns:
            dict() : values are index objects
        """
        too_high = self.df.query("tgt > high_ci").index
        too_low = self.df.query("tgt < low_ci").index
        return {'too_high': too_high, 'too_low': too_low}

    def get_post_events(self, event):
        """Function to return dictionary where key, value is integer
        index, and Pandas series consisting of returns post event

        Params:
            event : {array-like}, index of target returns that exceed CI high or low
        Returns:
            after_event : dict() w/ values = pd.Series()
        """
        after_event = {}
        for i in range(len(event)):
            tmp_ret = self.df.loc[event[i]:event[i] + self.step_fwd, ['Dates', 'tgt']]
            # series of returns with date index
            after_event[i] = tmp_ret.set_index('Dates', drop=True).squeeze()
        return after_event

    def get_end_vals(self, post_events):
        """Function to sum and agg each post event's returns"""
        end_vals = []
        for k in post_events.keys():
            tmp = post_events[k].copy()
            tmp.iloc[0] = 0  # set initial return to zero
            end_vals.append(tmp.sum())
        return end_vals

    def create_summary(self, end_vals):
        """Function to take ending values and calculate summary stats
        Will fail if the count of ending values (>0) or (<0) is less than 2
        """
        gt0 = [x for x in end_vals if x > 0]
        lt0 = [x for x in end_vals if x < 0]
        assert len(gt0) > 1
        assert len(lt0) > 1
        summary = (pd.DataFrame(index=['value'])
                   .assign(mean=f'{np.mean(end_vals):.4f}')
                   .assign(median=f'{np.median(end_vals):.4f}')
                   .assign(max_=f'{np.max(end_vals):.4f}')
                   .assign(min_=f'{np.min(end_vals):.4f}')
                   .assign(gt0_cnt=f'{len(gt0):d}')
                   .assign(lt0_cnt=f'{len(lt0):d}')
                   .assign(sum_gt0=f'{sum(gt0):.4f}')
                   .assign(sum_lt0=f'{sum(lt0):.4f}')
                   .assign(sum_ratio=f'{sum(gt0) / abs(sum(lt0)):.4f}')
                   .assign(gt_pct=f'{len(gt0) / (len(gt0) + len(lt0)):.4f}')
                   .assign(lt_pct=f'{len(lt0) / (len(gt0) + len(lt0)):.4f}')
                   )
        return summary
# **********************************************************************
class ModelPlots():
    def __init__(self, mkt, post_events, event_state, project_dir, year):
        """Class to visualize prediction results and summary

        Params:
            mkt : str(), symbol
            post_events : dict() of pd.Series()
            event_state : str(), 'too_high', 'too_low'
            project_dir : str()
            year : int(), cutoff year
        """
        self.mkt = mkt
        self.post_events = post_events
        self.event_state = event_state
        self.DIR = project_dir
        self.year = year

    def plot_equity_timeline(self):
        """Function to plot event timeline with equity curve on a second axis"""
        agg_tmp = []
        fig, ax = plt.subplots(figsize=(10, 7))
        ax1 = ax.twinx()
        ax.axhline(y=0, color='k', lw=3)
        for k in self.post_events.keys():
            tmp = self.post_events[k].copy()
            tmp.iloc[0] = 0  # set initial return to zero
            agg_tmp.append(tmp)
            color = 'dodgerblue' if tmp.sum() > 0 else 'red'
            ax.plot(tmp.index, tmp.cumsum(), color=color, alpha=0.5)
        ax.set_xlim(pd.to_datetime(str(self.year) + '-12-31'), tmp.index[-1])
        ax.set_xlabel('Dates')
        ax.set_title(f"{self.mkt} {self.event_state.upper()}", fontsize=16)
        agg_df = pd.concat(agg_tmp).cumsum()
        ax1.plot(agg_df.index, agg_df.values, color='k', lw=5)
        ax.set_ylabel('Event Returns')
        ax1.set_ylabel('Equity Curve')
        fig.savefig(self.DIR + f'{self.mkt} {self.event_state.upper()} post events timeline {pd.Timestamp.today()}.png', dpi=300)
        return

    def plot_events_timeline(self):
        """Function to plot event timeline only"""
        fig, ax = plt.subplots(figsize=(10, 7))
        ax.axhline(y=0, color='k', lw=3)
        for k in self.post_events.keys():
            tmp = self.post_events[k].copy()
            tmp.iloc[0] = 0  # set initial return to zero
            color = 'dodgerblue' if tmp.sum() > 0 else 'red'
            ax.plot(tmp.index, tmp.cumsum(), color=color, alpha=0.5)
        ax.set_xlim(pd.to_datetime(str(self.year) + '-12-31'), tmp.index[-1])
        ax.set_xlabel('Dates')
        ax.set_title(f"{self.mkt} {self.event_state.upper()}", fontsize=16, fontweight='demi')
        sns.despine(offset=2)
        fig.savefig(self.DIR + f'{self.mkt} {self.event_state.upper()} post events timeline.png', dpi=300)
        return

    def plot_events_post(self):
        """Function to plot events from day zero until n days after"""
        fig, ax = plt.subplots(figsize=(10, 7))
        ax.axhline(y=0, color='k', lw=3)
        for k in self.post_events.keys():
            tmp = self.post_events[k].copy()
            tmp.iloc[0] = 0  # set initial return to zero
            color = 'dodgerblue' if tmp.sum() > 0 else 'red'
            tmp.cumsum().reset_index(drop=True).plot(color=color, alpha=0.5, ax=ax)
        ax.set_xlabel('Days')
        ax.set_title(f"{self.mkt} {self.event_state.upper()}", fontsize=16, fontweight='demi')
        sns.despine(offset=2)
        fig.savefig(self.DIR + f'{self.mkt} {self.event_state.upper()} post events.png', dpi=300)
        return

    def plot_distplot(self, ending_values, summary):
        """Function to plot histogram of ending values"""
        colors = sns.color_palette('RdYlBu', 4)
        fig, ax = plt.subplots(figsize=(10, 7))
        sns.distplot(pd.DataFrame(ending_values), bins=15, color=colors[0],
                     kde_kws={"color": colors[3]},
                     hist_kws={"color": colors[3], "alpha": 0.35}, ax=ax)
        ax.axvline(x=float(summary['mean'][0]), label='mean', color='dodgerblue', lw=3, ls='-.')
        ax.axvline(x=float(summary['median'][0]), label='median', color='red', lw=3, ls=':')
        ax.axvline(x=0, color='black', lw=1, ls='-')
        ax.legend(loc='best')
        sns.despine(offset=2)
        ax.set_title(f"{self.mkt} {self.event_state.upper()}", fontsize=16, fontweight='demi')
        fig.savefig(self.DIR + f'{self.mkt} {self.event_state.upper()} distplot.png', dpi=300)
        return

    def plot_pred_results(self, df, year, a, b):
        """Function to plot prediction results and confidence intervals"""
        # colorblind safe palette http://colorbrewer2.org/
        colors = sns.color_palette('RdYlBu', 4)
        fig, ax = plt.subplots(figsize=(10, 7))
        ax.scatter(df.index, df.tgt,
                   c=[colors[1] if x == 1 else colors[0] for x in df['in_rng']],
                   alpha=0.85)
        df['high_ci'].plot(ax=ax, alpha=0.65, marker='.', color=colors[2])
        df['low_ci'].plot(ax=ax, alpha=0.65, marker='.', color=colors[3])
        ax.set_xlim(df.index[0], df.index[-1])
        nRight = df.query('in_rng==1').shape[0]
        accuracy = nRight / df.shape[0]
        ax.set_title('{:^10}\ncutoff year: {} | accuracy: {:2.3%} | errors: {} | a={}, b={}'
                     .format(self.mkt, year, accuracy, df.shape[0] - nRight, a, b))
        in_ = mpl.lines.Line2D(range(1), range(1), color="white", marker='o',
                               markersize=10, markerfacecolor=colors[1])
        out_ = mpl.lines.Line2D(range(1), range(1), color="white", marker='o',
                                markersize=10, markerfacecolor=colors[0])
        hi_ci = mpl.lines.Line2D(range(1), range(1), color="white", marker='.',
                                 markersize=15, markerfacecolor=colors[2])
        lo_ci = mpl.lines.Line2D(range(1), range(1), color="white", marker='.',
                                 markersize=15, markerfacecolor=colors[3])
        leg = ax.legend([in_, out_, hi_ci, lo_ci], ["in", "out", 'high_ci', 'low_ci'],
                        loc="center left", bbox_to_anchor=(1, 0.85), numpoints=1)
        sns.despine(offset=2)
        file_str = self.DIR + f'{self.mkt} prediction success {pd.Timestamp.today()}.png'
        fig.savefig(file_str, dpi=300, bbox_inches="tight")
        return
```

- Recap
- Hypothesis
- Strategy
- Conclusion
- Caveats and Areas of Exploration
- References

In Part 1 we learned about Hidden Markov Models and their application using a toy example involving a lazy pet dog. In Part 2 we learned about the expectation-maximization algorithm, K-Means, and how Mixture Models improve on K-Means weaknesses. If you still have some questions or fuzzy understanding about these topics, I would recommend reviewing the prior posts. In those posts I also provide links to resources that really helped my understanding.

Given what we know about Mixture Models and their ability to characterize general distributions, can we use them to model a return series, such that we can identify **outlier** returns that are likely to mean revert?

**This strategy attempts to predict an asset's return distribution**. Actual returns that fall outside the predicted confidence intervals are considered **outliers** and likely to revert to the mean.

We first fit a Gaussian Mixture Model to the historical daily return series. We use the model's estimate of the hidden state's mean and variance as parameters to a random sampling from the JohnsonSU distribution. We then calculate confidence intervals from the sampled distribution.

From there we evaluate model accuracy and the n-day cumulative returns after each outlier event. We compute some summary statistics and try to answer the hypothesis.
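Condensed into code, the fit → estimate → interval pipeline looks roughly like this (a sketch on synthetic returns; the `a`, `b` values here are illustrative shape parameters, not the tuned ones used later):

```python
import numpy as np
import scipy.stats as scs
import sklearn.mixture as mix

# Synthetic stand-in for the daily return series
rng = np.random.default_rng(0)
X = rng.normal(0, 0.01, size=(500, 1))

# 1) fit the mixture and find the current (last) hidden state
model = mix.GaussianMixture(n_components=2, random_state=0).fit(X)
last_state = model.predict(X)[-1]

# 2) read off that state's estimated mean return and variance
mr = model.means_[last_state][-1]
mvar = np.diag(model.covariances_[last_state])[-1]

# 3) sample a JohnsonSU parameterized by those estimates, then take a 99% interval
a, b = 0.2, 0.7  # illustrative shape parameters
rvs = scs.johnsonsu.rvs(a, b, loc=mr, scale=mvar, size=2_000, random_state=0)
low_ci, high_ci = scs.johnsonsu.interval(0.99, a, b, loc=rvs.mean(), scale=rvs.std())
print(low_ci < mr < high_ci)
```

Any actual return landing outside `(low_ci, high_ci)` is what the post calls an outlier event.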

Searching the net I found a useful bit of code from this site. Instead of assuming our asset return distribution is normal, we can use Python and Scipy.stats to find the brute-force answer. We can cycle through each continuous distribution and run a goodness-of-fit procedure called the KS-test. The KS-test is a non-parametric method which examines the distance between a *known* cumulative distribution function and the CDF of your sample data. It outputs a p-value: the probability of seeing a distance at least that large if the sample truly came from the benchmark distribution.

```
# code sample from:
# http://www.aizac.info/simple-check-of-a-sample-against-80-distributions/
cdfs = [
"norm", #Normal (Gaussian)
"alpha", #Alpha
"anglit", #Anglit
"arcsine", #Arcsine
"beta", #Beta
"betaprime", #Beta Prime
"bradford", #Bradford
"burr", #Burr
"cauchy", #Cauchy
"chi", #Chi
"chi2", #Chi-squared
"cosine", #Cosine
"dgamma", #Double Gamma
"dweibull", #Double Weibull
"erlang", #Erlang
"expon", #Exponential
"exponweib", #Exponentiated Weibull
"exponpow", #Exponential Power
"fatiguelife", #Fatigue Life (Birnbaum-Sanders)
"foldcauchy", #Folded Cauchy
"f", #F (Snecdor F)
"fisk", #Fisk
"foldnorm", #Folded Normal
"frechet_r", #Frechet Right Sided, Extreme Value Type II
"frechet_l", #Frechet Left Sided, Weibull_max
"gamma", #Gamma
"gausshyper", #Gauss Hypergeometric
"genexpon", #Generalized Exponential
"genextreme", #Generalized Extreme Value
"gengamma", #Generalized gamma
"genlogistic", #Generalized Logistic
"genpareto", #Generalized Pareto
"genhalflogistic", #Generalized Half Logistic
"gilbrat", #Gilbrat
"gompertz", #Gompertz (Truncated Gumbel)
"gumbel_l", #Left Sided Gumbel, etc.
"gumbel_r", #Right Sided Gumbel
"halfcauchy", #Half Cauchy
"halflogistic", #Half Logistic
"halfnorm", #Half Normal
"hypsecant", #Hyperbolic Secant
"invgamma", #Inverse Gamma
"invnorm", #Inverse Normal
"invweibull", #Inverse Weibull
"johnsonsb", #Johnson SB
"johnsonsu", #Johnson SU
"laplace", #Laplace
"logistic", #Logistic
"loggamma", #Log-Gamma
"loglaplace", #Log-Laplace (Log Double Exponential)
"lognorm", #Log-Normal
"lomax", #Lomax (Pareto of the second kind)
"maxwell", #Maxwell
"mielke", #Mielke's Beta-Kappa
"nakagami", #Nakagami
"ncx2", #Non-central chi-squared
# "ncf", #Non-central F
"nct", #Non-central Student's T
"norm" # Normal
"pareto", #Pareto
"powerlaw", #Power-function
"powerlognorm", #Power log normal
"powernorm", #Power normal
"rdist", #R distribution
"reciprocal", #Reciprocal
"rayleigh", #Rayleigh
"rice", #Rice
"recipinvgauss", #Reciprocal Inverse Gaussian
"semicircular", #Semicircular
"t", #Student's T
"triang", #Triangular
"truncexpon", #Truncated Exponential
"truncnorm", #Truncated Normal
"tukeylambda", #Tukey-Lambda
"uniform", #Uniform
"vonmises", #Von-Mises (Circular)
"wald", #Wald
"weibull_min", #Minimum Weibull (see Frechet)
"weibull_max", #Maximum Weibull (see Frechet)
"wrapcauchy", #Wrapped Cauchy
"ksone", #Kolmogorov-Smirnov one-sided (no stats)
"kstwobign"] #Kolmogorov-Smirnov two-sided test for Large N
sample = data['lret'].values

for cdf in cdfs:
    try:
        # fit our data set against every probability distribution
        parameters = eval("scs." + cdf + ".fit(sample)")
        # apply the Kolmogorov-Smirnov test
        D, p = scs.kstest(sample, cdf, args=parameters)
        # pretty-print the results
        D = round(D, 5)
        p = round(p, 5)
        print(cdf.ljust(16) + ("p: " + str(p)).ljust(25) + "D: " + str(D))
    except Exception:
        continue
```

After running this code you should see output similar to the below. For simplicity's sake, just remember: the higher the p-value, the more confident the KS-test is that our data came from the given distribution.
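If the KS-test is new to you, this tiny demonstration (hypothetical data, not part of the original experiment) shows the behavior to expect: the same sample gets a high p-value against the distribution it was actually drawn from, and a near-zero p-value against a clearly wrong one.

```python
import scipy.stats as scs

# Draw a sample from a standard normal, then KS-test it against two
# candidate distributions.
sample = scs.norm.rvs(size=1000, random_state=42)

_, p_norm = scs.kstest(sample, "norm")     # true distribution: high p
_, p_unif = scs.kstest(sample, "uniform")  # mismatched: p collapses to ~0
print(f"norm p={p_norm:.3f}  uniform p={p_unif:.2e}")
```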

I had never heard of the Johnson SU distribution before seeing this code. After some research, I found that the Johnson SU was developed in order to apply the established methods and theory of the normal distribution to non-normal data sets. What gives it this flexibility is its two shape parameters, gamma and delta (a and b in Scipy). For more information I recommend this Wolfram reference link and this Scipy.stats link.
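A quick way to see what the two shape parameters do is to compare moments for a few parameter settings (a sketch; the specific values below are chosen only for illustration):

```python
import scipy.stats as scs

# Moments (mean, variance, skew, excess kurtosis) for a few (a, b)
# pairs: a (gamma) adds skew, while a smaller b (delta) fattens the tails.
symmetric = scs.johnsonsu.stats(0.0, 2.0, moments="mvsk")
skewed = scs.johnsonsu.stats(2.0, 2.0, moments="mvsk")
heavy_tailed = scs.johnsonsu.stats(0.0, 0.9, moments="mvsk")

for name, m in [("a=0.0, b=2.0", symmetric),
                ("a=2.0, b=2.0", skewed),
                ("a=0.0, b=0.9", heavy_tailed)]:
    print(name, [round(float(v), 3) for v in m])
```

With `a = 0` the distribution is symmetric (zero skew); turning `a` up skews it, and shrinking `b` raises the kurtosis well above the symmetric case.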

After selecting the distribution we can code up the experiment.

```
# import packages
# code was done in Jupyter Notebook
%load_ext watermark
%watermark
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import sklearn.mixture as mix
import scipy.stats as scs
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.dates import YearLocator, MonthLocator
%matplotlib inline
import seaborn as sns
import missingno as msno
from tqdm import tqdm
import warnings
warnings.filterwarnings("ignore")
sns.set(font_scale=1.25)
style_kwds = {'xtick.major.size': 3, 'ytick.major.size': 3,
              'font.family': u'courier prime code', 'legend.frameon': True}
sns.set_style('white', style_kwds)
p=print
p()
%watermark -p pandas,pandas_datareader,numpy,scipy,sklearn,matplotlib,seaborn
```

Now let's get some data.

```
# get fed data
f1 = 'TEDRATE' # ted spread
f2 = 'T10Y2Y' # constant maturity 10yr - 2yr
f3 = 'T10Y3M' # constant maturity 10yr - 3m
start = pd.to_datetime('2002-01-01')
end = pd.to_datetime('2017-01-01')
mkt = 'SPY'
MKT = (web.DataReader([mkt], 'yahoo', start, end)['Adj Close']
       .assign(lret=lambda x: np.log(x[mkt] / x[mkt].shift(1)))
       .dropna())

data = (web.DataReader([f1, f2, f3], 'fred', start, end)
        .join(MKT, how='inner')
        .dropna())
p(data.head())
# gives us a quick visual inspection of the data
msno.matrix(data)
```

Now we create our convenience functions. The first is the **run_model()** function which takes the data, feature columns, and Sklearn mixture parameters to produce a fitted model object and the predicted hidden states. Note that you can use a Bayesian Gaussian mixture if you so choose. The difference between the two models is that the Bayesian mixture model will try to derive the correct number of mixture components up to a chosen maximum. For more information on the Bayesian mixture model I recommend consulting the Sklearn docs.
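To illustrate that difference (a sketch on synthetic two-cluster data, not from the post): give the Bayesian variant more components than the data supports, and it drives the surplus components' weights toward zero, effectively inferring the component count.

```python
import numpy as np
import sklearn.mixture as mix

# Two well-separated clusters, but six components requested: the
# Bayesian variant pushes the surplus components' weights toward zero.
rng = np.random.default_rng(0)
X = np.r_[rng.normal(-3, 0.5, (300, 1)), rng.normal(3, 0.5, (300, 1))]

bgm = mix.BayesianGaussianMixture(n_components=6, max_iter=500,
                                  random_state=0).fit(X)
print(np.sort(bgm.weights_)[::-1].round(3))  # weight concentrates in ~2 components
```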

```
def _run_model(df, ft_cols, k, max_iter, init, bgm=None, **kwargs):
    """Function to run mixture model
    Params:
        df : pd.DataFrame()
        ft_cols : list of str()
        k : int(), n_components
        max_iter : int()
        init : str() {random, kmeans}
    Returns:
        model : sklearn model object
        hidden_states : array-like, hidden states
    """
    X = df[ft_cols].values
    if bgm:
        model = mix.BayesianGaussianMixture(n_components=k,
                                            max_iter=max_iter,
                                            init_params=init,
                                            **kwargs,
                                            ).fit(X)
    else:
        model = mix.GaussianMixture(n_components=k,
                                    max_iter=max_iter,
                                    init_params=init,
                                    **kwargs,
                                    ).fit(X)
    hidden_states = model.predict(X)
    return model, hidden_states
```

The next function takes the model object and predicted hidden states and returns the estimated mean and variance of the last state.

```
def _get_state_est(model, hidden_states):
    """Function to return estimated state mean and state variance
    Params:
        model : sklearn model object
        hidden_states : {array-like}
    Returns:
        mr_i : model mean return of last estimated state
        mvar_i : model variance of last estimated state
    """
    # get last state
    last_state = hidden_states[-1]
    # last value is mean return for ith state
    mr_i = model.means_[last_state][-1]
    mvar_i = np.diag(model.covariances_[last_state])[-1]
    return mr_i, mvar_i
```

Now we take the estimated mean and variance of the last predicted state and feed them into the **_get_ci()** function. This function takes the alpha and shape parameters along with the estimated mean and variance, and randomly samples from the JohnsonSU distribution. From this sample we derive confidence intervals.

```
def _get_ci(mr_i, mvar_i, alpha, a, b, nSamples):
    """Function to sample confidence intervals from the JohnsonSU distribution
    Params:
        mr_i : float()
        mvar_i : float()
        alpha : float()
        a : float()
        b : float()
        nSamples : int()
    Returns:
        ci : tuple(float(), float()), (low_ci, high_ci)
    """
    np.random.seed(0)  # seed the global RNG so the sampling is reproducible
    rvs_ = scs.johnsonsu.rvs(a, b, loc=mr_i, scale=mvar_i, size=nSamples)
    ci = scs.johnsonsu.interval(alpha=alpha, a=a, b=b,
                                loc=np.mean(rvs_), scale=np.std(rvs_))
    return ci
```

The final function visualizes our model predictions, by highlighting target returns that fell inside and outside the confidence intervals, along with our predicted confidence intervals.

```
def plot_pred_success(df, year, a, b):
    """Function to plot prediction hits/misses and confidence intervals"""
    # colorblind safe palette http://colorbrewer2.org/
    colors = sns.color_palette('RdYlBu', 4)
    fig, ax = plt.subplots(figsize=(10, 7))
    ax.scatter(df.index, df.tgt,
               c=[colors[1] if x == 1 else colors[0] for x in df['in_rng']],
               alpha=0.85)
    df['high_ci'].plot(ax=ax, alpha=0.65, marker='.', color=colors[2])
    df['low_ci'].plot(ax=ax, alpha=0.65, marker='.', color=colors[3])
    ax.set_xlim(df.index[0], df.index[-1])
    nRight = df.query('in_rng==1').shape[0]
    accuracy = nRight / df.shape[0]
    ax.set_title('cutoff year: {} | accuracy: {:2.3%} | errors: {} | a={}, b={}'
                 .format(year, accuracy, df.shape[0] - nRight, a, b))
    in_ = mpl.lines.Line2D(range(1), range(1), color="white", marker='o',
                           markersize=10, markerfacecolor=colors[1])
    out_ = mpl.lines.Line2D(range(1), range(1), color="white", marker='o',
                            markersize=10, markerfacecolor=colors[0])
    hi_ci = mpl.lines.Line2D(range(1), range(1), color="white", marker='.',
                             markersize=15, markerfacecolor=colors[2])
    lo_ci = mpl.lines.Line2D(range(1), range(1), color="white", marker='.',
                             markersize=15, markerfacecolor=colors[3])
    leg = ax.legend([in_, out_, hi_ci, lo_ci], ["in", "out", 'high_ci', 'low_ci'],
                    loc="center left", bbox_to_anchor=(1, 0.85), numpoints=1)
    sns.despine(offset=2)
    plt.tight_layout()
    return
```

Now we can run the model in a walk-forward fashion. The code fits the model on a chosen lookback period ending at the cutoff year. From there, it iterates day by day, refitting the model and outputting the predicted confidence intervals. The code is set up to run over successive cutoff years, but I'll leave that for readers to experiment with; in this demo we break the loop after the first cutoff year.

```
%%time
# Model Params
# ------------
a, b = (.2, .7)  # found via coarse parameter search
alpha = 0.99
max_iter = 100
k = 2
init = 'random'  # or 'kmeans'
nSamples = 2_000
ft_cols = [f1, f2, f3, 'lret']
years = range(2009, 2016)
lookback = 1  # chosen for ease of computation

# Iterate Model
# ------------
for year in years:
    cutoff = year
    train_df = data.loc[str(cutoff - lookback):str(cutoff)].dropna()
    oos = data.loc[str(cutoff + 1):].dropna()
    # confirm that train_df end index differs from the oos start index
    assert train_df.index[-1] != oos.index[0]
    # create pred list to hold tuple rows
    preds = []
    for t in tqdm(oos.index):
        if t == oos.index[0]:
            insample = train_df
        # run model func to return model object and hidden states using params
        model, hstates = _run_model(insample, ft_cols, k, max_iter, init, random_state=0)
        # get hidden state mean and variance
        mr_i, mvar_i = _get_state_est(model, hstates)
        # get confidence intervals from sampled distribution
        low_ci, high_ci = _get_ci(mr_i, mvar_i, alpha, a, b, nSamples)
        # append tuple row to pred list
        preds.append((t, hstates[-1], mr_i, mvar_i, low_ci, high_ci))
        # increment insample dataframe
        insample = data.loc[:t]
    cols = ['ith_state', 'ith_ret', 'ith_var', 'low_ci', 'high_ci']
    pred = (pd.DataFrame(preds, columns=['Dates'] + cols)
            .set_index('Dates').assign(tgt=oos['lret']))
    # flag whether the target return falls inside the CIs
    pred_copy = pred.copy().reset_index()
    # identify indices where target return falls between CIs
    win = pred_copy.query("low_ci < tgt < high_ci").index
    # create list of binary variables representing in/out CI
    in_rng_list = [1 if i in win else 0 for i in pred_copy.index]
    # assign binary variable sequence to new column
    pred['in_rng'] = in_rng_list

    plot_pred_success(pred, year, a, b)
    break
```

After that's complete we need to set up our analytics functions to evaluate the return patterns post each event. Recall that an **event** is an actual return that fell outside of our predicted confidence intervals.

```
def post_event(df, event, step_fwd=None):
    """Function to return dictionary where key, value is integer
    index, and Pandas series consisting of returns post event
    Params:
        df : pd.DataFrame(), prediction df
        event : {array-like}, index of target returns that exceed CI high or low
        step_fwd : int(), how many days to include after event
    Returns:
        after_event : dict()
    """
    after_event = {}
    for i in range(len(event)):
        tmp_ret = df.loc[event[i]:event[i] + step_fwd, ['Dates', 'tgt']]
        # series of returns with date index
        after_event[i] = tmp_ret.set_index('Dates', drop=True).squeeze()
    return after_event

def plot_events_timeline(post_events, event_state):
    """Function to plot each event's cumulative return on a date axis"""
    fig, ax = plt.subplots(figsize=(10, 7))
    ax.axhline(y=0, color='k', lw=3)
    for k in post_events.keys():
        tmp = post_events[k].copy()
        tmp.iloc[0] = 0  # set initial return to zero
        color = 'dodgerblue' if tmp.sum() > 0 else 'red'
        ax.plot(tmp.index, tmp.cumsum(), color=color, alpha=0.5)
    ax.set_xlim(pd.to_datetime('2009-12-31'), tmp.index[-1])
    ax.set_xlabel('Dates')
    ax.set_title(f"{mkt} {event_state.upper()}", fontsize=16, fontweight='demi')
    sns.despine(offset=2)
    return

def plot_events_post(post_events, event_state):
    """Function to plot each event's cumulative return from day zero"""
    fig, ax = plt.subplots(figsize=(10, 7))
    ax.axhline(y=0, color='k', lw=3)
    for k in post_events.keys():
        tmp = post_events[k].copy()
        tmp.iloc[0] = 0  # set initial return to zero
        color = 'dodgerblue' if tmp.sum() > 0 else 'red'
        tmp.cumsum().reset_index(drop=True).plot(color=color, alpha=0.5, ax=ax)
    ax.set_xlabel('Days')
    ax.set_title(f"{mkt} {event_state.upper()}", fontsize=16, fontweight='demi')
    sns.despine(offset=2)
    return

def plot_distplot(ending_values, summary):
    """Function to plot a histogram of each event's ending value"""
    colors = sns.color_palette('RdYlBu', 4)
    fig, ax = plt.subplots(figsize=(10, 7))
    sns.distplot(pd.DataFrame(ending_values), bins=15, color=colors[0],
                 kde_kws={"color": colors[3]},
                 hist_kws={"color": colors[3], "alpha": 0.35}, ax=ax)
    ax.axvline(x=float(summary['mean'][0]), label='mean', color='dodgerblue', lw=3, ls='-.')
    ax.axvline(x=float(summary['median'][0]), label='median', color='red', lw=3, ls=':')
    ax.axvline(x=0, color='black', lw=1, ls='-')
    ax.legend(loc='best')
    sns.despine(offset=2)
    ax.set_title(f"{mkt} {event_state.upper()}", fontsize=16, fontweight='demi')
    return

def get_end_vals(post_events):
    """Function to sum and agg each post event's returns"""
    end_vals = []
    for k in post_events.keys():
        tmp = post_events[k].copy()
        tmp.iloc[0] = 0  # set initial return to zero
        end_vals.append(tmp.sum())
    return end_vals

def create_summary(end_vals):
    """Function to take ending values and calculate summary stats
    Will fail if the count of ending values (>0) or (<0) is less than 2
    """
    gt0 = [x for x in end_vals if x > 0]
    lt0 = [x for x in end_vals if x < 0]
    assert len(gt0) > 1
    assert len(lt0) > 1
    summary = (pd.DataFrame(index=['value'])
               .assign(mean=f'{np.mean(end_vals):.4f}')
               .assign(median=f'{np.median(end_vals):.4f}')
               .assign(max_=f'{np.max(end_vals):.4f}')
               .assign(min_=f'{np.min(end_vals):.4f}')
               .assign(gt0_cnt=f'{len(gt0):d}')
               .assign(lt0_cnt=f'{len(lt0):d}')
               .assign(sum_gt0=f'{sum(gt0):.4f}')
               .assign(sum_lt0=f'{sum(lt0):.4f}')
               .assign(sum_ratio=f'{sum(gt0) / abs(sum(lt0)):.4f}')
               .assign(gt_pct=f'{len(gt0) / (len(gt0) + len(lt0)):.4f}')
               .assign(lt_pct=f'{len(lt0) / (len(gt0) + len(lt0)):.4f}')
               )
    return summary
```

Now we can run the following code to extract the events, output the plots and view the summary.

```
df = pred.copy().reset_index()
too_high = df.query("tgt > high_ci").index
too_low = df.query("tgt < low_ci").index
step_fwd=5 # how many days to look forward
event_states = ['too_high', 'too_low']
for event in event_states:
after_event = post_event(df, eval(event), step_fwd=step_fwd)
ev = get_end_vals(after_event)
smry = create_summary(ev)
p()
p('*'*25)
p(mkt, event.upper())
p(smry.T)
plot_events_timeline(after_event, event)
plot_events_post(after_event, event)
plot_distplot(ev, smry, event)
```

To answer the original hypothesis about finding market bottoms, we can examine the returns after a *too low* event. Looking at the summary we can see that **the mean and median return are +62 and +82 bps respectively**. Looking at the **sum_ratio** we can see that that the sum of all positive return events is almost 2x the sum of all negative returns. We can also see that, given a *too low* event, after 5 days SPY had positive returns 65% of the time!

These are positive indicators that we may be able to predict market bottoms. However, I would emphasize more testing is needed.

- We don't consider market frictions such as commissions or slippage
- The daily prices may or may not represent actual traded values.
- I used a coarse search to find the JohnsonSU shape parameters, a and b. These may or may not be the best values. Just note that we can use these parameters to arbitrarily adjust the confidence intervals to be more or less conservative. I leave this for the reader to explore.
- In many cases both
*too high,*and*too low*events result in majority positive returns, this*could*be an indication of the overall bullishness of the sample period that may or may not affect model results in the future. - I chose k=2 components for computational simplicity, but there may be better values.
- I chose the lookback period for computational simplicity, but there may be better values.
- Varying the
**step_fwd**parameter may hurt or hinder the strategy. - What makes this approach particularly interesting, is that we don't
**want**anything close to 100% accuracy from our predicted confidence intervals, otherwise we won't have enough "trades".**This adds a level of artistry/complexity because the parameter values we choose should create predictable mean reversion opportunities, but the model accuracy is not a good indicator of this.**Testing the strategy with other assets shows "profitability" in some cases where the model accuracy is sub 60%.

**Please contact me if you find any errors. **

- Part 1 Recap
- Part 2 Goals
- Jupyter (IPython) Notebook
- References

In part 1 of this series we got a feel for Markov Models, Hidden Markov Models, and their applications. We went through the process of using a hidden Markov model to solve a toy problem involving a pet dog. We concluded the article by going through a high level quant finance application of Gaussian mixture models to detect historical regimes.

In this post, my goal is to impart a basic understanding of the expectation maximization algorithm which, not only forms the basis of several machine learning algorithms, including K-Means, and Gaussian mixture models, but also has lots of applications beyond finance. We will also cover the K-Means algorithm which is a form of EM, and its weaknesses. Finally we will discuss how Gaussian mixture models improve on several of K-Means weaknesses.

This post is structured as a Jupyter (IPython) Notebook. I used several different resources\references and tried to give proper credit. Please contact me if you find errors, have suggestions, or if any sources were not attributed correctly.

*Click here to view this notebook directly on NBviewer.jupyter.org*

- Who is Andrey Markov?
- What is the Markov Property?
- What is a Markov Model?
- What makes a Markov Model Hidden?
- A Hidden Markov Model for Regime Detection
- Conclusion
- References

Markov was a Russian mathematician best known for his work on stochastic processes. The focus of his early work was number theory but after 1900 he focused on probability theory, so much so that he taught courses after his official retirement in 1905 until his deathbed [2]. During his research Markov was able to extend the law of large numbers and the central limit theorem to apply to certain sequences of dependent random variables, now known as **Markov Chains** [1][2]. Markov chains are widely applicable to physics, economics, statistics, biology, etc. Two of the most well known applications were Brownian motion [3], and random walks.

"...a random process where the future is independent of the past given the present." [4]

Assume a simplified coin toss game with a fair coin. Suspend disbelief and assume that the Markov property is not yet known and we would like to predict the probability of flipping heads after 10 flips. Under the assumption of conditional dependence (the coin has memory of past states and the future state depends on the sequence of past states) we must record the specific sequence that lead up to the 11th flip and the joint probabilities of those flips. So imagine after 10 flips we have a random sequence of heads and tails. The joint probability of that sequence is 0.5^10 = 0.0009765625. Under conditional dependence, the probability of heads on the next flip is 0.0009765625 * 0.5 = 0.00048828125.

Is that the real probability of flipping heads on the 11th flip? Hell no!

We know that the event of flipping the coin does not depend on the result of the flip before it. The coin has no memory. The process of successive flips does not encode the prior results. Each flip is a unique event with equal probability of heads or tails, aka conditionally independent of past states. This is the Markov property.

A Markov chain (model) describes a stochastic process where the assumed probability of future state(s) depends only on the current process state and not on any the states that preceded it (*shocker*).

Let's get into a simple example. Assume you want to model the future probability that your dog is in one of three states given its current state. To do this we need to specify the state space, the initial probabilities, and the transition probabilities.

Imagine you have a very lazy fat dog, so we define the **state space **as sleeping, eating, or pooping. We will set the initial probabilities to 35%, 35%, and 30% respectively.

```
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
# create state space and initial state probabilities
states = ['sleeping', 'eating', 'pooping']
pi = [0.35, 0.35, 0.3]
state_space = pd.Series(pi, index=states, name='states')
print(state_space)
print(state_space.sum())
```

The next step is to define the transition probabilities. They are simply the probabilities of staying in the same state or moving to a different state given the current state.

```
# create transition matrix
# equals transition probability matrix of changing states given a state
# matrix is size (M x M) where M is number of states
q_df = pd.DataFrame(columns=states, index=states)
q_df.loc[states[0]] = [0.4, 0.2, 0.4]
q_df.loc[states[1]] = [0.45, 0.45, 0.1]
q_df.loc[states[2]] = [0.45, 0.25, .3]
print(q_df)
q = q_df.values
print('\n', q, q.shape, '\n')
print(q_df.sum(axis=1))
```

Now that we have the initial and transition probabilities setup we can create a Markov diagram using the **Networkx** package.

To do this requires a little bit of flexible thinking. Networkx creates *Graphs* that consist of *nodes *and *edges*. In our toy example the dog's possible states are the nodes and the edges are the lines that connect the nodes. The transition probabilities are the *weights. *They represent the probability of transitioning to a state given the current state.

Something to note is networkx deals primarily with dictionary objects. With that said, we need to create a dictionary object that holds our edges and their weights.

```
from pprint import pprint
# create a function that maps transition probability dataframe
# to markov edges and weights
def _get_markov_edges(Q):
edges = {}
for col in Q.columns:
for idx in Q.index:
edges[(idx,col)] = Q.loc[idx,col]
return edges
edges_wts = _get_markov_edges(q_df)
pprint(edges_wts)
```

Now we can create the graph. To visualize a Markov model we need to use *nx.MultiDiGraph().* A multidigraph is simply a directed graph which can have multiple arcs such that a single node can be both the origin and destination.

In the following code, we create the graph object, add our nodes, edges, and labels, then draw a bad networkx plot while outputting our graph to a dot file.

```
# create graph object
G = nx.MultiDiGraph()
# nodes correspond to states
G.add_nodes_from(states_)
print(f'Nodes:\n{G.nodes()}\n')
# edges represent transition probabilities
for k, v in edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
print(f'Edges:')
pprint(G.edges(data=True))
pos = nx.drawing.nx_pydot.graphviz_layout(G, prog='dot')
nx.draw_networkx(G, pos)
# create edge labels for jupyter plot but is not necessary
edge_labels = {(n1,n2):d['label'] for n1,n2,d in G.edges(data=True)}
nx.draw_networkx_edge_labels(G , pos, edge_labels=edge_labels)
nx.drawing.nx_pydot.write_dot(G, 'pet_dog_markov.dot')
```

Now a look at the dot file.

Not bad. If you follow the edges from any node, it will tell you the probability that the dog will transition to another state. For example, if the dog is sleeping, we can see there is a 40% chance the dog will keep sleeping, a 40% chance the dog will wake up and poop, and a 20% chance the dog will wake up and eat.

Consider a situation where your dog is acting strangely and you wanted to model the probability that your dog's behavior is due to sickness or simply quirky behavior when otherwise healthy.

In this situation the **true **state of the dog is *unknown*, thus **hidden** from you. One way to model this is to *assume* that the dog has **observable** behaviors that represent the true, hidden state. Let's walk through an example.

First we create our state space - healthy or sick. We assume they are equiprobable.

```
# create state space and initial state probabilities
hidden_states = ['healthy', 'sick']
pi = [0.5, 0.5]
state_space = pd.Series(pi, index=hidden_states, name='states')
print(state_space)
print('\n', state_space.sum())
```

Next we create our transition matrix for the hidden states.

```
# create hidden transition matrix
# a or alpha
# = transition probability matrix of changing states given a state
# matrix is size (M x M) where M is number of states
a_df = pd.DataFrame(columns=hidden_states, index=hidden_states)
a_df.loc[hidden_states[0]] = [0.7, 0.3]
a_df.loc[hidden_states[1]] = [0.4, 0.6]
print(a_df)
a = a_df.values
print('\n', a, a.shape, '\n')
print(a_df.sum(axis=1))
```

This is where it gets a little more interesting. Now we create the **emission or observation** probability matrix. This matrix is size M x O where M is the number of hidden states and O is the number of possible observable states.

The emission matrix tells us the probability the dog is in one of the hidden states, given the current, observable state.

Let's keep the same observable states from the previous example. The dog can be either sleeping, eating, or pooping. For now we make our best guess to fill in the probabilities.

```
# create matrix of observation (emission) probabilities
# b or beta = observation probabilities given state
# matrix is size (M x O) where M is number of states
# and O is number of different possible observations
observable_states = states
b_df = pd.DataFrame(columns=observable_states, index=hidden_states)
b_df.loc[hidden_states[0]] = [0.2, 0.6, 0.2]
b_df.loc[hidden_states[1]] = [0.4, 0.1, 0.5]
print(b_df)
b = b_df.values
print('\n', b, b.shape, '\n')
print(b_df.sum(axis=1))
```

Now we create the graph edges and the graph object.

```
# create graph edges and weights
hide_edges_wts = _get_markov_edges(a_df)
pprint(hide_edges_wts)
emit_edges_wts = _get_markov_edges(b_df)
pprint(emit_edges_wts)
```

```
# create graph object
G = nx.MultiDiGraph()
# nodes correspond to states
G.add_nodes_from(hidden_states)
print(f'Nodes:\n{G.nodes()}\n')
# edges represent hidden probabilities
for k, v in hide_edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
# edges represent emission probabilities
for k, v in emit_edges_wts.items():
tmp_origin, tmp_destination = k[0], k[1]
G.add_edge(tmp_origin, tmp_destination, weight=v, label=v)
print(f'Edges:')
pprint(G.edges(data=True))
pos = nx.drawing.nx_pydot.graphviz_layout(G, prog='neato')
nx.draw_networkx(G, pos)
# create edge labels for jupyter plot but is not necessary
emit_edge_labels = {(n1,n2):d['label'] for n1,n2,d in G.edges(data=True)}
nx.draw_networkx_edge_labels(G , pos, edge_labels=emit_edge_labels)
nx.drawing.nx_pydot.write_dot(G, 'pet_dog_hidden_markov.dot')
```

The hidden Markov graph is a little more complex but the principles are the same. For example, you would expect that if your dog is eating there is a high probability that it is healthy (60%) and a very low probability that the dog is sick (10%).

Now, what if you needed to discern the health of your dog over time given a sequence of observations?

```
# observation sequence of dog's behaviors
# observations are encoded numerically
obs_map = {'sleeping':0, 'eating':1, 'pooping':2}
obs = np.array([1,1,2,1,0,1,2,1,0,2,2,0,1,0,1])
inv_obs_map = dict((v,k) for k, v in obs_map.items())
obs_seq = [inv_obs_map[v] for v in list(obs)]
print( pd.DataFrame(np.column_stack([obs, obs_seq]),
columns=['Obs_code', 'Obs_seq']) )
```

Using the **Viterbi** algorithm we can identify the most likely sequence of hidden states given the sequence of observations.

High level, the Viterbi algorithm increments over each time step, finding the **maximum** probability of any path that gets to state **i**at time **t**, that ** also** has the correct observations for the sequence up to time

The algorithm also keeps track of the state with the highest probability at each stage. At the end of the sequence, the algorithm will iterate backwards selecting the state that "won" each time step, and thus creating the most likely path, or likely sequence of hidden states that led to the sequence of observations.

```
# define Viterbi algorithm for shortest path
# code adapted from Stephen Marsland's, Machine Learning An Algorthmic Perspective, Vol. 2
# https://github.com/alexsosn/MarslandMLAlgo/blob/master/Ch16/HMM.py
def viterbi(pi, a, b, obs):
nStates = np.shape(b)[0]
T = np.shape(obs)[0]
# init blank path
path = np.zeros(T)
# delta --> highest probability of any path that reaches state i
delta = np.zeros((nStates, T))
# phi --> argmax by time step for each state
phi = np.zeros((nStates, T))
# init delta and phi
delta[:, 0] = pi * b[:, obs[0]]
phi[:, 0] = 0
print('\nStart Walk Forward\n')
# the forward algorithm extension
for t in range(1, T):
for s in range(nStates):
delta[s, t] = np.max(delta[:, t-1] * a[:, s]) * b[s, obs[t]]
phi[s, t] = np.argmax(delta[:, t-1] * a[:, s])
print('s={s} and t={t}: phi[{s}, {t}] = {phi}'.format(s=s, t=t, phi=phi[s, t]))
# find optimal path
print('-'*50)
print('Start Backtrace\n')
path[T-1] = np.argmax(delta[:, T-1])
#p('init path\n t={} path[{}-1]={}\n'.format(T-1, T, path[T-1]))
for t in range(T-2, -1, -1):
path[t] = phi[path[t+1], [t+1]]
#p(' '*4 + 't={t}, path[{t}+1]={path}, [{t}+1]={i}'.format(t=t, path=path[t+1], i=[t+1]))
print('path[{}] = {}'.format(t, path[t]))
return path, delta, phi
path, delta, phi = viterbi(pi, a, b, obs)
print('\nsingle best state path: \n', path)
print('delta:\n', delta)
print('phi:\n', phi)
```

Let's take a look at the result.

```
state_map = {0:'healthy', 1:'sick'}
state_path = [state_map[v] for v in path]
(pd.DataFrame()
.assign(Observation=obs_seq)
.assign(Best_Path=state_path))
```

By now you're probably wondering how we can apply what we have learned about hidden Markov models to quantitative finance.

Consider that the largest hurdle we face when trying to apply predictive techniques to asset returns is nonstationary time series. In brief, this means that the expected mean and volatility of asset returns changes over time.

Most time series models assume that the data is stationary. This is a major weakness of these models.

Instead, let us frame the problem differently. We know that time series exhibit temporary periods where the expected means and variances are stable through time. These periods or *regimes* can be likened to *hidden states*.

If that's the case, then all we need are observable variables whose behavior allows us to infer the true hidden state(s). If we can better estimate an asset's most likely regime, including the associated means and variances, then our predictive models become more adaptable and will likely improve. We can also become better risk managers as the estimated regime parameters gives us a great framework for better scenario analysis.

In this example, the observable variables I use are: the underlying asset returns, the Ted Spread, the 10 year - 2 year constant maturity spread, and the 10 year - 3 month constant maturity spread.

```
import pandas as pd
import pandas_datareader.data as web
import sklearn.mixture as mix
import numpy as np
import scipy.stats as scs
import matplotlib as mpl
from matplotlib import cm
import matplotlib.pyplot as plt
from matplotlib.dates import YearLocator, MonthLocator
%matplotlib inline
import seaborn as sns
import missingno as msno
from tqdm import tqdm
p=print
```

Using pandas we can grab data from Yahoo Finance and FRED.

```
# get fed data
f1 = 'TEDRATE' # ted spread
f2 = 'T10Y2Y' # constant maturity ten yer - 2 year
f3 = 'T10Y3M' # constant maturity 10yr - 3m
start = pd.to_datetime('2002-01-01')
end = pd.datetime.today()
mkt = 'SPY'
MKT = (web.DataReader([mkt], 'yahoo', start, end)['Adj Close']
.rename(columns={mkt:mkt})
.assign(sret=lambda x: np.log(x[mkt]/x[mkt].shift(1)))
.dropna())
data = (web.DataReader([f1, f2, f3], 'fred', start, end)
.join(MKT, how='inner')
.dropna()
)
p(data.head())
# gives us a quick visual inspection of the data
msno.matrix(data)
```

Next we will use the **sklearn's GaussianMixture **to fit a model that estimates these regimes. We will explore *mixture models * in more depth in part 2 of this series. The important takeaway is that mixture models implement a closely related unsupervised form of density estimation. It makes use of the expectation-maximization algorithm to estimate the means and covariances of the hidden states (regimes). For now, it is ok to think of it as a magic button for guessing the transition and emission probabilities, and most likely path.

We have to specify the number of components for the mixture model to fit to the time series. In this example the components can be thought of as regimes. We will arbitrarily classify the regimes as High, Neutral and Low Volatility and set the number of components to three.

```
# code adapted from http://hmmlearn.readthedocs.io
# for sklearn 18.1
col = 'sret'
select = data.ix[:].dropna()
ft_cols = [f1, f2, f3, 'sret']
X = select[ft_cols].values
model = mix.GaussianMixture(n_components=3,
covariance_type="full",
n_init=100,
random_state=7).fit(X)
# Predict the optimal sequence of internal hidden state
hidden_states = model.predict(X)
print("Means and vars of each hidden state")
for i in range(model.n_components):
print("{0}th hidden state".format(i))
print("mean = ", model.means_[i])
print("var = ", np.diag(model.covariances_[i]))
print()
sns.set(font_scale=1.25)
style_kwds = {'xtick.major.size': 3, 'ytick.major.size': 3,
'font.family':u'courier prime code', 'legend.frameon': True}
sns.set_style('white', style_kwds)
fig, axs = plt.subplots(model.n_components, sharex=True, sharey=True, figsize=(12,9))
colors = cm.rainbow(np.linspace(0, 1, model.n_components))
for i, (ax, color) in enumerate(zip(axs, colors)):
# Use fancy indexing to plot data in each state.
mask = hidden_states == i
ax.plot_date(select.index.values[mask],
select[col].values[mask],
".-", c=color)
ax.set_title("{0}th hidden state".format(i), fontsize=16, fontweight='demi')
# Format the ticks.
ax.xaxis.set_major_locator(YearLocator())
ax.xaxis.set_minor_locator(MonthLocator())
sns.despine(offset=10)
plt.tight_layout()
fig.savefig('Hidden Markov (Mixture) Model_Regime Subplots.png')
```

In the above image, I've highlighted each regime's daily expected mean and variance of SPY returns. It appears the 1th hidden state is our low volatility regime. Note that the 1th hidden state has the largest expected return and the smallest variance.The 0th hidden state is the neutral volatility regime with the second largest return and variance. Lastly the 2th hidden state is high volatility regime. We can see the expected return is negative and the variance is the largest of the group.

```
sns.set(font_scale=1.5)
states = (pd.DataFrame(hidden_states, columns=['states'], index=select.index)
.join(select, how='inner')
.assign(mkt_cret=select.sret.cumsum())
.reset_index(drop=False)
.rename(columns={'index':'Date'}))
p(states.head())
sns.set_style('white', style_kwds)
order = [0, 1, 2]
fg = sns.FacetGrid(data=states, hue='states', hue_order=order,
palette=scolor, aspect=1.31, size=12)
fg.map(plt.scatter, 'Date', mkt, alpha=0.8).add_legend()
sns.despine(offset=10)
fg.fig.suptitle('Historical SPY Regimes', fontsize=24, fontweight='demi')
fg.savefig('Hidden Markov (Mixture) Model_SPY Regimes.png')
```

Here is the SPY price chart with the color coded regimes overlaid.

In this post we've discussed the concepts of the Markov property, Markov models and hidden Markov models. We used the networkx package to create Markov chain diagrams, and sklearn's GaussianMixture to estimate historical regimes. In part 2 we will discuss mixture models more in depth. For more detailed information I would recommend looking over the references. Setosa.io is especially helpful in covering any gaps due to the highly interactive visualizations.

- https://en.wikipedia.org/wiki/Andrey_Markov
- https://www.britannica.com/biography/Andrey-Andreyevich-Markov
- https://www.reddit.com/r/explainlikeimfive/comments/vbxfk/eli5_brownian_motion_and_what_it_has_to_do_with/
- http://www.math.uah.edu/stat/markov/Introduction.html
- http://setosa.io/ev/markov-chains/
- http://www.cs.jhu.edu/~langmea/resources/lecture_notes/hidden_markov_models.pdf
- https://github.com/alexsosn/MarslandMLAlgo/blob/master/Ch16/HMM.py
- http://hmmlearn.readthedocs.io

**Motivating the Journey****Where Do Edges Come From?****The Problem with Traditional Research****The Hidden Side**

**A Brief Description:****Part 1 - A Visual Introduction to Hidden Markov Models with Python****Part 2 - Exploring Mixture Models with Scikit-Learn and Python****Part 3 - Predicting Market Bottoms with Scikit-Learn and Python**

Edges come from superior ability to identify and execute profitable strategies.

You can see this simply by imagining the first strategy able to identify pricing errors on identical items in different markets. This knowledge is valuable in two scenarios: you can execute the transaction yourself or you know someone who can and will pay you for the "signal".

Abstractly, a signal can be thought of as a glitch in the matrix allowing us a view through a window into probabilistic future states. Signals can come from anywhere and are not always understood.

Our job is to find these signals, vet them, and implement them. This is difficult in practice. The competitive environment we seek to understand is dynamic with positive and negative feedback loops operating at various scales. The system processes are very noisy making signal extraction confusing and difficult. Competitors are always seeking strategies that "work" until they don't.

Generally profitable edges stop working when both your identification and execution strategies are well known. Thus a profit motive for secrecy and obfuscation exists among participants. If you are familiar with poker this will sound very familiar.

This also means that using well known identification techniques puts you at a strategic disadvantage because your competitors have likely incorporated knowledge of your methods into their own strategies.

Therefore we must continuously search for strategies that are not well understood, not well known, or otherwise difficult for our competitors to implement.

Too much published "research" focuses on using well known statistical tools to draw conclusions that do not improve the odds of profitable investment. Worse still, many research papers' results are not reproducible.

For periods of time, techniques involving technical analysis, regression, and simple correlations, were good enough to beat the market. This worked because the methodology was not well known or well understood. Times have changed.

These methods have been taught and promoted to generations of practitioners. These techniques form the foundation of many market participants investment strategies. Therefore the majority of well known strategies are already in use by the market.

This means sophisticated participants have had time and opportunity to develop counter strategies to take advantage of the limitations of publicly known methods.

Typical business finance teachings focus on the theory that stock values are directly tied to the expected value of net cash flows produced by the underlying operating business from now into some future period. Other research links stock prices to any number of other observable factors. My perception is that these well taught methods can bias our exploratory research when it comes to the art and science of **prediction.**

Successful prediction does not require understanding or logic. Prediction does not require expertise in the industry or business which generated the data. These things can help solidify our belief in the power of the prediction, however successful prediction methods only require a stable, positive payoff function relative to prediction accuracy over an expanding time period. Nothing more, nothing less.

Rigid knowledge structures can blind us to potential opportunities. Using statistics to explore observable factors only, ignores the entire spectrum of hidden, unobserved factors influencing asset returns.

By definition a hidden factor is not directly observable. Its presence or influence is detected by its effect on observable factor(s) or on a delayed basis.

Conceptualizing the influence of hidden factors is difficult for many decision makers to either understand or incorporate into already existing processes.

The combination of bias created by traditional finance and difficulty conceptualizing hidden factors, creates the barriers to entry we need for successful strategy development. We can reasonably assume this research pathway is still rich with profitable edges and worth pursuing.

In part 1, we will discuss Markov Models, Hidden Markov Models and a toy application for regime detection.

In part 2, we will explore the motivation behind mixture models and how they improve on the weaknesses of K-means algorithms. We will also discuss the connection between Mixture Models and Hidden Markov Models. Finally we will extend our toy regime detector to use a mixture model instead.

In part 3, we will implement a toy strategy using mixture models to predict market bottoms. The strategy assumes that we can calibrate a model to predict the market return distribution such that actual returns that fall below the confidence intervals are profitable long entries over short time periods.

Post thumbnail picture taken from Bayesian Intelligence Slideshare presentation.

]]>- Motivation
- Get Data
- Default Plot with Recession Shading
- Add Chart Titles, Axis Labels, Fancy Legend, Horizontal Line
- Format X and Y Axis Tick Labels
- Change Font and Add Data Markers
- Add Annotations
- Add Logo/Watermarks

Since I started this blog a few years ago, one of my obsessions is creating good looking, informative plots/charts. I've spent an inordinate amount of time learning how to do this and it is still a work in a progress. However all my work is not in vain as several of you readers have commented and messaged me for the code behind some of my time series plots. Beginning with basic time series data, I will show you how I produce these charts.

Import packages

```
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style('white', {"xtick.major.size": 2, "ytick.major.size": 2})
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71","#f4cae4"]
sns.set_palette(sns.color_palette(flatui,7))
import missingno as msno
p=print
save_loc = '/YOUR/PROJECT/LOCATION/'
logo_loc = '/YOUR/WATERMARK/LOCATION/'
```

Get time series data from Yahoo finance and recession data from FRED.

```
# get index and fed data
f1 = 'USREC' # recession data from FRED
start = pd.to_datetime('1999-01-01')
end = pd.datetime.today()
mkt = '^GSPC'
MKT = (web.DataReader([mkt,'^VIX'], 'yahoo', start, end)['Adj Close']
.resample('MS') # month start b/c FED data is month start
.mean()
.rename(columns={mkt:'SPX','^VIX':'VIX'})
.assign(SPX_returns=lambda x: np.log(x['SPX']/x['SPX'].shift(1)))
.assign(VIX_returns=lambda x: np.log(x['VIX']/x['VIX'].shift(1)))
)
data = (web.DataReader([f1], 'fred', start, end)
.join(MKT, how='outer')
.dropna())
p(data.head())
p(data.info())
msno.matrix(data)
```

Now we have to setup our recession data so we can get the official begin and end dates for each recession over the period.

```
# recessions are marked as 1 in the data
recs = data.query('USREC==1')
# Select the two recessions over the time period
recs_2k = recs.ix['2001']
recs_2k8 = recs.ix['2008':]
# now we can grab the indices for the start
# and end of each recession
recs2k_bgn = recs_2k.index[0]
recs2k_end = recs_2k.index[-1]
recs2k8_bgn = recs_2k8.index[0]
recs2k8_end = recs_2k8.index[-1]
```

Now we can plot the default chart with recession shading. Let's take a look.

```
# Let's plot SPX and VIX monthly returns with recession overlay
plot_cols = ['SPX_returns', 'VIX_returns']
# 2 axes for 2 subplots
fig, axes = plt.subplots(2, 1, figsize=(10,7), sharex=True)
data[plot_cols].plot(subplots=True, ax=axes)
for ax in axes:
    ax.axvspan(recs2k_bgn, recs2k_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    ax.axvspan(recs2k8_bgn, recs2k8_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
```

The default plot is ok but we can do better. Let's add chart titles, axis labels, spruce up the legend, and add a horizontal line for 0.

```
fig, axes = plt.subplots(2, 1, figsize=(10,7), sharex=True)
data[plot_cols].plot(subplots=True, ax=axes)
# for subplots we must add features by subplot axis
for ax, col in zip(axes, plot_cols):
    ax.axvspan(recs2k_bgn, recs2k_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    ax.axvspan(recs2k8_bgn, recs2k8_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    # lets add horizontal zero lines
    ax.axhline(0, color='k', linestyle='-', linewidth=1)
    # add titles
    ax.set_title('Monthly ' + col + ' \nRecessions Shaded Gray')
    # add axis labels
    ax.set_ylabel('Returns')
    ax.set_xlabel('Date')
    # add cool legend
    ax.legend(loc='upper left', fontsize=11, frameon=True).get_frame().set_edgecolor('blue')
# now to use tight layout
plt.tight_layout()
```

This is a step up but still not good enough. I prefer more informative dates on the x-axis, and percent formatting on the y-axis.

```
# better but I prefer more advanced axis tick labels
fig, axes = plt.subplots(2, 1, figsize=(12,9), sharex=True)
data[plot_cols].plot(subplots=True, ax=axes)
# for subplots we must add features by subplot axis
for ax, col in zip(axes, plot_cols):
    ax.axvspan(recs2k_bgn, recs2k_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    ax.axvspan(recs2k8_bgn, recs2k8_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    # lets add horizontal zero lines
    ax.axhline(0, color='k', linestyle='-', linewidth=1)
    # add titles
    ax.set_title('Monthly ' + col + ' \nRecessions Shaded Gray')
    # add axis labels
    ax.set_ylabel('Returns')
    ax.set_xlabel('Date')
    # upgrade axis tick labels
    yticks = ax.get_yticks()
    ax.set_yticklabels(['{:3.1f}%'.format(x*100) for x in yticks])
    dates_rng = pd.date_range(data.index[0], data.index[-1], freq='6M')
    plt.xticks(dates_rng, [dtz.strftime('%Y-%m') for dtz in dates_rng], rotation=45)
    # add cool legend
    ax.legend(loc='upper left', fontsize=11, frameon=True).get_frame().set_edgecolor('blue')
# now to use tight layout
plt.tight_layout()
```

It's an improvement, but I hate Arial font, and would like to add data point markers.

```
# I want markers for the data points, and to change the font
mpl.rcParams['font.family'] = 'Ubuntu Mono'
fig, axes = plt.subplots(2, 1, figsize=(10,7), sharex=True)
data[plot_cols].plot(subplots=True, ax=axes, marker='o', ms=3)
# for subplots we must add features by subplot axis
for ax, col in zip(axes, plot_cols):
    ax.axvspan(recs2k_bgn, recs2k_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    ax.axvspan(recs2k8_bgn, recs2k8_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    # lets add horizontal zero lines
    ax.axhline(0, color='k', linestyle='-', linewidth=1)
    # add titles
    ax.set_title('Monthly ' + col + ' \nRecessions Shaded Gray')
    # add axis labels
    ax.set_ylabel('Returns')
    ax.set_xlabel('Date')
    # upgrade axis tick labels
    yticks = ax.get_yticks()
    ax.set_yticklabels(['{:3.2f}%'.format(x*100) for x in yticks])
    dates_rng = pd.date_range(data.index[0], data.index[-1], freq='6M')
    plt.xticks(dates_rng, [dtz.strftime('%Y-%m') for dtz in dates_rng], rotation=45)
    # add cool legend
    ax.legend(loc='upper left', fontsize=11, frameon=True).get_frame().set_edgecolor('blue')
# now to use tight layout
plt.tight_layout()
```

It's starting to look pretty good, but we can get even fancier. Say we wanted to annotate the global maximum and minimum returns in each subplot, along with their respective dates, for SPX and VIX. That could be a challenge. To do this we first need to extract the max/min values and the idxmax/idxmin dates for both series.

```
# I want to show the global max and mins and their dates
# --------------------------------------------------------------- #
# MAX SPX Returns
spx_max_ = data[plot_cols[0]].max()
spx_max_idx_ = data[plot_cols[0]].idxmax(axis=0, skipna=True)
# MIN SPX Returns
spx_min_ = data[plot_cols[0]].min()
spx_min_idx_ = data[plot_cols[0]].idxmin(axis=0, skipna=True)
# MAX VIX Returns
vix_max_ = data[plot_cols[1]].max()
vix_max_idx_ = data[plot_cols[1]].idxmax(axis=0, skipna=True)
# MIN VIX Returns
vix_min_ = data[plot_cols[1]].min()
vix_min_idx_ = data[plot_cols[1]].idxmin(axis=0, skipna=True)
```

Now that we have this information, we can get clever with the annotation tools Matplotlib provides. I also want to touch up some of the axis labels and axis tick labels.

```
mpl.rcParams['font.family'] = 'Ubuntu Mono'
fig, axes = plt.subplots(2, 1, figsize=(12,9), sharex=True)
data[plot_cols].plot(subplots=True, ax=axes, marker='o', ms=3)
# for subplots we must add features by subplot axis
for ax, col in zip(axes, plot_cols):
    ax.axvspan(recs2k_bgn, recs2k_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    ax.axvspan(recs2k8_bgn, recs2k8_end, color=sns.xkcd_rgb['grey'], alpha=0.5)
    # lets add horizontal zero lines
    ax.axhline(0, color='k', linestyle='-', linewidth=1)
    # add titles
    ax.set_title('Monthly ' + col + ' \nRecessions Shaded Gray', fontsize=14, fontweight='demi')
    # add axis labels
    ax.set_ylabel('Returns', fontsize=12, fontweight='demi')
    ax.set_xlabel('Date', fontsize=12, fontweight='demi')
    # upgrade axis tick labels
    yticks = ax.get_yticks()
    ax.set_yticklabels(['{:3.1f}%'.format(x*100) for x in yticks])
    dates_rng = pd.date_range(data.index[0], data.index[-1], freq='6M')
    plt.xticks(dates_rng, [dtz.strftime('%Y-%m-%d') for dtz in dates_rng], rotation=45)
    # bold up tick axes
    ax.tick_params(axis='both', which='major', labelsize=11)
    # add cool legend
    ax.legend(loc='upper left', fontsize=11, frameon=True).get_frame().set_edgecolor('blue')
# add global max/min annotations
# add cool annotation box
bbox_props = dict(boxstyle="round4, pad=0.6", fc="cyan", ec="b", lw=.5)
axes[0].annotate('Global Max = {:.2%}\nDate = {}'
                 .format(spx_max_, spx_max_idx_.strftime('%a, %Y-%m-%d')),
                 fontsize=9, fontweight='bold',
                 xy=(spx_max_idx_, spx_max_), xycoords='data',
                 xytext=(-150, -30), textcoords='offset points',
                 arrowprops=dict(arrowstyle="->"), bbox=bbox_props)
axes[0].annotate('Global Min = {:.2%}\nDate = {}'
                 .format(spx_min_, spx_min_idx_.strftime('%a, %Y-%m-%d')),
                 fontsize=9, fontweight='demi',
                 xy=(spx_min_idx_, spx_min_), xycoords='data',
                 xytext=(-150, 30), textcoords='offset points',
                 arrowprops=dict(arrowstyle="->"), bbox=bbox_props)
axes[1].annotate('Global Max = {:.2%}\nDate = {}'
                 .format(vix_max_, vix_max_idx_.strftime('%a, %Y-%m-%d')),
                 fontsize=9, fontweight='bold',
                 xy=(vix_max_idx_, vix_max_), xycoords='data',
                 xytext=(-150, -30), textcoords='offset points',
                 arrowprops=dict(arrowstyle="->"), bbox=bbox_props)
axes[1].annotate('Global Min = {:.2%}\nDate = {}'
                 .format(vix_min_, vix_min_idx_.strftime('%a, %Y-%m-%d')),
                 fontsize=9, fontweight='demi',
                 xy=(vix_min_idx_, vix_min_), xycoords='data',
                 xytext=(-150, -20), textcoords='offset points',
                 arrowprops=dict(arrowstyle="->"), bbox=bbox_props)
# now to use tight layout
plt.tight_layout()
```

Wow, now it's looking really good. But what if you wanted to insert branding via a watermark? That's simple: add the following lines of code before the **plt.tight_layout()** line and voila.

```
# add logo watermark
im = mpl.image.imread(logo_loc)
axes[0].figure.figimage(im, origin='upper', alpha=0.125, zorder=10)
```

- Strategy Summary
- References
- 4-Week Holding Period Strategy Update
- 1-Week Holding Period Strategy Updated (Target Leverage=2)

This is a stylized implementation of the strategy described in the research paper titled "What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns?" by Yuhang Xing, Xiaoyan Zhang and Rui Zhao. The authors show that their SKEW factor predicts individual equity returns up to 6 months!

**ABSTRACT**

The shape of the volatility smirk has significant cross-sectional predictive power for future equity returns. Stocks exhibiting the steepest smirks in their traded options underperform stocks with the least pronounced volatility smirks in their options by around 10.9% per year on a risk-adjusted basis. This predictability persists for at least six months, and firms with the steepest volatility smirks are those experiencing the worst earnings shocks in the following quarter. The results are consistent with the notion that informed traders with negative news prefer to trade out-of-the-money put options, and that the equity market is slow in incorporating the information embedded in volatility smirks. [1]

Here is the skew measure they use.

SOURCE: WHAT DOES INDIVIDUAL OPTION VOLATILITY SMIRK TELL US ABOUT FUTURE EQUITY RETURNS?
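The formula image doesn't survive here, but in essence the paper's SKEW measure is the implied-volatility spread between an out-of-the-money put and an at-the-money call on the same underlying. A minimal sketch of that spread; the function name and example volatilities below are mine, not the paper's:

```python
def volatility_smirk(otm_put_iv, atm_call_iv):
    """SKEW proxy: implied vol of the OTM put minus implied vol of the ATM call.
    A steeper (larger) smirk is the bearish signal in the paper."""
    return otm_put_iv - atm_call_iv

# e.g. a 35% OTM-put vol against a 28% ATM-call vol
smirk = volatility_smirk(0.35, 0.28)  # 0.07
```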

My strategy differs in that I arbitrarily chose 1 and 4 week holding periods to study. Additionally this strategy only analyzes a cross-section of ETFs instead of individual stocks. I chose ETFs because liquidity and data quality concerns are minimized. Here are the selected ETFs under analysis.

- Zhang, Xiaoyan and Zhao, Rui and Xing, Yuhang, What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns? (August 14, 2008). AFA 2009 San Francisco Meetings Paper. Available at SSRN: http://ssrn.com/abstract=1107464 or http://dx.doi.org/10.2139/ssrn.1107464

*Results simulated using the Quantopian Platform.*

Download the spreadsheet here.

Download a text file of all the portfolio stocks here.


- Part-1 Recap
- Part-1 Error Corrections
- Part-2 Implementation Details, Deviations, Goals
- Prepare Data
- Setup PYMC3 Generalized Linear Models (GLM)
- Evaluate and Interpret Models
- Conclusions
- References

In part 1 we discussed the theoretical underpinnings of the asset pricing model of Ying Wu of Stevens Institute of Technology, School of Business. Theory links the catalyst of systemic risk events to the funding difficulties of major financial intermediaries; crisis risk is thus linked to liquidity events. The model proposes a method to estimate a proxy index for systemic liquidity risk. We use an illiquidity metric calculated across a large group of stocks, then apply a tool called the Hill estimator to measure the average distance of extreme illiquidity events from the tail cutoff. We explored the high-level intuition behind the Hill estimator.

We created an implementation of the Hill estimator, aggregated the stock data, calculated the illiquidity metrics and the ELR index, and finally output the intermediate data into an HDF5 file for quick read/write access.

We did not get this far in part-1, but the paper asserts that we can use this index as an asset pricing component. This could also be thought of as the primary feature or independent variable in a simple linear regression (think CAPM). The target variable is the expected aggregate returns. From there the paper says we can create long-short portfolios by ranking the stocks according to their factor betas and sorting them into quantiles.
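The ranking step the paper describes can be sketched as follows; the function name, the equal-sized quantile split, and the toy tickers are my own illustrative assumptions, not the paper's code:

```python
import numpy as np
import pandas as pd

def quantile_legs(elr_betas, q=5):
    """Sort stocks into q quantiles by their estimated ELR factor beta.
    Returns (long, short): tickers in the top and bottom quantile,
    mirroring the paper's long-high-sensitivity / short-low-sensitivity portfolio."""
    buckets = np.asarray(pd.qcut(elr_betas, q, labels=False))  # 0 = lowest beta
    long_leg = elr_betas.index[buckets == q - 1].tolist()
    short_leg = elr_betas.index[buckets == 0].tolist()
    return long_leg, short_leg

# toy example with 10 hypothetical tickers and evenly spaced betas
betas = pd.Series(np.linspace(-1, 1, 10), index=['S%d' % i for i in range(10)])
long_leg, short_leg = quantile_legs(betas)
```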

In part 1 the implementation of the Hill estimator was incorrect. The ELR index is supposed to comprise only the values that exceed the 95th percentile. In the original implementation I calculated the average of *all* values, not just those in the tail region. Therefore the quick-and-dirty observations made previously are for a different index. On the left is the original, incorrect index; on the right is the corrected index.

After spending some time rereading the research paper, I found a subtle bit of additional complexity I have not included in this implementation that may affect my results vs. those found in the paper.

In the paper the threshold value is calculated as the 95th percentile cross-sectionally for the entire month. Then the index is constructed by calculating the average log distance from that threshold for any datapoint located in the tail. To create an index like this requires binning the data by month, getting the threshold value of that month by aggregating the *daily illiquidity *metrics of a few thousand stocks for that month, then calculating the log average distance between those tail values and the threshold.

This means we likely need a whole month of data before we can calculate the ELR value. We could potentially use a rolling 21 or 30 day window to simulate a monthly lookback, but based on the paper it does not seem that the author used this method. If, instead, we go by calendar months, we likely need *a lot* more data before we can draw any conclusions. For example, the author's sample period runs from 1968-2011 and only includes NYSE stocks, among other stock universe selection details.
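For readers who do want to follow the paper's calendar-month convention, a rough sketch might look like the following. This is my own reading of the method, not a verified reproduction, and the synthetic demo data at the bottom is purely illustrative:

```python
import numpy as np
import pandas as pd

def monthly_elr(ilq):
    """Paper-style variant: one 95th-percentile threshold per calendar month,
    pooled across all stocks' daily illiquidity values for that month, then
    the reciprocal mean log exceedance (Hill-style tail index) per month."""
    gammas = {}
    for (yr, mo), block in ilq.groupby([ilq.index.year, ilq.index.month]):
        vals = block.values.ravel()
        vals = vals[np.isfinite(vals)]
        if vals.size == 0:
            continue
        p_star = np.percentile(vals, 95)
        tail = vals[vals > p_star]
        if tail.size == 0:
            continue
        gammas[pd.Timestamp(yr, mo, 1)] = 1.0 / np.mean(np.log(tail / p_star))
    return pd.Series(gammas).sort_index()

# toy check on synthetic illiquidity values (hypothetical data)
idx = pd.date_range('2020-01-01', '2020-03-31', freq='D')
demo = pd.DataFrame(np.random.default_rng(1).lognormal(size=(len(idx), 5)), index=idx)
elr_demo = monthly_elr(demo)
```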

In my exploration of the ELR index, I prefer to keep it simpler and calculate the 95th percentile threshold based on the cross-sectional *daily* illiquidity values instead of the whole month.

Part-2 Goals:

- Import the calculated daily illiquidity values
- Resample the illiquidity measures by week, taking the median and max illiquidity values, then calculate the ELR Index
- Use pymc3's generalized linear models function to fit a model for predicting the cross-sectional scaled returns.
- Interpret and Evaluate the models.

First we need to import packages and get our data ready.

```
# import packages
import sys
import os
# ------------------- % import datasets % ------------------- #
datasets = '/YOUR/DATASET/LOCATION/_Datasets/'
import pandas as pd
import pandas_datareader.data as web
from pandas.tseries.offsets import *
import numpy as np
import scipy.stats as scs
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('bmh')
%matplotlib inline
import seaborn as sns
sns.set_style('white', {"xtick.major.size": 3, "ytick.major.size": 3})
import pymc3 as pm
from scipy import optimize
import time
from tqdm import tqdm
p = print
```

I created a hdf5 file for the aggregated returns because I want to use them as a proxy for our target variable of expected market returns. I import those here and create a time series consisting of the cross sectional median and average log returns.

```
## read in data
# log returns for aggregate mkt proxy
LRET_FILE = datasets + 'LRET_Set_2016-11-22.h5'
lret_set = pd.read_hdf(LRET_FILE, 'RETURNS_DV_SET')
lret_set = lret_set.loc[:, lret_set.columns.to_series().str.contains('_lret').tolist()]
# calc cross sectional median and mean
mkt = pd.DataFrame({'cross_mdn_rets': lret_set.median(axis='columns'),
                    'cross_avg_rets': lret_set.mean(axis='columns')},
                   index=lret_set.index)
## read in illiquidity data for ELR calculations
ILQ_FILE = datasets + 'Illiquidity_Set_2016-11-22.h5'
ilq = pd.read_hdf(ILQ_FILE, 'Illiquidity_Set')
```

After loading the data into our environment, we resample the data to a weekly frequency using both median and max values for comparison. On my outdated laptop this took approximately 7 minutes.

```
# weekly resample
freq = '1W'
df = ilq.resample(freq).median()
df_max = ilq.resample(freq).max()
```

Next we define our convenience functions for calculating the ELR index. Notice that I deviate from the traditional z-score scaling method and implement the Gelman scaler, which divides the centered values by two times the standard deviation. You can read more in Andrew Gelman's paper[2] about why we use this method. The high-level intuition is that this scaling improves regression-coefficient interpretability across binary, discrete, and continuous variables.

```
# convenience functions for gamma calculation and scaler
# gamma estimate
def _ext_lq_risk(series):
    # threshold is 95th percentile
    p_star = np.nanpercentile(series, 95)
    illiq = series[series > p_star]
    #illiq = series # looks better on chart but less explanatory power
    lg_illiq = np.log(illiq / p_star)
    lg_illiq = lg_illiq[np.isfinite(lg_illiq)]
    try:
        gamma = 1. / ((1. / len(lg_illiq)) * sum(lg_illiq))
    except ZeroDivisionError:
        gamma = np.nan
    return gamma

# scaler function
gelman_scaler = lambda ser: (ser - ser.mean()) / (2 * ser.std())

# calculate elr index
def _calculate_elr(df, cutoff=100, scaler=None):
    gs = {} # gammas dictionary
    nan_dates = []
    for d in df.index:
        # we want at least N nonnull values
        if df.loc[d].notnull().sum() > cutoff:
            gamma = _ext_lq_risk(df.loc[d])
            gs[d] = gamma
        else:
            nan_dates.append(d)
    gdf = pd.DataFrame.from_dict(gs, orient='index').sort_index()
    gdfz = scaler(gdf)
    gdfz.columns = ['ELR']
    return gdfz, nan_dates
```

Now we can set up our main experimental dataframe. We need to make sure our market proxy dataframe, which consists of the aggregate sample returns, has the same index as our ELR dataframe before we merge them. Also, remember we are going to experiment with two resampled dataframes: one with the weekly median illiquidity, and one with the weekly maximum illiquidity. Our final step after creating the merged dataframes is to add a column for our Gelman-scaled aggregate returns.

```
# calculate ELR index on resampled data
gdfz_mdn, _ = _calculate_elr(df, scaler=gelman_scaler)
gdfz_max, _ = _calculate_elr(df_max, scaler=gelman_scaler)
# market resample must match gdfz before merge
# merge dataframes
mkt_rs = mkt.resample(freq).mean()
mrg_mdn = pd.concat([gdfz_mdn, mkt_rs], join='inner', axis=1)
mrg_max = pd.concat([gdfz_max, mkt_rs], join='inner', axis=1)
# add cross sectional average Gelman scored returns
avg_col = 'cross_avg_rets'
mrg_mdn['cross_avg_zrets'] = gelman_scaler(mrg_mdn[avg_col])
mrg_max['cross_avg_zrets'] = gelman_scaler(mrg_max[avg_col])
mrg_mdn.head()
```

Before running our model I define some output convenience functions adapted from the excellent blog Applied AI[3].

```
# pymc3 convenience functions adapted from blog.applied.ai
def trace_median(x):
    return pd.Series(np.median(x, 0), name='median')

def plot_traces(trcs, retain=1000, varnames=None):
    '''Convenience fn: plot traces with overlaid means and values'''
    nrows = len(trcs.varnames)
    if varnames is not None:
        nrows = len(varnames)
    ax = pm.traceplot(trcs[-retain:], varnames=varnames, figsize=(12, nrows*1.4),
                      lines={k: v['mean'] for k, v in
                             pm.df_summary(trcs[-retain:], varnames=varnames).iterrows()})
    for i, mn in enumerate(pm.df_summary(trcs[-retain:], varnames=varnames)['mean']):
        ax[i, 0].annotate('{:.2f}'.format(mn), xy=(mn, 0), xycoords='data',
                          xytext=(5, 10), textcoords='offset points', rotation=90,
                          va='bottom', fontsize='large', color='#AA0022')

def plot_pm_acf(trace, varnames=None, burn=None):
    pm.autocorrplot(trace, varnames=varnames, burn=burn, figsize=(7, 5))
    return
```

Now we can set up our model. I will gloss over some of the particulars of pymc3 and the *Generalized Linear Model (glm)* functions for now. I'm also skipping over why I'm using a Bayesian methodology vs. a frequentist one. Generally speaking, Bayesian modeling is preferred for its robustness and its explicit modeling of the uncertainty in our point estimates. I plan to revisit this topic in more detail in the future, but there are plenty of tutorials and explanations of why Bayesian is the way to go.

Anyone familiar with R will appreciate the simplicity of the following model setup. First we define our model formula as a string.

```
# predicting cross sectional average returns using the ELR index
ft_endog = 'cross_avg_zrets'
ft_exog = ['ELR'] # this format allows easy addition of more variables
fml = '{} ~ '.format(ft_endog) + ' + '.join(ft_exog)
p(fml)
# 'cross_avg_zrets ~ ELR'
```

Next we follow pymc3's glm model convention and choose the number of samples we wish to draw from the predicted posterior.

```
# choose samples and run model
samples = 5000
with pm.Model() as mdl:
    ## Use GLM submodule for simplified model specification
    ## Betas are Normal (as per default settings, for Ridge)
    ## Likelihood is Normal (with HalfCauchy for error prior)
    pm.glm.glm(fml, mrg_mdn, family=pm.glm.families.Normal())
    start_MAP = pm.find_MAP(fmin=optimize.fmin_powell)
    ## take samples using NUTS sampler
    trc_ols = pm.sample(samples, start=start_MAP, step=pm.NUTS())

rvs = [rv.name for rv in mdl.unobserved_RVs]
rvs.remove('sd_log_')
plot_traces(trc_ols, varnames=rvs)
plot_pm_acf(trc_ols, varnames=rvs, burn=1000)
p(pm.df_summary(trc_ols[-1000:], varnames=rvs))
p('\nMedian Illiquidity ELR Model\nDIC:', pm.dic(trc_ols[-1000:], model=mdl))
```

We run the model for both the median and max illiquidity estimates.

First we need to decide how we will evaluate which model is best. For this I have chosen the Deviance Information Criterion (DIC), which is implemented in pymc3 and designed specifically for Bayesian models fit using MCMC. As with similar measures, the smaller the number, the better the model.
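For reference, my paraphrase of the standard definition (not pymc3's exact implementation): with $D(\theta) = -2\log p(y \mid \theta)$ the deviance, $\bar{D}$ its posterior mean, and $\bar{\theta}$ the posterior mean of the parameters,

```latex
\mathrm{DIC} \;=\; \bar{D} + p_D, \qquad p_D \;=\; \bar{D} - D(\bar{\theta})
```

so DIC rewards fit (low $\bar{D}$) while penalizing effective model complexity ($p_D$).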

Let's evaluate the resampled median illiquidity model.

median model trace

On the left we can examine the distribution of our sample estimates for the intercept, ELR, and model error. On the right we can see the sample trace, which should look like white noise, and it does. We can see the intercept is basically zero, the ELR beta is -0.06, and the standard deviation is 0.5.

median acf

We plot the ACF of our variables to confirm that the sample traces are white noise. However we can see a strongly negative autocorrelation for each variable at its first lag.

median model summary and dic

We can see that both the ELR and sd are significant, as their highest posterior density intervals do NOT include zero. The DIC is 1531. Now let's compare the median model to the max model.

elr max model trace

We observe that the models are similar in their output, however notice the ELR in this instance has a stronger negative correlation with our target variable than does the median model. The traces on the right side appear to resemble white noise. Let's confirm by looking at the ACF plot.

elr max model acf

This confirms our intuition that the series is close to white noise. We can also see a pretty strong negative autocorrelation at lag 1 for each of our variables. This is not ideal but ok for our exploratory purposes.

max model hpd and dic

We can see that the ELR and sd are both significant, as neither interval includes zero. The magnitude of the ELR coefficient is larger in the max model, which corresponds to a lower (better) DIC.

We designed an experiment to evaluate the relationship between the ELR index and the cross sectional scaled returns. We deviated from the original paper in a couple notable ways. We used the daily illiquidity measure and resampled to a weekly frequency using both the weekly median, and the weekly max. We then calculated the ELR index using the weekly cross sectional data as opposed to the highly nuanced monthly methodology used in the paper.

We then designed a basic linear model using pymc3 to explore the ELR index's impact on the scaled cross sectional returns. After examining the results I am somewhat disappointed we weren't able to show as strong a link as demonstrated in the paper. The max model is clearly the better model according to the DIC, but even then we can see the ELR index is only weakly related to the cross sectional returns.

A positive takeaway is that the *sign* of the relationship is what we would expect: the ELR we calculated is negatively correlated with the cross section of returns.

As currently constructed, using this method to form the basis of an asset pricing model seems dubious at best and definitely lowers my expectations when I simulate a long-short strategy in Quantopian.

- Wu, Ying, Asset Pricing with Extreme Liquidity Risk (October 10, 2016). Available at SSRN: https://ssrn.com/abstract=2850278 or http://dx.doi.org/10.2139/ssrn.2850278
- Gelman, A. (2008), Scaling regression inputs by dividing by two standard deviations. Statist. Med., 27: 2865–2873. doi:10.1002/sim.3107
- Sedar, Jonathan. "Bayesian Inference with PyMC3 - Part 2."
*The Sampler*. Applied AI, 06 Sept. 2016. Web. 13 Dec. 2016.

- Introduction
- Get Data
- Calculate Cross-Sectional Extreme Liquidity Risk
- Quick and Dirty Observations
- Next Steps
- References

One of the primary goals of quantitative investing is effectively managing tail risk. Failure to do so can result in crushing drawdowns or a total blowup of your fund/portfolio. Commonly known tools for estimating tail risk, e.g. Value-at-Risk, often underestimate the likelihood and magnitude of risk-off events. Furthermore, tail risk events are increasingly associated with liquidity events.

Theory links the catalyst of systemic risk events to the funding difficulties of major financial intermediaries. For example, an unexpected default by a major institution would lead to that firm's counterparties reducing risk while they assess the fallout. Those counterparties are likely to reduce risk by selling assets and/or withdrawing funding resources from the market. This could lead to margin calls and more selling as the default works its way across the financial network, cascading into a negative feedback loop.

A good theoretical risk model will address the relationship between liquidity and tail risk. Ying Wu of Stevens Institute of Technology, School of Business, may have discovered a framework that links these two concepts in a parsimonious and practical manner. His paper 'Asset Pricing with Extreme Liquidity Risk'[1] combines Amihud's[2] stock illiquidity metric with the Hill estimator for modeling tail distributions. He then constructs a normalized Extreme Liquidity Risk (ELR) metric and runs a simple linear regression for each stock to assess its sensitivity to the ELR. He finds that a long-short portfolio based on buying stocks with the highest sensitivity to ELR and shorting those with the lowest earns an empirically and economically significant return over the time period studied.

The Amihud stock illiquidity metric is a stock's daily absolute return divided by its dollar volume, averaged over some time period. It was constructed for use as a rough measure of price impact and designed to be easily calculated for long time series.
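The paragraph above translates almost directly into code. A minimal sketch: the function name and the 21-day window are my own illustrative choices, while the scaling of dollar volume by $1mm mirrors the aggregation code later in this post:

```python
import numpy as np
import pandas as pd

def amihud_illiquidity(adj_close, volume, window=21):
    """Amihud illiquidity: |daily return| / dollar volume (in $mm),
    averaged over a rolling window as a rough price-impact proxy."""
    dollar_volume = adj_close * volume / 1e6
    log_returns = np.log(adj_close / adj_close.shift(1))
    daily_illiq = log_returns.abs() / dollar_volume
    return daily_illiq.rolling(window).mean()

# toy demo on synthetic prices/volumes (hypothetical values)
idx = pd.bdate_range('2020-01-02', periods=60)
px = pd.Series(np.linspace(100, 110, 60), index=idx)
vol = pd.Series(1_000_000.0, index=idx)
illiq_demo = amihud_illiquidity(px, vol)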

The Hill estimator[3] is a mathematical tool that lets us focus on the tail of a sample distribution. It allows us to "skip" fitting a single distribution over the entire sample; instead we can use the formal framework of Extreme Value Theory to evaluate only the *extreme (tail)* values. Wu's choice of this estimator is based on the empirical evidence of power-law behavior in the tails of price-impact series. This further supports the use of Amihud's illiquidity metric, as it was designed to be a crude yet effective measure of price impact.
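As a quick sanity check on the intuition, the tail-index calculation used later in this post should approximately recover the shape parameter of a synthetic Pareto sample. This is a toy demonstration of my own, not from the paper:

```python
import numpy as np

def tail_index(sample, pct=95):
    """Reciprocal of the mean log exceedance above the pct-th percentile,
    matching the _ext_lq_risk convention used later in this post."""
    p_star = np.percentile(sample, pct)
    tail = sample[sample > p_star]
    return 1.0 / np.mean(np.log(tail / p_star))

rng = np.random.default_rng(0)
alpha = 2.0  # true tail index of a classical Pareto
# numpy's pareto draws the Lomax form; +1 shifts to classical Pareto on [1, inf)
sample = rng.pareto(alpha, size=200_000) + 1.0
est = tail_index(sample)  # should land near alpha
```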

I urge readers to explore the paper further as some of the deeper mathematical underpinnings are beyond the scope of this post.

For this exploratory study I used the pandas Yahoo Finance API to download 20 years of stock data using a symbol list constructed by CRSP.

```
# Import
import pandas as pd
import pandas_datareader.data as web
from pandas.tseries.offsets import BDay
import numpy as np
import scipy.stats as scs
import matplotlib.pyplot as plt
# get symbols
datasets = '/YOUR/DATASETS/LOCATION/_Datasets/'
symbols = pd.read_csv(datasets+'CSRP_symbol_list.txt',sep='\t').values.flatten()
```

Here is the text file of symbols I used --> Symbols.

Next we construct our convenience functions to aggregate the stock data.

```
# Get Prices Function
def _get_px(symbol, start, end):
    return web.DataReader(symbol, 'yahoo', start, end)

# Create HDF5 data store for fast read/write
def _create_symbol_datastore(symbols, start, end):
    prices_hdf = pd.HDFStore(datasets + 'CRSP_Symbol_Data_Yahoo_20y.hdf')
    symbol_count = len(symbols)
    N = symbol_count  # running count of symbols successfully collected
    missing_symbols = []
    for i, sym in enumerate(symbols, start=1):
        if not pd.isnull(sym):
            try:
                prices_hdf[sym] = _get_px(sym, start, end)
            except Exception as e:
                print(e, sym)
                missing_symbols.append(sym)
                N -= 1
            pct_total_left = (N / symbol_count)
            print('{}..[done] | {} of {} symbols collected | {:>.2%}'.format(
                sym, i, symbol_count, pct_total_left))
    prices_hdf.close()
    print(prices_hdf)
    return missing_symbols

# Get past 20 years of data from today
# Evaluate missing symbols if you so choose
today = pd.datetime.today().date()
start = today - 252 * BDay() * 20
missing = _create_symbol_datastore(symbols, start, today)
```

This takes roughly 30 minutes to run, which is a good time for a coffee break.

Next we need to calculate each stock's daily illiquidity measure according to Amihud. I also save this data to its own HDF5 store. I find it good practice to save intermediate calculations where possible for reference and ease of reproducibility.

```
# calculate each symbol's returns and dollar volumes
# add to dataframe with symbol_lret, symbol_dv, symbol_illiq
from tqdm import tqdm

FILE = datasets + 'CRSP_Symbol_Data_Yahoo_20y.hdf'
start = pd.to_datetime('1999-01-01')
end = pd.to_datetime('2016-11-22')
idx = pd.bdate_range(start, end)
DF = pd.DataFrame(index=idx)
# the store's keys are the symbols we saved earlier
with pd.HDFStore(FILE, mode='r') as store:
    keys = store.keys()
for sym in tqdm(keys):
    tmp_hdf = pd.read_hdf(FILE, mode='r', key=sym)
    tmp_hdf = tmp_hdf[['Volume', 'Adj Close']]
    # I want at least 1000 daily datapoints per stock
    if len(tmp_hdf) > 1000:
        try:
            dv = (tmp_hdf['Adj Close'] * tmp_hdf['Volume'] / 1e6)[1:]
            lret = np.log(tmp_hdf['Adj Close'] / tmp_hdf['Adj Close'].shift(1)).dropna()
            daily_illiq = np.abs(lret) / dv
            tmp_df = pd.DataFrame({sym.lstrip('/')+'_lret': lret,
                                   sym.lstrip('/')+'_dv': dv,
                                   sym.lstrip('/')+'_illiq': daily_illiq},
                                  index=lret.index)
            DF = DF.join(tmp_df, how='outer')
        except Exception:
            continue
print(DF.info())
# Illiquidity HDF originally run on 2016-Nov-11
# DataFrame key is "Illiquidity_Set"
ILQ_FILE = datasets + 'Illiquidity_Set_2016-11-22.h5'
ilq_set = DF.loc[:, DF.columns.to_series().str.contains('_illiq').tolist()]
ilq_set.to_hdf(ILQ_FILE, 'Illiquidity_Set')
```

8487 * 4954 = 42,044,598 data points! Some of these are np.nan but still, clearly CSV storage is a non-starter.

Now we are in a position to calculate the Extreme Liquidity Risk (ELR) metric, or "Tail Index", for the aggregated stocks. First we read our 'Illiquidity_Set' dataframe back in from the HDF5 file, then we create a convenience function to calculate the daily ELR. Let's take a quick glance at the ELR formula:

Wu, Ying, Asset Pricing with Extreme Liquidity Risk (October 10, 2016)

My understanding is that this is a log average of the relative "distance" between the aggregated stocks' illiquidity measures and the threshold *p\**. *p\** is the line in the sand between the distribution "body" and the distribution "tail". The paper uses the convention of the 95th percentile as the threshold value, so I use that here as well.
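The formula image did not survive the transfer. Reconstructing from the code that follows (my transcription; consult the paper for the exact form), the daily Tail Index gamma is the inverse of the mean log-exceedance over the threshold:

```latex
\gamma_t = \left[\frac{1}{N_t}\sum_{i\,:\,\mathrm{ILLIQ}_{i,t} > p^{*}} \ln\frac{\mathrm{ILLIQ}_{i,t}}{p^{*}}\right]^{-1}
```

where N_t is the number of stocks whose illiquidity exceeds *p\** on day *t*. Readers may recognize this as a Hill-style tail index estimator.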

```
# Read hdf illiquidity
ILQ_FILE = datasets + 'Illiquidity_Set_2016-11-22.h5'
ilq = pd.read_hdf(ILQ_FILE, 'Illiquidity_Set')

# function to get daily values for gamma calc
def _ext_lq_risk(series):
    # UPDATED: DEC 5TH
    # threshold is 95th percentile
    # right tailed convention
    p_star = np.nanpercentile(series, 95)
    illiq = series[series > p_star]
    lg_illiq = np.log(illiq / p_star)
    lg_illiq = lg_illiq[np.isfinite(lg_illiq)]
    try:
        gamma = 1. / ((1. / len(lg_illiq)) * sum(lg_illiq))
    except ZeroDivisionError:
        gamma = np.nan
    return gamma
```
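As a sanity check on `_ext_lq_risk`, we can feed it synthetic data with a known tail index. This standalone sketch (the Pareto sample and its parameters are illustrative, not from the paper) duplicates the function above so it runs on its own:

```python
import numpy as np
import pandas as pd

def _ext_lq_risk(series):
    # same logic as above: inverse mean log-exceedance over the 95th percentile
    p_star = np.nanpercentile(series, 95)
    illiq = series[series > p_star]
    lg_illiq = np.log(illiq / p_star)
    lg_illiq = lg_illiq[np.isfinite(lg_illiq)]
    try:
        gamma = 1. / ((1. / len(lg_illiq)) * sum(lg_illiq))
    except ZeroDivisionError:
        gamma = np.nan
    return gamma

# a Pareto sample with tail index 2: the estimator should land near 2
rng = np.random.default_rng(7)
sample = pd.Series((1.0 / rng.uniform(size=100_000)) ** 0.5)  # Pareto(alpha=2)
gamma_hat = _ext_lq_risk(sample)
print(round(gamma_hat, 1))  # ~2.0
```

If the estimator returns a value far from the known index on data like this, something is wrong with the implementation.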

Now we can calculate the Tail Index and normalize the values to get the ELR series.

```
# Calculate Tail Index for all dates greater than cutoff
df = ilq.copy()
gs = {}  # gammas dictionary
cutoff = 100
nan_dates = []
for d in df.index:
    # we want at least N nonnull values
    if df.loc[d].notnull().sum() > cutoff:
        gamma = _ext_lq_risk(df.loc[d])
        gs[d] = gamma
    else:
        nan_dates.append(d)

gdf = pd.DataFrame.from_dict(gs, orient='index').sort_index()
gdf.columns = ['Tail_Index']
# the ELR metric is a normalized version of the tail index
# normalize gamma dataframe to calc "ELR"
gdfz = (gdf - gdf.mean()) / gdf.std()
gdfz.columns = ['ELR']
```

Let's plot it and take a look.

[Plot: ELR (normalized Tail Index) time series]

First another plot. I skip the code here to save space, but would be happy to post it if requested. The plot below is the IWM used as a market proxy, its drawdown chart, and below that is the ELR. The shaded regions are official NBER recessions.

[Plot: IWM price and drawdown, with the ELR below; NBER recessions shaded]

The ELR appears to rise prior to the official beginning of the Dot-Com bust. It stays relatively elevated throughout the period and begins to decline sometime during the first persistent rally off the lows. Prior to the beginning of 2008's official recession, the ELR is mixed. However, the ELR rises sharply sometime prior to the massive decline in the broad market. In fact it was rising during a period where the market bounced, providing an early warning of the cataclysmic dropoff to come. Furthermore it begins declining shortly after the official NBER recession end date, providing investors with support for getting back into the market. Interestingly the ELR is in a downtrend for most of the low-volatility period that followed the recession. Clearly the metric is not a perfect predictor, but there seems to be evidence that it could be a useful tool, and certainly warrants more rigorous investigation.

There are several directions to pursue regarding the Extreme Liquidity Risk index. We can explore the time series itself using Time Series Analysis (TSA), using frequentist or Bayesian inference to that end. Or we can get straight to the good stuff and simulate the long-short portfolio based on each stock's return sensitivity to the ELR, as reported in the paper that inspired this post. Check back for part 2, as we explore this concept further.

- Wu, Ying. "Asset Pricing with Extreme Liquidity Risk" (October 10, 2016). Available at SSRN: https://ssrn.com/abstract=2850278 or http://dx.doi.org/10.2139/ssrn.2850278
- Amihud, Yakov. "Illiquidity and Stock Returns: Cross-section and Time-series Effects." *Journal of Financial Markets* 5.1 (2002): 31-56.
- "Heavy-tailed Distribution." Wikipedia. Wikimedia Foundation, n.d. Web. 29 Nov. 2016.

- Strategy Summary
- References
- 4-Week Holding Period Strategy Update
- 1-Week Holding Period Strategy Updated (Target Leverage=2)

This is a stylized implementation of the strategy described in the research paper titled "What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns?" by Yuhang Xing, Xiaoyan Zhang and Rui Zhao. The authors show that their SKEW factor predicts individual equity returns up to 6 months!

**ABSTRACT**

The shape of the volatility smirk has significant cross-sectional predictive power for future equity returns. Stocks exhibiting the steepest smirks in their traded options underperform stocks with the least pronounced volatility smirks in their options by around 10.9% per year on a risk-adjusted basis. This predictability persists for at least six months, and firms with the steepest volatility smirks are those experiencing the worst earnings shocks in the following quarter. The results are consistent with the notion that informed traders with negative news prefer to trade out-of-the-money put options, and that the equity market is slow in incorporating the information embedded in volatility smirks. [1]

Here is the skew measure they use.

SOURCE: WHAT DOES INDIVIDUAL OPTION VOLATILITY SMIRK TELL US ABOUT FUTURE EQUITY RETURNS?
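The formula image is missing here. As defined in the paper, the smirk (SKEW) measure is the difference between out-of-the-money put and at-the-money call implied volatilities (my transcription):

```latex
\mathrm{SKEW}_{i,t} = \mathrm{VOL}^{\mathrm{OTMP}}_{i,t} - \mathrm{VOL}^{\mathrm{ATMC}}_{i,t}
```

A steeper smirk (larger SKEW) means OTM puts are relatively more expensive, which the authors interpret as informed traders positioning on negative news.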

My strategy differs in that I arbitrarily chose 1 and 4 week holding periods to study. Additionally this strategy only analyzes a cross-section of ETFs instead of individual stocks. I chose ETFs because liquidity and data quality concerns are minimized. Here are the selected ETFs under analysis.

- Zhang, Xiaoyan and Zhao, Rui and Xing, Yuhang. "What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns?" (August 14, 2008). AFA 2009 San Francisco Meetings Paper. Available at SSRN: http://ssrn.com/abstract=1107464 or http://dx.doi.org/10.2139/ssrn.1107464

*Results simulated using the Quantopian Platform.*

Download the spreadsheet here.

Download a text file of all the portfolio stocks here.


- Motivation
- The Basics
- Stationarity
- Serial Correlation (Autocorrelation)
- Why do we care about Serial Correlation?

- White Noise and Random Walks
- Linear Models
- Log-Linear Models
- Autoregressive Models - AR(p)
- Moving Average Models - MA(q)
- Autoregressive Moving Average Models - ARMA(p, q)
- Autoregressive Integrated Moving Average Models - ARIMA(p, d, q)
- Autoregressive Conditionally Heteroskedastic Models - ARCH(p)
- Generalized Autoregressive Conditionally Heteroskedastic Models - GARCH(p, q)
- References

Early in my quant finance journey, I learned various time series analysis techniques and how to use them but I failed to develop a deeper understanding of how the pieces fit together. I struggled to see the bigger picture of why we use certain models vs others, or how these models build on each other's weaknesses. The underlying purpose for employing these techniques eluded me for too long. That is, until I came to understand this:

By developing our time series analysis (TSA) skillset we are better able to understand what has already happened, *and* make better, more profitable, predictions of the future. Example applications include predicting future asset returns, future correlations/covariances, and future volatility.

This post is inspired by the great work Michael Halls-Moore has done on his blog, QuantStart, especially his series on TSA. I thought translating some of his work to Python could help others who are less familiar with R. I have also adapted code from other bloggers; see References.

Before we begin let's import our Python libraries.

```
import os
import sys
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
import scipy.stats as scs
from arch import arch_model
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
p = print
p('Machine: {} {}\n'.format(os.uname().sysname,os.uname().machine))
p(sys.version)
# Machine: Linux x86_64
# 3.5.2 |Anaconda custom (64-bit)| (default, Jul 2 2016, 17:53:06)
# [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
```

Let's use the **pandas_datareader** package to grab some sample data using the Yahoo Finance API.

```
end = '2015-01-01'
start = '2007-01-01'
get_px = lambda x: web.DataReader(x, 'yahoo', start=start, end=end)['Adj Close']
symbols = ['SPY','TLT','MSFT']
# raw adjusted close prices
data = pd.DataFrame({sym:get_px(sym) for sym in symbols})
# log returns
lrets = np.log(data/data.shift(1)).dropna()
```

A time series is a series of data points indexed (or listed or graphed) in time order. - Wikipedia

Here I use an infographic found on SeanAbu.com. I find the pictures very intuitive.

Seanabu.com

*So what? Why do we care about stationarity?*

- A stationary time series (TS) is simple to predict as we can assume that future statistical properties are the same or proportional to current statistical properties.
- Most of the models we use in TSA assume **covariance-stationarity (#3 above)**. This means **the descriptive statistics these models predict (e.g. means, variances, and correlations) are only reliable if the TS is stationary and invalid otherwise**.

"For example, if the series is consistently increasing over time, the sample mean and variance will grow with the size of the sample, and they will always underestimate the mean and variance in future periods. And if the mean and variance of a series are not well-defined, then neither are its correlations with other variables."- http://people.duke.edu/~rnau/411diff.htm

With that said, **most TS we encounter in finance are NOT stationary.** Therefore a large part of TSA involves identifying whether the series we want to predict is stationary, and if it is not, finding ways to transform it such that it becomes stationary. (More on that later)
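A minimal numpy sketch of that transformation idea (simulated data, not market data): differencing a random walk recovers the stationary white noise that generated it.

```python
import numpy as np

# a random walk is non-stationary, but its first difference is the
# stationary white noise that generated it
rng = np.random.default_rng(42)
w = rng.normal(size=5_000)   # stationary white noise innovations
x = np.cumsum(w)             # non-stationary random walk
dx = np.diff(x)              # differencing recovers the noise (minus one obs)

# the differenced series matches the original innovations
print(np.allclose(dx, w[1:]))  # True
# and has the stable mean/variance a stationary series should
print(round(float(dx.mean()), 2), round(float(dx.std()), 2))  # near 0 and 1
```

This is exactly the trick ARIMA formalizes later in this post via its differencing order *d*.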

Essentially when we model a time series we decompose the series into three components: trend, seasonal/cyclical, and random. The random component is called the residual or error. It is simply the difference between our predicted value(s) and the observed value(s). Serial correlation is when the residuals (errors) of our TS models are correlated with each other.

We care about serial correlation because it is critical for the validity of our model predictions, and is intrinsically related to stationarity. Recall that the residuals (errors) of a *stationary* TS are serially *uncorrelated* by definition! If we fail to account for this in our models the standard errors of our coefficients are underestimated, inflating the size of our T-statistics. The result is too many Type-1 errors, where we reject our null hypothesis even when it is True! **In layman's terms, ignoring autocorrelation means our model predictions will be bunk, and we're likely to draw incorrect conclusions about the impact of the independent variables in our model.**
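To make this concrete, here is a small numpy sketch (the 0.7 coefficient is arbitrary) comparing the lag-1 autocorrelation of well-behaved residuals against residuals with leftover structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

def lag1_autocorr(e):
    # sample lag-1 autocorrelation of a (residual) series
    e = e - e.mean()
    return float(np.sum(e[1:] * e[:-1]) / np.sum(e * e))

# residuals from a well-specified model should look like white noise...
white = rng.normal(size=n)

# ...while residuals with leftover structure are serially correlated,
# here following e[t] = 0.7*e[t-1] + w[t]
w = rng.normal(size=n)
e = np.empty(n)
e[0] = w[0]
for t in range(1, n):
    e[t] = 0.7 * e[t-1] + w[t]

print(round(lag1_autocorr(white), 2))  # near 0
print(round(lag1_autocorr(e), 2))      # near 0.7
```

A model whose residuals look like the second series has not captured all the structure in the data, and its standard errors will mislead us.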

White noise is the first Time Series Model (TSM) we need to understand. By definition a time series that is a white noise process has serially UNcorrelated errors and the expected mean of those errors is equal to zero. Another description for serially uncorrelated errors is, independent and identically distributed (i.i.d.). This is important because, if our TSM is appropriate and successful at capturing the underlying process, the residuals of our model will be i.i.d. and resemble a white noise process. Therefore part of TSA is literally trying to fit a model to the time series such that the residual series is indistinguishable from white noise.

Let's simulate a white noise process and view it. Below I introduce a convenience function for plotting the time series and analyzing the serial correlation visually. This code was adapted from the blog Seanabu.com

```
def tsplot(y, lags=None, figsize=(10, 8), style='bmh'):
    if not isinstance(y, pd.Series):
        y = pd.Series(y)
    with plt.style.context(style):
        fig = plt.figure(figsize=figsize)
        #mpl.rcParams['font.family'] = 'Ubuntu Mono'
        layout = (3, 2)
        ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
        acf_ax = plt.subplot2grid(layout, (1, 0))
        pacf_ax = plt.subplot2grid(layout, (1, 1))
        qq_ax = plt.subplot2grid(layout, (2, 0))
        pp_ax = plt.subplot2grid(layout, (2, 1))

        y.plot(ax=ts_ax)
        ts_ax.set_title('Time Series Analysis Plots')
        smt.graphics.plot_acf(y, lags=lags, ax=acf_ax, alpha=0.5)
        smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax, alpha=0.5)
        sm.qqplot(y, line='s', ax=qq_ax)
        qq_ax.set_title('QQ Plot')
        scs.probplot(y, sparams=(y.mean(), y.std()), plot=pp_ax)
        plt.tight_layout()
    return
```

We can model a white noise process easily and output the TS plot for visual inspection.

```
np.random.seed(1)
# plot of discrete white noise
randser = np.random.normal(size=1000)
tsplot(randser, lags=30)
```

Gaussian white noise

We can see that the process appears random and centered about zero. The autocorrelation (ACF) and partial autocorrelation (PACF) plots also indicate no significant serial correlation. Keep in mind we should see approximately 5% significance in the autocorrelation plots due to pure chance, as a result of sampling from the Normal distribution. Below that we can see the QQ and Probability Plots, which compare the distribution of our data against a theoretical distribution, in this case the standard normal. Clearly our data is distributed randomly and appears to follow Gaussian (Normal) white noise, as it should.

```
p("Random Series\n -------------\nmean: {:.3f}\nvariance: {:.3f}\nstandard deviation: {:.3f}"
.format(randser.mean(), randser.var(), randser.std()))
# Random Series
# -------------
# mean: 0.039
# variance: 0.962
# standard deviation: 0.981
```

A Random Walk is defined below:

- Michael Halls-Moore [quantstart.com]

The significance of a random walk is that it is **non-stationary**, because the covariance between observations is time-dependent. If the TS we are modeling is a random walk, it is unpredictable.

Let's simulate a random walk using the "numpy.random.normal(size=our_sample_size)" function to sample from the standard normal distribution.

```
# Random Walk without a drift
np.random.seed(1)
n_samples = 1000
w = np.random.normal(size=n_samples)  # white noise innovations
x = np.empty_like(w)
x[0] = w[0]
for t in range(1, n_samples):
    x[t] = x[t-1] + w[t]
_ = tsplot(x, lags=30)
```

Random walk without a drift

Clearly our TS is not stationary. Let's find out if the random walk model is a good fit for our simulated data. Recall that a random walk is defined as **x(t) = x(t-1) + w(t)**, where **w(t)** is white noise. Rearranging, **x(t) - x(t-1) = w(t)**, so the first difference of a random walk should be indistinguishable from white noise. Let's check:

```
# First difference of simulated Random Walk series
_ = tsplot(np.diff(x), lags=30)
```

first difference of a random walk series

Our definition holds as this looks exactly like a white noise process. What if we fit a random walk to the first difference of SPY's prices?

```
# First difference of SPY prices
_ = tsplot(np.diff(data.SPY), lags=30)
```

fitting a random walk model to SPY ETF prices

Wow, it's quite similar to white noise. However, notice the shape of the QQ and Probability plots. This indicates that the process is close to normality but with **'heavy tails'**. There also appears to be some significant serial correlation in the ACF and PACF plots around lags 1, 5?, 16?, 18 and 21. This suggests there should be better models to describe the actual price-change process.
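As a quick illustration of what "heavy tails" means numerically (simulated data; the Student-t degrees of freedom below are arbitrary), sample excess kurtosis is near zero for Gaussian data and positive for fat-tailed data:

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    # sample excess kurtosis: 0 for a Gaussian, positive for fat tails
    x = x - x.mean()
    return float(np.mean(x**4) / np.mean(x**2)**2 - 3.0)

gauss = rng.normal(size=200_000)
fat = rng.standard_t(df=10, size=200_000)  # Student-t: heavier tails

print(round(excess_kurtosis(gauss), 1))  # near 0
print(round(excess_kurtosis(fat), 1))    # clearly positive (true value is 1.0 for t with df=10)
```

Financial return series typically show positive excess kurtosis, which is what the bowed QQ plot above is telling us visually.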

Linear models aka trend models represent a TS that can be graphed using a straight line. The basic equation is:

In this model the value of the dependent variable is determined by the beta coefficients and a single independent variable, *time*. An example could be a company's sales that increase by the same amount at each time step. Let's look at a contrived example below. In this simulation, Firm ABC's baseline sales are -$50.00 (*beta 0*, the intercept term) and sales grow by +$25.00 (*beta 1*) at each time step.

```
# simulate linear trend
# example Firm ABC sales are -$50 by default and +$25 at every time step
w = np.random.randn(100)
y = np.empty_like(w)
b0 = -50.
b1 = 25.
for t in range(len(w)):
    y[t] = b0 + b1*t + w[t]
_ = tsplot(y, lags=30)
```

Linear trend model simulation

Here we can see that the residuals of the model are correlated and linearly decreasing as a function of the lag. The distribution is approximately normal. Before using this model to make predictions we would have to account for and remove the obvious autocorrelation present in the series. The significance of the PACF at lag 1 indicates that an *autoregressive* model may be appropriate.

These models are similar to linear models, except that the data points form an exponential function representing a constant *rate* of change at each time step. For example, firm ABC's sales increasing by X% at each time step. When plotting the simulated sales data you get a curve that looks like this:

```
# Simulate ABC exponential growth
# fake dates
idx = pd.date_range('2007-01-01', '2012-01-01', freq='M')
# fake sales increasing at exponential rate
sales = [np.exp(x/12) for x in range(1, len(idx)+1)]
# create dataframe and plot
df = pd.DataFrame(sales, columns=['Sales'], index=idx)
with plt.style.context('bmh'):
    df.plot()
    plt.title('ABC Sales')
```

simulated exponential function

We can then transform the data by taking the natural logarithm of sales. Now a linear regression is a much better fit to the data.

```
# ABC log sales
with plt.style.context('bmh'):
    pd.Series(np.log(sales), index=idx).plot()
    plt.title('ABC Log Sales')
```

Natural logarithm of exponential function

These models have a fatal weakness, as discussed previously: they assume serially UNcorrelated errors, which, as we saw in the linear model example, is not true. In real life, TS data usually violates our stationarity assumptions, which motivates our progression to autoregressive models.

When the dependent variable is regressed against one or more lagged values of itself the model is called autoregressive. The formula looks like this:

AR(p) model formula

When you describe the **"order"** of the model, as in, an AR model of order **"p", **the p represents the number of lagged variables used within the model. For example an AR(2) model or *second-order *autoregressive model looks like this:

AR(2) model formula

Here, alpha (a) is the coefficient, and omega (w) is a white noise term. Alpha cannot equal zero in an AR model. Note that an AR(1) model with alpha set equal to 1 is a *random walk* and therefore not stationary.

AR(1) model with alpha = 1; random walk

Let's simulate an AR(1) model with alpha set equal to 0.6

```
# Simulate an AR(1) process with alpha = 0.6
np.random.seed(1)
n_samples = int(1000)
a = 0.6
w = np.random.normal(size=n_samples)
x = np.empty_like(w)
x[0] = w[0]
for t in range(1, n_samples):
    x[t] = a*x[t-1] + w[t]
_ = tsplot(x, lags=30)
```

AR(1) Model with alpha = 0.6

As expected the distribution of our simulated AR(1) model is normal. There is significant serial correlation between lagged values especially at lag 1 as evidenced by the PACF plot.

Now we can fit an AR(p) model using Python's statsmodels. First we fit the AR model to our simulated data and return the estimated alpha coefficient. Then we use the statsmodels function **"select_order()" **to see if the fitted model will select the correct lag. If the AR model is correct the estimated alpha coefficient will be close to our true alpha of 0.6 and the selected order will equal 1.

```
# Fit an AR(p) model to simulated AR(1) model with alpha = 0.6
mdl = smt.AR(x).fit(maxlag=30, ic='aic', trend='nc')
%time est_order = smt.AR(x).select_order(maxlag=30, ic='aic', trend='nc')
true_order = 1
p('\nalpha estimate: {:3.5f} | best lag order = {}'
  .format(mdl.params[0], est_order))
p('\ntrue alpha = {} | true order = {}'
  .format(a, true_order))
```

Looks like we were able to recover the underlying parameters of our simulated data. Let's simulate an AR(2) process with alpha_1 = 0.666 and alpha_2 = -0.333. For this we make use of statsmodels' **"arma_generate_sample()"** function, which allows us to simulate an AR model of arbitrary order. Note that there are some peculiarities of Python's version which require us to take some extra steps before using the function.

```
# Simulate an AR(2) process
n = int(1000)
alphas = np.array([.666, -.333])
betas = np.array([0.])
# Python requires us to specify the zero-lag value which is 1
# Also note that the alphas for the AR model must be negated
# We also set the betas for the MA equal to 0 for an AR(p) model
# For more information see the examples at statsmodels.org
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ar2 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
_ = tsplot(ar2, lags=30)
```

AR(2) simulation with alpha_1 = 0.666 and alpha_2 = -0.333

Let's see if we can recover the correct parameters.

```
# Fit an AR(p) model to simulated AR(2) process
max_lag = 10
mdl = smt.AR(ar2).fit(maxlag=max_lag, ic='aic', trend='nc')
est_order = smt.AR(ar2).select_order(maxlag=max_lag, ic='aic', trend='nc')
true_order = 2
p('\ncoef estimate: {:3.4f} {:3.4f} | best lag order = {}'
  .format(mdl.params[0], mdl.params[1], est_order))
p('\ntrue coefs = {} | true order = {}'
  .format([.666, -.333], true_order))
# coef estimate: 0.6291 -0.3196 | best lag order = 2
# true coefs = [0.666, -0.333] | true order = 2
```

Not bad. Let's see how the AR(p) model will fit MSFT log returns. Here is the return TS.

MSFT log returns time series

```
# Select best lag order for MSFT returns
max_lag = 30
mdl = smt.AR(lrets.MSFT).fit(maxlag=max_lag, ic='aic', trend='nc')
est_order = smt.AR(lrets.MSFT).select_order(maxlag=max_lag, ic='aic', trend='nc')
p('best estimated lag order = {}'.format(est_order))
# best estimated lag order = 23
```

The best order is 23 lags or 23 parameters! Any model with this many parameters is unlikely to be useful in practice. Clearly there is more complexity underlying the returns process than this model can explain.

MA(q) models are very similar to AR(p) models. The difference is that the MA(q) model is a linear combination of past white noise error terms as opposed to a linear combo of past observations like the AR(p) model. The motivation for the MA model is that we can observe "shocks" in the error process directly by fitting a model to the error terms. In an AR(p) model these shocks are observed indirectly by using the ACF on the series of past observations. The formula for an MA(q) model is:

Omega (w) is white noise with E(wt) = 0 and variance of sigma squared. Let's simulate this process using beta=0.6 and specifying the AR(p) alpha equal to 0.

```
# Simulate an MA(1) process
n = int(1000)
# set the AR(p) alphas equal to 0
alphas = np.array([0.])
betas = np.array([0.6])
# add zero-lag and negate alphas
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma1 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
_ = tsplot(ma1, lags=30)
```

Simulated ma(1) process with beta=0.6

The ACF shows that lag 1 is significant, which indicates that an MA(1) model may be appropriate for our simulated series. I'm not sure how to interpret the PACF showing significance at lags 2, 3, and 4 when the ACF only shows significance at lag 1. Regardless, we can now attempt to fit an MA(1) model to our simulated data. We can use the same statsmodels **"ARMA()"** function, specifying our chosen orders, and call its **"fit()"** method to return the model output.

```
# Fit the MA(1) model to our simulated time series
# Specify ARMA model with order (p, q)
max_lag = 30
mdl = smt.ARMA(ma1, order=(0, 1)).fit(maxlag=max_lag, method='mle', trend='nc')
p(mdl.summary())
```

MA(1) model summary

The model was able to correctly estimate the lag coefficient, as 0.58 is close to our true value of 0.6. Also notice that the 95% confidence interval does contain the true value. Let's try simulating an MA(3) process, then use our ARMA function to fit a third-order MA model to the series and see if we can recover the correct lag coefficients (betas). Betas 1-3 are equal to 0.6, 0.4, and 0.2 respectively.

```
# Simulate MA(3) process with betas 0.6, 0.4, 0.2
n = int(1000)
alphas = np.array([0.])
betas = np.array([0.6, 0.4, 0.2])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
ma3 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n)
_ = tsplot(ma3, lags=30)
```

Simulated ma(3) process with betas = [0.6, 0.4, 0.2]

```
# Fit MA(3) model to simulated time series
max_lag = 30
mdl = smt.ARMA(ma3, order=(0, 3)).fit(maxlag=max_lag, method='mle', trend='nc')
p(mdl.summary())
```

MA(3) model summary

The model was able to estimate the real coefficients effectively. Our 95% confidence intervals also contain the true parameter values of 0.6, 0.4, and 0.2. Now let's try fitting an MA(3) model to SPY's log returns. Keep in mind we do not know the *true* parameter values.

```
# Fit MA(3) to SPY returns
max_lag = 30
Y = lrets.SPY
mdl = smt.ARMA(Y, order=(0, 3)).fit(maxlag=max_lag, method='mle', trend='nc')
p(mdl.summary())
_ = tsplot(mdl.resid, lags=max_lag)
```

SPY MA(3) model summary

Let's look at the model residuals.

SPY MA(3) model Residuals

Not bad. Some of the ACF lags concern me especially at 5, 16, and 18. It could be sampling error but that combined with the heaviness of the tails makes me think this isn't the best model to predict future SPY returns.

As you may have guessed, the ARMA model is simply the merger between AR(p) and MA(q) models. Let's recap what these models represent to us from a quant finance perspective:

- AR(p) models try to capture *(explain)* the momentum and mean reversion effects often observed in trading markets.
- MA(q) models try to capture *(explain)* the shock effects observed in the white noise terms. These shock effects could be thought of as unexpected events affecting the observation process, e.g. surprise earnings, a terrorist attack, etc.

"For a set of products in a grocery store, the number of active coupon campaigns introduced at different times would constitute multiple 'shocks' that affect the prices of the products in question."

- AM207: Pavlos Protopapas, Harvard University

ARMA's weakness is that it ignores the *volatility clustering *effects found in most financial time series.
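A small numpy sketch of what ARMA misses (parameters arbitrary): an ARCH-style series is serially uncorrelated, so ARMA sees little to model, yet its *squares* are clearly autocorrelated, which is exactly what volatility clustering looks like in the data.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20_000

def lag1_autocorr(x):
    # sample lag-1 autocorrelation
    x = x - x.mean()
    return float(np.sum(x[1:] * x[:-1]) / np.sum(x * x))

# ARCH(1)-style series: today's variance depends on yesterday's squared value
w = rng.normal(size=n)
y = np.empty(n)
y[0] = w[0]
for t in range(1, n):
    y[t] = w[t] * np.sqrt(1.0 + 0.25 * y[t-1]**2)

print(round(lag1_autocorr(y), 3))     # raw series: near zero, looks like white noise
print(round(lag1_autocorr(y**2), 3))  # squared series: clearly positive
```

This is a preview of the ARCH models covered at the end of this post, which model that second line directly.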

The model formula is:

arma(p, q) equation

Let's simulate an ARMA(2, 2) process with given parameters, then fit an ARMA(2, 2) model and see if it can correctly estimate those parameters. Set alphas equal to [0.5,-0.25] and betas equal to [0.5,-0.3].

```
# Simulate an ARMA(2, 2) model with alphas=[0.5,-0.25] and betas=[0.5,-0.3]
max_lag = 30
n = int(5000)  # lots of samples to help estimates
burn = int(n/10)  # number of samples to discard before fit
alphas = np.array([0.5, -0.25])
betas = np.array([0.5, -0.3])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
arma22 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n, burnin=burn)
_ = tsplot(arma22, lags=max_lag)

mdl = smt.ARMA(arma22, order=(2, 2)).fit(maxlag=max_lag, method='mle', trend='nc', burnin=burn)
p(mdl.summary())
```

simulated ARma(2, 2) process

ARMA(2, 2) Model summary

The model has correctly recovered our parameters, and our true parameters are contained within the 95% confidence interval.

Next we simulate an ARMA(3, 2) model. Afterwards, we cycle through a non-trivial number of (p, q) combinations, fitting an ARMA model to our simulated series each time. We choose the best combination based on which model produces the lowest Akaike Information Criterion (AIC).

```
# Simulate an ARMA(3, 2) model with alphas=[0.5,-0.25,0.4] and betas=[0.5,-0.3]
max_lag = 30
n = int(5000)
burn = 2000
alphas = np.array([0.5, -0.25, 0.4])
betas = np.array([0.5, -0.3])
ar = np.r_[1, -alphas]
ma = np.r_[1, betas]
arma32 = smt.arma_generate_sample(ar=ar, ma=ma, nsample=n, burnin=burn)
_ = tsplot(arma32, lags=max_lag)

# pick best order by aic
# smallest aic value wins
best_aic = np.inf
best_order = None
best_mdl = None
rng = range(5)
for i in rng:
    for j in rng:
        try:
            tmp_mdl = smt.ARMA(arma32, order=(i, j)).fit(method='mle', trend='nc')
            tmp_aic = tmp_mdl.aic
            if tmp_aic < best_aic:
                best_aic = tmp_aic
                best_order = (i, j)
                best_mdl = tmp_mdl
        except:
            continue
p('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
# aic: 14108.27213 | order: (3, 2)
```

The correct order was recovered above. Below we see the output of our simulated time series before any model fitting.

Simulated arma(3, 2) series with alphas = [0.5,-0.25,0.4] and betas = [0.5,-0.3]

ARmA(3, 2) BEst model summary

We see that the correct order was selected and the model correctly estimated our parameters. However, notice the MA.L1.y coefficient; the true value of 0.5 is almost outside of the 95% confidence interval!

Below we observe the model's residuals. Clearly it is a white noise process, thus the best model has been fit to *explain* the data.

ARMA(3, 2) best model residual white noise

Next we fit an ARMA model to SPY returns. The plot below is the time series before model fitting.

SPY Returns

```
# Fit ARMA model to SPY returns
best_aic = np.inf
best_order = None
best_mdl = None
rng = range(5)  # [0,1,2,3,4]
for i in rng:
    for j in rng:
        try:
            tmp_mdl = smt.ARMA(lrets['SPY'], order=(i, j)).fit(method='mle', trend='nc')
            tmp_aic = tmp_mdl.aic
            if tmp_aic < best_aic:
                best_aic = tmp_aic
                best_order = (i, j)
                best_mdl = tmp_mdl
        except:
            continue
p('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
# aic: -11518.22902 | order: (4, 4)
```

We plot the model residuals.

SPY best model residuals arma(4, 4)

The ACF and PACF are showing no significant autocorrelation. The QQ and Probability Plots show the residuals are approximately normal with heavy tails. However, this model's residuals do NOT look like white noise! Look at the highlighted areas of obvious conditional heteroskedasticity (*conditional volatility*) that the model has not captured.

ARIMA is a natural extension of the class of ARMA models. As previously mentioned, many of our TS are not stationary; however, they can be made stationary by differencing. We saw an example of this when we took the first difference of a Gaussian random walk and showed that it equals white noise. Said another way, we took the nonstationary random walk and transformed it to stationary white noise by first-differencing.

Without diving too deeply into the equation, just know that the **"d"** refers to the number of times we difference the series. A side note: in Python we must use the **np.diff()** function if we need to difference a series more than once, as the pandas **DataFrame.diff()/Series.diff()** functions only take the first difference of a dataframe/series and do not implement the recursive differencing needed in TSA.
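For example, a quadratic trend needs two rounds of differencing before it becomes constant; **np.diff(x, n=2)** handles the recursion in one call (the toy series below is illustrative):

```python
import numpy as np

# np.diff with n=2 applies differencing recursively
x = np.array([1., 4., 9., 16., 25.])  # squares: a quadratic trend
d1 = np.diff(x)        # first difference still trends upward
d2 = np.diff(x, n=2)   # second difference: constant, trend removed

print(d1.tolist())  # [3.0, 5.0, 7.0, 9.0]
print(d2.tolist())  # [2.0, 2.0, 2.0]
```

Each round of differencing shortens the series by one observation, which is why d2 has two fewer points than x.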

In the following example, we iterate through a non-trivial number of combinations of (p, d, q) orders, to find the best ARIMA model to fit SPY returns. We use the AIC to evaluate each model. The lowest AIC wins.

```
# Fit ARIMA(p, d, q) model to SPY Returns
# pick best order and final model based on aic
best_aic = np.inf
best_order = None
best_mdl = None
pq_rng = range(5)  # [0,1,2,3,4]
d_rng = range(2)   # [0,1]
for i in pq_rng:
    for d in d_rng:
        for j in pq_rng:
            try:
                tmp_mdl = smt.ARIMA(lrets.SPY, order=(i, d, j)).fit(method='mle', trend='nc')
                tmp_aic = tmp_mdl.aic
                if tmp_aic < best_aic:
                    best_aic = tmp_aic
                    best_order = (i, d, j)
                    best_mdl = tmp_mdl
            except:
                continue
p('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
# aic: -11518.22902 | order: (4, 0, 4)

# ARIMA model resid plot
_ = tsplot(best_mdl.resid, lags=30)
```

It should be no surprise that the best model has a differencing of 0. Recall that we already took the first difference of log prices to calculate the stock returns. Below, I plot the model residuals. The result is essentially identical to the ARMA(4, 4) model we fit above. Clearly this ARIMA model has not explained the conditional volatility in the series either!

ARIMA model fit to spy returns

Now we have accumulated enough knowledge to make a simple forecast of future returns, using our model's **forecast()** method. As arguments, it takes an integer for the number of time steps to predict and a decimal for the alpha argument specifying the confidence interval. The default is a 95% confidence interval; for 99%, set alpha equal to 0.01.

```
# Create a 21 day forecast of SPY returns with 95%, 99% CI
n_steps = 21

f, err95, ci95 = best_mdl.forecast(steps=n_steps)              # 95% CI
_, err99, ci99 = best_mdl.forecast(steps=n_steps, alpha=0.01)  # 99% CI

idx = pd.date_range(data.index[-1], periods=n_steps, freq='D')
fc_95 = pd.DataFrame(np.column_stack([f, ci95]),
                     index=idx, columns=['forecast', 'lower_ci_95', 'upper_ci_95'])
fc_99 = pd.DataFrame(np.column_stack([ci99]),
                     index=idx, columns=['lower_ci_99', 'upper_ci_99'])
fc_all = fc_95.combine_first(fc_99)
fc_all.head()
```
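If **combine_first** is unfamiliar: it keeps the calling frame's values and fills anything missing from the argument frame. A toy sketch (column names chosen to mirror the forecast frames above):

```python
import pandas as pd

idx = pd.date_range('2017-01-01', periods=3, freq='D')
a = pd.DataFrame({'forecast': [0.1, 0.2, 0.3]}, index=idx)
b = pd.DataFrame({'lower_ci_99': [-1.0, -1.1, -1.2]}, index=idx)

# values come from `a` first; gaps (here, a whole missing column) are filled from `b`
merged = a.combine_first(b)
print(sorted(merged.columns))  # ['forecast', 'lower_ci_99']
```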

```
# Plot 21 day forecast for SPY returns
plt.style.use('bmh')
fig = plt.figure(figsize=(9,7))
ax = plt.gca()

ts = lrets.SPY.iloc[-500:].copy()
ts.plot(ax=ax, label='SPY Returns')

# in-sample prediction
pred = best_mdl.predict(ts.index[0], ts.index[-1])
pred.plot(ax=ax, style='r-', label='In-sample prediction')

styles = ['b-', '0.2', '0.75', '0.2', '0.75']
fc_all.plot(ax=ax, style=styles)
plt.fill_between(fc_all.index, fc_all.lower_ci_95, fc_all.upper_ci_95, color='gray', alpha=0.7)
plt.fill_between(fc_all.index, fc_all.lower_ci_99, fc_all.upper_ci_99, color='gray', alpha=0.2)

plt.title('{} Day SPY Return Forecast\nARIMA{}'.format(n_steps, best_order))
plt.legend(loc='best', fontsize=10)
```

21-day SPY return forecast, ARIMA(4, 0, 4)

ARCH(p) models can be thought of as simply an AR(p) model applied to the variance of a time series. Another way to think about it is that the variance of the series *at time t* is conditional on past observations of the variance in previous periods.

σ²_t = α₀ + α₁·y²_{t-1}   *(ARCH(1) model formula, per Penn State)*

Assuming the series has zero mean we can express the model as:

y_t = w_t·σ_t = w_t·√(α₀ + α₁·y²_{t-1}), where w_t is white noise   *(ARCH(1) with zero mean)*

```
# Simulate ARCH(1) series
# Var(y_t) = a0 + a1*y_{t-1}**2
# the series is stationary when 0 <= a1 < 1
np.random.seed(13)
a0 = 2
a1 = .5

w = np.random.normal(size=1000)  # white noise innovations
Y = np.empty_like(w)
Y[0] = w[0] * np.sqrt(a0)
for t in range(1, len(w)):
    Y[t] = w[t] * np.sqrt(a0 + a1*Y[t-1]**2)

# simulated ARCH(1) series, looks like white noise
tsplot(Y, lags=30)
```

Simulated ARCH(1) process

Simulated ARCH(1) process, squared

Notice the ACF and PACF seem to show significance at lag 1, indicating that an AR(1) model for the variance may be appropriate.

Simply put, GARCH(p, q) is an ARMA model applied to the variance of a time series: it has an autoregressive term and a moving average term. One set of terms models the lagged squared residuals of the process (the ARCH component), and the other models the lagged conditional variances (the GARCH component). The basic GARCH(1, 1) formula is:

σ²_t = ω + α₁·ε²_{t-1} + β₁·σ²_{t-1}   *(GARCH(1, 1) formula, per quantstart.com)*

Here ω is the constant term, and α₁ and β₁ are parameters of the model (the innovations w_t are white noise). Note that α₁ + β₁ must be less than 1 or the model is unstable. We can simulate a GARCH(1, 1) process below.
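As a quick sanity check of the stability condition: when α₁ + β₁ < 1, a GARCH(1, 1) process has a finite unconditional (long-run) variance of ω / (1 − α₁ − β₁). For the parameters used in the simulation below, that works out to exactly 1:

```python
# GARCH(1, 1): sigsq_t = a0 + a1*eps_{t-1}**2 + b1*sigsq_{t-1}
a0, a1, b1 = 0.2, 0.5, 0.3   # same parameters as the simulation below

# covariance stationarity requires a1 + b1 < 1;
# the unconditional variance is then a0 / (1 - a1 - b1)
assert a1 + b1 < 1
uncond_var = a0 / (1 - a1 - b1)
print(round(uncond_var, 6))  # 1.0
```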

```
# Simulating a GARCH(1, 1) process
np.random.seed(2)
a0 = 0.2
a1 = 0.5
b1 = 0.3

n = 10000
w = np.random.normal(size=n)
eps = np.zeros_like(w)
sigsq = np.zeros_like(w)

for i in range(1, n):
    sigsq[i] = a0 + a1*(eps[i-1]**2) + b1*sigsq[i-1]
    eps[i] = w[i] * np.sqrt(sigsq[i])

_ = tsplot(eps, lags=30)
```

Simulated GARCH(1, 1) process

Again, notice that overall this process closely resembles white noise; however, take a look when we view the squared **eps** series.

Simulated GARCH(1, 1) process, squared

There is clearly autocorrelation present, and the significance of the lags in both the ACF and PACF indicates we need both AR and MA components for our model. Let's see if we can recover our process parameters using a GARCH(1, 1) model. Here we make use of the **arch_model** function from the **arch** package.

```
# Fit a GARCH(1, 1) model to our simulated EPS series
# We use the arch_model function from the ARCH package
am = arch_model(eps)
res = am.fit(update_freq=5)
p(res.summary())
```

GARCH model fit summary

Now let's run through an example using SPY returns. The process is as follows:

- Iterate through combinations of ARIMA(p, d, q) models to best fit our time series.
- Pick the GARCH model orders according to the ARIMA model with lowest AIC.
- Fit the GARCH(p, q) model to our time series.
- Examine the model residuals and squared residuals for autocorrelation.

Also note that I've chosen a specific time period to better highlight key points. However the results will be different depending on the time period under study.

```
def _get_best_model(TS):
    best_aic = np.inf
    best_order = None
    best_mdl = None

    pq_rng = range(5)  # [0,1,2,3,4]
    d_rng = range(2)   # [0,1]
    for i in pq_rng:
        for d in d_rng:
            for j in pq_rng:
                try:
                    tmp_mdl = smt.ARIMA(TS, order=(i,d,j)).fit(
                        method='mle', trend='nc'
                    )
                    tmp_aic = tmp_mdl.aic
                    if tmp_aic < best_aic:
                        best_aic = tmp_aic
                        best_order = (i, d, j)
                        best_mdl = tmp_mdl
                except Exception:
                    continue
    p('aic: {:6.5f} | order: {}'.format(best_aic, best_order))
    return best_aic, best_order, best_mdl

# Notice I've selected a specific time period to run this analysis
TS = lrets.SPY.loc['2012':'2015']
res_tup = _get_best_model(TS)
# aic: -5255.56673 | order: (3, 0, 2)
```

Residuals of ARIMA(3, 0, 2) model fit to SPY returns

Looks like white noise.

Squared residuals of ARIMA(3, 0, 2) model fit to SPY returns

Squared residuals show autocorrelation. Let's fit a GARCH model and see how it does.

```
# Now we can fit the arch model using the best fit arima model parameters
_, order, _ = res_tup
p_, o_, q_ = order

# Using a Student's t distribution usually provides a better fit
am = arch_model(TS, p=p_, o=o_, q=q_, dist='StudentsT')
res = am.fit(update_freq=5, disp='off')
p(res.summary())
```

GARCH(3, 2) model fit to SPY returns

Convergence warnings can occur when dealing with very small numbers; multiplying the series by a factor of 10 or 100 to scale up its magnitude can help when necessary, though for this demonstration it isn't needed. Below are the model residuals.

Residuals of GARCH(3, 2) model fit to SPY returns

Looks like white noise above. Now let's view the ACF and PACF of the squared residuals.

Looks like we have achieved a good model fit as there is no obvious autocorrelation in the squared residuals.
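With a good fit in hand, the one-step-ahead conditional variance follows directly from the GARCH(1, 1) recursion. A minimal sketch with hypothetical parameter values (substitute the fitted values from **res.params**; the arch package can also produce multi-step forecasts via **res.forecast()**):

```python
# hypothetical fitted GARCH(1, 1) parameters -- substitute res.params values
omega, alpha, beta = 0.02, 0.10, 0.85

def garch_one_step(last_resid, last_sigma2):
    """One-step-ahead conditional variance:
    sigma2_{t+1} = omega + alpha*eps_t**2 + beta*sigma2_t"""
    return omega + alpha * last_resid**2 + beta * last_sigma2

print(round(garch_one_step(last_resid=0.5, last_sigma2=0.3), 4))  # 0.3
```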

- QuantStart - https://www.quantstart.com/articles#time-series-analysis
- Harvard AM207 Lectures in Python - http://iacs-courses.seas.harvard.edu/courses/am207/blog/lecture-17.html
- Penn State Statistics (STAT 510) - https://onlinecourses.science.psu.edu/stat510/node/78
- Stationarity plot + tsplot - http://www.seanabu.com/2016/03/22/time-series-seasonal-ARIMA-model-in-python/
- Stationarity notes - http://people.duke.edu/~rnau/411diff.htm
- Interpreting QQ plots - http://stats.stackexchange.com/questions/101274/how-to-interpret-a-qq-plot
- Kaplan SchweserNotes (Level 2) - Quantitative Methods

- Strategy Summary
- Results
- Conclusions/Analysis

First, if you're unfamiliar with the Implied Volatility Skew Strategy you can find a recent deep dive into the strategy and its performance **here.**

In this short post, I look at the effect of using only the top N ranked ETFs from each Long/Short portfolio. In this case, N is equal to 3. This is an arbitrary selection and this study could be done with the top 1, 2, 4, etc. This differs from the original strategy in that the original strategy simply bins the ETFs into quantiles (according to the factor value) and creates the portfolio from the top and bottom quantiles; **this strategy takes the top 3 ETFs from within the top and bottom quantiles.**

**I will reference the original strategy as S1 and the modified strategy as S2 for the remainder of the post.**

This strategy comparison exposes some key themes found in quant finance, but first some highlights:

- S2 outperforms S1 on a pure total return basis over the period. (~23% vs ~18%)
- S2 has a higher annualized alpha. (26% vs 20%)
- S2 has a lower annualized beta. (0% vs 10%)
- S2 has nearly double the volatility of S1. (13% vs 7%)
- S2 has nearly double the maximum drawdown of S1. (-4.4% vs -2.3%)
- S2 has max drawdown duration 3x larger than S1. (72 days vs 24 days)
- Both strategies have a positive information ratio of approximately 6%.

We can extract a few lessons here.

S2 trades a maximum of six ETFs at a time versus the ~10+ ETFs of S1, so **we would expect S2 to have a higher level of volatility than S1.** You might ask, "Why is that?" This is basic Modern Portfolio Theory, which relates diversification and portfolio volatility to the number of assets traded and their correlations with each other. Essentially, **the theory says that the volatility of a portfolio of assets will be less than the weighted sum of their individual volatilities as long as the correlations between the assets are not all equal to 1.** This is the power and benefit of diversification in a nutshell.
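A two-asset sketch makes the diversification effect concrete (the 20% vols and equal weights are made-up illustrative numbers, not from either strategy):

```python
import numpy as np

# two assets, each with 20% annualized vol, equal weights
sigma = np.array([0.20, 0.20])
w = np.array([0.5, 0.5])

def port_vol(rho):
    """Portfolio volatility sqrt(w' C w) for a given pairwise correlation rho."""
    cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])
    return float(np.sqrt(w @ cov @ w))

# only at rho = 1 does portfolio vol equal the weighted sum of the asset vols
print(round(port_vol(1.0), 4), round(port_vol(0.3), 4), round(port_vol(0.0), 4))
```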

Unfortunately for S2, the increase in total returns is not enough to compensate us for the increase in volatility (risk). This shows up primarily in the annualized Sharpe ratio: **S1 clearly has the edge in return per unit of risk, with a Sharpe ratio of ~2.8 vs ~2.0.** It also manifests in the Calmar ratio, which tracks the ratio of average annual return to maximum drawdown. **Here S1 is also superior, sporting a Calmar ratio of ~9.8 vs ~6.7.**
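For reference, both ratios are simple to compute from a daily return series. A minimal sketch on synthetic data (not the actual strategy returns), with no risk-free-rate adjustment:

```python
import numpy as np

def annualized_sharpe(daily_rets, periods=252):
    """Annualized Sharpe ratio (risk-free rate assumed zero)."""
    return np.sqrt(periods) * daily_rets.mean() / daily_rets.std(ddof=1)

def calmar(daily_rets, periods=252):
    """Annualized return divided by the magnitude of the max drawdown."""
    equity = np.cumprod(1 + daily_rets)
    peak = np.maximum.accumulate(equity)
    max_dd = ((equity - peak) / peak).min()  # most negative drawdown
    ann_ret = equity[-1] ** (periods / len(daily_rets)) - 1
    return ann_ret / abs(max_dd)

rng = np.random.default_rng(7)
rets = rng.normal(0.0005, 0.01, size=252)  # one year of synthetic daily returns
print(round(annualized_sharpe(rets), 2), round(calmar(rets), 2))
```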

S2's lower beta surprised me initially, but it is likely explained by trading fewer ETFs overall, combined with those ETFs having low correlation to the market.

**Another issue we encounter with S2 vs S1 has to do with liquidity.** In the equity curve charts, I have highlighted the general region where the strategy was on hiatus. Notice that in S2 there are tiny trades still occurring throughout this period, when there should be no trades happening at all. **Because S2 has to invest more cash in a smaller number of ETFs, if any of those ETFs are very thinly traded, the result is lots of unfilled orders that can take a significant amount of time to finish allocating or liquidating.** Again, this is another side effect of S2's reduced diversification.

Thus far all the evidence supports S1 as the superior strategy, indicating that using a top-N factor rank within the top and bottom quantiles does not improve the strategy.


- Strategy Summary
- References
- 4-Week Holding Period Strategy Update
- 1-Week Holding Period Strategy Updated (Target Leverage=2)

**ABSTRACT**

This predictability persists for at least six months, and firms with the steepest volatility smirks are those experiencing the worst earnings shocks in the following quarter. The results are consistent with the notion that informed traders with negative news prefer to trade out-of-the-money put options, and that the equity market is slow in incorporating the information embedded in volatility smirks. [1]

Here is the skew measure they use.

Source: *What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns?*
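If memory serves, the paper's SKEW measure is the implied volatility of out-of-the-money puts minus that of at-the-money calls; a hypothetical helper (check the paper for the exact moneyness cutoffs):

```python
def volatility_skew(otm_put_iv, atm_call_iv):
    """SKEW = IV(OTM put) - IV(ATM call); larger values = steeper smirk."""
    return otm_put_iv - atm_call_iv

# e.g. a 35% OTM-put IV against a 28% ATM-call IV
print(round(volatility_skew(0.35, 0.28), 2))  # 0.07
```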

The following symbols were removed from analysis: IPW, IYC, PLTM, GAF, GUNR, IYK

*Results simulated using the Quantopian platform.*

Download the spreadsheet here.

Download a text file of all the portfolio stocks here.

RESULTS SIMULATED USING QUANTOPIAN PLATFORM

- Strategy Summary
- References
- 4-Week Holding Period Strategy Update
- 1-Week Holding Period Strategy Updated (Target Leverage=2)
- Deep Dive into the Weekly Strategy using Quantopian's Pyfolio
- Strategy Concerns


Here I use Quantopian's portfolio analytics tool to further analyze the strategy. Note that all returns are simulated.

Annualized returns are extremely healthy at 26%, with annualized volatility in the single digits. The Sharpe ratio is impressive at 3.2, and the Calmar ratio is in the double digits, reflecting the low maximum drawdown this strategy has sustained. Some features we would hope for include a positive skew, which indicates a higher likelihood of positive returns. Annualized alpha is substantial at 22% while maintaining very low beta exposure to the benchmark (SPY) at 10%. These numbers are fantastic, but we must pump the brakes on our excitement as the strategy only has 9 months of results. Furthermore, for at least 1.5 of those 9 months the strategy was not tracked due to the demise of the Yahoo Finance options API.

A quick glance shows that the strategy has had strong cumulative return performance over the first three quarters of 2016. We can see the gap in the strategy results during the period I was searching for a new options data provider.

Monthly and weekly returns skew to the positive. The only losing month thus far appears to be March. Even without the ~8% gain in April, the strategy still skews positively.

Rolling portfolio beta relative to the SPY benchmark is near zero. The six-month rolling Sharpe has been consistently high, around 3, for the last three months. The strategy has near-zero sensitivity to the rolling Fama-French factors, which is a positive sign.

This strategy turns over almost once a week, as designed, with daily trading volume averaging a little over 20,000 shares. Gross leverage very rarely exceeds two. Based on the gross leverage we can see that the strategy struggles to get filled on the more thinly traded ETFs, which results in gross leverage rarely achieving its target of two. This is with my custom slippage model, which raises the bar-volume limit to 50%, meaning the strategy may trade up to 50% of a single minute bar's volume.

Drawdowns have been manageable and very small, with the max at 2.26% only lasting 16 days. Max drawdown duration has been approximately 1 month or 24 days.

Here are the maximum long and short portfolio weights. ~22% isn't awful, but it's not ideal. Again, this is a reflection of slippage and a lack of timely fills. However, it is not all bad, as seen below.

Clearly those overweight positions are the result of slippage during portfolio turnover, as those allocations show up as short-duration spikes on both the long and short side in the **Portfolio Allocation Over Time** plot. More importantly, in the **Long/Short Max and Median Position Concentration** chart we can see that during the position-weight spikes the portfolio has held near-offsetting positions, so it is not unhedged or overexposed to either the long or short side for long periods of time. We can confirm this by plotting the net leverage and average net leverage.

Let's examine some trading statistics.

The first item that jumps out is that the quantity of **losing round trips** is greater than that of **winning round trips** for **short trades**, which is interesting. Only 38% of the short trades are profitable. However, look at the **Profit factor** row: for short trades it is very close to 1, indicating near breakeven. How is that possible? Look at the **Ratio Avg. Win:Avg Loss**. Short trades have the highest ratio at 1.58, meaning the average winning short trade is almost 1.6x the size of the average losing one in dollar terms. The **Profit factor** for the long trades is a healthy 1.93, and the strategy overall shows an investable profit factor of 1.41.
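These statistics are easy to reproduce from a per-trade P&L ledger. A toy example constructed to mimic the short-trade pattern described (the numbers are illustrative, not the strategy's actual trades):

```python
import numpy as np

def trade_stats(pnl):
    """Profit factor, avg-win/avg-loss ratio, and win rate from per-trade P&L."""
    wins = pnl[pnl > 0]
    losses = pnl[pnl < 0]
    profit_factor = wins.sum() / abs(losses.sum())
    win_loss_ratio = wins.mean() / abs(losses.mean())
    win_rate = len(wins) / len(pnl)
    return profit_factor, win_loss_ratio, win_rate

# only 40% winners, but the winners are much larger than the losers
pnl = np.array([90.0, 100.0, -55.0, -60.0, -65.0])
pf, wl, wr = trade_stats(pnl)
print(round(pf, 2), round(wl, 2), wr)  # 1.06 1.58 0.4
```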

The strategy looks great overall. However there are key concerns that must be highlighted. I will do so in bullet form:

- Small sample size: only 9 months of data, during ~1.5 of which the strategy was not tracked.
- Liquidity and slippage can be issues when the strategy is run live. The open question: thanks to the ETF creation/redemption process, is there more liquidity available for some of the thinly traded ETFs during live trading than is reflected in historical volume?
- If gross and net leverage are brought into better balance the strategy returns could differ from the simulation results for better or for worse.
- Execution and commission costs could, in theory, be improved and/or reduced for the benefit of the strategy due to the algorithm's high turnover. This gives the portfolio manager/administrator leverage in negotiating commissions.
- Bad data is always a concern that could bias the simulation results up or down.
- Simulated results can be substantially different from live trading results.



- Strategy Restart
- Strategy Summary
- References
- 4-Week Holding Period Strategy Update
- 1-Week Holding Period Strategy Updated (Target Leverage=2)

After a ~2 month pause, the implied volatility long/short strategy has returned! If you were previously unaware, this strategy relied on aggregating free options data via the now-defunct Yahoo Finance Options API. After some time I was able to track down another free, reliable source of options data via Barchart.com. I show how to create a web scraper to aggregate the data here: *Aggregating Free Options Data with Python*. Without further delay, I present the strategy updates below.


- Motivation
- Code Requirements
- Creating our Scraper Class
- Aggregating the Data
- Github Gist Code
- Disclaimers

This year I implemented a simulated trading strategy based on the research paper titled "What Does Individual Option Volatility Smirk Tell Us About Future Equity Returns?" by Yuhang Xing, Xiaoyan Zhang and Rui Zhao. The authors show that their SKEW factor has predictive power for equity returns for up to 6 months.

Because historical options data is difficult to find and/or prohibitively expensive, I tracked the results of the simulated strategy in near real time using a combination of the Yahoo Finance Options API (made available via the pandas package) and the Quantopian platform for realistic backtesting. Unfortunately, the Yahoo Finance API has changed and it appears the options data is no longer offered. Therefore my last strategy update took place on July 12, 2016, which can be seen here.

The strategy has thus far exceeded all expectations. I tracked two versions of the strategy: one maintained a 4-week holding period, the other a 1-week holding period. The 4-week holding period strategy showed a cumulative return of ~9%, with 25 of 28 weeks showing positive gains! The weekly strategy (with target leverage of 2) fared slightly better, with total returns of ~16%, double-digit alpha of ~24%, near-zero beta of ~8%, single-digit volatility of ~8%, and a max drawdown of only ~2.2%!

The strategy showed immense promise, but with only 28 weeks of results the sample size is unfortunately too small. As a Python programmer, when one API goes down it's time to find another. With that said, I created my own scraper using the excellent free Barchart.com resource.

First, you must sign up for a free account with Barchart.com and note your username and password. For reference, my current system is running Windows 8.1 (64-bit) with a WinPython distribution using Python 3.5. The code requires the following packages:

- pandas
- numpy
- requests
- bs4
- re
- logging
- more_itertools
- tqdm

Assuming you have the requisite packages and have signed up for a Barchart.com user account, we can now code our scraper class. At a high level, there are a few things to note when designing our scraper.

We can get basic options data without an account. It looks like this:

source: http://www.barchart.com/options/stocks/SPY

That's decent but we need volatility and greeks data which requires a free account.
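Since the pages with volatility and greeks sit behind the login, the scraper can authenticate once with **requests.Session** and reuse that session for every subsequent page request. A minimal sketch; the endpoint URL and form field names here are assumptions to be confirmed against the actual login form:

```python
import requests

LOGIN_URL = 'https://www.barchart.com/login'   # assumption: verify the real endpoint

def build_login_payload(username, password):
    # hypothetical form field names -- inspect the site's login form to confirm
    return {'email': username, 'password': password}

def make_session(username, password):
    """Log in once and reuse the authenticated session for every page request."""
    s = requests.Session()
    s.post(LOGIN_URL, data=build_login_payload(username, password))
    return s
```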