Composite Macro ETF Cumulative Return Momentum (08.16.2015)

Here are the updated ETF components I'm using to construct the ETF composites. 


'Large Cap'             :['SPY'],
'Mid Cap'               :['MDY'],
'Small Cap'             :['IWM'],
'Global Equity'         :['VEU','ACWI','VXUS','DGT'],
'AsiaPac Equity'        :['EWT','EWY','EWA','EWS','AAXJ','FXI','EWH','EWM','EPI','INDA','RSX'],
'Europe Equity'         :['FEZ','EZU','VGK','HEDJ','EWU','EWI','EWP','EWQ','EWL','EWD'],
'Emerging | Frontier'   :['EWZ','EWW','ECH','GAF','FM','EEM','VWO'],
'Real Estate'           :['RWO','RWX','RWR','IYR','VNQ'],
'Consumer Discretionary':['XLY','XRT'],
'Consumer Staples'      :['XLP','FXG'],
'Energy'                :['XLE','IPW','XOP'],
'Financials'            :['XLF','KBE','KIE','IYG','KRE'],
'Healthcare'            :['XLV','XBI','IBB'],
'Industrial'            :['XLI','IYT'],
'Materials'             :['XLB','XHB','XME','IGE','MOO','GDX','GDXJ'],
'Technology'            :['XLK','SMH','HACK','FDN'],
'Telecom'               :['IYZ'],
'Utilities'             :['IDU','XLU'],
'Oil | Gas'             :['UNG','BNO','OIL'],
'Precious Metals'       :['GLD','SLV','IAU'],
'Bonds'                 :['TLT','AGG','JNK','LQD'],
'T-Bond Yields'         :['^TYX','^TNX','^FVX']
— blackarbsCEO
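For readers who want to reproduce the composite charts below, here is a minimal sketch of how an equal-weighted composite cumulative return could be built from one of the categories above. It uses the same Yahoo Finance source (pandas.io.data) as the code later in this post; the helper name `composite_cum_returns` and the exact lookback handling are my own assumptions, not the code behind the published charts.

import datetime as dt
import numpy as np
import pandas as pd
import pandas.io.data as web                 # same data source used later in this post
from pandas.tseries.offsets import BDay

def composite_cum_returns(tickers, lookback, end=None):
    # Hypothetical helper: equal-weighted cumulative log return of a
    # composite over (roughly) the last `lookback` trading days.
    end = end or dt.date.today()
    start = end - BDay(lookback + 5)          # small buffer for holidays
    px = pd.DataFrame({t: web.DataReader(t, 'yahoo', start, end)['Adj Close']
                       for t in tickers})
    lrets = np.log(px / px.shift(1)).dropna(how='all')
    return lrets.mean(axis=1).cumsum().tail(lookback)   # equal weight across tickers

# e.g. the 'Technology' composite over the last 63 trading days
tech_63 = composite_cum_returns(['XLK', 'SMH', 'HACK', 'FDN'], 63)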

Last 504 Trading Days

Composite ETF Cumulative Returns

Last 252 Trading Days

Composite ETF Cumulative Returns

Last 126 Trading Days

Composite ETF Cumulative Returns

Last 63 Trading Days

Composite ETF Cumulative Returns

Last 21 Trading Days

Composite ETF Cumulative Returns

Last 10 Trading Days

Composite ETF Cumulative Returns

Price Dispersion as a Smart Money Indicator

Before I get into the topic at hand, let me say I have not seen stock price data interpreted or studied in the way I am about to show you. As far as I am aware, my approach is unique in that it is not overly complicated, can be generalized across a large cross-section of asset class ETFs, and makes intuitive sense with respect to market structure.

Before I introduce the chart it is important that I clarify some definitions. 

What is Price Dispersion?

I'm sure this term may have many meanings among market participants, but for our purposes the Blackarbs definition of price dispersion is as follows:

Price Dispersion is an alternative measure of a security’s volatility. Specifically, it is used to track market participants’ agreement regarding a security’s value. Major value disagreements show up as spikes in the level of price dispersion. Equation: (bar.high - bar.low) / (bar.close)
— blackarbs.com/glossary

In this study, price dispersion is the security's daily price range (high minus low) expressed as a percentage of the adjusted close price.
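As a minimal sketch, that calculation on a daily OHLC frame pulled from Yahoo Finance (the source used elsewhere in this post) might look like the following; the date range is arbitrary and the column names are those returned by Yahoo.

import pandas.io.data as web

# daily OHLC data for XLK
xlk = web.DataReader('XLK', 'yahoo', '2015-01-01', '2015-08-14')

# price dispersion: daily range expressed as a fraction of the adjusted close
dispersion = (xlk['High'] - xlk['Low']) / xlk['Adj Close']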

How Does Price Dispersion Work as a Smart Money Indicator?

The interpretation is based on the following assumptions.

  • Assumption (1): Smart money is big money. These are the major players, the whales if you will, who move markets when they make trading decisions. These players tend to hold medium- to long-term views on positions and as such are primarily concerned with value. They tend to be buyers when others are selling and sellers when others are buying.
  • Assumption (2): Daily range is inherently a measure of market participants' agreement or disagreement regarding the price (value) of a security. If the daily range is increasing or relatively large, there is value disagreement. If the daily range is decreasing or relatively small, there is value agreement.

There are a few more assumptions that will make more sense after viewing the chart. I must warn you: the chart may look complicated on the surface, but I assure you the interpretation is relatively simple once I explain the details.

XLK - Technology Select Sector SPDR ETF L/63 Days

What is this chart? What is contained in the two subplots?

Above is a chart of XLK. I have numbered the two subplots (1) and (2). The plot marked (1) shows an exponentially weighted cumulative return over the last 63 days. On the secondary axis I have plotted the daily adjusted close price. The black horizontal line is the 0% return value.

The plot marked (2) contains a bar plot of the daily price dispersion. The black line is a threshold value calculated as the top quintile (top 20%) of dispersion values over the period studied. On the right-hand (secondary) axis is an exponentially weighted moving average of the daily dispersion. The red dotted line is also a threshold value, defining the top quintile (top 20%) of all EMA values over the period.

The blue vertical lines represent the bars where price dispersion exceeded the threshold value. The blue verticals are plotted on both subplots and correspond to the same dates. Note: due to formatting issues, the blue vertical lines are at times not aligned perfectly; however, they still represent the same dates as the dispersion subplot.
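The threshold, the EMA, and the exceedance dates described above can be sketched as follows, continuing from the `dispersion` series in the previous snippet. The 80th-percentile cutoff matches the top-quintile definition above, and the 21-day span matches the dispersion EMA referenced later in the post; the variable names are mine and this is not necessarily the exact code behind the chart.

import pandas as pd

# top-quintile threshold: the 80th percentile of daily dispersion over the window
threshold = dispersion.quantile(0.80)

# 21-day exponentially weighted moving average of dispersion, plus its own top-quintile threshold
disp_ema = pd.ewma(dispersion, span=21)      # pandas 0.16-era API
ema_threshold = disp_ema.quantile(0.80)

# dates where dispersion exceeded its threshold (the blue vertical lines)
spike_dates = dispersion[dispersion > threshold].index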

What are the final three assumptions used to interpret this plot?

  1. Assumption (3): Smart money traders create the largest value disagreements; therefore, spikes in price dispersion indicate areas of trading opportunity and/or significant support and resistance levels.
  2. Assumption (4): Single, interspersed vertical lines are more often associated with position-closing events (liquidation/profit taking/short covering). Clustered or consecutive vertical lines (>= 2) are indicators of buying and/or an interim bottom; however, this is not always true. More importantly, the larger the vertical cluster grows, the more likely it is that a sustainable trend change is occurring.
  3. Assumption (5): Rising dispersion, as measured by the 21-day EMA, indicates increased risk of declining (negative) returns. Declining dispersion is associated with increased probability of increasing (positive) returns.

Put it all together: what is the chart saying about XLK?

Examining the plot, we can see there has been considerable disagreement over the ETF's value during the 63-day period. Resistance is ~$43.50, which coincided with a clustered dispersion spike in late May. Price trended negatively over the period until disagreement in the ~$41.50 range indicated that an interim bottom had formed during late June/early July. However, the outlook moving forward is mixed with a negative bias. Cumulative returns over the period are slightly negative, and the dispersion EMA trend is clearly elevated above the threshold value.

Let's look at another one.

XLK - Technology Select Sector SPDR ETF L/126 Days

This is the same ETF over the last 126 days, or roughly six trading months. We can see the similarities in structure, which reinforces some of the interpretations made previously. I've roughly circled the clustered areas. Notice how they coincide with high-conviction trend changes.

The first occurred in late March and happened to form a significant bottom at ~$41.25; this level was not retested until July. The second cluster occurred in early May and also formed an interim bottom around $42. From there, price advanced until the next cluster, which formed a significant top at ~$43.50 in late May.

To reiterate, the outlook for XLK is mixed. The most recent cluster triggered during August 11/12. Generally this is a bullish sign; however, with price dispersion clearly elevated on two timeframes and cumulative rolling returns below zero, I would have a bearish-to-neutral bias.

How is this intuitive to understand?

Big money moves markets. Big money is opportunistic and likely to get involved at advantageous prices. By tracking disagreement over a security's value, as measured by price dispersion, we can identify significant areas of perceived value. Everything else in the chart simply helps identify and contextualize those areas.


Composite Equity ETF Analysis (8/10/2015)

While I continue to update the ICC Valuation methodology, I plan to post more of the custom charts I use to gain insight into current market structure, momentum, and relative value.

Updated Composite ETF List


cat = {
       'Large Cap'             :['SPY'],
       'Mid Cap'               :['MDY'],
       'Small Cap'             :['IWM'],
       'Global Equity'         :['VEU','ACWI','VXUS'],
       'AsiaPac Equity'        :['EWT','EWY','EWA','EWS','AAXJ','FXI',
                                 'EWH','EWM','EPI','INDA','RSX'],
       'Europe Equity'         :['FEZ','EZU','VGK','HEDJ','EWU','EWI',
                                 'EWP','EWQ','EWL','EWD'],
       'Emerging | Frontier'   :['EWZ','EWW','ECH','GAF','FM','EEM','VWO'],
       'Real Estate'           :['RWO','RWX','RWR','IYR','VNQ'],
       'Consumer Discretionary':['XLY','XRT'],
       'Consumer Staples'      :['XLP','FXG'],
       'Energy'                :['XLE','IPW','XOP'],
       'Financials'            :['XLF','KBE','KIE','IYG','KRE'],
       'Healthcare'            :['XLV','XBI','IBB'],
       'Industrial'            :['XLI','IYT'],
       'Materials'             :['XLB','XHB','XME','IGE','MOO'],
       'Technology'            :['XLK','SMH','HACK','FDN'],
       'Telecom'               :['IYZ'],
       'Utilities'             :['IDU','XLU']
       }

Best vs Worst Performing ETF Composite L/252 Days

Best vs Worst Performing ETF Composite L/63 Days

Best vs Worst Performing ETF Composite L/21 Days

BarPlot Cumulative Returns L/4 Weeks

Z-Score Average Risk-Adjusted Returns L/21 Days
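For reference, a z-scored, risk-adjusted return ranking like the one in the chart above could be computed along the following lines. This is a sketch only: it assumes `lrets` is a DataFrame of daily composite log returns (one column per composite built from the `cat` dictionary), and the plotting code is not shown.

# risk-adjusted return over the last 21 trading days: mean / std of daily log returns
window = lrets.tail(21)
risk_adj = window.mean() / window.std()

# z-score across composites so the bars are directly comparable
z_scores = (risk_adj - risk_adj.mean()) / risk_adj.std()
z_scores = z_scores.order(ascending=False)   # pandas 0.16-era sort by value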

COMPOSITE SECTOR ETF VALUATION REPORT [7.6.2015]

Check out the updated IPython notebook by following the link. In this update we see that interest-rate-sensitive sectors like Financials, Real Estate, and Utilities may be offering a good tactical buying opportunity.

How do Bank Stocks Perform During Periods of Rising Rates? (Python Code Version)

This is the Python version of a guest article that originally appeared on RectitudeMarket.com. In this version I include the Python code used to generate the analysis.

This subject has garnered a healthy debate among market participants in recent weeks. Conventional wisdom says that banks and the financial sector overall should benefit from a rising rate environment. The story goes that bank profitability is inextricably linked to `Net Interest Margin (NIM)`. Rising rates are assumed to be the likely result of a strong economy, during which banks should be able to charge more for the funds they lend while also increasing loan volume.

A popular analysis on SeekingAlpha.com, written by industry veteran Donald van Deventer, makes the case that bank stock prices are negatively correlated with interest rates. While I appreciate the detail and skill of the writer, I thought the analysis left some `meat on the bone`, so to speak.

  1. He concludes "Bank Stock Prices are Negatively Correlated with Higher Interest Rates". I believe this is not actionable for an investor today and in fact answers the wrong question.

  2.  For an investor, the most important variables are the returns from ownership of an asset; the prices themselves are of minimal importance.

  3. This analysis shows that traditional correlations between rates and financial stocks have been changing.

  4. My analysis shows the cumulative returns from ownership of financial stocks including the 'Major Banks' Industry Classification are distinctly positive over the period of study.

  5. My analysis shows that cumulative returns from ownership of bank stocks during periods of falling yields are highly negative, having peaked around 2002-03.

Before I describe the results of this analysis I must make several disclosures regarding the datasets used.

First and foremost, all the analysis was done in Python. I exported all available symbols listed on the Nasdaq and NYSE exchanges from the Nasdaq website. I filtered the symbols first by the ‘Finance’ sector, then applied a market cap filter of greater than $1 billion. Finally, I grouped the data by industry and dropped any industry represented by too few symbols.

import pandas as pd
pd.options.display.float_format = '{:.4f}%'.format 
import numpy as np
import pandas.io.data as web
from pandas.tseries.offsets import *
import datetime as dt
import math
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.dates as dates
%matplotlib inline
size=(12,7)
import seaborn as sns
sns.set_style('white')
flatui = ["#9b59b6", "#3498db", "#95a5a6", "#e74c3c", "#34495e", "#2ecc71","#f4cae4"]
sns.set_palette(sns.color_palette(flatui,7))
from pprint import pprint as pp

# ================================================================== #
# datetime management

date_today = dt.date.today()
one_year_ago = date_today - 252 * BDay()
five_years_ago = date_today - (5 * 252 * BDay())
ten_years_ago = date_today - (10 * 252 * BDay())
max_years_ago = date_today - (25 * 252 * BDay())

# ================================================================== #
# import stock lists 

path = r"C:\Users\Owner\Documents\_Trading_Education\data_sets_for_practice\\"
NYSE = pd.read_csv(path + 'NYSE_All_companylist.csv')
Nasdaq = pd.read_csv(path + 'Nasdaq_All_companylist.csv')

# print('{}\n{}'.format( Nasdaq.head(), Nasdaq.info() ))
# ================================================================== #
# select financial firms 

nyse_fin = NYSE.loc[(NYSE['Sector'] == 'Finance') & (NYSE['MarketCap'] >= 1e9)]
nsdq_fin = Nasdaq.loc[(Nasdaq['Sector'] == 'Finance') & (Nasdaq['MarketCap'] >= 1e9)]
# print('{}\n{}'.format( nyse_fin.head(), nsdq_fin.head() ))

# ================================================================== #
# combine both dataframes

all_sym = pd.concat([nyse_fin,nsdq_fin])
all_sym.info()
# ================================================================== #
# groupby 'Industry'; check summary statistics

all_grp = all_sym.groupby('Industry')
all_size = all_grp.size()
all_ind_wts = ((all_size / all_size.sum()) * 100).round(2)
all_mktcap_avg = all_grp['MarketCap'].mean().order(ascending=False)
# print('> {}\n>> {}\n {}'.format(all_size, all_ind_wts, all_mktcap_avg ))
print('> {}'.format(all_size))

> Industry
Accident &Health Insurance             7
Banks                                  2
Commercial Banks                      27
Diversified Commercial Services        2
Diversified Financial Services         2
Finance Companies                      1
Finance: Consumer Services            20
Investment Bankers/Brokers/Service    29
Investment Managers                   27
Life Insurance                        20
Major Banks                           96
Property-Casualty Insurers            48
Real Estate                           18
Savings Institutions                  16
Specialty Insurers                    11
dtype: int64
# ================================================================== #
# keep only industries represented by more than 5 symbols
filtered_symbols = all_grp.filter(lambda x: len(x) > 5)
filtered_grp = filtered_symbols.groupby('Industry')

filtered_size = filtered_grp.size()
filtered_ind_wts = ((filtered_size / filtered_size.sum()) * 100).round(2)
filtered_mktcap_avg = filtered_grp['MarketCap'].mean().order(ascending=False)
print('>> {}\n>> {}\n {}'.format(filtered_size, filtered_ind_wts, filtered_mktcap_avg))

>> Industry
Accident &Health Insurance             7
Commercial Banks                      27
Finance: Consumer Services            20
Investment Bankers/Brokers/Service    29
Investment Managers                   27
Life Insurance                        20
Major Banks                           96
Property-Casualty Insurers            48
Real Estate                           18
Savings Institutions                  16
Specialty Insurers                    11
dtype: int64
>> Industry
Accident &Health Insurance            2.1900%
Commercial Banks                      8.4600%
Finance: Consumer Services            6.2700%
Investment Bankers/Brokers/Service    9.0900%
Investment Managers                   8.4600%
Life Insurance                        6.2700%
Major Banks                          30.0900%
Property-Casualty Insurers           15.0500%
Real Estate                           5.6400%
Savings Institutions                  5.0200%
Specialty Insurers                    3.4500%
dtype: float64
 Industry
Commercial Banks                     36040759083.6163
Life Insurance                       21216713129.3125
Major Banks                          19336610403.7998
Investment Bankers/Brokers/Service   18135804631.3441
Finance: Consumer Services           12974551702.9260
Specialty Insurers                   10956345056.7109
Accident &Health Insurance            9773432756.0971
Investment Managers                   8789388295.6570
Property-Casualty Insurers            8393947526.6806
Real Estate                           3410973631.1011
Savings Institutions                  2817572654.6600
Name: MarketCap, dtype: float64

I used the filtered set of symbols and collected <= 25 years of data from Yahoo Finance using ‘adjusted close’ prices. Unfortunately there are obvious gaps in the data. I tried to minimize the effects by resampling the daily data into weekly data and using rolling means, returns, correlations, etc., where appropriate. I am unsure of the exact issue behind the data gaps, but I don’t believe they invalidate the general interpretation of the analysis.

I then collected <= 25 years of Treasury yield data for 5, 10, and 30 year maturities using the symbols ‘^FVX’, ‘^TNX’, ‘^TYX’, respectively. 

Note: The following code block shows how I downloaded the data and created the indices for both dataframes so that I could merge the data together for easier analysis.


%%time
# ================================================================== #
# define function to get prices from yahoo finance
def get_px(stock, start, end):  
    try:
        return web.DataReader(stock, 'yahoo', start, end)['Adj Close']
    except:
        print( 'something is f_cking up' )

# ================================================================== #
# get adj close prices 

stocks = filtered_symbols['Symbol'].tolist()   # plain list of ticker strings
px = pd.DataFrame()
for i, stock in enumerate(stocks):
    # print('{}...[done]\n__percent complete: >>> {}'.format(stock, (i/len(stocks))))
    px[stock] = get_px( stock, max_years_ago, date_today )
# print('>>{}  \n>> {}'.format(px.tail(), px.info()))

px.to_excel(path + '_blog_financial px_{}.xlsx'.format(date_today))

# ================================================================== #
# grab yield data
yields = ['^TYX','^TNX','^FVX']

rates = pd.DataFrame()
for i in yields:
    rates[i] = get_px( i, max_years_ago, date_today )
rates.to_excel(path + '_blog_treasury rates_{}.xlsx'.format(date_today)) 

After collecting all the data Yahoo Finance had to offer, I created financial industry composites using an equal-weighted average of the returns of each stock within each industry. I narrowed the focus to the following industries: Major Banks, Investment Bankers/Brokers/Service, Investment Managers, and Commercial Banks.


%%time
# ================================================================== #
# import price data

px = pd.read_excel(path + '_blog_financial px_{}.xlsx'.format(date_today))
rets = np.log(px / px.shift(1)) # calculate log returns
#rets.info()
# ================================================================== #
# construct proper indices for px data to include industry

rets_tpose = rets.T.copy() # transpose df to get symbols as index
r = rets_tpose.reset_index() # reset index to get symbols as column
r = r.sort('index').reset_index(drop=True) # sort the symbol column 'index'; reset numerical index and drop it as col
#r.head()
# ~~~~~~~~~~~ build the (Symbol, Industry) mapping by sorting the filtered symbol list the same way; reset numerical index and drop it as col
new_index = filtered_symbols[['Symbol','Industry']].sort('Symbol').reset_index(drop=True)

# ================================================================== #
# create proper multiindex for groupby operations
syms = new_index['Symbol']
industry = new_index['Industry']
idx = list(zip(*(industry, syms)))
idx = pd.MultiIndex.from_tuples(idx, names=['Industry_', 'Symbols_'])
#idx
# ================================================================== #
# construct new log return dataframe using idx

lrets = r.set_index(idx).drop(['index'], axis=1).sortlevel('Industry_').dropna(axis=1,how='all')
lrets_grp = lrets.T.groupby(axis=1, level='Industry_').mean() # equal weighted means of each stock in group
dt_idx = pd.to_datetime(lrets_grp.index) # convert index to datetime
lrets_grp = lrets_grp.set_index(dt_idx, drop=True) # update index 
# lrets_grp.head()

%%time
# ================================================================== #
# import treasury rate data
rates = pd.read_excel(path + '_blog_treasury rates_{}.xlsx'.format(date_today), index_col=0, parse_dates=True).dropna()
rates = rates.set_index(pd.to_datetime(rates.index), drop=True)
# rates.info()

I grouped all the calculations into one code block for ease of reference.


# ================================================================== #
# block of calculations

# ================================================================== #
# resample log returns weekly starting monday
lrets_resampled = lrets_grp.resample('W-MON')

# ================================================================== #
# rolling mean returns
n = 52
roll_mean = pd.rolling_mean( lrets_resampled, window=n, min_periods=n ).dropna(axis=0,how='all')

# ================================================================== #
# rolling sigmas
roll_sigs = pd.rolling_std( lrets_resampled, window=n, min_periods=n ).dropna(axis=0,how='all') * math.sqrt(n)

# ================================================================== #
# rolling risk adjusted returns 
roll_risk_rets = roll_mean/roll_sigs

# ================================================================== #
# calculate log returns of treasury rates
rate_rets = np.log( rates / rates.shift(1) ).dropna()
rate_rets_resampled = rate_rets.resample('W-MON')

# ================================================================== #
# cumulative log returns of resampled rates
lrates_cumsum = rate_rets_resampled.cumsum()

# ================================================================== #
# rolling mean returns of rates
lrates_roll_mean = pd.rolling_mean(rate_rets_resampled, n, n).dropna(axis=0, how='all')

# ================================================================== #
# join yield and stock ret df

# ~~~~ raw resampled log returns
mrg = lrets_resampled.join(rate_rets_resampled, how='outer')

# ~~~~ z-scored raw resampled log returns
zrets = (lrets_resampled - lrets_resampled.mean()) / lrets_resampled.std()
zrates = (rate_rets_resampled - rate_rets_resampled.mean()) / rate_rets_resampled.std()
zmrg = zrets.join(zrates, how='outer')

# ~~~~ rolling means log returns
roll = roll_mean
rates_roll = lrates_roll_mean
mrg_roll = roll.join(rates_roll, how='outer')

# ~~~~ z-scored rolling means
z_roll = (roll_mean - roll_mean.mean()) / roll_mean.std()
zrates_roll = (lrates_roll_mean - lrates_roll_mean.mean()) / lrates_roll_mean.std()
mrg_roll_z = z_roll.join(zrates_roll, how='outer')

# ================================================================== #
# study focus 

# ~~~~ raw resampled log returns
focus = mrg[['Major Banks','Investment Bankers/Brokers/Service','Investment Managers','Commercial Banks','^TYX','^TNX','^FVX']]
# ~~~~ z-scored raw resampled log returns
focus_z = zmrg[['Major Banks','Investment Bankers/Brokers/Service','Investment Managers','Commercial Banks','^TYX','^TNX','^FVX']]
# ~~~~ rolling mean log returns
focus_roll = mrg_roll[['Major Banks','Investment Bankers/Brokers/Service','Investment Managers','Commercial Banks','^TYX','^TNX','^FVX']]
# ~~~~ z-scored rolling means
focus_roll_z = mrg_roll_z[['Major Banks','Investment Bankers/Brokers/Service','Investment Managers','Commercial Banks','^TYX','^TNX','^FVX']]

# ================================================================== #
# select time periods of rising rates

focus_rising = focus
rates_gt_zero_tyx = focus_rising[focus_rising['^TYX'] > 0] 
rates_gt_zero_tnx = focus_rising[focus_rising['^TNX'] > 0] 
rates_gt_zero_fvx = focus_rising[focus_rising['^FVX'] > 0] 

cols_tyx = [col for col in rates_gt_zero_tyx.columns if col not in ['^TYX','^TNX','^FVX']]
cols_tnx = [col for col in rates_gt_zero_tnx.columns if col not in ['^TYX','^TNX','^FVX']]
cols_fvx = [col for col in rates_gt_zero_fvx.columns if col not in ['^TYX','^TNX','^FVX']]

rates_gt_zero_tyx_x = rates_gt_zero_tyx[cols_tyx]
rates_gt_zero_tnx_x = rates_gt_zero_tnx[cols_tnx]
rates_gt_zero_fvx_x = rates_gt_zero_fvx[cols_fvx]

Note: I did not show the plot code I used because I did not want to distract too much from the actual analysis. If anyone is interested in how I generated the following charts, contact me.

Rolling Mean Returns appear to show regime shift in correlations

Looking at the following chart, there appears to be a distinct change in the behavior of the 52-week rolling mean returns. I z-scored the data for easier interpretation, but the raw data shows the same relationships. In the period before ~2004, Treasury rates and rolling average returns do appear negatively correlated, as they clearly oscillate in opposition. However, at some point approximately between Q4 2002 and Q1 2004 this relationship changed: afterwards, the rolling mean returns appear to move in sync with rates in a loosely positive correlation.

Recessions shaded in gray. Theorized regime change shaded in blue. 

Rolling Correlations support theory of regime shift in correlations

This next plot shows the 52-week rolling correlations of the composite industries against each of the Treasury yield maturities. There is a clear gap in the data; however, we can see that prior to my theorized regime shift there were multiple long periods where correlations between rates and the composites were negative (< 0.0). Since then, the correlations have oscillated between highly positive (~> 0.5) and zero, with only short stretches of actual negative correlation.

Recessions shaded in gray.
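The rolling-correlation calculation itself is not included in the calculation block above; a minimal sketch using the era-appropriate pandas API and the `mrg` DataFrame (weekly composite log returns joined with weekly yield changes) might look like this:

# 52-week rolling correlation of each composite's weekly log returns
# with the weekly change in the 10-year yield ('^TNX')
n = 52
composites = ['Major Banks', 'Investment Bankers/Brokers/Service',
              'Investment Managers', 'Commercial Banks']
roll_corr_tnx = pd.DataFrame(
    {c: pd.rolling_corr(mrg[c], mrg['^TNX'], window=n) for c in composites})
# repeat with '^TYX' and '^FVX' for the 30- and 5-year maturities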

Cumulative Returns during periods of rising rates are highly positive since 2002-2003

Next I analyzed the data filtered to include only financial industry composite returns during periods where the changes in rates were positive (> 0.0). I did this for each of the three maturities and calculated the cumulative sum (sketched after the list below). All three charts show negative or zero returns prior to 2002. Then, beginning around 2003, composite returns began rising together and have continued to do so to the present day! This result is a clear indicator of two things.

  1. There is a high probability of a regime change in the data set.
  2. More importantly, this chart shows that investors had more opportunity to gain from being long financial stocks during periods of rising rates than the alternative.

Recessions shaded in gray
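As a sketch of the conditional cumulative returns behind these charts (the exact plotting code is not shown), the filtered frames from the calculation block can simply be cumulatively summed; the falling-rate version flips the inequality:

# cumulative composite log returns during weeks when the 30-year yield rose
rising_tyx_cum = rates_gt_zero_tyx_x.cumsum()

# falling-rate counterpart: weeks when the 30-year yield change was negative
rates_lt_zero_tyx = focus[focus['^TYX'] < 0]
falling_tyx_cum = rates_lt_zero_tyx[cols_tyx].cumsum()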

Cumulative Returns during periods of falling rates peaked around 2002-03 and have been highly negative since

For comparison, I filtered the composite returns to periods where the changes in rates were negative (< 0.0), again for each of the three yield maturities. This chart also supports the theory of a regime change in the data set. More importantly, it shows that every composite industry except ‘Investment Managers’ peaked during the 2002-2003 period and all have been in steep decline since ~2007. Currently all composites show negative cumulative returns.

Recessions shaded in gray

Conclusions

This analysis has some areas worth further investigation, and it certainly has some weak points. However, we can draw some strong, informed conclusions.

  1. Analysis of financial composite prices and yield changes alone is not enough for an investor to make an informed portfolio decision.

  2. There appears to be a clear regime change in the data set. Therefore, investment decisions today based on analysis from before the regime change can give conflicting results and lead to sub-optimal allocations and unnecessary losses.

  3. When analyzing the conditional financial composite returns during the most recent regime, this research shows investors had significantly greater gains during periods of rising rates than during periods of falling rates!

Feel free to contact me with questions, comments, or feedback: BCR@BlackArbs.com @blackarbsCEO