Guest Post: Modified Ned Davis Method—April 2016 Update, by Frank Roellinger

Updated: Apr 25th, 2016 | Vance Harwood

Over the years I have developed a lot of respect for the condition of breadth divergence, when an index such as the S&P 500 is rising and the NYSE daily cumulative advance-decline line is not.  In January 2016 I listed the performance of my method on the short side as a function of the number of consecutive weeks of divergence at the time of the short signal.  The optimum interval was 41 to 79 weeks, and as that interval was being approached, I speculated that a profitable short could be on the horizon.

It didn't work out that way.  The last short began on December 11, 2015 at 31 consecutive weeks of divergence.  That short ended with a 3.8% profit on Feb. 26, 2016 and the method returned to 100% long.  That long position is now ahead by about 10.6%, but what may be more significant is that divergence ended on April 22, 2016.  Thus a new short beginning any time soon does not have an especially good chance of ending profitably (but of course I will take it—one never knows for sure about these things).  See this post for my track record in calling corrections.

Might divergence ending after such an extended period (49 weeks as of April 15, 2016) have bullish implications?  I wondered, so I checked, and found the following dates at which divergence periods of 31 to 109 consecutive weeks had ended.  I then noted how long it took from each of those dates to the next small cap market peak that was followed by a decline of 19% or more.  Results are as follows:

Divergence End | Small Cap Peak | Lag
17-Feb-61 | 1-Dec-61 | 287 days
17-May-63 | 22-Apr-66 | 2 years 341 days
25-Sep-64 | 22-Apr-66 | 1 year 209 days
10-Dec-76 | 8-Sep-78 | 1 year 272 days
19-Sep-80 | 12-Jun-81 | 266 days
22-Oct-82 | 24-Jun-83 | 245 days
15-Feb-85 | 21-Aug-87 | 2 years 187 days
30-Jan-87 | 21-Aug-87 | 203 days
28-Apr-89 | 6-Oct-89 | 161 days
30-Aug-91 | 17-Apr-98 | 6 years 232 days
24-Dec-92 | 17-Apr-98 | 5 years 115 days
2-May-03 | 13-Jul-07 | 4 years 73 days
24-Jul-09 | 23-Apr-10 | 273 days

There was quite a variation, from slightly over 5 months to more than 6 years.  The amount of gain also varied widely, suggesting that an average or median of either would be of questionable value.  However, the implication clearly seems to be that breadth reaching a new high after a considerable period of deterioration is a bullish sign.

Another bullish sign has been daily NYSE breadth since the February 11, 2016 low.  There are several ways to measure this, but one meaningful way is to sum the number of advancing issues and the number of declining issues over N days and then compute the ratio of the former sum over the latter.
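
Here is a minimal Python sketch of that calculation (the window length n is left as a parameter, and the advancing/declining counts are assumed to be available as simple arrays):

```python
import numpy as np

def breadth_ratio(advancers, decliners, n):
    """Sum advancing and declining issues over the trailing n trading days
    and return the ratio of the former sum to the latter."""
    adv = np.asarray(advancers, dtype=float)
    dec = np.asarray(decliners, dtype=float)
    if len(adv) < n or len(dec) < n:
        raise ValueError("need at least n days of breadth data")
    return adv[-n:].sum() / dec[-n:].sum()

# Hypothetical three-day example
print(breadth_ratio([1900, 2100, 2300], [1100, 900, 700], 3))  # ~2.33
```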

On April 22, 2016 this ratio was 1.46.  Since 1948, only 5 times has this ratio been this high or higher:

29-Jan-71
4-Mar-75
18-Feb-76
18-Oct-82
15-Sep-09

Market historians will recognize all of these as times when significant bull markets had been established and outsized gains lay ahead; each time, the beginning of the next severe bear market was still a long way off.

Of course, none of this can be any guarantee of what the market will do, but over the years I have found breadth to be one of the best indicators available.  I am comfortable being 100% long now, but as always, I have my eye on the exit sign.

Below is the latest chart of my method’s performance from 1960 through April 22, 2016 (click to enlarge).

MDM 042416

 


Predicting Stock Market Returns—Lose the Normal and Switch to Laplace

Updated: Jun 24th, 2016 | Vance Harwood

Everyone agrees the normal distribution isn’t a great statistical model for stock market returns, but no generally accepted alternative has emerged.  A bottom-up simulation points to the Laplace distribution as a much better choice.

A well-known problem in financial risk assessment is the failure of the normal distribution (also known as the Gaussian distribution) to correctly predict big up or down days in the stock market.  Even though these volatile days are infrequent, they can make a big difference in the performance of an investment portfolio.  At least publicly, the financial industry has not moved on to better models, probably because no alternative has been accepted as superior.

The Issue

My interest in this area stems from the frequent use of sigma notation by commenters when the market experiences big swings.  For example, there was a multi-day 10 sigma increase in the value of the VIX index associated with the August 2015 correction and an 8 sigma single-day downswing a couple of weeks later.

The term "sigma" is equivalent to the statistical term "standard deviation", one of the two key descriptors of a distribution (the other is the mean, or average).  Sigma can be used as a shorthand way of indicating the relative magnitude of a market move.  For example, if the S&P 500 drops 2.92% in a day (doubtless inciting headlines with the words "crash" and "depression" in them) we can determine the sigma level of this event as three by dividing the percentage drop (2.92%) by the S&P's historic standard deviation (0.973%).

The normal distribution puts the odds of a -3 sigma day like this at 0.135%, which, assuming a 252-day trading year, predicts a drop this size or greater should occur about once every 3 years of trading.
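
That arithmetic is easy to reproduce; here is a short Python sketch using the historic 0.973% daily standard deviation:

```python
from math import erf, sqrt

SIGMA = 0.00973          # historic daily standard deviation of S&P 500 returns (0.973%)
TRADING_DAYS = 252

def sigma_level(daily_return):
    """Express a daily move as a multiple of the historic standard deviation."""
    return daily_return / SIGMA

def normal_tail_probability(z):
    """One-sided probability of a move at or beyond -z sigma under the normal model."""
    return 0.5 * (1 - erf(z / sqrt(2)))

z = sigma_level(-0.0292)                 # the -2.92% example day
p = normal_tail_probability(abs(z))      # ~0.00135, i.e. 0.135%
years_between = 1 / (p * TRADING_DAYS)   # ~2.9 years between drops of this size or greater
print(round(z, 2), round(p, 5), round(years_between, 1))
```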

The odds associated with 8 to 10 sigma events for a normal distribution are truly mind boggling.  The chart below illustrates how often events of various sigma levels should be expected.

 

Plus/Minus Sigma Level | Probability of occurring on any given day | How often an event is expected to occur | Associated S&P 500 percentage move | Actual S&P 500 occurrences (Jan 1950-2016) vs. expected from normal distribution
>+-1 | 31.73% | 80 trading days per year | +-0.973% | 3534 (expected 5276)
>+-2 | 4.56% | 12 trading days per year | +-1.95% | 776 (expected 758)
>+-3 | 0.27% | 1 event every 18 months | +-2.92% | 229 (expected 44)
>+-4 | 6.33×10^-3 % | Once in 62 years | +-3.89% | 98 (expected 1)
>+-5 | 5.73×10^-5 % | Once in 6,900 years | +-4.86% | 50 (expected 0)
>+-8 | 1.22×10^-13 % | Once in 3.2 trillion years | +-7.78% | 8 (expected 0)
>+-9 | 2.25×10^-17 % | Twice in 20,000 trillion years | +-8.76% | 7 (expected 0)
>+-10 | 1.53×10^-21 % | Once in 2.6×10^20 years | +-9.73% | 3 (expected 0)

 

When high sigma events do occur, analysts often use their outrageous unlikeliness to promote an agenda—usually predicting the imminent collapse of the financial system and/or indicting their favorite bad guys (e.g., the Federal Reserve, Evil Bankers) for manipulating the markets.

I disagree.  The reasonable conclusion from seeing 5 or higher sigma events in the markets should not be that things are falling apart or that the market is rigged; instead, we should recognize that we are using the wrong probability model.

 

Where the Normal Distribution Works—And Where it Doesn’t

Graphically comparing the S&P 500 returns since 1950 with the matched normal distribution looks like this:

LP-ActualsvsNorm-linear

 

The actual returns have a higher central peak and the “shoulders” are a bit narrower, but the historic frequency of events occurring within the +-3 sigma range isn’t significantly different from what is predicted by the normal distribution. The big mismatches are in the tails of the distribution—which you really can’t see on a linear scale. Changing the vertical axis to use logarithmic scaling allows us to zoom in on the tails.

ActvsSimWline

 

Once you get past +-4 sigma moves it’s hard to visualize how vast the differences are between the S&P 500’s actual returns and the predictions of the normal distribution. The vertical dotted black line on the right side of the chart illustrates the problem.  In the last 67 years, the S&P has had two days with returns between +5.5% and +5.6%.  The normal distribution estimates the probability of one event of that magnitude as 0.0001 in 67 years (where the dotted black line touches the red line)—in other words, we should expect a move near +5.5% once in 670,000 years on average—and in reality, we’ve had two of them in 67 years!

From a risk analysis standpoint, this is analogous to building your house above the 670,000-year flood line and being flooded out twice in the last 67 years.

There are many academic papers proposing alternate distributions to address this problem (e.g., truncated Cauchy, Student T distribution, Gamma distribution, stochastic volatility), but no generally accepted alternative has emerged.

To my knowledge, none of these papers suggests a causal mechanism to explain why stock market returns should follow its proposed distribution.  It’s relatively easy to torture a set of equations until they deliver the sort of distribution you desire.  What’s hard is proving that these distributions will do a good job of predicting future returns.  Since new return data comes in only one day at a time, it will take decades before any of these alternate proposals can emerge as superior.

 

So What Can Be Done?

In this section, I’ll provide an abbreviated discussion of the steps that led me to an alternative to the normal distribution. At the bottom of the post, in the “Quant Corner” section, I’ll provide some details for those who want the next level of detail.

Verifying that a new model is superior to the normal distribution is tough since new historical data dribbles in too slowly to be helpful.  However, all is not lost—another way to verify is to come up with a realistic bottom-up model for how the stock market works and then run that model many times with realistic random inputs to generate lots of expected results.  If the simulated results show a convincing match to the actual results, then they can be used to evaluate various theoretical solutions.  It’s not a proof, but it’s more convincing (at least to me) than a top-down historical data-matching exercise.

My approach to modeling the bottom-up behavior of the stock market incorporates one of its key characteristics—intraday correlations.  Depending on the day, stocks may move randomly with respect to each other, in lock-step, or occasionally in complete opposition (e.g., oil drops in price, energy companies go down, transportation stocks go up).  We can quantify the degree of synchronicity of these moves with a statistical measure called Pearson’s correlation coefficient, which returns values ranging from -1 for patterns moving in opposition to 1 for patterns in lockstep.  I measured historical patterns of correlation between stocks by comparing the moves of one sector of the S&P 500 with another.  I decided to use sectors instead of stocks to minimize the impact of corporate actions and company-specific idiosyncrasies.  Using 16 years of data I computed the correlation between two S&P 500 sectors (industrials and consumer staples, via XLI and XLP) and produced the following histogram:

XLI-XLP-correl histo

 

As you can see lock-step (correlation of one) on the far right is the most common situation—but there’s a fair amount of variation.  This chart is representative; I looked at multiple combinations of different sectors and they all looked very similar to this result. I hypothesize that on many days the buyers and sellers of the stocks in a sector behave very differently from the buyers and sellers in other sectors, however on some days (e.g., panics, market rallies), the behaviors of all the participants in all sectors become synchronized.  The behavior of crowds/rallies/mobs might be a good analogy (thanks to Asad Aziz for that observation).

Rather than try to derive a theoretical relationship from this data I took the easy way out and just used the data itself in a Monte Carlo simulation of the stock market to model 5 million days of trading with variable correlation.  For each day of the simulation, I randomly picked a trading day between December 23rd, 1998 and January 25th, 2016 and used XLI’s correlation with XLP on that day to generate randomized returns for 75 different stocks with that same level of correlation.  The daily returns for those 75 stocks were then averaged to generate the daily return of the “market” itself.
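
Below is a highly simplified Python sketch of this style of simulation.  It is not the actual simulation code: the historical XLI/XLP correlation series is replaced with placeholder data, far fewer days are simulated, stock returns are drawn from a normal distribution for brevity (the final run described in the Quant Corner used Laplace draws), and negative daily correlations are floored at zero since an equal negative pairwise correlation across 75 series is not generally attainable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for the measured daily XLI/XLP Pearson correlations
historical_correlations = rng.uniform(-0.2, 1.0, size=4300)

N_DAYS = 100_000        # the post used 5 million simulated days
N_STOCKS = 75

market_returns = np.empty(N_DAYS)
for day in range(N_DAYS):
    rho = max(rng.choice(historical_correlations), 0.0)   # resample a historical correlation
    # A single common factor with loading sqrt(rho) plus independent noise
    # gives every pair of simulated stocks a correlation of rho.
    common = rng.standard_normal()
    idiosyncratic = rng.standard_normal(N_STOCKS)
    stock_returns = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * idiosyncratic
    market_returns[day] = stock_returns.mean()             # the "market" is the average of the 75 stocks

# Rescale so the simulated standard deviation matches the S&P 500's (Quant Corner, note 3)
market_returns *= 0.00973 / market_returns.std()
```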

The chart below compares my simulated results (green line) to the actual S&P 500 and the predictions of the normal distribution:

LP-ActvsNormvsSim-linear

 

The distribution of simulated results matches the S&P 500 actuals significantly better than the normal distribution.  This simulation has the leptokurtic shape (high central peak, narrower upper shoulders) characteristic of stock market returns.

The next chart uses a logarithmic vertical scale to show that the simulated distribution also closely matches the S&P 500’s tail distribution.

LP-ActvsMonte-log

Comparing events in the tails is tricky because you can’t have partial events.  Instead, what happens is that as you move out on the tails you have more and more bins with zero events.  With the 5 million day simulation I had enough data to extend the tails out considerably, but with only 16,630 actual data points, empty bins occur pretty quickly in the tails. I used moving averages (purple lines on the chart below) to convert these occasional events into an events-per-bin metric—providing a more nuanced picture of how the actual tails compare to the simulated ones.

LPvsActivevsSim-wMA

 

The data for the right tail closely matches the simulation. The left tail actuals are fatter than the simulation, but it’s still not a bad match.

The Obvious Solution

 Given the impressive match between the actual and simulated results, the next question is whether there’s a theoretical distribution that matches these non-Gaussian results.

One characteristic that jumped out at me when I looked at the logarithmically scaled histograms of actual data was the linearity of the slopes.  A straight line on a log-scale chart indicates an exponential relationship.  Two back-to-back exponential decay curves, centered at the mean, should closely match the data. This kind of distribution is called a double exponential, or Laplace, distribution.

The Laplace distribution is similar to the normal distribution in that it has two parameters: the location and the scale factor.  For a set of returns matching an ideal Laplace distribution, the location parameter is equivalent to the mean, and the scale factor is equal to the standard deviation of the population divided by the square root of two.  Below, an ideal Laplace distribution (black lines) is overlaid on the chart of S&P 500 actual and simulated data.

S&PwClassicLaplace

 

Not surprisingly there isn’t an exact match between the S&P 500 distribution and the ideal Laplace distribution; the tails of the S&P are somewhat wider.  Since the goal of this exercise was to come up with a way to more accurately estimate the big up/down days I adjusted the Laplace’s scale parameter such that the number of 5 sigma or greater events was roughly the same between the predicted and actual distributions.  The result looks like this:

S&PwCalibratedLaplace

 

This adjusted Laplace distribution closely matches the S&P 500 historical data in predicting that every year there’s around a 75% chance of having a 5 sigma or higher event—a far cry from the normal distribution’s prediction of once per 6900 years.

The adjustment for wide tails increased the scale factor by 19% and gives a central peak prediction within 8% of the actual value.  The equivalent tail adjustment for the normal distribution requires a 70% adjustment and leaves the central peak 63% lower than the actuals.
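
A back-of-the-envelope check of that tail comparison in Python, using the calibrated scale factor from the Quant Corner (standard deviation divided by the square root of two, widened by 19%):

```python
from math import erf, exp, sqrt

SIGMA = 0.00973                    # historic daily S&P 500 standard deviation
SCALE = SIGMA / sqrt(2) * 1.19     # calibrated Laplace scale factor
THRESHOLD = 5 * SIGMA              # a +-5 sigma daily move
TRADING_DAYS = 252

# Daily probability of a move beyond +-5 sigma under each model
p_normal = 1 - erf(5 / sqrt(2))            # two-sided normal tail
p_laplace = exp(-THRESHOLD / SCALE)        # two-sided Laplace tail: exp(-threshold/scale)

print("+-5 sigma days per year, normal :", p_normal * TRADING_DAYS)    # ~1.4e-4 (once in ~6,900 years)
print("+-5 sigma days per year, Laplace:", p_laplace * TRADING_DAYS)   # ~0.7, close to the ~50 observed in 66 years
```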

I haven’t looked at a lot of cases, but I suspect the adjustment factor will be relatively consistent.  The scale adjustment for IWM (Russell 2000) is 17% and 12% for Apple.

 

So What?

The art of science and engineering is using concepts and relationships we know to be not quite true to generate reliable results.  For analyzing market risk the normal distribution does not pass the “not quite true” test; it is seriously flawed when used to predict / analyze the more extreme moves of the market that historically happen every couple of years.  It’s time to start using the Laplace distribution.

 

Quant Corner

  1. The first pass of my Monte Carlo simulation assumed that the individual stocks had normally distributed returns. A closer look indicated that individual stocks, not just indexes, exhibited Laplace distributions, so I used Laplace distributions for all of the simulated stocks in the final simulation.  My intuition is that the actions of the various buyers/sellers of a stock are highly correlated on some days while being much more random or anti-correlated on other days—generating a histogram similar to the XLI / XLP correlation chart.
  2. The simulation required me to generate pairs of random numbers correlated at a specified level. This correlation changed for every simulated trading day.  Generating correlated random numbers is not a trivial process; a sketch of one common approach appears after this list.  Contact the author if you would like more information.
  3. The Monte Carlo simulation generated returns with a standard deviation of around 14%. In order to realistically compare this data set to the S&P 500 daily returns I linearly normalized the simulation results such that its standard deviation matched the S&P 500’s.
  4. The equation used for the Laplace distributions shown in the charts (the probability density function) is:

    PDF(x) = ( 1 / (2 * Scale) ) * exp( -ABS(x - Location) / Scale )

      Where:

    •  Location is the mean
    •  Scale specifies the spread of the distribution
    •  ABS is the absolute value function
  5. The equation used for generating random variables according to the Laplace distribution is:

    X = Location - Scale * sign(u - 0.5) * Ln(1 - 2 * ABS(u - 0.5)),   where u = rand()

    Where:

    • The function “sign” returns -1 if the argument is negative, +1 if it is positive,  0 for zero
    • rand() returns a uniformly distributed random number between 0 and 1, non-inclusive
  6. The ideal scale factor for a Laplace distribution is the standard deviation of the population divided by the square root of two. The calibrated scale factor I used to match the event frequency in the 5 sigma or larger tails was the ideal scale factor times 1.19.
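
For notes 2 and 5, here is a sketch of one common way to do both tasks in Python; it is not necessarily the exact method used for the simulation:

```python
import random
from math import copysign, log, sqrt

def correlated_pair(rho):
    """Draw two standard normal variables with correlation rho
    (one standard construction; other methods exist)."""
    x = random.gauss(0.0, 1.0)
    z = random.gauss(0.0, 1.0)
    return x, rho * x + sqrt(1.0 - rho * rho) * z

def laplace_random(location, scale):
    """Inverse-transform sampling matching the formula in note 5."""
    u = 0.0
    while u == 0.0:                 # keep rand() strictly between 0 and 1
        u = random.random()
    u -= 0.5
    return location - scale * copysign(1.0, u) * log(1.0 - 2.0 * abs(u))

print(correlated_pair(0.8))
print(laplace_random(0.0, 0.00973 / sqrt(2)))
```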



Frank Calls the Corrections with His Modified Ned Davis Method

Updated: Apr 24th, 2016 | Vance Harwood

In September 2013, I published a post written by Frank Roellinger on his stock market trading system—a modification of the Ned Davis system first published in the 80s. Since Frank’s work was first published here he has shifted his Russell 2000 positions 11 times, each move reported here and on my Twitter account.   His hypothetical portfolio value has increased from 1658 to 1936 (+16.7%)—impressive given that the Russell 2000 is down 1.7% over that period.   However, the most impressive thing has been his method’s accuracy in predicting the last four significant market downturns.

SPY-Frank-wc

The arrows indicate when his published strategy switched from long to short.

Kudos to Frank for an impressive track record.




Guest Post: Short “Sweet Spot” Approaching?  —by Frank Roellinger

Updated: Apr 24th, 2016 | Vance Harwood

Probably the most difficult thing to do in stock market investing is to identify a good time to sell.  Many technical indicators have been devised that identify lows around the time they occur, or soon thereafter, with a moderate degree of success, but to my knowledge nothing comparable has been achieved for tops.

My own modified Davis method does not do a very good job at this. It does profit on the short side, but only about 2% per year on average.  Were it not for the occasional severe bear market, during which my method will be short or at most 50% long (enabling significant funds to be invested near the beginning of the next bull market), it probably would not do any better than buy and hold over the long term.

However, I have discovered a tendency that I think is worth knowing, based on the length of the breadth divergence period preceding the short signal.  My test for breadth divergence uses a measure similar to the classic cumulative advance-decline line.  As a bull market matures there will be a point when prices make a new high, but that high is not accompanied by a new high in the cumulative advance-decline line.  This is the point of breadth divergence, and the phenomenon has occurred near the end of virtually all bull markets.  Stan Weinstein described this metric in “Secrets For Profiting in Bull and Bear Markets” almost 30 years ago, and he probably wasn’t the first to do so.
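
For illustration, a classic cumulative advance-decline line and a simple new-high divergence check might be sketched like this in Python (a generic example, not the exact rule I use):

```python
import numpy as np

def cumulative_ad_line(advancers, decliners):
    """Classic cumulative advance-decline line: running sum of (advances - declines)."""
    return np.cumsum(np.asarray(advancers, dtype=float) - np.asarray(decliners, dtype=float))

def breadth_divergence(prices, ad_line, lookback=52):
    """True when price makes a new high over the lookback window
    while the advance-decline line does not (a simplified stand-in
    for a divergence test)."""
    price_new_high = prices[-1] >= np.max(prices[-lookback:])
    breadth_new_high = ad_line[-1] >= np.max(ad_line[-lookback:])
    return price_new_high and not breadth_new_high
```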

When developing my buy/sell algorithms, I avoid the trap of fitting my approach to the historical data by using only an older subset of that data to determine my thresholds.  I then run a forward test of my method on the newer, out-of-sample data to determine how it would have performed.

Here is a list of all of the short trades in my forward test, arranged by the number of consecutive weeks of divergence in effect at the time of the signal.  The chart below summarizes the results.

Frank chart2

 

It appears that divergence must be in place for at least 13 weeks to have a good shot at a successful short.  In particular, it also appears that there is a “sweet spot” from 41 to 79 weeks.  There, the probability of a profitable short has been 0.80.  Over the entire period there were 5 shorts that resulted in double-digit profits, and 2 of them occurred in this interval.  Beyond 79 weeks the probability of success declines significantly.  (Bear in mind that the method shorts at the 50% level, so these figures would be doubled if it shorted at the 100% level.)

Note that something similar to a normal distribution is formed in the above chart.  I am no statistician, but it may be that these results are indicative of some degree of natural order in the workings of the stock market.

However, there does not seem to be anything here that could be incorporated into my modified Ned Davis algorithm, and even if it could, that obviously could not be considered as part of the forward test.  The only thing to do is to take every short as it comes, as one never knows for certain when “the big one” will begin.

At the last short (12/11/15) of my method, divergence had been in effect for 31 weeks.  At this writing, the figure is 34 weeks, and it will take a very strong market to end this condition.  The current short may not end profitably, but the next short may have a good chance of being in the 41 to 79-week sweet spot.




A Very Simple Model for Pricing VIX Futures

Updated: Dec 27th, 2015 | Vance Harwood

Serious volatility watchers are always observing a three-ring circus. The left ring holds the general market. The center ring has options on the S&P 500 and the various CBOE VIX® style indexes, and in the right ring are VIX futures and Volatility Exchange Traded Products like VXX, UVXY, TVIX, and XIV, plus associated options.

Activities in the three rings usually follow a familiar choreographed pattern. The VIX moves in opposition to the market while VIX futures and their kin trail the VIX unenthusiastically. VIX futures converge to the VIX’s value at expiration, but prior to that they follow their own path—usually trading at a premium to the VIX, but sometimes offering steep discounts. Meanwhile, in the background, the VIX maintains its reversion-to-mean behavior, a macro cycle that the short-term moves modulate.

One of my ongoing interests is monitoring the Volatility Circus’ rings two and three—the family ensembles of VIX and VIX Futures.  I note unusual movements and try to determine which one of them is “right” more often—perhaps foreshadowing market moves. Recently I’ve developed a model that helps describe this relationship. It is presented later in this post.

Interpreting the values of VIX futures has been especially challenging. The price relationship between the next-to-expire VIX future and the VIX tends to be very dynamic in the last few weeks before expiration.  With only a single data point (the one active future with less than a month to expiration), there hasn’t been much data to work with.

Of course, there are mind-bending mathematical models available for VIX Future pricing—but unless you have a Ph.D. in quantitative finance they are probably too complex to be helpful.

Enter the CBOE’s Weekly Futures

By introducing VIX futures with weekly expiration dates, the CBOE boosted the number of close-in data points from one to five—a dramatic improvement. One day while looking at Eli Mintz’s vixcentral.com chart of these new futures, a light bulb lit up in my head.

TS29Sep15

The green dots are the newly introduced futures. Taken together the leftmost part of this curve looked logarithmic to me.

Sure enough, when plotted in Excel, the logarithmic trendline’s match to the first two months of futures was very good.
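
The same kind of fit can be done outside of Excel with a least-squares fit against the natural log of days to expiration (the futures prices below are placeholders, not actual quotes):

```python
import numpy as np

# Hypothetical term structure: days to expiration and futures prices
days = np.array([3, 10, 17, 24, 31, 45, 59], dtype=float)
futures = np.array([24.2, 25.0, 25.5, 25.9, 26.2, 26.7, 27.1])

# Fit futures ~ a * Ln(days) + b, the same form as Excel's logarithmic trendline
a, b = np.polyfit(np.log(days), futures, 1)
fitted = a * np.log(days) + b
print(round(a, 3), round(b, 3))
```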

ln match to VIX Futures

 

However around month 4 the trendline starts seriously understating VIX futures prices.

7mo Trend Projection

Apparently there’s an additional mechanism that boosts the futures’ value over time.

The Model

Using VIX futures data from 2004 on, I developed the following equation, which does a surprisingly good job of estimating VIX futures prices given its simplicity.  The only inputs are the current VIX value, the number of days (X) until the future expires, and the historical median value of the VIX.

Estimated VIX futures price = VIX + (1 - VIX / VIXmedian) * Ln(X + 1) + sqrt(X) * 0.21

The VIX closing median value from January 1990 through October 2015 is 18.01.

Example calculation: if the VIX is at 16 and a VIX future has 10 days before expiration this model predicts a price of 16.93.

16 + (1 - 16/18.01) * Ln(10 + 1) + 3.1623 * 0.21 = 16 + 0.1116 * 2.3979 + 3.1623 * 0.21 = 16.93
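
The model is simple enough to code directly; here is a small Python version of the equation above (the function and constant names are mine):

```python
from math import log, sqrt

VIX_MEDIAN = 18.01    # VIX closing median, January 1990 through October 2015
VOL_FACTOR = 0.21     # empirically fitted square-root-of-time factor

def vix_futures_estimate(vix, days_to_expiration):
    """Spot VIX plus a log-shaped convergence term plus a square-root-of-time term."""
    x = days_to_expiration
    return vix + (1 - vix / VIX_MEDIAN) * log(x + 1) + sqrt(x) * VOL_FACTOR

print(round(vix_futures_estimate(16, 10), 2))   # 16.93, matching the worked example above
```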

A near real-time chart of the VIX Futures values predicted by the equation is posted here.

A Few Notes on the Equation

  • At VIX Future expiration (X = 0), the equation sets the VIX futures price equal to the VIX. The convergence term in the middle is forced to zero because Ln (0+1) equals zero and the carry cost term on the right is forced to zero by X being zero.
  • If the VIX matches its historical median price the convergence term is canceled out by the expression in front of the natural log, and the only difference in prices from the VIX will be the square root of time scaled factor on the right.
  • If the VIX is relatively low (below the historic median) the equation predicts the typical premium prices of the VIX futures relative to the VIX. The market is in this state 75% to 85% of the time.
  • Conversely, if the VIX is high, the equation predicts the VIX futures will be cheap relative to the VIX levels.
  • Since volatility increases with the square root of time, the term on the right side of the equation suggests a time scaled volatility component.  The 0.21 factor was determined empirically by adjusting its value until the average errors for the 3rd, 4th, and 5th-month futures from March 2004 through September 2015 were less than a percent.  The resulting 0.21 factor is quite close to the historical VIX median volatility of 0.18, so it’s possible that it is an implied volatility factor that rests a few percentage points above the historical value.

Why A Model?

You might reasonably ask why bother with a model when you can just look up the current VIX futures prices on the web. This model is interesting to me because:

  • It helps me understand the underlying mechanisms behind VIX Futures pricing
  • I can determine how current VIX futures prices are behaving compared to their predicted behavior—useful for evaluating situations where event risk is distorting prices or the market is especially panicky
  • I can predict future VIX futures prices for various VIX scenarios

Model Errors

The model is very inaccurate at times, with errors on historic data sometimes exceeding +30%/-15%. The chart below shows the model’s error terms for the next-to-expire futures since 2004.

sqrt-time-VX-FT model pt21

The model tends to overestimate futures prices while in sustained periods of low VIX and underestimate the prices in bear markets. During big volatility spikes (Oct 08, Aug 11, Aug 15) the model predicts VIX futures values that far exceed the actual prices.

It isn’t surprising that the model doesn’t adroitly handle the impact of big jumps or drops in market volatility since it doesn’t incorporate any historical information at all—other than the long-term median VIX value.

The error spike on the far right of the error chart is a whopper, nearly 50%. On August 24th, 2015 the VIX closed up 45% at 40.74, but the front month future (September) only climbed 26% to 25.13. The model predicted a value of 37.16, up 39%.

Despite the chaos prevalent on August 24th the futures market did an impressive job of predicting the eventual (23 days later) expiration value of the September 2015 futures.  Expiration was at 22.38, only 2.75 points away from the August 24th closing value.

A more accurate model would need to incorporate the effects of VIX jumps and slumps. It’s not a trivial problem. In general, the VIX futures seriously lag big jumps in the VIX, but then stay higher than you’d expect after the volatility drops.  Part of this post is an enhancement to the model that  does take VIX’s gyrations into account, but it still leaves a lot to be desired.

Now, instead of resembling Fellini’s circus, the VIX futures moves in the Volatility Circus feel more rational to me. Their movements are often mysterious and complex—but a simple theme unites them.
