Thursday, June 5, 2008

Schaeffer on the Volatility of the VIX

Bernie Schaeffer is out with another interesting take on the VIX today. In Schaeffer’s Short Takes: The Volatility of the VIX (may require free registration), he offers some compelling data and charts on the historical volatility of the VIX.

The charts make for good reading, but it is Schaeffer’s conclusion that I wish to focus on:

“From a sentiment perspective, one might conclude that a high ‘second derivative VIX’ is an indication of excessive bearishness. At the very least one can reasonably conclude that if the protection trade is in fact crowded, then the chances of a major downside accident are significantly reduced as big money is already down on the black swan event.”

The important point is that the more downside protection investors load up on (whether by buying VIX calls or puts on other indices), the less impact any downturn will have. In other words, there will be little in the way of a vicious cycle of selling if options are already mitigating losses from a bear move. By the same token, a black swan event, by definition, has to be a surprise. If investors are prepared for the beast, then it will have to arrive in another form, if it arrives at all.

As an aside, I don’t believe the term ‘second derivative VIX’ is the best phrase to use when speaking about the historical volatility of the VIX. The VIX is the implied volatility of the SPX, so a second derivative would logically refer to the implied volatility of the VIX, something I have labeled meta volatility in this space in the past. While technically one can measure both the historical volatility of the VIX (from its own price history) and the implied volatility of the VIX (from VIX options), mixing the two together in this context muddies the already murky waters of what a VIX derivative is.
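For readers who want to see what "historical volatility of the VIX" means in concrete terms, here is a minimal sketch that computes the annualized realized volatility of a series of daily VIX closes as a rolling standard deviation of log returns. The 20-day window, the 252-trading-day annualization factor, and the sample closes are all assumptions for illustration; this is not Schaeffer's calculation, just the garden-variety realized volatility formula applied to the VIX itself.

```python
import numpy as np
import pandas as pd

def historical_volatility(closes: pd.Series, window: int = 20) -> pd.Series:
    """Annualized historical (realized) volatility from daily closes.

    Rolling standard deviation of daily log returns, scaled by the
    square root of 252 trading days.
    """
    log_returns = np.log(closes / closes.shift(1))
    return log_returns.rolling(window).std() * np.sqrt(252)

# Hypothetical daily VIX closes, purely for illustration
vix_closes = pd.Series([17.8, 18.2, 19.5, 21.0, 20.3, 19.1, 18.7, 18.9,
                        20.5, 22.1, 21.4, 20.8, 19.9, 19.2, 18.6, 18.8,
                        19.7, 20.2, 21.5, 23.0, 22.4])

print(historical_volatility(vix_closes).dropna())
```

The implied volatility of the VIX, by contrast, would be backed out of VIX option prices rather than computed from the index's own history, which is precisely why lumping the two together under one label invites confusion.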

It sounds like it is about time for another VIX 101 post to clarify some of this…

1 comment:

  1. Talking about "second derivatives" or any other derivative on a discontinuous, finite (& small) data set such as the VIX doesn't make any sense to a mathematician. The assumptions are, or seem to be, that the data set contains an infinite number of points (contradicted by reality) and that the points are arbitrarily close together (less than epsilon for all positive epsilon).

    Engineers seem to have a hard time understanding that performing a mathematical operation and getting a result doesn't mean that what you've got makes any sense at all.
