At UC Investments, we apply three levels of quantitative risk measurement to our decision making — Risk 1.0, 2.0 and 3.0 — with each level building on, not eliminating, the previous one’s value.
Risk 1.0 is the standard risk-measurement approach that has been used for over two decades. Here, the returns of the current portfolio are evaluated by looking back at history and trying to answer these questions: If the current portfolio had been held over the past, how much would its returns have varied? How volatile would it have been?
Using the past two years of returns as a guide to future risk, we find our General Endowment Pool (GEP) has an annualized volatility of 7.2% and the UC Retirement Plan (UCRP) an annualized volatility of 7.5%. Putting this in intuitive terms, we should not be too surprised if, over the course of a year, returns vary by seven percent or so. In more technical terms, the 7.2% is the one-standard-deviation range for returns, and we can expect returns to fall within that range with roughly a 68% probability.
To add some context, applying the same measure to a portfolio holding only the S&P 500 gives a volatility of 10%. So both GEP and UCRP are about three-fourths as volatile as holding the S&P 500 index.
But the recent past has not been typical. Over a longer time period, the S&P 500 has been more in the 15- to 20-percent range, and our portfolios have tracked closer to 10%, so still about three-fourths the risk of holding the S&P 500, but higher risk in absolute terms.
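The Risk 1.0 calculation described above can be sketched in a few lines. The daily returns here are simulated stand-ins (the mean and daily volatility are assumptions chosen to land near a 7% annualized figure), not actual GEP or UCRP data:

```python
import numpy as np

# Simulated daily returns standing in for two years of portfolio
# data (~252 trading days per year); parameters are illustrative.
rng = np.random.default_rng(0)
daily_returns = rng.normal(loc=0.0003, scale=0.0045, size=2 * 252)

# Annualized volatility: daily standard deviation scaled by sqrt(252).
annual_vol = daily_returns.std(ddof=1) * np.sqrt(252)

# One-standard-deviation band around the annualized mean return;
# under a normal assumption, returns land inside it ~68% of the time.
annual_mean = daily_returns.mean() * 252
low, high = annual_mean - annual_vol, annual_mean + annual_vol
print(f"annualized vol: {annual_vol:.1%}; 1-sigma band: {low:.1%} to {high:.1%}")
```

The square-root-of-time scaling is the standard shortcut for annualizing daily volatility; it assumes returns are roughly independent from day to day.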
We’ll discuss more about the implications of our current low-volatility regime later in this report, but the important point here is that low volatility now can set the stage for higher risk down the road.
After the 2008 crisis, it became apparent that Risk 1.0 failed because the risks in the crisis did not look like those of the past. So a new way to look at risk arose, Risk 2.0: stress and scenario analysis.
With stress analysis, we are not wedded to historical asset performance as the guide. Instead, we hypothesize about the effect of various scenarios, each of which could include a variety of events and market stresses. Three illustrative scenarios we are considering in the current environment are:
- Stresses in specific markets
“What will happen to our portfolios if stocks drop by 10 percent?”
- Multi-faceted scenarios
“What will happen to our portfolios if China has a credit crunch, with all of the market dislocations that this would imply?”
- Stresses in history, Risk 1.0-style
“What would happen to our portfolios if the 2013 ‘Taper Tantrum’ were to occur again?”
Since we don’t know what crises are on the horizon, we stress test our portfolios by applying two market shocks: a 10% drop in the S&P 500 Index and a 100-basis-point rise in the U.S. 10-year Treasury Bond. Each of the hypothetical scenarios combines shocks to a variety of markets and is based on how these various markets would likely become embroiled in the event. So capturing the risk for these two stressors is more than simply a matter of taking our exposure to the S&P 500 and the U.S. Treasury, respectively, and multiplying these exposures by the shock we are hypothesizing.
Our public equity positions drop essentially one-to-one with the shock to the S&P 500. For the Treasury shock, we lose 3.4% in our fixed income positions because when rates rise, bond prices drop. However, we more than make up for that loss through the effect of the rate increase on our equities.
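The arithmetic behind such a Risk 2.0 shock can be sketched as follows. The weights and sensitivities below are hypothetical assumptions, not our actual exposures; the point is that each shock is applied across all asset classes, including cross-effects, not only the market directly hit:

```python
# Illustrative Risk 2.0 stress test; all numbers are assumptions.
weights = {"public_equity": 0.55, "fixed_income": 0.20, "other": 0.25}

# Per-shock impact on each asset class, with cross-effects: a rate
# rise hurts bonds directly but, per the discussion above, modestly
# helps equities in this scenario.
sensitivities = {
    "sp500_down_10pct": {"public_equity": -0.100, "fixed_income": 0.000, "other": -0.040},
    "rates_up_100bp":   {"public_equity": +0.020, "fixed_income": -0.034, "other": -0.010},
}

results = {}
for shock, impact in sensitivities.items():
    # Portfolio P&L is the exposure-weighted sum across ALL asset
    # classes, not just the one directly shocked.
    results[shock] = sum(weights[a] * impact[a] for a in weights)
    print(f"{shock}: {results[shock]:+.2%}")
```

With these assumed numbers, the rate shock nets out slightly positive because the equity cross-effect more than offsets the fixed income loss, mirroring the pattern described above.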
We are exposed to all three scenarios we consider here, most notably to an increase in market volatility and a Chinese credit crunch, either of which would lead to a drop in our equity positions in the 8% to 9% range. A repeat of the 2013 “Taper Tantrum” shock hits both our equity and fixed income positions. A large Federal Reserve balance sheet unwind could spark a similar episode.
Though Risk 2.0 gives us more risk guidance than 1.0, it still doesn’t get us where we want to be. The issue, as we all know from the financial crisis of 2008, is that one problem leads to the next, with cascading — and sometimes snowballing — dynamics that can embroil the market. The initial shock is never the end of the story.
And the plot of the story is often intricate and unpredictable. For example, a market drop will force those who are leveraged to sell. Their selling pushes prices down further. When they can’t sell enough in a market that is under pressure, they start to sell other assets in their portfolio, which creates contagion. With prices dropping and volatility increasing, potential buyers pull back and funding dries up. The result can be a “fat tail” risk, a risk that emerges down the road from the initial shock as these various dynamics gather speed. History-based Risk 1.0 cannot pick up on these dynamics: they occur infrequently, and each time they are different. So the standard 1.0 depiction of risk is that it grows symmetrically and smoothly over time.
So to really deal with risk, we have to capture and understand this dynamic, which is what Risk 3.0 is built to do. The foundation of Risk 3.0 is a method called agent-based modeling, which seeks to capture the dynamic evolution of financial market contagion. Agent-based models have been used for years in other fields to understand, for example, the emergence of traffic congestion on roadways or of panics and stampedes during fires. If it sounds like this type of modeling should carry over to crisis behavior in the financial sector, you’re right. And by comparing the implications of following these dynamics through for various market stresses with the shock-test results from Risk 2.0, we get a more complete picture of what we’re facing and how we can best react.
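A toy version of the fire-sale dynamic that agent-based models capture can be sketched in a few lines. Everything here is an assumption for illustration (fund count, leverage targets, market depth); this is not the Risk 3.0 model itself, but it shows how forced deleveraging amplifies an initial price shock:

```python
import numpy as np

rng = np.random.default_rng(1)
n_funds = 50

price = 1.0
units = rng.uniform(50.0, 150.0, n_funds)        # asset units held per fund
target_lev = rng.uniform(2.0, 6.0, n_funds)      # target leverage (assets / equity)
debt = units * price * (1.0 - 1.0 / target_lev)  # borrow so leverage starts on target

impact = 2e-5      # fractional price drop per unit sold (assumed market depth)
price *= 0.90      # exogenous initial shock: price falls 10%
path = [price]

for _ in range(15):
    equity = units * price - debt
    # Insolvent funds dump everything; levered funds sell just enough
    # to restore target leverage (a sale itself leaves equity unchanged).
    forced_sales = np.where(
        equity <= 0.0,
        units,
        np.maximum(units - target_lev * equity / price, 0.0),
    )
    total_sold = forced_sales.sum()
    if total_sold < 1e-9:
        break
    units = units - forced_sales
    debt = np.maximum(debt - forced_sales * price, 0.0)
    price *= max(1.0 - impact * total_sold, 0.0)  # selling pressure moves the price
    path.append(price)

print(f"initial shock: price 1.00 -> 0.90; after cascade: {price:.3f}")
```

Each round of forced selling pushes the price below where the initial shock left it, which triggers further selling; with these assumed parameters the cascade dies out, but a deeper shock or thinner market depth can make it snowball instead.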
We are on the leading edge of Risk 3.0 in the industry, developing the models and data sources needed to manage our risk in this revolutionary way. This involves crowdsourcing data and surveying the various financial market agents to understand the leverage, liquidity and concentration each of them holds and the rippling effects they may cause. Though we are still in what might be called the beta version of Risk 3.0, we believe it will allow us to steel ourselves against the next crisis. Or maybe even profit from it.
Risk 3.0 Case Study:
A low-volatility environment
Volatility is currently at or near its lowest levels in over 20 years for many asset classes. Paradoxically, a sustained period of low volatility breeds increasing risk because life seems easy; investors are more willing to take on leverage, market makers are more ready to provide liquidity and funding is easier to come by. Complacency sets in.
If we look at our exposure to a rise in volatility through the simplest lens, that of Risk 1.0, it appears risk is minimal. We don’t have option exposure, so a mechanical calculation of the change in the value of our positions with a change in volatility will come out to be close to zero.
But when we go to Risk 2.0 and take into account the broader scenario of asset markets that tend to be affected by a rise in volatility, we find our exposure is not insignificant. Extending the Risk 2.0 scenario beyond public equities and fixed income to include our other asset classes — using both BlackRock and internal methodologies — we calculate a loss from a sudden 20% rise in equity volatility as measured by the index to be 5.8% in GEP and 6.1% in UCRP.
That, however, would not be the end of the story. Moving to Risk 3.0, we consider the cascades and contagion that would come from these dynamics and feedback. For example, those with high leverage and those targeting a predetermined level of volatility would be forced to trim their portfolios, and those invested in volatility-related Exchange-Traded Funds (ETFs) would react to large declines in the value of these instruments. After these rippling effects, we would expect losses to be larger than they would appear if we stopped at Risk 1.0 or 2.0.