Lower portfolio risk by screening for quality

By Scot Blythe | July 3, 2012 | Last updated on July 3, 2012

Stock markets abound in risks: some old, such as beta or old-fashioned value traps, and some new, such as liquidity.

Yet investors aren’t being rewarded for these risks.

That’s one of the considerations behind low-volatility portfolios, argues Marcus Xu, this year’s winner of the AIMA Canada Research Award. His paper, “Naïve low volatility equity portfolios are risky: A practical study in successful implementation,” is based on his experiences running two low-volatility equities portfolios at Genus Capital Management in Vancouver.

Read: When markets give you volatility, invest

Modern portfolio theory suggests that the greater the risk, the greater the potential reward, for those with the appetite. Investors position themselves along the efficient frontier, choosing the maximum risk they’re willing to incur by mixing stocks, bonds and cash.

This is the mean-variance approach to portfolio allocation: the trade-off between risk and return. But as Xu points out, indexes are not mean-variance efficient. If they were, riskier, more volatile stocks would produce heftier returns. Instead, researchers have found that lower-volatility stocks can actually outperform the index.
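In concrete terms, the minimum-variance idea is: given an estimated covariance matrix of stock returns, find the fully invested weights with the lowest portfolio variance. A minimal Python sketch of the unconstrained case (the covariance numbers are invented for illustration; this is not Xu's model):

```python
import numpy as np

# Toy covariance matrix for three stocks (illustrative numbers only).
cov = np.array([
    [0.040, 0.010, 0.008],
    [0.010, 0.090, 0.012],
    [0.008, 0.012, 0.160],
])

# Closed-form minimum-variance weights: w = inv(C)1 / (1' inv(C) 1),
# the fully invested portfolio with the lowest possible variance.
ones = np.ones(len(cov))
x = np.linalg.solve(cov, ones)        # inv(C) @ 1
w_minvar = x / x.sum()

print("min-variance weights:", w_minvar.round(3))
print("portfolio volatility:", np.sqrt(w_minvar @ cov @ w_minvar).round(4))
```

The closed form only exists in this unconstrained case; once position caps and turnover limits enter, a numerical optimizer is needed, which is where the practical problems below begin.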

Still, minimum-variance portfolios aren’t easy to implement. The approach Xu calls naïve focuses on achieving the lowest possible standard deviation at all times. But then, “You’re constantly trading your portfolio to get to that lowest point. When that happens, your transaction costs are super high because things change every day.” That reduces alpha.

Thus, a pure minimum-variance approach doesn’t work. Xu instead imposed constraints, shifting from a minimum-variance model to a low-volatility portfolio. Instead of the nearly 30% monthly turnover the naïve model produced, he capped turnover at 10%. Otherwise, he says, “you can eat away most of your alpha.”

But even then, his backtests weren’t giving him a clear reading of alpha. One reason was another constraint: he limited individual position size to 4% of the portfolio. That ensured he wasn’t overexposed to Nortel at the peak of the tech bubble, but it also introduced some poorer-quality consumer names to the portfolio.
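A hedged sketch of how those two constraints, the 10% turnover cap and the 4% position limit, might be wired into a variance minimizer, using scipy's general-purpose solver. The function, the random covariance and the exact constraint formulation are illustrative assumptions, not Genus's actual optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def low_vol_weights(cov, w_prev, max_weight=0.04, max_turnover=0.10):
    """Minimize portfolio variance subject to full investment, a per-name
    weight cap and a monthly turnover limit (a sketch, not Xu's optimizer)."""
    n = len(cov)
    constraints = [
        # fully invested: weights sum to one
        {"type": "eq", "fun": lambda w: w.sum() - 1.0},
        # turnover (half the sum of absolute weight changes) stays under the cap;
        # the absolute value makes this non-smooth, so treat results as indicative
        {"type": "ineq",
         "fun": lambda w: max_turnover - 0.5 * np.abs(w - w_prev).sum()},
    ]
    bounds = [(0.0, max_weight)] * n  # long-only, with the position-size cap
    res = minimize(lambda w: w @ cov @ w, w_prev, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x

# Toy usage: 50 names, random positive-definite covariance, equal-weight start.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8))
cov = A @ A.T / 8 + np.eye(50) * 0.05
w = low_vol_weights(cov, np.full(50, 0.02))
```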

“When you try to focus only on risk, you don’t care about the quality of the [companies],” he explains.

Read: Gauging quality companies

That led him to add two further sets of screens, or constraints. The first was a group of fundamental criteria to eliminate poorer-quality companies: those with low dividends, for example, or with dividends at risk of being cut. The second was a statistical risk model.

With the fundamental analysis, Xu found some companies weren’t earning enough to cover their dividends. Others carried high financial leverage. While those stocks may show low-risk attributes in good times, in bad times they’re the first to be dumped as investors grow nervous about the health of the company.
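As an illustration, a screen along those lines, dividend coverage plus a leverage ceiling, could be expressed in a few lines of pandas. The column names and thresholds here are hypothetical, not the paper's actual criteria:

```python
import pandas as pd

# Hypothetical fundamentals table; tickers and figures are invented.
stocks = pd.DataFrame({
    "ticker":         ["AAA", "BBB", "CCC", "DDD"],
    "eps":            [2.00, 0.40, 3.10, 1.20],   # earnings per share
    "dps":            [0.80, 0.60, 1.00, 0.00],   # dividends per share
    "debt_to_equity": [0.6, 2.5, 0.9, 1.1],
})

# Keep names that pay a dividend, cover it with earnings
# (dividend not at risk), and carry only modest financial leverage.
pays_dividend   = stocks["dps"] > 0
covered         = stocks["eps"] >= stocks["dps"]
modest_leverage = stocks["debt_to_equity"] <= 1.5
quality = stocks[pays_dividend & covered & modest_leverage]

print(quality["ticker"].tolist())   # ['AAA', 'CCC']
```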

The fundamental criteria helped the Canadian portfolio somewhat in overcoming the problems of the naïve minimum-variance portfolio; for the global portfolio, they helped more. With the Canadian portfolio, however, Xu had to add a statistical risk model to get the best risk-adjusted results.

Advisors are already familiar with this kind of analysis, at least in one form. The R-squared statistic shows how much of a manager’s returns can be explained by a benchmark’s returns. It doesn’t compare the fundamental factors (such as dividend yield) of the manager’s portfolio and the index; it simply measures whether the two are driven by the same statistical factors.
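For a single benchmark, R-squared is simply the squared correlation between the two return series, as this small sketch (with simulated returns) shows:

```python
import numpy as np

def r_squared(manager, benchmark):
    """Share of the manager's return variation explained by the benchmark:
    with one regressor, R-squared equals the squared correlation."""
    return np.corrcoef(manager, benchmark)[0, 1] ** 2

# Simulated example: a benchmark-hugging manager over 120 months.
rng = np.random.default_rng(1)
bench = rng.normal(0.005, 0.04, 120)            # monthly benchmark returns
mgr = 0.9 * bench + rng.normal(0, 0.01, 120)    # benchmark-driven plus noise
print(round(r_squared(mgr, bench), 2))          # high: most variation is benchmark-driven
```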

A statistical risk model takes this a step further: through a series of mathematical exercises, it tries to determine which statistical factors drive the risk and return of a particular stock. Xu uses a 15-factor model to optimize his low-volatility portfolio.
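The article doesn't say how Xu's 15 factors are built. A common way to extract statistical factors from a panel of returns is principal components analysis, sketched here under that assumption:

```python
import numpy as np

def statistical_factors(returns, k=15):
    """Extract k statistical factors from a T x N panel of stock returns
    via principal components (one common construction; the paper's exact
    method may differ)."""
    demeaned = returns - returns.mean(axis=0)
    cov = np.cov(demeaned, rowvar=False)     # N x N sample covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]         # indices of the k largest
    loadings = vecs[:, top]                  # N x k: each stock's exposures
    factor_returns = demeaned @ loadings     # T x k: factor time series
    return loadings, factor_returns
```

The loadings describe how exposed each stock is to each statistical factor, which is what the optimizer needs to keep the portfolio's factor risks in check.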

“Actually, the statistical model did make a big difference in Canada, but if you look at the global setting, that doesn’t seem to be the case,” Xu explains. “For me, the key takeaway in my research is that it looks like there are both cases — there are different portfolio paths over 10 or 12 years. Sometimes, fundamentals drive risk, and sometimes it’s statistical factors.”

Read: Alpha versus beta

His conclusions? For starters, reducing volatility isn’t enough. The quality of the positions counts too.

“That complicates the computations, [but] if you put in a quality constraint, which is a stock-picking model, it actually makes the overall performance a whole lot better.”

Beyond that, it turns out that yield is a proxy for quality. “A lot of the research showed that because these names are low-vol[atility], they’re stable businesses and most likely pay a dividend,” Xu says. “So the portfolio is going to have a decent yield.” That’s a byproduct of the screening process, and it doesn’t hurt when the goal is beating the total return of a passive index.
