Having selected a number of assets to include in a portfolio, an investor faces a key decision: “What percentage of my available funds should I allocate to each asset held in my portfolio?”

In the Feynman Study I covered a number of common options:

1. Use of a Strategic Asset Allocation Plan (SAA) – Platinum Members will be very familiar with this (discretionary) approach since Lowell uses this method extensively on this site and in his blog posts to manage many of the ITA Portfolios.

2. Use of Modern Portfolio Theory (MPT) and Efficient Frontier (EF) analysis to calculate asset weights based on Minimum Variance Optimization (MVO) and optimal Return/Risk parameters. This method might also be combined with the use of a SAA Plan, as noted above, to provide some “asset class” constraints. Again, Lowell has described this approach in many of his blog posts.

3. Simple Equal Weight allocations to all assets in the portfolio. Although I did not apply this method to SAA portfolios (except for the 50/50 “asset class” portfolio), I did use it as a reference for momentum-ranked portfolios.

4. Weighting based on momentum “strength” for momentum-ranked portfolios (with or without the application of cluster analysis).

Although I did not use it in the Feynman Study, simple Rank Weighting (RW) might also be applied. For example, in a 4-asset portfolio (N = 4), the rank numbers sum to 1 + 2 + 3 + 4 = 10, so the 1st, 2nd, 3rd and 4th ranked assets would be assigned N/10 = 40%, (N-1)/10 = 30%, (N-2)/10 = 20% and (N-3)/10 = 10% respectively. Assets might be ranked based on any reasonable parameter, but momentum is probably the most common.
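The rank-weighting arithmetic above generalizes to any N; a minimal sketch (the function name and layout are mine, not taken from the Ranking SS):

```python
def rank_weights(n):
    """Weight the i-th ranked asset (i = 1 is the top rank) by
    (n - i + 1) / (1 + 2 + ... + n), so weights sum to 100%."""
    total = n * (n + 1) // 2  # sum of the rank numbers 1..n
    return [(n - i) / total for i in range(n)]

print(rank_weights(4))  # [0.4, 0.3, 0.2, 0.1] - the 40/30/20/10 split above
```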

One “popular” method that I did not use in the Study, but that has been mentioned elsewhere on this site, is the concept of Risk Parity (RP). The objective of Risk Parity is to distribute Total Portfolio Risk evenly between all assets in the portfolio. To be honest, I have never been a big advocate of RP – although I (thought I) understood the basic concept, I never felt confident that it was totally valid, and I couldn’t pinpoint why I didn’t trust it.

As regular readers will know, I am a big fan of Adam Butler and his colleagues at www.GestaltU.com. Butler et al have written many papers and blog posts on the subject of Risk Parity and have coined the term “Naive” Risk Parity for the common form of RP that I have always understood. “Naive” Risk Parity simply weights assets in proportion to the inverse volatility of each asset in the portfolio. In practice this means that bonds will generally be weighted more heavily than equities since they have lower volatility. So, does “Naive” RP achieve the objective of equal risk distribution? The answer to this question is *“sometimes”*. But why only “sometimes”? It took Butler et al to provide me with the answer – **the theory is only valid if the correlation between all assets is equal to 1** – which, of course, is generally a false assumption.
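For reference, “Naive” RP reduces to a one-line inverse-volatility calculation; a hedged sketch with illustrative (made-up) volatilities:

```python
def naive_risk_parity(vols):
    """Weight each asset in proportion to 1/volatility, normalized to sum to 1."""
    inv = [1.0 / v for v in vols]
    total = sum(inv)
    return [x / total for x in inv]

# Illustrative annualized volatilities: an equity ETF at 20%, a bond ETF at 5%
w = naive_risk_parity([0.20, 0.05])
print(w)  # the bond receives 4x the equity weight (80% vs 20%)
```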

Butler et al provide some ideas as to how Risk Parity might be modified to account for correlations and variance but, unfortunately, I would not have been able to apply these ideas without the valuable insights and suggestions of David Varadi at www.cssanalytics.wordpress.com/ . Over the past few months I have been modifying the Momentum Ranking spreadsheets to include adaptations of many of the ideas provided by these excellent analysts.

Although the revised SS is still in development mode, and not yet suitable for distribution, I thought I would write a short series of Posts with a few screenshots showing new information that might be useful to Platinum Members.

The following screenshot shows current Rankings of the 18 ETFs in the Feynman Asset List. Regular readers will be familiar with this sheet. A major new feature is the ability to choose any period for either the mean variance (volatility) or the semi-variance.

The screenshot below shows possible allocations that might be used to construct a 10 asset portfolio:

Since this is a 10 asset portfolio, the portfolio comprises the assets ranked 1-10 in the top figure.

The first column assumes an equal-weighted portfolio with each ETF assigned a weighting of 10%. The SS automatically calculates (in the second column) the number of shares required to establish this position (based on the total portfolio value – $100,000 in this example).

The third column shows the required weightings using a Rank Weighted model (as described above) with the associated shares to be purchased calculated in the fourth column.

The fifth column shows the suggested allocations based on the algorithm we have been using to determine allocations based on relative momentum “strength”. Again, the shares required to achieve these percentages are shown in the last column.
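The equal-weight and rank-weight columns, and the conversion from percentage weights to whole shares, can be sketched as follows (the prices and the $100,000 portfolio value are placeholders for illustration; the real sheet pulls live quotes):

```python
def shares(weights, prices, portfolio_value=100_000):
    """Whole shares to buy for each asset at the given prices
    (fractional shares are truncated, as when placing real orders)."""
    return [int(portfolio_value * w / p) for w, p in zip(weights, prices)]

n = 10
equal_weights = [1.0 / n] * n                            # 10% per ETF
rank_total = n * (n + 1) // 2                            # 1 + 2 + ... + 10 = 55
rank_wts = [(n - i) / rank_total for i in range(n)]      # top rank gets 10/55

prices = [50.0] * n  # placeholder prices, illustration only
print(shares(equal_weights, prices))  # 200 shares of each ETF at $50
```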

Apart from the Rank weighting numbers there is nothing new in how these numbers are calculated.

Although a 10 asset portfolio is shown in the above figures, the number of assets can be changed by selecting the number of assets required from a drop-down menu (yellow cell).

Finally, I have included the ability to apply a Risk Parity correction to the allocations calculated in the second figure above. The figure below shows the adjusted allocation after applying the risk parity correction to the basic momentum ranking allocations:

The Risk Parity adjustment is a complex calculation that requires the generation of correlation and volatility information and the combination of these to produce the Risk Parity adjustment factor. This is done in new worksheets within the workbook.
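I can’t reproduce the exact calculation in the workbook here, but to show the general shape of a correlation-aware adjustment, the following is a rough sketch in the spirit of Varadi’s minimum-correlation ideas: inverse-volatility weights are tilted toward assets with low average correlation to the rest of the portfolio. All names and numbers are mine, for illustration only:

```python
def corr_adjusted_weights(vols, corr):
    """Tilt inverse-volatility weights toward low-correlation assets.

    A sketch only - not the calculation used in the ITA workbook.
    vols: list of asset volatilities; corr: full correlation matrix.
    """
    n = len(vols)
    # average correlation of each asset to the *other* assets
    avg_corr = [(sum(corr[i]) - 1.0) / (n - 1) for i in range(n)]
    # lower average correlation -> larger tilt factor
    tilt = [1.0 - c for c in avg_corr]
    raw = [t / v for t, v in zip(tilt, vols)]
    total = sum(raw)
    return [r / total for r in raw]

vols = [0.20, 0.18, 0.05]              # two equity ETFs and a bond ETF
corr = [[1.0, 0.8, -0.2],
        [0.8, 1.0, -0.1],
        [-0.2, -0.1, 1.0]]
weights = corr_adjusted_weights(vols, corr)
print(weights)  # the low-vol, negatively correlated bond dominates
```

Note how the two highly correlated equity ETFs are penalized relative to naive inverse-volatility weighting, which is exactly the effect the correlation correction is meant to capture.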

The above data is generated without the need for any cluster analysis. In the next Post I will show examples of the RP adjustment to portfolios selected through cluster analysis.

Please let me have your feedback as to whether you feel these may be useful additional features to the Ranking SS.

David

Leland Felgner says

Hedgehunter, definitely very useful additional features for the ranking spreadsheet. I like the fact that multiple allocations are presented. The investor can choose whichever they are comfortable with.

I will be very interested to see your implementation of risk parity

Rick Rogers says

HH: this discussion is very timely for me since I have recently been looking into Risk Parity for the first time.

The Hoadley add-in contains RP functions that I have started to experiment with. Perhaps that is what you are using.

I’m just learning how to use it in a SS and it was trying because I haven’t used Excel arrays in many years.

The function requires a correlation matrix and a volatility array for the proposed portfolio and these are available from the ITA spreadsheets that utilize Hoadley.

I’ll be curious to compare my results with yours.

thanks to you and Lowell for your generous contributions,

Rick

HedgeHunter says

Rick,

I’ve looked at the Hoadley add-in and have played with the RP options a little – although it’s a little complex to use. My current Ranking SS that I’m beginning to introduce here does not use the Hoadley add-in (other than for cluster analysis), but I have in mind that the outputs could be fed back into the Hoadley SS at some point in the future. It gets a little complex and definitely will not be for the majority of investors.

Be sure to read David Varadi’s blogs on Minimum Correlation and Minimum Variance RP adjustments – I’m using the first of these methods in the current posts, but I’ve looked at both.

David

Lowell Herr says

David,

1) I assume the number of shares are based on a $100,000 portfolio – correct?

2) Do you plan to back-test each of the six models and if so, over what period? Likely going back to June 6, 2006? Any gut feelings as to which might be preferable?

Lowell

PS Great article and eager to see spreadsheet.

HedgeHunter says

Lowell,

1) Yes, $100k as per the second paragraph below the second figure.

2) I’m not sure that I will be backtesting all the possible options – as you probably appreciate (after your recent backtesting on 9 ETFs), backtesting is a VERY time-consuming process, particularly if cluster analysis is involved. It took me about a week of concentrated effort to get through some of the tests for the Feynman Study. In addition, my next couple of posts will multiply the number of options shown in the above post by 4, i.e. 24 in total (not including the option of changing the number of assets to be used in the portfolio) – so I will not be running 24+ new backtests. Instead I will pick a few examples that, hopefully, give us some idea of the impact of using different weighting options.

As for which might be preferable, that will depend on what an investor is looking for. Here’s my “gut” feeling as to what the results “might” show.

Taking an equal weighted 10 asset portfolio as a baseline for comparison:

1) reducing the number of assets will increase returns at the expense of volatility (risk);

2) alternative weighting options (Rank, Momentum weighting) may improve returns (similar volatility?);

3) adding a risk parity correction should reduce volatility, but at the expense of returns – remains to be seen how severe this might be but Sharpe (and/or) Sortino ratios “should” be better.

Just my guesses – don’t act on them at this point or shoot me if I’m wrong 🙂

There’s another major option coming in addition to the risk parity correction – 2 posts from now – probably on Saturday.

David

Lowell Herr says

David,

I would never shoot anybody. I’m a “Not One More” advocate. As you know, we had a shooting here in the Portland area this week. Enough of that.

Looking forward to your next posts. This is interesting material. As I’ve mentioned before, I think Risk Parity has been working recently due to the decline of interest rates and that has aided bond performance. Risk Parity may be one of those “anomalies” that worked over the past few years as equities struggled through two major bear markets.

Lowell

HedgeHunter says

Lowell,

Yes, “Naive” Risk Parity can be a bit of an anomaly. Introducing the correlation correction may make it a far more reliable adjustment to portfolio weightings.

David