Robo-for-advisor: the return of the not-so-optimal optimizers

Among the “buzzwords” of the moment we certainly find “robo-for-advisor”: in other words, analysis and portfolio-construction tools for relationship managers. It is a technology created to support professionals, helping them reach their full potential in productivity and in the quality of their customer service. It looks like the optimal combination of man and technology, given that pure roboadvisors remain a niche market destined to stay that way for a while (considering that “the human touch” seems unavoidable, at least for now).

Great, theoretically.

In practice, however, I have noticed that several financial institutions are a step behind. So they rush to remedy this, hastily equipping their relationship managers – i.e. private bankers, employee and independent financial advisors, branch operators – with portfolio-construction tools. I have seen several of these tools, good and bad. Many of them suffer from a long-standing problem, and I would like to share some reflections about it.

 

A never-ending story

The problem, which resurfaces periodically (it already happened with the first wave of roboadvisors), is that most of these robo-for-advisor systems are based on a naïve application of Markowitz’s Modern Portfolio Theory – in short, naïve-Markowitz.

It is upsetting to see that, despite some thirty years of academic research, Homo sapiens manages to be so superficial as to turn an inspiring, brilliant idea into pseudo-scientific garbage – in this case Harry Markowitz’s idea, which consists in explicitly seeking a trade-off between risk and return by means of mathematical programming techniques. The naïve-Markowitz recipe, on the other hand, is methodologically frightening and practically dangerous for customers, professionals and the reputation of the firm. Let’s see why.

The trick is that the naïve-Markowitz process is simple yet seemingly scientific: you define the investment universe (asset classes, funds, ETFs, etc.), you take a few years of historical series, you ignore the empirical probability distribution and assume it is Gaussian instead, you estimate the parameters (covariance matrix and vector of means), you feed everything into a quadratic-programming solver, and finally you press the button. Et voilà! Out comes the mythical efficient frontier of portfolios, with a scenic curve and expected returns specified to the second decimal place – maybe even the third, depending on the software.
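
To make the recipe concrete, here is a minimal sketch of that pipeline in Python. It is illustrative only: `prices` is a hypothetical DataFrame of a few years of historical prices, and the risk-aversion coefficient is an arbitrary choice of mine.

```python
# Minimal sketch of the naive-Markowitz pipeline (illustrative only).
# Assumes `prices` is a pandas DataFrame of historical prices, one column per asset.
import numpy as np
import pandas as pd
from scipy.optimize import minimize


def naive_markowitz(prices: pd.DataFrame, risk_aversion: float = 4.0) -> pd.Series:
    rets = np.log(prices / prices.shift(1)).dropna()  # historical log-returns
    mu = rets.mean().values                           # sample mean vector
    sigma = rets.cov().values                         # sample covariance matrix
    n = len(mu)

    # Mean-variance objective: maximize mu'w - (risk_aversion / 2) * w'Sigma w
    def neg_utility(w):
        return -(mu @ w - 0.5 * risk_aversion * w @ sigma @ w)

    res = minimize(
        neg_utility,
        np.full(n, 1.0 / n),                                           # start from equal weights
        bounds=[(0.0, 1.0)] * n,                                       # long-only
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
    )
    return pd.Series(res.x, index=prices.columns, name="weight")
```

Sweep the risk-aversion coefficient over a grid and you get the “efficient frontier”. Nothing in this code is wrong as code; the problem, as we will see, is what goes in and what comes out.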

But there’s a problem: those portfolios do not make sense, except perhaps by chance. Literally: the portfolio weights are de facto random. This is because the parameter estimation error is typically enormous. Moreover, the portfolios are built on a single historical path that is unlikely to repeat itself. Finally, the underlying hypotheses are far from reality (returns are not Gaussian at all, and the parameters of the data-generating processes do not remain constant over time) – but this, let me say, is the lesser of two evils.

 

It is intuitive that, out there in the real world, such portfolios are bound to run into trouble. As soon as markets deliver a shock, everything will suddenly seem much less scientific, amid customers’ protests and advisors’ complaints (“the robo-for-advisor system does not work”).

I suspect that many of you think I am rambling about a subtle technical issue that is irrelevant in practice. Nothing could be more wrong: it is a technical matter, but the figures below will show you the extent of the problem in practice – that is, the impact on the business.

In any case, at the root of the problem lies not the bad luck of the investor and his advisor, but the butterfly effect.

 

The butterfly effect

It is a remarkable concept from chaos theory and the theory of complex systems. The idea, which you probably already know, goes like this: the flutter of a butterfly’s wings in Brazil can set off a chain of events in the atmosphere that could in turn trigger a tornado in Texas. Generally speaking, small variations in the variables of a system can produce large effects.

This is exactly what happens with the naïve-Markowitz models mentioned above: errors in the estimation of the inputs make their way into the algorithms that produce the final asset allocation, grow along the way, and end up having a huge impact on the accuracy of the output – to the extent that the naïve application of Markowitz is known as an “error maximization” model. Since the idea is a bit abstract, let’s look at the situation first-hand with a small numerical example.

Imagine that you are the god of financial markets. Consider 25 asset classes, for which you kindly decree that the probability distribution of monthly log-returns is Gaussian, with volatility increasing from 1% to 25%, a Sharpe ratio of 0.3 for every asset class, and a covariance matrix with constant correlation (hypotheses chosen only to make the example reasonable and easy to follow, nothing else).
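
To fix ideas, this is one way the “true” market of the example could be encoded. Everything below is an assumption of mine for illustration: the article does not specify the correlation level (here set to 0.5) nor whether the figures are annualized.

```python
# The "true" market of the example: 25 Gaussian asset classes, volatilities rising
# from 1% to 25%, Sharpe ratio 0.3 for all of them, constant pairwise correlation.
import numpy as np

N_ASSETS = 25
vols = np.linspace(0.01, 0.25, N_ASSETS)   # volatilities from 1% to 25%
mu_true = 0.30 * vols                      # Sharpe ratio 0.3 (risk-free rate assumed zero)
rho = 0.50                                 # constant correlation (level assumed, not given)
corr = np.full((N_ASSETS, N_ASSETS), rho)
np.fill_diagonal(corr, 1.0)
sigma_true = np.outer(vols, vols) * corr   # "true" covariance matrix
```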

Under these conditions, according to the naïve-Markowitz model, for a medium risk profile an “excellent” long-only portfolio has the asset-class weights (ordered by volatility) shown in the figure below.

 

[Chart 1 – true optimal weights]

 

At first glance, the portfolio already seems rather reasonable: the weights are well spread out; the less risky assets carry more weight (remember that this is a medium-risk portfolio), roughly 50% of the total, while the most volatile assets account for about 20%. The diversification index is 96%, very high.

 

In this hypothetical world this is the absolute truth, because there is neither a specification error linked to the choice of model nor any measurement (estimation) error on the parameters: we are looking at the “true” optimal portfolio.

Now let’s shift our perspective: say you are a robo-for-advisor given a sample of five years of data generated by the probability distribution above, the one set by the god of the markets. Given the hypotheses, it can be shown that the model specification error is zero; there is only a measurement error, pure sampling error. So you recompute the optimal weights according to the naïve-Markowitz model and set them aside.

Then, as if in a time warp, you are handed another five years of data generated by the same multivariate distribution. Another sample. Another “possible world”. So you repeat the exercise. And again, 10,000 times: 10,000 possible scenarios.
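
A sketch of this resampling experiment, continuing the snippets above (it reuses `mu_true` and `sigma_true`, and re-states the long-only optimizer so that it takes parameters directly): draw 10,000 five-year monthly samples from the true distribution, re-estimate the parameters, re-optimize, and record how far the estimated weights land from the true optimal ones. The diversification measure below is one common choice (the diversification ratio), not necessarily the index quoted in this article.

```python
# Monte Carlo version of the "possible worlds" exercise (sketch; lower N_SCENARIOS for a quick run).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_SCENARIOS, N_MONTHS, RISK_AVERSION = 10_000, 60, 4.0
n = len(mu_true)


def optimize(mu, sigma):
    """Long-only, fully invested mean-variance optimization for given parameters."""
    def neg_utility(w):
        return -(mu @ w - 0.5 * RISK_AVERSION * w @ sigma @ w)

    res = minimize(
        neg_utility,
        np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
    )
    return res.x


def diversification(w, sigma):
    """Diversification ratio: weighted average volatility over portfolio volatility."""
    return (w @ np.sqrt(np.diag(sigma))) / np.sqrt(w @ sigma @ w)


w_true = optimize(mu_true, sigma_true)             # the "god portfolio"
mu_m, sigma_m = mu_true / 12.0, sigma_true / 12.0  # monthly parameters for sampling

deviations = np.empty((N_SCENARIOS, n))
div_ratio = np.empty(N_SCENARIOS)
for s in range(N_SCENARIOS):
    sample = rng.multivariate_normal(mu_m, sigma_m, size=N_MONTHS)  # five years of months
    mu_hat = sample.mean(axis=0) * 12.0                             # estimated mean vector
    sigma_hat = np.cov(sample, rowvar=False) * 12.0                 # estimated covariance
    w_hat = optimize(mu_hat, sigma_hat)
    deviations[s] = w_hat - w_true
    div_ratio[s] = diversification(w_hat, sigma_true)

print(deviations.min(axis=0), deviations.max(axis=0))               # per-asset range of weight errors
print(np.median(div_ratio) / diversification(w_true, sigma_true))   # relative loss of diversification
```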

Now let’s see, in the following chart, how much the estimated portfolio weights differ from the true ones: for each asset class I show the interval containing the extremes of the deviations. The error on the “god portfolio” ranges merrily from -12% to just under 90%. The diversification index of these portfolios has a median value of 35% (remember that the “true” one is 96%), which means that the very idea of diversification is largely compromised.

 

[Chart 2 – wide range of weight errors]

 

Take, for example, asset 2 (low risk, with 2% volatility and an expected return of 0.6%): its weight in the “true” optimal portfolio is 12%. Look instead at how it oscillates across the robo-for-advisor’s optimizations: it is often 0%, takes virtually any admissible value, and sometimes even dominates the portfolio. No wonder diversification falls apart.
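
Continuing the sketch above, this is how one could inspect a single asset’s weight across the 10,000 scenarios (asset 2 of the text is index 1 in zero-based terms):

```python
# Distribution of asset 2's estimated weight across the simulated scenarios.
w_asset2 = deviations[:, 1] + w_true[1]               # back out the absolute weights
print(np.percentile(w_asset2, [0, 25, 50, 75, 100]))  # spread of the estimated weight
print((w_asset2 < 1e-4).mean())                       # share of scenarios where it is ~0%
```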

 

[Chart 3 – weight of asset i across scenarios]

 

I think the magnitude of the error associated with naïve Markowitz, and the reason for the nickname “error maximization model”, are now clear: the estimation error generates random portfolios that even a monkey could produce. This is not due to a lack of mathematical-statistical finesse. No. The results are random and unstable (for the mathematically inclined, to get an analytical sense of the instability just look at the Jacobian matrix containing the partial derivatives of the optimal weights in Markowitz’s closed-form solution, w*, with respect to the expected returns m, i.e. ∂w*/∂m). These numbers are often very far from the true solution and therefore practically worthless. Classic garbage in, garbage out.
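
For those who prefer to see this numerically rather than on paper, a small sketch of the unconstrained case (the constrained solution adds correction terms, but the mechanism is the same; the risk-aversion value is illustrative):

```python
# Closed-form unconstrained Markowitz solution: w* = (1 / lam) * inv(Sigma) @ m,
# so its Jacobian with respect to the expected returns m is simply (1 / lam) * inv(Sigma).
# When Sigma is ill-conditioned (as sample estimates of many correlated assets typically are),
# tiny errors in m are amplified into large swings in the weights.
import numpy as np

lam = 4.0                              # illustrative risk-aversion coefficient
sigma_inv = np.linalg.inv(sigma_true)  # sigma_true from the earlier sketch
jacobian = sigma_inv / lam             # d w* / d m
print(np.linalg.cond(sigma_true))      # condition number: large values mean unstable weights
```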

Moreover, consider that reality is much worse than this: the example contains only the sampling-estimation error, whereas in practice there is also a substantial model specification error, on top of the fact that market parameters change continuously.

I hope it is now obvious what an immense folly those beautiful efficient frontiers and expected returns specified to the second decimal place are.

Using naïve Markowitz like this – just as many financial advisors and private bankers are enthusiastically doing – will in the end lead to only one thing: a disaster that cannot easily be explained to the client.

And when disasters occur, whose fault is it? First of all the roboadvisor’s, or rather the portfolio-construction engine’s, along with the advisor who vouches for it and the parent company that set up the fairy tale. A nice piece of operational risk.

 

Solutions?

The good news: you can avoid wasting your budget on a sophisticated machine for producing financial trash and still give your advisors proper tools. Two meta-ingredients are needed:

  • a methodology, which cannot be a simple one-size-fits-all model, but rather an “investment recipe”: a combination of portfolio-construction methods and robust estimators, embedded in a rational, disciplined and financially well-founded investment process, with a clear narrative for the client;
  • central and competent control of the process, starting from the parent company. Without a solid method and a strong grip on portfolio construction, it is inevitable that some financial advisor or private banker, playing Warren Buffett or Ray Dalio, will sooner or later do some damage.

It is not difficult to do things right. All you need is process knowledge and some theoretical and practical know-how of statistical and financial modelling that goes beyond Markowitz and Black-Litterman. Unfortunately, many organizations seem not to have them.

 

Virtual B Fintech solutions

Virtual B has been working for years in the financial sector, with a close focus on data and data analysis. Our experience has led to numerous solutions that generate value and solve issues for financial and insurance intermediaries.

LifeCycle Portfolio Builder is the solution developed by Virtual B for banks and insurance companies: it identifies the financial products that best optimize customers’ financial well-being.

More on LifeCycle Portfolio Builder


Download our free white paper “Wealth Management and Financial Data Science: a short guide”