A comparison between the old theory of risk and rating and quantitative complexity analysis

Heads or Tails. Credit Wikimedia Commons

The concepts of risk and uncertainty were defined in economics by Frank Knight, who distinguished between measurable uncertainty, or risk proper, and unmeasurable uncertainty; the word uncertainty is reserved for what cannot be quantified. Mathematical risk is objective and is measured through some statistical variable; in the economic world, the probability of default is typically used. Cohen added the concept of subjective risk: a risk that is still measured, but whose probabilities are chosen subjectively. Objective risk should be obtained from physical parameters or from a set of historical data, while subjective risk is established without an objective basis for assigning probabilities.

For instance, if we roll a die and bet on a fixed value, the probability of failure can be obtained from the physical properties of the die. A die has six faces; if it was manufactured homogeneously, the probability of success is 1/6 and the probability of failure is 5/6. Sometimes we do not have such a simple physical description of the problem, but we can build a model from a set of historical data. If we roll the die a large number of times and record the results, we can estimate the probability of failure as the number of failures divided by the number of attempts. For a sufficiently large sample, the result will be very close to the theoretical value of 5/6.
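The empirical approach just described can be sketched as a quick simulation. This is only an illustration of the idea, not part of the original analysis; the function name, the bet value, and the seed are my own choices:

```python
import random

def estimate_failure_probability(trials=100_000, bet=3, seed=42):
    """Roll a fair six-sided die `trials` times and estimate the
    probability that the roll does NOT match the value we bet on."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials) if rng.randint(1, 6) != bet)
    return failures / trials

estimate = estimate_failure_probability()
print(round(estimate, 3))  # should land close to the theoretical 5/6 ≈ 0.833
```

With 100,000 rolls the sampling error is on the order of 0.001, so the empirical estimate and the theoretical 5/6 agree to about two decimal places.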

Uncertainty, by contrast, applies when no probability can be assigned, even if we know all the possible outcomes. If we know the die was not manufactured homogeneously and we have no historical data, we cannot assign probabilities to the possible results, although we know there are six of them.

When we analyze the risk of an investment we usually handle two kinds of variables: those that can be characterized and included in a model, and those that cannot. The former fall under the concept of risk and the latter under the concept of uncertainty. The risk is objective if we can properly define the probability of every possible result, and subjective if we cannot, and must assign those probabilities subjectively instead.

This kind of risk analysis rests on two assumptions: the risk is modelled from the past on the premise that the future will behave in the same way, and the uncertainty is considered negligible.

Analyzing the first assumption: we cannot assume that an investment will keep its past probability of providing a profit, because businesses and markets evolve over time. Risk is never truly objective; it is always subjective, because we are subjectively assuming that the future will resemble the past, and this cannot be guaranteed, especially in an unstable global scenario.

The second assumption fails when complexity increases. In that case, uncertainty can easily move from one point of the system to another, producing unexpected behavior that invalidates our model, because uncertainty is, by its very nature, unpredictable.

If we cannot rely on the first assumption, we cannot trust our model, and model-free techniques can offer an advantage over the classical way of analyzing risks; if we cannot rely on the second one, we need additional techniques that allow uncertainty to be included in the analysis.

It is in this move towards a global scenario, and in turbulent markets, that the concepts of complexity and fragility begin to show themselves as more appropriate ways to analyze risk than the classical techniques.

If we want to analyze the risk of a share on the stock exchange, the concept of systematic risk is related to the fluctuations of the market, while specific risk is linked to the inherent value of the share. Following Sharpe's market model, both components together give the total risk of the asset. Modern complexity analysis, by contrast, defines fragility as the complexity of the system multiplied by the uncertainty of the environment. I will try to show the differences.

Sharpe treats market fluctuation as a risk, that is, as something following an objective or subjective probability distribution (a model), and therefore it cannot include the uncertainty of the environment.

Techniques of quantitative complexity analysis consider complexity to be a function of the system's structure and its internal uncertainty (due to both characterized, or known, internal variables and uncharacterized ones). In order to carry out an analysis similar to the classical one, you should treat the market as part of the system. In other words, just as classical risk analysis uses a market index to analyze systematic risk, here you can include the market index as an exogenous variable of the system under analysis. The analysis is model-free, but it will tell you how the effect of market fluctuations is transmitted to the other internal variables of the business being analyzed.

An important point is that in quantitative complexity analysis uncertainty is, as in the classical theory, something uncharacterized, but, unlike in the classical theory, it is measurable. Uncertainty arises from unknown or unmodelled variables, yet it can be analyzed quantitatively through entropy. This is a huge step forward for analyzing strategies and investment decisions when uncertainty cannot be considered negligible.
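A minimal illustration of measuring uncertainty through entropy is Shannon entropy computed from the empirical frequencies of an observed variable. This is a generic sketch of the idea, not the specific entropy measure used by any particular complexity-analysis tool:

```python
import math
from collections import Counter

def empirical_entropy(observations):
    """Shannon entropy (in bits) of a variable's empirical distribution.
    Higher entropy means more uncertainty in the observed outcomes."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant variable has zero entropy, a fair coin has 1 bit, and a fair die has log2(6) ≈ 2.585 bits: the more spread out the outcomes, the larger the measured uncertainty.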

