In this article, I would like to draw attention to a method inspired by the analysis of stochastic models typical of quantum physics, and to utilise it to test a financial strategy. We start from the principal question asked by anyone involved in financial investments: Are the results obtained due to a correct interpretation of the market, or are they merely fortuitous? This is a plain question and it can be given an equally clear answer. The results obtained are due to a correct interpretation of the market if the probability of obtaining equal or better results randomly is very small (i.e. tends to zero as the number of times the strategy is used increases).
Description of the methodology and demonstration of its soundness
The logic underlying this method is very simple in essence: it consists in calculating the probability of obtaining the same results randomly. As we will see in the examples below, this technique is applied not only to results from a trial phase: it is also applied as a control method when the strategy is used on a real trading account.
In what follows, I present a short logical proof of the soundness of this method. The term ‘soundness’ here indicates the absence of any contradiction within a logical or mathematical argument, in the spirit of David Hilbert’s famous programme for proving the consistency of mathematics. Contradiction is, indeed, one of the main defects of methods of analysis based on assessing the equity line (the performance curve).
The short demonstration I’m going to outline is based on two fundamental axioms:
1) Whenever we understand any kind of deterministic market process, the probability of our financial operation being successful rises above 50% (a consequence of von Mises’ axiom of disorder from the early 1920s).
2) The probability of randomly obtaining a result that has been obtained through cognitive awareness of a deterministic market process tends to zero as the number of times the strategy is used increases.
The first axiom is derived from the famous “axiom of randomness” (or the ‘principle of the impossibility of a gambling system’) formulated by the mathematician Richard von Mises, whose original definition I quote: “the essential requirement for a sequence to be defined as random consists in the complete absence of any rules that may be successfully applied to improve predictions about the next number”.
As a consequence of the two axioms given above, any correct market analysis will always tend to raise the probability of our prediction being correct above the 50% mean, and this results in a consequent decrease in the probability of obtaining the same result randomly.
I will demonstrate this to you with a simple example. Suppose we are playing heads or tails with a rigged coin that gives us an above-50% probability of winning (let’s say it’s 60%).
What is the probability of still being at a net loss after 10 coin tosses? Approximately 16.6%. After 50 tosses? Approximately 5.7%. And after 100 tosses? Approximately 1.7%. As you can see, the probability tends to zero, and here the rigged coin represents a financial strategy that is implementing a correct market analysis.
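These figures can be checked with a short binomial computation. The sketch below, which uses only the Python standard library, models the rigged coin as independent tosses that each win with probability 0.6, and sums the binomial probabilities of ending with strictly more losses than wins:

```python
from math import comb

def prob_of_net_loss(n_tosses, p_win=0.6):
    """Probability of strictly more losses than wins after n_tosses
    independent tosses of a coin that wins with probability p_win."""
    return sum(
        comb(n_tosses, k) * p_win**k * (1 - p_win)**(n_tosses - k)
        # k = number of wins; a net loss means k < n_tosses / 2
        for k in range((n_tosses + 1) // 2)
    )

for n in (10, 50, 100):
    print(n, round(prob_of_net_loss(n), 3))
```

Running this reproduces the values quoted above: roughly 16.6% after 10 tosses, 5.7% after 50, and 1.7% after 100.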
By basing our method of assessment specifically on the calculation of this probability, we develop a method that is by definition free of contradictions. The absolute value of the probability turns out to be a sound estimate of the validity of our strategy.
The term ‘deterministic process’ which I used during the proof refers to the utilisation of a correct financial strategy, definable as the identification of a deterministic and non-random component that regulates the system we are studying (in our case, a financial market).
Methods based on studying the equity line may produce a positive outcome and at the same time have a 50% probability of obtaining the same amount of profit by chance. In this way, such methods lead to a contradiction, given that obtaining the same outcome randomly implies the absence of a cognitive process, which is just what is meant by assuming a “correct interpretation of the market”.
Figure 1 shows an equity line obtained with a purely random strategy. The algorithm is defined as follows: Each day you toss a coin to decide whether to open a buy position on the Nasdaq Index. If the position is opened, you toss another coin the next day to decide whether to close it or leave it open. As you can appreciate, this strategy functions in a completely blind and random way. Nevertheless, the equity line achieved is satisfactory: indeed, if we calculate the probability of obtaining an equivalent or better result randomly, we get a probability of approximately 50%. We therefore know that the result is devoid of significance, in spite of the equity line.
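The coin-toss algorithm behind Figure 1 is easy to reproduce. The sketch below is hypothetical: it uses synthetic daily returns rather than actual Nasdaq data, and a fixed seed for reproducibility, but it implements the same open/close coin-toss rule and accumulates the resulting equity line:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

n_days = 1000
# Synthetic stand-in for daily index returns (not real Nasdaq data)
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(n_days)]

equity = [0.0]
in_position = False
for r in daily_returns:
    if in_position:
        equity.append(equity[-1] + r)   # position open: collect the day's return
        if random.random() < 0.5:       # coin toss: close or leave open
            in_position = False
    else:
        equity.append(equity[-1])       # flat: equity unchanged today
        if random.random() < 0.5:       # coin toss: open a buy position
            in_position = True

print(f"final equity: {equity[-1]:.4f}")
```

Run over many seeds, roughly half of these random equity lines end in profit, which is exactly why a good-looking equity line by itself proves nothing.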
To conclude, it follows that the parameter to be linked to the validity of a financial strategy is not its performance but its statistical property of generating results that cannot easily be reproduced at random.
Techniques used to apply this verification method in operational practice
How does one calculate the probability of randomly generating an equivalent or better performance? There are two ways: the first (and more precise) is to estimate this probability using the Monte Carlo method. The accuracy of this method is linked to the number of times we carry out the simulation. Its strength is the ability to obtain very precise values: its drawback is due mainly to the long calculation times required to obtain the estimate of probability.
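A minimal Monte Carlo sketch of this estimate follows. The trade data are hypothetical, and the random benchmark chosen here — randomising the sign of each observed trade return, i.e. a 50/50 win/lose coin with the same trade magnitudes — is one plausible choice among several, not necessarily the one the author uses:

```python
import random

def monte_carlo_p_value(trade_returns, n_sims=100_000, seed=0):
    """Estimate the probability that a random strategy achieves a total
    profit equal to or better than the observed one, by randomising the
    sign of each trade (a 50/50 win/lose benchmark)."""
    rng = random.Random(seed)
    observed = sum(trade_returns)
    magnitudes = [abs(r) for r in trade_returns]
    hits = 0
    for _ in range(n_sims):
        total = sum(m if rng.random() < 0.5 else -m for m in magnitudes)
        if total >= observed:
            hits += 1
    return hits / n_sims

# Hypothetical sequence of trade returns (mostly small wins)
trades = [0.8, -0.5, 1.2, 0.3, -0.4, 0.9, 0.6, -0.2, 0.7, 0.4]
print(monte_carlo_p_value(trades))
```

The precision of the estimate scales with `n_sims` (the standard error shrinks as one over its square root), which is exactly the accuracy/computation trade-off described above.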
The second method (which I have identified) uses exact formulae taken from statistics, each of which applies to a particular class of random variables. Unfortunately, financial operations do not fall directly into any of these classes. The problem is solved by applying a transform to the financial operations, which renders them suitable for the chosen analytical formula.
This transform adds an error to the calculation of our probability, but it has the advantage of being calculable in one single equation and therefore without requiring massive computational resources, as do the Monte Carlo methods.
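The author's exact transform is not specified here, but the closed-form idea can be illustrated with a standard example of my own choosing: approximating the null distribution of the total profit by a Gaussian (via the central limit theorem), so that the probability comes out of a single formula instead of a long simulation. This is an illustrative assumption, not the article's actual transform:

```python
from math import erf, sqrt

def analytic_p_value(trade_returns):
    """Gaussian (CLT) approximation of the probability that a zero-mean
    random strategy with the same trade magnitudes matches or beats the
    observed total profit."""
    observed = sum(trade_returns)
    # Under a sign-randomisation null, each trade contributes +-|r| with
    # mean 0 and variance r^2, so the total has variance sum(r^2).
    variance = sum(r * r for r in trade_returns)
    z = observed / sqrt(variance)
    return 0.5 * (1 - erf(z / sqrt(2)))  # one-sided Gaussian tail

# Same hypothetical trade returns as in the Monte Carlo example
trades = [0.8, -0.5, 1.2, 0.3, -0.4, 0.9, 0.6, -0.2, 0.7, 0.4]
print(analytic_p_value(trades))
```

On these ten hypothetical trades the single-formula estimate comes out close to the Monte Carlo one, while the approximation error it introduces shrinks as the number of trades grows.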
Use of the method as a control parameter of a strategy
This method is utilised not just during the test phase, but is also extremely useful as a way of monitoring the trading system. Each time we carry out an operation, we update the probability value for obtaining that result randomly. We do not calculate this probability across the whole sample of operations conducted, but extrapolate it from the N most recent operations. The optimal value for N depends on the strategy, and in particular on the frequency with which operations are conducted over time. Once N has been set, we calculate our probability and compare it with a probability we have established that represents the level of risk we take to be acceptable (this definition of risk will be explained in a separate section). If the probability value exceeds the parameter set by us, the trading system locks itself and continues trading in virtual mode only. When the probability falls below the threshold parameter we have set, the trading system resumes actual trading. In this way, trades are effected only when the market is understood, and the trading system is blocked when we are operating in a regime considered to be random.
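That control loop can be sketched as follows. The window size, threshold, and function names are hypothetical, and a simple sign-test over the last N trade outcomes stands in for whichever probability estimator is actually used:

```python
from collections import deque
from math import comb

def sign_test_p(trades):
    """Probability that a fair coin produces at least as many winning
    trades as observed (a crude stand-in for the full estimator)."""
    n = len(trades)
    wins = sum(1 for t in trades if t > 0)
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n

N = 20              # window size: depends on trade frequency (assumption)
THRESHOLD = 0.05    # acceptable-risk probability level (assumption)

window = deque(maxlen=N)  # rolling window of the N most recent trades
live = True               # True = real trading, False = virtual mode

def on_trade_closed(pnl):
    """Update the window after each operation and toggle live/virtual mode."""
    global live
    window.append(pnl)
    if len(window) == N:
        p = sign_test_p(list(window))
        # Lock to virtual mode while the result looks reproducible by chance
        live = p <= THRESHOLD
    return live
```

A consistent run of winning trades keeps `live` true; once recent results become indistinguishable from coin tosses, the system drops into virtual mode until the probability falls back below the threshold.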
This method is much more efficient than the usual performance-based methodologies, which risk incurring unnecessary losses before reacting. With this method the strategy may even be blocked while trading at a profit, given that a random regime still has a 50% probability of success.
Having said this, obviously a trading system will have its internal performance controls, but their purpose is purely to monitor for possible system crashes or any programming bugs.
What we have described thus far can be fine-tuned. A characteristic of all good quantitative trading systems is to be capable of operating even at high frequencies while leaving unchanged the logical schema on which the trading system is based. This enables us to run a trading system solely as a method of monitoring (hence in virtual mode), at a very high frequency of operations, and to obtain thereby a much more numerous statistical sample in less time, increasing the reactivity of our method of control.
Use of the method in the process of developing a financial strategy
This approach has another great merit, which is to help us direct our research in the right direction. Let us assume we develop two strategies:
1) The first yields a profit of 10% per annum on a historical series, but with a very high probability of obtaining the same results randomly;
2) The second yields a low profit of 1%, but with an extremely low probability of obtaining the same results randomly.
It is perfectly obvious that, if we follow the theory we have expounded, we will discard the first strategy, as there is a very high probability that the 10% profit has been obtained by mere chance. The second strategy yields low profits but the low probability value obtained means we are on the right track for understanding a deterministic and non-random market process (which, if studied more closely, could lead to more profitable financial strategies).
If we had not applied this method, we might have thought that the first strategy (with the higher gains) was the right one, at the risk of losing money over the medium to long term. We would also have discarded the second strategy, missing an opportunity to study and understand something important we had sensed about the market.
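As a toy illustration of that selection rule, with entirely hypothetical numbers and a simple sign-test standing in for the probability calculation: a strategy with large but barely-better-than-coin-toss trades loses out to one with small but highly consistent wins.

```python
from math import comb

def sign_test_p(trades):
    """P(a fair coin yields at least as many winning trades as observed)."""
    n = len(trades)
    wins = sum(1 for t in trades if t > 0)
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n

# Strategy 1: large total profit, but wins barely more often than a coin
strategy_1 = [5.0, -4.0, 6.0, -5.5, 4.5, -3.0, 5.0, -4.5, 6.5, 3.0]
# Strategy 2: small total profit, but wins very consistently
strategy_2 = [0.1] * 18 + [-0.1] * 2

p1, p2 = sign_test_p(strategy_1), sign_test_p(strategy_2)
chosen = "strategy_2" if p2 < p1 else "strategy_1"
print(p1, p2, chosen)
```

Here strategy 1, despite its much larger profit, has a probability of roughly 38% of being matched by coin tosses, while strategy 2's probability is a small fraction of a percent, so the rule selects strategy 2.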
A new definition of risk
There are many definitions of risk in the financial field: risk is in any case seen as the probability (uncertainty) of incurring a given loss. If we recall the statistical example given above of tossing the rigged coin, we can see how risk is intimately linked to our understanding of the market and as such how it tends to zero the more times we repeat the statistical experiment described above.
The value of this probability can never be zero: think, for example, of the actions we perform in our daily lives, actions that all have a certain level of risk – understood as the probability of bringing about a negative event. We take these actions into consideration nevertheless, because we know that the risk associated with them is so low as to be statistically acceptable – for example the risks associated with travelling by plane.
It therefore becomes extremely important to implement methods that evaluate the validity of our strategy in a sound way, so that we can estimate risk and plan the investment correctly.
Conclusion
In this article, I wanted to draw your attention to a different way of viewing the performance of a trading system, a way that is not bound to its absolute value, but linked to one of its statistical properties. As I demonstrated in the first section, this involves well-defined behaviours when we operate with cognitive awareness on the market. The approach is a fundamental one because it recognises the high likelihood of being successful on financial markets, even over long periods, in a completely random way. Let’s not forget that financial markets have only two possible directions. This implies that, even fortuitously, there is a 50% chance of making the right choice. Furthermore, such trends can continue for years. Therefore, it is crucial to look away from the profit line and to appreciate in a rigorous and scientific way whether our strategies are the product of chance or of a true understanding of the market. One thus trades only if one understands the market, thereby actually reducing the element of fortuitousness. Investing in this way has nothing to do with chance but becomes cognitively aware and more secure.
Gambling is defined as follows:
“The gaming activity in which profit is sought after and in which winning or losing occurs predominately by chance, skill being of negligible importance”
From this definition, it follows that if the element of fortuitousness is not factored into the investment decision-making process, it is never possible to prove that money invested is free of exposure to chance, and therefore to uncontrolled risk. The calculation of probability illustrated above therefore becomes an essential and irreplaceable requirement for bringing investment out of the area of gambling, making it more cognitively aware, and therefore less risky.
Author
Andrea Berdondini
He graduated in Physics and worked for years as a researcher, specialising in the development of simulators based on the Monte Carlo method, on which he has published numerous scientific articles. He is currently engaged in quantitative trading as a consultant and professional trader. andrea.berdondini@libero.it
Article provided by Trader’s