Understanding the FVE Algorithm (Part 3)
Hello FVEr Invest Subscribers!
Thank you to our new and continuing subscribers for your support of our platform. Our goal is to bring you weekly market insights each Monday (excluding holiday weeks), using our proprietary FVE algorithm to help you gain a clearer understanding of market dynamics.
Recap
We begin with a review of last week's content. As you may recall, we are in the process of outlining the algorithmic concepts and mechanisms that underlie the Fair Value Estimator (FVE) and the FVEr Trading Strategy. Last week (Part 2 of the series) we covered:
Exponential Regression Workflow: The main steps in this process are (1) converting the historical price of an ETF (over the desired time interval) to logarithmic scale, (2) using a statistical tool called the correlation coefficient to detect linearity of the resulting data points, (3) assuming a linear relationship exists, employing least squares linear regression to calculate the best fit line through the linearized points, and (4) undoing the logarithmic scale by applying an exponential function to regain the original scale. This workflow produces an exponential best fit, or regression, through the price. Notably, the historical price of an ETF is an example of a time series, a sequence of data points indexed by time. Exponential regression is one of many tools that can be used for modeling time series. A minimal code sketch of the workflow appears below.
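For readers who like to see things concretely, here is a minimal Python sketch of these four steps, assuming `prices` is a NumPy array of weekly closing prices; the function name and the 0.95 threshold are illustrative only, not our production values:

```python
import numpy as np

def exponential_regression(prices, min_corr=0.95):
    """Exponential best fit through a price series (sketch of steps 1-4)."""
    t = np.arange(len(prices))                  # weekly time index
    log_p = np.log(prices)                      # (1) convert to logarithmic scale
    r = np.corrcoef(t, log_p)[0, 1]             # (2) correlation coefficient
    if abs(r) < min_corr:                       # weak linearity: no reliable fit
        return None, r
    slope, intercept = np.polyfit(t, log_p, 1)  # (3) least squares best fit line
    fit = np.exp(intercept + slope * t)         # (4) exponentiate to original scale
    return fit, r
```

For a steadily trending series, `r` will be close to ±1; the filtering strategy described below keys off this value.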
Understanding the FVE Algorithm (Part 3)
With this review complete, we can now explain how the FVE algorithm employs an “ensemble method” to analyze thousands of regression curves over different time sub-intervals, layering a filtering strategy on top to discard outlier data.
First, we want to motivate the construction of our model by highlighting some of the shortcomings of time series exponential regression, and how our model attempts to mitigate these issues.
The Logarithmic Scale: When the price data points of the ETF are linearized via the logarithm function, the resulting time series is in logarithmic scale. A best fit line is then calculated to create the best linear trajectory through these points. If the data points have significant dispersion off the best fit line (a poor linear correlation), that dispersion is amplified when returning to the original scale. This is one of the challenges of modeling a non-linear phenomenon like a stock market index.
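To put a number on the amplification: a residual of 0.1 in log scale already corresponds to roughly a 10.5% multiplicative error in price. A tiny Python illustration (the residual values are arbitrary):

```python
import numpy as np

# Residuals in log scale become multiplicative errors after exponentiation.
log_residuals = np.array([0.05, 0.10, 0.20])  # dispersion off the best fit line
pct_errors = np.expm1(log_residuals) * 100    # e^x - 1, expressed as a percent
print(pct_errors.round(1))                    # [ 5.1  10.5  22.1]
```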
Starting Point Bias: In Part 1 of the series, we referenced two websites which applied a single exponential regression to the S&P 500 Index over a long time interval. One issue with this approach is starting point bias. Suppose the chosen time interval began at the peak of the dot-com bubble in early 2000 and ran to the current day. This could create a materially different exponential trajectory compared to one that started a year or two later, after the market bubble had burst. A reasonable way to mitigate this issue is to blend multiple regression models with different starting points, which is what we will do.
Future Data: For accurate historical fair value modeling, it's crucial to minimize the use of future data. The two websites discussed above highlight this pitfall: the fair value at a past point is determined by a single exponential curve that incorporates data unavailable at that time.
Our FVE algorithm attempts to minimize these shortcomings. The workflow goes as follows (a code sketch of the per-date computation follows the list):
1. Retrieve 20 years of historical weekly price data for the ETF under consideration, using end-of-week closing prices.
2. Choose a particular end-of-week date in the past (or present). For example, let's choose Friday, March 9, 2018.
3. Consider all the weekly closing prices from the start of the 20-year window (Friday, July 8, 2005) until the date of interest (March 9, 2018).
4. Index the following time sub-intervals between July 8, 2005 and March 9, 2018:
Sub-interval 1: July 8, 2005 – March 9, 2018
Sub-interval 2: July 15, 2005 – March 9, 2018
Sub-interval 3: July 22, 2005 – March 9, 2018
…
Sub-interval 661: March 2, 2018 – March 9, 2018
5. For each sub-interval, record the correlation coefficient of the associated linear regression, and calculate an exponential regression from the starting date of that sub-interval to the ending date (March 9, 2018), which is the same for every sub-interval.
6. Filter out exponential regressions whose correlation coefficient falls below a fixed threshold.
7. If a high enough percentage of regressions remains after the filtering process, there is sufficient data to calculate the FVE for March 9, 2018: the average value of the remaining regressions on that date. Otherwise the model is not “robust,” and no FVE is output.
8. Repeat steps 2–7 for every week between Friday, July 8, 2005 and Friday, July 4, 2025. If there is a date where no FVE is output, all previous FVEs before that date are deleted. The period during which no FVEs are calculated is called the data build-up phase; its length is related to the volatility of the underlying ETF.
9. Repeat steps 2–8 for different correlation coefficient thresholds, and choose the optimal threshold.
10. Only securities where FVEs could be calculated consecutively for at least 5 years are shown.
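For illustration, here is a hedged Python sketch of the per-date computation (steps 4–7), assuming `log_prices` holds the log weekly closes from the window start through the date of interest; the function name, correlation threshold, and robustness fraction are placeholders, not our production values:

```python
import numpy as np

def fve_for_date(log_prices, min_corr=0.95, min_keep_frac=0.5):
    """Sketch of the per-date FVE computation (steps 4-7 above)."""
    n = len(log_prices)
    endpoint_values = []
    for start in range(n - 1):                  # one sub-interval per start week
        t = np.arange(start, n)
        y = log_prices[start:]
        r = np.corrcoef(t, y)[0, 1]             # linearity of this sub-interval
        if abs(r) < min_corr:                   # step 6: filter out poor fits
            continue
        slope, intercept = np.polyfit(t, y, 1)  # step 5: regression per sub-interval
        endpoint_values.append(np.exp(intercept + slope * (n - 1)))
    if len(endpoint_values) / (n - 1) < min_keep_frac:
        return None                             # step 7: model not "robust" here
    return float(np.mean(endpoint_values))      # FVE = average endpoint value
```

Looping this function over every weekly date, and then repeating the whole loop for a grid of thresholds (steps 8 and 9), yields the full FVE history.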
Some important observations can be noted:
For each date, the FVE uses a filtering process to throw out exponential trajectories whose linear regressions in log scale have poor correlation. This reduces the logarithmic scale dispersion issues discussed above.
The FVE uses an ensemble method to blend hundreds of regression curves with different starting points, thereby reducing the effect of starting point bias.
The algorithm puts a higher weight on near-term historical prices, due to the overlapping nature of the sub-intervals, which we believe is the correct approach.
With the exception of the correlation coefficient optimization step, the algorithm does not use future data to predict fair value in the past.
Next week, we’ll continue our discussion of the FVE algorithm and talk about three features that we look at to evaluate our model. In future weeks, we will talk about the mechanisms of the FVEr Trading Strategy.
Dollar Cost Averaging the FVEr Trading Strategy
In this section, we will break down a reasonable projection of how your money could grow if you allocate $50 a week to the FVEr Trading Strategy over a 20-year period. We think $50 per week is a reasonable amount for many Americans to invest.
One point we want to emphasize is that the FVEr Trading Strategy involves the use of leveraged ETFs which are higher risk investment products. We do not think it is prudent to allocate a large portion of your savings to our strategy, because while our model is highly sophisticated, it is not infallible; it can and will occasionally make incorrect allocations into these leveraged products, potentially leading to adverse outcomes, at least temporarily.
With that said, we recommend a weekly allocation strategy that equally weights the six broad market ETFs (SPY, IJH, IJR, IWM, QQQ, and DIA), with less weight on the sector ETFs (XLV, XLU, and XLP). Sectors are inherently less diverse and more unpredictable than broad market ETFs, leading to lower confidence in the strategy's reliability at the sector level.
The reason we think it's important to equal-weight the broad market ETFs is that in some weeks the strategy may get it right in one ETF but wrong in another. A perfect example occurred during the tariff sell-off in early April of this year. The strategy pulled capital from the leveraged SPY ETF just as the market began to rebound (the wrong call in hindsight), but it maintained exposure to the leveraged IJH ETF, which turned out to be correct.
Based on our backtesting, we think it is reasonable to assume that capital will appreciate sevenfold every ten years using our more aggressive 3x FVEr strategy, assuming equal-weight exposure across the broad market ETFs. This is about half the performance of our backtesting, because we like to err on the side of being conservative. Sevenfold appreciation per decade corresponds to a compound annual growth rate of about 21.5%, which we round to 0.38% per week for the calculation below. Twenty years corresponds to 1,040 weeks of investing $50:
The first $50 will compound for 1,040 weeks at 0.38%
The second $50 will compound for 1,039 weeks at 0.38%
…
The last $50 will compound for 1 week at 0.38%
The total value after 20 years is the sum of a finite geometric series, $50 × (1.0038 + 1.0038^2 + … + 1.0038^1040), which sums to $668,993 via the finite geometric series summation formula. That is the projected net worth in twenty years. Importantly, the total amount invested during that time would be only $52,000.
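As a sanity check, the projection can be reproduced in a few lines of Python; the exact output depends on how the weekly rate is rounded, so treat the figure as approximate:

```python
# Sanity check of the dollar-cost-averaging projection above.
exact_weekly = 7 ** (1 / 520) - 1  # sevenfold per 520 weeks ~ 0.375%/week
r = 1.0038                         # rounded weekly growth factor (0.38%)
n = 1040                           # 20 years of weekly contributions
c = 50                             # dollars invested each week

# Closed form of the finite geometric series c*(r + r^2 + ... + r^n).
total = c * r * (r**n - 1) / (r - 1)
print(f"Exact weekly rate: {exact_weekly:.4%}")  # ~0.3749%
print(f"Projected value:   ${total:,.0f}")       # ~$669,000
print(f"Total invested:    ${c * n:,}")          # $52,000
```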
FVEr Weekly Market Update: July 7, 2025
Current Allocation Status: As of our model run on Friday, here are the major updates:
Due to a small-cap rally last week, IJR (iShares Core S&P Small-Cap ETF) moved out of leveraged status. Every broad market ETF is now in neutral, unleveraged status. At 8.53% overvalued, the SPDR S&P 500 ETF (SPY) is looking expensive and is nearing the 1-star level, which would trigger inverse leverage.
XLV (Health Care Select Sector SPDR Fund) still has leveraged status.
As you may recall, the Consumer Staples Select Sector SPDR Fund (XLP) moved into leveraged status last week when the “buy the dip” signal was triggered. This turned out to be appropriate, as the sector rallied a couple of percent, pushing it back into unleveraged status.
Market Valuation: The US stock market is starting to look expensive at the large cap level, especially growth equities.
See you next week. In the meantime, please don't hesitate to reach out if you have any questions.
The FVEr Team
Unlock Deeper Insights: Schedule a Learning Session
As a valued member, we encourage you to take advantage of a personalized 30-minute learning session with one of our co-founders. This is your opportunity to get tailored guidance on how to interpret our data and effectively implement our strategies in your own investment approach.
To schedule your session, simply email us at info@fverinvest.com with the subject line: "Learning Session". (Please note: We do not provide specific investment advice.)