Weekly Algorithm Review: 08/05/2023 to 08/11/2023

Performance Rankings

  1. Long Term Portfolio: +0.75%

  2. Variable Sector Neutral: +0.47%

  3. Market Neutral: +0.47%

  4. Variable Market Neutral: +0.30%

  5. Base Algorithm: +0.18%

  6. Sector Neutral: +0.13%

  7. Overall Market: -0.27%

Our portfolio did fantastically this week, outperforming the market by 102 bps. Regretfully, I can’t say the same for the algorithm. With an under-performance of 57 bps relative to the long term portfolio, this marks the algorithm’s worst week this quarter. For this reason, as of Wednesday night, we have put a hold on the system. We will continue publishing recommendations while the hold is active, but at this time we don’t believe the algorithm will out-perform the long term portfolio. We will notify users when we resume support of the algorithm.
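For anyone who wants to check the math, both figures fall directly out of the rankings above. A quick sketch (the 57 bps figure compares the Base Algorithm to the Long Term Portfolio):

```python
# Basis-point spreads implied by this week's rankings.
weekly_returns_pct = {
    "Long Term Portfolio": 0.75,
    "Base Algorithm": 0.18,
    "Overall Market": -0.27,
}

def spread_bps(a_pct: float, b_pct: float) -> float:
    """Spread between two percentage returns, in basis points (1% = 100 bps)."""
    return (a_pct - b_pct) * 100

portfolio_vs_market = spread_bps(weekly_returns_pct["Long Term Portfolio"],
                                 weekly_returns_pct["Overall Market"])
algorithm_vs_portfolio = spread_bps(weekly_returns_pct["Base Algorithm"],
                                    weekly_returns_pct["Long Term Portfolio"])

print(f"Portfolio vs. market:    {portfolio_vs_market:+.0f} bps")    # +102 bps
print(f"Algorithm vs. portfolio: {algorithm_vs_portfolio:+.0f} bps")  # -57 bps
```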

To Trade Or Not To Trade - Evaluation Of The Algorithm Hold

For the first time this year, I feel that our algorithm is failing to meet our standards. This is only the second time since March that we’ve had consecutive weeks of under-performance by the algorithm relative to the portfolio. Furthermore, this week’s under-performance of 57 bps is far from trivial. I’d like to run through a few possible causes and our thoughts on each:

  1. Unfavorable Market Conditions. At this time, we think this is a primary culprit, but not the only one. From roughly the middle of the previous week to the middle of this one, the market was in a rather confused state, with minimal direction and minimal signs of taking one. This lines up well with the algorithm’s worst stretch of performance. Historically, periods like these have always been difficult for our systems - so while it isn’t unusual that this would be a problem, it’s unusual that it’s this much of a problem.

    Having recently done a round of development and updates, we currently have several versions of the algorithm that we consider viable. Most of them have experienced the same drop in performance in recent weeks, but interestingly, a few earlier versions have mitigated it well in recent backtests. This brings me to the second probable cause - one that has begun to seem more likely with recent testing.

  2. Unsuitable training and testing data. We tried something new during this round of algorithm development: updating the algorithm at the same time we updated the portfolio. It seems this has worked against us. Our new portfolio was created on or around July 1st, and the updated algorithm spent 2 weeks in development before being rolled out in mid-July. This created a problem for our training data: survivorship bias. All of the data we used to develop the updates to the algorithm was based on a portfolio chosen as of July 1st, but all of the data available at the time came from before July 1st.

    The problems with this were noticeable in testing. Our algorithms under-performed far beyond expectation, but that itself was to be expected: after all, they were competing with a portfolio made by someone who, for the time-frame of the backtests, could see into the future. For this reason, we restricted our backtesting to April 1 - July 1 of 2023. Over this relatively recent timeframe, the effects were not as extreme. However, survivorship bias was still present - not to mention the smaller amount of data available for backtests.

    In conclusion: I suspect our updated version of the algorithm is flawed. I tried a new system for designing and testing updates, meant to better fit algorithms to current market conditions and to portfolios that are updated more often. However, it seems I underestimated the impact that a smaller data set and survivorship bias would have. Both have, in all likelihood, contributed to the subpar performance of our algorithm in recent weeks; a rough sketch of the data-window problem follows this list.
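To make the data-window problem concrete, here is a rough illustration - the function name and the check are illustrative only, not part of our actual tooling:

```python
from datetime import date

def window_favors_portfolio(backtest_start: date, portfolio_selected: date) -> bool:
    """If any part of the backtest window predates the portfolio selection
    date, whoever picked the portfolio could already see how that stretch
    played out, while the algorithm is judged on it blind."""
    return backtest_start < portfolio_selected

# The restricted window described above, versus a July 1st selection date:
print(window_favors_portfolio(backtest_start=date(2023, 4, 1),
                              portfolio_selected=date(2023, 7, 1)))  # True
```

Any window that starts before the selection date gives the portfolio an information edge that the algorithm never had.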

What Comes Next?

I can’t give definitive next steps yet, as testing is ongoing. The most likely scenario is that we revert the algorithm to an earlier version from this development cycle. It’s unlikely that we go back to the original algorithm - more likely a version with fewer changes implemented than our current one. I also don’t want to say that the current version of the algorithm will be dismissed permanently. It’s been 6 weeks since our new portfolio was selected, meaning 6 weeks since the end of the data used to design the updated algorithm. That’s not enough data to definitively choose one system over another, but it is enough to see some trends.

I would say the odds are >90% that we switch to a different algorithm before lifting the hold. While I can’t give a definitive date on that, I hope to have a stronger idea of our next move by next weekend.

As for future development, there are 2 main changes I’ll be making to the algorithm update process:

  1. New algorithms will be developed on larger datasets (and backtested on larger testing datasets).

  2. Data used in developing algorithm updates will be exclusively from after the corresponding portfolio was selected.

These should both help to prevent issues like this one, and ensure that future updates go out much more smoothly than this one has. A rough sketch of how the second change might be enforced is included below.
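This sketch is a placeholder rather than a finalized spec - the helper name, the pandas DataFrame assumption, and the minimum-row threshold are all illustrative:

```python
from datetime import date

import pandas as pd

MIN_DEVELOPMENT_ROWS = 90  # placeholder threshold for change #1 (larger datasets)

def development_window(prices: pd.DataFrame, portfolio_selected: date) -> pd.DataFrame:
    """Keep only rows dated strictly after the portfolio selection date, and
    refuse to proceed if too little data remains. Assumes a DatetimeIndex."""
    data = prices.loc[prices.index > pd.Timestamp(portfolio_selected)]
    if len(data) < MIN_DEVELOPMENT_ROWS:
        raise ValueError(
            f"Only {len(data)} rows after {portfolio_selected}; "
            f"need at least {MIN_DEVELOPMENT_ROWS} before developing an update."
        )
    return data
```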

Misc. Data For The Week
