We decided to launch our first at-scale strategy on Greyhound Racing after manually experimenting with it for a few weeks. What caught our attention was how money flows into these markets one race at a time, and we figured that using them to test strategies at scale could yield results and data quickly.
The first strategy was as follows:
Entry conditions:
- Minimum £1,000 matched in the market
- Enter at least 5 minutes before the scheduled off
Bet:
- Lay the favorite
- Back the favorite to lock in 2% profit on the lay bet
- £5 exposure
If the back bet to lock in the 2% doesn't get matched, we let the lay ride to settlement. We saw that in most cases the favorite does not win, and when the lay wins we receive a 40%-50% profit on the exposure on average. We use this as our "stop-loss" methodology.
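To make the numbers concrete, here is a minimal sketch of the payoff math, assuming the 2% target is measured against the £5 liability (the rules above leave that open) and using 2.8 purely as an example price:

```python
# Minimal sketch of the lay-then-back hedge described above.
# Assumption (not stated explicitly in the post): the 2% profit target is
# measured against the £5 liability; odds of 2.8 are just an example.

def lay_stake_for_liability(lay_odds: float, liability: float) -> float:
    """Lay stake whose liability (stake * (odds - 1)) equals the given amount."""
    return liability / (lay_odds - 1)

def hedge_back_bet(lay_odds: float, lay_stake: float, target_profit: float):
    """Back odds/stake that lock in target_profit whether the favorite wins or loses.

    Outcomes with only the lay matched:
      favorite loses: +lay_stake
      favorite wins:  -lay_stake * (lay_odds - 1)
    Equal profit on both outcomes requires back_stake = lay_stake * lay_odds / back_odds,
    giving a locked-in profit of lay_stake * (back_odds - lay_odds) / back_odds.
    Solving for back_odds at the desired profit gives the formula below.
    """
    back_odds = lay_odds * lay_stake / (lay_stake - target_profit)
    back_stake = lay_stake * lay_odds / back_odds
    return back_odds, back_stake

liability = 5.00                    # £5 exposure
lay_odds = 2.8                      # example price on the favorite
lay_stake = lay_stake_for_liability(lay_odds, liability)   # ~£2.78
target = 0.02 * liability           # 2% of the liability, ~£0.10

back_odds, back_stake = hedge_back_bet(lay_odds, lay_stake, target)
print(f"lay £{lay_stake:.2f} @ {lay_odds} -> back £{back_stake:.2f} @ {back_odds:.2f} "
      f"locks in £{target:.2f} either way")

# If the back bet never matches, the lay rides to settlement: a losing favorite
# returns lay_stake, i.e. 1 / (lay_odds - 1) of the liability, which works out
# to 40%-50% of the exposure for lay odds around 3.0-3.5.
print(f"unhedged win returns £{lay_stake:.2f} ({lay_stake / liability:.0%} of exposure)")
```

Note that the locked-in profit is only positive when the back odds are higher than the lay odds, so the hedge only gets matched when the favorite's price drifts out after we lay; that is why it sometimes never matches and the lay rides to settlement.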
This ran for 8 days; it generated profit on many races but resulted in a net loss overall, as shown in the screenshot below:
We decided to keep it running as our "data gathering" strategy and to treat its losses as our data-acquisition costs.
Then we aimed to make it more accurate by identifying the right minimum odds for entry. For that, we used a tool we built for deep analysis of a strategy. As you can see in the screenshot, the Odds Potential section shows that odds of 2.8 come with a healthy number of bets and a satisfactory total/net profit.
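If you want to reproduce that kind of breakdown without our tool, the core of it is just replaying the settled-bet log against candidate minimum-odds thresholds. A rough sketch with toy data (the column names are hypothetical, not our tool's actual schema):

```python
import pandas as pd

# Toy settled-bet log; entry_odds/profit are hypothetical column names.
bets = pd.DataFrame({
    "entry_odds": [2.1, 2.4, 2.8, 2.9, 3.1, 3.4, 3.6, 4.0],
    "profit":     [-5.0, 0.1, 0.1, 2.6, -5.0, 2.1, 2.0, 1.9],
})

# For each candidate minimum-odds threshold: how many bets survive the filter,
# and what would the total profit have been?
rows = []
for t in [2.0, 2.2, 2.4, 2.6, 2.8, 3.0]:
    kept = bets[bets["entry_odds"] >= t]
    rows.append({"min_odds": t, "bets": len(kept), "total_profit": kept["profit"].sum()})

print(pd.DataFrame(rows))
```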
We ran the same strategy for a few days with minimum odds of 2.8 as an additional entry criterion. At £5 exposure, it appeared to work quite nicely, as the screenshot below shows.
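For completeness, the combined entry filter after this change is conceptually the following (the field names are illustrative, not our actual bot code):

```python
from dataclasses import dataclass

# Illustrative snapshot of a greyhound market; field names are hypothetical.
@dataclass
class MarketSnapshot:
    total_matched: float          # £ matched in the market so far
    seconds_to_off: float         # time until the scheduled start
    favorite_lay_odds: float      # best available lay price on the favorite

def should_enter(m: MarketSnapshot,
                 min_matched: float = 1_000.0,
                 min_seconds_to_off: float = 5 * 60,
                 min_odds: float = 2.8) -> bool:
    """Entry filter: enough liquidity, enough time before the off, odds above the floor."""
    return (m.total_matched >= min_matched
            and m.seconds_to_off >= min_seconds_to_off
            and m.favorite_lay_odds >= min_odds)

# Example: £1,400 matched, 6 minutes to the off, favorite at 3.0 -> enter.
snap = MarketSnapshot(total_matched=1_400.0, seconds_to_off=360, favorite_lay_odds=3.0)
print(should_enter(snap))   # True
```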
Yesterday, we increased the exposure to £10, and we are waiting to see how it goes. I can't attach more than 3 images in a post, so I'll post the current status in the next reply.
Would love feedback, suggestions for improving our strategy, and any other thoughts.