Worth a try. One thing that MIGHT be quite interesting to do would be to ratio the back/lay odds out to a book %, and then explore the difference between the back and lay as a book %. You could then look to capture the point(s) where the divergence on the spread starts to tighten - and, more interestingly, explore how the tightening on one runner affects the others by proxy. You might see a cascade effect that you can work with.
Archery1969 wrote: ↑Mon Feb 12, 2024 12:26 pm
Hi,
Been trialing something new to use when Greyhound markets are first formed. But no reason why it couldn't be used on all markets. It looks for value based on the odds being offered and how they change as money enters the markets.
I don't care about form, previous history, just how the market changes. Bets are placed at random times from x seconds to y seconds and amounts offered are also random from x amount to y amount. All designed not to spook the markets.
It will be interesting to see how the professional market makers react, although I guess they probably won't care. But I will be jumping in front of them if value is perceived to be there.
Below is just a screenshot of the workings; the BF back/lay and amounts are just random to make sure my calculations were working. It's now being coded in Python accessing the BF API by some coders, as I want maximum speed. Eventually it will run on a VPS.
Let's see where this one goes.
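The book-% idea above can be sketched roughly like this (purely my own illustration - the toy odds and helper names are made up, not jimibt's actual workings): convert each runner's back and lay odds to an implied book %, then watch the back/lay divergence per runner, and the two book totals, as money arrives.

```python
def implied_pct(odds):
    """Decimal odds -> implied probability as a percentage of the book."""
    return 100.0 / odds

def spread_as_book_pct(back_odds, lay_odds):
    """Back/lay divergence for one runner, expressed in book-% terms."""
    return implied_pct(back_odds) - implied_pct(lay_odds)

# Toy 3-runner market: (best back, best lay) odds per runner.
market = [(1.8, 1.9), (3.5, 3.8), (5.0, 5.6)]

back_book = sum(implied_pct(b) for b, _ in market)  # normally over 100% (over-round)
lay_book = sum(implied_pct(l) for _, l in market)   # normally under 100%

for i, (b, l) in enumerate(market, 1):
    print(f"runner {i}: spread = {spread_as_book_pct(b, l):.2f}% of book")
print(f"back book {back_book:.2f}%, lay book {lay_book:.2f}%")
```

The point to watch is the per-runner spread shrinking toward zero while the other runners' spreads react.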
Greyhound Mystique
-
- Posts: 3248
- Joined: Thu Oct 24, 2019 8:25 am
- Location: Newport
Yep, you read my mind. Thanks, Jim.
jimibt wrote: ↑Mon Feb 12, 2024 6:08 pm
But if I'm just 'finding the gap', I still need to do the background data collection to know whether the odds are likely to drift or shorten, no?
Archery1969 wrote: ↑Mon Jan 22, 2024 3:18 pm
I for one worked out that I was trying to make things too complicated. Each race day is different. Better to trade them using a combination of "find the gap" and "leap frogging", plus some additional adjustments, which you can apply to any low-liquidity markets, not just greyhounds. The key is working out when the value disappears, which actually turned out to be very simple.
lavenham wrote: ↑Fri Jan 19, 2024 9:20 am
Hi,
With the shortage of decent horse racing lately due to bad weather, I decided to revisit greyhounds. I came across this interesting topic, which has about 79 pages of high-quality input, spreadsheets etc. Sadly there has been no input since 2021 and I was interested to know why. Has everyone found a better approach which they do not wish to share, or have they abandoned greyhounds altogether? The last greyhound input I could find was Dallas's 'Leap frog' servant, which might possibly work well with Archery1969's approach.
There are a host of 'scraping' approaches using Python, but I'm not sure whether they still work; if they do, they would be a great way to check out different angles.
So - where have all the experts gone?
Thanks
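For anyone new to the term, here is a loose sketch of "find the gap" (my reading of the phrase, not Archery1969's exact rules): when the back/lay spread on a low-liquidity runner is wide, you offer a price inside the spread rather than taking the current best price.

```python
def one_tick_inside(best_back, best_lay, tick=0.1):
    """Suggest a lay offer one tick above best back and a back offer one tick
    below best lay, provided a gap actually exists. The fixed tick size is a
    simplification: Betfair tick sizes vary by price band."""
    if best_lay - best_back <= tick:
        return None  # spread already tight, no gap to work with
    return (round(best_back + tick, 2), round(best_lay - tick, 2))

print(one_tick_inside(4.0, 5.0))   # wide spread -> offer at (4.1, 4.9)
print(one_tick_inside(4.0, 4.1))   # one tick apart -> None
```

The data-collection question in the post above still stands: the gap tells you where to sit, not which way the price will move.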
Fugazi wrote: ↑Mon Feb 12, 2024 11:09 pm
Not with this new method I am working on. It will keep offering prices as long as the % price difference between the average price and the calculated price is >= +50%, or whatever % I put in the bot settings.
You should be able to work out from that screenshot how I am getting the calculated price.
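A rough sketch of that trigger as described (the function name and inputs are my guesses; the actual calculated price comes from the poster's spreadsheet, which isn't shown here):

```python
def keep_offering(average_price, calculated_price, threshold_pct=50.0):
    """Keep an offer in the market while the calculated price exceeds the
    average price by at least threshold_pct percent."""
    diff_pct = (calculated_price - average_price) / average_price * 100.0
    return diff_pct >= threshold_pct

print(keep_offering(4.0, 6.5))   # +62.5% -> True, keep offering
print(keep_offering(4.0, 5.0))   # +25% -> False, pull the offer
```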
One thing you should note, though: nobody is going to give you a complete system to print money on sports or any other type of betting. They would be mad to.
As others have said, it takes years to find edges or a system that is profitable over the long term.
Some do manage to find things quickly, but that's probably fewer than 5%.
And remember, even if you do find something, it might work for 1 day, 1 week, 1 month, 1 year and then stop working. Or it might go on forever.
Those that have something that works are not going to broadcast it in its entirety.
Personally, I have thought I'd found something, gave up, started again, gave up, started again, gave up, started again...
You get the picture?
Of course.
Archery1969 wrote: ↑Mon Feb 12, 2024 11:14 pm
As it happens, I'm in the lucky 5 per cent. But I had a background of successful value betting (only one bookmaker hasn't gubbed me yet) and profitable poker playing, so I'm quite good at spotting the nuances when a system fails and finding ways to exploit the failure.
I should add: my method of value betting wasn't replicable in physical shops, as their odds are much lower, hence I'm not a millionaire. I was just abusing football 2-up offers where bookmaker odds were close to exchange odds. Once I'm gubbed on the last account, I'll stick the method on the forum somewhere; it's pretty simple.
- wearthefoxhat
- Posts: 3235
- Joined: Sun Feb 18, 2018 9:55 am
Alas, that's what happened to one of my trusty bots a while ago.
Archery1969 wrote: ↑Mon Feb 12, 2024 11:14 pm
And remember, even if you do find something, it might work for 1 day, 1 week, 1 month, 1 year and then stop working. Or it might go on forever.
Might dust it off and try it out again, as the greyhound landscape may have changed since then. I tend to overcomplicate things, so a more simplified version might be the way forward.
You should send it to me to fix and never return.
wearthefoxhat wrote: ↑Tue Feb 13, 2024 11:52 am
Out of interest - why do you think it stopped working? Oversaturation of similar bots?
Last edited by Fugazi on Tue Feb 13, 2024 12:50 pm, edited 2 times in total.
In the spirit of the thread, I was thinking of posting it up, but you can never be sure of those who are never willing to offer feedback and just take.
I put it down to others catching up and getting ahead. The liquidity pool is fairly small, so it wouldn't take much to do it.
- firlandsfarm
- Posts: 2720
- Joined: Sat May 03, 2014 8:20 am
You could always post it and then use a BOT that takes the opposite position!
wearthefoxhat wrote: ↑Tue Feb 13, 2024 6:17 pm
firlandsfarm wrote: ↑Thu Feb 15, 2024 9:54 am
Is the code still working? I know nothing about coding. I tried to run it with an online "compiler" with no luck. ChatGPT thinks the compiler doesn't have the libraries I need. I also got it to make the bracket corrections for Python, but still no luck.
What should I use to run it? Does it need modifying?
Code: Select all
import re
import requests
from bs4 import BeautifulSoup
from requests_html import HTMLSession

def extract_times(info):  # renamed from 'input' to avoid shadowing the builtin
    times_regex = re.compile(r'Best: (.....)sLast: (.....)s')
    best_times_regex = re.compile(r'Best: (.....)s')
    match = times_regex.search(info)
    best_match = best_times_regex.search(info)
    if match:
        # if the last run beat the best time, just use the best time;
        # otherwise average best and last
        if float(match.group(2)) < float(match.group(1)):
            return float(match.group(1))
        else:
            return round((float(match.group(1)) + float(match.group(2))) / 2, 2)
    if best_match:
        return float(best_match.group(1))
    return 100  # no time found: sorts the runner to the bottom

session = HTMLSession()
baseUrl = "https://www.sportinglife.com"
racecardPath = "/greyhounds/racecards/20"  # renamed from 'str' to avoid shadowing the builtin

res = requests.get("https://www.sportinglife.com/greyhounds/racecards")
soup = BeautifulSoup(res.text, "html.parser")

for link in soup.find_all('a'):
    href = link.get('href')
    if href and racecardPath in href:
        res = session.get(baseUrl + href)
        card = BeautifulSoup(res.text, "html.parser")
        race = card.find_all('h1')[1].get_text()
        distance = card.find(class_='gh-racecard-summary-race-class gh-racecard-summary-always-open').get_text()
        summary = card.find_all(class_="gh-racing-runner-key-info-container")
        Runners = dict()  # keyed by time, so runners with identical times overwrite each other
        for runner in summary:
            Trap = runner.find(class_="gh-racing-runner-cloth").get_text()
            Name = re.sub(r'\(.*\)', '', runner.find(class_="gh-racing-runner-greyhound-name").get_text())
            Average_time = extract_times(runner.find(class_="gh-racing-runner-greyhound-sub-info").get_text())
            Runners[Average_time] = Trap + '. ' + Name
        if Runners and ('OR' in distance or 'A' in distance):
            ranked = sorted(Runners.items())
            if len(ranked) >= 2 and (ranked[1][0] - ranked[0][0]) >= 0.1:
                timeDiff = round(ranked[1][0] - ranked[0][0], 2)
                print(f"{race},{ranked[0][1]}, class {distance}, time difference {timeDiff}")
If it's an import, the compiler would tell you exactly what is missing. But trying to scrape the Sporting Life - very naughty posting this.
Fugazi wrote: ↑Fri Feb 16, 2024 9:25 pm
conduirez wrote: ↑Fri Feb 16, 2024 10:52 pm
Guilty as charged. Hoping to make some edits to it myself with the help of GPT.