Installing an AI bot on my PC.

A dedicated space to explore and discuss how cutting-edge technologies are reshaping sports trading and betting strategies. Collaborate on innovative ideas, and stay ahead of the curve in a rapidly evolving landscape.
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

I posted a question on what I will call "the main thread" that started this subject (viewtopic.php?p=375727#p375727), saying I had asked GPT for input on creating/training my own bot, and the responses it gave made me think it assumed I knew what it was talking about! So I put my question to the forum, asking if anybody had any experience of creating their own bot, but with no direct answer I feel that a record of my experiences/learnings may be of assistance to others, and this is it. This initial post will be quite long because it's catching up on what has happened so far.

I started by wanting to know a little more about AI, so I asked GPT-4o "how is AI coded, how does it 'read', 'understand' and know what to do?". Cutting a long answer short, it told me it's all about numbers. Everything (text, graphics, speech) is reduced to numbers, and then it looks for patterns. That was encouraging: patterns in numbers is something we are all interested in. Apparently it arrives at all its answers "using math and probabilities to say, 'Based on everything I've seen, this is the most likely correct response.'". So maybe that explains some of the 'wrong' answers we get.

I then wanted to get to the bottom of this 'training' thing. I'd heard lots about it but never delved into what it really means. So I asked "can I personally train an AI bot on my knowledge for my personal use, or does anything I 'teach' it go into the main pot of knowledge?". Well, that's when I started to experience technical overload … "Hugging Face", "Retrieval-Augmented Generation (RAG)", "LangChain", "LlamaIndex", "Mistral", "LLaMA", "Ollama" etc. None of them meant anything to me, and that's when I placed the post on the 'main thread'. That's also when I learned you can choose whether or not to allow it to learn from your questions, and when I realised that if I had an 'open' bot using the group logic of the world it could be influenced by the "follow my bets and become a millionaire" YouTubers! So I decided private was the safe way to go.

I then moved on to the computing power required for a private setup. It suggested ...

1. Lightweight Models (Low-end or Mid-range PCs)
These are smaller LLMs like GPT4All, Mistral 7B, or TinyLLaMA, which are good for basic Q&A, note help, or task assistance.

RAM: 8–16 GB (16 GB is comfy)
Drive space: 3–10 GB
CPU: Quad-core (Intel i5/Ryzen 5 or better)
GPU: Not required, but helps (4–6 GB VRAM is enough)
Use case: Summarizing, chatting, helping with notes, answering based on uploaded documents.
Latency: 1–3 seconds per response
💡 Ideal if you want something running on your laptop without melting it.

⚙️ 2. Mid-tier Models (Serious Personal Assistant)
Models like LLaMA 2 13B, Mistral-Medium, or OpenChat clones.

RAM: 32 GB+ recommended
Drive space: 15–25 GB
CPU: High-end (Ryzen 9 or i9)
GPU: 8–12 GB VRAM (NVIDIA 3060/3070+)
Use case: More fluent chat, decent reasoning, document Q&A with context.
Latency: 0.5–2 seconds per response with GPU
⚡ Feels close to ChatGPT if optimized. Best for tinkerers or devs.

🧠 3. Full Power (ChatGPT-level AI at home)
You’re looking at models like LLaMA 2 70B, Mixtral, or GPT-J clones.

RAM: 64–128 GB+ (serious server)
Drive space: 40–70+ GB
GPU: 24 GB+ VRAM (RTX 4090+ or A100)
Use case: Advanced reasoning, full document summarization, creative tasks, multiturn conversation with memory.
Latency: Real-time on high-end GPUs; slow on CPU
🚀 This is pro territory. Amazing power, but costs $$$ unless you use quantized (compressed) versions.

It then asked me to provide the spec for my PC, but I wanted to keep control of the conversation, so I asked what advantages, if any, there are in having a local installation compared with a VPS, a Custom GPT or the cloud, and it produced this table ...
LocalGPT.jpg
It summarised the options as such …

🧠 So... What's the Best Choice For You?

🚀 1. Want Fast Results with No Tech Headache?
Use a Custom GPT in ChatGPT.
Easy, no setup. Just upload docs, set behaviors, and it “talks like you.”
Great if you're okay with cloud-based AI.

🔒 2. Want Full Control & Privacy (and like to tinker)?
Local AI is the way.
You own the model. You can even unplug from the internet and still have your own mini ChatGPT.
Best for developers, researchers, or privacy nerds.

☁️ 3. Want Something Always-On and Scalable?
Cloud / VPS hosting (like on AWS, Azure, RunPod, etc.)
Perfect for building bots or apps, especially if you want to:
Share your bot with others,
Run APIs,
Train a custom model once and serve it 24/7.

Some folks run all three:
ChatGPT for quick, clean answers and GPT-4 quality.
Local AI for offline use and confidential stuff.
Cloud/VPS for more advanced tools like a website chatbot or internal app.

OK, now was the time to give it my PC's spec. because if not suitable for local install then I could forget that option. It advised "your setup is more than powerful enough". Great. It suggested I …

1. Use Ollama for now — it's clean, fast, and optimized.
2. Download Mistral 7B or LLaMA 2 7B (Q4 or Q5 quantized); it runs directly on CPU and can be integrated with LangChain or Python scripts later.
3. Add a document search layer with LlamaIndex, LangChain, or ChromaDB.
4. Build a local web interface using Streamlit or Gradio, or a terminal/chatbot interface, and integrate with VS Code + Python notebooks for deeper analysis.

Don't ask! As I said, I'm learning, so at the time of drafting this I have no idea what they all mean, but it sounds good! :)
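For anyone curious what step 1 looks like in practice: once Ollama is installed it exposes a local HTTP endpoint (port 11434 by default), and a Python script can talk to it using nothing but the standard library. This is my own rough sketch, not GPT's code, and it assumes you have pulled the `mistral` model:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot (non-streaming) generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs Ollama running locally):
# print(ask("mistral", "In one sentence, what is a handicap race?"))
```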

It then started asking me about my data and its structure. I concentrated on my horse racing database, advising that it's SQL Server based, and asked if the bot would be able to connect directly … it responded that it can, and in doing so it will be able to run SQL queries from natural language instructions. Brilliant, I struggle with the finer points of SQL.

It kept wanting to 'set this up'; it did what in the sales world is called a 'trial close' after answering every question. I guess that part of it was trained by a double glazing salesman! :) But I still had questions and wanted to retain control, so I asked: if I later want to add another bot, would that be within this installation or would I need a second installation? Answer: "Yes — You can build multiple bots within the same installation". Apparently I can create multiple "agents/bots" that connect to different datasets with their own logic.

Apparently there should be no problem moving your setup at a later date, say if you replace your PC or decide to go VPS/cloud based.

We then moved on to comparing what to install which it summarised as follows …
Summary.jpg
… and it recommended Mistral 7B via Ollama, giving various reasons which I won't bore you with here, but you can ask GPT. It also confirmed I can install more than one model (say Llama 3) and switch between them using Ollama (and no, I don't really know what that means! :) ).

As the conversation developed it summarised the "Bot's Job Description", as it put it, as "Answer statistical questions and produce a table, by pulling structured data from SQL, combining it with external sources (like ratings/tipsters), and outputting a probability-based race assessment." And that will require …

An SQL query layer (via LangChain or manual function)
A web scraping / API connector (for pulling online ratings/news)
A probability calculator. This could be: Rule-based logic (e.g. weight certain stats), Simple logistic regression or even an ensemble model — eventually (had to look that last one up!)
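To make the "rule-based logic" option in that last bullet concrete, here is a minimal sketch of weighting stats and normalising the scores into win probabilities. The stat names, weights and runners are all invented for illustration:

```python
def rule_based_probs(stats: dict[str, dict[str, float]],
                     weights: dict[str, float]) -> dict[str, float]:
    """Score each runner as a weighted sum of its stats, then normalise
    the scores so they sum to 1 and can be read as win probabilities."""
    scores = {
        runner: sum(weights[stat] * value for stat, value in s.items())
        for runner, s in stats.items()
    }
    total = sum(scores.values())
    return {runner: score / total for runner, score in scores.items()}

# Hypothetical runners with two made-up stats on a 0-1 scale
stats = {
    "Runner A": {"recent_form": 0.8, "course_wins": 0.6},
    "Runner B": {"recent_form": 0.4, "course_wins": 0.2},
}
weights = {"recent_form": 0.7, "course_wins": 0.3}
probs = rule_based_probs(stats, weights)
```

A logistic regression or ensemble model would replace the hand-picked weights with learned ones, but the output shape (one probability per runner, summing to 1) stays the same.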

It summarised the blueprint for the bot as …

AI Model: Mistral 7B via Ollama
Interface: Python script or web (Gradio/Streamlit)
Data Source: SQL Server 2008 R2 (via pyodbc/sqlalchemy)
Online Ratings: Web scraping (BeautifulSoup/Selenium) or API (if available)
Win Probability Logic: Custom Python logic (initially) + optional ML later
Output: Markdown or HTML tables for readability
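The "Output" row is the easiest part of the blueprint to sketch: a few lines of Python can turn query results into a Markdown table. This helper and its example row are mine, not part of GPT's plan:

```python
def markdown_table(headers: list[str], rows: list[list[str]]) -> str:
    """Render query results as a Markdown table for readable bot output."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(str(cell) for cell in row) + " |")
    return "\n".join(lines)

# Hypothetical output row; real data would come from the SQL layer
table = markdown_table(["Horse", "Win %"], [["Example Runner", "23.5"]])
```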

I was further pleased to be told the bot will be able to create a new database with tables to store the scraped data from online sources.

We then installed Ollama & Mistral 7B on my PC. I did ask if I needed all the contents of the core, as it takes up a fair slice of the drive, and whether I could remove some, but was told the bot needed everything it has learnt as it's not subject-specific, and to remove it would be like removing my brain!

It then asked me to map the database, which I did by way of a screen grab showing the relationships between the tables; in Excel I listed all fields in the relevant tables, explaining their content and marking whether they were relevant or not. And that's where I am as I draft this post. Next will be to connect the bot to the database on SQL Server and see where to go from there.

I hope some find this project interesting and if you feel I did something wrong or could have done better please say so.
sniffer66
Posts: 1809
Joined: Thu May 02, 2019 8:37 am

I'd query the need to have an AI model on your local device. I've gone down the route of getting GPT to create the code to train an ML model (XGBoost etc.) on my PC, using historical data from SQL, CSVs etc.
Once I've created and saved the trained model, I've then backtested against unseen data to see if the model generalises well and the simmed P&L looks good. You can then run the model in real time, pulling in the required data feeds on prices, history etc., and run that in Python via the BA API to place the trades.

An ML model will run happily on a reasonable device, and you can still leverage online AI to produce code and analyse results.
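The backtest step sniffer66 describes can be sketched without any ML library: hold out unseen races, score them with the trained model, and tally a simulated flat-stake P&L. The model here is a stand-in function, not XGBoost, and the row structure is invented:

```python
def backtest(rows, model, stake=1.0):
    """Simulated flat-stake P&L over unseen races.
    Each row is (features, decimal_odds, won) - an invented structure.
    `model` maps features to a win probability; we back a runner only
    when that probability beats the odds-implied probability."""
    pnl = 0.0
    for features, odds, won in rows:
        if model(features) > 1.0 / odds:          # perceived value: back it
            pnl += stake * (odds - 1.0) if won else -stake
    return pnl

def toy_model(features):
    """Stand-in for a trained classifier: treat the rating as a probability."""
    return features["rating"]

# Three hypothetical unseen races
rows = [({"rating": 0.4}, 3.0, True),    # value bet, wins:  +2.0
        ({"rating": 0.2}, 4.0, False),   # no value: skipped
        ({"rating": 0.5}, 2.5, False)]   # value bet, loses: -1.0
profit = backtest(rows, toy_model)       # 1.0 units over the sample
```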

That's very high level, and only my personal opinion
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

Thanks Sniffer. When you say you would query the need to have it on a local device, is that because it doesn't have to be local, or is there a reason why you feel it should not be local?

I've found by experience that creating a live connection between GPT and a file over the internet is troublesome ... it couldn't even post a file to a OneDrive folder I shared with it! My database automatically updates with new races and results, relieving me of that responsibility. So it goes back to one of my earlier issues: if everything is on the same computer, which of local, VPS or cloud?

I've been striving to run a totally automatic system for years so I intend to link into the BA API once it's up and running and tested ... but that will be another learning curve!
sniffer66
Posts: 1809
Joined: Thu May 02, 2019 8:37 am

firlandsfarm wrote:
Tue Apr 22, 2025 3:23 pm
Thanks Sniffer. When you say you would query the need to have it on a local device, is that because it doesn't have to be local, or is there a reason why you feel it should not be local?

I've found by experience that creating a live connection between GPT and a file over the internet is troublesome ... it couldn't even post a file to a OneDrive folder I shared with it! My database automatically updates with new races and results, relieving me of that responsibility. So it goes back to one of my earlier issues: if everything is on the same computer, which of local, VPS or cloud?

I've been striving to run a totally automatic system for years so I intend to link into the BA API once it's up and running and tested ... but that will be another learning curve!
I meant you run your trained machine learning model locally, giving it the required data via whatever feeds it needs - I use a combination of historical data and live prices etc. The ML model then makes the decision and places the bet.

As a simple example, say you were looking to create a value straight-betting model. You can use something like an XGBoost classifier to use historical data to predict the probability for each horse in a race, and backtest the accuracy of the results. Assuming your model can predict the probability with reasonable accuracy, you then have an implied probability for each horse, which you can compare with the BF implied probability at the off, passing prices back to the model and backing those runners where you have perceived value.

AI, in the form of an online LLM/logic model, is used to create the code, analyse the uploaded data, results etc. to allow you to tweak the features used in the ML model.
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

OK, to connect to my SQL Server database GPT wants me to confirm Python is installed. It's not, so I have to install it ... small hiccup: apparently when calling Python after installing, Windows tries to send you to the App Store ... you have to disable a couple of things to prevent that; GPT tells you what to do. So Python is successfully installed and running. :)

So I now need to install the pyodbc library ... Done

And connect the installation to the SQL Server database ... Done and tested.
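For anyone following along, the connection step typically looks like the sketch below using pyodbc. The driver name, server and database are placeholders for your own; `pyodbc.drivers()` lists what's actually installed:

```python
def connection_string(server: str, database: str) -> str:
    """Build a trusted-connection ODBC string for SQL Server.
    The driver name is an assumption - check pyodbc.drivers() for yours."""
    return ("DRIVER={ODBC Driver 17 for SQL Server};"
            f"SERVER={server};DATABASE={database};Trusted_Connection=yes;")

def list_tables(conn_str: str) -> list[str]:
    """Connect and list user tables - a quick check that the link works."""
    import pyodbc  # imported here so the sketch loads before pyodbc is installed
    with pyodbc.connect(conn_str) as conn:
        rows = conn.execute(
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
            "WHERE TABLE_TYPE = 'BASE TABLE'").fetchall()
    return [r[0] for r in rows]

# Usage (against a real server):
# print(list_tables(connection_string(r"localhost\SQLEXPRESS", "Racing")))
```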

OK, it's moving me on to proving the connection by asking Mistral what it knows about horse racing, resulting in a five-paragraph generic comment about the different types of racing.

Before trying a basic question in Python, asking a natural language question and having Mistral translate it into an SQL query, GPT asked me to confirm some tables. Unfortunately that was a bit of a disaster. Although I had spent time creating a screen grab of all relevant tables in my database showing their relationship links, confirmed that in Excel by again showing how the tables were linked and giving a summary of the data within, and listed all the fields in each table stating what data they held, even after all that the GPT part of the bot (the designer/creator of the bot) invented four table names that did not exist in my schema or in the database itself, and this was after testing the connection to the database by asking it to list all the tables! I asked it why, and the response it gave was that it had defaulted to more common industry names for such tables and had not referred to the schema documents I previously submitted! Oh dear!!
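One cheap guard against that kind of hallucination is to check every table name in a generated query against the real schema before running it, and re-prompt on a miss. A rough sketch of the idea (the table names here are invented, and the regex is deliberately crude):

```python
import re

def table_names_in(sql: str) -> set[str]:
    """Crude extraction of table names following FROM/JOIN keywords."""
    return {m.group(2) for m in
            re.finditer(r"\b(FROM|JOIN)\s+\[?(\w+)\]?", sql, re.IGNORECASE)}

def check_tables(sql: str, real_tables: set[str]) -> set[str]:
    """Return any table names the model invented (i.e. not in the schema)."""
    return table_names_in(sql) - real_tables

# Invented schema; in practice, fill this from INFORMATION_SCHEMA.TABLES
real = {"Races", "Runners", "Results"}
bad = check_tables("SELECT * FROM HorseForm JOIN Runners ON r.Id = h.Id", real)
# bad == {"HorseForm"}: refuse to run the query and re-prompt with the schema
```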
sniffer66
Posts: 1809
Joined: Thu May 02, 2019 8:37 am

Which GPT model are you using ? I found 3o mini high was decent for logic and coding, superseded now by 4o mini high
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

sniffer66 wrote:
Tue Apr 22, 2025 7:22 pm
Which GPT model are you using ? I found 3o mini high was decent for logic and coding, superseded now by 4o mini high
I'm using 4o to create the bot ... I confess to not really understanding the subtle differences between the versions ... and if they are that intelligent, why don't they tell you if another version would be better at handling your questions/project?! Also I have to confess I didn't start the chat with a 'super prompt' ... I just asked "how is AI coded, how does it 'read', 'understand' and know what to do", which was probably very lazy of me, but when I started I wasn't expecting to take it down this route!

For coding in the past (mainly MQL4) I've used the "Explore GPTs" option and selected one from the list, but again I must admit I don't really know what I'm looking at with that list ... I assume they're 'private' bots trained in a specific subject but open for public use.
sionascaig
Posts: 1605
Joined: Fri Nov 20, 2015 9:38 am

A lot of this is way past my pay grade but following with interest.

An off-field approach might be to look at some of the (many) vids on jailbreaking & "personal" AI. I generally find folk who try to break things can provide good insight into the capabilities & limitations of the item under discussion.

Sounds like a great project for learning - right in at the deep end )

Best wishes with your endeavour...
sniffer66
Posts: 1809
Joined: Thu May 02, 2019 8:37 am

firlandsfarm wrote:
Tue Apr 22, 2025 10:21 pm
sniffer66 wrote:
Tue Apr 22, 2025 7:22 pm
Which GPT model are you using ? I found 3o mini high was decent for logic and coding, superseded now by 4o mini high
I'm using 4o to create the bot ... I confess to not really understanding the subtle differences between the versions ... and if they are that intelligent, why don't they tell you if another version would be better at handling your questions/project?! Also I have to confess I didn't start the chat with a 'super prompt' ... I just asked "how is AI coded, how does it 'read', 'understand' and know what to do", which was probably very lazy of me, but when I started I wasn't expecting to take it down this route!

For coding in the past (mainly MQL4) I've used the "Explore GPTs" option and selected one from the list, but again I must admit I don't really know what I'm looking at with that list ... I assume they're 'private' bots trained in a specific subject but open for public use.
Definitely give 4o mini high a run out, I find it far superior for creating code
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

sionascaig wrote:
Wed Apr 23, 2025 9:03 am
A lot of this is way past my pay grade but following with interest.

An off field approach might be to look at some of the (many) vids on jailbreaking & "personal" AI. I generally find folk that try to break things can provide a good insight into capability & limitations of item under discussion.

Sounds like a great project for learning - right in at the deep end )

Best wishes with your endeavour...
Thanks for the encouragement sionascaig. It would have been above my pay grade if I had known what I was letting myself in for before I started, but I have to hand it to GPT: it has gently taken me step by step and allowed me to clarify/ask questions at every step ... I'm very pleased so far.

I spent most of yesterday restructuring my SQL setup. My main racing data is on an old 2008 server installation, and because it underlies some proprietary software I'm a little scared to touch it for fear that I may break it in some way, so I installed MS SQL Server 2022 Express alongside it and linked the racing database to that installation, which I will access read-only to protect the data. I have overlaid the 2022 installation with Access 365, as I find query construction using its graphical interface easier than direct SQL. There are a few gaps between them, with the SQL language generally being superior, but I think I am aware of most of them.

I find having 'real databases' instead of spreadsheets is far better. They are faster and can manipulate the data much more. And SQL Server is free (up to 10 GB per database), and with the coming of AI anyone can code it! :)
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

sniffer66 wrote:
Wed Apr 23, 2025 3:29 pm
Definitely give 4o mini high a run out, I find it far superior for creating code
Thanks for the tip sniffer, I'll bear that in mind, but ... as I'm mid-chat with 4o, how easy would it be to switch the project? Could I ask 4o to prepare a summary of what's happened so far and pass that to 4o-mini-high? I must admit I've considered that approach when leaving a chat dormant for a while but wanting to come back to it. Maybe using the 'archive' facility does it anyway, but I have tried to return to some chats after a dormant period and it basically says "what are you on about, I don't remember that"! :D That's when I started telling it when I would be breaking away for a while but expected to return later and pick up the chat again.

So far most of my coding has been from nominated bots using the Explore GPTs facility. Let's say some are better than others! :roll:

Project? Project! I've just noticed/remembered that GPT has a Project categorisation. Is it too late now to move my chat into a Project? If not, how and what benefits?
sniffer66
Posts: 1809
Joined: Thu May 02, 2019 8:37 am

firlandsfarm wrote:
Thu Apr 24, 2025 7:09 am
sniffer66 wrote:
Wed Apr 23, 2025 3:29 pm
Definitely give 4o mini high a run out, I find it far superior for creating code
Thanks for the tip sniffer, I'll bear that in mind, but ... as I'm mid-chat with 4o, how easy would it be to switch the project? Could I ask 4o to prepare a summary of what's happened so far and pass that to 4o-mini-high? I must admit I've considered that approach when leaving a chat dormant for a while but wanting to come back to it. Maybe using the 'archive' facility does it anyway, but I have tried to return to some chats after a dormant period and it basically says "what are you on about, I don't remember that"! :D That's when I started telling it when I would be breaking away for a while but expected to return later and pick up the chat again.

So far most of my coding has been from nominated bots using the Explore GPTs facility. Let's say some are better than others! :roll:

Project? Project! I've just noticed/remembered that GPT has a Project categorisation. Is it too late now to move my chat into a Project? If not, how and what benefits?
If you remain in the same chat, you can just use the dropdown to select a different GPT model, and the new model still has access to the historical info from the previously used model.

Also, assuming you are using the paid version, you have, I think, 10 queries per month you can use with Deep Research (the button is next to the search one in your text entry pane). Well worth trying out, but use it sparingly; it's easy to exceed the query limit. The trick is in framing your query explicitly so you get the best from the model in as few retries as possible.
firlandsfarm
Posts: 3304
Joined: Sat May 03, 2014 8:20 am

sniffer66 wrote:
Thu Apr 24, 2025 10:15 am
If you remain in the same chat, you can just use the dropdown to select a different GPT model, and the new model still has access to the historical info from the previously used model.

Also, assuming you are using the paid version, you have, I think, 10 queries per month you can use with Deep Research (the button is next to the search one in your text entry pane). Well worth trying out, but use it sparingly; it's easy to exceed the query limit. The trick is in framing your query explicitly so you get the best from the model in as few retries as possible.
OK, good to know. I'll keep with 4o for now, but if it seems to be getting bogged down I'll try switching to 4o-mini-high. The (possible) 10-query limit in Deep Research is, I assume, 10 chats and not 10 questions? I'm finding GPT makes many of what I would call 'admin' errors, such as missing something if you make the post too long/complicated, and sometimes just getting it plain wrong, like promising to do something and then not!

Not done much on the project today so nothing to report ... I've been concentrating on getting a webpage layout/presentation right.