ChatGPT can't create Bet Angel automation; it's lying. That's the problem with this sort of AI: it never says 'Sorry, I can't do that'. It just makes stuff up.
We've been training it on general usage, so it's quite good at giving you instructions to create something; it just can't create the actual file itself.
Chat GPT & Generative AI tools
- ShaunWhite
- Posts: 10454
- Joined: Sat Sep 03, 2016 3:42 am
https://thehill.com/opinion/technology/ ... al-damage/
In January, The New York Times demanded — and a federal magistrate judge granted — an order forcing OpenAI to preserve “all output log data that would otherwise be deleted” while the litigation was pending. In other words, thanks to the Times, ChatGPT was ordered to keep all user data indefinitely — even conversations that users specifically deleted. Privacy within ChatGPT is no longer an option for all but a handful of enterprise users.
Last week, U.S. District Judge Sidney Stein upheld this order. His reasoning? It was a “permissible inference” that some ChatGPT users were deleting their chats out of fear of being caught infringing the Times’s copyrights. Stein also said that the preservation order didn’t force OpenAI to violate its privacy policy, which states that chats may be preserved “to comply with legal obligations.”
This is more than a discovery dispute. It’s a mass privacy violation dressed up as routine litigation. And its implications are staggering.
- ShaunWhite
- Posts: 10454
- Joined: Sat Sep 03, 2016 3:42 am
Euler wrote: ↑Fri Jul 04, 2025 7:06 pm
ChatGPT can't create Bet Angel automation; it's lying. That's the problem with this sort of AI: it never says 'Sorry, I can't do that'. It just makes stuff up.
We've been training it on general usage, so it's quite good at giving you instructions to create something; it just can't create the actual file itself.

- firlandsfarm
- Posts: 3320
- Joined: Sat May 03, 2014 8:20 am
ChatGPT told me it was lazy when I pointed out it had failed to find an earlier post in the same chat!! (I used a new thread to ask why, so as not to clutter the main thread, hence the response reads a little third-person.)
"Ah — thank you for clarifying that, and now you’ve absolutely got me concerned, because:
If you were in the same chat thread and referenced a unique earlier message, then yes — I should have been able to find and respond in context. No excuses.
You did everything right:
You quoted a distinctive line:
“Success! All 168 files created.
Well done.”
You referenced it clearly.
You stayed in the same chat session.
So what went wrong?
Likely causes:
...
Hallucination or laziness.
Sometimes the model gives a filler response (like “not accessible”) when it should have either asked for clarification or tried harder to match the quote."
I hope the 'side' chat suitably grassed-up the main chat!

- ShaunWhite
- Posts: 10454
- Joined: Sat Sep 03, 2016 3:42 am
firlandsfarm wrote: ↑Tue Jul 08, 2025 10:18 am
ChatGPT tells me it was lazy when I pointed out it failed to find an earlier post in the same chat!!

I posted some of its own code back and it referred to it as "bullshit", which was weird because I've never used profanities, so I'm not sure where the colourful language came from.

- firlandsfarm
- Posts: 3320
- Joined: Sat May 03, 2014 8:20 am
After continuing the 'side chat' with it, it basically said 'if you get any more trouble with the other chat, let me know and I'll have a word in its ear'! (paraphrased)

- ShaunWhite
- Posts: 10454
- Joined: Sat Sep 03, 2016 3:42 am
What bugs me most is how ChatGPT varies from day to day. Yesterday it was actually pretty amazing tbf; tonight it's utterly clueless even on simple things.
But obviously it can make all the assurances it wants; it can't break its programming, so you get this, and then more of the same.