That’s like saying a roll of dice is deterministic. In theory, under controlled circumstances, yes. In the real world, the way people actually use it, no. The OpenAI docs even mention this; it’s only about consistency.
If I encounter a new, unknown command and ask ChatGPT to explain it, it is entirely unpredictable to me whether the answer will be 100% correct, 95% correct, or complete mansplaining bullshit.
Even if it may be close to the truth, with bash the difference between a 95% answer and a 100% answer can be very subtle: seemingly correct code with a seemingly correct explanation can give a very wrong end result.
Again, you've missed my point entirely. The reason for mentioning determinism was to tell you that the burden of proof for a claim like "DROP table1;" lies with the person making it (you, not me), and that such proof had better come with some evidence, hence:
Now go find some instances where someone is presented with "rm -rf /" or "DROP table1;" when otherwise expecting a response to help with non-destructive commands!
If I encounter a new, unknown command and ask ChatGPT to explain it, it is entirely unpredictable to me whether the answer will be 100% correct, 95% correct, or complete mansplaining bullshit.
Please, show me some evidence of this variance, because it is either a bold or an ignorant claim to say that the outputs are wildly unpredictable: 100% correct vs. "complete mansplaining bullshit". Run the numbers! Do 10,000 responses and analyze the results! Show me! Based on my direct experience with reality, I am completely unconvinced by your arguments. You can easily change my mind by presenting reproducible evidence to the contrary of my beliefs and experiences.
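To be concrete, here is a minimal sketch of what "run the numbers" could look like, assuming the standard OpenAI chat completions HTTP endpoint, an OPENAI_API_KEY in the environment, and jq installed; the model name, prompt, and file names below are placeholders, not a recommendation:

#!/usr/bin/env bash
# Ask the same question N times, save every answer, and count distinct responses.
# Assumes OPENAI_API_KEY is set and jq is installed; "gpt-4o-mini" and the
# prompt are placeholders.
N=100
PROMPT="Explain what 'tar -xzf archive.tar.gz' does."
for i in $(seq 1 "$N"); do
  answer=$(curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg p "$PROMPT" \
      '{model: "gpt-4o-mini", messages: [{role: "user", content: $p}]}')" \
    | jq -r '.choices[0].message.content')
  printf '%s\n---\n' "$answer" >> answers.txt   # full text, for manual grading
  printf '%s' "$answer" | sha256sum >> hashes.txt
done
sort hashes.txt | uniq -c | sort -rn   # distinct answers, most frequent first

Note that the hash tally only measures consistency, not correctness: you would still have to read answers.txt and grade each response as 100% correct, subtly wrong, or nonsense, which is the distinction this whole thread is actually about.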
Even if it may be close to the truth
This is just a classic motte-and-bailey fallacy... let me explain! The bailey is the claim that the outputs are "complete mansplaining bullshit", which is very hard to defend. The motte you retreat to, "close to the truth", is exactly what I'm saying holds for prompts that are more of a translation from one language to another: English to bash, English to SQL, etc.
I have never claimed it would be 100% correct, just that the hallucinations are very predictable in nature (not in their exact form, but in their nature). Here's an example of the kind of error:
Select name and row_id from table1 joined on table2 on table1_id.
SELECT table1.name, table2.row_id
FROM table1
JOIN table2 ON table1.table1_id = table2.table1_id;
Well, that should obviously be table1.row_id in the SELECT, right? And, granted, it's not super clear from the instructions, but standard practice says the JOIN should be on table1.id. Oopsie! Is it valid SQL? Yes! Is it "complete mansplaining bullshit"? Not. Even. Remotely.
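For reference, a sketch of the query the prompt presumably wanted, assuming (and it is an assumption, since the prompt never gives the schema) that table1's primary key is id and that row_id lives on table1:

SELECT table1.name, table1.row_id
FROM table1
JOIN table2 ON table1.id = table2.table1_id;

Only two identifiers differ from the generated query, which is exactly why this class of error is subtle rather than bullshit.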