ChatGPT is impressive, but it makes a lot of mistakes. Siri can't afford that error rate for PR and legal reasons, so it needs to use a technology that's less flexible but more reliable and safer. This is similar to self-driving cars: it's relatively easy to come up with a proof of concept, but turning it into a safe mainstream product is a different story.
ChatGPT would have the same problem, since it works on the text iOS gives it. It's not clear to me that it would perform any better at reformulating your query.
1. I expect it to know that it's not possible to increase the brightness of an alarm, and to reject such incongruent requests.
2. This particular implementation is limited to operating on English text in its finalized form, but a different first-class LLM implementation of an AI assistant could work directly on some form of phonetic input. ChatGPT is pretty good at dealing with such ambiguities; e.g., it already understands questions written in IPA notation. It also understands Japanese written in romaji, a non-native phonetic spelling of a language with tons of homonyms.
> Siri can't afford that rate of errors for PR and legal reasons
Have we been using the same Siri? The only thing I trust it with is starting a timer. For everything else, it's literally a coin flip whether it'll actually understand me or mangle my request into something ludicrous.
IME Siri is much more limited than Alexa and Google Assistant. Have there been any lawsuits regarding those assistants? Or is Apple just being more conservative for other reasons?