By that definition all encryption would be end-to-end encryption, making the term useless.
The person sending the message and their intended recipient(s) are the "ends" in end-to-end encryption. The server is not an "end".
Incidentally, the client software is not the "end" either: if the system includes a component designed to forward any data about the otherwise-encrypted content of the messages to someone who is not the sender or an intended recipient (unless at the direction of someone who is a party to the conversation), then the system does not implement end-to-end encryption. For example, Apple's iMessage app does this with its mandatory client-side scanning misfeature.
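To make the "ends" concrete, here's a minimal sketch using PyNaCl (pip install pynacl); the Server class is illustrative, not any real messaging system's API. Only the two endpoints ever hold key material, so the relay in the middle can't read anything:

    # The two "ends" hold the keys; the relay in the middle never does.
    from nacl.public import PrivateKey, Box

    class Server:
        """A relay that only ever sees ciphertext; it is not an 'end'."""
        def relay(self, blob: bytes) -> bytes:
            # The server can store and forward this blob, but it cannot
            # read it: it holds no key material for the conversation.
            return blob

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    sending_box = Box(alice_key, bob_key.public_key)    # Alice's end
    receiving_box = Box(bob_key, alice_key.public_key)  # Bob's end

    ciphertext = Server().relay(sending_box.encrypt(b"meet at noon"))
    assert receiving_box.decrypt(ciphertext) == b"meet at noon"

The moment any component on either device forwards plaintext (or data derived from it) to someone outside the conversation, that property is gone, no matter how strong the encryption in transit is.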
> For example, Apple's iMessage app does this with its mandatory client-side scanning misfeature.
There's a lot of incorrect information here. First of all, it is not mandatory; it's opt-in: parents can turn it on for children under 18 whose devices have parental controls enabled. (Technically you could argue that it is then mandatory for those children, but that's no different from any other parental-control feature.) Also, it uses on-device machine learning to detect and blur NSFW photos. They even removed the feature that notified the parents if the child chose to view a photo that was flagged as NSFW anyway, so the contents of messages are not sent to Apple or anyone else.
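In other words, both the classifier and the blur are purely local effects; roughly this shape (plain Python with a placeholder classifier, not Apple's actual implementation):

    # Local-only filtering: the verdict and the blur stay on the device.
    # Note what is absent: no network call, no report to the vendor,
    # no notification to anyone else.
    def looks_nsfw(image_bytes: bytes) -> bool:
        # Stand-in for an on-device ML model; the verdict never
        # leaves the device.
        return len(image_bytes) % 2 == 0  # placeholder heuristic

    def render(image_bytes: bytes) -> str:
        return "blurred" if looks_nsfw(image_bytes) else "shown"

    print(render(b"\x89PNG..."))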
I think you're conflating it with the iCloud Photos CSAM detection, which would have been mandatory and would have sent the results of on-device scans to Apple if you had iCloud Photos enabled, but they seem to have scrapped that (for now at least), as they quietly removed all mentions of it from their website.
You're probably correct that I was thinking of the scrapped (or deferred) iCloud Photos proposal. Though this part:
> Also, it uses on-device machine learning to detect and blur NSFW photos. They even removed the feature that notified the parents if the child chose to view a photo that was flagged as NSFW anyway, so the contents of messages are not sent to Apple or anyone else.
… suggests that there was something similar in iMessage at one point, even if it was later removed. The "on-device machine learning" (or rather, on-device classification) means you're effectively sharing the data with Apple's agent running on the device, and the user doesn't have the ability to turn that off. Though it wouldn't be unreasonable to consider the parent who authorized this to be one "end" of the conversation, since there is a minor involved, assuming there actually is a minor involved. (These "parental controls" have been abused to monitor adults.) It would be best if that fact were somehow communicated to all the other participants, for example with a "parental control active" or "monitored account" badge on the user's icon & profile.
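Concretely, that could be as simple as a flag in the profile metadata that every client renders; a hedged sketch (field and function names are hypothetical, and in practice the flag would need to be set and signed by the platform so the monitored client can't just omit it):

    from dataclasses import dataclass

    @dataclass
    class Profile:
        display_name: str
        monitored: bool  # set (and ideally signed) by the platform when
                         # parental controls add a third party

    def display_name_with_badge(p: Profile) -> str:
        # Every participant's client renders the badge, so no one is
        # unknowingly talking to an extra "end".
        return p.display_name + (" [monitored account]" if p.monitored else "")

    print(display_name_with_badge(Profile("kid123", monitored=True)))
    # -> kid123 [monitored account]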