I would expect algorithms like that to output likelihoods, not hard identifications. All such a system does is say "this person looks like that person". I wouldn't read that as "wrong".
Facial recognition systems are image classifiers where the classes are persons, represented as sets of images of their faces. Each person is assigned a numerical id as a class label and classification means that the system matches an image to a label.
Such systems are used in one of two modes: verification or identification.
Verification means that the system is given an image and a class label as input, and outputs positive if the image matches the label, and negative otherwise.
Identification means that the system is given as input an image and outputs a class label.
In either case, the system may not directly return a single label, but rather a set of labels, each associated with a real-valued score interpreted (by the operators of the system) as a likelihood. Even then, the system has a threshold delimiting positive from negative identifications. That is, if the likelihood the system assigns to a classification is above the threshold, that counts as a "positive identification", and otherwise as a negative one.
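A minimal sketch of how both modes reduce to thresholded scores, assuming a hypothetical gallery of face embeddings and cosine similarity as the score (the names, the threshold value, and the toy vectors are all illustrative assumptions, not any particular system's API):

```python
import numpy as np

def cosine_similarity(a, b):
    """Score in [-1, 1]; higher means the two embeddings look more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gallery: class label -> reference embedding for that person
# (in a real system, derived from the enrolled images of each face).
gallery = {
    "person_0": np.array([1.0, 0.0, 0.0]),
    "person_1": np.array([0.0, 1.0, 0.0]),
}

THRESHOLD = 0.8  # operator-chosen cutoff between positive and negative

def verify(embedding, label):
    """Verification mode: image embedding + claimed label -> positive/negative."""
    return cosine_similarity(embedding, gallery[label]) >= THRESHOLD

def identify(embedding):
    """Identification mode: image embedding -> best label, or None if no
    score clears the threshold."""
    label, score = max(
        ((lbl, cosine_similarity(embedding, ref)) for lbl, ref in gallery.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= THRESHOLD else None

probe = np.array([0.9, 0.1, 0.0])
print(verify(probe, "person_0"))  # True: similarity ~0.99 clears the threshold
print(identify(probe))            # person_0
```

The point of the sketch is that even though the raw output is a continuous score, the threshold turns it into a hard positive/negative decision, and that decision can be wrong (a false positive or false negative) regardless of how the underlying score was interpreted.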
In other words, yes, a system that outputs a continuous distribution over classes representing sets of images of people's faces can still be "wrong".
Think about it this way: if a system could only ever signal uncertainty, how could we use it to make decisions?