Hmm, I don't think a universal language is implied by being able to translate without a Rosetta Stone. I agree that there probably isn't such a thing as a universal language per se, but I do wonder whether there's some notion of one at a certain level of abstraction.
But I think those ambiguous cases can still be understood/defined. You can describe how this one word in the lion's language doesn't neatly map to a single word in English and is used in a few different ways, some of which we might not have a word for in English, in which case we would likely just adopt the lion word.
Although, note that I do think I was wrong about embedding a multilingual corpus into a single space. The example I was thinking of was word2vec, and that appears to only work with one language. I did find some papers showing that you can do unsupervised alignment between two monolingual spaces, but I don't know how successful that is, or how it would treat these ambiguous cases.
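To make that concrete: the unsupervised approaches I found (e.g., MUSE, VecMap) bootstrap a seed dictionary between the two vocabularies and then solve an orthogonal Procrustes problem to rotate one space onto the other. Here's a minimal sketch of just that alignment step; the data is synthetic (random vectors standing in for two independently trained word2vec spaces), so treat it as an illustration, not those libraries' actual API:

```python
# Sketch of the orthogonal Procrustes step used in cross-lingual
# embedding alignment. X and Y stand in for embeddings of paired
# seed-dictionary words from two separately trained word2vec spaces.
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 50, 200  # embedding dim, seed dictionary size (toy values)

X = rng.normal(size=(n_pairs, d))                  # "source language" vectors
true_rot, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ true_rot + 0.01 * rng.normal(size=(n_pairs, d))  # rotated, noisy "target"

# Solve min_W ||X W - Y||_F over orthogonal W: W = U V^T, where U S V^T = svd(X^T Y)
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

aligned = X @ W
cos = np.sum(aligned * Y, axis=1) / (
    np.linalg.norm(aligned, axis=1) * np.linalg.norm(Y, axis=1))
print(f"mean cosine similarity after alignment: {cos.mean():.3f}")  # ~1.0 here
```

In a near-isomorphic case like this toy one, a single rotation recovers the mapping almost exactly; the ambiguous cases are precisely where no single rotation can line the spaces up.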
> I don't think a universal language is implied by being able to translate without a Rosetta Stone.
Depends what you mean. If you want a 1-to-1 translation, then your languages need to be isomorphic. For a lossy translation you still need some intersection between the embedding spaces, and the size of that intersection determines how well you can translate. It isn't unreasonable to assume there are some universal traits here, since every being lives in this universe and we're all subject to those experiences at some level, right? But that could still leave translations so lossy that they're effectively impossible, right?
Another way to think about it, though, is that language might not depend on experience at all. If it is completely divorced from experience, we may be able to understand anyone regardless of theirs. If it's a mix, then the results can be mixed.
> The example I was thinking of was word2vec
Be careful with this. If you haven't actually gone deep into the math (deeper than 3Blue1Brown), you'll find some serious limitations. Play around with it and you'll run into them yourself. Distances in high-dimensional spaces are not well behaved, and the embeddings aren't smooth. You run into a lot of the same problems as with visualization methods like t-SNE. These tools certainly have uses, but it's far too easy to draw the wrong conclusions from them. Unfortunately, both are often talked about incorrectly (about as incorrectly as most people's understanding of Schrödinger's Cat, the Double Slit experiment, or really most of QM: there are elements of truth, but it's been communicated through a game of telephone).
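To make the distance point concrete, here's a tiny self-contained experiment (synthetic data, nothing to do with word2vec's actual training): draw random points and watch the contrast between the nearest and farthest neighbor collapse as the dimension grows, which is one reason naive nearest-neighbor reasoning in embedding spaces needs care.

```python
# Distance concentration: in high dimensions, the nearest and farthest
# random neighbors end up at nearly the same distance from a query point.
import numpy as np

rng = np.random.default_rng(42)

for d in (2, 10, 100, 1000):
    points = rng.uniform(size=(500, d))
    query = rng.uniform(size=d)
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative contrast = {contrast:.3f}")
```

At d=2 the contrast is large; by d=1000, "nearest" and "farthest" are almost the same distance away.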