Aside from the obvious outcome that it would be manipulated (which anyone could have predicted, and which a well-thought-out design would have guarded against with "learning guards"), it didn't even require any deep machine learning -- you could simply tell the thing to repeat offensive statements. It was just a giant miscalculation.
However, the legal department of every company on the planet makes a risk/benefit analysis, especially in fuzzy areas like copyright law (as we've seen with the Java case: an API isn't copyrightable, then it is, then it isn't, then it is). Assuming that because Microsoft did it, it must be without risk is folly.