Hacker News

Well, if we really believe the bot was AI, then it wasn't Microsoft's bot. It was its own "artificial intelligence".

But the rest of those points have nothing to do with their legal team. They wouldn't implement a copy of another OS's interface in this OS without making sure it was legal to do so.



You would think that Google wouldn't implement a copy of the Java APIs in their operating system without making sure it was legal to do so, but apparently not.

Ozweiller is quite right. Big companies copy other people's stuff, breach trademarks (Metro?) and generally mess up all the time.

I doubt the ABI emulation is actually a problem, but calling it "Windows Subsystem for Linux" might well be a trademark violation, as it doesn't involve Linux itself. Imagine if Wine called itself "Linux Subsystem for Windows". I think Microsoft would be deploying their legal team right quick.


I think the AI comment was more about the fact that they didn't safeguard against seemingly obvious outcomes, such as internet trolls trying to get the bot to say bad things. Many companies block no-go words during username creation (hitler, racist slurs, etc.), so why didn't Microsoft?

It might not have been simple to do, but still - hard not to see the outcome.


lol what the hell are you talking about. This thing is SUPPOSED to learn. You can't have AI and restrict what it learns; it defeats the entire purpose. Isn't this the same thing that happens to people too? They go around the internet and soak up knowledge, sometimes racist, harmful misinformation, but they soak it up nonetheless.


Well, to be clear, I didn't say restrict what it learns; I said safeguard against outcomes. Or are you arguing that Microsoft knew the bot would spew racist insults in a laughably short timeframe, and only planned to run the bot for said timeframe?

The very fact that they had to pull the plug seems to suggest that it was not desired, and as such, it should have been safeguarded against.

An example safeguard: limit what it can say. If a reply contains racist/etc. content, literally don't send it to Twitter. The bot still learns, the algorithms don't change, and Microsoft still gets to see how the given AI behaves in full public. And above all else, the bot isn't a Microsoft-branded "Heil Hitler" AI.
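A minimal sketch of that kind of output gate (the blocklist entries and function names here are purely illustrative; a real filter would need a much larger word list and fuzzy matching, since trolls will happily write "h1tler"):

```python
import re

# Illustrative entries only; in practice you'd load a maintained list
# of blocked terms rather than hardcoding a handful.
BLOCKED_TERMS = {"hitler", "nazi"}

def is_safe_to_post(message: str) -> bool:
    """Return False if the message contains any blocked term."""
    words = re.findall(r"[a-z']+", message.lower())
    return not any(word in BLOCKED_TERMS for word in words)

def post_if_safe(message: str, send) -> None:
    """Gate outgoing messages: the bot still learns from everything it
    reads, but flagged replies never reach the public timeline."""
    if is_safe_to_post(message):
        send(message)
    # else: silently drop (or log) the reply instead of tweeting it
```

The point is that the filter sits between the model and the network, so nothing about the learning algorithm has to change.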

It sounds like you believe what happened is perfectly within reason - if that's the case, why do you believe they pulled the plug?


Did they even have any sort of filter? If they had at least blacklisted these words [0], that would have been a reasonable enough effort on its own. But these developers would have had to be living in a bubble not to know about trolls from 4chan.

All in all, this is a lesson that some high-profile person/group eventually had to learn on our behalf. Now, when an unknowing manager asks why your chat bot needs to avoid certain offensive phrases because, "our clientele aren't a bunch of racists", you can just point him to this story. The actual racists are tame by comparison to what trolls will do to your software.

[0] = https://github.com/shutterstock/List-of-Dirty-Naughty-Obscen...


Now to be fair, we restrict what humans learn all the time. We try to teach morals and ethics to our children. We generally don't let kiddos run wild and learn whatever is around without some sort of structure.


Aside from the obvious outcome that it would be manipulated (which anyone could have predicted, and which a well-thought-out design would have had "learning guards" against), it didn't even require deep machine learning -- you could simply tell the thing to repeat various offensive statements. It was just a giant miscalculation.

However, the legal department of every company on the planet makes a risk/benefit analysis, especially in fuzzy areas like copyright law (which we've seen with the Java case: an API isn't copyrightable, then it is, then it isn't, then it is). The assumption that because Microsoft did it, it must be without risk is folly.



