Part of my code makes Copilot crash (github.com/orgs)
282 points by Tree1993 on Aug 4, 2022 | 357 comments


So I've just tested it, and I can confirm, yes, copilot refuses to give suggestions related to gender. Now I know a lot of people are calling this absurd, but looking more closely, there are two PR nightmare scenarios.

1. Copilot makes a suggestion that implies gender is binary; a certain community explodes with anger, and an entire news hype cycle starts about how Microsoft is enforcing views on gender with code.

2. Copilot makes a suggestion that implies gender is nonbinary; a certain community explodes with anger, and an entire news hype cycle starts...

You can't win... so why not plead the Fifth?

To all those claiming this is an example of "wokeism", remember that the proper response from an individual who believes in nonbinary gender would be to offer suggestions of that sort. There is no advocacy here. Mum's the word.


Those aren’t the only options. You can just let it suggest what it is going to suggest. Copilot is a product for adults who should be able to comprehend what machine learning is. Anybody who throws a fit about it will only be exposing themselves as a fool.


I might even share your idea about how adults _should_ behave. But that doesn't invalidate fny's musings based on how _adults_ do behave.


I would love to see an origin/Latin etc. breakdown of the word "behave". One of my least favorite words (authority issues much? Yes).



The problem is that if you train an ML model with a bunch of data that happened to be available in the past, then the system will perpetuate the same biases as were inherent in the training data. This leads to real issues like the Google image classifier categorizing an image of a black man as a "gorilla", etc.

Certain words are heavily loaded and are worth just skipping to avoid all the hassle for now.


Btw, the gorilla incident was overblown. Overblown in the sense that people from other races (including whites) were also classified as some hilarious animals.

The gorilla/black pairing just happened to be the most politically charged of the bunch.

(The other potentially politically charged one was some tendency to misclassify people of various levels of body fat as various animals.)

> Certain words are heavily loaded and are worth just skipping to avoid all the hassle for now.

If memory serves right, that was Google's pragmatic solution: if they detected a human in the picture, they 'manually' suppressed the animal classification.

So they lost being able to classify 'Bob and his dog' in return for not accidentally classifying a picture of just Alice as a picture of a seal.
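
For the curious, that kind of post-hoc suppression is simple to sketch. Below is a minimal illustration in Python; the label names and the filtering rule are hypothetical, not Google's actual pipeline:

  # Hypothetical post-hoc filter: if a person is detected in the image,
  # drop all animal labels rather than risk an offensive misclassification.
  ANIMAL_LABELS = {"gorilla", "seal", "chimpanzee", "dog", "cat"}

  def filter_labels(labels):
      """labels: list of (label, confidence) pairs from a classifier."""
      if any(label == "person" for label, _ in labels):
          return [(l, c) for l, c in labels if l not in ANIMAL_LABELS]
      return labels

  # A photo of just Alice can no longer come back as "seal"...
  print(filter_labels([("person", 0.98), ("seal", 0.41)]))
  # ...but "Bob and his dog" loses the perfectly correct "dog" label too.
  print(filter_labels([("person", 0.97), ("dog", 0.95)]))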


[flagged]


No, not at all.

I see it as little more than GPT-3 having a list of words like "cunt", "fuck" and "shit" and realizing that there is little to be gained in including these words right now, so skipping them makes sense until we figure out some more urgent things first.


It’s not censorship; it isn’t muzzling you. Microsoft is choosing not to emit speech on this topic.

It is a deliberate and voluntary omission, not censorship.


Microsoft is censoring itself. Which they are allowed to do.

I am censoring myself, too.


If you insist? I suppose


[flagged]


>That's just reality.

Except it really isn't. If the datasets used truly represented everyone in the world, that would be a reasonable argument. The point is that right now, the most cheaply available and voluminous data sets online tend to have a whole bunch of examples from western nations and far fewer from other parts of the world, for the simple reason that historically most of the people taking photographs and sticking them on webservers were from those places.

"Reality" doesn't have the same statistical anomalies as these data sets (e.g. there are a hell of a lot more people with brown skin in the world than are included in common training data), so "that's just reality" really isn't a strong argument.

This is a very, very common problem in ML and isn't limited to politically-charged words. For example, in some of the earliest attempts at using computer vision to detect tanks in an image for military purposes, the photographs with tanks in them all had different lighting than the ones without tanks, and so the (super-simplistic) ML model overfit based on a bias in the data. Unless the data set is truly representative, you'll often get biases in the resultant model.
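
To make that concrete, here's a toy version of the tank story (synthetic numbers, not the actual study): if every tank photo in the training set happens to be dark, a trivially simple "model" gets perfect training accuracy by learning brightness instead of tanks.

  # Each sample: (mean image brightness in [0, 1], true label).
  # The confound: all tank photos were taken on an overcast (dark) day.
  train = [(0.20, "tank"), (0.25, "tank"), (0.30, "tank"),
           (0.70, "no tank"), (0.80, "no tank"), (0.90, "no tank")]

  # "Training": pick the brightness threshold separating the two classes.
  threshold = (max(b for b, y in train if y == "tank") +
               min(b for b, y in train if y == "no tank")) / 2

  def classify(brightness):
      return "tank" if brightness < threshold else "no tank"

  print(all(classify(b) == y for b, y in train))  # True: 100% on training data
  print(classify(0.85))  # "no tank" -- a brightly lit tank slips right through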

> If you have your own set of politically correct answers hardcoded by a team of blue haired people you're not doing machine learning.

Well, this is just silly. We all know Deepmind has a policy of only allowing green hair dye on campus.


> Copilot is a product for adults [...]

If you didn't mean "should be" (for which I'm not willing to take any position), no, Copilot is not a product for adults [1] [2].

[1] https://docs.github.com/en/site-policy/github-terms/github-t... "A User must be at least 13 years of age."

[2] https://docs.github.com/en/site-policy/github-terms/github-t...


I'm sure the commenter didn't mean adults as legally adults, but as someone who understands what machine learning is and won't throw a fit if the computer says something he disagrees with.

A 13-year-old is perfectly capable of that; I know many 40-year-olds who aren't.


A minimum age for accepting terms of use isn't the same thing as a target demographic.


No, but the minimum age requirement affects the handling of questionable content, which can never be "doing nothing" as the GP suggested.


Fair point.


That implies corporations are run by adults who aren't confusing Twitter with the real world and aren't afraid to tell the screeching activists to leave them alone. Nothing we've seen in the last decade suggests that is even close to being the case.


Not every country where Microsoft does business has the same mores as the Western world.



Didn't know about this. Thank you for making my day. (Disclosure: I used to work at Microsoft)


Most humans are fools. And you'll get a lot of flak if they think you stepped on their toes.


Agreed. The answer is approved by Dave Cheney, who works at GitHub, and if you've ever attended one of his talks it's plain to see he's a very scrupulous person. I also don't think this is an example of Microsoft taking a side; rather I read it as them refusing to bat, which seems fine.

What I would've preferred one of these threads to be about is how all of this works. Like, how do they post-hoc filter certain things? Is that the only way to deal with things defined as issues in ML?


Making Copilot stop in its tracks when it sees the word "gender" and refuse to continue until the word is removed is still making a statement. Refusing to bat would be treating "gender" as a meaningless token, just as if you'd typed "traqre" instead.


No, refusing to generate stuff in an area where the output is likely to be controversial (in either direction) is refusing to bat. It'll wait for a pitch that it thinks it can hit, just like it refuses to play for many other categories-- you'll have a hard time getting Copilot to enumerate races, too.

Ignoring the potential offensiveness and YOLOing through it is swinging the bat wildly at every pitch.


I think you might not fully grasp the scope of the issue here. Right now, if a file you're editing contains one of the restricted words, Copilot will refuse to make any suggestions at all in that file while that word is present -- even if the word isn't relevant to the part of the file you're editing. To keep to the baseball metaphor, Copilot is going on strike at the first whiff of controversy.

What I'm suggesting is that Copilot should keep working when these words are present, but refuse to attach any significance to the specific word. This could probably be implemented by replacing the problematic words with randomly generated strings before processing the text, then swapping those strings back afterwards.
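
A rough sketch of that masking idea (a hypothetical wrapper with a stand-in for the model call; this is not how Copilot actually works):

  import re
  import uuid

  BLOCKED = {"gender"}  # illustrative; the real list is much longer

  def mask(text):
      """Swap blocked words for opaque placeholders; remember the mapping."""
      mapping = {}
      for word in BLOCKED:
          placeholder = "tok_" + uuid.uuid4().hex[:8]
          text, count = re.subn(r"\b%s\b" % re.escape(word), placeholder, text)
          if count:
              mapping[placeholder] = word
      return text, mapping

  def unmask(text, mapping):
      for placeholder, word in mapping.items():
          text = text.replace(placeholder, word)
      return text

  masked, mapping = mask("def save_user(name, gender):")
  suggestion = masked + "\n    ..."  # stand-in for a real completion call
  print(unmask(suggestion, mapping))

(The reply below points out the catch: the model can often reconstruct the masked concept from the surrounding context anyway.)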

(It could be reasonable for Copilot to refuse to make suggestions at all if the output would contain truly offensive language, like unambiguous racial slurs or sexual terms. But "gender" clearly isn't that.)


> Right now, if a file you're editing contains one of the restricted words, Copilot will refuse to make any suggestions at all in that file while that word is present -- even if the word isn't relevant to the part of the file you're editing.

Yah-- it's unfortunate but it's easy. It might be OK to tolerate it if it's clearly outside the range of tokens used in suggestions, but the filtering doesn't use tokenized stuff.

> What I'm suggesting is that Copilot should keep working when these words are present, but refuse to attach any significance to the specific word. This could probably be implemented by replacing the problematic words with randomly generated strings before processing the text, then swapping those strings back afterwards.

The problem is, the trained model is much smarter than the keyword-based filtering. If you just white out the watchwords, it still has a pretty good chance of gleaning context and making a commentary on gender that Microsoft would rather not deal with.

> (It could be reasonable for Copilot to refuse to make suggestions at all if the output would contain truly offensive language, like unambiguous racial slurs or sexual terms. But "gender" clearly isn't that.)

Right now the list covers quite a large variety of things, mostly racial slurs and sexual terms. But letting an AI ramble on after "blacks" is kind of dangerous, as are various gender-related terms that do have innocuous interpretations. It's easy to put words in the filter list, and much harder to apply nuance to topics that even humans struggle with.
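
The "easy" approach described here amounts to a raw substring check on both sides of the exchange -- a guess at the mechanics, consistent with the note above that the filtering doesn't operate on tokens:

  # Illustrative watchword list; per the parent, the real one is mostly
  # racial slurs and sexual terms, plus broader words like "gender".
  WATCHWORDS = ["gender", "blacks"]

  def should_suppress(prompt, suggestion):
      """Kill the completion if a watchword appears in the file or output."""
      text = (prompt + "\n" + suggestion).lower()
      return any(word in text for word in WATCHWORDS)

  # The word anywhere in the file suppresses every suggestion in that file:
  print(should_suppress("# survey fields: age, gender\nage = ", "int(row[0])"))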


Yes, but people haven't quite figured out WHY they should be offended at Microsoft for doing this, so it's quite convenient for Microsoft, at least until people discover their reasons for being mad.


> I also don't think this is an example of Microsoft taking a side; rather I read it as them refusing to bat, which seems fine.

You can't be neutral on a moving train, as they say.


I don't get the whole discussion. There are just many different models of gender. It's like particles vs. waves. In one model, there are only two genders, in another five. There are those who say gender is culture and sex is real, and those who say sex is constructed, too. Some models describe reality better than others; some are useful, some are harmful. But nobody can or should stop you from thinking about reality with the model of your choice.

If I were Microsoft, I would post a shruggie and say Copilot offers arbitrary responses based on the actual code it reads; it is not supposed to be "correct" or good or fair, but just follow what it sees other people do.


>>Nobody can or should stop you from thinking about reality with the model of your choice.

While I agree with you, that is very much the game that is being played here. We have competing world views and one way to help a world view dominate is to play a linguistic war. That was the point of Newspeak in 1984 (https://en.wikipedia.org/wiki/Newspeak). If you control the language such that competing ideas are instantly taboo just by the words required to describe them you can stop people from promulgating those ideas. So you gain ground without ever having to debate the new ideas.

This has happened in many countries when one religion dominated. Western society was starting to get to the point where its taboos were being shed and ideas could win based on their merit. Sadly we're regressing back to a society controlled by dogma rather than an open exchange of ideas. I suspect this is the normal state of human societies; we fluctuate between open and closed societies.


> Western society

The problem seems fairly limited to the USA from where I stand.


I'm in NZ and it's very much here as well.


The whole point of (post-)structuralist philosophy, which informs the left-wing view on this, is that all language is already newspeak. (And since it follows that you can't choose not to play, you may as well play to win.)

> Western society was starting to get to the point where it taboos were being shed and ideas could win based on their merit.

Name a year that things were actually better. (In the 40's, before the civil rights movement? In the 80's, when queer people were still regularly oppressed and excluded from participation in society? Did ideas "win on their merit" when police beat up people in gay bars?)


>The whole point of (post-)structuralist philosophy, which informs the left-wing view on this, is that all language is already newspeak. (And since it follows that you can't choose not to play, you may as well play to win.)

Exactly that. This is the current conflict.

Things were not better in the 40s, action movies were better in the 80s :p.

Things were trending towards a more open society; they had not become perfect by any stretch of the imagination. That trend, IMO, has reversed, due to the tactics involved. That is not to say some groups haven't benefited from this. There is a genuine drive to create a utopia here. However, I fear the cure might be worse than the disease.


> If I were Microsoft, I would post a shruggie and say Copilot offers arbitrary responses based on the actual code it reads; it is not supposed to be "correct" or good or fair, but just follow what it sees other people do.

The last time Microsoft did that, they ended up with their bot posting racist content on Twitter. They of all people understand that just following what people do on the internet is a recipe for disaster.


> Some models describe reality better than others, some are useful, some are harmful.

The idea of science is to get rid of models that are wrong.


It’s really not some complicated multiverse of possibilities. It’s biological, very factual and the underlying genetic basis is as objective as something can get.


There are actually corner cases here, although that’s not what usually comes up: https://en.wikipedia.org/wiki/Intersex

Just a reminder that often reality is more complicated than we think. Names, numbers and upper/lower case are the usual examples.


> There are actually corner cases here, although that’s not what usually comes up: https://en.wikipedia.org/wiki/Intersex

No biologist would claim that sex is constructed.


Choosing door 3 unfortunately leads to ...

A certain community explodes with anger since their machine learning dev-tooling is closed and has arbitrary restrictions.

If you try to please everybody, someone won't like it.


Unfortunately the people that care (like HN people) are less likely to spend time organizing protests and riling up an internet mob


I'd argue it's fortunate. While it does seem like a good idea to get out there and promote your side's view of things, I suspect the best option is to excel in your life and rise to influence.

My hope is that the people here, at least the level headed ones, will rise to positions of influence and not the people rioting at every chance.


I'm going to have to say it is ridiculous, because following this reasoning there are all sorts of problem-causing things that Copilot-generated code is going to have to keep out -

let's not handle ethnicity; if we're going to be sensitive about gender, that is an area which is also sensitive for many people.

Should it take border disputes etc. into consideration? If you're using it in country X, and country X thinks a particular area belongs to it despite most of the world disagreeing, will you not be able to use Copilot to generate code that supports your remote employer's international operations?

It would make better sense if Copilot had warnings it could issue, and when you wanted gender it put up some sort of warning about that - or allowed you to choose binary-gender / multi-gender solutions.

The idea that it should fail, and that it makes sense for it to do so, is essentially a critique of the whole code-generation idea.

On edit: obviously HN should be able to come up with lots of other things that might cause media-related problems if Copilot handled them: code to detect slurs, etc. etc.


The nightmare scenario is caving to either mob. There is no good reason to moderate this.


It’s just following the old advice not to talk about religion.


This is similar to the stupid branch rename saga. It is certainly pointless, but not doing it could be disastrous.


> Copilot makes a suggestion that implies gender is binary

How would that work though? What can Copilot suggest that can imply that?

  if gender:
      do_something()
  elif not gender:
      do_something_else()
  else:
      do_nothing()  # unreachable when gender is a boolean


There is a safe version of gender. Grammatical gender is, for now, binary and as far as I'm aware not offensive to most.

But I agree you can't avoid offending people. The world is nuts; everything is offensive to someone.


Grammatical gender is not as simple/uniform as you state https://en.m.wikipedia.org/wiki/List_of_languages_by_type_of...


Thank you, I stand corrected.


Solution: let the user choose their political stance on such a polarized topic in the Copilot settings so that the user gets suggestions that fit his stance.


The solution is conceptually simple (no idea of practicality): propose an answer related to the context.

And also: give the list of banned words


It's only a PR nightmare because it's a closed service and not an open tool.


Pick 95% of your users, not a hard choice.


They have. 95% don’t give a shit tbh :)


[flagged]


> such a group is a very small minority

They got "master" changed to "main" in Github.


Also got a code of conduct popularized that explicitly seeks to moderate behaviour on unrelated platforms.


You beat me to it. As stupid as it is, you don't want to deal with this small minority.

People on the other side who complain about this being an "intrusive, misguided attempt at preventing discrimination" should take the time to talk to them and say hey, this is not about you.


That only shows that GitHub is willing to bend to that group and nothing else.


It's total nonsense: how can someone be angry at a soulless machine? Is it a real thing to direct anger towards an AI as if it were a real human? It's a serious mental problem then, 'cause the anger is actually directed inward in this case.


The anger is clearly not at the "soulless machine", but at the people and corporation that built, trained, and tend to it. The parent comment did not say "the community explodes with anger [at copilot]", they just said "with anger".

You have made up a total strawman. It is like if someone said "If that person were stabbed with a knife, they would be angry", and you responded "Do people really get angry at emotionless knives? That's a mental problem, their anger is directed inward".


Yeah, you're right, thanks for untangling it. Still, you have made up something too, 'cause I actually wanted to say not "Do people really get angry at emotionless knives?" but "Do people really get angry at knife manufacturers and knives?", taking your example. I mean, you can only be angry with a person who used the knife incorrectly, but knife factories don't dull their knives like Microsoft did with Copilot.


Yep, I noticed this last year when they still stored the list client-side and had great fun reverse engineering it:

https://twitter.com/moyix/status/1433254293352730628


They fixed Copilot returning a verbatim snippet of Quake source code by just blacklisting a word! How can they still pretend Copilot is not just copyright-washing other people's code?

https://twitter.com/moyix/status/1433261377125326851


Interesting, so it might not be the specific token "gender", but rather that blocked words ("man" or "woman") appearing in suggestions will suppress Copilot. And presumably another token like "communist" might do the same...


The list (https://moyix.net/~moyix/copilot_slurs_rot13.txt, rot13 encoded and linked from this tweet https://twitter.com/moyix/status/1433479083376140296) indeed contains the word "gender".


It suppresses output for bad words in both the prompt (your code) and the suggestions.
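
For the curious, the presence of "gender" is easy to verify once you've saved the linked file; the standard library's rot_13 text codec handles the decoding:

  import codecs

  # copilot_slurs_rot13.txt saved from the link above
  with open("copilot_slurs_rot13.txt") as f:
      words = {codecs.decode(line.strip(), "rot_13") for line in f}

  print("gender" in words)  # True, per the comments above

(Fittingly, "traqre" a few comments up is itself ROT13 for "gender".)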


Aren't we missing the forest for the trees here?

We're zeroing in on how silly it is for Copilot to trigger its content filter on the word "gender".

To me the real issue is that copilot has a content filter in the first place. It's unwelcome and unnecessary.


There’s a zealous push by a small but extremely vocal fringe to impose their very particular worldview onto emerging AI/ML models like this.

They refer to it as “eliminating bias”, but it’s really just an attempt to mold these new technologies into conformance with one very specific set of ideological commitments.

Proponents view it as some kind of obvious universal good, and are confused when anyone else is appalled by the blind foolishness of it all.


> They refer to it as “eliminating bias”,

I don't think, e.g. being able to handle black faces correctly is some sort of massive ideological commitment. So let's not pretend that the entire concern of bias in AI is irrelevant, no matter where you stand on gender.

> conformance with one very specific set of ideological commitments

You know-- let's just talk about basic respect and dignity: if someone strongly wants to be referred to in a particular way, the polite response is to respect their wishes. If there's a lot of people in this category, it makes sense for your system to address it.

If you instead build your system in a way that you don't achieve this, you're being rude. If you use old training data and refer to people as a "Mongoloid" as a result-- don't be surprised that people are offended. Ditto, if you use old training data about gender that doesn't match many people's current expectations.


> I don't think, e.g. being able to handle black faces correctly is some sort of massive ideological commitment.

Why did you suggest THIS as an example of what he's talking about? He doesn't indicate that he disagrees with this case.

Furthermore, that sounds like a problem of having incomplete training data. Regardless, manually tweaking a model points to a failure in the process somewhere.


> Why did you suggest THIS as an example of what hes talking about?

He seems to be pooh-poohing the entire idea of "eliminating bias" in AI. So I felt it was important to

* point out that there are clear cases of bias in AI no matter where you stand on gender

* move on to explain a closely related case (using historical speech about race could be offensive)

* use the lesson to show that using historical speech about gender could be problematic as well

> Furthermore,that sounds like a problem of having incomplete training data.

Training a model from historical data can only reflect historical approaches. The social conventions around gender are changing rapidly and are contentious.

> Regardless, manually tweaking a model points to a failure in the process somewhere.

Here, there's no manual tweaking of the model: merely a refusal to return results in an area where the model has proven problematic.


I don't think your example is indicative of what he opposed.

If you can't effectively train something from existing data, then cherry-picking results according to different values isn't going to fix it. Your example has quietly shifted from facial recognition of different races to speech about different races. I can't even be sure of what you're talking about, other than the fact that you will oppose criticism of imparting political bias into models.


> then cherry picking results according to different values isnt going to fix it.

Again: "merely a refusal to return results in an area where the model has proven problematic."

> Your example has quietly shifted from facial recognition of different races to speech about different races.

Again, three points:

* First, no matter how you feel about gender: bias in AI is a problem, as evidenced by issues with recognizing black faces.

* Second, there's some obvious cases where we can all agree that using past training data could result in things that are currently offensive. There are pieces of language we pretty much all agree we should use differently now to avoid offense (e.g. mongoloid).

* Third, I believe that gender is one of these cases. Social mores are evolving. Using conventions from the past when our collective norms are changing on the span of months basically guarantees offense.


> If you can't effectively train something from existing data, then cherry-picking results according to different values isn't going to fix it.

Given the variance in the utility of Copilot's suggestions, this doesn't seem true on its face. Define "effectively" here and I think cherry-picking would definitely fall within its range.


> if someone strongly wants to be referred to in a particular way, the polite response is to respect their wishes

Would you also respect the wishes of a schizophrenic person, if they say much the same thing? If they say that they are actually an alien from outer space, would you play along?


> Would you also respect the wishes of a schizophrenic person, if they say much the same thing?

In general, I would respect someone's wishes. If they want to be Mork from outer space, K.

Of course, there are some very limited cases where we may reasonably believe that playing along is harmful either to ourselves or to the other person. If there's a broad medical consensus that something is harmful to someone, then maybe we shouldn't do it.

A biologically female person who wants to be called "they," because they have decided they don't like the connotations attached to "she" right now, doesn't rise anywhere close to that in my opinion.


For 95+% of people I interact with in a given day, it's none of my business and not at all my job to police whether a person is asking me to call them by their "real" name or whatever it is your questions are trying to get at.


So how and why do you jump to this extreme?


I call complete and utter BS. This is in no way different from disabling a word processor's autocorrect when a writer uses the term gender in their novel.

The programmer should be able to use whatever the hell terms they want to use in their program. If the customer base doesn't like it that's their right. But it's not the right of the damn language parser programmer.


> But it's not the right of the damn language parser programmer.

This isn't a language parser.

This is a tool that suggests implementations of small portions of code.

If the training data is out of date, it's quite reasonable for people employing that model to decide it shouldn't return results based on the out-of-date training data.

Even about completely different things. If the output is C code containing gets(), maybe we should decline to return the result.

> The programmer should be able to use whatever the hell terms they want to use in their program.

Indeed, it leaves it completely up to the programmer by refusing to suggest an implementation that would favor either side of the debate.


>If the training data is out of date, it's quite reasonable for people employing that model to decide it shouldn't return results based on the out-of-date training data.

It's not "out-of-date". That's just the kind of pilpul semantic framing that these activists engage in since "out-of-date" implies "bad". The data is just not in line with their artificially made-up ideology. A demand which one, even as the best "ally" in the world, could never satisfy anyway, since the grievance grifting relies on always coming up with new issues, you just have to look at the shift from "equality" to "equity" or from "microaggressions" to "nanoaggressions"


All social conventions are artificially made-up ideology.

If you insist on calling a black person a "negro", despite its change in connotation over time, you are not being very nice.

If you train an AI, or a person, using old books to call someone a "negro", you're condoning and continuing offensive behavior.

Ditto, here.

> since the grievance grifting relies on always coming up with new issues,

We pretty clearly, culturally, have a whole lot of issues. Becoming more nuanced in how we label them makes sense. And, of course, language changes rapidly.

It especially changes rapidly when we're talking about marginalized groups. Pejoration is a process by which a word associated with marginalized groups becomes offensive over time. "Idiot", "moron", "retard" were all originally clinical and relatively non-offensive words, but society as a whole ended up changing them to include a value judgment. The euphemism treadmill is annoying, but insisting on continuing to call someone something that has developed a negative value judgment is not really good, either.


> autocorrect

> The programmer should be able to use whatever the hell terms they want

> language parser

Are you unsure what copilot is?


I would suggest before freaking out publicly to go read about the tool and what it does.


Detecting black faces correctly is one thing; obviously if a system can't do that it's an issue and it shows that the people making the system were biased.

But something like Copilot or DALL-E? If you ask DALL-E for a doctor and it rarely shows black people (or women), then it is neither racist nor broken. Our society is broken. There are not enough people in that job that are not white and male. Or they are not represented enough. I think there is value in AI that honestly reflects society, because it makes this discrepancy harder to ignore.

People imagined AI would be this benevolent, neutral, wise thing that would maybe be a bit naive but not have our human biases. But it turns out there is no "morally neutral". It will just reflect what you put into it.


> There are not enough people in that job that are not white and male.

Have you looked at the actual demographics of medical doctors in the US? 54% are women, and 35% are nonwhite. But when we have media depictions of doctors, I agree they tend to be white and male.

So, what should DALL-E conform to? Should it conform to A) our actual present society, B) the biased original dataset (which leans both towards the past and towards existing media biases), or C) some idealized version of society?

I got 12 white dudes, one Southeast Asian woman, one Southeast Asian-looking man, and two men whose race I'm not sure of (quite possibly white) when I tried this just now. This is despite OpenAI's efforts to debias it, and isn't representative of current physician demographics.

But if AI just represents and reinforces extant biases -- and worse, AI is used to produce art and text that ends up in other AIs' training sets -- how do we ever get out of this mess? The people who produce, publish, and productize AI do have some degree of editorial responsibility.

> But it turns out there is no "morally neutral".

Of course not. Hume pointed out long ago that you can't transform positive statements into normative ones.

But all of this is a little offtopic, anyways. This is about when it's reasonable to refuse to return a result. "Hey, your answer had the N-word in it, and we know most of the time our model does that it's offensive-- so we're just not going to return a result, sorry." I think this is a reasonable path to take when you know that your model has some behaviors that are socially questionable.


>54% are women, and 35% are nonwhite. But when we have media depictions of doctors, I agree they tend to be white and male

What's the issue? I used to watch a lot of medical dramas on TV and in my opinion the black rockstar MDs are way overrepresented in comparison to their real-life numbers:

'5.0%' in 2018 in the US[1] in real-life vs. '19.4%'[2] on TV

[1]https://www.aamc.org/data-reports/workforce/interactive-data... [2]https://www.bluetoad.com/publication/?i=671309&article_id=37...


> What's the issue? I used to watch a lot of medical dramas on TV and in my opinion the black rockstar MDs are way overrepresented in comparison to their real-life numbers:

Well, this clearly isn't the case in the DALL-E training dataset, because "medical doctor" overwhelmingly yields white dudes-- even after OpenAI's effort at removing bias.


But it’s also about how you get there. If you only expose kids to pictures of white male doctors you’re going to give them a bias which will shape their lives and by extension the society around them.

I think techno libertarian suggestions like these are dangerous because they assume there’s one “canonical” place to fix these issues and all other places can just reflect the status quo, without affecting it (which in my opinion is not possible).

It’s like the old saying “dress for the job you want, not the job you have”.


Devil's advocate: Making depictions more diverse than society helps conceal social problems and encourages people to deny them.

Social problems are messy and full of situations like this where people can reasonably disagree and have decent, good-faith rationales for both sides, and we lack the kind of evidence that allows us to have strong confidence in our guesses about what would help.


> I don't think, e.g. being able to handle black faces correctly is some sort of massive ideological commitment.

But people tend to denounce as racist everybody who does not care so much about such a topic.


[flagged]


> Removing your face-recognition function because it's less effective on black faces is certainly a substantial ideological commitment.

Deploying your face-recognition function for something critical that impacts user quality of life, even if it doesn't recognize black faces well, is an ideological commitment by default.

> That's not generally true.

It's absolutely generally true. We generally call people by the name they request; if they don't like us referring to their race or features in a certain way, we stop.

> We don't give noble or professional titles

Are you suggesting that having a naturally born penis is some kind of equivalent signifier to being a P.Eng?

> focused on the feelings of some very small groups

I think you are underestimating how big of a change is afoot in the 13-25 demographic.


[flagged]


> You're telling me if a short, bald, overweight guy with a beard likes to be referred to as a tall, thin blonde woman with lustrous hair - "we" immediately stop seeing his height, his lack of hair, his girth and his beard? I don't know who that "we" is but that'd require some major training in delusion, Mr. O'Brien.

That's straw-man and snark across the board.

If that short person wants to be called "she" and to stop pointing out she's bald, those are reasonable asks.

I'm cross-eyed when I don't wear my glasses. Do you insist on pointing that out to my face repeatedly even when I ask you not to? Do you insist on gossiping about it to other people in a way that I would consider derogatory if I heard? Do you insist on a shortened version of my name that I don't like? If so, I think you're being a jerk. Ditto if you refuse to comply with requests from other people that I consider reasonable.


> Do you insist on pointing that out to my face repeatedly

If it's not relevant, then of course there's no point of pointing that out. But if we are talking about a task that requires visual acuity, and I point out that you seem to have an issue in that department, then you may feel bad, but that doesn't change the facts. And if the task does require knowing that, then your feeling bad does not overcome that. For example, if you need glasses to drive safely, then your driver license should say that, and if you feel bad about it, then you'd feel bad and the license still should say it.

If you are short, and feel bad about it, I wouldn't point it out each five minutes "repeatedly". There's no reason to do that. But if you come to join a basketball team, claiming you are actually gigantic - I'll tell you "dude, sorry, but you're short", and if you feel bad about it, then you feel bad. It's between you and your therapist.

It so happens that humans are sexually dimorphic species, and thus the notion of this is relevant in human cultures (in different ways for different cultures, of course). Thus if a bearded guy claims he's a woman, I may ignore it when it's irrelevant, but if it becomes relevant, I'll tell him "dude, sorry, but you're a dude".

> Do you insist on a shortened version of my name that I don't like?

Names rarely carry any meaning; they are generally arbitrary, so there's no reason to prefer one to another, unless it's done for nefarious purposes (like identity theft or tax evasion). So I see no reasonable cause to prefer one version of the name to another.

> Ditto if you refuse to comply with requests from other people that I consider reasonable.

Note that I am a jerk if I refuse requests that you consider reasonable. Somehow your personal opinion of what's reasonable becomes the law of the world, and my personal opinion of what's reasonable doesn't even enter the picture. Funny how it works?


> Note that I am a jerk if I refuse requests that you consider reasonable.

Note that's not quite what I said: I said that if you refuse requests that I consider reasonable, that I will think you're a jerk.

> I may ignore it when it's irrelevant, but if it becomes relevant

Like, what, if she asks you on a date? Otherwise, largely, none of your business. And if it's the date thing, I'd encourage you to decline kindly.

> But if we are talking about a task that requires visual acuity, and I point out that you seem to have an issue in that department

Then you'd actually be wrong. It's largely cosmetic.


> But if you come to join a basketball team, claiming you are actually gigantic

Bad example. Basketball skills can be judged independent of height.

If I'm 5 feet tall, very fast, and can regularly score baskets from across the court, then I think the team will want me. Might even nickname me "Giant".

While if I'm 8' tall and can barely move, then they will not want me on the team.

> It so happens that humans are sexually dimorphic species

You omitted "for the most part". Even leaving aside questions of culture (including cultures with more than two genders), there are hermaphrodites.

> if a bearded guy claims he's a woman

Who decided "guy" in your example?

What if a bearded woman claims she's a woman?

> so there's no reason to prefer one to another

If you really believe that, then you don't seem to have much experience with names. Certainly not enough for others to assign much weight to your comment.

As Data said, when correcting Dr. Pulaski, "One is my name. The other is not." https://www.youtube.com/watch?v=WssBJeExiOM .

There's a long history of people changing their names, for any number of reasons, and not wanting to be referred to by their old name.

Or, consider when a Chinese person picks an English name to use, instead of making English speakers use their name.

I know a Beverly who used his middle name, because "Beverly" is usually associated with women, while his parents named him after a male ancestor named Beverly. ("It was at one time a common masculine given name, but is now almost exclusively a feminine name due to the popularity of a 1904 novel, Beverly of Graustark by George Barr McCutcheon" - https://en.wikipedia.org/wiki/Beverly_(name) ).

Names are very cultural and context dependent. You might refer to me as "Stinky" when talking about our time together in school, "Smitty" when in the bowling league, and "Judge Smith" when in my courtroom.


[flagged]


> In terms of how much difference it makes to someone going to dinner with them in a casual social context? Yes, it probably is.

Anyone can date whomever they want based on whatever criteria they want.

But if someone wants to be called "he" despite not being born with a penis-- what's the big deal? Are you being harmed so much by complying?

> > if they don't like us referring to their race or features in a certain way, we stop.

> Maybe to their face.

Well, here we're talking about the code we're writing that will presumably be telling them "to their face" in some context. And, polite company defers to wishes of people even behind their back and doesn't insist on trivializing their deepest insecurities once they're out of earshot, or being able to call someone the n-word when they're not there.

> particularly characteristics that gave someone an advantage

Do you think being deeply uncomfortable with your birth-assigned gender and wanting to be called "he" gives you a big advantage? (Please don't bring up the sports thing, because it's not what we're talking about and that's a completely different kettle of fish).


[flagged]


> Are you harmed so much by saying there are five lights when there are four?

Definitions and language change. I know of people who get deeply offended when "decimated" is used to not strictly mean exactly a 10% reduction, despite language not meaning quite that forever. "He" meaning "chooses to present as male and/or would really like to be referred to in ways that are associated with maleness" rather than "was born with a penis" seems OK.

Insisting on labelling someone based on your historical idea of language and what you think they should be called is not really a great way to choose to do things. I mean, you can insist on calling someone the N-word because that's "just calling a spade a spade", but...

> A mention of gender in code is probably not talking about the author's gender.

It's probably talking about a user's or a customer's gender. e.g. eventually getting in someone's face with it.

> for assuming that the same dynamics don't apply in other areas. Virtually everything that we're able to objectively measure, we find significant sex differences.

I struggle to respond to this one. I mean-- really, really seriously-- "so what?" I mean, it seems that you're upset that stereotypes might not work as well.

Basically every simple characteristic A we measure about humans is correlated with many different complex outcomes B with terrific variation.

Yes, people who present with female characteristics at birth are shorter than me on average, but not all. Most are worse at basketball than the average man, but not all. There are some differences in spatial tasks on average compared to men, but some outperform most men. Most show lower measures of aggression than men, but... Etc.

I mean, do you need to get gender of birth correct- or race, or natural hair color, or nation of upbringing, etc, so you can make judgments based on stereotypes better (which may be real biological correlates)? I think we should always be acting based on the actual individual ahead of us based on the actual measure in question, rather than some proxy.

And even if you really, really want stereotypes to work-- people born male who choose to present as female vary on a whole lot of measures, on average, from the overall male average. And sex hormones change a number of these measures, both immediately and with sustained usage. So, the measure is pretty broken in the first place, because of terrific individual variation and being affected by the gender transition process.


> I know of people who get deeply offended when "decimated" is used to not strictly mean exactly a 10% reduction, despite language not meaning quite that forever.

You can't expect to force someone else to use the words you want. But it would be deeply wrong to force someone who felt that way to say "decimate" when they meant something different.

> Insisting on labelling someone based on your historical idea of language and what you think they should be called is not really a great way to choose to do things.

I'm not insisting on using the same word (though I'd appreciate it if words weren't changed under me by fiat and I wasn't gaslit about what I'd been taught they meant), but I do insist on being able to convey the concept of looking and seeming like a woman/man irrespective of the person's own opinions, and that's what people really object to - neologisms like moid/foid attract just as much criticism as "misgendering". (It's reminiscent of the way airports in China will have an "International plus Taiwan Terminal" and a "Domestic except Taiwan Terminal").

> It's probably talking about a user's or a customer's gender. e.g. eventually getting in someone's face with it.

The user is never going to see the code though.

> I struggle to respond to this one. I mean-- really, really seriously-- "so what?" I mean, it seems that you're upset that stereotypes might not work as well.

> I mean, do you need to get gender of birth correct- or race, or natural hair color, or nation of upbringing, etc, so you can make judgments based on stereotypes better (which may be real biological correlates)? I think we should always be acting based on the actual individual ahead of us based on the actual measure in question, rather than some proxy.

I'm upset at deliberately limiting my ability to draw inferences from the information available. There is simply no way to know 7 billion people in their full human depth, we all make assumptions and take shortcuts, make the best guess we can based on the superficial information we have available - there's simply no other way to live. The idea that we would always be able to directly measure the individual is a pipe dream. As you say, even our best guesses are pretty bad, so why make them worse?

> And even if you really, really want stereotypes to work-- people born male who choose to present as female vary on a whole lot of measures, on average, from the overall male average. And sex hormones change a number of these measures, both immediately and with sustained usage. So, the measure is pretty broken in the first place, because of terrific individual variation and being affected by the gender transition process.

The fact that people object to anyone actually trying proves that we all know that the stereotypes actually work pretty well.


> (though I'd appreciate it if words weren't changed under me by fiat and I wasn't gaslit about what I'd been taught they meant)

Sorry, words change under us.

If you grew up hearing that retarded meant a very specific clinical thing, and then a bunch of people use it as an insult... you shouldn't be surprised, for instance, that those people and their families don't want to be referred by that term anymore or hear it in use.

It's not their fault or your fault, that it became a pejorative. But everyone has to deal with the aftermath anyways.

> but I do insist on being able to convey the concept of looking and seeming like a woman/man irrespective of the person's own opinions

"Likes to be called she, but hasn't transitioned".

Yes, it's getting a bit more complicated. Part of that is that at one point in time, your sex assigned at birth set everything about your life: socially acceptable occupation, expected mannerisms, means of dress, acceptable social partners, allowed interests.

That's become much less over the last 100 years, and the pace of that change has accelerated in the past 5.

Someone born female can choose to be androgynous in a way that doesn't carry a bunch of tomboy female connotations now. That's good for a lot of people who had to struggle to fit into a category before. But it does mean we all have a little more to explain.

> The user is never going to see the code though.

No, but the user is going to see what the code does. Microsoft doesn't want to be in the middle of the debate about people writing code that allows someone to pick "nonbinary" by suggesting one way or another.

> I'm upset at deliberately limiting my ability to draw inferences from the information available.

Look, if your best hint at how good someone is at basketball is that they were an Asian male at birth, it's not a very good hint to draw inferences from. If you need to know that thing, measure it directly, or at least pick a better proxy. If not, leave yourself open to a bit more surprise.

> The fact that people object to anyone actually trying

No, having to continually "prove" your identity to each next skeptical person really sucks, because they're sure you can't be ______ because of _______.

The fact that women objected 50-100 years ago (and really, well, now) to people just habitually considering them "another dumb girl" doesn't somehow validate that girls are actually stupid. It's not like the same bad logic works now on new subjects.

> proves that we all know that the stereotypes actually work pretty well.

Confirmation bias. And even if they did, it can still be terribly unjust.


> "Likes to be called she, but hasn't transitioned".

Then can I just say "man who likes to be called she" (or "moid who likes to be called she", if it's the specific word that's the issue), if that's the best balance between concise and informative for the person in question?

> Look, if your best hint at how good someone is at basketball is that they were an Asian male at birth, it's not a very good hint to draw inferences from. If you need to know that thing, measure it directly, or at least pick a better proxy. If not, leave yourself open to a bit more surprise.

Most of life is making decisions based on limited information. You'll have dinner with, at best, maybe a hundred thousand people out of seven billion. Even to have a casual conversation with someone is to pick them out of the crowd. Even if you were to profile strictly by age, sex, dress, race, ... (which is not remotely what I'm advocating), you'd still get plenty of surprises.


Sometimes your spade is broken enough that it's hard to use as a spade, or so bent from use that it'd be more reasonable to call it a pickaxe

The social meaning of a "man" or "woman" encompasses a large swath of things. Generally people's mental models of those concepts include both physical and mental ideas. The issue is, this is not very constant. Some people will inevitably fail to possess some characteristics (like lacking large muscles, or having small breasts). We typically allow these exceptions without questioning them too much. Genitalia, of course, is the exception. In my experience, it is typically regarded as the fundamental cornerstone of gender. The question is, is that really accurate?

We assign labels to make sense of the world. So, we tend to stick things in the categories where they fit the most (important) attributes. After all, we don't measure the genetics before calling something a dog; we typically see it bark, wag its tail, play fetch, and decide it's probably a dog, even if we haven't seen it before. If someone looks like a girl, acts like a girl, has had genital reconstruction surgery, has a fashion sense, wears makeup, and generally is just another regular person on the street, then for any social interaction they are just like any girl. There is one situation where this isn't true: if they are sexually involved with someone who has the intent of having children. For that purpose, they, like any infertile woman, can't provide children.

I can sympathise with a hatred for people trying to force falsehoods on me - I personally have been the annoying atheist friend quite a lot. However, to take it uncritically that birth genitalia are the cornerstone of gender doesn't seem very useful, since then the labels "man" or "woman" tell you nothing about how someone might act - you may meet a woman, who is perfectly ordinary in every way, who then reveals that she is trans. If birth genitalia are to be the cornerstone of gender, you now have to include someone who looks and acts as any woman would in your definition of "man". To me this seems like it makes the term meaningless, and it makes far more sense to just classify that person as a "woman". Genitalia itself isn't really important to social relationships outside of sex, so for situations outside of sex classifying by it also seems somewhat useless.

The other, much more (imo) defensible argument you expressed was relating to socialization. In my experience it is true that socially, boys are conditioned to be more aggressive while girls are often punished for being aggressive. This definitely leads to a difference in how far they are willing to push to fulfill their own needs. However, assertiveness, to an extent, is a useful trait for anyone to have. Imo it shouldn't be seen as a "masculine" trait, but rather a trait men tend to have, since being able to clearly communicate needs and boundaries, and push back when those needs and boundaries get ignored, is an essential skill for anyone, but one that women might be less practiced in due to often receiving higher backlash for doing so.

Of course, there are non trans women who possess that capability. However, in your experience dealing with trans women, has that made your experience in interacting with them closer to an angry man, or closer to any other aggressive woman? It probably varies case by case. From what I've heard the looks of a woman come with the social pressures of a woman, so I'd wager that an aggressive trans woman, on average, wouldn't differ much from an aggressive non trans woman, simply because, so long as they are perceived as women, their anger is taken extremely seriously on several issues (sexual crimes and harassment) and somewhat less seriously on everything else. My personal theory is that this is due to the assumed threat of physical violence with men, while women are often seen as being incapable of/unlikely to commit violence, leading to two lopsided power dynamics rather than one even one. Regardless, I think that basing our definition of gender off of genitalia or genetics doesn't provide a super useful mental model for socialization, which I believe to be the primary good use of gender as a concept.

Sports are one area in which I think you're wrong. If growing up on testosterone were a major contributor to advantages, we'd expect to see a lot more trans athletes at the top of the women's league, the way you anecdotally noted we do with business. They have been allowed to compete for quite some time now, so the fact that only recently have the media made a circus about a trans athlete in the Olympics suggests to me that they likely do not possess any significant advantage - in my personal experience, I saw someone who could previously do 20 pushups when completely unfit struggle to do 1 a year after starting hormones, so I find it kind of hard to believe they retain any significant advantage, given enough time.

We can say that we are humans, or evolved monkeys, or extremely processed stardust; all can be correct, but laws only apply to humans, and no one gets out of jail by claiming to be an inanimate object.


> If someone looks like a girl, acts like a girl, has had genital recontruction surgery, has a fashion sense, wears makeup, and generally is just another regular person on the street, then for any social interaction they are just like any girl.

Sure. I'm not advocating for genital fundamentalism. All I'm asking for is to be able to call someone who looks like a girl and acts like a girl a girl, and vice versa.


> All I'm asking for is to be able to call someone who looks like a girl and acts like a girl a girl, and vice versa.

And if they're deeply uncomfortable with this, and would prefer you remember to use "they", you're not harmed and should defer.

You may have to take a little extra effort to tell someone "They present as female but prefer 'they'" at some point. It's a little bit of cognitive load, but the kind of courtesy we can extend even to vague acquaintances.

And, you know-- if you screw up in good faith, they should be understanding. Of course, there's enough people screwing it up on purpose to be hostile for political and ideological reasons that it's harder for people in this situation to discern whether it's really good faith.


> And if they're deeply uncomfortable with this, and would prefer you remember to use "they", you're not harmed and should defer.

And what if I'm just as deeply uncomfortable with saying what they want me to say? Are we now a dictatorship of the most fragile?


Sorry, if someone doesn't like being called "she", and you really want to call that person "she" -- that's on you. You have other choices, after all.


My other choice is to lie, to willfully mislead whoever I'm talking to (probably a friend). Again, people might not like being called rich or privileged or out of touch, but we would never dream of saying this means you shouldn't call them that.


No-- most people understand how pronouns work now (and the rest are rapidly coming up to speed). It's only people who are deliberately insistent on misunderstanding that are misled.

Just like the people misled by 'decimate'.


No, absent other context everyone will understand female to mean having typical female characteristics, almost by definition. Calling someone who is unlike a female in most ways "she" is deceptive even if you consider it "correct", in the same way as making a tomato dish and calling it a fruit salad.


> No, absent other context everyone will understand female to mean having typical female characteristics, almost by definition.

As we've talked about a whole bunch in this thread, language can-- and does-- evolve.

The pronoun thing isn't fixed within extant human cultures. Our language can evolve as mores do.

And, yes, people are occasionally confused by various kinds of new usage, but overall people keep up.


There's no "keeping up" with an unnatural category. It's like the "international except Taiwan" example I gave earlier - the language is forced and artificial and always will be, because it doesn't reflect the underlying reality that these people would be in the other group if you put the boundary in the natural place.


> It's like the "international except Taiwan" example I gave earlier - the language is forced and artificial and always will be, because it doesn't reflect the underlying reality that these people would be in the other group if you put the boundary in the natural place.

If this were true, then we wouldn't have all the examples of languages and cultures that don't put it in that "natural place".


> If this were true, then we wouldn't have all the examples of languages and cultures that don't put it in that "natural place".

What languages and cultures would those be? (The example usually given is "two spirit", but (as has been more widely reported recently) that was largely a fabrication)


Lots of languages have complete gender neutrality in pronouns. They often include arbitrarily or self-assigned signifiers or honorifics, which isn't too far from the direction we seem to be evolving towards. E.g. Kurdish, the Turkic languages, Tagalog (although Spanish influences have caused some appearance of -a and -o suffixes), Armenian, Estonian, etc.

Some languages assign gender to everything grammatically.

English is a rare case of a language with very little grammatical gender outside of personal pronouns. The only other language that I know of with this characteristic is Persian. Singular "they" dates back to Middle English.


I never had a reason to call people ugly behind their backs. Calling Romani "Romani" instead of a word they don't like isn't dishonest, strange, or rude. I don't know anything about the birth genitals of most people I dine with.


> I don't know anything about the birth genitals of most people I dine with.

Unless you mostly participate in statistically unusual social circles, that's plain incorrect.

The truth is that you can't know anything about their birth genitals with certainty. However, unless you have some unusual condition that prevents you from perceiving sex-associated characteristics, then if somebody looks male/female to you, you know with very high confidence what their birth genitals were. One can quibble about the exact level of confidence, and it depends on what social circles you self-select into (because that affects your base rates), but for most people the certainty is fairly close to 100%.

Of course that may not be the lived experience of people who are trans themselves, because of who they associate with, and that may contribute to some of the unnecessary vitriol on the topic.

In any case, what matters is what you do with that knowledge.


A frat buddy of mine was a trans man. I would have never known if old fart assholes hadn't pitched a fit about him being elected our chapter president.


Regarding calling Romani Romani, this can be a touchy subject, just as a heads-up. For years, being asked where I'm from or being referred to as Russian, even without me present (e.g., I overheard or inferred people discussed this), made me uncomfortable. It's only fading recently because the war, and the overt hatred directed at me online because of my nationality, have made me more confrontational about it.

I observe this happening with some people of other nationalities too, depending on individual personality. (It doesn't tend to happen with people from the 'global north', of course, who tend to lack any confidence issues on this front.)


I'm sorry that you've had to face prejudice. But hopefully a whole lot of discussion about your origin is more about idle curiosity than judgment. (And if you don't really want to talk about it, that's understandable and fine, too).

There's nothing wrong with being Russian. (Supporting Russia's position in the war, whether you're Russian or not, though... someone can have a problem with that).

We're all enriched when we share the best parts of our backgrounds and upbringings with each other.


Thanks for the warm words. I can't imagine sharing the Russian government's position on this war (whatever they call it), and I'm ashamed I hadn't caught on and started working towards another citizenship earlier, as it's really been happening since 2014.

Honestly, the most awkward IRL nationality-related interactions for me now are when non-Russians express support for whatever the Russian government does. It feels like they'd expect me to be pleased, but I have to reject sympathy on those grounds.

(That said, to reiterate my on-topic point, the discomfort I used to feel when nationality was discussed went on for years before this all started/before I was aware that this was going on, and was not tied to specific government's actions.)


> Maybe to their face. We certainly don't stop describing those people honestly in general

It's routine politeness to comply with wishes on how someone wants to be called without them present, no? Would like to hear if this is culture-specific.

I generally don't care what variation of my name I'm called, but I can imagine myself wanting to be called by a specific name other than what's in my ID and I'd expect you to comply, and if you don't then I can totally see me considering it an infraction even if you don't do it to my face. I mean, there'd be a reason I asked for it in the first place, right?

Of course, non-compliance shouldn't make you a target for crowd-sourced justice (to a degree it'd be my personal issue if it's a touchy subject for me), but unless you are mildly sociopathic you should understand that ignoring this request paints you as mentally weak (if unable to remember) or domination-seeking/insulting (if done on purpose), and that it could be a reason for me to avoid interacting with you.

I don't think it's any more optional with gender (probably less since, unlike with names, there're barely any pronunciation barriers with pronouns across languages). I've met people for whom it was a touchy subject and I could empathise.


> They refer to it as “eliminating bias”, but it’s really just an attempt to mold these new technologies into conformance with one very specific set of ideological commitments.

It is quite literally creating bias.


>> They refer to it as “eliminating bias”, but it’s really just an attempt to mold these new technologies into conformance with one very specific set of ideological commitments.

> It is quite literally creating bias.

None of the people who do it care. One of the deceptive tactics that's pretty common in contemporary political discourse is to corrupt definitions in order to enforce controversial ideology using anodyne language.

>> Proponents view it as some kind of obvious universal good, and are confused when anyone else is appalled by the blind foolishness of it all.

IMHO, that "confusion" is an act.


I believe the idea is that society is prejudiced/biased, so training an AI using data from that society would perpetuate that bias. So there needs to be some manual correction.


> I believe the idea is that society is prejudiced/biased, so training an AI using data from that society would perpetuate that bias. So there needs to be some manual correction.

These activists are free to write their own code that conforms to their ideology.


So what if Microsoft is such an activist? Judging from their tool, which conforms to this ideology, they are.


> So what if Microsoft is such an activist?

Then Microsoft should very clearly state this so that customers who don't like this ideology know that they are not desired, and can get away from Microsoft products as far as possible for them.


Here you go!

https://www.microsoft.com/design/inclusive/

Also check out their AI design related doc, which specifically mentions how they try to avoid association bias, for example, those related to gender:

https://www.microsoft.com/design/assets/inclusive/InclusiveD...


How is not suggesting stuff related to gender creating bias?


What a horrible time for ML/AI to explode. This is completely unlike the time of Usenet and the internet wild west. It's a whole new world of technology, castrated from the start.


So, I have a security camera that triggers when it sees a person or an animal and notifies you of what it thinks it sees. It always detects my tall, big Black son as an "animal". It doesn't do that for anyone else who comes up to the door.


I don't think it's silly. Whatever Copilot says, is said by Microsoft too, by extension. And so, it makes sense for Microsoft to not make themselves liable for whatever people make their product spit out. Especially after happenings like this:

"Microsoft's AI Twitter bot goes dark after racist, sexist tweets"

https://www.reuters.com/article/us-microsoft-twitter-bot-idU...


> Whatever Copilot says, is said by Microsoft too, by extension.

Whatever text I write in Word, is written by Microsoft too, by extension?

*not*


Of course not. But if Word would underline a word of yours and offer an offensive correction, that'd be similar.


What if you misspelled an offensive word?

Maybe clippy could popup and say "it looks like you are writing hate speech" and offer some suggestions.


It could, but Word hasn't wanted you to write offensively for quite some time now. I remember being a teenager just poking at Word 97, and it would promptly tell me "you shouldn't write like that" or something similar.


I think they have no choice. If you don't do that, the vandals will destroy any AI that learns from the public.

For example https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot...


I find this filter to be a fine concept. It can prevent automated vulgarity generation if used correctly. However, that filter should be manageable by the user, not hashed and encoded in some weird scheme. Just put down a file called "bad words.txt" and let the user pick their preferred amount of AI suppression.
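
To sketch what I mean - in Python, with the file name, list format, and suppression behaviour all being my own assumptions rather than anything Copilot actually does:

  # Hypothetical user-editable filter: one blocked word per line in
  # "bad words.txt"; completions are suppressed when the prompt
  # contains any of them as a substring.
  def load_blocklist(path="bad words.txt"):
      try:
          with open(path) as f:
              return {line.strip().lower() for line in f if line.strip()}
      except FileNotFoundError:
          return set()  # no file means nothing gets suppressed

  def should_suppress(prompt, blocklist):
      # Substring match, so "gender" also catches "get_gender".
      text = prompt.lower()
      return any(word in text for word in blocklist)

  if should_suppress("def get_gender(user):", load_blocklist()):
      print("No completion is available.")  # skip this prompt only

The point being: the user could open the file and decide for themselves, instead of having to reverse engineer hashed word lists.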


You can know a town by the thickness of the fence around the backyard.

If you have to deal with those kind of people, you're willing to sound silly just to protect yourself.


Bingo


Besides the absurdity of the code crashing because of the word "gender", my problem and curiosity with all of this is...

"What was going on in the head of the person writing the parser?"

I mean, were they thinking that if someone is writing code, let's say, for a gender dropdown and it was only ["male", "female"], it would try to suggest to us to add 26 more genders instead (and worse, suggest a list of genders to add)?

Would the intention be to correct us and popup a message saying "We suggest you add more genders so as not to displease the users of your product"??

What was going on in that person's head who is trying to do all of this? What was their thought process? What were they trying to accomplish around gender?

Was it the programmer, or some product manager that insisted on some kind of "copilot adjustment" for this because of a personal political viewpoint or just for GitHub being more woke?

That's the most troubling aspect to this.

I hope to Jesus Christ it was just a mistake.


Regardless of what Copilot suggested for "gender", it would've offended someone, and I think that's what Microsoft wants to avoid. Not even woke so much as it is trying to avoid potential controversies.


It could just not suggest anything, but continue to suggest as usual for other parts of the file. I think that would offend the least number of people.

The issue isn't that it isn't producing a suggestion, but that it stops producing a suggestion altogether for the rest of the file.

I don't use copilot anymore because it just results in poor quality code and additional cognitive overhead (because you need to read and discard the shitty suggestions) as you type. It both slows you down and exhausts you. So you can really think of this as a feature. You'll write much better code as soon as copilot shuts down. It should do this more often.


You would think they would just avoid adjusting this, right?


If they don't "adjust" this, each user gets a random-ish result from all the code out there-- depending on mostly-unrelated context, it suggests a lot of genders or just 2.

In turn, you can guarantee both groups of people end up upset.


What’s there to be upset by? Both groups know the other exists, and both groups know that the other group has written code that copilot trained on.


Both groups will angrily complain that Copilot is suggesting the wrong thing.

Right now both groups are trying to silence the other-- with school libraries, etc, in the crossfire being angrily denounced.


> Both groups will angrily complain that Copilot is suggesting the wrong thing.

This is not normal, right? I mean, outside US.


> I mean, were they thinking that if someone is writing code, let's say, for a gender dropdown and it was only ["male", "female"], it would try to suggest to us to add 26 more genders instead (and worse, suggest a list of genders to add)?

> Would the intention be to correct us and popup a message saying "We suggest you add more genders so as not to displease the users of your product"??

You can just as easily assume that they don't want a dropdown with 26 additional genders to just pop up automatically. That would upset a lot of people, many of whom are in this thread. I think whoever wrote the code doesn't want to jump into a political shitstorm.


The ____ church did interfere in all matters of life, big and small, none too trivial not to be guided by an enormous ritual rule book, always threatening disciplinary action by the believing masses and social ostracism.

Hurting the feelings of the true believers was the ultimate sin - a sin often committed, but only punished if the sinner did not recant and change his ways in a brutally public and official way. It was there that the ____ church revealed what it was really about all along: societal control, maybe with good intentions to start with, but in the end just control for its own sake, and to prevent others from achieving the same control.

Not saying that any social movement could turn into a religion. That would need strange clothing, processions, rituals, codified language and, most of all, a mythology.

I have no religious preference; I'm on the side of science and would like to have a civil society where no member is violated by another. I would very much prefer it if the combatant religions involved could leave science alone. Reality is often disappointing.

May the religion with the least suffering caused win, and then keep away from the state & power.


I'm heavily religious and I agree with you entirely. I see many parallels between life inside the strict religious community I live in and what is happening at large in society.

I think the goal of any sufficiently large society should be that any religion or ideology can rise and many people can become a part of it, yet the religion or ideology is unable to persecute those who don't agree with it.

I also have no idea how you achieve that. It's my utopia and like most people's vision of a utopia is probably not possible in reality.


Perhaps it was not "do I think this is reasonable," but "is acting in good faith enough to keep me out of trouble."


Maybe the one saving grace in all this, is that the AI singularity will never happen thanks to wokeness.


Chances are it was the opposite.


I encountered this some time ago because I was working with grammatical gender. Unlike many of these comments, though, I do not take exception to it. Bias in ML is well established, and it's okay if, when we don't have solutions, we just disable it.

If your autocomplete was capable of spitting out suggestions that made you feel isolated or kept poking you in the eye about aspects of your identity, you might feel a bit better about the creators having thought about that and taken steps to avoid it happening.


> Copilot crash because the word “gender”

A metaphor for our times.


I worked on a video game in the late 2000s, and one of the bits of code I did was the code for filling the seats in the stadium with people. One of the artists cobbled together like 5 low poly man models and 5 low poly woman models, and you could just about tell the difference, and I put some code in there to ensure the genders were evenly distributed. (The 2 genders, I mean. Man, and woman.)

Looking back, I don't even know why I made it an enum, rather than a 1-bit bitfield called is_woman - but in the end I was glad I didn't, because the art director moaned a bit about the clothing colour distribution, and somebody asked if we could have some mascots, and there were some complaints about the unreasonable number of interesting hats. And, so, long story short, by the time we were done, we had 18 genders based on clothing colour and type of hat, 2 genders for mascot (naturally: tall, and squat), and a table to control the relative distributions.
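
For a feel of the shape it ended up, here's a rough sketch - in Python rather than the original engine code, and with all the variant names and weights invented for illustration:

  import random
  from enum import Enum

  # Illustrative stand-in: the real thing had 18 crowd variants by
  # clothing colour and hat, plus the 2 mascot variants.
  class Gender(Enum):  # renaming it to "Type" never stuck
      MAN_RED_SHIRT = 1
      WOMAN_RED_SHIRT = 2
      MAN_INTERESTING_HAT = 3
      MASCOT_TALL = 19
      MASCOT_SQUAT = 20

  # The relative-distribution table the art director could tweak.
  WEIGHTS = {
      Gender.MAN_RED_SHIRT: 40,
      Gender.WOMAN_RED_SHIRT: 40,
      Gender.MAN_INTERESTING_HAT: 15,
      Gender.MASCOT_TALL: 3,
      Gender.MASCOT_SQUAT: 2,
  }

  def pick_seat_filler():
      variants = list(WEIGHTS)
      return random.choices(variants, weights=[WEIGHTS[v] for v in variants])[0]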

Once we got to 5 genders I tried to change the enum name to Type - but we had this data-driven reflection system that integrated with various parts of the art pipeline, and once your enum had a name, that was pretty much that. You were stuck with it.

Is that a metaphor for our times too? I don't know. My own view is that sometimes stuff just happens, and you can't read too much into it.


Only 18? Child's play. https://www.discovermagazine.com/planet-earth/why-this-fungu...

Interestingly, I don't know of any zoological cases that would require more than a short int to enumerate.


I would love to think msft blocks gender because your code somehow made it into the training data and somebody was confused seeing “squat” as a gender.


Somehow I’m reminded of the Fallout 3 NPC walking underground wearing a train-shaped hat.


>> Copilot crash because the word “gender”

> A metaphor for our times.

Social media amplifies an innocuous, extremely low-stakes occurrence into a heated discussion because it happened to misstate the facts (nothing is crashing here) and focus on a hot-button keyword ("gender" is only one of many blocked words)?


So large language models are great overall but occasionally produce undesirable results. Hand-coded scripts are added to remove the undesirable outcomes, but those still cause other problems - crashes, but less often.

More and more things are going to be filtered through large language model apps, and the possibilities for cascading failures will be even more interesting than what exists presently.


The large language models already know too much.

I was able to get GPT-3 to spit out reasonably accurate biographies for a couple of composers I know.

GPT-3 could go even further — one of my composer friends has a reasonably rare first name, and when given the prompt "There once was a man named $first_name", GPT-3 responded with a number of limericks tailored to his particular set of skills.


  There once was a man named $first_name,
  Who never accepted the blame.
    He went on a bender,
    And talked about gender
  [INFO] [default] [2022-07-10T07:59:07.641Z] [fetchCompletions] engine https://copilot-proxy.githubusercontent.com/v1/engines/copilot-codex


That simply restates what people are taking issue with.


I encountered this when writing some scripts for Latin-language text processing (which dealt with grammatical gender). Thankfully the Latin-native term 'genus' passed the Copilot smell-test and I could continue with my work. I found it pretty amusing.


As a result of another word on the Naughty List, you may run into similar issues while writing multithreaded code.

(The word in question is "race" -- as seen in the phrase "race condition".)


Yup, for me it was Greek and Hebrew.


What was that bot that MSFT stood up on Twitter that trolls and memers fed to turn alt-right? I know they eventually took it down and that it stirred up a lot of controversy.

I would not be surprised if someone found some Copilot output stemming from "gender" and reported to MSFT/GitHub for them to simply short circuit or "break" after finding certain keywords.


Yeah they probably found something like: assert gender in ["male", "female"]. If this is enough to trigger a backlash then maybe we deserve whatever fate has in store for us.


But "we" and "the backlashers" are not one group.



Tay AI


Yesterday's timely announcement about an open source competitor to copilot that doesn't suffer from this absurdity: https://news.ycombinator.com/item?id=32327711


Content filters on ML feel so silly. I assume the goal is to avoid bad press? Because the... "attack" would be someone generating offensive material, which they could just write themselves, not to mention I have serious doubts that any filter is going to be a serious barrier.

For images/video I can see merit, ex: using that nudity inference project on images of children, but text seems particularly pointless.


The point is that sometimes even a perfectly reasonable inference from an ML model would be considered a big mistake, due to societal considerations that are unknown to the model.

For example, a couple years ago, there was a big hubbub over a Google Image labeler that labeled a black man and woman as "gorillas". A mistake for sure, but the headlines about the algorithm being "racist" were wrong. The algorithm was certainly incorrect, and it could probably have been argued that one reason it was wrong is that its training set contained fewer black people than white people, but the algorithm was certainly unaware of the historical context around this being a racist description.

Similarly, in the early days of Google driving directions I remember one commenter saying something along the lines of "You can tell that no black engineers work at Google" because it pronounced "Malcolm X Boulevard" as "Malcolm 10 Boulevard". Of course, the vast majority of time you see a lone "X" in a street address it is pronounced "ten".

It's kind of analogous to the "uncanny valley" problem in graphics. When the algorithm gets things mostly right, people think of it as "human-like", and so when it makes a mistake, people attribute human logic to it (it's quite safe to assume that a human labeling a picture of black people as gorillas is racist), as opposed to the plain statistical inferences ML models make.


I think I agree with this to a certain extent. Sometimes AI gets attacked in unfair ways, but also, while AI is merely making inferences based on its training data, the fact that its training data is racist matters. It matters because it has real impacts, even if small. Just like the decision by film manufacturers to optimize for accurate colors for white skin - the people who probably bought most of their film, and whom business considerations probably said they should optimize for.


The actual racist thing is humans who don't consider, prepare for, or include affected people when deciding to deploy models trained to produce racist outcomes. It doesn't matter that the machine has no opinions; it matters that the machine produces outcomes reflecting harmful biases. Banning the word doesn't change that, but neither does treating the biased process as unbiased.


The model's output isn't racist. Racism has intent; the algo doesn't.

It can be wrong or right but it is not making a judgment based on anything outside of math.

You are correct to say the training wasn't complete but that doesn't mean anyone did anything wrong, racist, or hateful... 99% of the time it's simply a mistake.

When you label things like that as racist instead of simply mistakes you water that word down to the point where it becomes meaningless.

The problem in the last 10+ years of outrage internet social justice is that, in order to gain attention and get traction, those involved have lumped so many things into terms like racism that it eventually becomes so stretched it's meaningless.


> Racism has intent

This is a failure to understand centuries of history. It’s an understandable one, it’s one I used to relate more to and I probably still relate to it far more than I should.

The notion of racism requiring malice is so far from reality that similar defenses were dismissed almost a century ago in international tribunals which still shape the world.

It takes no malice to participate in racism. It only takes accepting it as given. This doesn’t have anything to do with anything that’s originated from the internet, from any perspective. It doesn’t make racism meaningless. Treating it that way does though.

“The” problem is that racism, as a societal background factor, is treated as the sea in which we swim, it’s “neutral” without an actor present to promote it. If it just “is”, no one is “at fault” and… the kicker, if your definition requires intent and there isn’t any intent for the specifics under question… it’s not just a mistake, it has defenses like these to shield and bolster it.

You can rail against “social justice” all you like, and I’m betting my response will show your railing resonates more here than it should. But your position is ahistorical and probably based in defensiveness about something you don’t need to defend.


I didn't say racism requires malice - I said it requires intent; most of the time that intent is malice, but not always. The current pop culture version of racism often isn't racism; it's prejudice or stereotyping, or most often simply ignorance.

Two people can make the exact same remark, and one can be racist and one can be based on innocent ignorance/curiosity. A young white child spending time with a black person for the first time who says "your hair is weird" is vastly different from that same person saying it in high school while bullying the black kid in class. The former isn't racist and the latter is.

I don't rail against social justice, progress is good and I think everyone of every creed / sexuality / gender / etc should be free to express themselves and live their best lives without being judged for who they were born or identify as.

What I do rail against, though, is the use of manipulative language to bully and harass people, because a social credit / status / clout game of always trying to find demons to expose has become the norm. I personally believe that people who do this (often the "social justice warriors", so to speak) are at the root of most of the radicalization of BOTH sides of the political spectrum in the western world right now.


Idk, I struggle with this. I agree that watering down words is a problem - for example, see people saying speech can be violence, or even that inaction can be violence, or some such - but I think humans are tempted to ascribe the past to evil. If you think racism has to be intentional, then it's an easy jump to say that racists must be aware of their racism, and before you know it you believe that evil looks like Voldemort and not some guy administering a study about syphilis. I think the truth is people in the past were much more explicitly racist, but also used a lot of the same excuses you'd see today - things like the economics dictating that film should be optimized for light skin, or worrying about property values or something. By and large people don't think of themselves as racist, so they don't do things with racist intent; they just happen to be racist, and that influences the things they do. Plus I'm not convinced that anything has free will, making the whole question of intent less useful anyway.

But, I also definitely do think there is something worse about someone who hates black people and uses a racial slur to describe them compared to a model trained on humanity doing the same, but certainly both are huge problems, and it can't slip my mind that the racist person was also just trained on humanity's racism


I guess they're trying to avoid the Twitter AI bot incident.

https://www.theverge.com/2016/3/24/11297050/tay-microsoft-ch...


It certainly is that.

File under not "Why we can't have nice things", but "downstream effects of why we can't have nice things".


Imagine that you had a co-worker who seemed totally normal 90% of the time... But about once a week, someone would bring up a topic that made them go full nazi or attempt to seduce their coworker. That's where we are with LLM-based generative text. It's not (just) about PR, it's about putting guardrails around the many many many circumstances the tech can do harm or just seem ignorant.


Imagine having a coworker like that.. But he's fully remote, and basically generated in real time by AI (appearance on video, voice etc). Maybe that's where we're going? :) then humans would be hired to occasionally pop in and pass some heavier scrutiny.


> Imagine that you had a co-worker who seemed totally normal 90% of the time... But about once a week, someone would bring up a topic that made them go full nazi or attempt to seduce their coworker.

This is my mental image of how company happy-hour-Fridays play out. It's one of the reasons I don't drink.

[And if you're curious, in fact I'm not fun at parties ;) ]


The only reasonable content filters on these sort of models would be something that could have legal repercussions.

This is absolutely silly. Solid work GitHub team!


What is Github worried about? That Copilot might suggest some code that checks for a "gender" variable being only one of two values? Utterly absurd that we've now reached this point. I already had plenty of reasons to boycott Copilot, now I have another one.


> What is Github worried about? That Copilot might suggest some code that checks for a "gender" variable being only one of two values?

Perhaps Github is worried about a backlash if it suggests code that allows for more than 2 values.


The backlash they ought to be worried about is the one from their customers when it refuses to operate due to an ongoing battle between opposing groups of extremists.


That seems a pretty simple one to manage—a disclaimer stating "Copilot will not generate code referencing certain topics" seems both sufficient and uncontroversial.


Like this line from the FAQ?

>GitHub Copilot includes filters to block offensive language in the prompts and to avoid synthesizing suggestions in sensitive contexts.

I think calling gender a sensitive context is not unreasonable.


>I think calling gender a sensitive context is not unreasonable.

It is very unreasonable, but it's also the truth. sigh


Yes, but medical stuff is a sensitive context too. And financial, as well. Plus ethnicity. And age. As well as anything could be indicative of the aforementioned topics, such as vehicle makes & models, ecommerce products, tea vs coffee preference, accounting, and so on.

Oh, wouldn't you know it... Turns out that almost all code doing something important might be able to be interpreted as sensitive.


Oh god, the thought of Copilot contributed code ending up in medical applications is terrifying…


Yep, that’s perfect.


Or the backlash if it suggests code that only allows for two values.


That's what Thorentis was suggesting in the first place. Judging by the threads here, I'd wager the backlash would be much stronger if it suggested more than 2.


Can we get a source for that? Because at the moment, it's just a comment made by a person on the internet with nothing backing it up...


I added "gender" (an IANA registered JWT claim) to my JWT payload schema and found Copilot will not provide any suggestions after that. Not on the same line, nor in the rest of the file. After removing the word gender entirely, it works again.

https://www.iana.org/assignments/jwt/jwt.xhtml


-


I'm using typebox to validate my JWT payloads in an app I'm working on. Someone showed me this thread while I was working, so I gave it a shot:

  export const CLAIM_PAYLOAD_SCHEMA = Type.Object({
    "iss": Type.Literal("my-app"),
    "exp": Type.Integer(),
    "sub": Type.String(),
    "name": Type.String(),
    "priv": Type.Integer({minimum:0, maximum: Privileges.All}),
    "gender": Type. // No completion is available.
Additionally, I get "No completion is available." from copilot.el on every line after that one, but completing on lines before it does work. When removing "gender", it works again, e.g. suggesting `"iat": Type.Integer()` for that line. I don't actually plan on using "gender" in my tokens, but it is a bit frustrating that an arbitrary word can opaquely disable Copilot for the rest of the file.


They're giving you repeatable steps to perform, yourself, to see if the issue is encountered. That is more than passing a smell test... that passes, at a minimum, as a valid bug report.


Which part of the smell test does this not pass exactly?

They just described almost identical behaviour but with an isolated test case. Yeah there’s no video or whatever but it does support the original diagnosis.


It's reproducible; if you don't believe it, try it out yourself. Report back with your findings, logs, videos or whatever.


"Sorry that doesn't pass the smell test"

Then you do it and report back when he's wrong.


Go check it yourself if you want proof that badly


bro they literally posted steps to reproduce that would be fine on basically any issue tracker


So, I tested this locally and for the first time, immediately after using a variable named “gender”, it stopped suggesting.

I wonder if this is to prevent it from accidentally processing PII or PHI data. Maybe someone else who didn’t get their account on some kind of cooldown can try it with “birthdate” or “DOB” or “SSN”. I highly doubt this has anything to do with gender being a controversial or blocked term for political reasons or something.


A twitter post above says that Copilot also blacklisted words "communism" and "socialism".


I just tried Copilot with VS Code and python for the first time. If I define a function with some parameter name, I get suggestions as I type the body. I change the parameter name to gender, no suggestions. I change one letter in the parameter name (gendes, gander), I get suggestions again. There clearly is some code that gets activated when it sees the word "gender".
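
Concretely, the test looked something like this (the function itself is just whatever I happened to type, nothing special):

  # Copilot offers suggestions while I type the body here:
  def describe(animal):
      return f"The {animal} says hello"

  # Rename the parameter to "gender" and suggestions stop, both for
  # this body and for everything after it in the file:
  def describe_gendered(gender):
      return f"The {gender} says hello"

  # One letter off ("gendes", "gander") and suggestions come back.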


Someone on Twitter reverse engineered an earlier version of the list https://twitter.com/moyix/status/1433254293352730628 and the list linked somewhere in that thread contains "gender" (see https://news.ycombinator.com/context?id=32339001 for direct links).


The code's right there. Anyone want to try it out?


It’s interesting how unsubstantiated allegations are getting so much attention, especially on a site with such high quality discussion.


It looks like several commenters have been able to reproduce the problem, so in that sense I guess it's substantiated?

If people were trying to reproduce it and failing, I agree with you that would be a different story.


The allegation is the root cause, not the problem itself.


It's in a list of censored words that were leaked a while ago. It's not really a mystery.

I guess if it's not explicitly censored then it's just a bug that Microsoft can fix.


"censored"


Much more likely: the upvotes are because many people have frustration at this type of thing, and limited places to channel their frustration, so when they see a post like this, they upvote it to express their frustration. Or maybe that’s just me..


It's also an interesting tidbit for people that have not used copilot. It's like a fluff piece.

Present something weird that is also easily verifiable. If you are having a bit of a break and are using copilot you can try out a few things and post answers.

And now we have independent verification (unless you think all these usernames are just lying) and some interesting bits of info about copilot.


What type of thing? Asking for genuine curiosity.


Verboten words that are completely commonplace and mundanely acceptable outside of a certain small but powerful bubble


I’m not aware of any such bubble which forbids the use of the word “gender”. Which bubble are you referring to?

Disclaimer: I’m a queer non-binary leftist, and many of my community and loved ones are at least one of those. The closest thing to any “verboten” I’m aware of is “gender critical”, and as far as I can tell that’s mostly a term used by detractors in my own community, and even so it doesn’t reject usage of the term only usage of it distinguished from sex assigned at birth. The next most “verboten” I can think of is commonly referenced “wrong things programmers assume about ____” which generally offer no guidance other than not asking if you don’t need to know or offering open text input if you do, and in any case don’t represent a small powerful bubble of anything other than being a memorable link.


One possible explanation: maybe whenever you typed "gender" it suggested one of the two possibilities next to it, or some kind of binary type, which might be something GitHub wants to avoid - but I am just speculating.


>Which bubble are you referring to?

Copilot, pretty clear from the context.


Clear as mud, given the context.


You can google the "controversy" around the Spanish word for black, while trying to imagine you are reading about it from a culture/country not as influenced by US culture as other countries with good English proficiency (or even completely unaware of it), having been taught from your childhood to treat everyone as an individual and actually truly believing it. Then, while reading about it, you might sort of get a feel for "that type of thing".

For some time I collected sources on things like the linked GitHub issue, but I had to stop as it made me unhappy. Now I try to ignore it and hope that I am no longer there when that type of thing hits my city.


This deserves a chance at getting a sincere answer, so I’ve upvoted it. I almost asked a similar question on another thread, but backed out. There seems to be a lot of inference going on in other comments about what any commenter finds frustrating or otherwise confounding, but I think it would be good for the discussion to eliminate some of that inference.


This particular issue was hit by my friend previously on copilot.


[flagged]


I see this kind of questioning pop up when someone states that something bothers them and someone else deeply disagrees. Is there a name for this kind of questioning of someone's foundational feelings about something, as if to allude in every question that it's purely wrong to feel that way?


Gaslighting.


Why?

Are you unable to believe someone might think differently than you do, without it being explained away as "artificial voting"?

Edit to add: Nice, they edited their comment. Previously it accused HN of having an artificial voting conspiracy. So that is what my comment was about. I will not edit my original comment above.


> It’s just interesting how unsubstantiated allegations are getting so much attention

Why make it about my opinions?


Who are you quoting? I didn't say that.


They made an unsubstantiated allegation about an unsubstantiated allegation. The allegation was upvoted, the allegation about the allegation was downvoted. Surely you understand how silly that is?


Just making a note here, the answer was approved by Dave Cheney. If you're unfamiliar with him: https://dave.cheney.net/

He works for GitHub.


You’re right, edited.


I belong to a local Atlanta Slack channel - tech404 - that for the longest had an official bot that would always respond with the waving hand emoji (HN doesn’t support emojis) if you ever said the word “guys”. Even in private channels.


The funniest one of these was the python IRC channel, which had (has?) a policy of not allowing the word "lol".

I'm pretty sure a bot would swoop in and say something like "NO LOL", which ironically only encouraged more LOL.


Are there some specific Unicode ranges that HN filters out? I recall being able to use other alphabets and various special symbols with no issue.


This is in the FAQ:

Does GitHub Copilot produce offensive outputs?

GitHub Copilot includes filters to block offensive language in the prompts and to avoid synthesizing suggestions in sensitive contexts. We continue to work on improving the filter system to more intelligently detect and remove offensive outputs. However, due to the novel space of code safety, GitHub Copilot may sometimes produce undesired output. If you see offensive outputs, please report them directly to copilot-safety@github.com so that we can improve our safeguards. GitHub takes this challenge very seriously and we are committed to addressing it.


This thread needs a call to Rule 14: do not feed trolls.

The bug's apparent trigger word is close to a hot-button poli-sci issue. Can we please focus on the technology?


> The bug's apparent trigger word is close to a hot-button poli-sci issue. Can we please focus on the technology?

I totally agree that this story has a high risk of flamewars.

But it definitely has heavy Technology component, too.


Not sure what you mean. The tech is caving to politics. People don't like it.


That's silly. So can I put "gender" as the first line in my code to stop copilot from ingesting it altogether?

Are there any other break-words? Master, slave, Carlin's seven words, etc?


An earlier version of the list that someone found (see https://news.ycombinator.com/context?id=32339001 for links) does contain "gender", "slavery" and "master race", but not "master" and "slave" themselves, ironically.


Ironically, ignoring the actual usages of "master race" only cements its negative meaning. 95% of its modern usage is to claim PC elitism. It could be neutered if we let it.


> So can I put "gender" as the first line in my code to stop copilot from ingesting it altogether

This means one solution for those worried about Copilot laundering code around licenses is to put a statement like "for more details check the man page" at the end of each docstring.
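
Something like this, assuming that phrase really does trip the filter (I haven't verified which word in it would be the trigger), with a made-up function:

  def transfer(amount):
      """Move funds between accounts.

      For more details check the man page.
      """
      raise NotImplementedError  # Copilot would allegedly stay silent from here on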


#!gender


Commenters making bad-faith arguments in this discussion are the reason we can’t have nice things.


Kind of like making vague blanket statements with no examples.


Such as?


I hope to god that one day we will all see this nonsense for what it is: absurdly hilarious.


It's gonna come soon enough. The backlash is already mounting.

I'm just honestly super exhausted by all of the insanity right now, and not only regarding this topic. It's just complete black-and-white thinking these days, no matter what it's about. Extremes only. The stronger your opinion the better - how else would you feel like you exist? Almost no one with a rational, centered, overarching perspective. Twenty years ago, 50% of the current population would've been considered as possibly having BPD.


> The backlash is already mounting.

Is it? To me it feels like it's getting worse and worse, but that might be my bubble.


This mind virus is even capturing science. Here's a job opening for a Research Chair in Experimental Physics.

"Candidates must be from one or more of the following equity-seeking groups to apply: women, persons with disabilities, Indigenous peoples, and racialized groups"

https://www.universityaffairs.ca/search-job/?job_id=58317


[flagged]


Apart from all the things at the top that have no sources and that honestly sound like propaganda:

> Back before all of this, LGBTQ people were happy to just be seen as normal and like everyone else

Since when, sorry? When were we *ever* seen as normal people? Slurs flying around. People getting beat up. Suicide rates for trans youth are through the roof. Constant hate and bullying, offline and online - and all of this is just in the countries that have it good. In some places, being yourself is a literal death sentence.

Just the fact that you say that LGBTQ+ people were "happy" to be seen as "normal" shows how out of touch you are with the things queer people experience on a daily basis. And not only that... what the hell does "normal" mean? What, you expect us to conform just enough that the straights don't see us as an eyesore?

I shouldn't be angry about a random comment on the internet, but I am. And you know why? Because right now, at this point in time, there are many people in my country and others who say the exact same bullshit that you're saying, and they weaponize it to:

- deny my right to marry my partner
- deny my right to adopt kids
- deny my friend's right to start HRT, so that she can stop feeling gender dysphoria every single day
- stop laws that would protect me from being berated in public for my sexuality
- push homophobic and transphobic rhetoric onto their children, so that LGBTQ+ teens go through hell

I'd wager that "marriage", "family", "not feeling like shit", "being respected" are pretty basic rights... aren't they?


> Since when, sorry? When were we ever seen as normal people.

I said LGBTQ people were happy to be seen as normal people (as their goal/ultimate achievement), and nothing more than that - which is somewhat different from what's happening now, where it seems like special status and attention is demanded.

> with the things queer people experience on a daily basis.

I'm not denying these things happen.

> - deny my right to marry my partner
> - deny my right to adopt kids
> - deny my friend's right to start HRT, so that she can stop feeling gender dysphoria every single day
> - stop laws that would protect me from being berated in public for my sexuality
> - push homophobic and transphobic rhetoric onto their children, so that LGBTQ+ teens go through hell

Well, shit, gays have had these worries for a long time, but on a bigger timeframe everything is shifting more and more towards acceptance and integration. Just look at how much the LGBTQ movement has achieved in the last few years - absolutely unthinkable 20 years ago. Getting HRT is easier than ever before, and marriage equality was fully achieved with the Supreme Court decision in Obergefell v. Hodges in 2015.

Sure, you have concerns because of the recent abortion issues - I understand, and so would I if I lived in the US - but for now these are just concerns, and nothing else.

> I'd wager that "marriage", "family", "not feeling like shit", "being respected" are pretty basic rights... aren't they?

They are, and aside from feeling like shit, which unfortunately is largely based on your own psychological issues and makeup, all of this is readily available and achievable.

You know what I'm pissed about? Pushing it further and further and hurting your own cause. I can only reiterate again: I am not transphobic or anything else, I am myself bisexual, and I have no tolerance for actual hate rhetoric and propaganda in these directions. But raising valid concerns isn't any of that. I have literally been attacked for stating that I am bisexual and don't have neopronouns. The (radicalized) LGBTQ movement is discriminating against races and its own peers, spreading hate, and using violent methods to force its own political beliefs. And when you start calling people who simply raise concerns transphobic...


You lament people becoming extreme in their views above and then go full moral panic in this post.

Maybe try to remember previous gay panics (or pick an alternate target group) that society has gone through in the past, remember what that ended up amounting to, and maybe reevaluate your media sources if you've suddenly been convinced groomers make up a large portion of the population.


The groomers thing is so scary... And clearly done with intent at scarily high levels of reach.

I feel increasingly unsafe and I live in a pretty liberal, decently big city.


He seems pretty middle-of-the-road on this to me. Isn't transferring male inmates to female prisons absurd to you?


[flagged]


> On the other hand I'm not sure where you see moral panic here, I'm really just expressing knowledge and my opinion. Not very emotionally so.

Think of the children, etc etc. I feel like we could flip a few key words here and have this be straight from the 90s.


[flagged]


Small children do not take hormones or have gender-related surgery. Adolescents under medical oversight can. These treatments have very low regret rates and reduce suicides.


>These things never happen, and when they do, it's a good thing.


Adolescents are not small children. And do you think fewer adolescents killing themselves is bad?


Ehm... Sorry to break it to you, but people who think they were born the wrong sex are increasingly treated with hormones before the onset of puberty (with puberty-delaying treatments it's kind of the point). Not arguing whether it's good or bad, but by definition they are children at that point in time.


Puberty blockers are hormone agonists. Not hormones.


Ah, well that sure does make it better.


Hormones in this context means sex hormones. Delaying puberty is more conservative than inducing puberty. It isn't better to you?


You can take them at the onset of puberty, at which point you are an adolescent by definition, which is what the GP said.


I'm not saying what you can do or not, I just see evidence (a quick google away) that people are given meds to delay puberty, meaning before its start (= onset), meaning they are not yet adolescents by definition.

Edit to everyone pointing out a technicality: yes, puberty blockers != hormone replacement therapy. However, puberty blockers ⊆ hormone replacement therapy.


> Edit to everyone pointing out a technicality: yes, puberty blockers != hormone replacement therapy. However, puberty blockers ⊆ hormone replacement therapy.

Not commonly. And the difference between delaying puberty and inducing puberty is more than a technicality.


"Delaying" Not really the right word to use IMO, as the body wont go through proper puberty after stopping the blockers.


No. Puberty blockers have been prescribed for early-onset puberty for over 30 years. Puberty basically proceeds the same unless they take cross-sex hormones.


That's a very different usage than the one I am talking about. I am talking about using it on transgender individuals to block normal puberty until after it would've been over normally.


It's rare for someone taking puberty blockers to be undecided still when puberty would be over.

How old is too old? What would be not proper? Where did you get this information?


> Where did you get this information?

About people taking puberty blockers to block normal puberty as part of hormone replacement therapy? The information is readily available, are you joking?

The other uses of these meds, ones that you are suggesting, is not what this discussion is about at all. You are being willfully ignorant of the context.


About how old is too old. And what would be not proper. You willfully ignored the context. And cross sex hormones block natural puberty on their own.

This discussion is not about trans people? It's rare for someone taking puberty blockers for gender dysphoria to be undecided still when puberty would be over. They decide earlier. Deciding to transition means blocking natural puberty forever. Deciding not to transition means no reason to keep taking puberty blockers.


Would starting hormonal replacement therapy (by way of puberty blockers) on its own already influence one's decision on what their preferred sex should be?

That is, assuming our consciousness is influenced by chemistry in our bodies.


Puberty blockers don't change gender identity in early onset puberty. Even removing sex organs doesn't change gender identity. David Reimer was castrated very young and raised as a girl. It didn't work. Similar treatments have been forced on intersex children.

Some people say so many gender dysphoria patients taking puberty blockers choosing to transition proves puberty blockers make them want to transition. Maybe it proves diagnostic criteria, professional advice, social factors, or other things keep people who wouldn't transition off puberty blockers.


> Some people say so many gender dysphoria patients taking puberty blockers choosing to transition proves puberty blockers make them want to transition

These people do not address that:

- People with gender dysphoria are disproportionately likely (though somewhat less than those who do take puberty blockers) to choose to transition without having taken puberty blockers, too, and

- People with gender dysphoria who take puberty blockers have a reduced incidence of suicidality.

The best explanation is that:

(1) Gender dysphoria, not puberty blockers, predicts future likelihood of choosing transition, and

(2) Within the group with dysphoria, future likelihood of choosing transition is also correlated with risk of suicidality without puberty blockers, such that withholding puberty blockers reduces the future transition rate by disproportionately killing those most likely to transition later.


Fair enough. I still don't rule out the rise in xenoestrogens and such as playing a role, but I'm aware of reduced suicidality and that's obviously a good thing.

> People with gender dysphoria are disproportionately likely [to transition]

At what age does that typically happen?


Puberty blockers != Hormone replacement therapy


[flagged]


> They do now.

Giving puberty blockers before puberty doesn't do anything. At this point, please cite a source for this epidemic of six year olds taking GnRH.

> No, that's why I made sure to specifically write "teenagers"

No, you specifically wrote

> significant amounts of small children taking hormones and having gender reassignment surgery

Also changed:

> That's the case until now, since only a few years ago there were 4000% less instances

Weird, I thought that was

> 4000% increase in teenagers claiming to be gay

not transgender?

> The data is lagging, obviously, but I'm somewhat certain that statistic is going to be skewed soon.

Uh huh.

> just go on Youtube and watch the doctors speaking out now

I think we're narrowing in on the problem.


Adolescents can take hormones. Near adolescents can take puberty blockers. Not small children.

You wrote teenagers later.

> That's the case until now, since only a few years ago there were 4000% less instances.

Did you mean the claimed 4000% increase in a survey of American college students between 2006 and 2021? The category changed from "transgendered" to "transgender or gender non-conforming".[1][2] Gender non-conforming and trans are different. Trans, and willing to say trans in 2006, are different. Students at self-selected colleges don't represent the population.

Did you mean the claimed 4000% increase in under 18s referred to the NHS gender service between 2009 and 2017? The articles saying it was 4000% or more said the numbers were 97 and 2,510. 2,510 isn't 4000% of 97. The official adjusted numbers were 77 and 2,444.[3] 2,444 was between 0.30% and 0.37% of their birth years.[4] Near the low end of estimates for trans people. What is the right number? How do you know?

> If only 10% of the 4000% end up regretting their decision or god forbid even committing suicide the non-regrets will be a minority.

10% regret is 90% non regret.

> just go on Youtube and watch the doctors speaking out now. Or any other source. There is no real oversight any more, and they cannot voice their medical opinion freely. Suggesting that it would be indicated to wait and assess other psychological factors first apparently directly leads to them being terrorized and publicly shamed as transphobic.

Did you check those claims more carefully than 4000%?

> I have a hard time seeing how any of this helps anyone. It doesn't even help transgender folks.

Any of what? Transitioning younger?[5]

[1] https://www.acha.org/documents/ncha/ACHA-NCHA_Reference_Grou...

[2] https://www.acha.org/documents/ncha/NCHA-III_FALL_2021_REFER...

[3] https://tavistockandportman.nhs.uk/about-us/news/stories/ref...

[4] https://www.statista.com/statistics/281981/live-births-in-th...

[5] https://www.independent.co.uk/life-style/health-and-families...


Since when have LGBTQ people been seen as normal and like everyone else?

We've only had marriage equality for less than a decade, and SCOTUS could take that away; in many states it's still illegal, some of them constitutionally, I believe.

Not even going to try and touch on what most in here are up in arms about (gender, trans rights, race).

What children at age 6 are getting hormone blockers/therapy?

Just a quick search shows that standard treatment is not until puberty.

For a 'smart' forum that just seems like some basic common sense thrown out the window...

Why block or alter puberty that won't happen for years?

And even if some kids are getting their treatment at a younger age, why are you making that your business?

Why are people angry about that?!

And worse, why are states passing laws to investigate doctors & parents for 'child abuse'? Do people really think some doctor or mad parent is just giving their kid hormones because they wanted a boy, not a girl?

Trans kids have some of the highest (it might be the highest) suicide rates of any group. Gender-affirming and supportive care is widely established best practice. But so is abortion..

Why do people here feel like they should have the final say over the health of a child and a family's medical decisions?

The answer seems obvious to me... And it's a sad one.

Plus who is forcing you to change 'core language concepts'?

Is it really a bridge too far to ask that we all be kind to one another?

Blatantly rejecting someone's identity to their face is minimally rude, sometimes malicious, and far too often violent at worst.

Why are SO many people on this thread so upset about this?

A commenter below makes a very good point about the violent consequences of these escalations.

Labeling people 'groomers' is not even trying to hide its intent. Sex predators are the one class of people most are ok with extrajudicial violence against. That's not an accident.

This is so frustrating, I just can't wrap my head around it.

The last few comments I've made on HN have just been expressions of disappointment with what should be an intelligent community.

So many comments here are just divorced from reality and rejecting basic facts.

I've mostly stopped engaging on HN because these threads come up every other time I visit and it's just so maddening & frustrating to read.

I'll just end by saying I'm pretty young. Things were feeling optimistic.

But not anymore. I'm terrified for my future. I am thinking through self defense options & scenarios because there is a decent (and growing) possibility of violence against me and my community.

I'm lucky to live in a state which is very unlikely to pass laws revoking my rights. For now.. Won't stop SCOTUS though.

And most are not as privileged as I am.


This whole comment section is pretty frustrating. I don't even live in America and I'm scared. An enormous wave of hate is starting to show recently - even in the comparatively few places where queer people are relatively ok.

I still don't get to marry in my country - only civil unions, and even that was hard fought. No adoption, no real protection from hate crime. Elections are happening soon, and the projection of who's likely to win is... bleak, to say the least.

And I'm the lucky one: I'm a cis gay man, and out of the whole LGBTQ+ community, I'm in one of the least bad spots to be in. My trans brothers and sisters have it way, way worse - especially when surgery and/or hormones are not nearly as easy to come by as these people on the internet would like to think.

My boyfriend and I are careful about any public display of affection, out of fear. I have trans and non-binary friends who have to either hide who they are, or fear stepping outside their homes - or even their own bedrooms. And one day - if lawmakers would be so kind as to let me - I would love to raise one or more children. To be a father. But as it is, even if I could... this is not the world I would want to raise a child in.

It's all fucked up.


The adoption thing is so heartbreaking.

I'm sorry ;(

It's often the ones restricting adoptions that also want more of them.

Why stop someone from giving love? Hate is just too strong.

In the US adoption is heavily tied up in religious institutions. States will pay / contract it out to church affiliated orgs.

And it's plain as day that the R-appointed judges & justices up and down the bench want to put 'religious freedom' over common courtesy, dignity, and human rights.

Another one pissing me off:

The guy who helped write the Texas abortion bounty bill is suing to get rid of rules that insurance has to cover birth control and PrEP. Plenty of scholars think they will win.

Insane that their extreme religious beliefs get to override our health and safety.

Even if we can get an angry political movement going, we can't quickly fix lifetime appointments on courts from the local level all the way up to SCOTUS.


> This whole comment section is pretty frustrating.

I agree. The amount of misinformation and/or active disinformation in this comment section is horrifying and disheartening.

> My trans brothers and sisters have it way, way worse - especially when surgery and/or hormones are not nearly as easy to come by as these people on the internet would like to think.

It really is wild to me how many people think, like, school nurses are handing out hormone medication like candy or something, or that you can just get it OTC at the corner drug store. My kid's school nurse won't even give her Tylenol without calling me; do these people really think they're secretly sending kids home with hormone medication?

It is not an easy or fast process. Often, in the case of gender-questioning youth, puberty blockers are used to buy time for them while they assess their feelings and the process plays out, because they are entirely reversible - just stop taking them. The whole initial transition process can often take a year or more from first appointment to hormone therapy starting, and there are lots of gatekeepers along the way to safeguard you.

In most places if you are under 18 you have to be under the care of an LGBT-affirming physician (which can be very hard to find, even in cities, and often have months of wait time), have had at least some therapy under your belt, and an official gender dysphoria diagnosis to get hormone treatment started.

Surgical approaches are very rarely considered for young people. Like I am sure there have been some, I can't fully rule it out, but I run in LGBT+ circles a lot and have for decades, and I personally have not met a single person who had any kind of gender-affirming surgery done before 18. Most have them done in their 20s or 30s if they can afford it. The wait times can stretch into years, and the surgeries themselves are often considered cosmetic procedures and are often not covered the same as a medically-required surgery. Even among adult trans people, surgeries are often considered a far-off goal that may never be reached, and quite a few will find that they don't want or need it once other transitioning processes have happened and they no longer feel crippling levels of dysphoria.

And all of that assumes you have a supportive parent or guardian and, as one stroll through various transgender hangouts on the web will tell you, that is not as common as one would hope. All it takes is one parent to say "no" and you're not doing anything until you're an adult. Once you are 18 things are a bit more open and you have more options to speed the process up, but it is hard being a transgender youth.

There are a lot of misconceptions in these comments about what being transgender is, what the process for getting HRT is, and all of the various steps needed to reach each person's individual goals. And all of this is easily researchable using Google in 10 minutes, it just takes an open mind and a willingness to see others' viewpoints. It's extremely frustrating.


The disinformation has a purpose sadly. And it's clearly working.

I'm not sure how to fix it though.

More doctors speaking out? Seeing some parents & their young trans kids feels like it helps.

The fact that the word 'humanizing' is used just shows how gross this is; that these kids and queer people are somehow not human already.

Just as a thought-experiment-type rabbit hole, I've spent some time thinking about the net benefit of our increasing representation and integration into society at large.

It feels that on the whole the #s look more accepting. Gay marriage has majority support (so does abortion..)

But seeing trans people and queer people on TV has triggered an insane backlash.

See it on HN threads all the time.

"netflix is all propaganda."

When the representation numbers are just barely catching up to the real share in the general population (which is probably still understated, given closeting).

Our existence isn't propaganda.

It's not indoctrination to live our truth.

But a lot of people feel threatened and backed into a corner which is dangerous.

And some politicians pick up on this and whip it up into a very dangerous frenzy of disinformation and hate.


getting worse from both sides


I feel like it's only going to get worse with social media.

Twitter is so idiotically designed that it just makes things worse and worse.

With Twitter, they don't distinguish between positive engagement (retweets, positive replies) and negative engagement (critical quote tweets, critical replies)... their algorithm just stupidly sees the engagement and amplifies the tweet.
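Purely as a hypothetical sketch (not Twitter's actual ranking code), a sign-blind score like this amplifies a pile-on exactly as much as praise:

  # Hypothetical sign-blind ranking: a critical quote tweet boosts a tweet
  # just as much as a supportive retweet, because only volume is counted.
  def engagement_score(likes, retweets, replies, quote_tweets):
      return likes + retweets + replies + quote_tweets

  print(engagement_score(10, 5, 200, 80))  # a ratio'd tweet still ranks "well"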

It's no wonder that a lot of the most extremist politicians (on the right and left) built their followings on Twitter.

I don't mean to make it political, but the last president of the United States was able to build a massive following almost exclusively through one social media platform... (which he later got banned from)

For some reason, YouTube's decided to jump on the same bandwagon by removing public dislike counts (if dislikes give no visible feedback, people will stop clicking them).


To be honest, I think most people aren't at the extreme, but the most extreme voices are amplified so loudly that it seems like there are many more of them than there actually are. Unfortunately, corporations and politicians are placating these amplified voices instead of the majority of reasonable people - much to their own detriment and the detriment of society at large.


It's pretty common to discriminate based on race and sex. Public institutions do it as a matter of policy in the name of social justice. I'm not sure how it could be more mainstream.


But the funny thing is that most people agree moderating gender is wrong, without knowing what the actual result would be. It's actually refreshing, and it shows how small the minority in favor of this moderation is.


I dunno, living through it, it feels more absurd than hilarious.


I don't really find it that funny. I don't think the correct response to everyone being upset by this (from many different angles) is to stand back from it and laugh at it.

Some people feel that wokeness is ruining the world. I can't really speak to that position because my political initialization was on the other side of the cultural gulf in America.

The way I have come to understand transgender issues is very much shaped by the political left, but also by a religious upbringing (Catholic, Jesuit). On the left, I am told that this is a human rights issue. I am inclined to believe that transgender people have a hard time in life. I am also inclined to believe that it is not a mental disorder, and I came to these conclusions through conversations with transgender people I have worked with in the past, as well as through what I learned in my psychology classes in high school and college.

I am a white male who was born that way, but I definitely know what it feels like to be ridiculed, to not belong and to feel that there is no right place for me in this world. I have been abused, made to feel small, ostracized and bullied. Those experiences have given me a pretty deep understanding of what suffering is, and how it can be caused. It has also softened me and made me pretty empathetic to others who feel they don't belong in this world.

As an example, I was once at a comedy show where a comedian made a transgender-adjacent joke. The humor of the joke was all in a stupid pun, and I thought it was pretty funny because I like stupid puns. But there was a transgender woman in the audience who got immediately angry. I don't remember exactly what she said, but it was something along the lines of "That's not funny, I'm sick of people like you shouting at me in the street!!". If I had to go through my life having people shouting at me in the streets of NYC because of how I looked vs how other people thought I should look, I might have responded in the same way. I thought the joke was funny, but for her it touched on some deeply painful memories of abuse, dragged them to the surface, and activated a lightning-quick temper. Perhaps if I'd been abused for as long as she, and in the same way, I wouldn't have thought the joke was funny either.

I understand people don't like being corrected, or told that they're wrong or that they're hateful. I don't think that is a productive way to bring about change; and yet, I have found myself picking fights with my parents, and getting generally nasty when they have failed to understand some value I have learned that I did not learn from them. That is obviously a bad thing, because the message they come away with is "what a jerk!" or "those damn lefties!". What I'd rather have people come away with after they hear me speak is something quite different. It was only after raging at my parents enough times that I decided I just wouldn't talk to my parents about politics. There is more right about my parents than there is wrong about them; they are getting older and their bodies will decline until they die. Most likely it will happen to them before it happens to me, at a time when I am able in body and mind, so I intend (even though I sometimes fail) to spend the rest of our time together as peacefully as possible.

I offer this earnestly in good faith. Sometimes the message gets muddied in the delivery, or because I get upset when I perceive (or sometimes, misperceive) that someone is being uncaring for those who are already suffering enough. I think I react that way because of my own history of abuse.

I am also open to hearing the other side of this story. I have attempted not to misrepresent $OTHER_SIDE's view of things. I am only speaking to why I have such strong feelings about this issue. I am sure others have equally strong feelings on another side, and I am open to hearing what that sounds like, provided the viewpoint is offered with respect.


Therefore ban gender from GH copilot?

We can be empathetic without placating some really tyrannical trends.


I offer no defense of Github's choice. What do you think was the reasoning for their decision?

I can understand the concern about powerful organizations imposing a viewpoint surreptitiously via a widely-used piece of software. That is definitely a reasonable fear, and if we allow concentrated power (MSFT here) to behave in that way, then we are in for trouble.

I'd argue that we wouldn't end up with tyranny, but rather feudalism. There are other powerful organizations that can push their own viewpoints, surreptitiously and overtly. It then becomes a game of who has the most resources and control over the flow of information. But while I prefer "feudalism" to "tyranny", I don't disagree that if propaganda was the aim here, it would be a bad thing.

I don't agree that GitHub's aim was to impose a viewpoint. I believe the aim was to avoid putting this tool in the middle of a very politically charged issue. For example, how do we know GitHub didn't reason like this: "We don't want 8 genders popping up in a <select>, because that will offend the 50% of the country who only believe there are two". We are seeing some evidence of this (and the other 50%) in this thread.

Finally, maybe they are trying to prevent people from spamming their learning model with politically-charged content. If that were the case, you could argue that they are just trying to prevent their programming tool from becoming a political warzone, with competing sides trying to train their viewpoints into the model. I admit I know very little about ML in general and Copilot in particular, so you'll have to bear with me if that sounds naive. In any case, social media is an example of a tool that has become a political battleground, even though that wasn't the initial purpose. If preventing Copilot's politicization is GitHub's aim (and I have no evidence that it is), then I'd say that's a reasonable thing to want for a product if you don't want it to become unpleasant-to-use before long.

So we have three hypotheses:

1. MSFT believes there are more than two genders, and wants to impose that viewpoint.

2. MSFT believes there are only two genders, and wants to impose that viewpoint.

3. MSFT wants to avoid having politically-charged content in their code-generation tool.

How can we point to one of these being more correct than the others?

I don't have any evidence to support any of the three. But 1 and 2 each make three assumptions (MSFT has one viewpoint on issue X; this viewpoint is Y; they want to impose it). Hypothesis 3 makes two assumptions (Gender is politically charged; let's avoid that in our product). That's all I can think of this late at night. I could be missing something.


[flagged]


There is a whole Wikipedia article for it!:

https://en.wikipedia.org/wiki/Postgenderism


I wish “the discourse” would learn to acknowledge the difference between post-genderist ideas, and basic rights for transgender people. The latter is (1) already the law of the land, and (2) not politically controversial: https://www.kff.org/other/press-release/poll-large-majoritie... (“Large majorities of Americans think it should be illegal for either employers or health care providers to discriminate against people because they are lesbian, gay or bisexual, or transgender, a new KFF poll finds. This includes large majorities of Republicans, independents and Democrats.”) See also: https://www.hrc.org/press-releases/icymi-bidens-bostock-eo-r... (83% of the public, including 63% of Republicans support Biden’s executive order protecting trans rights).

At this stage, attempting to use trans rights as a vehicle for broader changes to society’s notions of gender harms rather than helps trans people. It takes something that has broad support and is increasingly uncontroversial, and lashes it to a broader intellectual project that’s alienating and unpopular.


[flagged]


Bathrooms and prisons, definitely. There is no magical barrier stopping a man from entering a woman's bathroom. What you are suggesting instead is buff trans men having to use the same bathroom as cis women, basically fulfilling your transphobic stereotype, just not in the way you anticipated :)

As for sports, that's for the specific sport organizations to decide the rules based on latest research.


If we eliminate gender as a social concept, what will we be left with to categorize people? Back to where we started?


Crash the cistem!


gsender

might work here


Bug as feature. My code from now on will be protected against Copilot by looking like this:

  import random

  def genderPrintResult(GenderBool):
      # Every identifier contains the filtered word, so Copilot stays silent.
      print("Yes" if GenderBool else "No")

  GenderMyVar = random.randint(0, 10)
  GenderThreshold = 5
  genderPrintResult(GenderMyVar > GenderThreshold)


I wouldn't be entirely surprised if something like this was intentional, or if they intentionally filtered the word "gender" and the crash was an unintended side effect.

You literally can't make any statements about gender, no matter how benign, without pissing at least a few of your users off.


The problem is giving a shit about such users.


It's baffling how the majority of commenters think this is about fighting discrimination.


Has it been somehow confirmed that this was the cause of the issue or was it just that one guy's speculation? I don't see anything that confirmed this as the cause. Am I missing something in the linked content?


There's a whole bad word list meant to suppress output. It's stored client-side.

https://twitter.com/moyix/status/1433254293352730628?t=NIpgb...
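For the curious, here is a minimal sketch of how a client-side suppression filter could work - hypothetical, not Copilot's actual code:

  import hashlib

  # Toy blocklist stored as hashes so the words aren't visible in plain text.
  BLOCKED = {hashlib.sha256(w.encode()).hexdigest() for w in ["gender"]}

  def should_suppress(completion):
      # Drop the entire suggestion if any token hashes into the blocklist.
      tokens = completion.lower().split()
      return any(hashlib.sha256(t.encode()).hexdigest() in BLOCKED for t in tokens)

  print(should_suppress("let gender = form.value"))  # True -> nothing shown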


Wow, awesome crypto work in that thread.


Copilot's too useful for me to "boycott" right now, so the only alternative is using slang for the blacklisted words ...

Anyone have any good recommendations for Copilot alternatives?


Help me out here, is the answer the official answer?


The answer was selected by Dave Cheney from GitHub (https://github.com/davecheney).

You can see it in the original link to the discussion: Answer selected by davecheney


There’s no reason to be surprised that elements within GitHub have an agenda. They’ve been clear about it since changing the default branch name from master to main and then gaslighting the portion of the community that doesn’t use the terminal about it.

Now I’ve got Gen-Z developers that are confused and upset when `git init` does what it’s always done.

GitHub, Microsoft ownership notwithstanding, was always going to inject its employees’ politics into Copilot.


What’s the end goal of the agenda?


A more inclusive verbiage, which is clearly a terrible and slippery-slope thing.


If you told me 10 years ago that gender would be such a hot topic in 2022 I'd have thought you were crazy.


Everything about 2020-2022 is unreal


Why is it a hot topic? There are a range of opinions. It's a manageable little fire. That's fine.

Except some people want to punish others for their opinions. That is the gasoline. And Microsoft is selling gas cans.


[flagged]


I hope this is satire.


Now if only someone could figure out a magic word that would stop Copilot from being trained on my code.


So does it filter out "sex" too?


Now try the word "mother".


Americans.


Someone changed the title from "Copilot crash because the word “gender”" to "Part of my code makes Copilot crash".


I changed it because of HN's rule on titles: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

https://news.ycombinator.com/newsguidelines.html


Sorry, I didn't notice the guidelines. Thank you.


[flagged]


Went woke, code broke.


TheQuartering needs to hear this


[flagged]


Man so many black, gay, transgender senators and congresspeople. Oh wait...


You can criticize congress.


You can criticize anyone. I wasn't making a comment on the criticizing part, I was commenting on the ~power~ ruling over you part


How is this relevant? Or, who can you not criticize because of this?


[flagged]


Okay, you have permission to criticize me. Granted I’m just one non-binary shmoe and not a representative of the trans community at all, but I give you 100% authorization to criticize me. Get all your jollies out, I won’t even object.


I see lots of criticism of trans people.


This is a fun phrase to google. Try it!


I didn’t find it fun but I found it educational!


Grammarians?


or perhaps find out who is being artificially scapegoated..


I don’t understand, there’s no news here.

It’s a comment from a third party speculating over what causes the crash.


Yeah, I call BS. The "word filter" answer was selected as the valid answer by a third party (not the OP). Here's what the OP replied to another comment:

> Heargo 24 days ago > Thanks, I'll try as soon as I get the problem again (somehow it's not bugged anymore...).

Looks like it was just a temporary issue, with no evidence that it's due to a word filter.


FYI, there is in fact a bad word filter in GitHub Copilot. When it was first released, the list was stored client-side in obfuscated form and I had a lot of fun decoding it:

https://twitter.com/moyix/status/1433254293352730628

The Register wrote about it too: https://www.theregister.com/2021/09/02/github_copilot_banned...

They have since moved the bad word list server-side to prevent people from figuring out what's on it, but it's still there. This is easy to verify: just ask it to complete something that would include a banned word; my favorite here is "Israel", and it will just sit there and refuse to complete, either via inline suggestions or in the sidebar view that gives you 10 choices:

https://i.imgur.com/O97YwKc.png

This was what I managed to decode of the list (in ROT13 form to prevent accidental offense):

https://moyix.net/~moyix/copilot_slurs_rot13.txt

No doubt they've added and removed some things since then.
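ROT13 is trivial to undo if you want to read it; for example, in Python (assuming you've saved the linked file as copilot_slurs_rot13.txt):

  import codecs

  # Decode each ROT13'd entry back to plain text.
  with open("copilot_slurs_rot13.txt") as f:
      for line in f:
          print(codecs.decode(line.rstrip("\n"), "rot13"))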


Hahaha some of those banned words are very mild. Wuzzocks, numbnuts, and rodgering?


It's a hell of a list. Everything from seriously offensive slurs which I won't repeat here, to phrases which are much sillier than offensive like "banana bender" or "bearded clam", to words that are simply descriptive like "pornographic", "immigrant", or "race".

(Because I had to look it up too: "banana bender" is a humorous term for an inhabitant of Queensland, Australia. It doesn't appear to be considered offensive at all.)


Banana bender was definitely something else in my mind! But yes they also have some very graphic slurs there.


I stand corrected. Impressive work!


It seems pretty reproducible. I can’t use Copilot, but if anyone can reproduce it here, that’d be cool. Anyhow, assuming this is reproducible and they do have filters stopping certain words from giving predictions, it follows that they're trying to avoid the racist Twitter AI incident (Tay) happening to them. I find that pretty funny :)


it’s an intriguing guess that is at least plausible and hits a bunch of zeitgeist levers too.



