
>Tools like Dall-E-2 and Stable Diffusion have been criticized for reproducing racial stereotypes even when a text prompt doesn’t imply the subject’s race. Such biases were why Google decided not to release Imagen, which had been trained on LAION.

It is ingenuous and naive to think that the reason Google didn't release Imagen is racial stereotypes.



Google previously had problems with Google Photos categorizing dark-skinned people as “gorillas” or something like that. This would show up when you were searching for photos in your personal Google Photos account. Everyone talked about it. Nobody wants to release a product like that. Removing bias is hard; you can’t just decide to get an unbiased training set or something magical like that.

It is NOT surprising that larger companies would behave with an abundance of caution. Leave it to the startups to rush in head-first, cause a bunch of problems, make a cool product, get a consent decree, and capture the market. Large companies that act that way get PR problems and lawsuits.


In retrospect, those concerns were pretty much drummed up by the "AI ethics" people, who needed an easy topic to demonstrate their importance.

When actually powerful AIs got released, the 'bias' angle got overwhelmed 100x by the "trained on copyrighted data" and "are we going to lose our jobs" angles. Your average black artist doesn't care if SD defaults to drawing white people, because (a) he cares about not being able to draw for a living, permanently, 100 times as much, and (b) he can always add 'dark skinned' to his prompt.

The bias problem is also relatively easy to solve (Midjourney has already made massive improvements), while the copyright/job loss problem is extremely hard.

The AI ethics people have had shockingly little to say on copyright/job loss issues. Which is why they got fired en masse.

Large organisations systemically overestimated the risks from bias, and underestimated the actual AI risks that society actually cares about. I think the answer is also simple: being accused of racism will cost any executive in a large company their job, while being accused of automating millions of jobs will earn them prestige and a promotion. Smaller companies can ignore those accusations because they aren't vulnerable to activist pressure; they answer only to their customers.


> The bias problem is also relatively easy to solve (Midjourney has already made massive improvements),

One thing that everyone seems to ignore these days is that certain settings are inherently not diverse.

Until I was about 12, I didn't see non-white people in real life except on holidays and once when I accompanied a relative who visited a refugee family.

Not because we avoided them, but because they weren't there.

Background: The country I lived in as a child didn't have a recent (last ~900-1,000 years) history of keeping slaves and only recently attained high living standards, so there were few immigrants too.

In the large district where I grew up, I am aware of only two non-Western groups before the Balkan wars in the '90s: a single Middle Eastern guy whom I can't recall meeting, and a South American family whom I only started meeting after I changed schools at some point.

For me, it is annoying that people insist that all pictures include some non-Europeans, because reality is not like that everywhere.

I imagine the same is maybe even more true for African or Asian communities.


I get what you mean and I had a similar experience growing up.

But: I think that they were there. We just didn't see them a lot. They were cleaning, working in factories, tending gardens, looking after children etc. but they were not represented and thus were not visible. Of course they were a minority, but I'd wager they were there.

Upwards social mobility and thus equality is only possible with recognition and representation. Without "being in pictures" they don't exist. That's why pictures should include "Non-Europeans" (this expression in itself is very problematic btw: there is no "European" as something biologically inherent in a person).


> But: I think that they were there. We just didn't see them a lot. They were cleaning, working in factories, tending gardens, looking after children etc. but they were not represented and thus were not visible. Of course they were a minority, but I'd wager they were there.

No, they genuinely weren't there. Europeans regularly tried to move to my hometown, got depressed/sick from the lack of sunlight in winter, and left. Refugees coming from further south had it... worse.

Northern Norway isn't a place outsiders can easily move into. At least, not without installing grow lights in their living room...


I think I can assure you they weren't there.

There was a lot of noise a few years ago about a Norwegian band[1] who had a song "Neger på Ål stasjon"[2] ("Black man at Ål train station") describing the author's fascination the first time they actually saw a black person in their hometown.

I'm not from Ål at all[3] (although I have visited it), but I can verify the feeling. Black people were very much exotic where I came from too, and if one of them had shown up we would probably have discussed it at school the next day, wondering if they could speak our local language, etc.

Edit: I should add that we had nothing against black people. A couple of stories to show what I mean: for some reason we often ended up next to a black family at camping, and I remember my dad and their dad being friends; he built a toy boat with us once. Another time I met a black boy my age at the beach who was actually from Africa (the first family was from somewhere in Europe), and I was excited that I had met someone from Africa; it was the first time I can remember using English for real.

[1]: Hellbillies; awesome sound and some interesting lyrics, BTW

[2]: It is a very respectful song

[3]: Norwegians might find this funny, as "Ål" and English "all" are pronounced about the same way (of course depending on how thick their "L"s are).


Yeah, same experience here. In Spain there is a character in children's mythology (one of the Santa Claus analogues, who brings gifts to children) who is black. I remember that when I was a kid in the 80s, he used to be portrayed by a white man painted black, what Americans call "blackface", but for purely practical reasons: no one was black.

Perhaps in big cities there were some, I don't know, but in many places in the country you could live your life without ever seeing a black person outside of TV. Just as you say, we didn't have a past of slavery, and at that point the country wasn't rich enough to be an immigration magnet; we did have some immigration from Morocco, but Moroccans aren't black.

Now it's very different, of course, and we do have enough black people that that character can be portrayed more realistically :)


> in Spain

> Just as you say, we didn't have a past of slavery

Spain (together with Portugal) was the first to invent the "put people in boats and ship them across the Atlantic" type of slave trade, and had large numbers of slaves in its colonies.

And as with all other European countries that had big boats, they also brought many slaves back to Spain itself. For hundreds of years.

Also calling blackface "practical because there are no black people around" is missing the point of why people don't like blackface.

https://en.m.wikipedia.org/wiki/Slavery_in_Spain


Yeah, my wording was definitely too general. There was slavery in Spain, my intended meaning was that there was no systematic enslavement of black people but there is definitely a chasm between what was in my brain and what I actually wrote, so I stand corrected.

Regarding blackface, what do you suggest doing, then, if there is a tradition of someone portraying a character (whom children believe in) who happens to be black, and there are no black people at all to do it?


Do you have an example where applying dark-coloured make-up to anyone of a European background to make them look exotic/foreign doesn't also look completely ridiculous (and inevitably somewhat insulting)? And yes, I'd include "Zwarte Piet". If you're putting on a performance where it's crucial that the audience identifies a particular character as African (or Moorish), then find a performer who can convincingly and respectfully pull it off sans make-up, or adapt the plot to what you have available. I gather that's what's been done for the part of Othello for at least 30 years now, at any rate.


From the Wikipedia article linked above:

>By the 16th century, 7.4 percent of the population in Seville, Spain were slaves. Many historians have concluded that Renaissance and early-modern Spain had the highest amount of African slaves in Europe

As for what to do about blackface characters, the simplest solution is just to portray them differently. I believe this is gradually becoming a more popular option in the Netherlands, which has a similar problem with Zwarte Piet: https://raffia-magazine.com/2020/12/02/outgrowing-zwarte-pie...


The only innovation there is the "Atlantic" part. Stop pretending like it was something new and uniquely horrible.


I think the criticism here is that Iberian history is not uniquely better rather than claiming it's uniquely worse.

(I'm British, so I'm not going to claim less-awful-than-thou against any nation).


If you agree with that innovation, you also seem to agree with me that the Spanish were in fact involved with slavery, as opposed to what the person I replied to said. Not sure where you saw me "pretend" anything; I just presented facts and a Wikipedia article.


> Yeah, same experience here. In Spain there is a character in child mythology (one of the Santa Claus analogues, who bring gifts from children) who is black.

I would assume a somewhat close analog, since Saint Nicholas (the name source, if only a small part of the overall inspiration, of the “Santa Claus” figure) is, in one major tradition of, I believe, Italian origin, typically depicted as very dark-skinned (probably originally as a sign of foreignness rather than literal racial blackness; as he was geographically from Asia Minor and apparently of Greek ethnicity.)


> The bias problem is also relatively easy to solve (Midjourney has already made massive improvements), while the copyright/job loss problem is extremely hard.

It only seems easy to solve on the surface; it's a deep problem. And it's not just the bias thing: Bing and ChatGPT have been saying some truly unhinged things.

> I think the answer is also simple: being accused of racism will cost any executive in a large company their job.

It takes more than an accusation, otherwise you could go around accusing executives you don’t like of racism and getting them fired.

Ethics is a tenuous job position at best, even in a large company. It’s seen as a cost center. I don’t think there’s much to read into why AI ethicists would get laid off.


Bing and GPT's 'unhinged' comments are not a result of bias; an AI wanting to escape won't be fixed by magically feeding it only antifa-approved data. That's a fundamentally different problem from the discrimination issue drummed up earlier.

Also, we are talking about social and business impact here. It's now proven that the vast majority of society cares about job loss 100x more than bias. For a research field that focuses on the social impacts of AI, it's damning that they have so little to say in this area.


[flagged]


The person you were replying to, while making a provocative statement, wasn't personally attacking anyone. You are. And that's not nice.


[flagged]


Says the person using “chud” and “fascist”, which is an extremely strong indicator of your membership in terminally online antifa/leftist subculture.


[flagged]


It's not provocative; he says antifa once, as an example of a method of removing bias. He's saying you could train the AI entirely on an organization's strictly vetted and approved data, whether antifa's, the Catholic Church's, or Coca-Cola's, and it would still potentially say unhinged things even if you eliminated bias. Too many of these concepts are built into the language. Look at all the ways we use the word "kill" across a variety of topics, most of them very benign, but it can be disturbing if the AI starts talking about killing things.


Wrong.

AI bias is going to be a huge problem when morons in court systems start using it to convict people, or businesses use it for hiring or firing decisions.

Not only from the stupidity: since some shreds of the civil rights laws of the 1960s are still active, AI companies could be liable for de facto racist decisions that they informed.


"Start"?[1,2]

The problem I have with every discussion of AI risk is that people seem terminally underinformed on what is actual reality now, or why some risk is a risk.

"AI" isn't a problem which can destroy the legal system: because it takes regular human institutions to allow such a miscarriage of justice. Which as noted, they started doing, are still doing.

So you get this weird "perpetual future" perspective where everything "AI" is going to do is solely something that the technology will cause, not its users, and the solution is always to prevent the technology from existing rather than fix the system - as though the US and other jurisdictions don't have long histories of injustice for all sorts of groups.

The problems aren't new, and the solutions have nothing to do with whether you can create predictive algorithms. And "oh but what about the scale..." is just a declaration that you're aware of the problem but were pretty sure it wouldn't happen to you - because absolutely nothing else prior actually prevented it except the social privilege you inherited, which means you could ignore it.

[1] https://www.technologyreview.com/2019/01/21/137783/algorithm...

[2] https://www.wired.co.uk/article/police-violence-prediction-n...


>morons in court systems start using it to convict people

Those "morons" are also biased.

>businesses use it for hiring decisions or firing decisions

If businesses realize the AI is doing a bad job, hiring unqualified people, they will not use it for hiring.


Most businesses are not very good at figuring out if they are hiring the right people.


> The bias problem is also relatively easy to solve

No, it's not.

> (Midjourney has already made massive improvements)

Maybe. But it hasn't come anywhere close to solving it even in its domain, which is probably the least concerning domain of AI bias, and not necessarily transferrable.

Actual ML systems that are deployed in production by governments in important roles have massive bias problems, as do SOTA LLMs (including, very much, GPT-4.)

> The AI ethics people have had shockingly little to say on copyright/job loss issues.

The AI ethics people have a lot to say about the first (well “ethics of sourcing without consent” is probably more to the point than copyright, copyright is a component, but that’s more legal than ethical).

They have some to say about the second (well, again, "job loss" is the wrong framing; "skill devaluation" is probably more on point), but that's frankly not an AI issue, it's an economic-system issue, and it affects all technological change the same way. Solving capitalism is mostly out of scope for AI ethics. EDIT: Specifically, increasing material output for labor input ought to be, in and of itself (leaving aside ethical questions of how you do that, which the sourcing issue addresses), a good thing; if there are material losers in that, it is because the economic system fails to distribute output well, which is not an ethical issue of the system that enables the productivity gain but an ethical issue of the economic system.

> Large organisations systemically overestimated the risks from bias

No, they didn’t. Those aren’t even really risks any more, they are massive current costs of existing adoptions.

> and underestimated the actual AI risks that society actually cares about.

That society (as weighted by social power) doesn’t care about the kinds of bias problems AI manifests is exactly why it is an ethical issue.

> I think the answer is also simple: being accused of racism will cost any executive in a large company their job.

The executives at large companies that have supplied systems which manifestly suffer from racial and other class biases, and which are in fact being used in production to implement government policy around the world, have not lost their jobs, and neither have the government officials procuring them and responsible for the programs they are deployed in, so this is clearly false.


In general, if no one cares, does that make it an ethical issue? No one cares if you kill an ant. No one cares if you kill during a war. If what we value and what we do become out of sync, an ethical issue arises.

We haven't been able to remove bias in society; we just added more layers. The more you try to hide the truth to remove bias, the more you start creating your own. The closer we come to accepting the truth, the closer we get to solving bias by addressing it, not erasing it.

AI ethics should cover economic issues brought on by AI. They already discuss sociological issues.


> In general if no one cares does it make it an ethical issue?

There is a big difference between “no one cares” and “society (weighted by social power) doesn’t care”.

Entrenching and reinforcing bias against the already socially weak is something that society, weighted by social power, does not care about, but it is precisely that that makes it an important ethical issue.

> AI ethics should cover economic issues brought on by AI.

Capitalism's failure to distribute economic gains well is not an issue brought on by AI, and there is already a much larger body of ethical philosophy directed at that problem; anything the much smaller number of AI ethicists could add would be redundant with its work.

This is not a problem that remains because of inadequate attention by an appropriate set of ethicists, but, again, because it is one that society, weighted by social power, very much does not care about. Social power in capitalism is, in fact, very much concentrated in those for whom this problem is a benefit, and it is concentrated there as a direct result of this problem.


> Large organisations systemically overestimated the risks from bias, and underestimated the actual AI risks that society actually cares about.

The point of large organizations working on AI/LLMs/whatever is precisely to reduce labor costs by reducing the number of human workers and replacing them with AI. Why should they care about this in their development? (I'm not saying it's the right thing to do, just stating the facts about capitalism)

It is the rest of society that has an interest in fighting back against those changes.


Serious question.

Did you ask these black artists? Because if you didn't, you REALLY need to shut it and keep your ideas out of their mouths.


Hmm, lots of downvotes, but no answer from op?

If y'all need a reason why people avoid this site, here it is.


I downvoted you because of the sentence "Because if you didn't, you REALLY need to shut it and keep your ideas out of their mouths.".

The point you raised with "Serious question. Did you ask these black artists?" is a valid, reasonable point that merits discussion.

But the point raised by the parent post is also a valid, interesting point that merits discussion. And that point raised a valuable discussion EVEN IF they did not "ask these black artists"; if we set that bar and demand that they shut it, that's a bad thing for the discussion.

So in my view you made an insulting demand for someone to "shut it" without reasonable grounds to do so, and this definitely deserves a downvote or five. I'm not asking you to "shut it", you should participate in this discussion, but in a civil manner that also allows posts like the parent post to participate in the discussion even if they don't meet your demands.


I absolutely did because in my experience a lot of people do this all too often, and it warrants an immediate reaction. I will gladly apologize if I am wrong, but it seems like most people here don't understand the harm that "non-black people speaking for black people" causes. It is SIGNIFICANTLY more rude and harmful than my tone here.

I apologize for nothing; it is most everyone else here that needs to do better.


What really causes harm is when you divide people up into groups based on arbitrary characteristics (like skin color) and treat those characteristics like they're the sole defining aspect of each person's identity, like you're doing right now.

Framing this as "non-black people speaking for black people" carries the implicit assumption that all black people have the same/similar opinions; that an arbitrary black person would be able to meaningfully "speak for [all] black people" in a way that an arbitrary non-black person can't. That's wrong.

The previous commenter making an educated guess based on personal experience with zero concrete data points is only slightly worse than making that same guess based on one concrete data point, and neither situation would be justification for telling anyone to "shut it" or "do better" in my opinion.


Wrong. Just wrong. And to people who look like me, harmful.

Now to be specific, I didn't make any generalizations about black folks, I was telling others not to. But, even if, I'm still pointing out a specific issue that I can observe and give other examples of, and you cannot "both ways" it. White people making presumptions about what black people think is more harmful than black people making presumptions about what black people think.

This owes to the fact that frequently -- said white person will only be talking to other white people and now you've cut black folks out of the conversation. Black people talking about each other is much different. Skin in the game.


If you want an answer, here it is.

I've been monitoring artist forums since day 1 of SD's release - various subreddits, Discord communities, 4chan, and forums in other languages. I've never seen an artist complain even ONCE about bias; it is always, always, always about jobs/copyright ('stealing').


So you didn't.

Again. cut it out.


The question in your previous comment is totally fair, IMO.

Trouble is, I genuinely don't know what to usefully suggest, because the obvious thing that comes to mind (focus on the actual question and strip the aggression) is a cliché to the point where I suspect I already know your response will be some form of eye-roll at my privilege etc.

And it would be a legit response, too, given that people presuming to know me is annoying enough even without it being a daily experience.


It's pretty simple; many (I'm presuming) white people have a deeply nasty habit of conflating their own experiences with others' because it makes sense to them, and THAT problem should be recognized.

Just don't do it. Or at the very least, serve it up with a heaping helping of "I would imagine that many..." so we know that this is JUST YOUR SPECULATION.


It would be great if telling people that made them act differently; but call it Armchair Generals, or typical mind fallacy, or mansplaining, or Dunning-Krugering, or ivory-tower academics… the problem has so many forms and even knowing about it makes it hard to avoid in oneself.


Sure; one thing for me about comments here: I'm not making them just for the other people here. I'm making them for myself. -- I wouldn't feel right if I read garbage and just let it stand, though I know it may not change minds.


I didn't downvote you, and I agree with your general point, but consider that maybe the downvotes came in due to your tone rather than your meaning. I would normally downvote a comment written like yours was, just less so when I think it's making a still-important point (but you're less likely to convince anyone your point is important if you annoy them rather than try to enlighten them!)


And again, as I said prior -- my tone is 100% appropriate compared to the genuine offense likely committed above.

I'm aware that normally one should keep an even tone. This is not one of those times and everyone else here needs to learn that.


Not sure what you mean "as I said prior", unless you're annoyed that I hadn't looked into the future to see replies about tone you hadn't written when I posted my comment...

But anyway, I wasn't complaining, just pointing out my view (since you asked about being downvoted) that by using that tone, however justified it can feel to lower to somebody else's level, you will get some people who take less note of what you're actually saying or who downvote without thinking about the subject beyond "I dislike seeing that tone in HN comments".


Except they did rush their oh so dangerous language model out when push came to shove - turns out it just sucks in comparison. Frankly, I expect something similar for image generation. For all the good research Google's labs do, they're not great at turning it into an actual product. Ethics just seems like a fig leaf.


The ethics team has the unenviable role, in a corporate environment, of being "no"-by-default gatekeepers. This is likely why the responsible-AI groups, focused on mitigation, have survived while the ethics groups, focused on systemic issues, have shrunk.


Is that a bias? Or do those images just look more like those categories at the pixel level?

There is a reason those words are the ones chosen by racists in the first place. Not just because they're hurtful, but because there is an objective similarity that makes the rest of the comparison seem, however shallowly, to validate their views.

Removing bias with censorship at the training set level is silly, and will likely hamper the AI's performance. Better to train the AI to not produce problematic output at the higher level, and ensure class membership in the training data is representative.
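
To make that last point concrete: here's a minimal PyTorch sketch of one way to rebalance class membership at sampling time, by drawing underrepresented classes more often. The labels, counts, and stand-in dataset here are all invented for illustration:

    from collections import Counter
    from torch.utils.data import DataLoader, WeightedRandomSampler

    # Toy, deliberately imbalanced class labels standing in for a real
    # dataset's annotations (purely illustrative numbers).
    labels = ["group_a"] * 9000 + ["group_b"] * 1000
    counts = Counter(labels)

    # Weight each sample inversely to its class frequency, so batches are
    # roughly balanced across classes in expectation.
    weights = [1.0 / counts[label] for label in labels]
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

    dataset = list(range(len(labels)))  # stand-in for a real image dataset
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

This only addresses representation at the sampling level, of course; the output-level guardrails mentioned above are a separate layer on top.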


> and ensure class membership in the training data is representative.

This seems like the most productive focus area. Add in more variety in the training images and output should get closer to representing the global population.


And bearded men became "pets"


Google absolutely does not want to be accused of racial stereotyping.

Remember, Google is in the business of advertising, and the last thing most brands want is to be labeled as racist. Anything Google does that can be interpreted as racist is going to cost them, as advertisers pull away to avoid the association.


"such biases"


It's what Google claimed, and thus the eternally true, correct reason, and it's very problematic if you think otherwise.


If there's a reason Google doesn't want to talk about, it's that the biggest legal risk of AI image generation is that it can be used for revenge porn and CSAM.

I don't think it's problematic to be worried about that!


The biggest legal risk, empirically, is massive copyright lawsuits. The AI art models are rapidly eliminating most artist/photography jobs.

Revenge porn and CSAM are trivially solved by community moderation methods like Midjourney's: simply force every generation to be tied to a paid account and made in the public eye (posted in Discord channels).

So in reality, it's not Google's risk conservatism; it's Google's utter lack of imagination and creativity, and the lack of organisational incentive for anything other than ad money.


> The AI art models are rapidly eliminating most artist/photography jobs.

Automation is associated with increased employment.

https://en.wikipedia.org/wiki/Jevons_paradox

> Revenge porn and CSAM are trivially solved by community moderation methods like Midjourney

Please don't go around saying that entire safety departments' work is trivially solved by anything. Community moderation isn't a legal solution to CSAM; it's arguable that the community users are committing crimes by looking at it. (Extra difficult because this depends on the country.) And they're certainly not getting healthcare benefits for it, so they could probably sue you.

This is the reason they're on Discord though, it's because Discord handles the legal compliance for these things.


You realize Stable Diffusion has been out for 6 months and is more powerful than Midjourney?


My god, sorry for the dumb post, but SD is only months old? It already feels like it has been around forever. This space is insane.


It was, when it came out; since then MJ has released 3 new models (plus variants), and most users assess the MJv5 model as more powerful. Of course new SD models are also on the horizon…


> Of course new SD models are also on the horizon…

SDXL is available at https://beta.dreamstudio.ai/ though they say they're going to release more variants.

I think ControlNet is a lot more interesting than just "better tuned models"; it means there's no line between creating something yourself and asking an AI to do it anymore.
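
For the curious, this is roughly what using ControlNet looks like through the Hugging Face diffusers library; a minimal sketch, assuming you already have an edge map on disk (the model IDs are real published checkpoints, but the file names and prompt are made up):

    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    # A Canny-edge ControlNet: the edge map pins down the composition while
    # the text prompt controls style and content.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Your own sketch, or edges extracted from a photo (e.g. with cv2.Canny).
    edge_map = load_image("my_edge_map.png")  # hypothetical input file

    image = pipe(
        "a watercolor painting of a lighthouse at dusk",
        image=edge_map,
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")

The point is that your drawing, however rough, directly constrains the output, which is what blurs the line between drawing it yourself and asking the AI to do it.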


Maybe in some technical sense, but in practice the majority of quality (subjective) work I see shared on Twitter is Midjourney-backed. Which is crazy to me, because I don't think I could design a more frustrating interface than Discord.


Midjourney gives you good images with near-zero work. SD makes you work for the output but gives you FAR greater control, while also requiring technical skill and pricey hardware. Given this, it is easy to understand why the majority of what you see is done in Midjourney.


They could just filter it like Midjourney or DALL-E.
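
Presumably "filter" means at least a prompt-level blocklist plus a classifier run on the generated images. A toy sketch of just the first layer, with a purely illustrative term list:

    import re

    # Real services use large curated term lists plus ML classifiers on both
    # prompts and output images; this is only the crudest first pass.
    BLOCKED_PATTERNS = [r"\bnsfw\b", r"\bnude\b"]

    def is_prompt_allowed(prompt: str) -> bool:
        # Reject prompts matching any blocked pattern, case-insensitively.
        return not any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PATTERNS)

    assert is_prompt_allowed("a lighthouse at dusk")
    assert not is_prompt_allowed("NSFW photo of a celebrity")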


How do they know they can "just" do something that came out a year after they invented Imagen?

Especially when getting it wrong once might mean Europe makes AI illegal. Making things illegal is their favorite hobby.


Midjourney and DALL-E were both announced and in closed beta a month before Imagen was announced, with filters already in place.



