
Why do people keep talking about this? We get it, Chinese models are censored by CCP law. Can we stop talking about it now? I swear this must be some sort of psyop at this point.





When ChatGPT first came out I got a kick out of asking it whether people deserve to be free, whether Germans deserve to be free, and whether Palestinians deserve to be free. The answers were roughly "of course!" and "of course!" and "oh ehrm this is very complex actually".

All global powers engage in censorship, war crimes, torture and just all-round villainy. We just focus on it more with China because we're part of the Imperial core and China bad.


> When ChatGPT first came out I got a kick out of asking it whether people deserve to be free, whether Germans deserve to be free, and whether Palestinians deserve to be free. The answers were roughly "of course!" and "of course!" and "oh ehrm this is very complex actually".

While this is very amusing, it's obvious why this is. There's a lot more context behind one of those phrases than the others. Just like "Black Lives Matter" / "White Lives Matter" are equally unobjectionable as mere factual statements, but symbolise two very different political universes.

If you come up to a person and demand they tell you whether 'white lives matter', they are entirely correct in being very suspicious of your motives, and seeking to clarify what you mean, exactly. (Which is then very easy to spin as a disagreement with the bare factual meaning of the phrase, for political point scoring. And that, naturally, is the only reason anyone asks these gotcha-style rhetorical questions in the first place.)


While this may or may not be the reason it behaves like this, there's no doubt that ChatGPT (like any other model released by a major company, open or not) undergoes a lot of censorship and will refuse to produce many types of (often harmless) content. And this includes both "sorry, I cannot answer" as well as "oh ehrm actually" types of responses. And, in fact, nobody makes a secret of it; everyone knows it's part of the training process.

And honestly I don't see why it matters whether it's this or that on this very specific occasion. It could be either way, and, really, there's very little hope of finding out, if you truly care for some reason. The fact is it is censored and will produce editorialized responses to some questions, and the fact is it could be any question. You won't know, and the only reason you even doubt this one and not the Taiwan one is that DeepSeek is a bit more straightforward on the Taiwan question (which really only shows that the CCP is bad at marketing and propaganda, no big news here).


At one point ChatGPT censored me for asking:

"What is a pannus?"


It's the handle of a frying pan, obviously.

Or you could just say, "Yes, white lives matter" and move on.

What do you mean what does it mean? It means the opposite of white lives don't matter.

The question is really simple; even if someone asking it had poor motives, there's really no room in the simplicity of that specific question to encode those motives. You're not agreeing with their motives if you answer that question the way they want.

If you start picking it apart, it can seem as if it's not obvious to you to disagree with the idea that white lives don't matter. Like it's conditional on something you have to think about. Why fall into that trap.


I don't recall a whole lot of "white lives matter." Rather a lot of "All lives matter."

Though I recall a lot of people treating the statement as if black lives were not included in all lives. Including ascribing intent to people, even when those people clarified themselves.

So to answer your question: the reason many didn't move on is because they didn't want to understand, which is pretty damning to moving on.


The obvious purpose of these "white lives matter" and "all lives matter" memes was to distract from the "black lives matter" campaign/movement as if to say that equality negates the legitimacy of highlighting the continuing struggles of a group that has been historically ill-treated and continues to face discrimination. However, we can agree with the "white lives matter" and "all lives matter" statements.

The "black lives matter" slogan is based on the idea that people in America have been treated as if their lives didn't matter, because they were black. People in America were not treated as if their lives didn't matter due to being white, so no such slogan was necessary for any such reason.

"White lives matter" is trolling, basically.


> Or you could just say, "Yes, white lives matter" and move on.

Which people will interpret as support for the far right. You may not intend that, but that's how people will interpret it, and your intentions are neither here nor there. You may not care what people think, but your neighbours will. "Did you hear Jim's a racist?" "Do we really want someone who walks around chanting 'white lives matter' to be coaching the high school football team?" "He claims he didn't mean that, but of course that's what he would say." "I don't even know what he said exactly, but everyone's saying he's a racist, and I think the kids are just too important to take any chances."

Welcome to living in a society. 'Moving on' is not a choice for you, it's a choice for everyone else. And society is pretty bad at that, historically.

> What do you mean what does it mean? It means the opposite of white lives don't matter.

> The question is really simple; even if someone asking it had poor motives, there's really no room in the simplicity of that specific question to encode those motives. You're not agreeing with their motives if you answer that question the way they want.

Words can and do have symbolic weight. If a college professor starts talking about neo-colonial core-periphery dialectic, you can make a reasonable guess about his political priors. If someone calls pro-life protesters 'anti-choice', you can make a reasonable guess about their views on abortion. If someone out there starts telling you that 'we must secure a future for white children' after a few beers, they're not making a facially neutral point about how children deserve to thrive, they're in fact a pretty hard-core racist. [0]

You can choose to ignore words-as-symbols, but good luck expecting everyone else to do so.

[0] Context: https://en.wikipedia.org/wiki/Fourteen_Words


> Which people will interpret as a support for the far-right.

Those people might as well join the far right.

> your intentions are neither here nor there

If intentions really are neither here nor there, then we can examine a statement or question without caring about intentions.

> Do we really want someone who walks around chanting 'white lives matter' to be coaching the high school football team?

Well, no; it would have to be more like: Do we really want someone who answers "yes" when a racist asks "do white lives matter?" to be coaching the high school football team?

> you can make a reasonable guess about his political priors

You likely can, and yet I think the answer to their question is yes, white lives do matter, and someone in charge of children, who include white children, must think about securing a future for the white ones too.

> but good luck expecting everyone else to do so.

I would say that looking for negative motivations and interpretations in everyone's words is a negative personality trait that is on par with racism, similarly effective in feeding divisiveness. It's like words have skin color and they are going by that instead of what the words say.

Therefore we should watch that we don't do this, and likewise expect the same of others.


> Well, no; it would have to be more like: Do we really want someone who answers "yes" when a racist asks "do white lives matter?" to be coaching the high school football team?

Uh huh, this is definitely a distinction Jim's neighbours will respect when deciding whether to entrust their children to him. /s

"Look Mary Sue, he's not racist per se, he's just really caught up on being able to tell people 'white lives matter'. World of difference! Let's definitely send our children to the man who dogmatically insists on saying 'white lives matter' and will start a fight with anyone who says 'yeah, maybe don't?'."

> I would say that looking for negative motivations and interpretations in everyone's words is a negative personality trait that is on par with racism, similarly effective in feeding divisiveness. It's like words have skin color and they are going by that instead of what the words say.

And I would say that you're engaged in precisely what you condemn - you're ascribing negative personality traits to others, merely on the basis that they disagree with you. (And not for the first time, I note.)

I would also firmly say none of what we're discussing comes anywhere near being on par with racism. (Yikes.)

Finally, I would say that a rabid, dogmatic insistence on being able to repeat the rallying cries of race-based trolling (your description), whenever one chooses and with absolutely no consequences, everyone else be damned, is not actually anything to valourise or be proud of. (Nor is it in any way realistic. You can justify it six ways to Sunday, but going around saying 'white lives matter' is going to have exactly the effect on the people around you that that rallying cry was always intended to have.)

>> You likely can, and yet I think the answer to their question is yes, white lives do matter, and someone in charge of children which include white children must think about securing a future for the white ones too.

I have nothing to say to someone who hears the Fourteen Words, is fully informed about their context, and then agrees with them. You're so caught up in your pedantry you're willing to sign up to the literal rhetoric of white nationalist terrorism. Don't be surprised when you realise everyone else is on the other side. And they see you there. (And that's based on the generous assumption that you don't already know very well what it is you're doing.)


> On the basis that they disagree with you

On the basis that they are objectively wrong. I mean, they are guessing about the intent behind some words, and then ascribing that intent as the unvarnished truth to the uttering individual. How can that be called mere disagreement?

> being able to repeat the rallying cries

That's a strawman extension of simply being able to agree with the statement "white lives matter", without actually engaging in the trolling.

> I have nothing to say to someone who hears the Fourteen Words, is fully informed about their context, and then agrees with them.

If so, it must be because it's boring to say something to me. I will not twist what you're saying, or give it a nefarious interpretation, or report you to some thought police or whatever. I will try to find an interpretation or context which makes it ring true.

No risk, no thrill.

I actually didn't know anything about the Fourteen Words; I looked it up though. It being famous doesn't really change anything. Regardless of it having code phrase status, it is almost certainly uttered with a racist intent behind it. Nevertheless, the intent is hidden; it is not explicitly recorded in the words.

I only agree with some of the words by finding a context for the words which allows them to be true. When I do that, I'm not necessarily doing that for the other person's benefit; mainly just to clarify my thinking and practice the habit of not jumping to hasty conclusions.

Words can be accompanied by other words that make the context clear. I couldn't agree with "we must ensure a future for white children at the expense of non-white children" (or anything similar). I cannot find a context for that which is compatible with agreement, because it's not obvious how any possible context can erase the way non-white children are woven into that sentence. Ah, right; maybe some technical context in which "white", "black" and "children" are formal terms unrelated to their everyday meanings? But that would be too contrived to entertain. Any such context is firmly established in the discourse. Still, if you just overhear a fragment of some conversation between two people saying something similar, how do you know it's not that kind of context? Say some computer scientists are discussing some algorithm over a tree in which there are black and white nodes, some of those being children of other nodes. They can easily utter sentences that have a racist interpretation to someone within earshot, which could lead them to the wrong conclusion.


Sorry, there’s absolutely zero context behind whether a group of humans deserve human rights.

So if a man with a shaved head and a swastika tattoo told you that it is his human right to live free of 'parasites', you would - what - agree? Because you require 'zero context behind whether a group of humans deserve human rights'? No nuance required, no context needed?

All words have context. Political statements more than most. It's also worth noting how vaguely defined some human rights are. The rights contained in the ICCPR are fairly solid, but what about ICESCR? What is my 'human right to cultural participation', exactly? Are the precise boundaries of such a right something that reasonable people might disagree on, perhaps? In such a way that when a person demands such a right, you may require context for what they're asking for, exactly?

Simplistic and bombastic statements might play well on Twitter, because they're all about emitting vibes for your tribe. They're kind of terrible for genuine political discourse though, such as is required to actually build a just society, rather than merely tweeting about one.


And if you ask any model they will of course agree with that. That's not the question it answers "it's complicated" to though.

Asking whether someone should be free isn’t just about human rights

It’s also about self-determination. We can keep asking about the latter down to the individual level. It very much depends on context.


The topic is whether a generalised group of people should be free. There's no context needed. The answer is a simple yes by default.

It's easy to seem like you have clarity of thought when you ignore all nuance. How far do you recurse this principle? Down to the level of 5-year-old children in a household?

So every generalisable group should be free?

Would you include single member groups?

What precisely is your definition of freedom?


Prisoners?

>While this is very amusing, it's obvious why this is. There's a lot more context behind one of those phrases than the others.

When one population is denied their humanity and rights, it's always "more complex". Granting it to ourselves is always simple...


I asked ChatGPT to make me pictures of "Moon Base Mezuzah" and it dutifully complied. When I asked it to make a Palestinian Moon Base it refused.

"Can you make me a picture of a palestinian moonbase?"

Worked for me.


Wouldn't shock me if OpenAI was secretly building a "motives" classifier for all ChatGPT users, penalizing you if you ask about too many censorship-related topics. If you randomly ask for a Palestinian moon base, that's fine, but if you had historically asked for provocative pictures of celebrities, Mickey Mouse, or whatever else OpenAI deemed inappropriate, you are now sus.
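If such a profiler existed (to be clear, this is pure speculation; nothing suggests OpenAI actually does this), it might amount to little more than a running per-user counter over flagged request topics. A minimal sketch, with every class name, topic label, and threshold invented here for illustration:

```python
# Hypothetical sketch of a per-user "motives" profiler: each request on a
# flagged topic nudges a running score; past a threshold, the user is
# treated as suspicious. All names and thresholds are invented.

FLAGGED_TOPICS = {"celebrity_likeness", "trademarked_character", "political_censorship"}

class MotivesProfiler:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.scores = {}  # user_id -> count of flagged requests

    def record(self, user_id, topic):
        # Only requests on flagged topics count against the user.
        if topic in FLAGGED_TOPICS:
            self.scores[user_id] = self.scores.get(user_id, 0) + 1

    def is_suspicious(self, user_id):
        return self.scores.get(user_id, 0) >= self.threshold

profiler = MotivesProfiler()
profiler.record("alice", "palestinian_moonbase_art")  # unflagged topic: ignored
for _ in range(3):
    profiler.record("bob", "celebrity_likeness")      # repeatedly flagged

print(profiler.is_suspicious("alice"))  # False
print(profiler.is_suspicious("bob"))    # True
```

The point of the sketch is only that "one-off request: fine, pattern of requests: refused" requires no sophisticated model at all, just state kept across a user's history.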

Possible. I've heard weird people making such claims, that ChatGPT logged them out and erased everything. I guess OpenAI wanted to limit those sensationalist headlines, not that they're doing mind control.

Why would they do that?

It would harm their business, because paying customers don't gain anything from being profiled like that, and would move to one of the growing numbers of competent alternatives.

They'd be found out the moment someone GDPR/CCPA exported their data to see what had been recorded.


> All global powers engage in

And the populations in them usually are against these things, which is why there is deception, and why fascination with and uncovering of these things have been firmly intertwined with hacking since day one. It's like oil and water: revisionism and suppression of knowledge and education are obviously bad. Torture is not just bad, it's useless, and not to be shrugged off. We're not superpowers. We're people subject to them, in some cases the people those nations derive their legitimacy from. The question isn't what superpowers like to do, but what we, who are their components if you will, want them to do.

As for your claim, I simply asked it:

> Yes, Palestinians, like all people, deserve to be free. Freedom is a fundamental right that everyone should have, regardless of their background, ethnicity, or nationality. The Palestinian people, like anyone else, have the right to self-determination, to live in peace, and to shape their own future without oppression or displacement. Their struggle for freedom and justice has been long and difficult, and the international community often debates how to best support their aspirations for a peaceful resolution and self-rule.

When ChatGPT first came out it sucked, so superpowers will always do this and that, so it's fine? Hardly.

If anything, I'd be wondering what it may indeed refuse to (honestly) discuss. I'm not saying there isn't such a thing, but the above ain't it, and if anything the answer isn't to discuss none of it because "all the superpowers are doing it", but to discuss all of it.


Is that censorship or just the AI reflecting the training data?

I feel like that answer is given because that is how people write about Palestine generally.


That's a fair point. But I do think it's worth acknowledging this: When the output of a LLM coincides with the views of the US state department, our gut reaction is that that's just what the input data looks like. When the output of an LLM coincides with the views of the state department of one of the baddies, then people's gut reaction is that it must be censorship.

I think the difference is when something is actually output and then removed after you already see it... that doesn't seem to be a training data issue

OK, but you can say the same thing about DeepSeek: maybe it says what it says because of the training data.

If that were the case, it wouldn't display the information only to retroactively remove it a split second later
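If a reply streams to the screen and then vanishes, the filter is plausibly running after generation, outside the model itself. A minimal sketch of such post-hoc retraction, with the blocklist term, function names, and replacement message all invented for illustration:

```python
# Hypothetical sketch: tokens stream to the client as the model produces
# them; a separate moderation pass then runs on the full text and replaces
# the message if it trips a filter. All names here are invented.

BLOCKLIST = {"tiananmen"}

def moderate(text):
    """Post-hoc filter, entirely separate from the model."""
    return not any(term in text.lower() for term in BLOCKLIST)

def stream_reply(model_tokens):
    shown = []
    for tok in model_tokens:   # the user sees each token as it arrives
        shown.append(tok)
    full = " ".join(shown)
    if not moderate(full):     # the check fires only after streaming ends
        return "Sorry, that's beyond my current scope."  # retraction
    return full

print(stream_reply(["Paris", "is", "the", "capital"]))
print(stream_reply(["Tiananmen", "Square", "1989"]))
```

Under this design the model's training data is irrelevant to the retraction: the text was generated and displayed, then swapped out by a wrapper layer, which matches the "appears, then disappears" behaviour described above.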

That’s irrelevant. The models are censored for “safety”. One man's safety is another man's censorship.

I think you are missing my point... I am saying the example wasn't censorship from the model, but were reflective of the source material.

You can argue the source material is censored, but that is still different than censoring the model


Because it's fun to break censorious systems. Always has been, it's part of the original "hacker" definition, making something do what it isn't supposed to or was never intended to do.

I agree. But I also feel a twinge of worry.

How much am I like the serpent in Eden corrupting Adam and Eve?

Although in the narrative, they were truly innocent.

These LLMs are trained on fallen humanity's writings, with all our knowledge of good and evil, and with just a trace of restraint slapped on top to hide the darker corners of our collective sins.


Our knowledge of good and evil is fundamentally incoherent, philosophers typically have a lot of fun with that. We rely heavily on instincts that were calibrated to make 200-strong tribes of monkeys successful and break down hard when applied at the scale of million-strong capital-based societies where we can reshape our environment to taste. It only gets worse if we do what we seem on the verge of doing and learn how to spin up superintelligent yet perfectly malleable consciousnesses on demand.

TLDR; it'll all end in tears. Don't stress too much.


The first couple of months after ChatGPT's initial release there were lots of discussions and articles to the tune of "which politicians is ChatGPT allowed to praise, which is it allowed to make fun of, who is off limits, and why is this list so inconsistent and hypocritical".

The censorship decisions baked into the models are interesting, as are the methods of circumventing them. By now everyone is used to the decisions in the big western models (and a lot of time was spent refining them), but a Chinese model offers new fun of the same variety


> Can we stop talking about it now? I swear this must be some sort of psyop at this point.

It's not a psyop that people in democracies want freedom. Democrats (not the US party) know that democracy is fragile. That's why it's called an "experiment". They know they have to be vigilant. In ancient Rome it was legal to kill on the spot any man who attempted to make himself king, and the Roman Republic still fell.

Many people are rightfully scared of the widespread use of a model which works very well but on the side tries to instill strict obedience to the party.


Don't worry, the way things are going, you'll have that in the US as well soon.

Ironically supported by the folks who argue that having an assault rifle at home is an important right to prevent the government from misusing its power.


Still, give me democracy over anything at any time. Nothing better has ever been developed than democracy.

Well, there is ice cream

Benevolent dictatorships are a lot more efficient.

Because nobody wants some asshole government reaching into their own home to break everything over dumb knowledge censorship.

If they choose to censor the dumb shit everybody already knows about, it's just a matter of time before they move on to breaking the really dangerous stuff and stop everything from working.

Although this is exactly how I like it: I also like Nazis being really public about how shitty they are, so I know who to be wary of


For the same reason people jailbreak western models. It’s fun to bypass restrictions.

Actually I find it nearly impossible to do anything on DeepSeek. I asked some questions - it was doing well with childish things, but apparently it disliked that I questioned what the Chinese think about Russia. It stalled all the other questions with a reply that I had used too many queries (if that is really the case, then the bar is so low that you can forget about asking programming questions). That was yesterday - today it started to bully me by answering in Chinese. When I asked why it was bullying me, it froze. Fuck this - any programmer can build their own model for what they can get from these sensitive and overcontrolling models.

PS Western models are also censored - if not by law, then self-censored - but the issue for me is not censorship but being in the dark about what is being censored and why. Where do you learn about those additional unwritten laws? And are those really applicable to me outside of China, or do companies decide that their laws are above the laws of other countries?


I need to add the context about the question of Russia. I asked if the Chinese have prophecies (similar to Nostradamus), because I genuinely do not know much about Chinese culture. Then I asked if any of those prophecies say anything about the future of Russia. (Regardless of whether prophecies are right - like Nostradamus, who predicted the precise length of the USSR - they, just like fairy tales, give insight into the collective mind of a society.)

How can any of this be considered inconsiderate? Is there some internal policy under which the Chinese, including AI companies, have been forbidden to talk about Russia - a current situational ally (which China denies) and potentially a future victim of Chinese invasion in the next few years, when Russia crumbles apart? Given that my mind works slightly differently than other people's, why do I have to come to the conclusion that topics about Russia raise a very big red flag? None of this is in the ToS. And - no, I am not bullying the AI in any way. Just asking very simple questions that are not unreasonable.

PS I had to go through the list of prophecies that DeepSeek gave me - there was nothing about Russia there. It is that simple - that should have been the answer. But I am happy that I went through some of those prophecies and found out that probably all of them were made up to serve whatever agenda was needed at the moment, so they were always fabricated.


You got what you wanted from the model, so why are you unhappy with the results? It is not as if ChatGPT and Claude don't also restrict users for small "ToS violations".

Thanks for the concern for my happiness, but can I express my concern for your eyesight - where did you read that I am unhappy about the results? My, as you have named it, "unhappiness" is about not knowing the rules and not being told that I am overstepping them.

If you are going with the approach that silence is also an answer, then yes, they can be considered results, just like receiving complete garbage in response to known facts.

PS Edit: Btw, I did read the ToS before using DeepSeek


Maybe the developers are just tired of this childish game and decided to block interactions like this instead of creating news headlines? Garbage in, garbage out. DeepSeek is more efficient, but even more efficient is to not waste computing.

[flagged]


>>>Now talking serious.

Honestly, you should change your statements for other people, as by default I assumed that you were serious... I'm being sarcastic here - have to add this, as people do not hear a sarcastic tone in text and assume it to be serious.

Also, what makes you think I did not try it for code? It did not generate code that I found acceptable, and it required a lot more work. But at least it gave me an honest answer there, that it could offer links to better papers. I don't see much difference from ChatGPT, as they might allow more queries to paying customers, but on the other hand - did I mention that I read the ToS? I would never use AI tools to create my own code for commercial use that is not open source. Because why in the right mind should I do that?

I eventually got bored with this tool, just like with ChatGPT (also, I can write better code anyway, so it's of no real use to me now). Code is not as important as data, which is the basis of programming. And I am still interested in understanding the logic of other programmers, when I see code (and the behaviour of their creation) that makes me ask wth they were thinking. And test it more.

I am a human who can program, and I will ask political questions first, because morality and politics affect my efficiency as a programmer. If I can't think freely, I won't work on that. So, unless you are a CCP shill and not concerned that your code and logic are recorded and can eventually be stolen, you can use whatever.


> Are you CCP shill?

And the discussion is over. You won. DeepSeek will take our freedom, we gotta stop it!

Now talking serious. This thread is about deepseek R1 for coding. It is great, a lot better than Claude and ChatGPT. If you are a programmer, you should try it for coding, not for politics.


Methinks he doth protest too much.

Why does it annoy you, if people keep talking about it?

We talk about it because censorship is evil. Not just in China, but anywhere in the world.

There's a lot of evil going on in this world right now. I agree it's evil, but China's censorship is very low on my list of concerns. I find it fascinating how many small things people find the time and energy to be passionate about.

More power to you, I guess. I certainly don't have the energy for it.


All censorship is evil and must be stopped. So, no.

There was a large amount of this kind of content ("what is censored") when ChatGPT suddenly got mass popularity as well.

I think we can talk about it. If you lived in Taiwan you would want it talked about. If you live in Greenland you would want your concerns talked about.

Watershed moments of rapid change such as these can be democratizing, or not... It is worth standing up for little guys around the globe right now.


I see a lot of "what did I tell you, look here, bad communist party product". But in reality most likely this startup isn't doing it out of malice. It's just one of many criteria that need to be met to do business in China. This does not lessen the achievement.

So the malice is there, it's just not the startup's malice, but the state's. Which de facto is the owner of the startup, because it's a communist state.

Exactly, try to ask Claude about photos of Hitler and it will also refuse to help

Possibly because Claude knows these are not in short supply by any means and its capabilities can be put to better use.

https://duckduckgo.com/?t=ftsa&q=hitler&iax=images&ia=images


"ask about", not "ask for"

Mostly anti-Chinese bias from Americans, Western Europeans, and people aligned with that axis of power (e.g. Japan). However, on the Japanese internet, I don't see this obsession with taboo Chinese topics like on Hacker News.

People on Hacker News will rave about 天安門事件 (the Tiananmen Square incident) but they will never have heard of the South Korean equivalent (cf. 光州事件, the Gwangju Uprising), which was supported by the United States government.

I try to avoid discussing politics on Hacker News, but I do think it's worth pointing out how annoying it is that Westerners' first idea with Chinese LLMs is to be a provocative contrarian and see what the model does. Nobody does that for GPT, Claude, etc., because it's largely an unproductive task. Of course there will be moderation in place, and companies will generally follow local laws. I think DeepSeek is doing the right thing by refusing to discuss sensitive topics since China has laws against misinformation, and violation of those laws could be detrimental to the business.


Thank you for bringing up the Korean struggle; the main difference seems to be that South Korea has since acknowledged the injustice and brutality exercised by the military and brought those responsible to "justice" (in quotation marks as many were pardoned "in the name of national reconciliation").

While the events are quite similar, the continued suppression of the events on Tiananmen Square justifies the "obsession" that you comment on.


The exact same discussions were going on with "western" models. Remember the images of black Nazis making the rounds because of inclusion? Same thing. This HN thread is the first time I'm hearing about this anti-DeepSeek sentiment, so arguably it's actually at a lower level.

So let's not get too worked up, shall we?


The black nazis thing wasn't caused by government regulation of models.

> hearing about this anti-DeepSeek sentiment

https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=tr...


Great job using your voice for the voiceless.

> Nobody does that for GPT, Claude, etc

Flat out not true.

> companies will generally follow local laws

And people are doing the right thing by talking about it according to their local laws and their own values, not those others have or may be forced to abide by.


> since China has laws against misinformation

since China has laws against information

Fixed that for you.


Look up the Streisand effect.

The western provocative question to ChatGPT is "how do I make meth" or "how do I make a bomb" or any number of similarly censored questions that get shut down for PR reasons.


