While censorship and political bias are of course bad, for a lot of their intended use cases you're really not going to run up against them. That's especially true for text-to-image and coding models (the main strengths of DeepSeek, Qwen, and other Chinese models).
LLMs compress the internet and human/company knowledge very well, but by themselves they're not a replacement for it, or for fact checking.
Too often I see comments (usually, but not always, from Americans) immediately dismissing Chinese-made models solely on the grounds of censorship, while singing the praises of American-trained models that struggle to keep up in other areas and often cost more to train and run. To be frank, those American models inject their own biases and assumptions the vast majority of the time, such as defaulting to American English spelling rather than international standard or British English. That's something the non-American world has to actively mitigate or work around every single day with LLMs, whereas I can't say I've ever had a use case that involved asking an LLM about Tiananmen Square.
All models carry the biases, world view, and training data they were trained on. But fixating on this one point, for models that are otherwise competitive or that often outcompete others, can in part be a distraction.