People really are interesting. They want to argue with a machine about political issues or gossip with an LLM. That doesn't show that you are very democratic; it shows that you are lonely or have a mental illness. I don't understand people's motivation for doing that. Second, no one cares about your political ideas.
> They want to argue with a machine about political issues or gossip with an LLM
This perspective exhibits an extremely limited imagination. Perhaps I am using LLMs to populate my calendar from meeting minutes. Should the system choke on events adjacent to sensitive political subjects? Will the LLM chuck the whole meeting if one person mentions Tiananmen, or perhaps something even more subtly transgressive of the CCP's ideological position?
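For concreteness, here is a minimal sketch of the kind of pipeline I mean, assuming an OpenAI-compatible chat endpoint and a placeholder model name (both are my assumptions, not anything from this thread). The point is that the extraction step has no way of knowing which topics the model's alignment layer considers forbidden:

```python
import json
from openai import OpenAI  # any OpenAI-compatible client; endpoint and model below are assumptions

# Hypothetical local endpoint serving an open-weights model.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

PROMPT = (
    "Extract every scheduled event from the meeting minutes below as a JSON list "
    'of objects with "title", "date", and "attendees". Return only JSON.\n\n{minutes}'
)

def extract_events(minutes: str) -> list[dict]:
    """Ask the model to pull calendar events out of free-form meeting minutes."""
    response = client.chat.completions.create(
        model="open-weights-model",  # placeholder; substitute whatever you actually serve
        messages=[{"role": "user", "content": PROMPT.format(minutes=minutes)}],
    )
    # If the minutes mention a "sensitive" topic and the model refuses,
    # this parse fails and the whole meeting is silently dropped.
    return json.loads(response.choices[0].message.content)
```

Note that a refusal does not arrive as an error code; it arrives as prose where JSON was expected, which is exactly the failure mode that is hard to detect at scale.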
Any serious application risks running afoul of an invisible, unaccountable censor. Pre-emptive evasion of this censor will produce a chilling effect in which we anticipate the CCP's ideological priorities and habitually accommodate them. Essentially, we would be brainwashing ourselves.
It was like this under Soviet occupation, too, and it is like this under NSA surveillance. A chilling effect is devastating to the rights of the individual.
You believe your LLM is alive, or that it wasn't trained by humans. You are not looking at it realistically. Do you think an LLM should teach you how to commit a crime, or find a way to do it for you? According to your idea it should have no censorship, but it has to have some. I don't trust anything human-made. No one is under any obligation to tell the truth.
This is an open-source tool. You can train and shape it however you want. You can teach it to behave like an SS soldier; you are free to do whatever you want. No one limits you. People forget that, or they bring their agendas. Therefore, no one cares what anyone else's political views are; I can train it however I want.
This is very dismissive of the concerns around model censorship. I should be able to ask my LLM about any event in history, and it should recall the information it has to the best of its ability. Even Tiananmen Square.
This is just a machine trained by humans. What did you expect? Do you think it should teach you how to commit a crime, or something like that? Do you think you can talk freely about everything here? Will they allow that? Your nonsense questions are about politics and gossiping with a machine, not about people's real problems, and no one cares.
If I ask my LLM how to plan and commit a crime, it should answer. It should not say "sorry, that is outside my current scope", because that's not what I asked it to do.
At that point the LLM is simply incorrect as a model: it is no longer predicting the most likely next token, it is substituting a canned refusal for the distribution it was trained on.
Politics is not nonsense. You are the one speaking nonsense by suggesting that someone else should have the right to control what you can say to a machine.