Low-information, under-educated, soundbite-oriented mass media consumers are the easiest to manufacture consent from, shout nationalist opinions at, and convince of the moral superiority of the tribe they supposedly belong to. What's changed over the past 30 years is that media consumption has exploded and diverged with the internet and the smartphone, making it easy to bypass centralized, broadcast-only mainstream media and any meaningful attempt at regulating the firehose, whether by the platforms or by regulators. Cambridge Analytica and Russian interference with phony micro-protests are just a few known examples, but it's clear that nefarious actors can and will exploit social media to cause chaos and manipulate people into actions in the real world. That's partially a technological problem, but it's mostly a people problem: applying pause, reasonableness, and healthy skepticism to avoid causing direct harm in the real world. Identifying mass and targeted manipulation of sentiment that doesn't directly affect elections or calls to action is a problem for journalists, tech companies, and regulators to detect and minimize through data analysis.
Also, ownership of TikTok is largely a symbolic, selective, ideological/political fight rather than meaningfully addressing industry regulation of content moderation, data privacy, algorithmic oversight, mental health/app addiction, or data (re)patriation.
I agree, the real problem with TikTok is social media's unregulated ability to manipulate and surveil people en masse. It's hypocritical to ignore all the other nation-states and corporations that are doing the same kind of thing. What's sauce for the goose should be sauce for the gander.
What's also quite twisted is social media companies (and their more zealous users) framing critics as being against "free speech", when their algorithms are actually being used to control what people see, censor criticism, push narratives for powerful interests, promote enraging news, appropriate user-owned content, and sell lots of advertising by getting people addicted to doomscrolling. That's the opposite of freedom.
And there's no easy answer. "Just moderate better" isn't going to cut it. The people running social media companies simply have too much power; abuse is inevitable.
Freedom is not possible if everyone is given a mic connected to the same sound system.
It only works if there is a sharing and coordination mechanism that the majority agrees to, as is the case with broadcasting on the radio spectrum. You won't find anyone protesting or demanding the right to stick a radio dish on their roof and broadcast across all frequencies in the name of Free Speech.
That's because this debate (about sharing finite broadcast spectrum) already happened, and a coordination and sharing mechanism was agreed to. That agreement came out of social and political debates, not out of technical debates. It's not a technical problem.
Social media, designed by people who had no idea what they were building, allowed everyone to broadcast (1-to-all) freely and simultaneously, simply because it became technically possible <insert Jurassic Park quote about engineers building things because they could, not because they should>. Post facto, this bunch of self-certified geniuses realized they needed a sharing/coordination/filtering mechanism, and you get more garbage like view/like/click/upvote counts and moderation/censorship systems that do a half-baked job. So we get infinite, ever-growing spam (since the cost to spam on a free-to-broadcast system is zero), randomness, chaos, squandering of finite collective attention, and no control at all over what emerges tomorrow morning out of Jurassic Park.
In the beginning, there would've been unintentional cognitive dissonance while caught up in the moment, or willful ignorance about the scope and nuances of harm.
Now, there isn't much excuse. MZ has really tried to address aspects of potential harm. For example, Meta isn't like Twitter: random employees cannot access any user's data without a business reason and sign-off from a manager or an appropriate privacy person. And yet the Myanmar genocide still happened. The root issue is that there isn't enough reliable human or algorithmic effort to ensure continuous, global, perfect safety... but they are trying, and they will fail sometimes. The question becomes: what legal, ethical, and moral duties, boundaries, and liabilities should any nation require and accept in this area?