What do you make of something like Reddit (RDDT, down 15% at this moment)?
It's unaffected by tariffs, but its insane valuation is driven by the narrative that Reddit posts can be used to train AI. Without that narrative, you have a semi-toxic collection of forums, and the valuation would probably be somewhere in the millions at best, not the current $20 billion.
When a correction happens, everyone with short-term funds pulls them out. Doesn't matter if the issue has a direct connection to the stock or "makes sense" at all.
Loss-making businesses can be expected to fail more often when a recession occurs, which looks increasingly likely. After all, if they can't make any money today, how will they turn a profit if ad spend is down by 30%?
I mean, not to say that you might not have some explanatory power here, but the market is complex and difficult to untangle, and at least some analysts are predicting a recession, which would certainly affect Reddit even if it isn't directly exposed to tariffs. We can all cherry-pick individual stocks.
That's certainly one take, but I suspect historians will actually link the beginning of this trend to the European/Western reactions to Russia in February of 2022.
It could be framed as "cancel culture overruled the courts". The second Putin became the "literally Hitler" of the moment, anything could be done, even things they didn't do when the actual Hitler was around.
This meant extra-judicial seizures, including "preventive" seizures: no law had been broken and no sanction imposed yet, but they were going to seize your assets now and figure out how to make it "legal" later on.
Even the Swiss - neutral during WW2 - abandoned over two centuries of neutrality and went along with the EU in this.
The message these countries sent was clear: if you ever oppose us, rule of law will not protect you.
The American Sunlight Project, which coined the term "AI grooming" and is the reference for this article, should be familiar to many:
> The American Sunlight Project is a left-of-center organization that seeks to counter what it considers “disinformation” online.
> Founded in 2024 by former Biden administration “disinformation czar” Nina Jankowicz, the organization supports President Joe Biden’s claim that modern people live in the age of disinformation, advances the idea that Russia interfered in the 2016 election to benefit then-Republican candidate Donald Trump, and conducts open-source investigations to undermine those who challenge disinformation researchers.
> CRC was founded in 1984 by Willa Johnson, former senior vice president of The Heritage Foundation, deputy director of the Office of Presidential Personnel in the first term of the Reagan administration, and a legislative aide in both the United States Senate and House of Representatives. Journalist and author Marvin Olasky previously served as a senior fellow at CRC.[6]
I think a right-wing think tank isn't an unbiased watcher of left-wing projects.
> Welcome to CRC’s work-in-progress, InfluenceWatch.
> Capital Research Center conceived of this project after identifying a need for more fact-based, accurate descriptions of all of the various influencers of public policy issues.

> The Capital Research Center (CRC) was founded in 1984 by Willa Johnson to “study non-profit organizations, with a special focus on reviving the American traditions of charity, philanthropy, and voluntarism.”
> Prior to founding CRC, Willa Johnson had been Senior Vice President of the Heritage Foundation, then worked as deputy director of the Office of Presidential Personnel in the first Reagan Administration, and as a legislative aide in both the U.S. Senate and U.S. House of Representatives.
> The Capital Research Center has expressed concern that “Many charities and foundations now urge lawmakers to expand the powers of the welfare state, regulate the economy and limit individual freedoms.”
> As part of the conservative campaign to ‘Defund the left’, the Capital Research Center produces a range of publications targeting foundations, unions and activist groups that it views as supporting liberal causes.
> In an interview with CNN in April 2024, American Sunlight Project executive director Nina Jankowicz claimed that investigations into so-called disinformation researchers make the U.S. less safe. Shortly after the organization was launched in April 2024, Jankowicz sent a letter to U.S. Representatives Jim Jordan (R-OH), James Comer (R-KY), and Dan Bishop (R-NC) claiming that they have “done little to improve the health of our information environment” since the 2016 presidential election. The group also alleged that they intimidated disinformation researchers through the House Judiciary Select Subcommittee on the Weaponization of the Federal Government and that Jordan, Comer, and Bishop’s actions have led to federal agencies doing less to censor online speech in the name of disinformation.
Ah yes, the answer to bad-think is to censor it using the federal government.
> The Appointments Clause of the constitution says that Officers of the United States must be appointed by the president with the advice and consent of the Senate.
Wasn't that the legal genius in how Trump structured it? DOGE is a re-branded USDS (United States Digital Service).
Given that USDS is not a federal agency, that clause of the Constitution would not apply.
That also would assume that Musk is the administrator of DOGE, which he is not. It's Amy Gleason, and she, just like every administrator of the USDS going back to Obama, was not confirmed.
USDS had a purely consulting role, helping to build websites. What DOGE is doing now is ordering agencies to do various things like cancel contracts, hire and fire people, stop work, etc.
And the idea that Amy Gleason is in charge is a fig leaf. Trump himself has said that Musk is in charge.
The argument is that by wielding this new power, Musk is effectively an Officer of the United States, regardless of his official title. And under the Constitution, an Officer of the United States must be appointed with the advice and consent of the Senate.
Whether or not the Supreme Court will buy this argument is to be determined, but that's the argument.
Where they spell out a formula for the speed of sound in this type of planet given some assumptions. It's beyond my understanding, but I thought it was pretty cool. Section 4.4 if you want to have a look.
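For anyone curious about where such a formula starts, here's the generic textbook expression for the adiabatic sound speed in an ideal gas. This is my own addition for context, not the paper's actual formula from section 4.4, which presumably layers on assumptions specific to this planet type:

```latex
% Adiabatic sound speed in an ideal gas (generic textbook form, not the paper's):
%   \gamma = adiabatic index, k_B = Boltzmann constant, T = temperature,
%   \mu = mean molecular weight, m_u = atomic mass unit
c_s = \sqrt{\frac{\gamma\, k_B\, T}{\mu\, m_u}}
```

The paper's version would modify this for the composition and conditions of the atmosphere they model; I'm only sketching the standard starting point.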
Very cool! Isn't it amazing what happens when you don't assume bad faith or incompetence on the part of the researchers? You have to like a paper that has a section discussing the precipitation of vaporized iron out of a planet's atmosphere... mindblowing. Of course, that means doing a little homework to actually understand what they already know and avoiding the lazy "me and my preconceived notions and lack of domain knowledge are here to tell you why you're wrong".
Given that it's being done through xAI, it seems reasonable to think that most parties from December's series C would also at least entertain the idea:
> A16Z, Blackrock, Fidelity Management & Research Company, Kingdom Holdings, Lightspeed, MGX, Morgan Stanley, OIA, QIA, Sequoia Capital, Valor Equity Partners and Vy Capital, amongst others. Strategic investors NVIDIA and AMD also participated and continue to support xAI in rapidly scaling our infrastructure.
A law like that would probably be unconstitutional if it applied broadly to speech in general. Compare United States v. Alvarez, where the Supreme Court held that the First Amendment gives you the right to lie about having received military medals.
It might work in more limited contexts, like commercial speech.
> I wouldn't want humans pretending to be bots, for a variety of reasons.
I don't have an opinion yet, but I can't think of a specific reason to object to that (other than a default preference for honesty). Could you give an example or two?
Probably the main risk would be people trusting what they think is an automated system to act for a specific purpose. That, or people saying things they think should stay private. I'm not saying it's actually safe to interact this way with bots, but the trust expectations are different enough.
What you'll find is that most people form a knee-jerk opinion first, most often in opposition to change, then retrospectively seek reasons to justify it.
In other words, people generally cherry-pick evidence for their opinions, rather than picking opinions for their evidence.
A good sign this is occurring is when the reasons provided are vague, when the negative outcome in question is rare, or when the scenarios are hypothetical and rely on companies or people behaving in unlikely and unnatural ways (like ignoring broader incentives).
The result is luddism, and proposals exactly like this one, whereby regulations are proposed even in the absence of meaningful and demonstrable harm.
So compel speech from a person? "Congress shall make no law..." Really, the most basic civics education would benefit us all; I think there are some YouTube videos about this.
https://en.wikipedia.org/wiki/Hacker_News#History