It’s the same thing. Obviously withdrawals and such are different, but the core mechanism (dysregulated reward processing leading to compulsive engagement in the behavior) is exactly the same.
One obvious risk would be receptor desensitization from long-term GLP-1 receptor activation. Imagine type 2 diabetes, but for ghrelin.
To use an analogy: amphetamines have a honeymoon period, and it feels like a lot of people on these weight-loss drugs haven’t been on them long enough to get past that honeymoon period and see what the effects are after 10, 20, or more years.
It's possible. But we've had another GLP-1 medication in use for about a decade and a half now - liraglutide. So far, we haven't seen evidence of that occurring.
I don't think anyone who is both informed and sane would suggest it's impossible that there are negative long-term impacts from taking the medication. Just that we have no current indication of them, and that being afraid of a "what if" without any concrete concern, when the alternative is continuing in one of the riskiest states possible for human health, is silly.
People don't realize that Ozempic is already a third-generation GLP-1 drug, Mounjaro is a fourth, and the next generation of drugs is already in wide-scale clinical trials.
We do in fact know a lot about how these drugs affect people by now, and as you point out, we have well over a decade of data on them.
It's less about the NSA having AI capabilities and more the inverse: the NSA having access to people's ChatGPT queries. Especially if we fast-forward a few years, I suspect people are going to be "confiding" a ton in LLMs, so the NSA is going to have a lot of useful data to harvest. (This is true in general, regardless of them hiring an ex-spook, BTW; I imagine it's going to be just like what they do with email, phone calls, and general web traffic, namely slurping up all the data permanently in their giant datacenters and running all kinds of analysis on it.)
I think the use case here is LLMs trained on billions of terabytes of bulk surveillance data. Imagine an LLM that has been fed every banking transaction, text message, or geolocation ping within a target country. An intelligence analyst can now get the answer to any question very, very quickly.
> I suspect people are going to be "confiding" a ton in LLMs
They won't even need to rely on people using ChatGPT for that if things like Microsoft's "Recall" are rolled out and enabled by default. People who aren't privacy-conscious won't disable it or care.
Because he was a coder working on his startup with no concern for the massive amount of harm he was causing, and many of the denizens of HN can relate to that.
I mostly agree, but I would like to see the full context of the hitman stuff. My understanding is that it was a law enforcement plant that encouraged him to hire a hitman and then referred him to one, who was of course also a plant. Still bad, but it also seems a bit like entrapment.
> I think one solution could be in licenses that force companies/business of certain sizes to pay maintenance fees. One idea from the top of my head.
This just doesn't work. Fully open source software (as opposed to source-available) is so much more useful than the alternative that there will always be an OSS fork of any sufficiently useful project. AFAICT, Elasticsearch and Redis haven't really "won" with their respective license changes; they've just fragmented their own markets and sown the seeds of their eventual destruction.
That seems pretty crazy, although I suppose, to play devil's advocate, the ongoing, erm, 'conflict' was clearly interfering with his ability to get work done ("I can't work. I code for 5 minutes before their bodies come back").
I bet the average homeless person does too. 2% seems ridiculously low: $15 a month total on drugs? That only makes sense for someone who does no opioids and no stimulants, and just smokes one pack of cigarettes and has a single beer across an entire month.