Unfortunately, it looks like the headline numbers may not be representative:
“The research team did not initially receive a complete participant contact list and the CBO staff
led in facilitating recruitment, resulting in a sample that does not represent all DCT+ participants. The limited
sample size further limits the representativeness and generalizability of findings. The evaluation sample of 63
participants represents only 54% of the total 117 program participants. Therefore, the study population may not
adequately represent the broader DCT+ experience. Additionally, participants who completed both initial and
exit surveys may differ systematically from those who did not, potentially skewing results toward more positive
outcomes among individuals who remained engaged throughout the evaluation period.”
It's a youth program run by a youth organization. Young people dealing with family problems due to gender identity, sexuality, etc. are a very large portion of homeless youth. I would guess transgender people are underrepresented at just 18%.
By that standard, it can never be verified because what is running and what is reviewed could be different. Reviewing relevant elements is as meaningful as reviewing all the source code.
Let’s be real: the standard is “Do we trust Meta?”
I don’t, and don’t see how it could possibly be construed to be logical to trust them.
I definitely trust a non-profit open source alternative a whole lot more. Perception can differ from reality, but that's what we've got to work with.
That’s not my experience, using various VPNs, public networks, Cloudflare and Apple private relays. A captcha is common when logged out but that’s about it, I have not encountered any shadow bans. I create a new account each week.
The first name and the second name were both terrible. Yes, the creator could have held firm on "clawd" and forced Anthropic to go through all the legal hoops, but to what end? A trademark exists to protect from confusion, and "clawd" is about as confusing as possible, as if confusing by design. Imagine telling someone about a great new AI project called "clawd" and trying to explain that it's not the Claude they are familiar with, that the word is made up, and that it is spelled "claw-d".
OpenClaw is a better name by far, Anthropic did the creator a huge favor by forcing him to abandon "clawd".
Interesting, I don't read claude the same way as clawd, but I'm based in Spain so I tend to read it as French or Spanish: `claud-e`, with an emphasis on the e at the end. I would read clawd as `claw-d`, with an emphasis on the d, but yes, I guess American English would pronounce them the same way.
Edit: Just realized I have been reading and calling it after Jean-Claude Van Damme all this time. Happy Friday!
I agree that absolute deference to doctors is a mistake and that individuals should be encouraged to advocate for themselves (and doctors should be receptive to it) but I'm not so convinced in this specific case. Why do high blood sugar levels matter? Are there side effects associated with the alternative treatment? Has ChatGPT actually helped you in a meaningful way, or has the doctor's eventual relenting made you feel like progress has been made, even if that change is not meaningful?
In this context, I think of ChatGPT as a many-headed Redditor (after all, Reddit is what ChatGPT is trained on) and treat the information as if it were a well-upvoted comment on Reddit. If you had come across a thread on Reddit with the same information, would you have made the same push for a change?
There are quite a few subreddits for specific medical conditions that provide really good advice, and there are others where the users are losing their minds egging each other on in weird and wacky beliefs. Doctors are far from perfect and often wrong, but ChatGPT's sycophancy and a desperate patient's willingness to treat cancer with fruit feel like a bad mix. How do we avoid being egged on by ChatGPT into forcing doctors to provide bad care? That's not a rhetorical question; I'm curious about your thoughts as an advocate for ChatGPT.
I know what you mean and I would certainly not want to blindly "trust" AI chatbots with any kind of medical plan. But they are very helpful at giving you some threads to pull on for researching. I do think they tend a little toward giving you potentially catastrophic, worst-case possibilities, but that's a known effect from when people were using Google and WebMD as well.
Are you asking why a side effect that is actually an entire health problem on its own, is a problem? Especially when there is a replacement that doesn’t cause it?
Side effects do not exist in isolation. High blood sugar is not a problem if it is solving a much bigger health issue, or is a lesser side effect than something more serious. If medication A causes high blood sugar but medication B has a chance of causing blood clots, medication A is an obvious choice. If a patient gets it in their head that their high blood sugar is a problem to solve, ChatGPT is going to reinforce that, whereas a doctor will have a much better understanding of the tradeoffs for that patient. The doctor version of the x/y problem.
I have type 2 diabetes. Blood sugar levels are a concern for me. I switched medications to one that was equally benign and got my blood sugar levels decreased along with my blood pressure. I don’t know why you assume that the medication I switched to might have higher or worse side effects. That wasn’t a choice I had to make given the options I was presented with.
Look, anyone can argue hypotheticals. But if one reads the comment being discussed, it can be deduced that your proposed hypotheses are not applicable, and that the doctor actually acknowledged the side effect and changed medications leading to relief. Now, if the new medication has a more serious side effect, the doctor (or ChatGPT) should mention and/or monitor for it, but the parent has not stated that is the case (yet). As such, we do not need to invent any scenarios.
The comment being discussed advocates for people to use ChatGPT and push their doctor to follow its recommendations. Even if we assume the OP is an average representation of people in their life, that means half of the people they are recommending ChatGPT to for medical advice are not going to be interrogating the information it provides.
A lazy doctor combined with a patient that lacks a clear understanding of how ChatGPT works and how to use it effectively could have disastrous results. A lazy doctor following the established advice for a condition by prescribing a medication that causes high blood sugar is orders of magnitude less dangerous than a lazy doctor who gives in to a crackpot medical plan that the patient has come up with using ChatGPT without the rigour described by the comment we are discussing.
Spend any amount of time around people with chronic health conditions (online or offline) and you'll realise just how much damage could be done by encouraging them to use ChatGPT. Not because they are idiots but because they are desperate.
As a physician, I can give further insight. The blood pressure medication the commenter is referring to is almost certainly a beta blocker. The effect on blood sugar levels is generally modest [1]. (It is rare to advise someone with diabetes to stop taking beta blockers, as opposed to, say, emphysema, where stopping them is common.)
They can be used in isolation to treat high blood pressure, but they are also used for dual treatment of blood pressure and various heart issues (heart failure, stable angina, arrhythmias). If you have heart failure, beta blockers can reduce your relative annual mortality risk by about 25%.
I would not trust an LLM to weigh the pros and cons appropriately, knowing their sycophantic tendencies. I suspect they are going to be biased toward agreeing with whatever concerns the user initially expresses to them.
> How do we avoid being egged on by ChatGPT into forcing doctors to provide bad care?
I don't ask it leading questions. I ask "These are my symptoms; give me some guidance" instead of "These are my symptoms, I think I have cancer. Could I be right?" If I don't ask leading questions, it keeps the response more neutral.
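As a rough sketch of what I mean (a minimal example assuming the OpenAI Python SDK; the model name and symptom text are placeholders, not medical advice):

    # Neutral vs. leading phrasing for the same symptoms.
    # Sketch only: assumes the OpenAI Python SDK with an API key in the environment.
    from openai import OpenAI

    client = OpenAI()

    symptoms = "persistent headache, blurred vision in the mornings"  # placeholder

    # Open-ended: lets the model enumerate possibilities on its own.
    neutral = f"These are my symptoms: {symptoms}. Give me some guidance."

    # Leading: names a diagnosis up front, which a sycophantic model tends to confirm.
    leading = f"These are my symptoms: {symptoms}. I think I have cancer. Could I be right?"

    for prompt in (neutral, leading):
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)

Comparing the two answers side by side makes the anchoring effect easy to see.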
Yeah, and "I found one paper that says X" is very weak evidence even if you're correctly interpreting X and the specific context in which the paper says it.
The one-child policy only really mattered in the cities; rural China had different rules. There is also no incentive for China to lie; quite the opposite, underreporting their population would be a boon for their success on the global stage: imagine if they were achieving what they achieve with half as many people?
Many companies set up branches and sent IP to China in exchange for access to those billion consumers. Fewer consumers means a company might target India, etc., instead of China first.
Yes, except that China also uses its population as a military threat. A declining number would take away some of that impact, so it always needs to go up to reinforce it.
Does it? Russia has 1/10th the purported population of China, lost most of their military-aged men in a conflict that has exposed Russia's supposed military might as a work of fiction, and yet the West remains scared senseless of Russia because of the nuclear threat. China has nuclear weapons; whether they have 10 million or 100 million men they can send to the front lines to absorb bullets is irrelevant to their national security.
Europe is right to panic. It exposed that Europe was relying only on the US and the general world order. And the US turned out to be unreliable.
Imagine this scenario: tomorrow the Ukrainian front collapses, and Russia rapidly captures a significant part of Ukraine. Then the Ukrainian government gets Venezuela'd by Putin (maybe with Trump's help), and the new government becomes loyal to Moscow.
Then a new charismatic military leader deposes Putin and forms an alliance with the Ukrainian army against Europe. With rhetoric like: "Look, Europe just used you. They never gave you enough weapons to win against Russia but just enough for a stalemate. They were giving Putin hundreds of billions for gas and oil, too afraid of cold weather while you were dying on the front lines. They hoped to kill both of our countries. Now let's join together and show them how the war should be fought properly." And then a battle-hardened 500,000-strong army marches towards Kaliningrad, locking Poland and the Baltics behind the front lines.
It's an exceedingly unlikely scenario. But not impossible. And it's not the _only_ similar scenario anymore. There's also Turkey with a dictator dreaming about writing his name in history books. Serbia is getting more anti-EU.
Mostly in the past, before they were well industrialized. When you had India with over a billion people as a threat, it was a good measure. Now that most of the surrounding countries have fallen below the replacement rate, excess population can cause issues with economic growth in places where resources and space are constrained.
"Freetrade built a profitable trading app, got acquired by IG Group for £160M after targeting a £700M valuation"
That is not an accurate recollection of history. Freetrade raised money at a £700 million valuation at the very top of the market, when money was plentiful. Then the money dried up, they were forced to go from losing money to making money, and by cutting all advertising they were able to just about scrape profitability. At the point of the acquisition, Freetrade either needed more investment to fund more advertising, or to get acquired. After 10 years, a dozen rounds of fundraising, and capital drying up, £160M is a good exit. Freetrade was significantly overvalued at £700 million.
And also, IG Group is a British company, HQ'd in London, traded on the London Stock Exchange. "British stock trading company acquired by British stock trading company" is a pretty boring event.
You could not possibly compare Sydney and London; they are very different. London is a bustling, diverse city; Sydney is a nice (big) town. Sydney is a great place, and London is far from perfect, but they are not in the same conversation. They are different.
The Global Liveability Index essentially highlights the most middle-of-the-road cities. London (as with New York) has huge disparities, and that guarantees it will never rank well on the Global Liveability Index. The people who choose to live in London and love London (as with the people who choose to live in New York and love New York) do not choose it because it is average.
I have lived in London and Sydney and many other cities. I have fallen out of love with London. I would rather live in Sydney than London. I still cannot imagine ever describing Sydney as a better city than London, just as I can't imagine describing Copenhagen as better than London.
Healthcare in London is world class. The city is crowded. The weather is very average for Europe.
People from Sydney who move to London come to hate it, once the novelty wears off, just as they would with New York, because the Australian way of life is very different. Sydney is closer to island life than city life.
Even with London's ongoing decline due to the U.K.'s inexplicable self-sabotage, it still has something to offer.
> People from Sydney who move to London come to hate it, once the novelty wears off, just as they would with New York
Just sharing a different perspective, I'm from Sydney and have lived in all 3 and don't agree with this generalisation. I know plenty of Sydney-raised people who've lived in London or New York for decades, love those places, and don't plan to move back to Sydney any time soon if at all.