I believe this should be lean mass, not total mass. I think people tried to calibrate this metric around total mass since most people don't have scales that can measure composition... but if you're obese, basing it on total mass means consuming more than you need to, which is counterproductive.
I think there was a study a year or so ago that investigated whether protein-rich meals actually made people consume fewer calories, and I think it didn't really, despite the fact that protein feels more satiating and its TEF is also higher than for carbs.
So I think for long-term weight changes it doesn't really help, at least not via its satiety response. Probably more through displacing other stuff from the diet and improving body composition.
Protein is more satiating "if and only if you are not getting enough protein for optimum body recomposition" which Menno in another video puts at 0.8g per lb of body mass.
I truly believe that satiety depends entirely on 1) what you're used to eating and 2) what you expect/your culture. Years ago I watched a video interviewing a guy who owned an international fast-food franchise somewhere in Asia, a burger place like McDonald's. He was saying a big difference between America and wherever they were was that they absolutely, positively MUST serve rice, because in their culture most people don't find burgers satiating: you need the rice, otherwise you're still hungry.
I've never had rice with burgers nor do I have an "Asian eating expectation/culture", but I absolutely do avoid McDonald's and the like because I feel hungry and lethargic shortly after eating there.
However, after a nice home-made burger I won't feel hungry again until the next meal, and I'm full of energy. This isn't a tiny burger, either: I'll usually slap an egg on a 150g patty with some cheese for good measure. Since this is an "I'm too lazy to actually cook" meal, it tends to come with some kind of potatoes. I think the only difference between the two is the quality of the ingredients (added sugar in ketchup = bad; tomatoes are plenty sweet).
I think the difference absolutely comes down to what I eat. I don't put sugar syrup or whatever makes the McDonald's sauces so sweet in my burger, just a basic boiled tomato sauce (so that it's thicker and doesn't make a mess). And typical fast-food places aren't the only ones guilty of this: I've had similar outcomes after eating at "regular" brasseries around Paris, at meals that, on the face of it, wouldn't be considered fast food.
Agree 100%. But lemme tell ya, "protein fluff" made from 150g skim milk, 10g protein powder, and 3g xanthan gum, whipped into a stiff meringue by a stand mixer, is the most satiating thing I've ever eaten, and it isn't even close. It's as if the meringue doesn't collapse back down right away in your stomach, so it's like eating (tasty) closed-cell foam. I used to make it from a full cup of milk but had trouble finishing it. It's crazy filling and a godsend when cutting.
I found it needs to be skim milk. Otherwise the fat in regular milk seemed to prevent the meringue from setting up.
Practically speaking it doesn't matter. Just use your healthy weight (men at 15-20% body fat; women about 8 points higher) and calculate based on that.
If you're at a healthy body-fat percentage, just use your own weight. If you're a bit higher, and can financially and practically afford it, just use your current weight as well. It won't hurt and might actually help a bit.
So it is only a concern for severely obese people. If you are 50+kg overweight, you can scale it down a bit.
Similarly, these obese people shouldn't use the "my current diet minus 500 kcal a day" reduction that is sensible for already-lean bodybuilders. They should just use "my maintenance diet if I were at a healthy weight".
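A quick sketch of that arithmetic in Python, using the 0.8 g/lb figure from the Menno video mentioned upthread; the example weights are invented:

```python
# Protein target based on a healthy reference weight, not current total mass.
# 0.8 g per lb of body mass is roughly 1.76 g per kg.
G_PER_KG = 0.8 * 2.20462  # convert 0.8 g/lb to g/kg

def protein_target_g(healthy_weight_kg: float) -> float:
    """Daily protein target computed from a healthy reference weight."""
    return healthy_weight_kg * G_PER_KG

# Example: someone currently at 120 kg whose healthy weight would be
# ~80 kg should target ~141 g/day, not the ~212 g/day that total mass implies.
print(round(protein_target_g(80)))   # ~141
print(round(protein_target_g(120)))  # ~212
```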
Yes! The "lean mass" caveat is oft ignored by bro scientists, and even LLMs have incorporated the error due to training on bro science forums.
I use this as a bit of a canary. If you see somebody making this basic mistake (like the post you're replying to did), you should be highly skeptical of their other claims too.
I have never felt less confident in the future than I do in 2025... and it's such a stark contrast. I guess if you split things down the middle, AI probably continues to change the world in dramatic ways but not in the all or nothing way people expect.
A non-trivial number of people get laid off, likely due to a financial crisis that is used as an excuse for companies to scale up their use of AI. There's a good chance the financial crisis was partly caused by AI companies, which ironically makes AI cheaper as infra is bought up on the cheap (so there is consolidation, but the bountiful infra keeps things cheap). That results in increased usage (over a longer period of time), and even when the economy starts coming back, the jobs numbers stay abysmal.
Politics is divided into two main groups: those who are employed, and those who are retired. The retired group is VERY large and has a lot of power. They mostly care about entitlements. The working-age people focus on AI, which is making the job market quite tough. There are three large political forces (but two parties): the Left, the Right, and the Tech Elite. The left and the right both hate AI, but the tech elite, though a minority, has outsized power in its tie-breaker role. The age distributions would surprise most: most older people are now on the left, and most younger people are split by gender. The right focuses on limiting entitlements, and the left focuses on growing them by taxing the tech elite. The right maintains power by not threatening the tech elite.
Unlike in the 20th century, America has a more focused global agenda. We're not policing everyone, just the core trading powers. We have not gone to war with China, and China has not taken over Taiwan.
Physical robotics is becoming a pretty big thing, and space travel is becoming cheaper. We have at least one robot on an asteroid mining it. The yield is trivial, but we all thought it was neat.
Energy is much, much greener, and you wouldn't have guessed it... but it was the data centers that got us there. The Tech Elite needed it quickly, and used their political connections to cut red tape and build really fast.
We do not currently have the political apparatus in place to stop the dystopian nightmares depicted in movies and media. They were supposed to be cautionary tales. Maybe they still can be, but there are basically zero guardrails in non-progressive forms of government to prevent massive accumulations of power being wielded in ways most of the population disapproves of.
That's the whole point of democracy: to prevent the ruling parties from doing wildly unpopular things. Unlike a dictatorship, where they can do anything (including good things that otherwise wouldn't happen in a democracy).
I know that "X is destroying democracy, vote for Y" has been a prevalent narrative lately, but is there any evidence that it's true? I get that it's death by a thousand cuts, or "one step at a time" as they say.
> I know that "X is destroying democracy, vote for Y" has been a prevalent narrative lately, but is there any evidence that it's true? I get that it's death by a thousand cuts, or "one step at a time" as they say.
I suggest reading [1], [2], and [3]. From there, you'll probably have lots of background to pose your own research questions. According to [4], until you write about something, your thinking will be incomplete, and I tend to agree nearly all of the time.
[4]: "Neuroscientists, psychologists and other experts on thinking have very different ideas about how our brains work, but, as Levy writes: “no matter how internal processes are implemented, (you) need to understand the extent to which the mind is reliant upon external scaffolding.” (2011, 270) If there is one thing the experts agree on, then it is this: You have to externalise your ideas, you have to write. Richard Feynman stresses it as much as Benjamin Franklin. If we write, it is more likely that we understand what we read, remember what we learn and that our thoughts make sense." - Sönke Ahrens, How to Take Smart Notes, p. 30
Hard disagree; I'm in the process of deploying several AI solutions in healthcare. We have a process a nurse usually spends about an hour on, which costs $40-$70 depending on whether they are offshore and a few other factors. Our AI can match it for a few dollars, often less. A nurse still reviews the output, but it takes way less time. The economics of those tokens are great. We have another solution that just finds money: $10-$30 in tokens can find hundreds of thousands of dollars. The tech isn't perfect (that's why we still have a human in the loop), but it's more than good enough to do useful work, and the use cases are valuable.
It's true, but do you really trust the AI generated + Nurse Review output more than Organic Nurse generated?
In my experience, management types use the fact that AI generated + Nurse Review is faster to push a higher quota of forms generated per hour.
Eventually, from fatigue or boredom, the human in the loop just ends up being a rubber stamper. Would you trust this with your own or your children's life?
The human in the loop becomes a lot less useful when they're pressured to meet a quota against an AI that's basically stochastic "most probable next token" generation, aka a professional bullshitter, literally trained to produce plausible outputs with no commitment to accurate ones.
It works because we are in a health care crisis and the nurse doesn't have anything close to enough time to do a good job.
It is really one of the few great examples of something LLMs are good for in an economic sense.
In a different industry, such inefficiency would have been put out of business.
It is a unique economic condition that makes LLMs valuable. It makes complete sense.
To the wider economy, though, it is hard to ignore the unreasonable uselessness of LLMs, which points to some kind of fundamental problem with the models that is unlikely to be solved by scaling.
We need HAL to solve our problems but instead we have probabilistic language models that somehow have to grow into HAL.
These same questions could be asked about self-driving cars, but they've been shown to be consistently safer drivers than humans. If this guy is getting consistently better results from AI+human than from just humans, what would it matter if the former results in errors, given that the latter produces more errors and costs more?
If the cars weren't considerably safer drivers than humans, they wouldn't be allowed on the road. There isn't as much regulation blocking the deployment of this healthcare solution... until those errors actually start costing hospitals money in malpractice lawsuits (or don't), we won't know whether it will be allowed to remain in use.
You can't compare an LLM's output with a self-driving car. That's the flaw of using the term AI for everything: it brings two completely different technologies onto an artificially level playing field.
TFA's whole point is that there is no easy way to tell if LLM output is correct or not. Driving mistakes provide instant feedback on whether the output of whatever AI is driving is correct. Bad comparison.
Many of the things that LLMs will output can be validated in a feedback loop, e.g., programming. It's easy to validate the generated code with a compiler, unit tests, etc. LLMs will excel in processes that can provide a validating feedback loop.
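A minimal sketch of that kind of loop, assuming a generate_code stand-in for the LLM call and pytest as the validator (both are placeholders, not anyone's actual pipeline):

```python
import subprocess

def generate_code(prompt: str) -> str:
    """Stand-in for an LLM call; returns candidate source code."""
    ...

def validated_generation(prompt: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(prompt + feedback)
        with open("candidate.py", "w") as f:
            f.write(code)
        # The validator: run the test suite against the candidate.
        result = subprocess.run(
            ["python", "-m", "pytest", "tests/"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return code  # tests pass: accept the output
        # Feed the failure back so the next attempt can correct it.
        feedback = f"\n\nTests failed:\n{result.stdout[-2000:]}"
    return None  # never validated: don't trust the output
```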
I love how everyone thinks software is easy to validate now. Like seriously, do you have any awareness at all about how much is invested in testing software by the likes of Microsoft, the game studios, and any other serious producers of software? It's a lot, and they still release buggy code.
I trust it a lot. In our tests, the times a human nurse picked up on something the AI missed were pretty rare. The times the AI found something the nurse missed were common, almost the majority.
That might not be relevant to OPs use case. A lot of nurses get tied up doing things like reviewing claims denials. There’s good use cases on the administrative side of healthcare that currently require nurse involvement.
I think they were referring to the costs of training and hosting the models. You're counting the cost of what you're buying, but the people selling it to you are in the red.
Wrong. OpenAI is literally the only AI company with horrific financials. You think Google is actually bleeding money on AI? They are funding it all with cash flow and still have monster margins.
OpenAI may be the worst, but I am pretty sure Anthropic is still bleeding money on AI, and I would expect a bunch of smaller dedicated AI firms are too; Google is the main firm with competitive commercial models at the high end across multiple domains that is funding AI efforts largely from its own operations (and even there, AI isn’t self sufficient, its just an internal rather than an external subsidy.)
Dario has said many times over that each model is profitable if viewed as a product that had development costs and operational costs just like any other product from any other business ever.
What that means, and whether it means much of anything at all depends on the assumed “useful life” of the model used to set the amortization period assumed for the development costs.
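A toy illustration of how much that assumption matters; every number below is invented:

```python
# Hypothetical model economics. All figures are made up for illustration.
dev_cost = 1_000_000_000             # one-time training/development cost ($)
monthly_revenue = 80_000_000         # inference revenue ($/month)
monthly_inference_cost = 30_000_000  # serving cost ($/month)

for useful_life_months in (12, 24, 48):
    amortized_dev = dev_cost / useful_life_months
    monthly_profit = monthly_revenue - monthly_inference_cost - amortized_dev
    print(useful_life_months, round(monthly_profit / 1e6), "M$/month")
# 12-month life -> ~-33 M$/month (a loss)
# 24-month life -> ~ +8 M$/month (profitable)
# 48-month life -> ~+29 M$/month (comfortably profitable)
```

Same model, same costs; whether it "is profitable as a product" flips entirely on how long you assume it stays useful.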
> You think google is actually bleeding money on AI? they are funding it all with cash flow and still have monster margins.
They can still be "bleeding money on AI" if they're making enough in other areas to make up for the loss.
The question is: "Are LLMs profitable to train and host?" OpenAI, being a pure LLM company, will go bankrupt if the answer is no. The equivalent for Google is to cut its losses and discontinue the product. Maybe Gemini will have the same fate as Google+.
It appears very much not. There has been some suggestion that inference may be "profitable" on a unit basis, but that's ignoring most of the costs. When factoring everything in, most of these look very much upside down.
While there is demand at the moment, it's also unclear what the demand would be if the prices were "real", aka what it would take to run a sustainable business.
Those sound like typical bootstrap-sized workflow optimization opportunities, which are always available but have a modest ceiling on both sales volume and margin.
That's great that you happened to find a way to use "AI solutions" for this, but it fits precisely inside the parents "tech wise, I'm bullish" statement. It's genuinely new tech, which can unearth some new opportunities like this, by addressing many niche problems that were either out of reach before or couldn't be done efficiently enough before. People like yourself should absolutely be looking for smart new small businesses to build with it, and maybe you'll even be able to grow that business into something incredible for yourself over the next 20 years. Congratulations and good luck.
The AI investment bubble that people are concerned about is about a whole different scale of bet being made; a bet which would only have possibly paid off if this technology completely reconfigured the economy within the next couple years. That really just doesn't seem to be in the cards.
Folks were super bullish tech-wise on the internet when it was new, and that turned out to be correct. It was also correct that the .com bubble wiped out a generation of companies, and those that survived took a decade or more to recover.
The same thing is playing out here… tech is great and not going away but also the business side is increasingly looking like another implosion waiting to happen.
I hope someone develops an AI that can do your job at a few dollars often less. That would be great, wouldn’t it? The economics of those tokens is great. It would be a solution that just finds money.
Of course, you can still be in the loop to double-check its work, but no worries, you can do it part-time.
1) welp, I hope this is not my healthcare provider.
2) Do you realize the cost fallacy just extends one level deeper? Once those pennies in tokens become hundreds of dollars, your nurses become cheaper again.
3) "Our AI", come on. What exactly are you using? this is a technical forum.
But seriously, I find it helps to set a custom system prompt that tells Gemini to be less sycophantic and to be more succinct and professional while also leaving out those extended lectures it likes to give.
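For what it's worth, here's roughly how that looks with the google-generativeai Python SDK's system_instruction parameter; the prompt wording is just an example of mine, not anything official:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

# The system_instruction parameter is where the custom prompt goes;
# tune the wording to taste.
model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "Be succinct and professional. Do not flatter the user, "
        "do not restate the question, and skip extended lectures "
        "and caveats unless explicitly asked for them."
    ),
)
response = model.generate_content("Summarize the tradeoffs of X.")
print(response.text)
```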
There is no way that's what they meant. 50k is an absurdly large dose that's way outside the safe intake range. 10k is used sometimes under medical supervision and even then it's a very short term measure. For long term intake, 4000IU is a widely accepted safe upper limit. 50k is an order of magnitude more than that.
There's plenty of documentation of people taking 50k for a period of time and having no side effects. There's been something like a dozen trials using high doses like this to treat TB, and they're usually successful, with no significant negative symptoms.
Conversely, some studies have shown that 4k IU does contribute to hypercalcemia in a small number of cases (4 per 1000). So actually 4k is deemed "not completely safe" as a limit.
The point is, the amount you take needs to be adjusted by a clinician, as the safe range for you is unknowable otherwise.
I took a blood test several weeks ago, my Vitamin D level was 14 ng/ml. I was so fatigued there were times I had to lay on my office floor because I didn't even have the energy to sit in my chair. I started taking 50k IU's weekly and then 10k IU's daily, and the results were dramatic. I went from having 0 energy to nearly normal. I also had soreness in my legs which went away.
As an architect, I feel like a large part of my job is to help my team be their best, but I'm also focused on the delivery of a few key solutions. I'm used to writing tasks and helping assign them to members of the team, while occasionally picking up the odd piece of work myself, focusing more on architecture and helping individual members when they get stuck or when problems come up. But with the latest coding agents, I'm always thinking in the back of my head: "I can get the AI to finish this task 3x quicker, and probably at better quality, if I just do it myself with the AI." We sit in scrum meetings sizing tasks, and I'm thinking "bro, you're just going to paste my task description into the AI and be done in half an hour", but we size it at a day or two.
" Blockchain is probably the most useless technology ever invented "
Actually, AI may be more like blockchain than you give it credit for. Blockchain feels useless to you because you either don't care about or don't value the use cases it's good for. For those who do, it opens a whole new world they eagerly look forward to. As a coder, it's magical to describe a world and then see AI build it. As a copyeditor, it may be scary to see AI take my job. Maybe you've seen it hallucinate a few times, and you just don't trust it.
I like the idea of interoperable money legos. If you hate that, and you live in a place where the banking system is protected and reliable, you may not understand blockchain. It may feel useless or scary. I think AI is the same. To some it's very useful, to others it's scary at best and useless at worst.
You need legal systems to enforce trust in societies, not code. Otherwise you'll end up with endless $10 wrench attacks until we all agree to let someone else hold our personal wealth for us in a secure, easy-to-access place. We might call it a bank.
The end state of crypto is always just a nightmarish dystopia. Wealth isn't created by hoarding digital currency, it's created by productivity. People just think they found a shortcut, but it's not the first (or last) time humans will learn this lesson.
I call blockchain an instantiation of Bostrom's Paperclip Maximizer running on a hybrid human-machine topology.
We are burning through scarce fuel in amounts sufficient to power a small developed nation in order to reverse engineer... one-way hash codes! That is literally even less value than turning matter into paperclips.
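For anyone who hasn't seen it spelled out, the "work" is literally a brute-force hash search; here's a toy version in Python (the difficulty prefix is chosen to be trivially small):

```python
import hashlib

def mine(block_data: bytes, difficulty_prefix: str = "0000") -> int:
    """Brute-force search for a nonce whose SHA-256 digest starts with
    the required zero hex digits. This is all the 'work' in
    proof-of-work: guessing until a hash happens to be small enough."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + str(nonce).encode()).hexdigest()
        if digest.startswith(difficulty_prefix):
            return nonce
        nonce += 1

print(mine(b"block header bytes"))
# Real networks do quintillions of these guesses per second, 24/7,
# and the only output is a number that makes a hash start with zeros.
```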
If gold loses its speculative value, you still have a very heavy, extremely conductive, corrosion resistant, malleable metal with substantial cultural importance.
When crypto collapses, you have literally nothing. It is supported entirely and exclusively by its value to speculators who only buy so that they can resell for profit and never intend to use it.
Well, not literally nothing. You have all that lovely carbon you burned to generate meaningless hashes polluting your biosphere for the next century. That part stays around long after crypto collapses.
The “$10 wrench attack” isn’t an argument against crypto—it’s an argument against human vulnerability.
By that logic, banks don’t work either, since people get kidnapped and forced to drain accounts. The difference is that with crypto, you can design custody systems (multi-sig, social recovery, hardware wallets, decentralized custody) that make such attacks far less effective than just targeting a centralized bank vault or insider.
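To illustrate the multi-sig point, here's the threshold logic in toy form; a real implementation verifies cryptographic signatures rather than counting names, so treat this purely as a sketch:

```python
# Toy 2-of-3 multi-sig: funds move only if at least `THRESHOLD` of the
# designated holders approve. A single stolen key (or one coerced
# holder with a $10 wrench) is not enough on its own.
HOLDERS = {"hardware_wallet", "phone", "trusted_friend"}
THRESHOLD = 2

def authorize(approvals: set[str]) -> bool:
    valid = approvals & HOLDERS  # ignore approvals from unknown keys
    return len(valid) >= THRESHOLD

print(authorize({"phone"}))                     # False: one key isn't enough
print(authorize({"phone", "hardware_wallet"}))  # True: 2 of 3
```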
As for the “end state” being dystopian, history shows centralized finance has already produced dystopias: hyperinflations, banking crises, mass surveillance, de-banking of political opponents, and global inequality enabled by monetary monopolies. Crypto doesn’t claim to magically create productivity—it creates an alternative infrastructure where value can be exchanged without gatekeepers. Productivity and crypto aren’t at odds: blockchains enable new forms of coordination, ownership, and global markets that can expand productive potential.
People now have the option of choosing between institutional trust and cryptographic trust—or even blending them. Dismissing crypto as doomed to dystopia ignores why it exists: because our current systems already fail millions every day.
What they are saying is that we have a system that evolved over time to address real-world concerns. You are designing defenses against attacks that may or may not materialize, but no one has ever been able to design their way past criminals; this is evident because, if we could, there would be no criminality.
> Dismissing crypto as doomed to dystopia ignores why it exists: because our current systems already fail millions every day.
This only makes sense if crypto solves the problems that current systems fail at. That has not been shown to be the case despite many years of attempts.
Do you have any proof to support this claim? Stablecoin use alone accounts for tens (possibly hundreds by now) of billions of dollars in daily transactions globally. I'd be interested to hear the source for your claim.
First of all, you are the one who stated this as a fact, and then provided only anecdotal evidence in support of the broad claim. Your singular limited experience in one country cannot blindly extend to all such countries, so the onus is on you to provide support for your claim.
____________
I was also under the impression that adoption was fairly strong in many of these regions, and after looking into it, I see far more evidence in favor of that than a single anecdotal claim on a discussion board...
>Venezuela remains one of Latin America’s fastest-growing crypto markets. Venezuela’s year-over-year growth of 110% far exceeds that of any other country in the region. -Chainalysis
>Cryptocurrency Remittances Spike 40% in Latin America -AUSTRAC
>Crypto adoption has grown so entrenched that even policymakers are rumored to be considering it as part of the solution. -CCN
>By mid-2021, trading volumes had risen 75%, making Venezuela a regional leader. -Binance
_______
It actually wouldn't surprise me if most of this was hot air, but certainly you have actual data backing up the claim, not just an anecdotal view?
Using relative growth is one of the favorite tricks of hucksters that allows them to say "100% growth over the last year" to hide the fact of it growing from $5 to $10.
I don't really care enough about this to do proper research, but as another anecdote from someone living under 40% yearly inflation: nobody here gives a shit about cryptocurrencies. Those who can afford it buy foreign stock, houses and apartments; those who cannot, buy up whatever USD and EUR we can find.
Cryptocurrency was used by very few people for short-term speculation around 5 years ago, but even that died down to nothing.
It may not be the absolute most useless, but it's awfully niche. You can use it to transfer money if you live somewhere with a crap banking system. And it's very useful for certain kinds of crime. And that's about it, after almost two decades. Plenty of other possibilities have been proposed and attempted, but nothing has actually stuck. (Remember NFTs? That was an amusing few weeks.) The technology is interesting and cool, but that's different from being useful. LLM chatbots are already way more generally useful than that and they're only three years old.
Gambling! That's actually the number one use case by far, far beyond e.g. buying illicit substances; regular money is much better for that.
Matt Levine (of Money Stuff fame) came up with another use case in a corporate setting: in many companies, especially banks, their systems are fragmented and full of technical debt. As a CEO it's hard to get workers and shareholders excited about a database cleanup. But for a time, it was easy to get people fired up about blockchain. Well, and the first thing you have to do before you can put all your data on the blockchain, is get all your data into common formats.
Thus the exciting but useless blockchain can provide motivational cover for the useful but dull sounding database cleanup.
(Feel free to be as cynical as you want to be about this.)
This is so true, from 2018-2021 my internal banking product was able to use blockchain hype to clean up a lot of our database schema. Our CTO was rubber stamping everything with the words blockchain and our customers were beating down the door to throw money at it.
Well that's hilarious. I wonder if LLMs might have a similar use case. I fear they tend to do the opposite: why clean up data when the computer can pretend to understand any crap you throw at it?
"I'm not the target audience and I would never do the convoluted alternative I imagined on the spot that I think are better than what blockchain users do"