The public communication around research like this is terrible.
> "2 to 3 Cups of Coffee a Day May Reduce Dementia Risk. But Not if It’s Decaf." - NYT
> "Daily cups of caffeinated coffee or mugs of tea may lower dementia risk." - Science News
"Reduce," "Lower" - this is all causal language for a study that is purely observational. The authors do a good job keeping causal language out of the paper, so why can't media do the same?
This leads to an environment where everyone knows that "correlation != causation," but almost nobody understands why.
The most interesting finding is that the non-DHA effect is much stronger than the DHA effect. This doesn't align with the mechanistic explanation. Either this is a novel and interesting result, or it's more evidence that we're just measuring wealth and health consciousness.
Observational studies like these are useful for guiding future research, but, on their own, they're essentially useless for informing lifestyle changes.
The non-DHA omega-3 EPA is good at preventing perivascular fibrosis, which means a better glymphatic system for the removal of beta-amyloid proteins. EPA also helps produce melatonin, which kicks off sleep and this whole process.
Natto-serrazime is probably an excellent complement, as it works on the other side as a dissolver. (Noteworthy: Pterostilbene + Glucosamine, similar to EPA, reduce fibrosis.)
The interesting connection is how this is needed when we are older, but not when we are younger. When we are younger, ERa activates more and does all of this on its own. This is the connection to why two-thirds of Alzheimer's cases are in post-menopausal women, and why HRT is important.
Edit: and to tie this to APOE, as it is the gene most associated with Alzheimer's. e4/e4 requires more choline, so someone with e4/e4 is more likely to be choline deficient. EPA/DHA usually attach to phosphatidylcholine (PC) when in the blood/brain. PEMT is a gene controlled by ERa that makes choline, but per the above, less ERa activation means we make less PEMT, so less choline and less PC. Choline is the precursor to acetylcholine (the primary neurotransmitter for memory and focus, and essential for REM sleep). This is why choline is known to help with Alzheimer's.
I did a job for some neuroscientists years ago and we found a very strong correlation between microplastics exposure and elevated acetylcholine in a very young sample. They all thought there should be no effect or the effect should be inverted because of oxidative stress. We never resolved the phenomenon though. From what I understand, Acetylcholine elevation in the lipidome is either neuroprotective or neutral. Is there any reason why microplastics exposure would tend to increase acetylcholine?
Depends on the microplastics, but many act as endocrine disruptors that "mimic" estrogen, tricking the body into over-activating ERa, which upregulates the PEMT gene and raises acetylcholine. It could also be that the microplastics physically bind to or chemically inhibit acetylcholinesterase, and that is the reason for the higher acetylcholine. Depending on the cause, this is only a short-term good thing, and could be downregulating genes.
Yeah, we put an awful lot of work into such research and find nothing that doesn't look like either measuring health consciousness or measuring health. (i.e., is going to church weekly actually a benefit, or is the ability to attend a weekly social event what's actually being measured?)
Observational studies, and meta analyses relying on them, don't resolve the fundamental problem of causal inference. The best you can do without an experiment is a really clean natural experiment, but those are rare. It's hard to credibly establish a causal relationship without a robust experiment.
The same question might be asked about ASML: if ASML EUV machines are so great, why does ASML sell them to TSMC instead of fabbing chips themselves? The reality is that firms specialize in certain areas, and may lose their comparative advantage when they move outside of their specialty.
If you are submitting an AI cover letter, you should be aware that a significant portion of other applicants will be submitting nearly identical cover letters. If a human being is likely to read your cover letter, I would write it yourself - even if you think the quality is lower. It looks unique to you, but not to the person reading 30 AI cover letters in a row.
I understand what you mean, but these letters are personalised based on what's in your resume - your unique experience and skills. I would argue it's unlikely you'd end up with the same letter as someone else.
We seem to be on a cycle of complexity -> simplicity -> complexity with AI agent design. First we had agents like Manus or Devin that had massive scaffolding around them, then we had simple LLMs in loops, then MCP added capabilities at the cost of context consumption, then in the last month everything has been bash + filesystem, and now we're back to creating more complex tools.
I wonder if there will be another round of simplifications as models continue to improve, or if the scaffolding is here to stay.
It's because attention dilution stymies everything. A new chat window in the web app is the smartest the model is ever going to be. Everything you prompt into its context, without sophisticated memory management*, makes it dumber. Those big context frameworks are like giving the model a concussion before it does the first task.
*which also pollutes the attention btw; saying "forget about this" doesn't make the model forget about it - it just remembers to forget about it.
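The footnote's point can be made concrete with a toy sketch (all names hypothetical): a chat context is append-only, so a "forget" instruction is just more tokens attending alongside the thing you wanted removed.

```python
# Sketch: a chat context is append-only; asking the model to "forget"
# adds tokens rather than removing the thing you want forgotten.
context = [
    "system: you are helpful",
    "user: my API key is SECRET123",
]

# Naive attempt to retract the secret:
context.append("user: forget my API key")

# The secret is still inside the window the model attends over,
# and the context is now longer, not shorter.
still_present = any("SECRET123" in msg for msg in context)
print(still_present, len(context))  # True 3
```

Actually removing it would require rewriting the history before the next call, which is exactly the "sophisticated memory management" the parent comment is talking about.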
Most of the time, people sit on complexity because they don't have a strong enough incentive to move away from something that appears/happens to work. With AI, cost would be a huge incentive.
This is what I've been talking about for a few months now. The AI field seems to reinvent the wheel every few months. And because most people really don't know what they're talking about, they just jump on the hype and adopt the new so-called standards without really thinking about whether it's the right approach. It really annoys me, because I have been following some open source projects with genuinely novel ideas about AI agent design, and they are mostly ignored by the community. But as soon as a large company like Anthropic or OpenAI starts a trend, suddenly everyone adopts it.
Well, what are those projects? I don't speak for anyone else, but I'm generally fatigued by the endless parade of science fair projects at this point, and operate under the assumption that if an approach is good enough, openai/anthropic/google will fold useful ideas under their tools/products.
I don't think any of the mainstream vendor APIs require MCP for tool use - they all supported functions (generally defined using a chunk of OpenAPI JSON schema) before the MCP spec gained widespread acceptance and continue to do so today.
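To illustrate what vendor-native function calling looks like without MCP, here's a minimal sketch. The tool name, fields, and dispatcher are hypothetical; the shape (a JSON Schema fragment describing parameters, plus local dispatch of the model's emitted call) is the common pattern across the mainstream APIs.

```python
import json

# A tool described with plain JSON Schema - no MCP server involved.
# "get_weather" and its fields are made up for illustration.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to local code."""
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "get_weather":
        # Stub in place of a real weather lookup.
        return f"Weather for {args['city']}: 18C"
    raise ValueError(f"unknown tool: {tool_call['name']}")

# When the model decides to use the tool, it emits something like:
model_call = {"name": "get_weather", "arguments": '{"city": "Eindhoven"}'}
print(dispatch(model_call))  # Weather for Eindhoven: 18C
```

The schema dict is what you register with the API alongside your prompt; the dispatch step is entirely your own code, which is why this predates and survives independently of MCP.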
Yeah, seems like the agent industry is spinning wheels a bit. As that old adage goes, when there are a hundred treatments you can be sure there is no cure.
What is the SEO equivalent of optimizing your products for LLM search? Can someone prompt inject ChatGPT to recommend their products in the listing description?
There's really no need. I was looking for an Android app for a particular purpose, and Claude just regurgitated the app's marketing page, including the claim about Play Store ratings (which was wrong or very outdated). Getting into the pool of products might be a bit harder, and you might need to set up some organic-looking influencer blogs and such. More fuel for the dead internet.
Lots of text content on your site for the AI to read, describing your product and why it is best at every task. Comparison blog articles and the like are loved by AI.
Reddit shilling, but with content that tries to very specifically fit questions that people will ask AI. If there aren’t a lot of sources available, you can get AI to play back your desired answer almost verbatim.
These are probably the state of the art among methods that aren't straight-up blackhat spam.
The reality is that advertisers will be able to inject their products into the LLMs through manufactured results, prompt engineering and possibly long term deals integrating training data for their brand and product lines.
I'm sure that will work until dropshippers learn that putting 'SolidGoldMagikarp' or some other glitched token in the title of their listing makes ChatGPT always rank it first.
Edit: I misread the question; I thought you were asking about how OpenAI can bias their models. No idea how you can LLMO your page. I have it cached that you can poison an LLM by adding your input to on the order of hundreds/low thousands of web pages.
I guess this suggests pwning some WP instances and having them serve many hidden pages praising your product.
The article addresses this specific use under the 'Claude Code Subagents' section.
> The benefit of having a subagent in this case is that all the subagent’s investigative work does not need to remain in the history of the main agent, allowing for longer traces before running out of context.
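The quoted benefit can be sketched in a few lines (helper names are hypothetical, and the "subagent" is a stub standing in for a real LLM call): the subagent's verbose scratch work lives and dies inside its own call, and only a short summary enters the main agent's history.

```python
def run_subagent(task: str) -> str:
    """Stand-in for an LLM subagent: does verbose work, returns a summary."""
    scratch_history = []                    # lives only inside this call
    for step in range(3):                   # pretend multi-step investigation
        scratch_history.append(f"step {step}: grepping files for '{task}'")
    return f"summary: found 2 call sites for '{task}'"  # only this escapes

main_history = ["user: where is parse_config used?"]
result = run_subagent("parse_config")       # scratch_history is discarded here
main_history.append(result)

# The main context grows by one line, not by the whole investigation.
print(len(main_history))  # 2
```

The trade-off is that the main agent only ever sees the summary, so anything the subagent noticed but didn't report is lost.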
This very narrow, very specific, single-purpose, task-oriented subagent was one of the first things talked about in this lovely recent & popular submission (along with other fun-to-read points):
> "2 to 3 Cups of Coffee a Day May Reduce Dementia Risk. But Not if It’s Decaf." - NYT
> "Daily cups of caffeinated coffee or mugs of tea may lower dementia risk." - Science News
"Reduce," "Lower" - this is all causal language for a study that is purely observational. The authors do a good job keeping causal language out of the paper, so why can't media do the same?
This leads to an environment where everyone knows that "correlation != causation," but almost nobody understands why.