> Groups of people don't have guilt or automatic responsibility, only individuals do.
This is not exactly correct. Governments are composed of groups of people, and governments maintain continuity of responsibility even when every person in the government has changed. Black Americans were discriminated against as a matter of state and federal policy, and thus both state and federal governments are at fault for the treatment of Black Americans. There is clear precedent in American law that in such cases harmed individuals and groups are due financial compensation. Compensation for government abuse of power is an incredibly old precedent; the only reason Black Americans haven't been compensated for the discrimination they've suffered is that continuing racism, the tradition of discrimination, and the sheer size of the problem have turned what should be a fairly straightforward legal case into a complicated political question.
When do we stop holding people responsible for the sins of their great-great-grandfathers?
I don't hold a grudge against the British for press-ganging my great^9 grandfather into involuntary naval service. He was taken because he was Irish and near the waterfront at the wrong time. How was his kidnapping, forced labor, and whipping with a cat-o'-nine-tails materially any different from other forms of slavery? When he escaped by jumping ship and running as far inland as he could, he was risking beating, whipping, and execution if he was ever caught. Is that any different from what runaway slaves faced?
Should I get a payoff from the UK because they wronged my family by taking my great^9 grandfather away from his home, family, and job? They systematically oppressed the Irish for centuries too. His descendants have been mostly poor laborers, only edging into the middle class with my parents' generation.
I don't think I deserve anything; that would be ridiculous.
How about my sons? They are 1/16 Black on their mother's side, even though they look to all appearances to be of blond-haired northern European stock. Do they get any special payments for their ancestors' oppression? If not, what percentage of your ancestors need to be from a single oppressed group to be owed restitution by the rest of taxpaying society?
Oh, but it was my government shitting on other racial groups. But I didn't vote for them. I usually vote third party, and those candidates never win despite having better policy. Yet I am to be held responsible for crimes my ancestors didn't commit (mostly because they were too poor, or hadn't emigrated to America yet) and for government policy put in place either before my time or by representatives I voted against.
Sure, they suffered, but why should I be penalized, by taking more of the income I use to feed my family and thus increasing their suffering, as reparation to someone I haven't done anything to?
For historical injustices, it is untenable to discuss paying reparations. Hopefully more recent and/or future instances are resolved early among the actual perpetrators and victims. We should instead focus our efforts on actual problems (for African-Americans, there are many legitimate grievances; I've laid out my ideas for reform elsewhere). An issue money can't quite solve is mending people's perceptions of the government or of the victimized group by others, which is usually the case due to rampant stereotypes (e.g., Muslims and 9/11). Hopefully, the fact that the government would be funding genuine avenues for progress and enforcing policies for race-blind evaluations (why isn't this happening already?!) would slowly mend those wounds over time.
(I realize I'm focusing on the US; this can surely be applied to other countries too.)
Your last line is valid. Now (in the US, at least), where might massive and grossly unnecessary sums of wealth be found to be taxed? Oh yeah, billionaires. A progressive tax and elimination of loopholes such as charities are a start. If the US really is the land of opportunity, why not see if kids of rich people can work to the top themselves? They probably already have an advantage in schooling and whatnot, so inheritance should be capped heavily, or even cut entirely.
> Black Americans were quite clearly discriminated against as a group.
This is clearly true, but it also understates what really happened. Black Americans were discriminated against on an individual level. Ruby Bridges is still alive, for goodness' sake! She's only 68 years old. It isn't hard to estimate financial damages due to the Jim Crow era and it's extremely easy to figure out who was harmed due to explicit government policy.
> It isn't hard to estimate financial damages due to the Jim Crow era and it's extremely easy to figure out who was harmed due to explicit government policy.
Do you have any reading materials, off the top of your head, that you can point me towards on this?
I've done many database migrations from DB2 to MySQL. It's a thing that happens quite a lot. ORMs, and Java's use of database access layers over the underlying SQL drivers, vastly simplified the process.
If misleading statements were the problem, then people would be a lot angrier about the blatant lies and propaganda large oil companies pushed for decades.
Dishonesty is the problem, but not in the way you suggest.
This is actually a very interesting point. I wonder what kind of answers we would get if we asked the same questions of humans in different age groups. At which age do people begin to realize there is something awkward in the scenario? How would a typical adult react to a less common social faux pas?
I was the lead on migrating several systems from traditional deployment to a cloud provider for a major company. Part of the migration was switching a Microsoft DB to MySQL. The Java ecosystem and ORMs make this so easy it's barely worth talking about. And that wasn't the only time DB migrations have come up.
Not sure if that's enough to justify ORMs, but DB migration is a real use case and there are things you can do ahead of time to greatly simplify the process.
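As a sketch of why an access layer helps (hypothetical code, not any particular ORM's API): if every query goes through a dialect object, a DB2-to-MySQL switch is confined to one class, and call sites never change. Table and column names here are made up for illustration.

```python
# Hypothetical sketch of a database-access layer isolating dialect-specific
# SQL. Swapping DB2 for MySQL means swapping one object, not every query.

class Db2Dialect:
    def limit_clause(self, n):
        return f"FETCH FIRST {n} ROWS ONLY"   # DB2's traditional row-limiting syntax

class MySqlDialect:
    def limit_clause(self, n):
        return f"LIMIT {n}"                   # MySQL's equivalent

def recent_orders_query(dialect, n):
    # Application code is written once against the dialect interface.
    return (f"SELECT id, total FROM orders ORDER BY created DESC "
            f"{dialect.limit_clause(n)}")

print(recent_orders_query(Db2Dialect(), 5))
print(recent_orders_query(MySqlDialect(), 5))
```

Real ORMs (Hibernate, JPA providers, etc.) generalize this idea to type mapping, pagination, and identifier quoting, which is why the migrations above were mostly a configuration change.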
What is often missed is that chimeric viruses are easy to detect. The viral genome will show clear evidence of manipulation, from random base insertions and clear homology with all the ancestral viruses. Hiding the signs of manipulation would require either vast amounts of time and resources (the expense and manpower would make it very difficult to hide) or straight-up science-fiction technology. The chimeric-origin hypothesis is not a plausible explanation for the origin of SARS-CoV-2, which means the Nature link is not relevant.
The other lab-leak hypothesis is that a specimen collected and cultured by scientists infected a lab employee, and this patient zero then transmitted the virus to others. This is a plausible option, and it is being researched. However, it is less plausible than wild transmission based on a simple numbers game. Which is more likely: a breakout infection caused by one of a dozen scientists specifically trained and equipped against this possibility, or a transmission to one of the millions of other people who routinely interact with these bat populations? Both are possible, but one is much more likely. Before COVID-19, WIV had published research indicating that novel coronaviruses routinely jump from bats to humans in that part of the world. Most of these viruses don't last in human hosts, but it's clear it was only a matter of time before something nasty got through. After all, it had already happened once before.
The real nail in the coffin is that research[0] has shown there were at least two independent transmissions of SARS-CoV-2 to humans. For this to happen as part of a lab leak, WIV would have had to find and cultivate two different strains of SARS-CoV-2, and then each of those strains would have had to escape the lab.
Now, the two distinct genomic lineages do seem to present a challenge to the lab-leak hypothesis. The original study[0] addresses whether the second lineage, B, could have come from A by intra-host evolution: based on the molecular clock of the virus, a single-introduction origin of the pandemic from lineage A can be ruled out.
Have you looked at Pekar's full model, as described mostly in the supplementary materials? A typical molecular clock approach wouldn't give anywhere near the accuracy necessary to exclude evolution of lineage B (just two SNPs away) in humans. Pekar instead builds layer upon layer of complexity, with dozens of reasonable but somewhat arbitrary judgment calls, in the same general direction as econometrics. From the shape of the resulting modeled phylogenetic tree, he purports to exclude a single introduction into humans.
I'm not aware of any case where any similar model has been shown to have predictive power, and there's inherently no way to validate this one against any physical data. So I believe this result has been grossly oversold, per my comments and links at
> A typical molecular clock approach wouldn't give anywhere near the accuracy necessary to exclude evolution of lineage B (just two SNPs away) in humans
You're ignoring other data which is counter to the idea of B evolving from A in humans. Pekar's models are not the only evidence.
- Early cases were predominantly B
- A shows less genetic divergence than B; this is what Pekar is talking about with regard to the discontinuity in the early clock.
When we first started discussing this, I spoke up because I was annoyed by you trashing peer-reviewed papers when it was obvious you weren't even attempting to grok the phylogenetics involved. Still annoyed.
It's been genuinely interesting watching the scientific debate over how to root the SC2 tree these past few years, because of the paradoxes involved.
"Just a few SNPs" is just such a silly argument when stacked against peer-reviewed phylogenies in high-impact publications.
Have you looked at Pekar's full numerical stack yourself, as described in their supplemental materials? If yes, then why are you confident that their choice of the Barabasi-Albert algorithm to generate a fixed infection network correctly models the earliest spread of SARS-CoV-2 in humans? In particular, why choose to study robustness against doubling time (which seems intuitively like it wouldn't affect the shape of the tree much), but not robustness against that connectivity (which seems intuitively like it would)?
The rest of their arguments depend fundamentally on the polytomy thing, because nothing else excludes an earlier (even September) first introduction into humans. With an earlier introduction and thus more extensive unsampled spread, it's much harder to insist that A and B would be first sampled in the same order in which they evolved in humans, or make any similar early claims with confidence.
You are correct that I hadn't fully understood their polytomy argument before you brought it up, and I appreciate you bringing it to my attention. I still don't think it's very good, though. I later found Erik van Nimwegen's criticisms, which roughly followed my own; so I don't think I'm taking a fringe position here. Indeed, I've never seen anyone citing or defending Pekar engage in any way with the numerical complexity of that model. It seems like anyone who's looked inside the box becomes a critic, thus my hope that you'll do so.
High-impact publications have shown unfortunate willingness to publish low-quality work that would exclude research-related origin of SARS-CoV-2. For example, I assume you followed Nature's publication, editor's note, and ultimate extensive correction of their pangolin paper, and that you agree pangolins aren't the proximal host. This makes me less inclined to trust in their reviewers here, and more inclined to trust my own judgment (or that of the two Twitter threads I've linked elsewhere).
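For reference, the Barabási-Albert preferential-attachment scheme mentioned above is simple to sketch (a minimal toy version, nothing like the paper's actual implementation): new nodes attach to nodes in proportion to their current degree, which is exactly what produces the heavy-tailed connectivity in question.

```python
import random

def barabasi_albert_degrees(n, m=2, seed=0):
    # Preferential attachment: each new node links to m distinct existing
    # nodes, sampled in proportion to current degree via a degree-weighted list.
    rng = random.Random(seed)
    repeated = list(range(m))   # node ids, one entry per incident edge
    degrees = [0] * n
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(repeated))
        for target in chosen:
            degrees[target] += 1
            degrees[new] += 1
            repeated.extend([target, new])
    return degrees

degrees = barabasi_albert_degrees(2000)
mean = sum(degrees) / len(degrees)
print(f"mean degree ~{mean:.1f}, max degree {max(degrees)}")
```

The max degree dwarfs the mean: a few hubs dominate. The open question raised above is whether that particular degree distribution, with that fixed connectivity, is the right stand-in for early SARS-CoV-2 spread.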
> In particular, why choose to study robustness against doubling time (which seems intuitively like it wouldn't affect the shape of the tree much)
As I understand it, the doubling times observed in the simulations were primarily the result of the ascertainment and transmission rate parameters.
Care to elaborate why you think the robustness of the model with respect to transmission rate should be assumed? I don't share your intuition here, and note that the authors observe, "that sensitivity analyses with longer doubling times increase the support for multiple introductions."
You really fault them for robustness analysis here?
To be clear I don't fault them for studying robustness against doubling time; I fault them for not studying robustness against connectivity of the infection network, since that seems like it would be more important than any of the parameters that they did study. My intuition is that when spread is highly deterministic (e.g. if R0 = 2 and each patient infects exactly two others), it's easy to make inferences about past spread from the present. For example, in that case it really would be near-impossible for a later lineage to outcompete an earlier one.
But we know the spread of SARS-CoV-2 is actually stochastic, with most lineages dying out but a few exploding due to super-spreader events. In that case it's much harder to judge whether a clade is big because it had more generations to grow, or just big because of a few (un)lucky founder effects. In Pekar's epi simulation, that stochasticity is modeled by their connectivity network. I expect that a more overdispersed network (i.e. greater variance in the number of edges at each vertex, keeping the same average) would make non-modal outcomes--like the real pandemic's phylogeny, if it arose from a single introduction--more likely.
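To make that intuition concrete, here's a toy branching-process comparison (my own illustration; it is nothing like Pekar's actual model). With deterministic spread, a lineage seeded one generation later can never overtake the earlier one; with the same mean offspring count but a heavy-tailed, super-spreader-style distribution, it frequently does:

```python
import random

rng = random.Random(0)

def lineage_size(offspring, generations):
    # Total cases produced by a simple branching process.
    active, total = 1, 1
    for _ in range(generations):
        active = sum(offspring() for _ in range(active))
        total += active
        if active == 0 or total > 5000:   # lineage died out, or cap runaway growth
            break
    return total

def p_later_lineage_bigger(offspring, trials=500):
    # How often does a lineage seeded one generation later end up larger?
    wins = 0
    for _ in range(trials):
        early = lineage_size(offspring, generations=8)
        late = lineage_size(offspring, generations=7)  # one generation behind
        wins += late > early
    return wins / trials

deterministic = lambda: 2                             # every case infects exactly 2
overdispersed = lambda: rng.choice([0, 0, 0, 0, 10])  # mean 2, but heavy-tailed

print("deterministic:", p_later_lineage_bigger(deterministic))
print("overdispersed:", p_later_lineage_bigger(overdispersed))
```

Under the deterministic rule the later lineage never wins; under the overdispersed rule it wins a substantial fraction of the time, because founder effects, not seeding order, dominate. That's the sense in which assumed dispersion drives what inferences the model permits.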
Their results of the simulations are stochastic. They discuss this in-depth, as it complicates their analysis.
I don't understand what you're trying to say. Everyone agrees that the spread is stochastic. Why are you starting with a hypothetical misinterpretation of an R value to make a deterministic strawman? You think that their simulations were too deterministic because of their connectivity network?
> -like the real pandemic's phylogeny, if it arose from a single introduction-
> You think that their simulations were too deterministic because of their connectivity network?
Yeah, pretty much; and it's what other critics, including well-credentialed mathematical biologists, are saying too. There's a continuum of dispersion, with my perfectly-deterministic strawman at the left extreme but extending to infinity. Their power-law network adds some dispersion, but how do we know it's enough? I believe they chose that distribution because it's been shown to fit some real data (including the spread of HIV) reasonably well; but how do we know it fits the early spread of SARS-CoV-2, in the earliest lineages of the virus with unknown biology, in an unknown group of people with unknown behaviors?
I don't know how to root the phylogeny, and I'm mistrustful of anyone who claims they can based on the limited information available. Anyone who's built and attempted to validate mathematical models knows that sometimes, there's simply not enough information to confidently reach any useful conclusions. Absent validation of the approaches used here (e.g. evidence that they've successfully made predictions in the past in similar situations), I believe that's our situation here.
> because nothing else excludes an earlier (even September) first introduction into humans. With an earlier introduction and thus more extensive unsampled spread, it's much harder to insist that A and B would be first sampled in the same order in which they evolved in humans
The tMRCA clearly excludes an earlier introduction. Because the tMRCA is based on genetic diversity, you cannot calculate a tMRCA based on all the known samples, get a date, and then say "oh, geez- well, there was also wide cryptic spread before that." It just doesn't make sense. Pekar addresses this point directly.
A race between the first A and the first B is a strawman. Rather, it's the predominance of lineage B over A in the early pandemic which is interesting. It would be unexpected for lineage B to dominate if A came first. Much of the modeling is to get a handle on how unlikely that situation would be. It shouldn't be surprising that the models don't support it as being likely. (But, that's not the only evidence.)
If you're willing to actually think about and engage on the phylogeny - stop with the "just a few SNPs" nonsense, and ask yourself what you really think the early origins looked like. If it really was a single introduction - Was lineage A ancestral? Was B ancestral? A C/C ancestor? A T/T ancestor? All these have interesting problems being supported by the data.
Finally, after reading some of your earlier comments, I'm realizing that you're conflating several techniques from Pekar's paper, eg:
> Have you looked at Pekar's full model, as set out mostly in the supplementary materials? This isn't any standard molecular clock approach. It's a byzantine stack of plausible but somewhat arbitrary assumptions, ending in a simulated phylogenetic tree.
His epi simulations are separate from the tree-building, with the possible exception of rooting, which he was using the output of the models to inform. Otherwise, the epi modeling which everyone is hand wringing over is really separate and doesn't end "in a simulated phylogenetic tree."
There /are/ novel methods used in the tree building (eg, non-reversibility of base substitutions), but that's a whole separate technique.
> Essentially Pekar's argument is a "two introductions of the gaps"--that if their model of a single introduction doesn't conform to reality, then it must have been two introductions.
BS. Again - understanding the paradoxes and debate involved in rooting the tree is basically required to understand the importance of this paper. The existing data is confounding and didn't conform to a logical understanding of viral evolution. A separate introduction elegantly explains the existing evidence.
If their modeling isn't strong enough evidence for you, fine. But that's different than throwing everything out because you don't understand how "just a couple SNPs" can still provide sufficient resolution to make phylogenetic inferences possible. If you think that "just a couple SNPs" /don't/ provide enough for experts in the field to inform their phylogenies, at least get to that argument directly instead of throwing ignorant shade at an unrelated portion of the paper.
Thanks for the links to those other threads. Nod's was interesting, but AFAICT, way off-base, starting around "Needless to say, early winter in Wuhan is not the Mardi Gras."
Here's Pekar's earlier thread which I recently reread and found helpful for understanding the significance of the phylogeny (#20 is where he gets into how lineage A breaks the clock):
I think you're talking about their model in "Inferring the MRCA of SARS-CoV-2", and I'm talking about their model in "Separate introductions of lineages A and B"? So you're saying they don't use the epi simulations to root and build the phylogenetic tree of real sampled genomes, which is true. I'm saying they do use the epi simulations to build a phylogenetic tree for each simulated pandemic, whose shape (polytomy structure) they then compare against the real tree:
> We simulated SARS-CoV-2–like epidemics (22, 23) with a doubling time of 3.47 days [95% highest density interval (HDI) across simulations, 1.35 to 5.44] (24–26) to account for the rapid spread of SARS-CoV-2 before it was identified as the etiological agent of COVID-19 (figs. S21 and S22, tables S3 and S4, and supplementary text). We then simulated coalescent processes and viral genome evolution across these epidemics to determine how frequently we recapitulated the observed SARS-CoV-2 phylogeny.
Coverage of this paper in the popular press usually said something like "study finds that SARS-CoV-2 arose from two introductions into humans", so I thought the latter was the more important result and started there. Like in your second link, Worobey says:
> [...] We then go on the explain, point by point, that it is not a two-mutation difference that is unexpected. It is a two mutation difference between two large clades like lineage A and lineage B, each displaying a MASSIVE polytomy at their root. This is something that [sic] DO NOT see in ~99.5% of simulations. That is the crux of the paper. Not the idea that two mutations can't happen in a single transmission event.
Are those "simulations" not the SIR-type epi simulations (followed by simulation of the mutations and sampling, then construction of the tree)? I believe his 99.5% is 100% minus the 0.5% from Figure 2C.
Their former model is of course independent of their SIR stuff, and indeed purports to independently establish tMRCA in humans too recent for significant cryptic spread. It carries a different set of plausible but arbitrary assumptions though, again about the stochasticity/overdispersion and sampling rate of early spread, just less directly.
Glad we're on the same page about the multiple techniques now. Statements you made like, "Pekar et al. do some complicated phylogenetic modeling that purports to show the MRCA in humans is too recent" and "This isn't any standard molecular clock approach. It's a byzantine stack of plausible but somewhat arbitrary assumptions" made it clear there was confusion before. Their tree is based on a couple of novel modifications to established techniques. Your characterizations were inaccurate and laughable.
> It carries a different set of plausible but arbitrary assumptions though, again about the stochasticity/overdispersion and sampling rate of early spread, just less directly.
So, you don't only have problems with the modeling of the authors, but their base phylogeny too? Do you reject their tMRCA? Good grief.
I'm still looking forward to discussing the molecular phylogenetics of this paper sometime.
On reflection, I believe the first of my statements that you've quoted was indeed incorrect, and that I was also incorrect when I just wrote:
> Their former model [...] purports to independently establish tMRCA in humans too recent for significant cryptic spread.
Even if SARS-CoV-2 really entered humans in December, with minimal cryptic spread, that's still enough time for the two lineages to evolve in humans, since they're (sorry) just two SNPs apart. I believe Worobey knows this, and that's the reason why he emphasizes the "Separate introductions" model, since their polytomy thing--and not any question of time for cryptic spread--is their best and only argument to exclude that. So I was wrong to mention the tMRCA at all, since even perfect knowledge of that wouldn't tell us confidently how the two lineages arose.
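To put rough numbers on that timing claim (my own back-of-envelope figures, not the paper's: a commonly quoted clock rate of about 1e-3 substitutions per site per year, and the ~29.9 kb Wuhan-Hu-1 reference genome):

```python
# Back-of-envelope: how long does two SNPs of divergence take under a
# typical SARS-CoV-2 molecular clock? Illustrative numbers, not Pekar's.
rate_per_site_per_year = 1.0e-3    # approx. clock rate widely quoted for SARS-CoV-2
genome_length = 29903              # Wuhan-Hu-1 reference genome length (nt)

subs_per_genome_year = rate_per_site_per_year * genome_length
days_for_two_snps = 2 / subs_per_genome_year * 365

print(f"~{subs_per_genome_year:.0f} substitutions per genome per year")
print(f"~{days_for_two_snps:.0f} days, on average, to accumulate two SNPs")
```

That works out to roughly a month, so on clock grounds alone a December introduction leaves time for the two-SNP separation; the argument has to rest on something else, which is why the polytomy structure carries the weight.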
The second of my statements seems correct to me. Not only is their argument for two introductions not a standard molecular clock approach, but it's not a molecular clock approach at all, since "Inferring" provides no support. Their only support comes from the polytomy thing in "Separate". This makes the accuracy of their epidemiological simulation highly relevant, thus the "hand-wringing" over that.
I'd note that you yourself referred me to "Separate", back in:
So why did you switch to "Inferring"? I guess we could discuss that too, but per above I don't believe that could provide significant support for two introductions into humans, and thus not for natural vs. research-related origin. Do you believe otherwise? Or do you just mean the approach is of general interest, independently of that question of origin?
> Not only is their argument for two introductions not a standard molecular clock approach, but it's not a molecular clock approach at all, since "Inferring" provides no support
Okay, let's revisit this now that some of the terminology confusion is recognized.
"Inferring the MRCA of SARS-CoV-2" introduces their phylogenies. It was produced with BEAST as described in their methods. I believe this is the model you were referring to as "Inferring." Yes?
I don't understand what you're trying to say here. If you don't understand how their phylogeny helps support their theory of multiple introductions, I don't know what to tell you. Maybe just another clarification of what you're trying to say would help.
> I'd note that you yourself referred me to "Separate", back in ... So why did you switch to "Inferring"
Because we're discussing multiple things in the same paper?
> Even if SARS-CoV-2 really entered humans in December, with minimal cryptic spread, that's still enough time for the two lineages to evolve in humans, since they're (sorry) just two SNPs apart.
This isn't the evidence the authors present. The argument isn't "there isn't enough time to go from A -> B." IIRC, I've seen acknowledgements that even rarer mutation sets have been observed in a single transmission over the course of the pandemic. They're just highly improbable.
The most direct evidence (as I see it) for B not evolving from A in humans is the unexpected lack of genetic divergence in lineage A compared to B. Lineage B should show a younger molecular clock, but it doesn't.
> I believe Worobey knows this, and that's the reason why he emphasizes the "Separate introductions" model, since their polytomy thing--and not any question of time for cryptic spread--is their best and only argument to exclude that. So I was wrong to mention the tMRCA at all, since even perfect knowledge of that wouldn't tell us confidently how the two lineages arose.
Nonsense. The tMRCA is key evidence for how the lineages arose. One of the reasons for the epi modeling was to figure out the plausible time between the primary case and the index case. It shows there were at most a few dozen people infected before the genetic diversity was captured through sampling. (`Results: Minimal cryptic circulation of SARS`)
I don't think you understand their argument here, at all.
> Not only is their argument for two introductions not a standard molecular clock approach, but it's not a molecular clock approach at all, since "Inferring" provides no support
Please elaborate on why you think their use of the molecular clock is novel. It's really not.
> Do you believe otherwise? Or do you just mean the approach is of general interest, independently of that question of origin?
As explained above, I think the authors provide compelling evidence of multiple introductions using solid phylogenetic inference and solid molecular epidemiology. Bottom line is that there simply isn't an alternate hypothesis which explains the available evidence, and they illustrate why.
Here's a video you might not have seen, with Pekar and Wertheim. I've cued up the portion with a great explanation of why the evidence in the MRCA and genomics is so important. If you're going to continue to try and tear down their arguments, you probably want to really get this part.
I think I understand what Worobey and Pekar write on Twitter, though I disagree with much of it. I don't understand what you're saying, so I'm afraid we're still talking past each other.
Do you agree that there are two mostly-independent models in the paper, one described in the section titled "Inferring the MRCA of SARS-CoV-2", and another in the section titled "Separate introductions of lineages A and B"? When I write "Inferring" and "Separate", I am referring to the models described in the sections with titles beginning with those respective words.
You wrote earlier:
> His epi simulations are separate from the tree-building, with the possible exception of rooting, which he was using the output of the models to inform. Otherwise, the epi modeling which everyone is hand wringing over is really separate and doesn't end "in a simulated phylogenetic tree."
As to "Separate", I believe that's incorrect. That model begins with an SIR-type simulation, and outputs the shape (polytomy structure) of the phylogenetic tree of that simulated pandemic, which they compare against the shape of the real pandemic's phylogenetic tree. Do you disagree? If so, what do you believe is the output of that "Separate" model?
I agree that the "Inferring" model does not depend on the epidemic simulation. I don't believe the "Inferring" model provides significant support for two introductions though. I believe that's the reason why most public debate has been about "Separate".
Yeah, I think we're basically on the same page with their methodology and models now.
I didn't realize you were nicknaming the models after the result-section titles, so I was quite confused, especially since we both used those words in the quoted sections; it sounded like you were referring to portions of our conversation. So yeah, talking right past each other.
No, the two models don't correspond to the results cleanly. ie, when the authors claim "Separate introductions of lineages A and B" in the results, they provide evidence from both. (They're presenting the results of the models in support of their phylogeny.) I agree that "Inferring the MRCA of SARS-CoV-2" is pretty much independent of the epi stuff.
> As to "Separate", I believe that's incorrect. That model begins with an SIR-type simulation, and outputs the shape (polytomy structure) of the phylogenetic tree of that simulated pandemic, which they compare against the shape of the real pandemic's phylogenetic tree. Do you disagree? If so, what do you believe is the output of that "Separate" model?
I thought we were past this. We both agree that one of the results of the epi simulations was sampled genetics and a resulting tree from the simulation. That doesn't mean their phylogeny is the direct result of their epi simulations. Their simulations are in support of their phylogeny. Their theorized phylogeny essentially existed prior to the modeling, which is why I called them separate, i.e., independent.
The `Materials and methods summary` is quite clear, especially `Phylodynamic inference and epidemic simulations`.
edit: Our thread is too deep for HN; you might not be able to reply? I'll try to keep an eye out for new replies if you want to fork off somewhere else.
But, where's your horse in this race? You speak a lot about what you think sucks and very little about what you actually believe here.
> I agree that the "Inferring" model does not depend on the epidemic simulation. I don't believe the "Inferring" model provides significant support for two introductions though. I believe that's the reason why most public debate has been about "Separate".
Funny. My theory is that most people don't have enough knowledge of molecular genetics to make heads or tails of the paper, and so are of course silent on those results. They didn't follow the debate over the past few years, and are showing up and trying to understand something without context or the requisite knowledge.
When you say "Public debate" you need to admit you're talking about a particular part of a particular website or two where a small number of people are picking at nits and can't even address the core of the findings the authors present here.
We're making some progress, at least. I believe this site rate-limits deep threads, but doesn't cut them off entirely.
So I guess we were also talking past each other on "Separate". By "simulated phylogenetic tree", I've always meant "phylogenetic tree for one of their simulated pandemics", not a tree for the real pandemic. We also agree that Pekar's argument isn't based on the time necessary for the two lineages to evolve in humans, since at least that much difference could arise even (with p ~ 10%) in a single human-to-human transmission.
So to exclude evolution of the two lineages in humans, they needed something else. Loosely, that's the observation that (stochasticity of spread aside) we'd expect the earlier lineage A to have more and more diverse descendants than the later lineage B. Their epi model in "Separate" is a formalization of that, and if they could correctly and confidently model that spread then I believe it would be sound.
It seems like we disagree as to what forms the paper's core result, though. I'm taking my own cue from Worobey's Twitter comments, because (a) he's an author, so he presumably should know better than most, and (b) while I disagree with his conclusion, I do see the flow of his argument. In the thread that you linked and I quoted, he describes the result of that "Separate" model--which fundamentally depends on the epi stuff--as the crux of the paper. That makes sense to me.
I believe you prefer to think in terms of construction of the phylogenetic tree for the real pandemic, i.e., to frame the question of the number of introductions in terms of the number of roots for the tree. That's in a certain sense equivalent, but it seems much less intuitive to me. The "Separate" approach makes the epidemiological assumptions explicit. Those assumptions are obviously always relevant though, so they're still relevant when you frame the problem in terms of the real tree; they're just much harder to express in the parameters (R0, serial interval, dispersion parameter k, etc.) typically used to model a pandemic.
When they built the real tree, they observed that any single root fits badly. (Per your other comment, I agree that's what they did in "Inferring" with BEAST.) More roots would fit better; but that's always true for any phylogeny unless there's a penalty for each additional root, since more roots improves all the other usual measures of fit. Without quantifying what that penalty per additional root should be, it's not possible to say whether the poor fit is because the tree really should have two roots, or for other reasons (unmodeled stochasticity of spread, imperfect sampling, etc.). It's not too easy to convert those pandemic parameters into that penalty. So it makes sense to me that they didn't try, and instead switched to the SIR-type simulations in "Separate", which they're treating as their most important result.
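To make the "penalty per additional root" point concrete, here's a minimal model-selection sketch with entirely hypothetical numbers (the log-likelihoods and the parameter count per root are made up for illustration): raw likelihood always improves with more roots, so an AIC-style penalty, or something like it, is needed before the improvement means anything.

```python
# Minimal sketch of the "penalty per root" idea, with made-up numbers.
# Raw log-likelihood improves monotonically with extra roots, so by
# itself it always prefers more roots; an AIC-style penalty
# (2 * number of free parameters) can flip that preference.
log_lik = {1: -1000.0, 2: -990.0, 3: -988.5}  # hypothetical fits
PARAMS_PER_ROOT = 5  # hypothetical free parameters added per root

def aic(n_roots):
    return 2 * PARAMS_PER_ROOT * n_roots - 2 * log_lik[n_roots]

best_by_likelihood = max(log_lik, key=log_lik.get)  # always the most roots
best_by_aic = min(log_lik, key=aic)                 # depends on the penalty
```

The hard part, as noted above, is that converting epidemiological parameters into a principled penalty is not easy, which is presumably why they went to simulation instead.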
As I've noted earlier, I don't believe it's possible to reach any confident conclusion (as to research-related vs. natural origin, the number of introductions into humans, or most of the other topics of major contention) from the evidence currently available. I'd have little objection to this paper if it were framed as exploratory work, whose speculative conclusions should not be trusted without further verification. That's not how Worobey and others have portrayed it in the popular media, though, and also not how you've initially portrayed it here.
I think it might be productive to dive in on this part:
> Loosely, that's the observation that (stochasticity of spread aside) we'd expect the earlier lineage A to have more and more diverse descendants than the later lineage B. Their epi model in "Separate" is a formalization of that, and if they could correctly and confidently model that spread then I believe it would be sound.
Yeah, that's the observation. However, you're invoking the epi model at the wrong time. If you read `Inferring the MRCA...`, all of this is already known and observed before the modeling is even run. The epi model doesn't contain these results. They constructed their SC2 tree, then brought it over to the epi model to play with it.
If you want a "formalization" of that observation, perhaps Table I will do.
The results are best read in order.
If you're trying to better understand the phylodynamic model, perhaps "Inference of Viral Evolutionary Rates from Molecular Sequences" by Drummond would be interesting.
I think you're failing to appreciate the reason why they built the "Separate" model. Their headline claim is that SARS-CoV-2 arose from two zoonotic introductions into humans. If you want to express that claim in terms of the real pandemic's tree, then the relevant tree is the tree in humans only, which would then have two roots.
The construction of such a tree inherently depends on our assumptions on the epi dynamics. For example, if you give me a hundred genomes and I propose a hundred roots, then that wouldn't usually be a very good tree; but if the disease in question were known to spread animal-to-human but not human-to-human, then that might be correct. Nothing in their "Inferring" model allows them to incorporate such obviously relevant information, so that seems like an obvious deficiency.
To put it another way, you write:
> If you read `Inferring the MRCA...`, all of this is already known and observed before the modeling is even run.
After "Inferring", I believe they know the real tree has structure that's obviously non-modal (i.e., not the most likely outcome) given any single introduction. I don't see how they'd know whether it's a p = 20% non-modal or p = 0.5% non-modal outcome without an epi model like "Separate", or some kind of ugly incorporation of the epi dynamics into BEAST that they wisely didn't attempt.
I believe that's why the authors built "Separate", and its basic form is good work. (If you don't, then why do you think they spent their time on that?) I just disagree with their parameter choices and excessive confidence in their result.
As to your other reply, I agree the 10% is a rough number, not considering mutation biases and such. That's just the probability in a single transmission though, and it's also possible (and more likely) that the two lineages formed in humans with intermediate lineages that went extinct before they could be sampled. I think we at least agree that timing alone is insufficient to exclude evolution of the two lineages in humans though, even assuming a December introduction? I'm just trying to confirm that none of the evidence you see for two introductions in "Inferring" comes from its tMRCA.
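For what it's worth, here's the back-of-envelope shape of that ~10% figure as I understand it: a Poisson sketch under assumed round numbers (~1e-3 substitutions/site/year clock rate, a ~30 kb genome, a ~6-day serial interval), which deliberately ignores the mutation biases and direction effects conceded above.

```python
import math

# Back-of-envelope sketch of the ~10% figure. All inputs are assumed
# round numbers; site effects, mutation direction, and transition bias
# are deliberately ignored.
subs_per_site_per_year = 1e-3   # assumed SARS-CoV-2-like clock rate
genome_sites = 30_000           # ~30 kb genome
serial_interval_days = 6        # assumed mean transmission interval

mean_subs = subs_per_site_per_year * genome_sites * serial_interval_days / 365

# Poisson probability of >= 2 substitutions in a single transmission
p_two_or_more = 1 - math.exp(-mean_subs) * (1 + mean_subs)
# With these inputs, mean_subs is ~0.5 and p_two_or_more is ~0.09
```

Under those assumptions the number lands near 9%, which is why I treat 10% as a rough order-of-magnitude figure rather than anything precise.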
Sorry; maybe I'm too stupid or lazy, but I genuinely don't get your point. Is it just that when they construct the tree in "Inferring", it looks qualitatively surprising (non-modal) given any single introduction, assuming (as I do as well) that A predates B? But we've known that for literally years now. As I understand the paper, their novel contribution is to quantify how surprising that looks, whether it's p ~ 20% surprising (which wouldn't mean much) or their claimed p ~ 0.5%. That's what they do in "Separate", and it correctly and inherently depends on the epidemiological modeling that I don't trust.
Again, in the Twitter thread that you yourself linked, Worobey says:
> This [the real polytomy structure] is something that [we] DO NOT see in ~99.5% of simulations. That is the crux of the paper.
The simulations in question are the epidemiological simulations from "Separate". You've told me to disregard Worobey's comments here; but while it's possible that Worobey has misunderstood the significance of his own paper, it seems more likely to me that you have.
> (with p ~ 10%) in a single human-to-human transmission.
That math is absolute garbage. For one, the odds of a C/T -> T/C double mutation in a single transmission for the clade-defining markers aren't the same as T/C -> C/T, so at the very least you need to state an ancestral lineage to do any math like this. It also doesn't take into account the different priors for reversions, synonymous mutations, and the C-T transition bias in humans.
> When they built the real tree, they observed that any single root fits badly.
No. Go read the paper again. ("Our unconstrained rooting strongly favors a lineage B or C/C ancestral haplotype...") It's when you try and root in lineage A that things go sideways.
> I believe you prefer to think in terms of construction of the phylogenetic tree for the real pandemic, like to frame the question of number of introductions in terms of the number of roots for the tree.
> More roots would fit better; but that's always true for any phylogeny unless there's a penalty for each additional root,
No, it's not multiple roots, they just place the likely MRCA of SARS-CoV-2 in animals. ("If lineages A and B arose from separate introductions...") It's one tree. With one root. However, that root is in an animal instead of a human.
You can calculate the MRCA for any portion of the tree, including the descendants from the two+ hypothesized introductions. This MRCA is distinct from the SARS-CoV-2 MRCA. Is this what you mean by multiple roots?
> It seems like we disagree as to what forms the paper's core result, though. I'm taking my own cue from Worobey's Twitter comments
If you're trying to understand the paper's core result, read the paper, not twitter.
The first paragraph of `Discussion` frames the crux of the argument I was trying to get across. Notice that they cite the paradox I'm trying to get you to understand, and cite genomic diversity as core evidence, as opposed to any argument about the exact timing of A and B samples or the unlikelihood of multiple mutations.
Most of the public discourse on the current problems of capitalism is not serious. Many folks aren't actually comparing capitalism to an alternative; instead they're comparing their current situation to a mythical alternative reality. This is exacerbated by the fact that Marx himself and other communist/socialist authors make similar mistakes. The whole Marxist obsession with "alienation" is a perfect example. They are largely delusional about the plight of the working class in non-capitalist systems.
Workers in socialist systems are inundated with propaganda in ways that would make the most ardent Fox News producer blush. They don't experience alienation between their work and their non-work life; they experience alienation between the life in their head and life in the physical world. Similarly, workers in a feudal system also experience fear and domination at the hands of a system that vests in them little power or autonomy.
> Workers in socialist systems are inundated with propaganda in ways that would make the most ardent Fox News producer blush.
This ignores the fact that not all workers in our economy work for a for-profit capitalist entity. There are non-profits, there are co-ops, and there are even state corporations and institutions that employ millions of people. I’m not aware of any propaganda these workers are exposed to which workers in for-profit capitalist organizations aren’t.
In fact, the for-profit organizations I’ve worked for have held many, many mandatory “meetings” whose only purpose seems to be to tout the superiority of that corporation and spout propaganda about how much better it is to work there. The state-provided jobs I’ve worked at don’t do this.
> The whole marxist obsession with "alienation" is a perfect example. They are largely delusional about the plight of the working class in non-capitalist systems.
You make two claims here. You provide some examples of the second claim in the second paragraph; for the first one, do you have any justification for why obsessing over alienation is bad?
More precisely, do you agree or disagree with the premise that alienation exists (in some form) in the capitalist system? If you agree, do you think workers would be better off if they were not alienated?
If you don't agree that alienation exists, how would you describe/judge modern IP rights and corporate hierarchy structures?
Would you say it's a good or bad thing that all of an employee's work product (during and outside of office hours) belongs to the company (assuming you accept my premise that this is enforced)?
I'm concerned that you've selectively ignored parts of my comment and have read meaning out of it that I did not put into it.
>do you have any justification for why obsessing over alienation is bad?
Obsessing over alienation is bad for Marxists (and good for capitalists). As I said, Marxists are not being serious (maybe "credible" is a better word here) when using alienation to critique capitalism. Of the economic systems in discussion, capitalism has the least alienation. Marxist solutions are either pure fantasy or have been tried and led to worse outcomes, and the other socio-economic systems from history are also worse than capitalism. In other words, Marxist concerns with alienation are hypocritical.
>do you think workers would be better off if they were not alienated?
Again, I'm discussing the Marxist use of alienation and how they undercut themselves when discussing it.
> Would you say it's a good or bad thing that all of an employee's work product (during and outside of office hours) belongs to the company (assuming you accept my premise that this is enforced)?
Nothing in my comment can be taken as arguing one way or another on this topic. However, given that you've decided to focus on the goodness/badness of alienation, it sounds like it's important to you. How do you feel about alienation?
> Of the economic systems in discussion, capitalism has the least alienation
is it true though?
I believe it's never been measured by anybody and you're only speculating here.
> Marxist solutions are either pure fantasy, or have been tried and lead to worse outcomes
If that were true, why was the most capitalist power in the world in recent history so scared of them that it went to war against them and used every dirty trick in the book to replace them with dictators or puppets (sometimes they were literally Nazis...)?
> how they undercut themselves when discussing it.
You keep saying it, but the "how" is not clear to me.
It looks to me your knowledge of Marxism is incomplete.
Marx was impressed by capitalism; he simply thought that capitalism was detrimental to the working class and that through class struggle they could improve their conditions and their share of the wealth.
Marx wasn't against capitalism per se. He knew it was tuned to favour the ruling classes and the bourgeoisie, but he also argued that it was the most productive system the world had ever seen.
It's only a matter of where you stand: with the billionaires who amass capital like never before while their employees do not earn enough money to make a living and are alienated by the work they do, or not.
It's bad enough to be alienated; it's much worse if the system only rewards those who do not actually do the work and/or do not need or deserve so much wealth.
Marxist systems were not worse off than capitalist ones on average. For example, at the time Yugoslavia wasn't in worse shape than Greece, and what happened in Romania wasn't much different from what Franco did in Spain, a fascist dictator supported by the USA in exchange for military bases. Life in Cuba or Peronist Argentina was probably similar to Portugal, if not slightly better.
Of course the USA had a better lifestyle than communist Poland, but the USA literally had the highest standard of living in the world; that really doesn't describe capitalism in general. The USA is an outlier where the good and the bad of its system show themselves in the extremes (and now it's mostly the bad, i.e. the tribalism and the violence).
It's the distribution of wealth that is much different in the two systems; capitalists simply don't like to share.
But even if it were true that all non-capitalist countries were much worse than capitalist ones, literally everyone there was in the same boat and services were free for everybody.