If you read the trial data this is a foregone conclusion. The Moderna vaccine produces antibody titres over twice as high as an actual infection from the data I've seen.
My theory is that these companies knew they only had one chance to make a working vaccine, so they did everything they could to make it not fail. Double doses, 5x what was needed in monkeys, gold standard adjuvants.
I would not be amazed in the slightest if they decide to half-dose the vaccine or do away with the second shot or even both. They probably didn't have time to run the trial any other way
If you look at some of their published Phase 2 data, they were actually getting good responses from their 25mcg dosing. But you're correct--this was the fastest vaccine development in the history of mankind, and they erred on the side of the 100mcg version just to make sure the efficacy was high enough, because they didn't have time to test it on a wider population or do challenge trials.
Imagine the blowback on mRNA vaccines in general if they went with the low-dose version and got lowered efficacy...
However, I found it interesting that the vaccine seems to possibly even work somewhat against SARS-1 and MERS, which inspires a lot of hope that the vaccines could be at least partially protective against future variants.
How closely related are SARS 1/2 and Mers to the four coronaviruses that cause (a small minority of) the common cold, could we be gaining partial protection from them too?
Immunity to cold viruses is so prevalent by now that there's no chance the effect would be pronounced. It has been researched though, so you may find some studies discussing the issue.
I think you misunderstood their question. Obviously immune response to the common cold isn't effective against rona. But if this vaccine is effective against a wide range of coronaviruses, could it also be effective against the ones that cause the common cold?
Ah. Thanks for clearing that up! That makes a lot more sense in hindsight.
On the other hand, the cross-immunization to SARS1 and MERS is not surprising, since the mRNA vaccines were developed for these and modified for SARS2. (It is not a completely novel vaccine and "rapid development" was only possible, because a lot of basic research was already done on SARS1/MERS. Would have taken a lot longer to identify a molecular target otherwise.)
Well, the Clalit preprint relying on more data by an order of magnitude will be out soon, and that should be the definitive analysis.
However, current results for a Pfizer 1st-dose-only regimen aren't very encouraging IMHO. It's possible the first dose starts to be really effective exactly at the time Pfizer suggested to have the second dose, and that this increase is rather sharp - but it would be some coincidence, and antibody tests (showing not-so-good efficacy against variants following the 1st dose, and also a much larger antibody count after the 2nd dose) point the other way.
Didn’t Israel test antibody levels post-1st dose, and come out with results that it’s around 50% by week 2 and 90% by week 3, before taking the second shot?
If I'm not mistaken, the claims that the levels are low (and efficacy only 50%) were because some sources took the average over days 1-21 instead of days 14-21.
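To illustrate (toy numbers, not the actual Israeli data): if you assume first-dose protection only starts around day 12, averaging over days 1-21 mixes in the unprotected period and drags the apparent efficacy way down, while a 14-21 window doesn't.

```python
# Toy sketch: assumed risk profile of no protection before day 12, ~90% after,
# against a constant baseline risk of 1.0 per day. All numbers are invented.
def daily_relative_risk(day):
    return 1.0 if day < 12 else 0.10

def apparent_efficacy(first_day, last_day):
    days = range(first_day, last_day + 1)
    vaccinated = sum(daily_relative_risk(d) for d in days)
    unvaccinated = float(len(days))  # baseline of 1.0 per day
    return 1 - vaccinated / unvaccinated

print(f"averaged over days 1-21:  {apparent_efficacy(1, 21):.0%}")   # ~43%
print(f"averaged over days 14-21: {apparent_efficacy(14, 21):.0%}")  # 90%
```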
I'm not sure what your 50% and 90% refer to. 50,90 percent of what? There were tests showing an order of magnitude more antibodies a week following 2nd shot (e.g. [0]).
The efficacy claims were because disease levels took more time to drop off than one would expect, given the lockdown and the start date of 1st-shot vaccination, if the 1st shot were effective. I lack the qualifications to evaluate these claims, but I can tell you all Israeli HMOs prefer the 2-shot regimen.
True, but I am not sure that changes things. It's possible that the dose increase doesn't lead to a relevant clinical difference.
Ultimately, someone will have to do real-world analysis and get conclusions from that. Right now, IMHO, there's better reason to think that a smaller Moderna dose is fine than to trust a single-shot Moderna vaccine regimen.
He explains this at his github repo; it's a statistical parameter to try to adjust for the lack of a proper control group (this is based on real-world data and not on an experiment):
"However, an underlying assumption here is that the incidence rates of those that were vaccinated early are similar to the general population. Previous analyses have shown that this is not the case as older populations have lower incidence and lower socio-economic groups have higher incidence.4 Therefore, we perform a sensitivity analysis by adjusting incidence rates using different levels of beta values (Figure 1)..."
It is unfortunate. You can't even avoid downvotes in a fairly rational place like HN :) .
I've been following the vaccine development pretty closely and everything looks like a "this can't fail" mentality.
People need to realize that it's rational to ask if we really need a dose this high and really need a booster, because if we find out we don't, it could double or even quadruple our vaccine supply. That could save half a million lives.
It's good to ask questions like this especially when there's a massive worldwide vaccine shortage
Maybe because HN is rational, and thus doesn't like the factual inaccuracies in your post? 1) There were no adjuvants used. 2) Antibody response doesn't linearly correlate with protection; many vaccines provoke a higher antibody response but much less efficacy compared to the Moderna and Pfizer/BioNTech vaccines. 3) mRNA is an entirely new modality; it's not like people knew beforehand what would work, given point 2.
I don't think anyone is arguing that IF the booster is not needed, then we shouldn't double the vaccine supply. Consider the following risk, however: there is such a thing as vaccine resistance. A weakly immunized patient becomes a training ground for novel variants of the virus that then start spreading, against which the vaccine doesn't work.
I don’t think OP claimed any of this, at least in my reading.
All of these things are trade-offs anyway - a weakly immunised patient might become a training ground for a variant, but a weakly immunised patient might also have a reduced risk of spreading, which results in fewer cases and fewer chances for mutations.
I'd volunteer. I'm 41 and about to be up for my turn as an educator. I feel OK about the mitigations in my classroom and my overall level of risk, and a bit guilty about getting my dose earlier than most; a 50-50 chance of delaying my second dose doesn't appreciably make the risk picture worse.
If my second dose was delayed, odds are I'd end up with better long term immunity, too, even though my risk in the intermediate period would be higher.
> It would be horribly unethical to give people a placebo who didn't volunteer.
No shit. That's why I said "enroll" -- this includes getting consent. I suspect finding 20k people to volunteer out of the ~10M dosed per week wouldn't be that hard.
The same kind of people who already enroll in any other kind of drug trial. "Who would volunteer for that study" doesn't make much sense as a question considering people are willing to do phase 0 studies of new drugs which are theoretically far more dangerous.
It could be beneficial for people that are further down the priority list and want to get vaccinated sooner. Especially since the people further down the list are less likely to die if they do get covid
We at least know for sure that 1 dose is highly effective against actual COVID-19 before the second dose. We have indications from titrating antibody responses that a lower dose might be useful, but that's not quite the same thing.
Just delaying second doses for a month would drastically change our posture against the disease, and it's an easier thing to try (same materials, we have lots of people halfway done with the trial condition, etc).
We do not know for sure the 1 dose is highly effective. There's significant real-world data from Israel suggesting the reverse for the Pfizer vaccine (linked to some in my other comment in this topic), and since it's very similar to Moderna's the results may well transfer.
I strongly disagree. e.g. in the Pfizer trial, during the entire time period from dose 1 to dose 2 VE was 52.4% (95% CI 29.5, 68.4) and almost the entirety of infections happened in the first 10 days; maximum likelihood efficacy rises to 90% if you look at days 10..21. And, of course, there's figure 13. https://i.imgur.com/wEcClPr.png
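For anyone who wants the arithmetic: with roughly equal person-time in both arms, VE is just 1 minus the ratio of case counts. The first split below is chosen to reproduce the 52.4% figure; the day 10-21 split is purely illustrative.

```python
# Back-of-the-envelope vaccine efficacy from case counts, assuming roughly
# equal person-time at risk in the vaccine and placebo arms.
def vaccine_efficacy(cases_vaccine, cases_placebo):
    return 1 - cases_vaccine / cases_placebo

# Case split chosen to reproduce the reported 52.4% for the dose-1-to-dose-2 window:
print(f"dose 1 to dose 2: {vaccine_efficacy(39, 82):.1%}")
# Illustrative split: if almost all vaccine-arm cases fell in the first ~10
# days, a days 10-21 window can look like ~90%:
print(f"days 10-21 (hypothetical): {vaccine_efficacy(5, 50):.1%}")
```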
I think the Israel data is being misinterpreted. Your likelihood of being hospitalized 15 days after the vaccine is not affected much, because you were probably exposed between a few days before to a few days after the first dose. And as pointed out-- there's an obvious confound: people may rush out and partay after getting their first dose in the days before protection starts to kick in.
[EDIT: this was written before I saw the OP's edit]
There's a trial and there's real-world data, and real-world data is eventually more important. Current real-world data in Israel does not indicate good 1st-dose efficacy [0]. It's possible the detailed release of the Clalit data [1] will change our 1st-dose picture, but judging by the fact that they still require a 2-dose regimen, I suspect the Clalit data shows the same result.
A theory is that the Pfizer data was accurate for the 'vanilla' virus, but the 1st dose isn't enough for the B117 variant [2], and that's why the 1st-dose efficacy was disappointing.
[EDIT following OP's edit: Well, it's possible the data was misinterpreted, but there was a statistical attempt to adjust for these effects, and antibody tests seem to give a plausible theory for why the 1st dose was effective in trials but not in the real world. Even if you disagree, it's enough of a substantial objection that we can't say we know for sure 1st dose is enough.]
> but there was a statistical attempt to adjust for these effects
Can you please explain how the statistical attempt to adjust for latency of "hospitalization" at days 14-21 post-vaccination would have taken place? I believe almost all people hospitalized at days 14-21 would have been infected at times we don't expect the vaccine to be effective from the trial data-- so seeing efficacy near 0 against hospitalization at days 14-21 is indeed exactly what we'd expect.
Surely if the vaccine is preventing hospitalization right after the second dose, it prevented infection some time before that, right??
Edit: Observational data is always a mess. This looks particularly bad. Efficacy is higher in the old than in the young?? This is opposite of what we saw in the RCTs. Evidence of massive betas even right after vaccination when we expect efficacy to be 0?? This effectively says the vaccinated group is nothing like the unvaccinated group.
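The lag point is easy to check with pencil and paper (the lag and onset-of-protection figures below are assumptions, not measurements):

```python
# Sketch of the latency argument: hospitalizations seen on days 14-21 after
# dose 1 mostly reflect infections acquired before first-dose protection
# plausibly begins, so ~0 efficacy against hospitalization there is expected
# even for a vaccine that works. Both constants are assumptions.
INFECTION_TO_HOSPITALIZATION = 12   # assumed median lag, days
PROTECTION_STARTS = 10              # assumed day first-dose protection begins

for hosp_day in range(14, 22):
    infection_day = hosp_day - INFECTION_TO_HOSPITALIZATION
    timing = "after" if infection_day >= PROTECTION_STARTS else "before"
    print(f"hospitalized day {hosp_day}: infected ~day {infection_day}, "
          f"{timing} protection starts")
```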
I believe estimate of infection date is by MOH data, which tries to estimate infection date in patients by clinical data. As for other effects, the idea is to "present the full range of reasonable scenarios and show how they affect the estimation of effectiveness."[0].
I'm not saying the analysis is right (I lack the credentials to say that) but I can tell you Israeli HMOs have been disappointed by 1st-dose efficacy and have been vocal about it in the media.
Soon there'll be a published preprint by Clalit alongside Miguel Hernán and Marc Lipsitch (so not some Israeli-only effort, there's external review), and that is likely to be definitive. For now, Clalit is keeping a 2-dose regimen and applying no pressure to change it.
Seriously.. I'm going to drop this now, but this data is a mess.
The methodological assumption of inferring beta from the first few days implies, looking at the infection data, that people receiving the vaccine are significantly more careful/less susceptible to the virus than those not.
But looking at hospitalization, it implies that they are WAY WAY WAY more susceptible.
last edit, for real: I looked at the preliminary Clalit findings. This is much more sane: a case control study. They find benefit-- about 33% efficacy-- post day 14, unlike the data you've been showing.
It's not blinded, so it is potentially wholly consistent with the Pfizer data if those vaccinated are taking 2-2.5x the risk of the control cases. To me, this seems pretty likely.
And if not, it is a different subpopulation--- we don't know what the data would look like for a similar demographic in the Pfizer study.
And, of course, I don't mean that we positively know that 1 dose is super effective in the real world for a longer stretch. But, it's something that's extremely likely to be shown in a blinded RCT based on the data we have so far.
I lack the ability to defend that analysis, but as I mentioned, it's not the only analysis showing reduced 1st-dose efficacy. AFAICT, there's a consensus among Israeli HMO researchers that it's not as good as in the Pfizer trials.
>I looked at the preliminary Clalit findings. This is much more sane: a case control study. They find benefit-- about 33% efficacy-- post day 14, unlike the data you've been showing.
I suspect you're talking about a different analysis than the preprint I'm talking about. IIRC, the preliminary analysis you're talking about was released earlier and without external verification.
Anyway, I don't see how your criteria can be falsified. If the data were to show 16% 1st-dose efficacy, it would have been possible to claim the vaccinated are taking 4-5x the risk. But how can you separate that from the possibility that the 1st dose is 4-5x less effective than in the trials? Same thing for the reported 33% efficacy: it could be the vaccinated are merely 1.25x less careful.
If so, there's a reasonable chance 1st dose isn't enough for Pfizer and by implication Moderna.
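To spell out why the two explanations can't be separated from this kind of data alone (the formula and numbers are mine, purely to show the structure of the problem): the observed efficacy mixes the true efficacy with however much extra risk the vaccinated are taking.

```python
# Sketch of the identifiability problem:
#   observed_VE = 1 - (1 - true_VE) * risk_ratio
# where risk_ratio is how much more exposure the vaccinated take on.
# Very different combinations produce the same observation.
def observed_ve(true_ve, behaviour_risk_ratio):
    return 1 - (1 - true_ve) * behaviour_risk_ratio

# All of these give roughly the same ~33% observed efficacy:
for true_ve, risk in [(0.33, 1.0), (0.50, 1.34), (0.80, 3.35)]:
    print(f"true VE {true_ve:.0%}, vaccinated taking {risk}x the risk "
          f"-> observed {observed_ve(true_ve, risk):.0%}")
```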
> Anyway, I don't see how your criteria can be falsified. If the data were to show 16% 1st-dose efficacy, it can be claimed the vaccinated are taking 4-5x the risk. But how can you separate that from the 1st dose being 4-5x less effective than in the trials?
It can't be. This is why you can't really trust case control trials, especially when the participants are likely to behave differently between the groups. This is why, if you have a choice between a CCT and a RCT, you take the RCT data.
And we have a RCT, so...
And the difference between the RCT and CCT is in the direction we'd expect... and the magnitude isn't crazy high, so...
Yes, but there are also plenty of ways bias can creep into an RCT. Also remember we're not talking about the same virus - the trials were on the 'vanilla' virus, while by the time Pfizer arrived in Israel en masse, B117 was prevalent. It could be the 1st dose is less effective against B117. That's close to what the other link I provided earlier[0] argued. Can decision makers really take that risk?
> but there are also plenty of ways bias can creep into an RCT.
Yes, but in this case generally in the direction towards less efficacy too. People are unblinded inadvertently by their side effects and then act more recklessly, etc.
And this is immaterial: yes, RCTs can be fucked up, but CCTs are almost always fucked up.
> Also remember we're not talking about the same virus - the trials were on the 'vanilla' virus, while by the time Pfizer arrived in Israel en masse, B117 was prevalent. It could be the 1st dose is less effective against B117. That's close to what the other link I provided earlier[0] argued.
It could be. Anything could be. The question is the likelihood. But the other source arguing negative efficacy when even shitty case control data is showing positive efficacy is a little dubious.
Indeed, the whole core assumption on that time series data is so bad-- assuming that people will behave the same from days 0-7 and from 7 on, when they've been cautioned that the vaccine is not immediately effective... Seriously, smh.
> Can decision makers really take that risk?
To do what I suggested? Yes. I suggested to run a trial. Don't pretend that I suggested anything different to strengthen your case.
It's a trial exceptionally likely to yield positive results for delaying the second dose.
There were suggestions earlier that the 1st dose was "for sure" highly effective, and judging by the UK response, some countries are desperate enough to try this without a trial. You didn't mean it, but it's possible some places will go for it. I'm just asking people to look before they leap; it's not a sure thing.
>Indeed, the whole core assumption on that time series data is so bad-- assuming that people will behave the same from days 0-7 and from 7 on, when they've been cautioned that the vaccine is not immediately effective... Seriously, smh.
Well, according to his github explanation[0] there was an attempt to account for that:
"It is assumed that on the days following the vaccination, there is increased caution to avoid social encounters."
>Well, according to his github explanation[0] there was an attempt to account for that:
> "It is assumed that on the days following the vaccination, there is increased caution to avoid social encounters."
Yes. He fudges beta by 0.25, when it has values >3. Uhh..
Edit: unless you actually propose that the vaccine is increasing risk of infection beyond changes in behavior, there's clearly something very big methodologically wrong here. And even that can't explain the drastically different betas assumed between the hospitalization series and the infection series. (I mean, they're on OPPOSITE SIDES OF 1 AND FAR AWAY FROM IT)
Well, you're trying my patience by putting words in my mouth that I didn't say: I didn't suggest decision makers immediately give everyone one dose. Are you making a dishonest argument, or did you just fail to read what I originally said?
I didn't mean to put anything in your mouth you did not say. If I accidentally offended you, please accept my apology. Let's keep this to the science, as far as my limited ability allows me to engage, OK?
As for your link, according to the original PDF[0] this is raw data before any statistical corrections, which makes it not very useful. The person you're quoting has been noted as a relative pessimist, assumes that 1st dose effect is very small due to behavioural changes[1], and that a lot of the effect is due to the lockdown. (IMHO, he's too pessimistic, but again I'm far from an expert).
It's time series data. Yes, it's not case controlled. I said this already.
RCT > CCT > Time series.
So we've got RCTs for each Pfizer and Moderna that show similar first dose efficacy. We've got prelim CCT data that seems to show Pfizer efficacy but a bit lower than we'd like, confounded by possible behavioral changes. And we have time series data that seems to show first dose efficacy.
And then we have the very-confused-seeming over-tweaked analysis of time series data that you keep linking, too, comparing to base population rates without any effort at case control, which shows negative efficacy.
Anecdote: the people that I know who received their first dose decidedly did not become less active, either in the near term (actuality: slightly more active) or the longer term (actuality: considerably more active).
Assume that we do the RCTs etc. and discover that the 'bit lower than we'd like' efficacy (33% vs the original 80%) the CCT you quoted argued for is the actual efficacy*; what then?
IMHO, judging by what happened in Israel at the time (a rise in cases), I for one wouldn't recommend a one-dose regimen. So I don't consider what I'm saying supported by only one analysis, but by all of the Israeli CCTs so far. Judging by the consensus of Israeli HMOs, they probably feel the same.
[EDIT: Basically, I don't consider a low efficacy all that different from '1st dose isn't effective'. So we give people the 1st dose, they party, and we come up even.]
* Yes, people probably became more active, but remember there was a lockdown too, and a different variant than in the original RCTs. There's justification for a lowered efficacy beyond bias in the RCT.
> I believe estimate of infection date is by MOH data, which tries to estimate infection date in patients by clinical data.
I'm reading the methodology PDF and there appears to be no effort to correct for infection date.
edit: Examining the MOH data-- the date of first positive test result is used for infections, and the date of hospitalization is used for hospitalization.
We know a lot about how vaccines and the human immune system works without going through the ritual of a full scale trial for every parameter tweak.
That knowledge strongly indicates that half doses will work just as well as the dose that happened to be chosen for the first trial.
Waiting several months for these studies has the major downside that thousands of people die each day we wait. To make a rational decision, that potential benefit needs to be weighed against the potential risk, but that is usually left out of these arguments.
Well, this really can't fail. The challenge was to develop a ready-to-market vaccine in the shortest possible time, and several companies managed this task. That is why millions are already vaccinated.
Yes, you are allowed to ask questions, but why do you assume that all the involved experts at the pharmaceutical companies didn't ask those questions? And as far as I am aware, they did investigate different dosages.
So what we got now is their best effort given the many competing requirements. And yes, they might have been a bit generous, but only a bit, as higher doses do create stronger side effects. And I am very happy they did, considering the mutations that appeared in the meantime and that the vaccines are still effective.
I am sure the vaccines are going to be further optimized as more time goes by and especially as more data becomes available.
> why do you assume that all the involved experts at the pharmaceutical companies didn't ask those questions?
Pharmaceutical company employees have no power over FDA decisions.
> And as far as I am aware, they did investigate different dosages.
They did, and the result was that half doses produce as strong an immune response as full doses.
This is the basis for this "let's do half doses" argument!
The study was done with the bigger dose, and showed 95% protection.
So we know that (1) dose A has the same effect as dose B, and (2) dose B provides 95% protection.
Obviously (1)+(2) logically implies that using dose A would give twice as many people the same immunity. But logic is not part of the FDA regulatory framework. You have to blindly repeat the protocol of the study, whether it makes sense or not. Whether thousands die or not.
Why is the development "this can't fail" but the distribution isn't? If you cut corners on distribution (and compound that with the real-world flaws in the process) and screw up, it would be a huge disaster. You would completely undermine the public's faith in vaccines and lockdowns.
If the vaccine developers fail, their companies lose lots of money and their bosses are mad at them. There's a tiny possibility they might lose their jobs and it certainly won't look good for promotion purposes. Also, developing vaccines is something they actually know how to do. If civil servants fuck up in ways that delay or prevent progress in fighting COVID there are no consequences. No one gets fired or reprimanded, whether for what they said about masks, preventing home testing, telling the Seattle Flu Study they couldn't test for COVID in February, fining distillers for making hand sanitizer... And note the US FDA and CDC are at worst only a little worse than the European Medicines Agency. Slower at approving more or less everything, like KN95s or the J&J vaccine, but they're not actively trying to kill people. They're just making sure they can't be blamed for approving something unsafe. The people who die as a result of that delay are not their problem.
They did have vaccine variants that failed: we abandoned those in trials; we just don't give them to the general public without that trial process. Some failure is expected when developing drugs, actually - some simply fail and this is part of R&D costs. If someone fabricates data to say it works, well, that is another matter entirely.
And yes, government gets reprimanded as well. Civil servants can lose their jobs, politicians get voted out. Unfortunately, the average person isn't always the best judge of whether something was handled poorly, and people react badly when folks change their minds due to changed information.
That’s nonsense government griping. In fact, we had an election and fired all of those decision makers.
Rushing through things like KN-95 has consequences. My brother is a firefighter paramedic and found out that 90% of the kn95 masks issued to him were completely defective.
The people developing the vaccine have no control over how individual countries distribute it. They were tasked with "this can't fail", and understandably so. They did their job, why some countries can't handle distributing it is a whole different can of worms that would require a discussion country by country.
It's a gamble the British government are already taking; they decided to extend the time between doses to give more people the first dose sooner. We'll start to see in 8 weeks or so if it was worth it.
We can save those experiments for when there aren't 450,000 Americans dead from covid. I prefer the go-early, go-hard approach. There will be a second round of Covid-21; the South African mutated version is proving that.
Indeed. But to bash the large corpos: when medical staff figured out they could get an extra dose from most vials, the drug companies immediately reduced their contracted shipments accordingly. Pretty scummy.
Of all the failures in the system that could have cost lives, this seems like the least egregious.
What people need to realize is that, politically inclined or no, restricting political involvement to an election every 2-4 years and letting it optimize for fiscal efficiency, not resiliency and reliability, has killed a whole lot more people than extra caution over dosage.
None of the general public control drug companies. They could control the government.
I'm all in on being a big-corp sycophant, because the general public are clearly incompetent political agents, having enabled a bigger mess than necessary in the first place.
Without specifically criticizing this vaccine process, I would add that if you start digging into medical research, you will find this is the rule rather than the exception. I've had my own semi-obscure issues I've gone trawling through the literature for assistance with, and one of the things I've found is that you'll find something that is tested; for a specific example, I found some examination of taurine supplementation for heart arrhythmia issues. There were multiple papers that all studied the exact same dosing schedule. This strongly suggests to me that the initial schedule was basically someone thinking for a moment and giving an informed guess about what might work.
This is just an observation, not a criticism, because I don't have a better suggestion for what they can do, nor do I particularly believe there is a much better suggestion necessarily. You have to start somewhere. My point is just that you shouldn't overestimate the exact numbers and how much effort was put into exploring all the details. Obviously some things are deeply studied, but most things won't be.
This implies that, statistically speaking, there very likely are drugs that were formulated, created, and put into a testing regime that they failed, but that actually work great - they were just dosed out to the study participants such that the good effects didn't emerge yet, or were dosed on a poor schedule, etc. (Or would have worked great if paired with a hit of grapefruit. [1]) I also wonder how that compares to the number of drugs that were never tested on humans because they failed animal testing, but that in reality would have been fine in humans. Just idle musings on life.
I think a lot of software developers have swallowed the "be lean all the time" line that the project managers are selling :) . "Good enough is good enough" is another one I hear a lot from PMs. Anyway, there's something to be said for paying $1000 for that vacuum cleaner that you won't have to replace for 30 years.
That's a serious question. Last time I looked at expensive vacuums, e.g. Dyson discontinued parts after 10 years. It might not break, but you can't buy a roller for 10-year-old models. Shark was way shorter. I found no long-term brands.
Well, these are the numbers we have and which have been confirmed by the proper trials. I am sure there is more research being performed to optimize the vaccines, but fortunately we can vaccinate with good results based on these numbers.
They should be sacred until we have completed studies with other regimens. What we have is a theory but we need to collect data to correctly identify the best timing.
Interesting, I hadn't known that. I imagine the short booster dose time also helps in implementation, because an individual gets maximum immunity 6 weeks after the first dose (4 weeks pause + 2 weeks) rather than 28 weeks (6 month pause + 2 weeks) for a longer pause. Having a longer pause to maximize individual immunity might not be the fastest way to maximize population immunity during an ongoing pandemic.
First dose gets you to 80% efficacy after about 10 days.
If you e.g. produce 1 dose per month, you're way better off giving the doses to ABCDEFABCDEF than AABBCCDDEEFF. Six people are 80% protected after 6 months, instead of 3 people 95% protected after 6 months... assuming that everyone ends up at the same efficacy at the end (which we don't know).
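A quick sketch of that allocation arithmetic, assuming 80% protection after one dose and 95% after two (the 80% holding up long-term is the part we don't actually know):

```python
# Six people A-F, one dose produced per month, six months of production.
ONE_DOSE, TWO_DOSE = 0.80, 0.95   # assumed protection levels

first_doses_first = 6 * ONE_DOSE            # ABCDEF: six people, one dose each
two_doses_first = 3 * TWO_DOSE + 3 * 0.0    # AABBCC: three fully dosed, three with none

print(f"ABCDEF: {first_doses_first:.2f} person-equivalents protected")  # 4.80
print(f"AABBCC: {two_doses_first:.2f} person-equivalents protected")    # 2.85
```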
But... we know this is probably effective-- probably even more effective after the delayed second dose.. which isn't the same as showing that it's effective in a study.
I think the mRNA vaccines don't contain any adjuvants at all. They are so awesome, because they are comparatively simple systems.
I would choose the mRNA vaccines over adenovirus-based ones, because of that. Drive-by immunity to the adenovirus carriers could render future vaccines useless, when there is no alternative vaccine technology available *. I hope when it's my turn, I will have a choice.
* Edit: My guessing. I am no medical professional, or researcher in a related field. Maybe my worries are completely unsubstantiated. I am also of the "worrying type" https://www.youtube.com/watch?v=pUDv3h09VBc
There's not an added adjuvant in either of the mRNA vaccines. The lipid nanoparticles may be acting as one though (in addition to ferrying the mRNA into cells).
There's no adjuvants being used at all with mRNA vaccines.
Also, there were smaller sub-trials done with just one shot, and those showed significantly weaker immunization, although telling exactly how much weaker is hard due to little data available and thus very large margins of error.
Polyethylene glycol is an adjuvant, as are many of the lipids chosen. It's looking like PEG is maybe a bit too effective at getting the immune system's attention, and that's why we're getting so much anaphylaxis.
We know that both the Moderna and Pfizer vaccines look about 80% effective from 10 days after dose 1 up to dose 2. We really should be doing a trial on delaying dose 2 a bit: it might very well make the dose 2 side effects smaller, increase efficacy, and stretch vaccine supplies.
The vaccines have a higher side effect rate than we're used to, but they're orders of magnitude safer than the risk we have from being exposed to COVID, infected, and then having a bad outcome.
Some of this relates to the rush to market and the desire for very high efficacy even if we were somewhat surprised/incorrect on required neutralization titers. When COVID-19 is rarer, we likely will need safer vaccines against it, though. This may just be a question of turning down dosage a bit.
Agreed, and current estimates are more on the order of 4 to 5 per million. A high proportion of these cases are among people with a history of anaphylactic reactions to drugs, needle sticks or food/tree nuts ( many were already carrying their own epi pens! ) so it includes people who probably would skip an annual flu vaccine but of course do not want to skip this one. I'm not trying to suggest that the rate isn't higher than other vaccines, but it may not be as much higher as it first appeared. ( Also, to be fair, now that there is so much vigilance, it's possible that a number of pre-anaphylaxis are being caught early and treated with epi or benadryl before they could become full-blown anaphylaxis. So, this could make the rate appear lower. These things always need to be considered thoughtfully! )
As others have pointed out, the risk/benefit calculation here is extreme: a few bad reactions per million vs. a disease that has already killed more than 1000 people per million population in the US.
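Even with crude rounding the per-million comparison is lopsided (and anaphylaxis is treatable while death is not, so this understates the gap):

```python
# Crude per-million comparison using the figures mentioned above.
anaphylaxis_per_million = 5        # upper end of the ~4-5 per million estimate
covid_deaths_per_million = 1000    # US deaths per million population so far

ratio = covid_deaths_per_million / anaphylaxis_per_million
print(f"~{ratio:.0f} COVID deaths so far per expected vaccine anaphylaxis case, per million")
```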
Thing is, the "not healthy" group is bigger than most people think. Anaphylaxis is a very controllable situation, whereas severe COVID is not. AFAIK, all of the reacting subjects had a history of anaphylaxis.
Nothing in the mRNA vaccines was chosen to enhance immune response, nothing is an adjuvant as with other vaccines. On the contrary, the polymers and co are there to protect the mRNA from immune system clearance (and are used in other non-vaccine medications too). It's worth noting that an allergic reaction is mediated by a completely different branch of the immune system. All of the "adjuvants" can be found in common household items, food and cosmetics, which is probably why the allergic people reacted in the first place.
Very few, very allergic people seem to react to these. If they do, you treat them with an antihistamine and similar.
> Nothing in the mRNA vaccines was chosen to enhance immune response, nothing is an adjuvant as with other vaccines.
This isn't -exactly- true. There's no traditional adjuvant added, but... the fact that the formulation caused adjuvant-like effects was seen as a feature, not a bug. Added care to minimize immune response has been used in the development of mRNA drug platforms
> All of the "adjuvants" can be found in common household items, food and cosmetics
You can find PEG in lots of things, but PEGylated large molecules aren't exactly common items around the home.
> I don't get it. Either they want the immune response or not.
When mRNA is used in typical drug platforms-- you don't want the immune response.
mRNA vaccine platforms-- you do want the vaccine response.
So, e.g. Moderna has an "N1GL" formulation intended, in part, to tamp down reactogenicity that it is using for its drugs in development, and "V1GL" formulation intended for vaccines where they don't care so much. N1GL both more completely envelops the mRNA and consists of components that themselves are less "adjuvant".
(And N1GL is probably not quite good enough, yet...)
But the mRNA vaccine isn't what the immune system is responding to directly; it responds to the spike protein produced by the cells that took up the mRNA. Anything that makes the immune system attack the vaccine prevents the delivery of the mRNA. Sorry, I don't think your reasoning is sound or well based.
I don't know how my reasoning could be more "based", but here--- listen to some pharmaceutical and vaccine experts and see what you think.
Here's what Derek Lowe had to say: https://blogs.sciencemag.org/pipeline/archives/2021/01/21/mr... "So to get a good RNA vaccine, you frankly have to make it work like a particularly stealthy virus, and not trip every single alarm before your payload gets a chance to enter the cells. But as mentioned, some activation of the innate system is needed to get the adjuvant boost. A lot of that work has to be done empirically, which is why we've seen so many RNA and DNA formulation ideas over the years. None of these have been stupid ideas (far from it) but some of them work better than others, and the current lipid nanoparticle ones are the current state of the art – the LNPs themselves activate the innate immune system, but in a way that doesn't seem to trigger too much of a self-defeating cytokine response. It's a useful enough effect that they're being proposed as adjuvant additions to other, more traditional vaccines for that effect alone."
Me: Both the LNPs and the mRNA itself stimulate the immune system. Basically the vast majority of work in mRNA drugs for the last decade has been to try to reduce that-- and associated toxicity and unpredictable clearance. Meanwhile, the (right amount of) immunogenicity has been accepted as a feature in vaccine candidates.
And in a series of journal articles e.g. in Frontiers in Immunology: https://www.frontiersin.org/articles/10.3389/fimmu.2018.0222... "This was observed in the case of administration of empty PEGylated liposomes, which were able to elicit IgM response in an in-vivo model. (27, 28). Besides their potential to deliver various immune stimulators to the specific sites as well as into the deep tissues where vaccine molecules alone may not able to reach, these NPs have also been exploited as adjuvants to augment immunogenicity of vaccine candidates."
Nature / npj Vaccines: https://www.nature.com/articles/s41541-020-0159-8 "Thus, multi-antigen candidates necessitate a significant amount of LNP for a given dose. LNPs are known to have inherent adjuvant properties."
Nature / npj vaccines: https://www.nature.com/articles/nrd.2017.243 "Exogenous mRNA is inherently immunostimulatory, as it is recognized by a variety of cell surface, endosomal and cytosolic innate immune receptors (Fig. 1) (reviewed in Ref. 35). Depending on the therapeutic application, this feature of mRNA could be beneficial or detrimental. It is potentially advantageous for vaccination because in some cases it may provide adjuvant activity to drive dendritic cell (DC) maturation and thus elicit robust T and B cell immune responses. ... The immunostimulatory properties of mRNA can conversely be increased by the inclusion of an adjuvant to increase the potency of some mRNA vaccine formats. These include traditional adjuvants as well as novel approaches that take advantage of the intrinsic immunogenicity of mRNA"
The above is ~1/100,000. IFR (which is compatible with what I said) for 19 and younger is 3/100,000, but that includes younger people that are even less impacted.
So it is sort of the same ballpark for the very youngest adults, but a factor of 3 isn't nothing. It's higher for people in their 20s already. We can be talking about different things though, as it isn't 100% likely a given person will be infected.
It's likely pretty close to the same: 2-4x the rate if infected, and about a 1/2 to 1/4 chance of becoming infected eventually if no vaccine.
But dying is slightly more severe than a couple hours being treated for a moderate anaphylaxis reaction. We're also ignoring any risk of morbidity from COVID.
No traditional adjuvants is interesting, but I know the RNA is modified - it's not the exact mRNA sequence for the spike protein. I believe they modify the amino acids to modulate the immune reaction, which may accomplish the same thing.
The immune system doesn't like non-human RNA. Certain sequences, or even an abundance of certain amino acids out of balance with human codes, can set it off. Special RNA coding may be used in place of traditional adjuvants, but much of it is proprietary at this point.
You've kind of got it backwards. The RNA modifications in these vaccines (uracil depletion and using less immunogenic analogues) are there to avoid getting too much immune response to the mRNA itself, and to allow time for the spike protein to be translated and lead to the immune response.
It could be both. The goal is to evade the innate extracellular immune system while priming adaptive immunity. I don't think we know enough about their RNA modifications to say they put all their focus on one instead of the other.
Traditionally, mRNA vaccines were not very effective. There's definitely some magic "trade secret" sauce in these vaccines.
I can tell you for sure that the amino acid content isn't changed in any obscure way - it's exactly the original Wuhan genome's spike protein, with two proline mutations that put it in a prefusion conformation that makes a more stable target for antibodies, a strategy developed in research for SARS/MERS vaccines. The codons and overall RNA sequence are heavily optimized in ways that outsiders, and I suspect the companies themselves, don't really understand. Pfizer's production version of BNT162b2 is actually BNT162b2 V9, suggesting they tried combinations of many different strategies and took the one that worked the best. You could be right that some of the optimizations had immune effects that no one knows or understands yet, but I doubt there was real intention behind it. Magic sauce indeed!
The sequence is actually public, but as far as I know no one has figured out what optimizations worked in the end aside from the uracil-depletion + 1-methylpseudouridine trick to avoid an immune response to what looks like viral RNA.
https://mednet-communities.net/inn/db/media/docs/11889.doc
There is a conceptual misunderstanding here. Raising the dose doesn't mean you have a higher efficacy, in fact it can just as easily have the reverse effect. Vaccines aren't beer where you consume twice as much and get twice as drunk.
Granted. But there is a minimum dose necessary to produce antibodies. If you are in a situation where you are guessing completely blindly (because no trial like yours has been done before), do you worry more about falling below the minimum required dose, or about going above the maximum dose past which you get diminishing returns or even lower efficacy? I think it's reasonable to worry more about the minimum dose because you can always halve the doses later. Finding out at the end of a multi-month trial that you didn't give enough in your doses would be a big setback. Just look at what happened with AstraZeneca when they started mixing up dosing in their phase III. They were the most promising vaccine last summer and they haven't even concluded their US phase III trial yet as a result.
I mean technically an alcohol could exist that makes you less drunk after twice as much but how likely is that? IMO about as likely as a vaccine that produces less immunity with higher dosage
Neither Pfizer-BioNTech or Moderna's vaccines bring any additional adjuvants to the yard. mRNA is inherently immunogenic. This speaks generally about mRNA vaccines and touches on that property, among others: https://www.cell.com/trends/molecular-medicine/fulltext/S147...
Coming from Israel; what I'm writing is about Pfizer but related to the article above.
The Pfizer vaccine is much more complex to maintain: once you unfreeze it, there's a time window for it to be usable.
In addition to many other factors "modified" by the Israeli HMO, an interesting one related to the above is the allowance of an extra dose from each vial. The Pfizer vials were tested at 5 doses per vial.
The HMO here allowed 6 doses.
Yup, Pfizer had signed contracts on the number of vaccine doses, not the number of vials supplied. So Pfizer is technically correct, but it's still a jerk move from their side.
I'm not a molecular biologist or a medical experimentalist, but that sounds like bad medical testing, to just increase the dosage like this without proper longitudinal studies. Anyone with expertise in that area care to pitch in here?
The "proper longitudinal studies" will eventually come out. Some countries are choosing not to wait for that. I do hope they do those studies - it feels like half the current dose size might work fine.
This isn't a failure. This is the quickest vaccine rollout ever. The way you do statistics for this means that the overkill speeds up how quickly you can verify effectiveness. For example, if you do a booster after 6 months, that adds 6 months to your study time. If you try to thread the needle to make the smallest possible vaccine, you need way more testing to prove it works. The approach taken is why we have 10% of the US vaccinated now instead of having a vaccine approved in 2 years.
> This isn't a failure. This is the quickest vaccine rollout ever
Tell that the to the 400,000 dead, buddy. They made the vaccine in January 2020. GOVERNMENT rules mean I STILL have not gotten it well over a year later. My NINETY year old Grandma doesn't have it yet. Nice effort though. How does that leather boot taste?
We've warned you several times before. If you keep posting flamewar comments to HN we will have to ban you. I don't want to ban you, so please review https://news.ycombinator.com/newsguidelines.html and use HN in the intended spirit—thoughtful, curious conversation—from now on.
For more of the intended spirit, here's an extra guideline on my list that hasn't made the official list yet: If you're hot under the collar, please cool down before posting.
At first, it wasn't clear we needed to use them. It wasn't just Fauci. Not only that, but we didn't have enough production for everyone to have them and it was more important that certain population groups had them. The message changed when it was more clear they were necessary and we started to get enough production that we could have the general public wear them.
When were numbers made up? Can you give specifics that aren't due to learning new facts?