A problem I see is that you can never be sure that it also works for other people. Just because one person reacts to a specific treatment does not mean that people on average react to the treatment in the same way. That’s the problem of having a sample size of 1. In other words: we cannot say much about whether the effect generalizes to a larger population from knowing that it has an effect on just one person.
If the elevator is behind the center of mass/lift, it needs to push the tail down, so it points up to raise the nose of the aircraft (the surface being on the other side of the cg/cl).
If the surfaces are in front of the cg/cl, they need to do the opposite: to pull the nose up, they would have to point downwards.
Exactly. The site calls them ailerons (presumably because they are on the wing and can roll the plane) but they are quite far back from the cg and so function more like elevators when used together.
Concorde worked this way [1], and called them 'elevons'. When I used to work on Tornados these were called 'tailerons'.
A funding body grants you money and demands open-access. They often state very clearly that costs for publications (submission or publication fees, open access fees etc) cannot be paid from grant money. Thus, you need another source. The first address is your institute / department / faculty / university. If they decline to pay the open access fee, you are in trouble.
That’s actually common practice in a lot of fields.
> A funding body grants you money and demands open-access. They often state very clearly that costs for publications (submission or publication fees, open access fees etc) cannot be paid from grant money.
I don't believe you. Show me one source for this, and from a decently sized funding body if it's such common practice.
> They often state very clearly that costs for publications (submission or publication fees, open access fees etc) cannot be paid from grant money.
I think you've been misinformed. At least in the US, EU, Canada, and Australia, that's just not true. Public and private funders are telling grant writers to put open access publication costs in their budget or to have other funds to pay for them. I only speak English, so I'm unsure about non-English-speaking countries. But it took just a few minutes of searching to find each agency's official policy or advice to grant writers on this:
US NSF: "The proposal budget may request funds for the costs of documenting, preparing, publishing or otherwise making available to others the findings and products of the work conducted under the grant. This generally includes the following types of activities: reports, reprints, page charges or other journal costs" [1]
US NIH: "NIH continues its practice of allowing publication costs, including author fees, to be reimbursed from NIH awards." [2]
EU ERC: "publishing costs (including open access fees) and costs associated to research data management may be eligible costs that can be charged against ERC grants, provided they are incurred during the duration of the project and the specific eligibility conditions of the applicable Model Grant Agreement are fulfilled" [3]
All Canadian government research funding: "Some journals may require researchers to pay article processing charges (APCs) to make articles freely available. Costs associated with open access publishing are considered by the Agencies to be eligible grant expenses" [4]
Australia's National Health and Medical Research Council: "over the grant lifetime, funds can be used to support costs associated with publications and open access such as article processing charges, which are the result of the research activity and which are in accordance with the DRC Principles." [5]
Gates Foundation: "The Foundation Will Pay Necessary Fees. The foundation shall pay reasonable fees required by a publisher or repository to effect immediate, open access to the accepted article. This includes article processing charges and other publisher fees. " [6]
Howard Hughes Medical Institute: "May use their HHMI budget to pay publication fees charged by open access journals" [7]
The study apparently only involved male participants. The control group consists of 6, the two treatment groups of 10 and 9 participants. The total N of the study is 25 participants. They conduct a one-way ANOVA on the interaction of time and treatment group indicators. I conclude that the study is woefully underpowered. I do not trust the apparent significance of their results.
I really don't understand statistics. My interpretation of "the study is under-powered" is that since the study has such small groups, it will be difficult to find any result that is statistically significant. But wouldn't that mean that for any effect to be significant, the effect size would have to be huge?
My hunch is that if you have a large enough group even very small effect sizes will be significant, but in small groups only the very largest effect sizes will be significant. Or am I simply bad at statistics?
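Your hunch is right, and it can be quantified. Here is a short Python sketch (the function name is mine; the formula is the standard textbook normal-approximation for two-sample power analysis, not anything from the study) computing the smallest standardized effect (Cohen's d, in units of standard deviations) a two-group comparison can reliably detect:

```python
from math import sqrt
from statistics import NormalDist

def min_detectable_effect(n_per_group, alpha=0.05, power=0.8):
    # Standard normal-approximation formula for a two-sample comparison:
    # smallest effect (in standard deviations) detectable at the given
    # significance level with the given power.
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = z.inv_cdf(power)          # quantile for the desired power
    return (z_alpha + z_power) * sqrt(2 / n_per_group)

for n in (6, 10, 100, 1000):
    print(f"n = {n:>4} per group -> smallest detectable d ≈ "
          f"{min_detectable_effect(n):.2f}")
```

At 6 per group you can only reliably detect effects larger than about 1.6 standard deviations, which is enormous; at 1000 per group, effects around 0.13 SD become detectable. So yes: small groups can only "see" huge effects.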
(See bio for my background) All of the replies you've gotten so far are very good and I upvoted all of them.
In particular:
cuchoi acknowledges the publication bias version of the risk here. Let's say your average effect is 1 unit with a confidence interval that is ±0.9 units at a desired level of confidence. We can interpret this confidence interval two different ways: one is that, assuming 1 unit is the true effect, repeated sampling would produce a sampling distribution of estimated effects that spans 0.1 to 1.9 at the desired level of confidence. Another is that, assuming, say, 0.1 was the true effect, effects as large as the one we see (1 unit) would occur a non-trivial portion of the time. Now, imagine many researchers do this experiment and the true effect is 0.1. Some researchers find negative effects, some find small effects that are not significant, others do larger studies and find small effects that are significant, others find larger effects. Now, imagine the journal will only publish effects that are both statistically significant and substantively interesting. The only person that submits for publication is the one whose version of the study finds the large effect (1 unit). cuchoi is very correct to suggest that when your design can only find large effects, the published effect will likely be overestimated.
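This winner's-curse mechanism is easy to simulate. A hypothetical sketch in Python (the numbers are illustrative, not from the study): many labs run the same small study of a tiny true effect, and only the "significant" positive results get submitted:

```python
import random
from statistics import mean, stdev

random.seed(0)

def one_study(n=10, true_effect=0.1):
    # One small two-group study: return (estimated effect, significant?)
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    est = mean(treated) - mean(control)
    se = (stdev(control) ** 2 / n + stdev(treated) ** 2 / n) ** 0.5
    return est, abs(est) > 1.96 * se  # crude normal-approximation test

results = [one_study() for _ in range(20000)]
published = [est for est, sig in results if sig and est > 0]

print("true effect: 0.10")
print(f"mean estimate over all studies: {mean(est for est, _ in results):.2f}")
print(f"mean published ('significant') estimate: {mean(published):.2f}")
```

The average across all studies recovers the true 0.1, but the published estimates average far above it, purely because the small design can only reach significance on large (over)estimates.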
fpoling and sandgiant highlight the sensitivity risk argument. Suppose that the outcome is heavily sensitive to some confounders (socioeconomic status, nutrition, smoking status, race, etc.) And suppose poor people are slightly more likely to get treatment, just from coin flip chance. Because poverty correlates with both the effect and the probability of being treated (even though you tried to assign randomly), some of the visible effect is the relationship between poverty and treatment, not effect and outcome. There are designs other than simple randomization that try to explicitly deal with known confounders, but they can't deal with unknown confounders. Larger sample sizes mitigate the risk of imbalance of both known and unknown confounders.
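A quick sketch of how larger samples mitigate chance imbalance (illustrative Python, made-up numbers): randomly split a population where 30% are "poor" into two equal groups and see how far the groups' poverty rates drift apart by luck alone.

```python
import random
from statistics import mean

random.seed(1)

def imbalance(n):
    # Randomly assign 2n subjects (30% 'poor') to two groups of size n;
    # return the absolute difference in poverty rate between the groups.
    poor = [1 if random.random() < 0.3 else 0 for _ in range(2 * n)]
    random.shuffle(poor)
    return abs(mean(poor[:n]) - mean(poor[n:]))

avgs = {n: mean(imbalance(n) for _ in range(2000)) for n in (10, 100, 1000)}
for n, avg in avgs.items():
    print(f"group size {n:>4}: typical poverty-rate imbalance ≈ {avg:.3f}")
```

With 10 per group, a ~16 percentage-point imbalance in a strong confounder is typical; with 1000 per group it shrinks to under 2 points.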
Under-powered also means that the minimum detectable effect is high (that's why it is harder to get a "significant" result).
Which means that if you do find a "significant" effect, it is likely because you are overestimating the real effect. The real effect might not even be detectable!
There might be someone able to better explain this, but one way it could work would be to say that for small samples, any bad assumptions you make (such as variables or measurements being independent) will affect the result more than if you had a larger sample.
The assumption I make to make this work is that dependencies are more likely to be drowned out by internal variation in a larger sample. So you get to pick which assumption you like better.
A simple rule of thumb is that for samples with N < 100, be very skeptical of the results, as those can be achieved simply by randomness on top of small systematic errors. Proper statistics helps rule out randomness, but not systematic errors. Which pretty much rules out most of the sports studies.
I studied this many years ago, but you have formulas for survey sizes based on confidence levels and, if I recall correctly, the number of variables you want to study.
People publishing these studies should know and use these formulas, but I imagine there's a lot of pressure to publish high impact/visibility stuff, so they sometimes just go for the cheapest and fastest (aka wrong) approaches.
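For reference, the classic sample-size formula for estimating a proportion to within a margin of error E is n = z²·p(1−p)/E². A small Python sketch (the function name is mine; the worst case p = 0.5 is the usual default):

```python
from math import ceil
from statistics import NormalDist

def survey_sample_size(margin, confidence=0.95, p=0.5):
    # n = z^2 * p(1-p) / margin^2, rounded up; p = 0.5 is the worst case.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

for m in (0.10, 0.05, 0.01):
    print(f"margin ±{m:.0%} at 95% confidence -> n = {survey_sample_size(m)}")
```

The familiar "about 400 respondents for ±5%" polling rule of thumb falls out of this directly.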
short answer: it's complex and there are books on the topic.
lesser-disappointing answer:
You have a hypothesis how STUFF works differently when you make an intervention (experiment, i.e. collect data, change something or go to the control group, collect more data).
Your default assumption is that your experiment won't show a meaningful difference, OR it could show a difference (positive/negative). Now, what you observe may not be the reality. Which leaves you with 4 possible situations: true positive, false positive, false negative, and true negative.
Most statistical methods used in data analysis take great care to minimize the probability of a false positive (the probability our method yields 'positive' when in fact there is no effect in reality). This probability is the famous 'p-value' (sometimes 'p-value' also refers to a threshold on this probability).
So when you do certain statistical tests, you receive a p-value and apply a threshold, p < 5% for example. This means you accept that only every 20th experiment where in reality there is no effect results in a 'significant' finding (i.e. a false positive).
So naively increasing your sample size will not lower your false-positive probability, as long as your analysis method controls for it. However, the sample size strongly influences the false-negative rate: a Student's t-test with p < 0.05 and sample size N=3 will still yield a false positive with 5% probability, but in practice there is only a slim chance of getting a true-positive result.
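A sketch of this point (illustrative Python, not the study's method; the critical value for df = 4 is hard-coded): the false-positive rate stays pinned near the 5% threshold regardless of N, while power at N=3 is slim even for a huge 1-SD effect.

```python
import random
from statistics import mean, stdev

random.seed(2)
T_CRIT = 2.776  # two-sided 5% critical value of Student's t with df = 4

def significant(n, effect):
    # Two-sample t-test (equal group sizes) at the 5% level.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
    return abs((mean(b) - mean(a)) / se) > T_CRIT

trials = 20000
false_pos = mean(significant(3, 0.0) for _ in range(trials))  # no real effect
power = mean(significant(3, 1.0) for _ in range(trials))      # 1-SD real effect
print(f"false-positive rate at N=3: {false_pos:.1%}")
print(f"power for a 1-SD effect at N=3: {power:.1%}")
```

The false-positive rate lands near 5% as designed; the true-positive rate stays far below the 80% power researchers usually aim for.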
From this perspective, the criticism here about sample size does not make too much sense. However, we need to keep in mind:
A) There is a whole field of problems about controlling variables (i.e. adding more columns to your data table). Each variable adds another dimension to your problem, and this quickly leads to a 'curse of dimensionality' problem. Is the observed effect explained by your experimental intervention, or by differences between your control group and your study subjects (sex/gender/socioeconomic status/age/training level/overall health)? Failing to control for a variable can quickly lead to false-positive results.
B) Complexity of the method at play. The study uses ANOVA (analysis of variance). It's been years since I last looked at it, so I am not making statements here.
C) Crucially: many methods actually assume normally-distributed data (Gaussian distribution). However, if you collect data, it is rarely normally-distributed. One can still use methods for normally-distributed data on non-normally-distributed data because of the central limit theorem, i.e. sums and averages of non-normally-distributed data typically tend to end up normally-distributed. But this does not happen at N=10.
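This is straightforward to demonstrate (an illustrative Python sketch, not from the study): take a clearly non-normal (exponential) variable and check how skewed the distribution of its sample mean still is at N=10 versus N=1000.

```python
import random
from statistics import mean, stdev

random.seed(3)

def skewness(xs):
    # Standardized third moment; 0 for a symmetric (e.g. normal) sample.
    m, s = mean(xs), stdev(xs)
    return mean(((x - m) / s) ** 3 for x in xs)

def sample_mean(n):
    # Mean of n draws from a heavily right-skewed exponential distribution.
    return mean(random.expovariate(1.0) for _ in range(n))

skews = {n: skewness([sample_mean(n) for _ in range(5000)]) for n in (10, 1000)}
for n, sk in skews.items():
    print(f"N = {n:>4}: skewness of the sample-mean distribution ≈ {sk:.2f}")
```

At N=10 the sampling distribution is still visibly skewed (theory says roughly 2/√n ≈ 0.63), so normality-based tests are on shaky ground; at N=1000 it is essentially normal.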
A few finer points to mention here: many HN commenters have a machine-learning background and may be a bit biased against smaller-sample-size studies for reasons specific to what they are used to in the machine-learning world. On the other hand, from my experience majoring in biophysics, many health-related studies on sports and obesity really do have low-quality stats and overestimate the predictive power of their datasets.
tl;dr: I would only conclude from this study that HIIT is better than nothing, not that it is better or worse than other cardio exercise.
PS: The above text tries to break down complex stuff and thereby by definition contains mistakes.
First studies with new findings should be small, surely, so we can weed out effects to test with larger studies. The question is whether the larger studies are being done? The incentives are such that democratic governments need to be leading the research, IMO.
How do we ensure at a national level (because studies probably need to be repeated in different nations) we're doing good science, backing up key results, informing the population what the better ways of behaving are?
Everything around nutrition in general... But yeah, with sports you're throwing in groups of insanely driven outliers. Even more fun is mapping that back onto the average human, like much of this post is doing. No, you should not try and follow Michael Phelps' pre-Olympics training routine.
I'll do you one better: all of the subjects were untrained. Any kind of training will improve fitness in virtually every single marker that can be measured. Your aerobic fitness will improve from lifting heavy weights. You'll get stronger from running. These studies are totally useless.
I’m on a brand new 2016 model iPhone SE A1662. Full charge capacity is 100%. I rarely get through the day before I have to charge again with moderate use.
I’ve been using an iPhone SE as my primary phone since summer 2016.
If I recall, my original new iPhone SE had great battery life. It was first on iOS 9.
Up until iOS 12, battery life was fine on it. With iOS 12, I had to get an extended battery case; that held up as a workable solution until iOS 13.
At the end of last year, after iOS 13 came out I replaced that iPhone SE with a brand new one I bought on fire sale.
That one had an accident this week, and I replaced it with another new iPhone SE.
I set it up as a new device, without any apps and without restoring from any backup, and I still have the same battery life.
I’m going to chalk it up to iOS background process bloat and third party background app refresh, etc.
It will be interesting to see how iOS 14 holds up on the iPhone SE 2016.
What can I say, I refuse to use a phone that lacks flat sides, a headphone jack, or a reasonable size. I could accept one or two of those compromises, but all three has been a no-go for me. Really, the new “meh” battery life is the only complaint I have about this device.
Really excited for the rumored 5.4-inch iPhone 12, since it may be about the same size as the old SE, with flat sides.
Yes. I can get a day and a half out of my XR easily with quite heavy usage. And the battery health is 90% after 18 months of ownership. Best phone I’ve ever had.
Well no not really. It’s two data points. Let’s work out what is different and work out why yours sucks :)
I run mine with default apps, couch to 5k, nutracheck, YouTube, eBay, Santander, OS maps, prime video, Netflix, slack, PayPal, Uber eats, duo mobile TOTP and that’s it. No social crap.
Mix of battery aging out and possibly more background activity.
The gen 11s have rather good battery life (I rarely dip below 50% at day's end even though I use my phone a lot) — though some folks have apparently reported drain issues — but they have almost double the battery capacity of an '8.
Yes. iPhone XR already had great battery life and 11 Pro and 11 Pro Max have more than 4-5 hours additional battery life compared to Xs and Xs Max. My 11 Pro can easily last 1.5 days with heavy usage.
Don't know about iPhones, but the only way my Nokia 7 plus will do 60 hours is with radios off, reading music from internal storage, and piping sound to a powered speaker.
I didn't catch it. Is Xcode 11 (beta) available already? Do I need macOS Catalina (beta) to run it? I seem to have missed the crucial information and cannot find it on Apple's developer pages...
Thank you, but that's how far I got myself. It is telling me there are no downloads available at this time. I am asking, because I am still on 10.14.5 and I thought the Xcode 11 beta might only be available on the 10.15 beta.
Check the “Applications” tab, the Xcode download is there (they redesigned the page, which makes it IMO worse). No idea if it runs on Mojave, but I would think so based on history.