> Joe didn't care whether his kid was in Princeton or Harvard, as long as it was one of them.
And oddly enough, JFK went to both Princeton and Harvard. He attended Princeton for six weeks before getting ill and withdrawing and then going to Harvard the next year.
Agreed, 1% body fat is extremely dangerous: at that level your body does not have enough fat to carry out its normal processes, and it is certainly not sustainable for any length of time. Professional bodybuilders have trouble getting down to ~3% body fat for their competitions, and they only have to hold that level for a day or two.
You seem to assume that the pattern of top submissions and top comments is unrelated to what users post. It seems to me that as the quality of the top submissions and top comments improves, the quality of the worst submissions and comments will improve as well. When there is no longer a reward for posting "bad" material, behavior will change accordingly.
My understanding is that academia is very competitive, so I don't understand how you are only going to work 9 hours a week. Further, you'll probably have classes to teach, which surely take up more than 9 hours a week. Also, since tenure is largely tied to the quality of your output, you can't always work on what interests you--you have to work on what you can publish.
I think you can certainly make an argument for military spending in terms of the side benefits that come with developing any complex technology, but I don't think those benefits justify it given the societal cost of increased militarization.
I did not find that comment condescending, though I expected to when he mentioned that he was going to explain how the budget worked. In fact, what struck me while reading the piece in its entirety was how well he advocated for his position and explained difficult ideas in an accessible way, without talking down to the nun or criticizing her for not sharing his point of view. If all scientists could write this effectively, their politics would be much better received, I'm sure.
Though I understand that you aren't denying the placebo effect, I'm confused about why you feel that a large amount of the placebo effect is due to bias. After all, most drug trials are double-blinded and placebo controlled, so surely in these trials the placebo effect is legitimate (assuming no scientific fraud).
The placebo arm is actually there in double-blinded trials to control for bias and for other factors related to the experiment itself. And no matter how good/legitimate the experiment, the act of measuring pretty much always causes changes.
The actual effect of the placebo is usually at most a small portion of all the factors that the placebo arm measures. Of course this depends heavily on the experiment in question, and some drugs (like psychiatric ones) show a higher placebo effect than others.
A lot of people seem to have the idea that the placebo effect is very big for medication outside of experiments, but most of the time it's very small to nonexistent.
There are complex interactions between the nervous and immune systems.
I don't think that anything has been demonstrated regarding allergies, but I wouldn't rule out a therapeutic effect in this case. Absence of evidence is not evidence of absence.
I must say I honestly don't know if they were allergies in the medical sense. For quite some time I couldn't drink large amounts (>1L/week) of milk, while milk products were fine. I always called that an allergy and only learned much later that it's not. ;)
> I'm confused about why you feel that a large amount of the placebo effect is due to bias
Two reasons: 1) bias is a sufficient explanation, and 2) the prior probability of biases affecting experiments is huge, while the prior for beliefs having a strong effect on physiology (with the exception of highly subjective phenomena like pain and mood) is comparatively quite low.
> so surely in these trials the placebo effect is legitimate
It would superficially seem so, but the key point to remember is that cognitive biases are not something we consciously apply. They are instinctive heuristics that worked really well at helping us survive in the ancestral environment (long before the concept of empirical tests), and even when we know about them, we can't turn them off. Cognitive biases work a lot like optical illusions. You've probably seen the checker shadow illusion[1]. You can understand how the illusion works, and know full well that the two squares are the same color, and you can even watch an animation that proves it to you[2], but when you look at the final image, square A will always look darker than square B. Knowing about the illusion doesn't fix it.
So even placebo controlled studies do not allow us to be unbiased in our perception. What they do is allow us to measure the effects of our biases, so that we can compensate for those effects in our calculations.
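To make "compensate for those effects in our calculations" concrete, here's a minimal sketch in Python; the improvement rates below are made-up numbers, purely for illustration:

```python
# Minimal sketch of compensating for the placebo arm: subtract the placebo
# arm's response from the treatment arm's response. All numbers are made up
# for illustration.

placebo_improvement = 0.30  # fraction of the placebo arm that improved
drug_improvement = 0.45     # fraction of the treatment arm that improved

# The placebo arm lumps together bias, natural recovery, regression to the
# mean, and any genuine placebo response.
estimated_drug_effect = drug_improvement - placebo_improvement

print(f"Improvement attributable to the drug: {estimated_drug_effect:.2f}")
```

The point is that the placebo arm doesn't make anyone unbiased; it gives you a baseline that already contains the biases, so the subtraction cancels them out of the estimate.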
This is actually under debate. There is disagreement among neuroscientists about whether intelligence is a result of brain modularity (specific regions of the brain optimized for certain tasks) or of brain size alone.

All that said, I'm skeptical of the singularity, since it assumes that a recursive process can continually improve intelligence without significant diminishing returns. The problem with this thinking is that it fails to take evolution into account. All this research into AI is based on the assumption that "intelligence" should be like human intelligence. But human intelligence has had several billion years to evolve and is highly optimized for our environment (actually our environment from several tens of thousands of years ago, when we were hunter-gatherers). It seems naive to me to assume that we are not nearing a local optimum in what is possible with human intelligence. I don't believe a singularity is possible, because by recursively improving "intelligence" you will approach the local optimum of that form of intelligence, but that does not mean you can keep improving that intelligence indefinitely.
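To make the diminishing-returns point concrete, here's a toy sketch (the starting value, ceiling, and gain fraction are arbitrary, illustrative choices): if each round of self-improvement only closes part of the remaining gap to a local optimum, the process converges instead of exploding.

```python
# Toy model of recursive self-improvement with diminishing returns.
# 'local_optimum' and 'gain_fraction' are arbitrary illustrative values.

local_optimum = 100.0  # ceiling imposed by the current form of intelligence
intelligence = 10.0
gain_fraction = 0.5    # each round closes half of the remaining gap

for generation in range(1, 11):
    intelligence += gain_fraction * (local_optimum - intelligence)
    print(f"generation {generation:2d}: intelligence = {intelligence:.2f}")

# The sequence approaches 100 but never exceeds it: recursion alone does not
# guarantee unbounded growth.
```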
This assertion is groundless. Why would Moore's law accelerate? In fact, Moore's law is currently decelerating and gains are increasingly hard to find. Further, Moore's law says nothing about computational speed, but only the density of transistors. These two are related but not identical.
Edit: also, from my understanding, the gains in transistor density largely depend on new discoveries in physics. Before a computer can aid in accelerating Moore's law it would need to be sufficiently advanced to generate new discoveries in physics, but at that point you'd have a computer smart enough that Moore's law wouldn't matter much. Seems like putting the cart before the horse to me.
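For what it's worth, here's a toy sketch of what Moore's law actually describes: an exponential in transistor count with a roughly two-year doubling period. The starting count and doubling period below are illustrative assumptions, not data, and the projection says nothing by itself about computational speed.

```python
# Toy projection of Moore's law as a doubling of transistor count roughly
# every two years. Starting count and doubling period are illustrative
# assumptions, not measured data.

def transistor_count(years_elapsed, start_count=1e9, doubling_period=2.0):
    """Project transistor count after a given number of years."""
    return start_count * 2 ** (years_elapsed / doubling_period)

for years in (0, 2, 4, 10):
    print(f"after {years:2d} years: {transistor_count(years):.2e} transistors")

# Note: this tracks transistor density only; clock speed and single-thread
# performance have scaled very differently.
```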
Transistor count and computational speed are so tightly correlated that this is a nitpick.
Now I did not assume "Moore's law" would accelerate. I assumed it would stay constant. The key point comes from the fact that the AI (or AI hive) would trivially benefit from hardware speed-ups.
And yes, Moore's law won't really count compared to the rest of what we will be able to do. I was just trying to be as conservative as possible. (Though Moore's law still holding until strong AI arrives would be quite wild.)
I think the comment was supposed to point out that Monier is supposed to be one of the principal inventors of reinforced concrete and that he passed away in 1906 ... hence trying to figure out how the gentleman in the story fit in.
What formula? It is concrete reinforced with rebar. My guess is the OP just misremembered the specific thing the patron invented. It's a great story either way.
Might just be misremembering the specific retort he made - he is famous for use of reinforced concrete. (I checked that after the last time I told the story, to make sure I hadn't invented that part.)