Saw the title and thought "oh, I'll bet it is at some insane high pressure or some other exotic condition". Clicked through to an image of a diamond anvil. Not disappointed.
I'm actually really happy they put the catch at the top of the article.
It's so annoying to read science articles about how X will revolutionize Y, but you have to dig through the comments section to find out why it won't work.
Most new research findings only have very specific applications. It's only groundbreaking when something can (eventually) be implemented in real life for a reasonable cost.
Quanta does some of the best science reporting. They have a knack for making highly complex and technical concepts accessible to the general public without sacrificing accuracy.
They don't get points for putting it at the top of the article - all they're doing is correcting their own misleading title.
And yes, I say misleading. Technically true but misleading, because the omission is absolutely critical to the nature of the breakthrough: as you implied, anyone who knows the first thing about room-temperature superconductors will want to know whether the material has a drawback that stops it from functioning outside a strictly lab setting.
Very fun read. There is something really enjoyable about stories of people solving the problem at hand in the most straightforward way possible, even when that means digging into all kinds of low-level details that are usually abstracted away for us.
Correction: The article states that Thomson's problem has only been solved for 2, 3, 4, 6, and 12 charges. However, in 2013 the 5-electron case was solved by Richard Schwartz. Here's the paper:
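For context, the Thomson problem asks how N classical unit charges arrange themselves on the unit sphere so as to minimize the total Coulomb energy:

```latex
E(x_1, \dots, x_N) \;=\; \sum_{1 \le i < j \le N} \frac{1}{\lVert x_i - x_j \rVert},
\qquad x_i \in S^2 .
```

The cases listed (N = 2, 3, 4, 6, 12, plus Schwartz's N = 5) are the ones where a configuration has been rigorously proved to be the global minimizer.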
I'm glad William included slide 10 calling attention to the hostile and insulting attitude Wolfram Research has toward mathematicians and reproducible science in general. (I think some of Sage Math Inc's other closed-source competitors likely have similar attitudes, but Wolfram Research seems to be the worst.)
"You should realize at the outset that while knowing about the internals of Mathematica may be of intellectual interest, it is usually much less important in practice than you might at first suppose. Indeed, in almost all practical uses of Mathematica, issues about how Mathematica works inside turn out to be largely irrelevant. Particularly in more advanced applications of Mathematica, it may sometimes seem worthwhile to try to analyze internal algorithms in order to predict which way of doing a given computation will be the most efficient. But most often the analyses will not be worthwhile. For the internals of Mathematica are quite complicated."
For comparison, if you want to audit the Sage Math algorithms that your research depends upon, all you need to do is fire up a text editor (or browse their GitHub). And you won't find any statement in the Sage Math docs telling you not to bother because you're too dumb to understand what you're reading anyway.
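As a concrete illustration of the general point (this isn't Sage-specific), any open Python-based system lets you pull up the actual implementation of a function from a running session with the standard library's `inspect` module; Sage's IPython-based shell exposes the same thing with the `??` suffix:

```python
import inspect
import fractions

# Print the real shipped source of the Fraction class -- no black box.
# Works for any pure-Python object whose source is installed.
src = inspect.getsource(fractions.Fraction)
print(src[:200])
```

The same one-liner works on any open-source library you depend on, which is exactly the auditability the closed-source quote above dismisses.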
This is why, when The Big Bang Theory debuted, I thought Sheldon Cooper was specifically supposed to be a parody of Stephen Wolfram.
In math, how you came by the results you came by is always relevant. Don't tell mathematicians they don't need to know that. It's their job to know that.
Well, no. They are not at all alike. Stephen Wolfram has a lot more _people skills_ than Sheldon Cooper. Sheldon Cooper is the stereotype of the Asperger scientist who succeeds despite his inability to interact with people, while Stephen Wolfram has been at the helm of a tech company for 30+ years, and that's not something you can really do without having to interact with other people.
I suspect Wolfram also succeeds despite the nature of his people skills. I found this letter from Feynman[1] to be an interesting and early read on his behavior.
Wolfram invented v1 of Mathematica, and founded a company to sell it to technical users. It takes minimal people skills to hire people and to sell a technically good product.
As a consequence, math publications that rely on closed-source computations are not independently verifiable, and fail to meet the standard of a rigorous proof.
Yes, though of course here Stein is referring to the Wolfram quote on slide 28 (roughly: certain kinds of development can't be done in academia), not the condescending rejection of inquiry into Mathematica's internals from earlier in the presentation.
You can do such a thing with MATLAB. Many, many of the function calls and algorithms in MATLAB can be directly viewed.
And honestly, it seems like diving into Sage may not be as trivial as you make it sound. Is it not a massive glue of many different languages and implementations?
The point is not that it is trivial, but that the system is set up so you can do it. This describes why that is important: http://www.ams.org/notices/200710/tx071001279p.pdf
It's not a filter, as others have suggested, but rather a standard artifact of vacuum-tube-based television cameras that were in use at the time. (I don't know why this lecture wasn't filmed, but the extensive dark halo effect makes it clear this was shot with a TV camera. This was the early era for magnetic video tape, but I assume that's how the recording was preserved.)
Anyway, the point is that these TV cameras are based on the fact that incoming light will dislodge electrons from a thin plate in a vacuum tube in an amount proportional to brightness. A very bright spot in the image produces a shower of electrons that is more powerful than the rest of the tube (the part that detects the electrons) can deal with. The net result of this "splash" of electrons is a mild desensitization of the detection apparatus around the bright spot. This makes the nearby stuff appear darker.
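A toy model of that desensitization, purely for illustration (the spill coefficient is invented, not a property of any real tube): each pixel's reading is reduced in proportion to the brightness of its neighbors, so a bright spike darkens its surroundings:

```python
# Toy 1-D model of tube "blooming": a bright spot desensitizes
# nearby detector regions, darkening its neighborhood.
# The 0.2 spill factor is made up for illustration.

def simulate_halo(scene, spill=0.2):
    out = []
    for i, b in enumerate(scene):
        # brightness of immediate neighbors in the scene
        left = scene[i - 1] if i > 0 else 0.0
        right = scene[i + 1] if i < len(scene) - 1 else 0.0
        # neighbors' electron "splash" subtracts from this pixel's reading
        out.append(max(0.0, b - spill * (left + right)))
    return out

scene = [0.1, 0.1, 1.0, 0.1, 0.1]   # one very bright pixel
print(simulate_halo(scene))
```

The pixels adjacent to the bright one come out darker than the rest of the uniform background, which is exactly the dark-halo artifact visible in the video.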
You mentioned that dark spots also seem to have a bright halo, but I don't see that in the video, and it isn't consistent with the usual artifacts of these cameras. Are you sure?
This looks nice. What I'd really like to see, along these lines, is a python library for automated document metadata extraction with confidence assessment, like this:
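Something along these lines, say (the module name and every identifier here are invented purely to sketch the desired interface; this is a stub, not a real parser):

```python
# Hypothetical API sketch -- "extract_metadata" and "Field" are invented
# names illustrating metadata extraction with per-field confidence scores.
from dataclasses import dataclass

@dataclass
class Field:
    value: str
    confidence: float  # 0.0 (no idea) to 1.0 (certain)

def extract_metadata(path):
    """Return title/author guesses for the document at `path`,
    each paired with a confidence score.

    Stub: a real implementation would actually parse the file.
    """
    return {
        "title": Field("Unknown", 0.0),
        "authors": Field("Unknown", 0.0),
    }

meta = extract_metadata("paper.pdf")
print(meta["title"].value, meta["title"].confidence)
```

The confidence score is the interesting part: it lets downstream code decide when to trust the extraction and when to fall back to manual review.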
I thought about the metadata thing but decided to exclude it for the earliest versions of textract to keep things simple. If you'd like to see it in there and have a good example of how you'd like to use metadata, please feel free to throw an issue on the issue tracker https://github.com/deanmalmgren/textract/issues/
As far as I have been able to tell, the public state of the art in academic paper metadata parsing is Grobid: https://github.com/kermitt2/grobid
Not quite as simple a command-line interface as you suggest, but not too hard to set up, and pretty impressive. Now if only Google Scholar would open-source whatever they use...