I know absolutely nothing about physics. But I still found this paper enjoyable to read, because it's written in the format of a discussion between fictional characters: https://arxiv.org/pdf/hep-th/0310077v2.pdf
Accessibility is almost certainly a large part of the reason the Greeks tended to write in dialogues. Greek dialogues tend to be very approachable, because of the way they have to step through issues.
I'm in the opposite camp. Nothing makes me groan so deeply as dialogues. And I know I'm not alone, either: plenty of people are bored away from Plato by the dialogic writing, especially because every other line amounts to nothing more than "Why, indeed, Socrates...".
Don't get me wrong. I like Plato (quite a bit). But dialogues seem about as approachable to me as poetry. They might be approachable in the sense that, since they simulate a conversation between two people who might not know each other, they necessarily lead to a much clearer exposition of whatever topic is at hand. It is indeed as if you were watching two complete strangers try to get their ideas across to one another. But it takes a completely different kind of "reading" from the one a person might use when reading prose or a pure exposition.
This comes off as snarky and aimed at a strawman position. Only a very small (possibly religious) fraction of AI researchers believe that superhuman intelligence is impossible.
That's not true. Even in a survey done by Nick Bostrom which was probably biased towards researchers who believed in possible superhuman intelligence, 41% of researchers thought it would never be achieved.
And on the other hand, the question asked about something milder than superintelligence: just whether we would be able to simulate all aspects of human intelligence, particularly learning. 18% said that no line of research (e.g. cognitive science, artificial neural networks) would even contribute to the aim of human-level machine intelligence. Moreover, the paper quotes one researcher who refused even to take the survey, believing that asking these questions at all was inherently 'misguided'.
It's interesting that we're even having this discussion: not so many years ago the exact opposite argument was common, i.e. that the AI researchers who think this is even a possibility are such a sidelined minority that we shouldn't give their view much weight. In fact, I'm pretty sure Bostrom's survey was done in part precisely to show that there are real researchers who think human-level AI is possible in the not-that-distant future.
I hope people stop citing that survey. NIPS and ICML authors are AI researchers; attendees of PT-AI, AGI, or EETN are likely to be AI philosophers and singularitarians, and the TOP100 authors are likely to be older and uninformed about current research. And anyone who agrees to take such a survey is likely to hold an extreme or controversial opinion (yes, respondents were self-selected).
This is a bit off-topic, but it reminds me of a much under-reported aspect of Aaron Swartz's document collecting:
As I understand it, he intended to analyze the resulting data set for patterns. Meta-analysis of said scientific literature, so to speak.
Of value in itself, for several reasons. Scientific. Personal interest -- interesting language and rhetoric.
One very important one -- that can be lumped under the important work of science as well as that of understanding human endeavor in all its aspects: Fraud detection.
We have all been reading about both mistakes and deliberate misrepresentation in scientific publications, including in some very influential works that were subsequently retracted.
ArXiv is a step in the right direction. When we keep all these scientific results locked up to the point where such meta-analysis and subsequent science is impossible, we have greatly shackled the process of scientific research, and with it our own human endeavor and progress.
By the way, in connection with the recent public conversation about economics, some view this as another form of rent-seeking: people who don't produce, but instead gather the "rights" to the results of production and then divert as much of their value as they can into their own pockets -- often diverting it away from investment in the actual work from whence it comes.
I really enjoy how most of the abstracts in these papers read more like ELI5s, or are just plain understandable. An understandable abstract would be valuable for many papers, making large, complex fields accessible to people outside them. Would there be any downside? The only one I can think of is that conclusions could sound more convincing in a simplified summary than the evidence warrants.
Having an understandable abstract is indeed very important for the success of a paper, and scientists try very hard to come up with good ones; writing the abstract is one of the harder parts of writing a paper. However, what you find understandable and what an expert in the field finds understandable are often very different.
Sal – I am reading a book, written long ago, where I just found this phrase: "perché i nostri discorsi hanno a essere sopra un mondo sensibile, e non sopra un mondo di carta." Roughly: "our arguments have to be about the world we experience, not about a world made of paper."
Well, I was rooting for her because the story paints her as the hero, and because not being able to think of a falsifying experiment for your favourite theory is pretty damning.
Honestly I don't understand the specifics of the theories, and I wouldn't know if string theory is actually not falsifiable.
On a second read I realise that this is a straw-string-theorist set up to be knocked down. It was still an enjoyable read, though.
The Rovelli paper dates back to 2003, it looks like. Has a likely winner emerged since, or would 'Sal' and 'Simp' end up rehashing the same points in a similar conversation today?