Am I understanding it correctly that the technique that was submitted yesterday would allow for similar frame rates even without relying on stroboscopy?
Am I wrong, or is this technique just very advanced stroboscopy? Even though it may appear we are following the same pulse of light, that's not what is happening.
It is not one shot of light traveling; every frame is created from a different laser pulse, captured at exactly the right moment.
What I wonder is how they figured out how to capture such short exposures. That is the technological achievement making this happen.
I’ve also hacked bigger voting buttons (for the sake of Fitts’s law [1]) and some indentation marks [0] which I find quite helpful, both included with ~/.js [2].
I wish these google pages wouldn't automatically assume you speak some language based on your IP address - I get mine in German without any option to switch. Aren't there standards for setting language in web browsers?
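There is a standard for this: the HTTP `Accept-Language` request header (defined in the HTTP semantics spec, RFC 9110), which browsers send based on the user's configured language preferences. Sites can honor it instead of guessing from the IP address. As a minimal sketch, server-side parsing of that header could look like this (the function name is my own, not from any particular framework):

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (tag, quality) pairs,
    sorted by descending quality; quality defaults to 1.0."""
    prefs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, q = piece.split(";q=", 1)
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0
        else:
            tag, quality = piece, 1.0
        prefs.append((tag.strip(), quality))
    return sorted(prefs, key=lambda p: -p[1])

# A German browser visiting an English site might send:
print(parse_accept_language("de-DE,de;q=0.9,en;q=0.5"))
# → [('de-DE', 1.0), ('de', 0.9), ('en', 0.5)]
```

A site that respected this would serve German here, but fall back to English if German content were unavailable, regardless of where the request's IP geolocates.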
Google collects all your data and applies machine learning to predict a probability value that you are human. If it is below a certain threshold you have to enter a CAPTCHA.
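As a toy illustration of that threshold logic (the score name and cut-off value are hypothetical; Google's actual model and features are not public):

```python
# Hypothetical illustration: a classifier emits a "humanness"
# probability in [0, 1]; scores below the threshold trigger a CAPTCHA.
HUMANNESS_THRESHOLD = 0.5  # assumed cut-off, for illustration only

def needs_captcha(human_score, threshold=HUMANNESS_THRESHOLD):
    """Return True when the predicted probability that the user
    is human falls below the threshold."""
    return human_score < threshold

print(needs_captcha(0.92))  # confident human → False
print(needs_captcha(0.12))  # likely bot → True
```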
Sorry, I just thought "all your data" was a bit unspecific. I have no idea what Google collects in order to derive that. I have a lot of guesses, but "all your data" doesn't in fact tell me how it works. And I think that is what the parent poster asked for.
Publishing and peer-review is one part of science, but you are missing an equally important one, which is coming up with (hopefully testable) hypotheses.
But how can you have something testable about something that does not exist yet? You would have to have evidence of an actual, intelligent program doing something hostile, which is obviously impossible at this time since AGI has not been invented yet.
However, we do have evidence of naturally occurring hostile intelligence. If an AGI is vastly superior in some regards (for example through much tighter integration with knowledge and control systems), it could in fact pose a danger. The article even clearly states a hypothesis: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."
We can’t really assign a probability of occurrence to scenarios like these, so we have to include them in the list of events that could potentially wipe us out and which we may be able to prevent. Since people are actively working on AGI, it should probably be somewhere near the top, where nuclear warfare, global warming and comet impacts are listed.
It seems irrational to make such a prediction about systems which we don’t understand at all.
It’s not clear how sophisticated a neural network needs to be in order to yield AGI. It could be very simple (then you would be wrong with high probability), or it could be extremely complex (then you would be right with high probability).
http://emacs.sexy