Probably to help someone know what paragraph you're referencing if you're having a discussion. I could see it working better if it only appeared when you hovered over to the left of the paragraph.
How do people attain downvoting privileges? A possible solution would be to simply remove many people's downvoting ability. Reserve the right for an elite few who have insightful things to say and are sparing in their usage.
This is a new account and I've found that it's surprisingly liberating to not have a downvote option. When I disagree with someone, I can either reply or I can just ignore the comment. Good content will sift to the top anyway through upvotes. I would be interested in discovering any issues behind such a system.
Maybe it'd be a good idea for the site admins to check the first 5-10 downvotes of someone who just received downvoting privileges, and if they're inappropriately used, take the downvoting privileges away from that person.
Some sort of double checking of downvotes is pretty common in many places that support them. We had metamoderation even on good old Slashdot.
Now, at least there are plenty of HN users who actively look at downvoted content and bring it back up if they think the downvote was unfair. This behavior could be used to evaluate frequent downvoters: if someone consistently downvotes comments that end up with positive scores, then maybe their downvoting privileges should be revoked.
An algorithm like that should be pretty easy to tweak, too: start by removing, say, the bottom 1% of downvoters ranked by downvote quality, and adjust it as needed.
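Something like this toy sketch, say (the data model, field names, and threshold are all made up; obviously not HN's actual internals):

```python
# Toy sketch: rank downvoters by how often the comments they downvoted
# ended up with a positive score anyway, and flag the worst 1%.
from collections import defaultdict

def worst_downvoters(downvotes, final_scores, cutoff_fraction=0.01):
    """downvotes: list of (user_id, comment_id) pairs.
    final_scores: dict mapping comment_id -> eventual comment score.
    Returns the worst `cutoff_fraction` of downvoters."""
    overruled = defaultdict(int)  # downvotes on comments that ended up positive
    total = defaultdict(int)
    for user, comment in downvotes:
        total[user] += 1
        if final_scores.get(comment, 0) > 0:
            overruled[user] += 1

    # "Bad downvote rate" = share of a user's downvotes the community overruled.
    bad_rate = {u: overruled[u] / total[u] for u in total}
    ranked = sorted(bad_rate, key=bad_rate.get, reverse=True)  # worst first
    n_revoke = max(1, int(len(ranked) * cutoff_fraction))
    return ranked[:n_revoke]
```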
Incredible and terrifying at the same time. If they're doing this kind of stuff right now with consumer cameras, imagine how effective this technology will be in just a few decades. Privacy is fading quickly with the advent of exciting technology like this.
> Because of a quirk in the design of most cameras’ sensors, the researchers were able to infer information about high-frequency vibrations even from video recorded at a standard 60 frames per second.
The audio from the 60fps video sounds pretty bad though, which I suspect is mostly because of inherent maths/physics limitations rather than anything that software can improve.
Edit: They mention capturing frequencies up to five times higher than the 60 Hz frame rate, i.e. a maximum frequency of 300 Hz. That is the equivalent of a 0.6 kHz sample rate, roughly 1/73.5 of a CD's 44.1 kHz. I doubt you'd get intelligible speech from current consumer hardware using this technique.
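Spelling out that back-of-the-envelope arithmetic:

```python
# Rough arithmetic behind the numbers above (not from the paper itself).
frame_rate = 60                              # Hz, standard video
max_recovered = 5 * frame_rate               # article: up to ~5x the frame rate
equivalent_sample_rate = 2 * max_recovered   # Nyquist: sample rate = 2x highest frequency
cd_sample_rate = 44_100                      # Hz

print(max_recovered)                              # 300 Hz
print(equivalent_sample_rate)                     # 600 Hz, i.e. "0.6 kHz audio"
print(cd_sample_rate / equivalent_sample_rate)    # 73.5
```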
There is some small possibility of improvement through software techniques, such as data assimilation, which can use information from surrounding time-frames to improve each measurement. This assumes that the magnitude of the vibrations changes much more slowly than the vibrations themselves, which is usually true and is how most audio compression works. It may be able to clean up the sound a little. However, I would say that the results they have obtained so far are already very impressive.
The data comes in faster than 60 fps. A camera sensor doesn't capture the entire frame instantly every 1/60 second. It progressively scans through the frame over some measurable fraction of that 1/60 second. This is that quirk.
Suppose the camera scans 720 lines in HD every 1/60 second. Each row is offset in time by 1/43200 second. A rigid object could be slightly offset in space on each line of pixels, indicating that sound waves perturbed it in the time gap between when the camera captured each line. So that subframe video data can be turned back into audio at a much higher frequency than that apparent 60 Hz video sampling rate.
In other words, we're not just talking about 60 frames-per-second from a camera. It's really perhaps 43,200 rows per second, an enormously higher sampling frequency.
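A toy model of that idea (not the paper's actual pipeline, just illustrating why the row rate is what matters): give each scanned row its own timestamp, and a 440 Hz vibration that is hopelessly aliased at 60 fps is still well within the Nyquist limit of the 43,200 Hz row rate.

```python
import numpy as np

# Toy rolling-shutter model: each of the 720 rows in a 60 fps frame is read
# out at a slightly different time, so row-level measurements arrive at
# 720 * 60 = 43,200 samples per second rather than 60.
rows_per_frame = 720
frame_rate = 60
row_rate = rows_per_frame * frame_rate       # 43,200 rows/s

n_frames = 120                               # 2 seconds of video
n_samples = n_frames * rows_per_frame
t = np.arange(n_samples) / row_rate          # timestamp of each scanned row

# Pretend a rigid object in the scene vibrates at 440 Hz; each row records
# its sub-pixel displacement at that row's readout time.
vibration = np.sin(2 * np.pi * 440 * t)

# 440 Hz is far above the 30 Hz Nyquist limit of the 60 fps frame rate, but
# well below the 21,600 Hz Nyquist limit of the row rate, so the row-level
# signal still contains it.
spectrum = np.abs(np.fft.rfft(vibration))
freqs = np.fft.rfftfreq(n_samples, d=1 / row_rate)
print(freqs[np.argmax(spectrum)])            # ~440 Hz
```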
Yes, yes, that was completely obvious from the article. We are getting thousands of "measurements" per second.
However, each of those measurements is incredibly inaccurate. Each one is trying to detect a colour change of perhaps 1/200 of the colour range in a single pixel. You may be getting less than a single bit of entropy per measurement.
An advanced signal processing technique will look at the longer-term picture. Sound vibrations are not a random walk - they tend to be a combination of sine waves whose amplitudes change much more slowly than the vibrations themselves. That makes them predictable to a certain extent, and this predictability is exactly what audio compression algorithms exploit. The signal processing algorithm would have to take the extremely limited information coming from the measurements and match it up against possible sets of slowly varying sine waves that could be causing them. That may be enough to reject some of the noise we can hear in that video and clean up the sound a bit, but it is quite a hard (and CPU-intensive) processing task.
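A toy illustration of that kind of prior (nothing like the researchers' actual algorithm): quantize a couple of sine "vibrations" down to one noisy bit per measurement, then keep only the dominant spectral peaks. The structured sine content survives even that brutal quantization.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 2000                                    # toy sample rate, samples/s
t = np.arange(2 * fs) / fs                   # 2 seconds

# "True" vibration: two sine components, one with a slowly varying amplitude.
signal = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 220 * t) \
       + 0.5 * np.sin(2 * np.pi * 330 * t)

# Each measurement carries roughly one noisy bit of information, mimicking
# the "1/200 of the colour range in a single pixel" situation.
measured = np.sign(signal + rng.normal(scale=2.0, size=signal.shape))

# Exploit the sine-wave structure: keep only the strongest spectral bins.
spectrum = np.fft.rfft(measured)
keep = np.abs(spectrum) >= 0.3 * np.abs(spectrum).max()
cleaned = np.fft.irfft(spectrum * keep, n=len(measured))  # denoised waveform

# Typically only the 220 Hz and 330 Hz bins stand out above the
# quantization noise, which is why structured sound is partly recoverable.
freqs = np.fft.rfftfreq(len(measured), d=1 / fs)
print(freqs[keep & (freqs > 0)])
```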
Let's say it reads the entire image in 1/120 second, then waits and does nothing for another 1/120 second before it starts reading the next frame. The real readout window is probably significantly smaller than that, so they cannot bump the effective sample rate up by more than five or six times. And I imagine they are already using some intelligent algorithm to evenly space out the captured samples.
A CD's sample rate is massive overkill for basic speech. Doing a test here with some speech samples, 2 kHz is ugly but intelligible, 1 kHz is a mess but mostly understandable with effort, and 600 Hz is almost useless for picking out words (without any practice or computer assistance, of course).
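If anyone wants to repeat that test, something along these lines works (assuming scipy and a mono 16-bit WAV file; "speech.wav" is just a placeholder name):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

rate, audio = wavfile.read("speech.wav")        # placeholder filename
audio = audio.astype(np.float32) / 32768.0      # assumes 16-bit PCM, mono

for target in (2000, 1000, 600):
    # Downsample to `target` Hz and back up again: this discards everything
    # above target/2 Hz while keeping a sample rate any player can handle.
    down = resample_poly(audio, target, rate)
    back = resample_poly(down, rate, target)
    wavfile.write(f"speech_{target}hz.wav", rate, back.astype(np.float32))
```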
"intelligible speech", are you sure that speech recognition really requires whole frequency range? Often it seems that data can be extracted after all, even if most of it is missing.
Isn't that why he was writing this? To explain that his mistakes aren't representative of what he now knows he should do as CEO? I understand what you're saying on a literal level, but it's very possible he's been a good CEO in other areas that he hasn't mentioned here. I would hold off on the judgment.
Going along with your analogy, it's very possible that if this Central Asian George R. R. Martin had been deterred from writing Game of Thrones in his native language because it isn't "reasonable" to expect others to read his works, then it might never have come into existence in the first place.
Look, I understand your point but sometimes when you want something done you don't mess around and care about what other people want. You just do it in whatever way is easiest for you. Saying you "hate" these people who actually have CREATED these things that have added value to people's existence seems really rather immature. It's not like lives are going to be saved because DF was open sourced.
It's definitely a bit of an issue, but less of one than people tend to believe. There are many decks that people bring to Legend rank that cost very little in terms of dust.
It is. Additionally, the person who gets the Coin also gets an extra card in their starting hand. You can "mulligan" this card, i.e. swap it out of your starting hand for another draw. Although this might seem relatively trivial, having an extra chance to mulligan a card significantly increases your chances of getting a card that you want in a specific matchup.