Hacker News | TooKool4This's comments

Very interesting article, would definitely like to read the paper when I get the time.

Something I would like to understand is how these old systems scaled the reference measurements to weights an order of magnitude larger or smaller. Up until recently (before the SI redefined the kilogram in terms of physical constants) this was an issue even with the modern definition: to measure larger and larger weights we essentially had to resort to larger reference weights, and the manufacturing complexity grows significantly if you want to hold the same relative uncertainty. The same applies when measuring masses significantly smaller than the 1 kg reference prototype.


Given the simple balance pictured in the article, it should be fairly easy to scale up or down by small integer ratios.

First, you can put two equal weights on the scale, then swap them, to make sure both that they are the same and that the scale is fair.

Next, you could make duplicates of that weight. You could then use some small number of them to make a heavier standard.

You could also make two smaller weights, each slightly heavier than half the standard, and grind them down until together they add up to one standard unit. Then shave the heavier of the two down a bit at a time until the pair are equal; each is now half a standard.

With time, you could probably build up a fairly good set of standards, with each step up or down accurate to within the sensitivity of the scale.

All of this without a modern beam balance.
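To put rough numbers on it, here is a hypothetical error budget (the sensitivity figure is an assumption for illustration, not a historical value): if each comparison on the balance is good to some fraction of the load, then in the worst case errors add linearly as you chain doublings away from the prototype.

```python
# Hypothetical error budget for a weight ladder built by repeated doubling.
# Assumption: each comparison is good to `eps` (a fraction of the load),
# and worst-case errors simply add across chained comparisons.
eps = 1e-4  # assumed balance sensitivity: 0.01% of the load

def ladder_error(steps, eps=eps):
    """Worst-case relative error after `steps` chained comparisons."""
    return steps * eps

# An 8x standard is three doublings from the prototype:
print(ladder_error(3))
```

So each factor of two costs you roughly one balance-sensitivity of accuracy, which is why a careful workflow returns to the master reference as often as possible.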


Is it really that hard? You make a lever/beam with notches at regular intervals. Move the hooks to set the ratio, and you can compare weights at arbitrary ratios. Sure, there's some loss of precision at each step, but assuming you minimize the stacking of errors and return to the master reference whenever possible, I'd imagine it would be precise enough for pre-scientific uses.

Note that many old measurement systems use factors other than 10, ones built from ratios of 2 and 3, which are very easy to realize with levers and beams.
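The physics behind that notched beam is just the torque balance m_ref * d_ref = m_x * d_x, so integer notch positions give you small-integer ratios directly. A quick sketch of the arithmetic:

```python
# A beam balances when torques match: m_ref * notch_ref == m_x * notch_x.
def unknown_mass(m_ref, notch_ref, notch_x):
    """Mass hung at notch_x that balances m_ref hung at notch_ref."""
    return m_ref * notch_ref / notch_x

# Reference at notch 3, unknown at notch 1: the unknown weighs 3 units.
print(unknown_mass(1.0, 3, 1))
# Reference at notch 2, unknown at notch 3: a 2/3 ratio from small integers.
print(unknown_mass(1.0, 2, 3))
```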


> I would also enjoy a maximize button that does maximize the window and not create a virtual space with the application in full-screen.

This was the biggest thing that annoyed me when switching from Windows. I’ve been using SizeUp for 5 years now without any problem and it’s great, especially when using a 4K screen (quarter screen window tiles).

https://www.irradiatedsoftware.com/sizeup/


I am not sure it's a problem of our species (most species seek to expand their footprint in the world), but given that we have the cognition to understand our impact on the world, it seems imperative that we minimize it to some extent.

But after all that, with the right perspective, what you describe can almost be seen as a fundamental law of nature (entropy). An ordered system naturally trends towards disorder given no external input, and it's much easier to have a disordered system than an ordered one.


Transferring songs and playlists is actually very easy (unless you are listening to very niche stuff). Songshift (no affiliation) worked very well for my needs and moved most of my playlists and songs over. Not sure about Discover Weekly, though.


I second Songshift.


Aside from the language features, some of the libraries in Julia make it really useful for statistical computing. One really cool library I am trying to use more and more is Measurements.jl [1]. With Julia's multiple dispatch it's super easy to integrate into most problems, and it lets you estimate error bounds on the values a program produces. Super important for scientific applications.

I am hoping that in the future I can mix this in with some auto-diff problems to get uncertainty bounds on estimation problems with minimal fiddling with covariance matrices. Right now, performance is the only obstacle to integrating the library into pretty much any problem :(

[1] https://github.com/JuliaPhysics/Measurements.jl
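For anyone outside the Julia world, the core idea is first-order (linear) error propagation. Measurements.jl also tracks correlations between quantities, which this simplified Python analogue deliberately ignores (independent inputs only):

```python
import math
from dataclasses import dataclass

@dataclass
class Measurement:
    """A value with a 1-sigma uncertainty, propagated to first order.
    Unlike Measurements.jl, this ignores correlations between inputs."""
    val: float
    err: float

    def __add__(self, other):
        # absolute errors add in quadrature for sums
        return Measurement(self.val + other.val,
                           math.hypot(self.err, other.err))

    def __mul__(self, other):
        # relative errors add in quadrature for products
        v = self.val * other.val
        rel = math.hypot(self.err / self.val, other.err / other.val)
        return Measurement(v, abs(v) * rel)

g = Measurement(9.81, 0.02)   # local gravity, with uncertainty
m = Measurement(2.50, 0.01)   # a measured mass
w = g * m                     # weight with a propagated uncertainty
print(w)
```

The appeal of doing this via multiple dispatch in Julia is exactly that no wrapper class is needed: any generic numeric code just works on `Measurement` values.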


If I understand correctly, the audiophile review is claiming to recommend a network switch based on a difference of a few picoseconds of jitter!? Surely that is outside the range of human perceptibility, right?

The main issue here is conflating a statistically significant effect with a practically significant one. With a very high-powered experiment you can detect very small effects, but that doesn't mean they have a meaningful effect on the outcome.

I really think the simple hearing test is best but should incorporate common tools of measurement science such as repeated tests and multiple, varied listeners. Audio “quality” is very subjective and can come down to preferences driven by culture, age, experiences, etc. Some sort of consensus rating over all reviewers should be done.

Edit: also the rule of 3s (never measure twice, always once or three times) is particularly relevant to this article :)
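To make the statistical-vs-practical significance point concrete, a small sketch (the numbers are made up): with a million samples per group, a 0.01-standard-deviation difference in means is overwhelmingly "statistically significant", yet the effect size stays negligible.

```python
import math

def two_sided_p(effect, sd, n):
    """p-value of a two-sample z-test for a mean difference `effect`,
    with per-group standard deviation `sd` and n samples per group."""
    se = sd * math.sqrt(2.0 / n)
    z = abs(effect) / se
    return math.erfc(z / math.sqrt(2.0))

effect, sd, n = 0.01, 1.0, 1_000_000
p = two_sided_p(effect, sd, n)
cohens_d = effect / sd
print(p, cohens_d)  # p is astronomically small; the effect size is tiny
```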


Indeed, buffering the data by even just one or two sample periods and running the DAC from a local clock would be enough to eliminate any reasonable effect of network behavior. And it's probably already buffered by more than that, if data are sent as packets.

There's always the possibility that the equipment under test is itself badly designed, e.g., that the DAC is unstable against tiny changes in network behavior. That might tend to magnify tiny errors that would be accommodated by better designed but less esoteric equipment. Subjectively choosing among badly designed components could explain a lot, including why laypeople don't perceive the same effects when listening to mainstream gear.
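A toy simulation of the buffering point (sample rate, jitter bound, and buffer depth are made-up numbers): as long as packets arrive within the buffer depth, output timing is set entirely by the local clock, so network jitter never reaches the DAC.

```python
import random

random.seed(0)
T = 1.0 / 48000  # sample period at an assumed 48 kHz rate
n = 1000
# Packets arrive with bounded network jitter of up to +/- 0.3 sample periods.
arrivals = [i * T + random.uniform(-0.3 * T, 0.3 * T) for i in range(n)]
# The DAC starts its fixed local clock two sample periods late (buffer depth 2).
dac_reads = [(i + 2) * T for i in range(n)]
# A read underruns only if its sample hasn't arrived yet.
underruns = sum(1 for a, r in zip(arrivals, dac_reads) if a > r)
print(underruns)  # 0: the output samples tick on the local clock alone
```

Since the worst-case arrival is 0.3 sample periods late and the buffer holds two, the read times, and hence the analog output timing, are completely decoupled from the arrival jitter.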


> Unlikely things happen all the time. If he lost money, this article wouldn’t exist.

Exactly! The person you replied to did the right calculation but completely threw away the context. The argument here is akin to p-hacking where all the investors in the world are the experiments and this article merely picked the one that got lucky.

Different scenario but similar argument: I roll a large set of dice, five rolls each. If the set is large enough, one of the dice is likely to land all 6s. While that is unlikely for any individual die, it doesn't automatically mean the dice are unfair. It's not enough just to reject the null hypothesis; you also need to support the alternative hypothesis, ideally with a solid working theory for why it is correct.
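The dice arithmetic makes the point nicely: for one fair die, five sixes in a row has probability (1/6)^5, about 1 in 7776, yet across a large enough pool of dice you should expect to see it.

```python
# Probability that a single fair die shows five sixes in five rolls.
p_streak = (1 / 6) ** 5  # about 1/7776

def p_at_least_one(n_dice, p=p_streak):
    """Chance that at least one of n_dice fair dice rolls all sixes."""
    return 1 - (1 - p) ** n_dice

print(p_streak)               # ~0.000129 per die
print(p_at_least_one(10_000)) # with 10,000 dice the "miracle" is likely
```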


I tried applying the wisdom of crowds idea instead :D https://twitter.com/SmittyW62858649/status/13589265825139957...

Not sure if taking a median is the best idea. I figure a gamma distribution might work better!


I tried this a few years ago with a jar-of-coins problem on Twitter. I think it was Orange in the UK; the winner won the value of a tube full of pennies (if I remember right).

You were allowed X guesses per day, per account (so I roped a few friends in). Since the guesses were public, it seemed like a good opportunity to try the theory out. The closest guess would win. The strategy was pretty simple: use your guesses to bracket other people's answers. So if someone guessed £100, you would guess £99.99 and £100.01. Of course someone might guess the exact value, but I don't think that happened (or maybe it did?).

I wrote a script (I think in PHP of all things) to poll the hashtag and put all the entries into a database, with some code to figure out what ranges we needed to cover each day. There may also have been some heuristics to account for the fact that you couldn't possibly cover everyone's guesses, but you could try to maximise the range of values that you would win for. I think we also tried to weight the suggested guesses around the median value.

In the end it didn't win, but it was a fun exercise. I think the winning answer was very close to the median.
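For anyone curious, the bracketing part boils down to something like this (a reconstruction in Python, not the original PHP script):

```python
# For each public guess, claim the two adjacent penny values, skipping
# anything already guessed (by others or by us).
def bracket_guesses(public_guesses, already_taken=()):
    taken = set(already_taken) | set(public_guesses)
    ours = []
    for g in public_guesses:
        for candidate in (round(g - 0.01, 2), round(g + 0.01, 2)):
            if candidate not in taken:
                ours.append(candidate)
                taken.add(candidate)
    return ours

print(bracket_guesses([100.00, 250.50]))
```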


That’s pretty cool! I’m going along in the same vein. I know my median guess was taken first so I can’t actually win but I just wanted to see how close I could get out of curiosity.

I am surprised the median was an accurate estimator in your case, but I suppose it makes sense if you are not close to the lower bound of zero (i.e. it starts approximating a normal distribution).


Were there any gaps in the guesses? You have to be first to guess the correct answer so taking an un-picked lottery number in the vicinity of the median would be a decent strategy.


Yep, looks like there are, but they are in the higher range of the distribution (>300). Updated the post with some interesting plots.

I would really like to use a z-score metric to remove the outliers and fit a gamma distribution, but unfortunately it's a busy Monday!
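If it helps, the whole pipeline fits in a few lines of stdlib Python (the guess values below are made up): drop |z| > 3 outliers, then do a method-of-moments gamma fit, with shape k = mean^2/variance and scale theta = variance/mean.

```python
import statistics

def trim_outliers(xs, z_max=3.0):
    """Drop values more than z_max sample standard deviations from the mean."""
    mu, sd = statistics.fmean(xs), statistics.stdev(xs)
    return [x for x in xs if abs(x - mu) / sd <= z_max]

def fit_gamma_moments(xs):
    """Method-of-moments gamma fit: k = mean^2/var, theta = var/mean."""
    mu, var = statistics.fmean(xs), statistics.variance(xs)
    return mu * mu / var, var / mu

guesses = list(range(140, 235, 5)) + [5000]  # made-up data, one wild guess
clean = trim_outliers(guesses)
k, theta = fit_gamma_moments(clean)
print(len(clean), round(k, 2), round(theta, 2))
```

One caveat with the z-score trim: on very small samples a single outlier inflates the standard deviation enough to hide itself, so it works best once you have a few dozen guesses.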


I think trying to fit a single solution to the entire range of schooling is a bad idea.

In the earlier years of your life, a big part of schooling is “learning how to learn”, whether implicitly or explicitly. By the time you get to university you have a good set of skills (taking notes, referencing literature, practicing problems), so more time is spent learning the material rather than developing those meta-learning skills. I think that might explain the difference between your experience and your colleagues’.

I haven’t thought too much about the pre-K to grade school level, but at the university level it seems like universities are making an incredibly lackluster attempt at remote learning. They seem to have just taken the lecture aspect of university, thrown it into a Zoom meeting, and called it a day. For the tuition that students are paying in the US (10k to 50k per year) there is so much room to provide value in alternative ways. I understand a large portion of those fees goes towards paying faculty salaries, but surely without having to run facilities they can invest that money in providing more value in remote learning: things such as VR lectures/labs, improving the social aspect of remote university, more small-group learning with more remote TAs, etc. There is so much opportunity here, but there seems to be very low effort on the university side, probably due to massive sunk costs on the in-person learning side.


In addition to "learning how to learn", early schooling especially offers a lot of social benefits, plus (depending on the country, etc.) food and time away from home, which are important to a lot of kids (unfortunately).


I believe there would be minimal practical impact in any case.

In the case of anisotropy of the physical constants, it would have to be so small that to date independent labs around the world haven’t detected a difference outside of the measurement uncertainty.

In terms of time variance, similar logic applies. I would also add that variation of the physical constants over the span of a few decades is not very likely in the grand scheme of the universe.

It’s the same reason that nothing in your life materially changed when we switched the definition of the kilogram last year, unless you are a metrologist ;). The variations being discussed here have to fundamentally be below our current measurement uncertainties and our current measurement uncertainties are very very small.

