Hacker News | npk's comments

I have a serious question that might sound unserious. The articles I read about Boeing seem to focus on assembly-line issues. As someone who flies a lot, assembly problems scare me, but so far assembly qv hasn't been catastrophic (right?). It seems the real issue is a design flaw in the 737 MAX pilot interface. Aside from some articles that feel vague (e.g., that competition with Airbus led to some poor decision making), the chain of decisions that led to the design flaw isn't really reported on (right?). Do you all have the same read?


> assembly qv hasn’t been catastrophic (right?)

The door plug that blew out of a plane because the screws weren't installed didn't kill anyone, but only because the nearest passengers had their seatbelts fastened.

Is there any reason to think that a plane that's missing screws on a door plug doesn't have improperly torqued fuel lines, or defective emergency oxygen generators, or metal burrs rubbing on the control lines to the tail, or a software crash in all three "triple-redundant" computers?


Carnegie Observatories (in Pasadena CA, not the telescope) is hosting an open house today (https://carnegiescience.edu/carnegie-observatories-2023-open...). It’s also almost the 100-year anniversary of one of the most important discoveries in science (and took place at Carnegie).


It turns out that this wavelength of infrared is short enough that it sees through windows just fine.


I personally had a bad customer service experience with them and switched. It's not obvious to me whether you can generalize from one person's experience; however, the crux of my issue is their choice not to let you call them on the phone. It looks like this is still true (but I'm not certain). As a result of that cheapo policy, and the particular problem I was having, I switched.


They answered this question many years ago.

http://news.ycombinator.com/item?id=23938


Being proud, egotistical, or a braggart is a human characteristic. I mention this for two reasons. First, without the benefit of direct contact, you really have no idea what kind of person Sivers is, or what his intention was when posting. Second, even if he is an egotistical bastard, so what?

Few people are in a position to donate $20M to charity. Fewer still are willing to give up what they worked so hard to make. This puts Sivers in the top (some small number) percent of the human population by an important, objectively measurable criterion.

Personally, I enjoy learning more about people I admire, flaws and all.


The standard deviation is simply a measure of the spread of a distribution. There's nothing wrong or right about a high standard deviation. In fact, a high standard deviation just means that you should expect highly variable performance.

Look at the figures. The performance of Slicehost follows a sawtooth-like pattern. The standard deviation is useful because it quantifies what to expect: for a roughly normal distribution, about two-thirds of measurements fall within one standard deviation of the mean.

If you think about the problem a little bit, you might be more worried about the standard deviation of the standard deviation. This, in fact, would be a useful quantity, but hard to measure.

EDIT below this line ------- Several comments below say that the SD is somehow less useful if it's "large" (or large relative to the mean, or whatever). The reason people think large SDs indicate a poor experiment is that in school lab classes one calculates the SD and calls it the "error".

The standard deviation is a measure of spread; if it's large, then the spread is large. Knowing the spread has value. In this case, under the parent's experimental conditions, EC2's performance is more consistent than Slicehost's.

A fair critique of the blog post is that the error on the standard deviation may be large, depending on the experimental conditions. It is _not_ a fair critique to say that the SD is too high to make a prediction; you just have a larger performance spread. Note that the performance spread described is not necessarily "error". The spread is inherent either to the server (as implied by the article) or to the method (in which case it is an error).
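To illustrate the "±1 SD covers about two-thirds" point, here's a quick sketch with made-up benchmark timings (the mean and SD below are arbitrary, not from the article):

```python
import random
import statistics

# Hypothetical benchmark timings: a "noisy" host simulated as draws
# from a normal distribution with mean 100 ms and SD 30 ms.
random.seed(0)
samples = [random.gauss(100, 30) for _ in range(10_000)]

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# Fraction of samples within one standard deviation of the mean.
within = sum(mean - sd <= x <= mean + sd for x in samples) / len(samples)
print(f"mean={mean:.1f}  sd={sd:.1f}  within 1 SD: {within:.0%}")
```

For roughly normal data the printed fraction lands near 68%; for a sawtooth-like distribution the coverage differs, but the SD still quantifies the spread.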


I understand the performance-variance characteristics. What concerns me about these tests is that the graphs are not continuous: they show immediate dips instead of gradual curves. This suggests that the sample size wasn't large enough, or at least that the graph needs higher resolution if data exists to support it. Also, at the bottom, the article gives the numerical data, with the mean and standard deviation. However, the means for the hosts with high standard deviations are essentially useless, because the standard deviation is so high. If we are going to compare average performance between cloud hosts, let's at least have useful averages to base our opinions on.


I think that low standard deviation / variance is desirable. It provides a level of performance that is predictable and reliable. Then you just scale up according to your processing needs.


> "Long Jumps" is a myth imho.

You are 100% on point.

The parent poster's point on conformity is well taken. Still, "independent" researchers are viewed with suspicion because research is hard. As psranga points out, to make a long jump a researcher has to stay on top of the incremental steps. Unfortunately, for 99.9% of people, doing this requires being fluent with the literature and, much more importantly, talking with other researchers. No one works in a vacuum (and don't bother bringing up counterexamples like Einstein or Newton).

Conformity is a big problem. It might arise when too many PhDs are hustling for a small pie. I'm not sure.


Conformity is a big problem.

What was the big heresy for one generation changes into "what you have to say to get tenure" in another.


SparkFun just started selling a book called "Electrical Engineering 101". I've never read it, but I imagine it's pretty good.

Do you have access to a good electronics lab? A lab + H&H might be sufficient. I learned 80% of my electronics this way, but the last 20% was learned from old crusty EEs. If you don't have access to a lab, you can build your own (oscilloscope, function generator, power supply) for a few hundred dollars. I don't know where you can find old crusty EEs.


If you want to learn how to implement MCMC I recommend:

Bayesian Logical Data Analysis for the Physical Sciences by Gregory

Gregory's book explains a lot more of the engineering (autocorrelations, step-size tuning, etc.). Even better, it discusses how to perform model selection using a clever annealing technique, though model selection may not be of interest to you.
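For a sense of what the engineering is about, here's a minimal random-walk Metropolis sampler (a sketch, not Gregory's implementation; the target density and step size are arbitrary choices for illustration):

```python
import math
import random

def metropolis(log_p, x0, n_steps, step_size):
    """Minimal random-walk Metropolis sampler for a 1-D target.

    log_p: log of the (unnormalized) target density
    step_size: width of the Gaussian proposal; this is the tuning
    knob that controls acceptance rate and autocorrelation.
    """
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0, step_size)
        # Accept with probability min(1, p(proposal) / p(x)),
        # done in log space for numerical stability.
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

# Sample from a standard normal: log p(x) = -x^2/2, up to a constant.
random.seed(1)
chain = metropolis(lambda x: -0.5 * x * x, x0=0.0,
                   n_steps=50_000, step_size=1.0)
burned = chain[10_000:]  # discard burn-in
print(sum(burned) / len(burned))  # chain mean, should be near 0
```

Tuning `step_size` is the trade-off the books discuss: too small and consecutive samples are highly autocorrelated; too large and most proposals are rejected.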

ps - MacKay's book is my nightly reading, so I'm not dissing MacKay :)

