
> A better question would be, why do you think you did not succeed.

Is that a different question than "why you failed?" from the GP?


Yes. "Why you failed?" implies they did something _dumb_, as opposed to simply not doing something genius.


fail "literally" means to be unsuccessful

it implies doing something dumb as much as 'not succeeding'

... which is to say, not at all


Nice & succinct problem definition for why ACLs are so important for everyone:

> Let me show you an example. You have a Redis instance and you plan to use the instance to do a new thing: delayed jobs processing. You get a library from the internet, and it looks to work well. Now why on the earth such library, that you don’t know line by line, should be able to call “FLUSHALL” and flush away your database instantly? Maybe the library test will have such command inside and you realize it when it’s too late. Or maybe you just hired a junior developer that is keeping calling “KEYS *” on the Redis instance, while your company Redis policy is “No KEYS command”.

Without ACLs, we have to rely on command renaming or completely isolating databases to guard against these errors. ACLs sound complicated, but they're actually a solid user-experience improvement.
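For a concrete sense of what this buys you, here's roughly what the jobs-library scenario from the quote could look like (a sketch, assuming the ACL syntax that eventually shipped in Redis 6; the user name, password, and key pattern here are made up):

  ACL SETUSER jobs-worker on >s3cret ~jobs:* +@all -flushall -keys

The library authenticates as jobs-worker and can run ordinary commands against jobs:* keys, but FLUSHALL and KEYS are refused no matter what its test suite - or your junior developer - tries.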


The tl;dr: this is a nice recounting of a frustrating personal experience interviewing with Uber, pointing out the many warning signs of a toxic work culture that surfaced during the interview.

The post was prompted by the phenomenal writeup by Susan Fowler on her year working with Uber. If you can read only one, certainly read hers. If you can read only two, consider reading Susan's twice as it's exceptionally good writing. This is a nice (not exceptionally original) personal account of a bad interview experience.


In case someone wants to save 1 minute of their life :) https://www.susanjfowler.com/blog/2017/2/19/reflecting-on-on...


I'm not familiar with the other write-up you've mentioned, but I enjoyed this blog post. It's nice to know the warning signs at companies you might work for. We have to stick together in IT and not let companies take advantage of us.

If doing 80 hours of work in 40, and all the other warning signs, is no big deal to you, then by all means gun for an Uber position in that area. I'd rather hear about it before I waste my time, though, so well done by the OP.



lol!!


> even for a small company, $8k/year is admittedly a small fraction of one engineer

This is certainly true, but it overlooks the fact that many major products start out as experimental projects, and $8k/year is a significant investment for an experiment.

If there's a reasonable upgrade path from traditional databases like MySQL and Postgres, this shouldn't be a big deal, but if the answer is "rewrite your app" it will probably be a friction point for adoption.


Capital expenditures might shed some light on this. I don't think there's enough public data to be clear, but in 2015 Amazon ($4.8B), Google ($9.9B) and Microsoft ($5.9B) were at least on the same order of magnitude in terms of CapEx, whereas other major "datacenter" companies like Rackspace ($475M) were much smaller.

I don't think you can draw any definitive conclusions from this, but calling it a class of size 1 or 2 probably overstates Google's (and perhaps Amazon's) advantage over Microsoft, at least.


> "cost disease" has become politicized

Citation needed. Where is this term common and where is it broadly discussed?


It's not; I'd never heard of the term before this article. I'm referring to the phenomenon itself. For example, Elizabeth Warren was recently quoted as measuring success in dollars spent: http://reason.com/blog/2017/01/19/on-entitlements-elizabeth-...


"Entitlement programs" is politicized as a term, certainly. "Cost disease", though... possibly in search for a new keyword?

> I'm referring to the phenomenon itself.

From your example, Elizabeth Warren and Tom Price are looking at the same appearances (phenomena) and deriving different "noumena".


Lots of downvotes here – any explanation? The GP uses the phrase "the phenomenon itself" referring to the Kantian term "Ding an sich", which (as applied to phenomena) is just a logical error...


I have nothing against being the guy who spends time pointing out small or even irrelevant errors, but there is no error to correct here. I think this is just a misinterpretation on your part, stemming either from not reading the article to understand what the author described as 'cost disease,' or from not understanding my use of quotation marks around 'cost disease' in my original post.


I don't understand the hate for "entitlement programs"; it's a precise, common, useful category with no better suggestion offered.


citation: https://web.archive.org/web/20110724151936/http://publishing...

it's been a heavily discussed subject amongst economists, though it has not fully passed into mainstream conversation. I'm not an economist, but I first encountered the term about 10 years ago while reading economists' writing.


Tangential, but why do people write things like "even after adjusting for inflation" when discussing long timespans (40 years in this case)?

Is there any reason not to adjust for inflation? The "even..." in the sentence suggests the author is preempting potential critics, when it would seem only rudimentary to acknowledge inflation when discussing a 40-year timespan. A typical new car cost ~$4.5k in the US in 1977...
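To make the car example concrete (a back-of-the-envelope calculation, assuming US CPI-U annual averages of roughly 60.6 for 1977 and 245 for 2017):

  $4,500 * (245 / 60.6) ≈ $18,200 in 2017 dollars

So quoting the 1977 sticker price without adjustment understates it by roughly a factor of four, which is why acknowledging inflation over such a span seems so basic.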


Because news articles often don't adjust for it when making their arguments, and savvy readers are used to thinking "yeah, this argument only sounds right because they don't adjust for inflation".


If adding the word "even" clarifies things for even just a tiny percentage of readers (clearly Tenoke found it helpful, and so did I), then it is worth doing. Sure, the article is quite long (Scott Alexander's writing often is), but the incremental cost of this one word (albeit repeated a few times in the article) is negligible.


I really hope that this is a bot making that comment, and not someone asking me for a citation on why I think the author has used a certain wording.


Not a bot, just a regular reader of quality newspapers who cannot remember a single case of an article making that mistake. It's slightly disheartening to see how the collective mind has turned on journalists to a point where any and all accusations can be levelled against them and will be assumed to be true.


Tenoke mentioned:

> news articles

and you mentioned:

> quality newspapers

now, these things are not necessarily congruent. You might think that you read a quality newspaper, but how many people get their news from one? It certainly seems that more people get it from trashy tabloids in the UK (see https://en.wikipedia.org/wiki/List_of_newspapers_in_the_Unit... for reference), and there will be vastly more 'news articles' than exist in quality newspapers.

and anyway, is 'slatestarcodex.com' one of your quality newspapers? Certainly I've never heard of it, so I was happy to see that phrase included though I was not going to check their references anyway.


This isn't really the point here, but for what it's worth, slatestarcodex is actually really rigorous and of really high quality. The author always makes sure to triple-check everything, consider all sides, be explicit about things he might be missing, etc. And the comment section there is pretty comparable to the comment section on HN in terms of quality.


It's a common idiom when writing for a general audience. The distinction between real and nominal is not at all clear to the public.


I agree but I think it is just a way to explicitly state that this adjustment was performed and not forgotten.

Better would be:

"Per student spending has increased about 2.5x in the past forty years (after adjusting for inflation)."


> Is there any reason to not adjust for inflation?

Lies, damned lies and statistics. It is easier to mislead if you don't adjust.


Author here. I completely agree. Embedded gists didn't have this annoyance when I wrote the article. I figured GitHub could be relied on to not change things up too much, but that's what I get for depending on an external service...


It's too bad (and uncharacteristic of mitchellh) that this post is so light on specifics. Were the "previously unknown challenges" simply that not enough people adopted Otto? Or were there actual technical hurdles?

The premise of Otto isn't clearly flawed, so it would be interesting to see specific challenges - even if it's just "the problem space is way too big and not enough people wanted it"


I'm happy to answer myself. The previously unknown challenges were just the various facets of building and deploying an application. It's not so much that they were unknown problems as that the abstraction we designed made them challenging to solve.

Ultimately, Otto was trying to be a masterless PaaS (Heroku, etc.). When you frame it that way and think about all the things you'll have to solve, it becomes challenging. On top of that, we always wanted Otto itself to be fairly "thin" and delegate its difficult duties to the rest of the stack. This required us either to build a bunch of features we weren't ready to build into our other products OR risk bloating Otto itself.

Overall, it was too early for us to do.


Thanks for your comment/insight. I understand what a PaaS is but what does the 'masterless' qualifier mean?


Well, in Heroku proper, you feed its git repos an app, it figures out what type of app it is, and applies the right build pack and hosting environment for it. Keeping build packs up to date, keeping all the scripts running, and making sure an app has associated dependencies, etc--I imagine that's the difference between an independent setup you can self-host quickly and easily and one that's very dependent on an ecosystem of Heroku maintainers, tooling and existing server infrastructure...

It's a very hard problem to solve, and one which will likely only catch on as devops tooling improves and becomes expected for apps, and as app runtimes standardize. Alternatively, you could look at the myriad ways operating systems package applications, and the ways applications allow themselves to be packaged, let alone store data in production, and ... basically give up on this ever happening in an easy, hands-free automated way.


> Keeping build packs up to date, keeping all the scripts running, and making sure an app has associated dependencies, etc--I imagine that's the difference between an independent setup you can self-host quickly and easily and one that's very dependent on an ecosystem of Heroku maintainers, tooling and existing server infrastructure...

I work for Pivotal on the Cloud Foundry buildpacks team. 4 of our buildpacks (Ruby, Python, Go, NodeJS) are downstream forks of Heroku's.

We merge from upstream approximately weekly, but the pace has definitely dropped.

We build all the runtime binaries we ship with our buildpacks. We also build the rootfs it all runs on. Some of these pipelines are now fully automated. For example, when a NodeJS tag lands, our pipeline will build the binary, add it to a buildpack and put it through our battery of test suites. Our product manager can make a release with a few keystrokes and a button press.

The difficulty of engineering really comes down to the nature of the ecosystem you're turning into a buildpack. We did an article on writing buildpacks[0], taking Rust as our example. It was a doddle, because of Cargo. Meanwhile our PHP buildpack performs incredible gymnastics to make a 12-factor cloud platform look like a shared host circa 1999.

[0] http://engineering.pivotal.io/post/creating-a-custom-buildpa...
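For anyone who hasn't looked inside one, the classic buildpack contract is just a couple of executables, which is part of why Cargo made the Rust case a doddle. A minimal sketch (assuming the Heroku-style bin/detect and bin/compile contract; the file contents here are illustrative, not our actual buildpack):

  #!/usr/bin/env bash
  # bin/detect: given the build dir as $1, print a name and exit 0
  # if this buildpack knows how to build the app
  [ -f "$1/Cargo.toml" ] && echo "Rust" && exit 0
  exit 1

  #!/usr/bin/env bash
  # bin/compile: $1 is the build dir, $2 is a cache dir that
  # persists across builds (useful for caching crates/toolchains)
  set -euo pipefail
  cd "$1"
  export CARGO_HOME="$2/cargo"
  cargo build --release

Ecosystems with a single standard build tool map onto this cleanly; the PHP gymnastics come from having to reconstruct all of that per app.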


The selling point was that you just do the bare minimum and Otto figures out what other specific things you need to get your project up and running. That's not scope, that's magic.


Lots of useful, successful things appear to be magic (especially in their selling points). Early Heroku is a great example in this space.

I didn't see anything in the initial premise of Otto that was technically untenable. We could speculate about the "challenges" - the scope was too wide/unbounded, it was open source, it was a distraction from other company goals, it didn't gain enough early traction - but that's simply speculation without details from the creators.


> Languages are faster, development times are shorter, and chips are WAY faster.

This is due to Moore's law, not the software design choices that the article bemoans. Those $30k/month Sun servers were many times faster and cheaper than the earlier machines they replaced as well.


While Moore's law helps, languages are more expressive, safer, and more performant, and have more batteries included, yielding a whole bunch of improvements.

We've had software and hardware gains, massive ones, and they compound.


> While Moore's law helps, languages are more expressive, safer, and more performant, and have more batteries included, yielding a whole bunch of improvements.

I have to disagree. Compilers may have gotten a bit better at producing fast binaries, and new languages are increasing in expressiveness and safety, sure - but very rarely in efficiency. Go and Rust are not faster than C or C++, and likely never will be (for one thing, C has decades of lead time). Go and Rust may be faster than C was 20 years ago, but that doesn't matter.


If Rust is significantly slower than equivalent C or C++, it's a bug. Please file it.

(And yes, sometimes, it's faster. Today. Not always! Usually they're the same speed.)


My point is more like this chart [0]: C has so much lead time that Rust will probably never be able to catch up. Be close? Sure. But C has decades of lead time.

[0] http://www.viva64.com/media/images/content/b/0324_Criticizin...


> My point is more like this chart

As steveklabnik noted, that is old data (which you would normally be able to see from the date-stamp in the bottom-right corner, but that's been hidden).

This web page is updated several times a month, and presents the charts in context --

https://benchmarksgame.alioth.debian.org/u64q/which-programs...

(You might even think that you can tell which language implementations don't have programs written to use multi-core and which do.)


That chart is extremely old. We are sometimes faster than C in the Benchmarks Game, with the exception of SIMD stuff, due to it not being stable yet. (And it can fluctuate depending on the specific compiler version, of course.)

For example, here's a screenshot I took a few months ago: http://imgur.com/a/Of6XF

or today: http://imgur.com/a/U4Xsi

Here's the link for the actual programs: http://benchmarksgame.alioth.debian.org/u64q/rust.html

Today, we're faster than C in one program, very close in most, and behind where SIMD matters.

  > But C has decades of lead time.
Remember, Rust uses LLVM as a backend, which it shares with Clang. So all the work that's gone into codegen for making C programs fast also applies to Rust - and all the work that Apple and whoever else puts into improving it further, Rust gets for free.


I mean, true - I'm playing devil's advocate here. I respect the Rust community (heck, of all the nu-C languages I respect it the most; I even did a poster on 0.1 of it for my programming languages class), and I will be quite impressed if they can pull off what has so far been an insurmountable task (and they are the most likely to be capable of it, in my opinion): beating an old-guard language like Fortran or C in general-purpose performance - languages that have every advantage but design foresight. If they do it, it will be a great historical case study in how to build a new programming language.

As an aside: as someone who has used LLVM to build a compiler, it doesn't quite work that way. Yes, Rust has access to those gains, but it may not be able to use them effectively (due to differing assumptions and strategies).


Totally hear what you're saying on all counts :)


> languages are more expressive, safer

Not generally, no. Maybe the popular ones become so, but that's mostly by rediscovering the languages of old, which had better safety and more expressive power.


Moore's law is just an observation, and the only way chips can actually be made is through sustained, coordinated, and meticulous teamwork.

