
I work at a "large company", and I don't agree with most of this. Your write-up is highly subjective, and fairly pessimistic.

Some people hit the ground running, and teams/organizations/companies can thrive if they find ways to embrace that. Sometimes people get hired at the wrong level, and everyone benefits if some sort of work demonstrates that quickly. I have seen promotions happen based on prudent choices around one's individual strengths, simply by choosing to do a bit of the right work and getting eyes on your capabilities. There is no "one size fits all" prescription for what someone should work on.

Having a "shadow lead" can be one of the best situations for your growth, too. Not only do you get the experience leading a thing (for most of what that means, anyway), you may end up with a very strong ally when you knock it out of the park. I had a version of this experience, and I've watched others have it as well.

I'm guessing most of the negatives here are based on your personal experience, and for that I'm sorry. Hopefully you can encourage positive changes in your company's engineering culture.


> Having a "shadow lead" can be one of the best situations for your growth,

In a successful company with high retention, the senior engineers who would fill these shadow-lead roles often have their egos dialed down and less to prove, and they'll encourage you to succeed. In companies with a lot of churn, or that are always on the brink of disaster, you don't get that support.

Overall, though, I do agree with the article: orgs run on reputation, reputation is slow to change, and snap judgements sadly matter.


> you may end up with a very strong ally when you knock it out of the park

I imagine this will be determined by the culture and the system of rewards which are out of your control. A shadow lead could be an ally, or they could pin any deficiencies on you. The author’s comment is sound in my opinion: depending on altruistic behavior is a bad position to be in.


The author's resume says he was a tech lead at Zendesk after 2.5 years of software development, with no formal software development qualifications. I'd take his perspective with a grain of salt.


LinkedIn - Infrastructure Software Development | Senior/Staff Software Engineer - Backend, Python | ONSITE Sunnyvale, CA | Full-time | https://www.linkedin.com/jobs/view/712546597

LinkedIn's Infrastructure Software Development team builds and manages the company's production infrastructure source of truth, data center inventory and configuration management systems, monitoring, and the workflow automation systems that underpin all of LinkedIn's production operations. We're looking for strong Senior or Staff level backend engineers to help us build highly reliable systems. We're a small team of strong developers, so your contributions would have a large impact on LinkedIn's operations. Experience building data center monitoring, reporting, automation, or capacity management tooling is a big plus, but not a requirement.

Our software craftsmanship standards and culture are amazing, and our benefits and work-life balance are top notch too, if you're into those sorts of things. I'm an engineer on the team, happy to answer any questions.

Check out the job description and apply here: https://www.linkedin.com/jobs/view/712546597


Flink is listed under `Unified Processing` because it supports both batch and streaming (Kappa Architecture).


Not to say Coursera doesn't have problems, but the start and end dates don't mean much as of earlier this year, when they rolled out their new platform. There's no penalty for late assignments[0], and you can switch to a later session of the course whenever you want[1].

[0]: https://learner.coursera.help/hc/en-us/articles/208279866

[1]: https://learner.coursera.help/hc/en-us/articles/208279776


I did all my college geometry homework with it (and LaTeX) back in 2011/2012, just for kicks. Is that interesting? Probably not too interesting :-)


Part answer to your question, part shameless plug.

Node cloning can be done fairly quickly if the bulk of the node data is shared. The `_tree` immutable tree library (https://github.com/drfloob/_tree#quality-metrics) has a 1,024-child-node benchmark. On my slow laptop, it builds 1,024-node trees at ~10 per second, and the performance appears logarithmic in node count (http://jsfiddle.net/9x7aJ/2734/). I found it fast enough for my client-side use cases, but ClojureScript's optimized branching scheme could probably be implemented to boost performance. And if ~10 ops/second isn't fast enough anyway, `_tree.Tree.batch()` lets you temporarily bypass immutability when needed, making modifications much quicker.
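
To make the batch-mode idea concrete, here's a rough sketch of the pattern. The method names (`beginBatch`, `node`, `moveTo`, `commit`) are hypothetical, not _tree's actual API; the point is that edits inside a batch mutate one working copy, and immutability is restored with a single finalization step instead of a clone per operation.

    // Conceptual sketch only -- beginBatch/node/moveTo/commit are hypothetical
    // names, not _tree's real API.
    function reparentMany(tree, moves) {
        var working = tree.beginBatch();            // unfrozen working copy
        moves.forEach(function (move) {
            // cheap in-place edits while the batch is open
            working.node(move.id).moveTo(move.newParentId);
        });
        return working.commit();                    // one freeze -> new immutable tree
    }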


Well, we can check a few of those guesses pretty easily. Here is the stock React TodoMVC demo augmented with Swannodette's benchmarks:

http://drfloob.github.io/todomvc/architecture-examples/react...

Benchmark to your heart's content! On my system, Om is about 4x faster on benchmark 1, and 1000x faster on benchmark 2. I don't think requestAnimationFrame has a lot to do with it, and immutability only slightly more; I think the real performance gain is in having an application data policy tailored to make the most of React's behavior.

To plug my own work a bit, I built a TodoMVC example with React and my own immutable data structure library in pure JS. It performs a lot like Om on both benchmarks, and actually seems a bit snappier in places, like toggling and clearing all completed todos:

http://drfloob.github.io/todomvc/labs/dependency-examples/re...


Thanks for posting this; I wished when writing the article that I had plain React benchmarks to run.

However, when I profiled your benchmark it seems like it is invoking React's event loop more than I would expect (and more than the React/Om benchmark does) -- do you have any idea why this would be? It seems like, for this to be apples to apples, it shouldn't be invoking React's update logic (i.e. re-rendering everything) until the end of the benchmark. Perhaps this difference is because of requestAnimationFrame?

I don't understand this comment:

> I don't think requestAnimationFrame has a lot to do with it, and immutability only slightly more; I think the real performance gain is in having an application data policy tailored to make the most of React's behavior.

As I mentioned in my article, "Benchmark 2" is a no-op on the DOM. So literally all React should be doing is calling render() before the benchmark (which returns basically nothing), letting the entire benchmark run, then calling render() again (in which it again returns basically nothing).

In other words, I don't see what the "application data policy" has to do with making React efficient in this case; all we're asking React to do here is calculate a no-op diff and then do nothing.


> it seems like it is invoking React's event loop more than I would expect ... Perhaps this difference is because of requestAnimationFrame?

I think so, yes. The performance difference is already negligible with Om, so I didn't see a real need to optimize it any further. Pete Hunt's react-raf-batching (mentioned already) could probably be dropped in to get that last bit of optimization, but I haven't tried.

As for what I mean by "application data policy", I think that if you work with your application's data in such a way that it batches modifications, renders entirely from the top down, and uses fast `shouldComponentUpdate` implementations, you'd achieve most of the improvement you see in the Om vs React+Backbone TodoMVC comparison. Lots of things could achieve that. Immutability isn't a hard requirement to get those features done. And if my React+_tree example is any indication, requestAnimationFrame isn't buying you that much performance either.
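
For instance (a sketch, not code from either benchmark), with immutable todos a reference check is a complete `shouldComponentUpdate`, and that's where most of the win comes from:

    // Sketch only: with immutable data, a todo can only "change" by being
    // replaced, so reference equality is a complete (and O(1)) dirty check.
    var TodoItem = React.createClass({
        shouldComponentUpdate: function (nextProps) {
            return nextProps.todo !== this.props.todo;
        },
        render: function () {
            return React.DOM.li(null, this.props.todo.title);
        }
    });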

I haven't really analyzed what Om's Benchmark 2 was doing under the hood, so thank you for that. I assumed it was doing something, but with the ~4ms benchmark, I just assumed that particular something was wizardry.


> I think so, yes. The performance difference is already negligible with Om, so I didn't see a real need to optimize it any further.

Sorry, I was unclear: my comments were about the (slow) first benchmark you posted that is using React but not using your library. I think it would be orders of magnitude faster with requestAnimationFrame.
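
Roughly, the idea (a sketch, not React's internals) is that a burst of synchronous updates gets coalesced into a single render per animation frame:

    // Sketch of the batching idea, not React's internals: any number of
    // synchronous state changes collapse into one render on the next frame.
    var pending = false;
    var state = { todos: [] };

    function update(mutate) {
        mutate(state);
        if (!pending) {
            pending = true;
            requestAnimationFrame(function () {
                pending = false;
                render(state);   // one render regardless of how many updates landed
            });
        }
    }

    function render(s) {
        // stand-in for the real rendering work
        document.title = s.todos.length + ' todos';
    }

    // A burst of insertions still triggers only one render.
    for (var i = 0; i < 200; i++) {
        update(function (s) { s.todos.push({ title: 'todo ' + i }); });
    }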


EDIT: not true. see below.

----------------------------

Good call! Benchmark 1 is about 33x faster in my browser (with the setTimeout fallback being used, I think).

http://drfloob.github.io/todomvc/labs/architecture-examples/...

This is the same React TodoMVC example with a different `react-with-addons.js`, built using React v0.9.0 and https://github.com/petehunt/react-raf-batching.


The timings you are presenting in the UI are not accurate. But you are correct that this approach results in timings that are almost identical to Om. Use the Chrome Dev Tools profiler flamegraph if you want to confirm what I'm saying.


Ah, right. Benchmark 1 is actually ~350ms on my machine. Thanks for the lesson in profiling asynchronous code. I revisited the React+_tree benchmark claims, too, and I think they are still spot on. If you're interested at all, I'd appreciate you taking the time to check my work there.
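
For anyone else following along, the trap (an illustrative sketch; `runBenchmark` is a stand-in for the benchmark button) is that a synchronous timer stops before the requestAnimationFrame-batched render has run:

    // Illustrative sketch; runBenchmark is a hypothetical stand-in.
    var t0 = performance.now();
    runBenchmark();                                      // schedules rendering via rAF
    console.log('misleading:', performance.now() - t0);  // deferred render excluded

    // Waiting a couple of frames (or reading the DevTools flame chart)
    // captures the deferred work too.
    requestAnimationFrame(function () {
        requestAnimationFrame(function () {
            console.log('with deferred work:', performance.now() - t0);
        });
    });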


I took a look but it's hard to judge since you are using React 0.8.0 and Om is now on React 0.9.0.


Interesting stuff. Have you compared _tree with Mori?


Thanks. I knew of Mori, but hadn't looked at it yet. Now that I've glanced at it, you could definitely implement something like _tree with Mori, but it'd be missing a few key things.

What jumps out most is that I'd really miss batch mode, which lets you escape from immutability and get a big performance boost for complex atomic operations. I think Mori must use something like this internally, but the docs don't indicate it being exposed.

It may be subtle, but I also really prefer _tree's syntax, where objects are fitted with their own methods (Mori does something like `mori.get(m0, 'foo')` instead of the more succinct `m0.get('foo')`). This is also the backbone of _tree's data modeling layer, which lets you work in terms of your domain, rather than a tree.

Performance comparisons are on my list of things to do.


Mori's data structures are the ones from ClojureScript, so they're designed in a functional, rather than OO, style. I agree it's more foreign in a JS context. For some operations, there are advantages to being able to write functions that just work with generic data rather than objects. I could imagine having a layer on top of Mori that provides a more familiar face while still giving access to the "just data" stuff underneath for performing functional operations.

Pete Hunt did just that:

https://github.com/petehunt/morimodel/

Mori is sophisticated under the hood. It doesn't do massive amounts of copying or anything like that, so I don't think you need a "batch mode" to make it perform. Perhaps I'm missing something though.

Edit: I took a quick look at _tree's code. It's quite different from how I understand Mori to work. Mori exposes Clojure's data structures which implement their own immutable maps, lists and vectors in a way that is very efficient for copying (both in time and space).


Thanks dangoor. It's a good day when someone shows me two recent projects that are very similar to my own. I started _tree a few months back because I couldn't find a project like it. Turns out I'm in good company.

_tree leans heavily on functional programming techniques, so it's not so different at all. The furthest it gets from functional is the modeling layer implementation, which is prototypal, and still just a thin layer over the core.

I haven't studied any ClojureScript, but conceptually, compound operations on immutable structures, such as re-parenting a subtree or sorting a vector/list/set, would be implemented inefficiently without mutable intermediate objects. For example, imagine writing quicksort where exchanges cost O(n) instead of O(1). Performance would totally tank. It'd be silly.

That's why I expect ClojureScript has mutable intermediates. I'd love for someone to prove this guess wrong; it would blow my mind. Anyway, that's what _tree's batch mode is, in essence: exposed access to the mutable, pre-finalized versions of _tree primitives.
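
As a toy illustration of the idea (not _tree's or ClojureScript's actual implementation): thaw once, mutate freely, freeze once.

    // Toy illustration of a mutable intermediate: one O(n) thaw, in-place O(1)
    // swaps during the sort, one freeze at the end -- instead of producing a
    // new immutable array on every exchange.
    function sortedCopy(frozen, compare) {
        var working = frozen.slice();      // thaw into a mutable copy
        working.sort(compare);             // mutate freely
        return Object.freeze(working);     // finalize as a new immutable value
    }

    var xs = Object.freeze([3, 1, 2]);
    var ys = sortedCopy(xs, function (a, b) { return a - b; });  // [1, 2, 3]; xs untouched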


It's a bit more sophisticated than that.

https://news.ycombinator.com/item?id=7292588


Right, it sounds like they use tries with 5-bit keys. It's a very neat data structure, for sure, but _tree is just at a lower level. It doesn't impose structure. Tries can be implemented in _tree.
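
A quick sketch of what those 5-bit keys look like in practice (illustrative only): each level of the trie consumes 5 bits of the index, giving 32-way branching.

    // Illustrative only: each trie level consumes 5 bits of the index,
    // selecting one of 32 children.
    function pathTo(index, depth) {
        var path = [];
        for (var level = depth; level >= 0; level--) {
            path.push((index >>> (5 * level)) & 0x1f);   // 0x1f == 31
        }
        return path;
    }

    pathTo(1027, 2);   // [1, 0, 3]: child 1 at the root, then child 0, then slot 3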


All respect issues aside, generic names like this are a pain in the ass. "Go" (Google) and "React" (Facebook) are similar. For example, Twitter folk have taken to using #ReactJS to mean Facebook's React UI framework, but react.js[1] has been around for a while already, and it does some similar stuff (if you squint a bit).

You can't mention or search for them without barfing out the company name, and sometimes a specific property alongside to be unambiguous about your meaning. It makes finding discussions and blog posts harder because people don't have a common language for these poorly-named things.

It just seems like these companies are shooting themselves in the foot. Is natural word-of-mouth growth no longer a concern?

[1]: http://reactjs.com/


I get your gripe, but there's a good use case for this stuff in building dynamic web applications. Just because a technology is misused doesn't mean the technology is to blame.


Browsers aren't general-purpose applications. They can't fill every application role on the computer. The more "web dev" tries to push them in that direction, the slower, harder to develop and maintain, and generally poorer they will become, and the "dynamic web applications" that run on them will suffer for it.


What, for instance? Your comments seem heavy on condemnation but light on substance. Sure, there are certain domain-specific tools such as Photoshop or 3ds Max or Ableton that wouldn't work in the browser, but for the 99% of day-to-day business applications, the user generally does not need to do anything the browser cannot.

The fact is, the web world is evolving at an extremely fast rate, and leaving the native world in the dust.


I don't argue that the browser cannot be made to do something. I merely point out that making it do "everything" will only end in disaster for browser creators and maintainers, and for app developers alike.

The "web world" hasn't even caught up to the native world yet. It may be evolving very rapidly, faster than native, even, but it's still behind.


What do you propose, exactly?


I've just addressed that elsewhere in this thread:

https://news.ycombinator.com/item?id=7120285


It's always easier to criticize than rectify.


Thank you, Xmonad, for not supporting Chrome fullscreen in your default configuration.


That...is a feature?

