Seawalls are a band-aid fix, yes, but they're probably a reality we'll have to deal with at this point. We may have passed the point where this rise in sea level became inevitable -- barring, as the article suggests, any unforeseen stabilizing or countervailing forces to balance out the loss of the ice sheets.
Basically: seawalls aren't the solution, but they're probably part of how we cope with whatever degree of sea-level rise is already bound to happen at this point.
The point of React Native isn't to create a delightful product experience.
The point is to create the equivalent experience of a native app with a delightful development and organizational experience.
- Instead of having separate teams develop your iOS, Android, and now OS X apps, one product team can make all three and share the majority of the code.
- Instead of a slow, frustrating compile cycle, you get live reloading.
I could write a Lovecraftian short story about my experience with React Native. Trying to turn a web app into a delightful React Native app nearly sank me into depression & burnout.
[Edited to clarify that of course I can't blame my mental health problems on a UI framework... but I keep the formulation for the sake of sympathy with anyone else who may feel less than "delightful."]
It was pretty cool at first, but after a couple of months of trying to get the thing working decently without crashing, lagging, or just sucking, I gave up on the whole project.
I don't doubt that many people use React Native with delight. But there are aspects of it that are likely to cause frustration. It's a mashup of two incommensurable platforms, and in the gap between them there is horror.
You understand that in realistic professional scenarios it's usually the boss (especially a technical boss) who will dictate that it's OK to go with something that's easier to develop with.
Poor user experience doesn't mean much if you are not on the market soon enough.
Which should be obvious on HN, and it's a point PG himself has reiterated several times...
There are a lot of devs on here with little to no real-world experience, or at least not enough to make sound judgements, as far as I can tell anyway. I'm not sure if it's better to enlighten them or to ignore them, since they're the sort of people who are willing to post their baseless opinions for the world to see.
TBH this kind of solution is more often pushed by bosses than devs.
Any management person will throw in a "you've done X on this platform, why don't you try to reuse most of your stuff on this other platform?", just in case it's actually doable.
In my experience as a mobile app freelancer, many prospective clients (i.e., bosses) insist on portable/JavaScript frameworks for new projects. I guess it is because native iOS/Android programmers are expensive, and a single JavaScript programmer rotating between platforms with a shared codebase is both cheaper and easier to manage than two or more native app programmers (at least in theory).
I feel bad for you if you've met such bosses. As one, and as someone who has worked under several, I've never once experienced a boss knowing or saying such things. I've only ever seen it come from developers.
Edit: Just to clarify, there may be instances where a mgmt type has asked if X on platform Y can be reused on platform Z easily. I've never seen it go beyond the ask.
React Native doesn't deliver a poor user experience though. It's not the right choice for every project, but in most cases users won't be able to discern between RN and native.
On iOS maybe, but not on Android. You have to reimplement a lot of the things you'd get for free when developing native apps. Most animations and the like you have to add yourself. The problem is that the React Native team has some decent iOS components, but the Android side is sorely lacking.
Besides Facebook's Ad Manager (which barely cuts it), I haven't seen a React Native app on Android that didn't feel horrible (and nothing like an Android app).
Does the Facebook app use React "Native"? Because that app is a horrible power hog and hugely glitchy and buggy. That is a poor user experience compared to what a well-written native iOS app could deliver. Users might not know it's a poor experience because they can't compare it to how fast and responsive the app would be if it were actually written in Swift.
There are people in this world that think Olive Garden is great Italian food. There are people in this world that think React "Native" is a good user experience.
Why does Facebook absolutely insist on avoiding writing actual native apps? As in Swift for example. It seems like they are almost allergic to actual native and instead focus on this inferior cross platform stuff.
In 'most cases' users won't know the difference? Sure they will; they'll notice that the app they're using sucks more than other apps on their phone. They'll tolerate the glitchiness and the occasional blank data screen during a load because they care about the content more than the terrible experience. All because a developer or the CTO somehow thinks "it works well enough" is the same as "let's really give our users the best possible experience." I am amazed that Facebook has thousands of employees but can't be bothered to write a single line of code in Swift. They've been insisting on this stubborn course of action since the beginning of their mobile experience. Does Zuckerberg just hate Obj-C or Swift? Why are they making the mobile experience into the lowest common denominator? Why does their Facebook app feel like some cheap PhoneGap experiment? My Facebook app on iOS performs exactly the same as it does on a years-old Android phone. And that's ridiculous. I have superior hardware and yet I get to run an inferior app because JavaScript? It's like socialism for apps: make everyone equally miserable.
The simple fact is this: I hate cross-platform systems because they end up averaging the quality of the mobile experience, with capabilities being reduced to support the lowest common denominator. If I want my apps to run as terribly as many do on Android, I will use an Android. It's lazy development. It's a means for the JavaScript crowd to avoid learning Swift (or Java) so they can provide middling-to-bad mobile apps rather than actually building the absolute highest-quality product they could build.
Even Facebook does it! I feel like the React Native ecosystem is doing more to reduce the quality of the mobile app experience than anything else. Apps are being turned into these average pieces of crap with only the UI being slightly different. Does anyone have any performance benchmarks on React "Native" compared to Swift? Any data at all? Or are we just so excited to write apps in React that we fail to care? If we care about 'cross-platform' development -- we can already do that; it's called 'the web.' Let's stop foisting inferior mobile apps on people just because we can.
Interesting - is there an intention to rewrite the whole thing gradually and move away from native? Any reason it's not listed on the React Native site?
I left Facebook almost a year ago so I can't speak to how things are today.
But back then internal adoption of React Native was definitely accelerating rapidly and it was solving very real organizational and developer experience problems. The original motivation of the project internally was to solve developer experience pains, just like React but for mobile.
- With such a large app (Facebook), the compile cycle was becoming quite slow. RN has no compile cycle.
- You've got 3 teams (web, iOS, Android) per product (eg. Events, Groups, etc) and they don't really communicate or share any product code despite building effectively the same thing.
Take the Mobile Ads Manager app: now one team (of web engineers) can ship an app on iOS and Android, while sharing 83% of the code between the two apps, in half the time the project had budgeted for the pure-native iOS app alone. Not to mention the team loved their jobs because they didn't have to wait 5 minutes for the damn thing to compile every time they made a change.
JavaScript is faster in most cases than Objective-C.
(I'm not going to dispute that cross-platform toolkits can have less native fidelity than coding directly against the native toolkit. I just don't like the Objective-C vs. JavaScript performance myth.)
But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?
That's an interesting benchmark, and I'd need to dive into the details to see what is going on. Perhaps there is some sort of JIT slow path. I would not expect method-heavy Objective-C to beat JavaScript. In general:
> But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?
Objective-C is not compiled in terms of method dispatch, nor is it manually memory-managed. Instead, all method dispatch happens through what is essentially an interned-string lookup at runtime, backed by a cache. Objective-C also has a slow garbage collector--atomic reference counting for all objects. (Hans Boehm has some well-known numbers showing how slow this is compared to any tracing GC, much less a good generational tracing GC like all non-Safari browsers have.)
The method lookup issue has massive consequences for optimization. Because JavaScript has a JIT, polymorphic inline caching is feasible, whereas in Objective-C it is not. It's been well known in Smalltalk research since the '80s that inline caching is essentially the only way to make dynamic method lookup acceptably fast. Moreover, JavaScript has the advantage of speculative optimization: when a particular method target has been observed, the JIT can perform speculative inlining and recompile the function. Inlining is key to all sorts of optimizations, because it lets intraprocedural optimizations see across what would otherwise be call boundaries. It can easily make a 2x-10x difference or more in performance. This route is completely closed off to Objective-C (unless the programmer manually does IMP caching or whatnot), because the compiler cannot see through method lookups.
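To make that last escape hatch concrete, here is a rough sketch of manual IMP caching, written as C against the Objective-C runtime (the function name and the single-class assumption are mine, purely for illustration): one dynamic lookup gets hoisted out of the loop, which is roughly the effect a JIT's inline cache gives you for free.

    #include <objc/runtime.h>

    /* Sketch of manual IMP caching: resolve the method implementation once,
       then call through the function pointer in the hot loop instead of
       going through the selector cache on every send. Assumes every element
       of `objects` is an instance of the same class, so one lookup is valid
       for the whole loop. */
    typedef double (*DoubleValueIMP)(id, SEL);

    double sum_double_values(id *objects, int count) {
        if (count == 0) return 0.0;
        SEL sel = sel_registerName("doubleValue");     /* e.g. -[NSNumber doubleValue] */
        DoubleValueIMP imp = (DoubleValueIMP)class_getMethodImplementation(
            object_getClass(objects[0]), sel);         /* the one dynamic lookup */
        double total = 0.0;
        for (int i = 0; i < count; i++)
            total += imp(objects[i], sel);             /* plain indirect calls from here */
        return total;
    }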
Apple engineers know this, which is why Swift backed off to a more C++-like model for vtable dispatch and has aggressive devirtualization optimizations built on top of this model implemented in swiftc. This effort effectively makes iOS's native language catch up to what JavaScript JITs can already do through speculation.
Thanks for the detailed response. To summarize, it sounds like your position is that:
1. Objective-C's compile-time memory management (ARC) is actually slower than JavaScript's garbage collection.
2. The performance cost of Objective-C message sending outweighs the cost of JavaScript's JIT compilation. And furthermore, JIT compilation is actually an advantage, due to the other optimization techniques it enables.
I'd like to see a more direct comparison with benchmarks, but I can see where you're coming from.
Right. Note that this advantage pretty much goes away with Swift. Swift is very smartly designed to fix the exact problems that Apple was hitting with Objective-C performance.
I realized another issue, too: I don't think it's possible to perform scalar replacement of aggregates on Objective-C objects at all, whereas JavaScript engines are now starting to be able to escape analyze and SROA JS values. SROA is another critical optimization because it converts memory into SSA values, where instcombine and other optimizations can work on them. Again, Swift fixes this with SIL-level SROA.
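For readers unfamiliar with SROA, here is a minimal plain-C illustration (an analogy only, not engine or compiler code): an aggregate that never escapes the function can be dissolved into independent scalars and kept in registers, which is exactly the transformation that heap-allocated Objective-C objects rule out.

    /* Toy example of scalar replacement of aggregates: `a` and `b` never
       escape this function, so an optimizing compiler can discard the
       structs entirely and treat x/y as separate SSA values in registers. */
    typedef struct { double x, y; } Vec2;

    double dot_plus_one(double ax, double ay, double bx, double by) {
        Vec2 a = { ax + 1.0, ay + 1.0 };
        Vec2 b = { bx, by };
        return a.x * b.x + a.y * b.y;
    }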
I see, good point, but then wouldn't JIT compilation itself still carry a performance cost, as opposed to Objective-C being compiled before distribution to the client?
Source please, because unless you have anything to back this up, your claim can only be considered grade-A FUD.
Sometimes I feel like using JavaScript too much transports people to some kind of imaginary JavaScript fairy land full of rainbows and unicorns. I read the craziest things about JavaScript development, and I can't comprehend why so many people believe it has any redeeming qualities over other languages besides ubiquity.
See my reply to the sibling comment for the explanation. There haven't been enough cross-language benchmarks here to say definitively, but as a compiler developer I can tell you the method lookup issue is really fundamental and in fact is most of the reason for Swift's (eventual) performance advantage over Objective-C.
I've read your explanation, but I'm not convinced it supports your assumptions (which, barring any benchmarks that would make them factual, is what I consider them to be).
I'm aware of the dynamic dispatch overhead of Objective-C, but first of all it's my understanding that Apple's Objective-C runtime and compiler perform all kinds of smart tricks to reduce the overhead to a minimum (caching selector lookups and such), and second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.
I also don't buy the 'ARC is slower than GC' argument. ARC reference counting on 64-bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads. Only if you were to write some kind of computational kernel that operates on NSValue or whatever (which is a dumb idea in any scenario, about as dumb as writing such a thing in JavaScript) would you ever even see a difference compared to not having any memory management at all. Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks by the way.
Apart from that, you seem to spend an awful lot of effort explaining the things that would make Objective-C slower than JITed JavaScript, while completely disregarding the overhead that all this JITting, dynamic typing, etc. carries, and the fact that in JavaScript you basically have no way to optimize your code for cache friendliness or whatnot.
You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.
> second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.
That's just writing C, not Objective-C. But if we're going to go there, then neither does JavaScript. You can even use a C compiler if you like (WebAssembly/asm.js).
> ARC reference counting on 64-bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads.
No, it doesn't. I can guarantee it. The throughput cost is small in relative terms, I'm sure, but it would be smaller still if Objective-C used a good generational tracing GC.
(This is not to say Apple is wrong to use reference counting. Latency matters too.)
> Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks by the way.
Anyway, just to name one of the most famous of dozens of academic papers, here's "Down for the Count" backing up these claims: https://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-... Figure 9: "Our optimized reference counting very closely matches mark-sweep, while standard reference counting performs 30% worse." (Apple's reference counting does none of the optimizations in "Down for the Count".)
memorymanagement.org (an excellent resource for this stuff, by the way) says "Reference counting is often used because it can be implemented without any support from the language or compiler...However, it would normally be more efficient to use a tracing garbage collector instead." http://www.memorymanagement.org/glossary/r.html#term-referen...
This has been measured again and again and reference counting always loses in throughput.
> You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.
I've worked with Apple's compiler technology (clang and LLVM) for years. The method caching in Objective-C is implemented with a hash table lookup. It's like 10 instructions [1] compared to 1 or 2 in C++, which is a 3x-4x difference right there. But the real problem isn't the method caching: it's the lack of devirtualization and inlining, which allows all sorts of other optimizations to kick in.
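For contrast with that hash-table lookup, here is a toy model in plain C of what vtable-style dispatch boils down to (illustrative types and names, not clang's actual lowering): the call site is a couple of dependent loads plus an indirect call, with no hashing or selector lookup anywhere.

    #include <stddef.h>

    /* Toy model of vtable dispatch: load the vtable pointer, load the slot,
       make an indirect call. Compare with a message send, which hashes the
       selector into a per-class method cache on every call. */
    typedef struct Shape Shape;

    typedef struct {
        double (*area)(const Shape *self);   /* one slot per virtual method */
    } ShapeVTable;

    struct Shape {
        const ShapeVTable *vtable;           /* points at the class's table */
        double w, h;
    };

    static double rect_area(const Shape *s) { return s->w * s->h; }
    static const ShapeVTable rect_vtable = { rect_area };

    Shape make_rect(double w, double h) {
        Shape s = { &rect_vtable, w, h };    /* a concrete "class" fills the table */
        return s;
    }

    double total_area(const Shape *shapes, size_t n) {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += shapes[i].vtable->area(&shapes[i]);   /* load, load, call */
        return sum;
    }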
Apple did things differently in Swift for a reason, you know.
That's again a lot of information showing why Objective-C is not the most efficient language possible for all use cases, but it still does not provide any evidence that JavaScript would be faster. I'm not disputing the individual points you made, but in the context of comparing the overall performance of Objective-C vs. JavaScript it doesn't say much at all. It mostly shows that Objective-C will always be slower than straight C/C++, and nothing about JavaScript performance. I appreciate the thorough reply though.
One thing I do want to make a last comment about is your dismissal of using C/C++ inside Objective-C programs as some kind of bait-and-switch argument. Using C/C++ for performance-critical sections is IMO not the same as calling out to native code from something like JavaScript, or writing asm.js or whatever other crutch you could use to escape the performance limitations of a language. Because Objective-C is a superset of C, mixing C/C++ with Objective-C is so ingrained in the language that you have to consider it a language feature, not a 'breakout feature'. Nobody who cares about performance writes tight loops using NSValue or NSArray, or dispatches millions of messages from code on the critical path of the application's performance (which usually covers less than 10% of your codebase).

As an example, I'm currently writing particle systems in Objective-C, but it wouldn't even cross my mind to use anything but straight-C arrays and for loops that operate directly on the data to store and manipulate particles. This is nothing like 'escaping from Objective-C', as all of it is still architecturally embedded transparently inside the rest of the Objective-C code, just using different data types (float * instead of NSArray) and calling conventions (direct data access instead of encapsulation). It's more like using hand-tuned data structures vs. the STL in C++ than like calling native code or writing asm.js from JavaScript.
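For what it's worth, the kind of particle storage described above looks roughly like this in straight C (names and fields are illustrative, not from the actual project): plain structs in a flat array, updated in a tight loop with no boxing and no message sends on the hot path.

    #include <stddef.h>

    /* Particles stored as plain C structs in a flat array -- no NSArray,
       no NSValue boxing, no message sends inside the update loop. */
    typedef struct {
        float x, y;     /* position */
        float vx, vy;   /* velocity */
    } Particle;

    void step_particles(Particle *p, size_t count, float dt, float gravity) {
        for (size_t i = 0; i < count; i++) {
            p[i].vy += gravity * dt;    /* integrate velocity */
            p[i].x  += p[i].vx * dt;    /* integrate position */
            p[i].y  += p[i].vy * dt;
        }
    }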
I've seen Cordova/PhoneGap being used for an MVP. Then some screens slowly got rewritten natively, starting with the most critical ones.
All by the managers' and clients' decision. It was a horrible experience overall.
Exactly. Except, the time to do it is before you decide on a tool, when the company is comparing time to value vs. quality.
Whether or not the experience is great isn't a deal-breaker; it's a factor in the decision-making process. For plenty of companies out there, having a less-than-perfect application that they can deploy to many platforms in a month or two with a single developer or a couple of developers is a choice that might get made, when the alternatives are either much longer timetables or many more dollars.
Using a language in which 1 + "1" == "11" instead of one of the most type- and null-safe languages developed in the past few years is not exactly my idea of a delightful experience, but interface as a pure function of state is a good idea.
So degrade the experience just to make the product team happy? What product team worth a darn would actually be ok with that?
Did we forget the customer somewhere? What ever happened to trying to wow your customers rather than simply tolerating them?
This attitude makes me crazy angry. That one comment just revealed everything that's wrong with many product companies: just give the user a 'good enough' experience. If you aren't striving to create a delightful user experience then you're disrespecting your users. Laziness.
If Steve Jobs had heard you say that, he would have fired you in an instant -- and for good reason. We owe our users a delightful experience, giving them anything less is just unacceptable.
A product team with limited time and money. We'd all love to have best-in-class apps on every platform, but there are only so many hours in the day, and only so many developers on the team. At some point we have to make tradeoffs, and sometimes the right one is to deliver something imperfect now rather than perfection later.
> So degrade the experience just to make the product team happy?
Which part of "equivalent" did you miss?
You're entirely missing the point. This isn't about product experiences whatsoever. You can ship equivalently delightful experiences in React Native as you can in pure native apps. The point of React Native isn't to make your app more delightful.
The point is to develop it faster, in a more enjoyable manner, and enable one product team to develop for multiple platforms. This is an incredible win organizationally and for developer experience.
I totally would have used this last year when commuting from SF to Menlo Park. It's too bad I sold my car and quit my job, would have been fun to meet some cool people and make some extra money at the same time.
What are some actions people like myself can take to push our community, nation, and world towards Basic Income?
This is particularly important to me because of how fast Uber/Tesla are rushing towards fully autonomous vehicles. Over 4 million jobs in the US alone are driving jobs. Those drivers are all gonna be laid off soon.
If you live in the United States, you might want to connect with us at the Universal Income Project.
We have regular meetups in San Francisco, and are looking for more organizers around the country to collaborate with on Basic Income Create-a-thons. And if you're in a different country, we might be able to connect you with advocates closer to you. There are a ton of interesting, active projects going on right now around the globe.
We recently hosted a talk by Matt Krisiloff, who is managing YC Research's basic income program, and are really excited to see how it develops.
Shoot us an email at questions@universalincome.org if you want to learn more!
I think the term "self-driving" is preferable to "driverless" because it's not like there's no driver, the car is the driver. It's also less scary-sounding which could help speed up adoption. :)
Some jurisdictions have rules to restrict convoying, many trucks following each other as a group. I'm not sure I want to see road trains just to save a few bucks on that second/third driver.
Not having a driver in the cab supervising the robot is safer? I don't see any call to remove pilots from airplanes. And I doubt the extra 200 lbs of driver makes much difference given these are trucks.
Again, how does having or not having a driver in each cab affect this?
I would add that allowing multiple trucks to operate this way will require some changes to the law. And the redesign of many roads. Imagine trying to merge into traffic or onto a bridge when some vehicles are longer than merging lanes. It sounds like a great idea but isn't practical.
Truck five, if it has a human driver, needs to leave plenty of space so s/he can see what's going on with trucks four and three. (Repeat this for all the lorries and they end up spread out.) The computers don't need that visual space, because they're all linked by radio shuffling data back and forth.
> Imagine trying to merge into traffic or onto a bridge when some vehicles are longer than merging lanes
i) When the lorries approach a merging lane, they either add some space between each truck, or they use a different lane of the motorway (because this article is talking about England), which allows people to move from the merging lane onto the motorway.
ii) Lorries already drive in informal ad-hoc convoys. They'll keep a safe distance from each other. That safe distance isn't enough for a car to get into if the car is trying to merge from the entrance ramp. So if this isn't already a problem, I'm not sure why it would become a problem with robot drivers. What's changed that suddenly makes it a problem?