
> we decided that the only way to leverage the full value of Kotlin was to go all in on conversion

Could someone expand on this, please?




In addition to what @phyrex already pointed out, without any Java in the code base, they probably hope to hire from a different cohort of job seekers.

Younger developers are far less likely to know Java today than Kotlin, since Kotlin has been the lingua franca for Android development for quite some time. Mobile developers skew younger, and younger developers skew cheaper.

With Java out of the code base they can hire "Kotlin developers" wanting to work in a "modern code base".

I'm not suggesting there's something malevolent about this, but I'd be surprised if it wasn't a goal.


I think you're on to something here. When recruiters contact me about Java jobs, I tell them my interest in a Java job is about as high as my interest in RPG or COBOL, and that I'm wary of spending time on a "legacy" technology. Most of them are fairly understanding of that sentiment, too.

If I had someone call me about Kotlin, I would assume the people hiring are more interested in the future than being stuck in the past.


You're already expected to learn a number of exotic (Hack) or old (C++) languages at Meta, so I'm pretty sure that's not the reason.

To quote from another comment I made:

> I don't have any numbers, but we know that the Meta family of apps has ~3B users, and that most of them are on mobile. Let's assume half of them are on Android, and you're easily looking at ~1B users on Android. If you have a NullPointerException in a core framework that somehow made it through testing, and it takes 5-6 hours to push an emergency update, then Meta stands to lose millions of dollars in ad revenue. Arguably even one of these makes it worth moving to a null-safe language!


An NPE means an incomplete feature was originally pushed to production. It would still be incomplete or incorrect in Kotlin, and would still need a fix pushed to production.

It's even worse with Kotlin: without the NPE to warn that something is wrong, the bug could persist in PROD much longer, potentially impacting the lives of 1 billion users for far longer than it would have if the code had remained in the sane Java world.


How would a bug persist in production if you get a compile-time error that prevents you from running the application? You don't seem like you know what you're talking about.

Even if I am charitable with my interpretation, I'm not sure I get your point. If you refuse to handle the case where something is nullable and you convert it to non-null via .unwrap() (Rust perspective, I haven't used Kotlin), then you will get your NullPointerException in that location, so Kotlin is just as capable as Java of producing NPEs.

But here is the thing: the locations where you can get NPEs are limited to the places where you have done .unwrap(), which is much easier to search through than the entire codebase, which is what you'd have to do in Java, where every single line could produce an NPE. So if you push incomplete code to production, you will have strong markers in the code indicating that it is unfinished.
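
For reference, Kotlin's analogue of .unwrap() is the not-null assertion operator !!. A minimal sketch of how the call site becomes the marker (findUser is a made-up function):

    // Hypothetical lookup that may legitimately return null
    fun findUser(id: Long): String? = if (id == 42L) "alice" else null

    fun main() {
        val maybeUser: String? = findUser(7L)

        // println(maybeUser.length)    // compile-time error: maybeUser may be null
        println(maybeUser?.length)      // safe call: prints "null" instead of throwing
        println(maybeUser ?: "unknown") // Elvis operator: falls back to a default

        // The escape hatch: !! throws a NullPointerException right here
        // if the assertion is wrong, which is the exact spot to grep for.
        val user: String = maybeUser!!
        println(user.length)
    }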


"The" reason is not what I'm speculating on, because I don't think a singular reason is likely to exists.

There is likely a mix of reasons, of which NPE avoidance is almost certainly one. And hiring/talent management is almost always another when making technology choices, particularly when the choice is coupled with a post on the company's tech blog.


From the article:

> The short answer is that any remaining Java code can be an agent of nullability chaos, especially if it’s not null safe and even more so if it’s central to the dependency graph. (For a more detailed explanation, see the section below on null safety.)


One of my biggest gripes with an otherwise strictly typed language like Java is the decision to allow nulls. It is particularly annoying since implementing something like nullable types would have been quite trivial in Java.
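
For comparison, this is roughly what that distinction ended up looking like on the same VM in Kotlin (a minimal sketch):

    // String can never hold null; String? must be checked before use
    fun greet(name: String) = "Hello, $name" // null cannot reach here

    fun main() {
        val present: String = "world"
        val absent: String? = null

        println(greet(present))
        // greet(absent)                   // compile-time error: String? is not String
        absent?.let { println(greet(it)) } // runs only when non-null
    }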


Would it have been trivial and obvious for Java (and would Java still have been "not scary") back in the 90s when it came out?


It wouldn't have been particularly hard from a language, standard library, and virtual machine perspective. It would have made converting legacy C++ programmers harder (scarier). Back then the average developer had a higher tolerance for defects because the consequences seemed less severe. It was common to intentionally use null values to indicate failures or other special meanings. It seemed like a good idea at the time.
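
Concretely, the convention looked like this, and it survives in the JDK to this day (a small Kotlin sketch):

    fun main() {
        // null as an in-band signal, exactly the pattern described above
        val cache = HashMap<String, String>()
        val hit = cache["session"]       // null means "not found"
        val port = System.getenv("PORT") // null means "not set"
        println("hit=$hit, port=$port")
    }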


> It would have made converting legacy C++ programmers harder (scarier).

And that, right there, is all the reason they needed back then. Sun wanted C++ developers (and C developers, to some extent) to switch to Java.


It would have been trivial for record types to be non-nullable by default.
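
Kotlin's data classes, the closest analogue to records, show what that default would look like (a minimal sketch):

    // Components are non-null by default; nullability is opt-in per field
    data class Point(val x: Int, val label: String)   // no nulls possible
    data class Tagged(val x: Int, val label: String?) // null explicitly allowed

    fun main() {
        println(Point(1, "origin"))
        println(Tagged(2, null))
        // Point(3, null) // compile-time error: null is not a String
    }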

Record types are 3 years old and they are already obsolete with regard to compile-time null checking. This is a big problem in Java: a lot of new features have become legacy code and are now preventing future features from being included out of the box.

This is why the incremental approach to language updates doesn't work: you can't change the foundation, and the foundation grows with every release.

I am awaiting the day Oracle releases class2 and record2 keywords for Java with sane defaults.


Tony Hoare (the guy who originally introduced the concept of null for pointers in ALGOL W) gave a talk on it being his "billion dollar mistake" in 2009: https://www.infoq.com/presentations/Null-References-The-Bill...

Now, this wasn't something that just dropped out of the blue; the problems were known for some time before. However, it was considered manageable, treated similarly to other cases where some operations are invalid on valid values, such as division by zero triggering a runtime error.

The other reason there was some resistance to dropping nulls is that they make a bunch of other PL design problems a lot easier. Consider this simple case: in Java, you can create an array of object references like so:

   Foo[] a = new Foo[n];  // n is a variable so we don't know size in advance
The elements are all initialized to their default values, which for object references is null. If Foo isn't implicitly nullable, what should the elements be in this case? Modern PLs generally provide some kind of factory function or equivalent syntax that lets you write initialization code for each element based on index; e.g. in Kotlin, arrays have a constructor that takes an element initializer lambda:

   val a = Array(n) { i -> Foo(...) }
But this requires lambdas, which were not a common feature in mainstream PLs back in the 90s. Speaking more generally, it makes initialization more complicated to reason about, so when you're trying to keep the language semantics simple, this is a can of worms that makes it that much harder.

Note that this isn't specific to arrays, either. For objects themselves, the same question arises wrt not-yet-initialized fields, e.g. supposing:

   class Foo {
      Foo other;   
      Foo() { ... }
   }
What value does `this.other` have inside the constructor, before it gets a chance to assign anything there? In this simple case the compiler can look at control flow and forbid accessing `other` before it's assigned, but what if instead the constructor does a method call on `this` that is dynamically dispatched to some unknown method in a derived class that might or might not access `other`? (Coincidentally, this is exactly why in C++, classes during initialization "change" their type as their constructors run, so that virtual calls always dispatch to the implementation that will only see the initialized base class subobject, even in cases like using dynamic_cast to try to get a derived class pointer.)
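
Kotlin, incidentally, still leaves this exact corner open: a dynamically dispatched call from a base-class constructor can observe a non-null property as null. A minimal sketch (class names made up):

    open class Base {
        init {
            // The compiler warns about calling a non-final function
            // during construction, but this compiles and runs.
            render()
        }
        open fun render() {}
    }

    class Derived : Base() {
        private val label: String = "ready" // assigned only after Base's init runs

        override fun render() {
            println(label) // prints "null" despite the non-null declaration
        }
    }

    fun main() {
        Derived()
    }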

Again, you can ultimately resolve this with a bunch of restrictions and checks and additional syntax to work around some of that, but, again, it complicates the language significantly, and back then this amount of complexity was deemed rather extreme for a mainstream PL, and so hard to justify for nulls.

So we had to learn that lesson from experience first. And, arguably, we still haven't fully done that, when you consider that e.g. Go today makes this same exact tradeoff that Java did, and largely for the same reasons.



