IT Runs on Java 8 (veekaybee.github.io)
964 points by anielsen on May 10, 2019 | 529 comments



In the majority of companies it's simply not possible to operate on the bleeding edge the way HN articles would have you believe you should. Besides the obvious issues around the value of rewriting stable legacy systems on new platforms, there are also manpower issues. You need tier 1 developers to live on the bleeding edge because any problem that comes up (and they will come up) largely requires you to solve it yourself, sans the help of the greater internet. On a legacy system an average dev can generally Google any issues that arise because they are known problems.

When dealing with new tech you're a lot more likely to be the first person to run into some obscure use case no one has ever experienced before. And in that case you need devs who are not only capable but willing to invest in solving the problem. Sometimes that means re-architecting something in the stack, and sometimes it may even require making a PR to the original project. That's a lot of investment that may make your devs happier but probably provides little concrete value to the business.

That said I'm still building on elixir so what do I know.


Really good points. IMO the web-surfing crowd's taste for news (as opposed to their capability when it comes to work) is deceiving. Their excitement for prospective high-leverage information creates a demand that biases community sites like HN and Reddit toward novelty, and the result is this massive FOMO loop: "Read it, remember to try it, forget to try it, aw damn a new thing already came out and it's better because X! Read it..." I love that the author of the article really speaks to this situation.

I have a relative who got really into this mindset. In fact he picked the "HN's Choice" software framework of the day for one of our co-creative projects a couple years ago. Mostly a fun project, but it could have gone somewhere, maybe. He got as far as setting it up so that we had some scaffolding, then basically he flamed out. I don't blame him at all; he'd never even used it before and expected himself to be able to just run with it. And this guy was a _master_ at a certain language starting with the letter P, but he was ashamed to use it. It made me so upset to see him feel all this pressure and then collapse.

Personally I feel that pressure myself sometimes but being aware of it helps a lot. As a hobby side project, I decided to go back and do some 1990s MS-DOS programming and experiencing this FOMO stuff was part of the motivation. I needed to be free to work deep instead of thinking broad, so to speak.


themodelplumber says:

> "And this guy was a _master_ at a certain language starting with the letter P..."

Prolog?

What a shame he couldn't use his best language! It has such nice web frameworks.

https://en.wikipedia.org/wiki/List_of_programming_languages#...


I imagine you're joking? More likely Perl, it was a big web language back in the day. Smaller chance that it was Pascal.


Actually PHP is more likely than either of those.


Good points here. Which is why HN is oriented toward startups with Tier 1 engineers. PG and his cofounders invented the web app, using a combination of old tech (Lisp) and new (the internet).

There are plenty of other forums and sites to read about conventional tech. HN is where I find the bleeding edge. 90% of it is just fascinating. But the other 10% offer tantalizing possibilities for making something novel, which is to say: solving an unsolved problem.

“How to Become a Hacker” puts it plainly: no problem should have to be solved twice. Drudgery is evil. And by those axioms, a ‘Hacker’ is just uninterested in old solutions to old problems. We need to live in the future so that we can build the future. Although HN’s content has suffered over the last 5 years with the influx that accompanied a wider awareness of startups, it still is the best place I know of to dip your toes into the various futures we may one day encounter.

Having said that, lately I’ve found that Google Scholar is often more thought-provoking. If only there were an HN for Google Scholar.


Oh, for christ's sake. I don't want to get all "get off my lawn" but HN is full of early-20-somethings rediscovering things and calling them 'bleeding edge'.

The highest paid people in our industry are working on drudgery full-time for FAANG. And that's fine, people have families, I'm not judging anyone. But let's not fool ourselves.


A good example of this is static typing - shat on for years by HN, and now, all of a sudden it's the greatest thing since sliced bread!


Yeah, but I think the kind of static typing that was 'shat on' for years is not the same as the one being praised today.

The shat on one is the old Java, verbose, obtrusive style. The newly praised one is Haskell-style, type inferred, expressive...

Now you could say that's not new, BUT what is new is marrying an ML-style type system to languages whose other concepts devs are largely familiar with, and packaging it the right way to get it into production instead of just academia. And that's the case even for historically impenetrable low-level programming.
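To make the contrast concrete, here's a minimal TypeScript sketch (my own illustration, not from the thread): the ML-style inference means locals and return types are checked without being annotated, which is exactly the part the old verbose Java style was shat on for.

```typescript
// One annotation on the input shape; everything else is inferred.
type Order = { items: { price: number; qty: number }[] };

function total(order: Order) {
  // `sum` and the return type are inferred as number; returning a
  // string here would be a compile error, with no annotation written.
  return order.items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

const order = { items: [{ price: 5, qty: 2 }, { price: 3, qty: 1 }] };
console.log(total(order)); // 13
```

The checking is as strict as the verbose style; only the ceremony is gone.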


Agreed. The article that opened my eyes to this shows (IMO) a serious deficiency in C#'s type system compared to F#'s, which includes the concept of tagged-union types. It shows a very simple shopping cart program that can't be modeled cleanly in C# without using the visitor pattern... which is difficult to read IMO.

https://fsharpforfunandprofit.com/csharp/union-types-in-csha...
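For readers without F#, here's a rough TypeScript analogue of the idea (the Empty/Active/Paid states are my paraphrase, not the article's exact code): a tagged union plus an exhaustive switch replaces the visitor pattern entirely, and each state carries only the data valid for that state.

```typescript
// A shopping cart as a tagged union.
type Cart =
  | { tag: "empty" }
  | { tag: "active"; items: string[] }
  | { tag: "paid"; items: string[]; amount: number };

function describe(cart: Cart): string {
  switch (cart.tag) {
    case "empty":  return "empty cart";
    case "active": return `active cart with ${cart.items.length} item(s)`;
    case "paid":   return `paid ${cart.amount} for ${cart.items.length} item(s)`;
  }
  // No default needed: the compiler knows the switch is exhaustive,
  // and adding a fourth state becomes a compile error here.
}

console.log(describe({ tag: "paid", items: ["book"], amount: 20 }));
```

Note you also can't construct illegal states, e.g. an empty cart with an `amount`.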


These are the kind of comments that make HN so special: knowledgeable and insightful, due to deep experience with the past and critical understanding of the present. It isn’t just a rush to what is shiny and new, but is capable of identifying what is novel and valuable, and is articulate enough to explain why.

Normally my comment would be an unnecessary back-slapping, but since the conversation veered this way, I think it is appropriate to call out your comment as something that represents the spirit that makes HN unique and wonderful.


I never realised how highly some people praise hackernews comments lol

Hackernews has an _instagram filter_ where everybody brags about how smart they are.

Then there's a bunch of fresh grads who fanboy their favourite tech companies.

Then every so often you get a gem of a comment from a knowledge leader in the industry.

But that's quickly overshadowed by 'my Startup, my Startup, mah Startup' and how everybody wants to be rich one day.

But like every large community on the internet, it eventually devolves into an echo chamber.


Current-day HN strikes me as very similar to early 00s Slashdot, albeit more self-serious and with an ideology more informed by SV capitalism/entrepreneurialism than the convoluted politics of open source software.


Hacker News ranks at #959 in the US. It didn’t get that rank from just being geared toward startups with Tier 1 engineers.

Look on the front page right now and see how many stories are about startups. That didn’t change in the last five years. It’s always been about technology in general.

HN’s front page 10 years ago:

https://news.ycombinator.com/front?day=2009-05-10

HN’s front page 5 years ago

https://news.ycombinator.com/front?day=2014-05-10


We’re both making points based off anecdotal experience, so who knows. I have been a reader for the last 7 or 8 years, but my experience is just one subjective data-point.

Having said that, the most apparent change to me is in the tone and substance of the comments rather than the front page. And it isn’t a huge change. Just noticeable for me.


You don’t have to use anecdotal experience. You can use the same link format for any day that Hacker News existed and see the same thing.


HN archives are stored in BigQuery, so it would be possible to do an analysis that was more rigorous than a human reading the thousands of front pages to see a pattern of difference.


You are assuming "tier 1 engineers" are also the ones using the bleeding edge technologies. This is not true in my experience. The best developers care a lot about using the best tools available for a particular task - which is very different from using the newest tools available. In reality it typically takes a long time for a tool to become mature enough for productive use, at which point it is not cutting edge anymore.


In the case of open source, everyone wants someone to (beta) test their solutions in production. Only after a few years of such testing does a tech product become usable for the rest of the industry, and that's when it becomes profitable.

I'm not surprised the leaders of IT, the people who create those new technologies, push a narrative to use those new shiny things.


> Which is why HN is oriented toward startups with Tier 1 engineers.

Or engineers who've convinced themselves that they're Tier 1 engineers...


Are we reading the same HN? It seems the majority of people advocate fairly sane, proven choices for the majority of projects, e.g. start with a monolith, default to PostgreSQL for the DBMS, etc.


There's a pretty big difference between what the people on HN would have you believe and what the articles on HN would have you believe.


You've never seen disdain for Java on HN?


Somewhat but I think it's more a reflection of sentiment towards anything Oracle related.


> provides little concrete value to the business

The concrete value such things add to our business is happy staff (which you mentioned), which means the best people don't leave and stay excited and productive for the long term. It also helps when hiring, because it gets the best people to join us in the first place.

It's not just about "bleeding edge", it's about giving dev teams the freedom to do that if they want to. Some do more than others, but the point is, if they get it wrong, things break and they get called at the weekend or whatever and they soon change track.


The costs of doing such things include...

1. Half-done projects started by some happy staff who then went on to start the next project in the next shiny new thing, left to be more-or-less completed either by someone who was just about capable of cut-n-pasting without understanding, or by someone who decided that some new shiny thing needed to be added to make the rest wonderful.

2. A monstrous stack of projects in 637 different programming languages, frameworks, ideologies, coding styles, and indentation levels, guaranteed to require rewriting for any change. Result: 638 different things.


I always hear people warn about these things, but haven't seen it in practice at all, as long as those who write the systems are also responsible for operating them and not subjected to inappropriate external interference.

Of course, letting someone write whatever and then move on and leave it to someone else is a problem, but that's a problem regardless of whether they used bleeding edge tech or not.

For example, I've personally seen more tech-debt sins in traditional monolithic Java + RDBMS applications than in any of the Go-based micro-services + NoSQL systems I've seen over the years. I'm not trying to prove that "bleeding edge" is therefore better, because it's not. I'm saying it's irrelevant.

The problem of leaving behind unmaintainable crap isn't about the tech, it's about the management, the team processes, the prioritisation process etc.


"Move Judiciously And Reuse Things" vs "Move Fast And Break Things"

(There may be some overlap)


Fun story: I had to develop the same system for two different companies, one a startup with 3 employees (the founders), the other a billion-dollar business. Just a backend API & web interface, together with an iOS app displaying the content.

I could use shiny new tech for the startup (which at the time was Python on App Engine and its NoSQL datastore, with a Backbone.js framework), whereas the other one forced me to use Java and deploy the code in the big corp's datacenter.

The startup code was built in two months, and maintained by an intern for two years with success. It basically cost nothing to run.

The big corp code was built in a year (note: I did know how to develop in Java), the people in charge couldn't manage to deploy it (I had to do it myself, despite just discovering their infra on the spot), and I had to maintain the code myself, because despite huge code guidelines and restrictions on the technologies allowed, nobody inside was assigned to maintaining the code.

So, yeah, there's definitely a danger in absolutely wanting to follow the HN hype. But don't feel good if your company relies on 8-year-old tech and processes. It could be losing a TON of money by not staying up-to-date with the state of the art.


It is not the technology that is slowing down the enterprises. That is done by processes, humans, and risk.

You cannot stay up to date permanently, because that introduces risks all the time. Once you earn money, there are layers to protect the money stream.

You can lose tons of money by not staying modern, that is true. But you can lose everything when you ignore a risk. The trick is to find a balance.

And all of that is not considering investors and their idea how the money should be used.


You can't avoid upgrading forever though, eventually your legacy technology will stop getting security updates and support. Then you are tasked with a huge leap from old to new.

I suspect corporates don't really want to get stuck with so much legacy, they just don't know how to avoid it.


That is right. That is the balance I mentioned. In our teams we try to update the base framework every year to avoid locking us out of the innovation in language and library ecosystem. But a UI stack for example you do not migrate without explicit funding.


You nailed this and this is very thoughtful.


The start-up code, written in modern Java, would similarly have taken two months to develop and been maintainable by an intern.

The problem in enterprise tech like you found is that one is forced to use certain, non-productive old frameworks filled with legacy, over-engineered bloat.

Modern Java micro-services, in contrast, are really fast to develop. Green-field Java projects where one can make personal choices of lean technology and libraries are simply amazing. They also have terrific characteristics under load. The JVM performs well even with bloatware. When you trim down and make your JARs lean and mean, it's utterly amazing.


> Modern Java micro-services in contrast are really fast to develop.

This attitude is a bit surprising to me. The main point of micro-services is that they address a complexity problem when dealing with large organizations. That is, they allow a large organization to break into small teams that can work (relatively) independently so each team can iterate faster. However, microservices definitely make a host of issues harder:

* cross-service transactions are much harder

* cross-cutting concerns can be more difficult to change.

* can add organizational complexity if a feature you want to add needs corresponding changes in upstream or downstream services.

Even Martin Fowler has this quote:

> Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith. The majority of software systems should be built as a single monolithic application. Do pay attention to good modularity within that monolith, but don’t try to separate it into separate services.


And I agree with everything you say! We have followed Martin Fowler's advice. We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time.

Micro-services allowed us to break this. Micro-services also allowed us a faster turnaround time in feature delivery and reliability. It is also easier to isolate problems.

Generally, stuff in the monolith that is already abstracted by large service facades is a good candidate for a separate service with an independent data model. Avoid cross-service transactions completely. If you have a cross-cutting concern, it generally means you need a separate service managing that cross-cutting concern.

Organisational complexity is definitely increased. This can be mitigated by tooling. Our build pipeline shows the full graph dependency chain, what is built, what is getting built, what has been deployed, etc.

We have the concept of a "system" that is basically a versioned set of micro-services running off a build trigger. We developed the capability to namespace systems - ie each system of services uses separate resources (kafka/db/etc) and separate URL's (via custom domains) when deployed on our cloud platform. You can also "plugin" your micro-service into a targeted system for diagnostics. This way dev, testing and product demo teams can work independently. The latter is not micro-service best practice, but in a large, slow-moving organisation, we found it valuable.


> We had a monolith product developed by several geographically separated teams whose development crawled to a snail's pace and whose full build took an eye-rolling amount of time. Micro-services allowed us to break this.

We also have a distributed dev team, but decided against using microservices. Instead, we have a monolith with a plugin architecture, so remote teams can just add independent modules to add functionality. Occasionally changes are made to the monolithic application, and these changes are heavily scrutinized. The plugin architecture provides many of the benefits of a microservice, while also allowing for more flexibility on occasion, and eliminates the flaky network calls that are inherent in microservices.
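A sketch of the shape such a plugin architecture can take (all names here are invented for illustration, not this team's actual design): the core exposes one small interface, and remote teams ship modules against it without touching the heavily-scrutinized core.

```typescript
// The stable core contract; changes here are the rare, scrutinized ones.
interface Plugin {
  name: string;
  handle(request: string): string | null; // null means "not mine"
}

class Monolith {
  private plugins: Plugin[] = [];
  register(p: Plugin) { this.plugins.push(p); }
  dispatch(request: string): string {
    // In-process calls, so no flaky network hop between "services".
    for (const p of this.plugins) {
      const result = p.handle(request);
      if (result !== null) return result;
    }
    return "no handler";
  }
}

// A remote team's independent module, added without core changes.
const app = new Monolith();
app.register({ name: "billing", handle: r => (r === "invoice" ? "invoice created" : null) });

console.log(app.dispatch("invoice")); // "invoice created"
console.log(app.dispatch("report"));  // "no handler"
```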


> microservices definitely make a host of issues harder ...

I agree with all the issues you've stated, but I'd like to add one more. Microservices arguably make it much easier to build systems with circular dependencies, leading to weird race conditions and deadlocks.

Consider two units of code A and B. If implemented as classes, modules, or libraries, it's relatively easy to spot and prevent A calling into B which in turn calls back to A. Sometimes the compiler and tools can automatically catch that.

With microservices, catching dangerous dependencies like this is much more difficult, as each service outwardly seems independent of the others, and there are few tools to catch these dependencies.


There are few options, but what do you need besides a graph data structure?

We had a pipeline consisting of microservices and Kafka topics. Simple if/then logic quickly became problematic so I implemented our flow control as a directed acyclic graph, and it helped tremendously.

It's also easy to render your graph out with any number of visualization tools to quickly understand/validate work flows.


> There are few options, but what do you need beside a graph data structure?

I don't see how graph data structure solves this problem. Suppose you've created a photo sharing app. One microservice, A, has the graph database and stores the photos. One service, B, uploads and downloads photos. And a third service, C, applies filters to photos.

It's pretty easy to architect these services such that the download service B uses the filter service C in some situations, and the filter service C uses the download service B in others. This is obviously a bad design, but with microservices it's easier to make these bad design choices because the folks who wrote one service have little information about the other.


Sure, a graph wouldn't help fix that, but it would at least illustrate the cyclic relationships, and hopefully someone recognizes the inherent drawbacks.

A tool to "fix" your example probably doesn't exist, but a graph is an excellent way to represent dependencies, reason about progress in the flow, and enforce constraints in a generic way.
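A minimal TypeScript sketch of the point (the service names and call edges are the hypothetical photo-sharing ones from upthread, not a real system): represent "who calls whom" as an adjacency list and a few lines of depth-first search surface the B/C cycle before it becomes a production deadlock.

```typescript
// Adjacency list: "X calls Y". B <-> C form the cycle; A (storage) is clean.
const calls: Record<string, string[]> = {
  A: [],          // storage service
  B: ["A", "C"],  // download sometimes uses filter
  C: ["B"],       // filter sometimes uses download
};

// DFS with an "on the current path" set; revisiting a node on the
// path means we found a back edge, i.e. a cycle.
function hasCycle(graph: Record<string, string[]>): boolean {
  const done = new Set<string>();
  const onPath = new Set<string>();
  const visit = (node: string): boolean => {
    if (onPath.has(node)) return true;
    if (done.has(node)) return false;
    onPath.add(node);
    const found = (graph[node] ?? []).some(n => visit(n));
    onPath.delete(node);
    done.add(node);
    return found;
  };
  return Object.keys(graph).some(n => visit(n));
}

console.log(hasCycle(calls)); // true: B -> C -> B
```

The same traversal over a monolith's module imports is what IDE dependency views already do; with microservices someone has to assemble the edge list first.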


Unlike the others, I'm going to disagree with what you've said. If you have a mono-repo set up, you can develop like it's a monolith, but with the added benefit that you don't have to deploy the whole macro-service at once.

I think people are just spinning up a bunch of unorganized services and calling them micro and then complaining about it.

If you're on a small team that can build a clean monolith, the work to make them microservices is imo pretty trivial.


The main point was that it's a lot easier to make those design mistakes when using microservices. Like you said, there is no tool to easily spot and fix those issues. In the case of a monolith there are such tools, and often it resolves to simply viewing the dependency graph in your IDE.


I have never seen a microservices architecture where transactions spanning services were trivial.


If you need to do that, then you've most likely cut along the wrong service boundary. The same thing can happen in a monolith, although you can kludge something out with spaghetti code, I suppose. Doing it the right way in both cases takes about the same amount of work.


How do you cut your service boundaries?


Sure, modern Java micro-services could be fast, compared to legacy Java micro-services from years ago.

It's still not comparable with Rails. Let alone some niche bleeding edge technologies which are focused on productivity.


The goal of the enterprise dev process is that, no matter what the outcome is, nobody is to blame.


GAE went through a lot of changes after coming out. I remember the days spent catching up with its changes, so I wouldn't say it cost nothing to run for two consecutive years. Even worse, after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time. Bleeding edge my ass.


> GAE went through a lot of changes after coming out, I remember the days spent catching up with its changes, ... after around 3 or 4 years they completely cancelled the node I was on, notifying me to deal with my data myself within a limited amount of time.

I hear you brother. Google generally puts out the "smartest" stuff, but they don't really care about the day-to-day needs of most of their users. The idea is generally "we've thought about this a lot, and this is the best way, so change your code to handle it." The problem is that every few months, they seem to come out with a new "best way".


This sounds like Java is not the big corp's problem. This is bound to happen if the people deploying the software don't know their way around their own infrastructure and around the software they are going to deploy. A big corp tends to centralize knowledge into departments. To get stuff done, departments have to communicate and coordinate. If there are shoddy processes for this in place, you get a big, fat, slow organization.


I don’t even like Java but I can tell you it had nothing to do with Java vs. the new and shiny.


It wasn’t Java per se (as I said, they had a set of strict rules as to what framework I was allowed to use, aka Spring only), but the general culture around it, of « safe and proven, enterprise-grade stack ». Had to use Ext JS for the web interface, for example.

Yet, even with this stack, 50% of the time spent was due to internal process. As an example: we called them once to ask for some news, and they told us about doing a kickstart meeting. At this point we had already finished developing everything and wanted to talk about deployment (little did we know we would have to redevelop part of the stack because of guidelines we hadn’t been told about before).


I honestly believe given the same set of requirements, constraints, and people skilled with their respective stacks, the language and framework wouldn’t make that much of a difference.

Out of the four modern languages/frameworks that I’m fluent in: C++/TreeFrog (played around with it just because I am a masochist), C#/ASP, Python/Django and JS/Express, I really don’t see any difference in my productivity in any of them.


> there's definitely a danger to absolutely wanting to follow the HN hype. But don't feel good if your company relies on 8yo tech and process. It could be losing a TON of money by not staying up-to-date with the state of the art.

I think the tech itself has little to do with it. Cash-strapped small startups are hungry for developers, and often ask them to do more than they can. Large companies often have many developers idling or working on fake toy projects that never see the light of day. There are massive inefficiencies in large institutions. Using old tech may be one of them, but it's a drop in the bucket compared to inefficiently utilizing the people you have.


> But don't feel good if your company relies on 8yo tech and process. It could be losing a TON of money by not staying up-to-date with the state of the art.

And yet. Despite the danger, that was a billion dollar business. So they were doing something right, weren’t they? ;)


The "problem" is, in a big corp nobody gives a damn how much things cost. But you see, it's only a problem from a certain point of view (you want to get things done quick and cheaply). Otherwise, it's not a problem at all. It looks like you got yourself a nice support contract with that big corp, and the big corp will happily absorb the cost - everyone's happy.


What was the relative scale of the two systems?


Very similar. Very few users; everything could run on one server without a problem. The startup would have needed to scale up if they’d met with some heavy success, but App Engine would have handled it just fine.


The HN front page is similar to any community. For example, a car site's homepage will be listing the latest supercars, expensive turbos, rims, whatever... while most readers are driving a $30,000 Civic.

The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about, regardless of what the mass is doing.


> The best & brightest in tech are working with the best tools on the biggest problems

I don't think that's true—there is a subset of the best and brightest who work on greenfield projects very decoupled from existing customer bases, and they get to blog / present / post a lot about what they're doing. There are quite a few "best and brightest" people who are in large companies or slow-moving industries. They're often constrained to existing tooling, because moving to fancy new tooling is a huge risk and time sink for limited reward. They might be using cool personal tools—fancy editors and keyboards and window managers—but the stack they work on is generally "legacy".

And usually the problem of "How do we make this work slightly better for millions of end users" ends up being a bigger problem than "How do we do something really cool as a demo."


The best and brightest are working on the greatest problems and are not focused on using the newest tools; in many cases they are using substandard tools because they are focused on the problem.


If they're really the best, wouldn't they have their pick of workplaces and optimize for personal enjoyment? Nobody who could chew through research level algorithm problems all day would willingly write Java 8 CRUD apps for Windows Server 2000, because those are the people that have a choice.


Personal enjoyment can take many forms and people have more than one priority in life. Big corps can also have their upsides aside from tech and process related questions that might be attractive. While I have my pet peeves that would hinder progress, I personally don't really care about a particular stack enough to get invested. If I'm too concerned with that aspect that would imply that I'm not working on the interesting part of the problem anyway.

You can handle a lot of restrictions if the domain/problem is interesting enough and the constraints put on you don't feel too taxing, e.g. because they aren't enforced for your role or team very much. I feel like company size just isn't a good indicator for personal enjoyment/growth/$whatever, lots of research oriented divisions in larger corporations will let you work on interesting topics and hand off the engineering part to teams with people that enjoy that particular aspect of our world, both working for the same company.


One, the hiring market isn't either liquid enough or high-information enough for this to work.

Two, they have their pick of workplaces and focus on finding the biggest problem or perhaps the biggest paycheck, not the most freedom in tools. I have perfect freedom in tools hacking on OSS by myself; I don't look for that in a job.


> Nobody who could chew through research level algorithm problems all day would willingly write Java 8 CRUD apps for Windows Server 2000

Why do you assume it's impossible to have a fulfilling career writing Java 8 CRUD apps for Windows Server 2000?


People have a strange idea of what "best and brightest" work entails.

I work with Java 8 on a greenfield high-frequency transaction platform for a very large company. It is extremely satisfying to build the "world's largest" of something, and no amount of shiny features in a cute new language would deter me from this work.


Good point about communities, but I don't think it's good to say "the best and brightest in tech" in this context. HN exists to gratify curiosity, and people are more curious about things they haven't seen before, or that are currently captivating their imagination. This emotion is not a reflection of the world around us—it's aspirational. But there's a risk if we start to feel inferior because we don't get to do that at our job or whatever, which is partly what the article is about.

Incidentally, it's the same curiosity that has the current article near the top of HN right now. It's not a point that has been articulated so often, or so recently, or so well, so it's fresh. Even if we already know it, it's a fresh reminder. (And also, of course, there's the catnip of the meta dimension.)


Are we just going to ignore the idea of paying $30k for a Civic? I hope no one is doing that.


If the poster was from Canada, $30K is actually about right for a 2019 Touring trim level (it would be even more after taxes):

https://carcostcanada.com/Canada/Prices/2019-Honda-Civic_Sed...


You could pull it off by buying a Civic Type-R, which I think fits the theme.


The exact price is beside the point, which is that most people work with Java, PHP, JS, etc. at their workplaces yet indulge in the newest concepts/technologies on HN.


I have like $2k into a '90s Ranger. By the time I'm done doing "enthusiast things" to it, I'll probably have well over $10k into it.


Well the sticker price isn't 30k, but that's probably what you end up paying by the end of the 5 year loan.


> The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about, regardless of what the mass is doing.

That doesn't have to mean new and shiny.

By way of analogy: the F-117 stealth fighter--the world's first operational stealth aircraft--was developed by the best and brightest (Lockheed Martin Skunk Works) and solved the biggest problem (visibility to radar). Aside from having a weird polyhedral shape and innovative radar-absorbing material, nothing about the plane was the "best", "latest", or "cutting-edge" at all. Much of it consisted of parts from other aircraft hacked together. If you actually do want to have the best cutting-edge technology all in one plane, you spend 25 years designing the damn thing--that's the F-35.


The initial version of F-35 was lifted wholesale from Yakovlev Yak-141. Basically during the wild late 80's early 90's in Russia Lockheed Martin entered into an agreement with the Yakovlev bureau to do who knows what with Yak-141 (which sounds impossible and wild in itself nowadays). The agreement lasted just long enough for them to rip everything off and was then dissolved.

http://aviationintel.com/yak-141-freestyle-the-f-35b-was-bor...


That's a massive exaggeration at best. The F-35B variant borrows the lift fan design from the Yak-141. That's a single feature of a single variant of an aircraft that has a half-dozen other completely unrelated features. The Yak-141 didn't have helmet-mounted displays, it wasn't stealthy, it didn't have the new combat information management systems, it wasn't built out of modern composite materials, and so forth.


Would be quite a feat to have all those things in the late 80s, don't you think? :-) It's not an "exaggeration." The planes even look similar, and Lockheed got $1.5T in government money for the few million they spent bribing government officials in Moscow.


> Would be quite a feat to have all those things in late 80s, don't you think? :-)

The F-35 has them. The Yak-141 didn't. Therefore, it absolutely is an exaggeration to say the F-35 was "lifted wholesale" from the Yak-141. Two of the three F-35 variants don't even have the lift fan!


>The best & brightest in tech are working with the best tools on the biggest problems, and that's what gets talked about

We should be so lucky. Nobody knows who's the best, what the best tools are, or what the biggest problems are. We only know the ones that get written about.

Writers cover what's flashy, who's the best self-promoter, and what makes the most money.


Exactly right.

Sites like HN and Reddit are aggregators in the literal sense. They take a large volume of data points and reduce them to a select few that get shown to most users. That aggregation function is "most interesting", not "most representative". Those are direct opposites of each other: interesting is almost by definition unusual.

I always find the subreddits that try to explicitly not target polar extremes the most fascinating from a sociology perspective. For example, this photo of a sink faucet is the most mildly interesting submission of the past year:

https://www.reddit.com/r/mildlyinteresting/comments/9ykoe4/t...

Does that mean all other submissions were less interesting, or less mild?


There is most likely some kind of division here: 5% are working with the best tools on the biggest problems, 60% are working with old, suboptimal tools on CRUD that makes money, 15% are working with the latest and coolest at a well-funded startup, and the rest are just chasing hypes for peanuts, trying to fill their resumes. Here "best & brightest" depends on your definition; if you are in it for the money and don't live in the Valley, I think the brightest choice might be Java or C#.


$30,000 for a Civic? You better be getting gold-plated cup holders at that price.


1x $1900 Civic

1x $100 eBay turbocharger

1x $1000 misc expenses

27x $1000 JDM long-block

There are probably less fun ways to spend $30k on a Civic, too


Like in Singapore:

Example A - Honda Civic 1.6 i-VTEC

    Registration Fees - S$220
    OMV - S$19,370
    Excise Duty (20 percent of OMV) - S$3,874
    COE (as of 23rd March 2018) - S$38,000
    ARF - S$19,370
    VES - $0
Basic Cost of Car - S$82,460 (~ 60k USD)

https://www.sgcarmart.com/news/writeup.php?AID=171


This is implying the best & brightest drivers are driving tricked out super cars.


Not a super-driver but Enzo Ferrari used to daily drive a Fiat 128 [1] . Nowadays the former chairman of the VW Group and a member of the family that controls Porsche has just paid $18 million for a custom-made Bugatti.

[1] https://youtu.be/lQSac0Jpz0Y


Right. It's not what all the best and brightest are doing. It's what people like to read about, and what we want to be doing, even if most of what we do isn't that.


I think mundane programmers could be some of the brightest, but for whatever reason, they don't go into the limelight to be noticed.


Supercars or soup-of-the-day?


Many of the ahead-of-the-curve, aspirational tech-stack posts featured on HN are the blogging equivalent of Instagram-polished shots. On a personal blog they're equal parts self-expression and self-promotion; on a company blog they're equal parts deep dive and recruitment teaser. That's okay. Some readers will feel intrigue, some confusion, some envy. They'll wonder why they can't use those new tools at their work.

It's nice to recognize that there exist developers who seem to stand apart from the vocal ones who make these posts, but they're not a single class. Some brag just as much, just in different circles. Some don't brag, but do lots of good work. Some put out work that's not so good, and they may or may not seek to represent their accomplishments regardless. The only factor they clearly share is their underrepresentation: their lack of popularity when it comes to talking about their work on social media.

As tempting as it may be, it's a big reach to conclude from their underrepresentation that their tech stacks are conservative, or even dated; nor is it a given that their work is solid and that they're hard at work solving business problems under numerous constraints and office politics. Efforts to define this population by their rejection of shiny hype just perpetuate a kind of fictitious class divide between "fancy" developers and "plain" developers, when the real class divide is between management that is desperate for results and IT workers who are desperate for empowered decision-making and resources. Businesses are under tremendous pressure, and silver bullets read about elsewhere carry great appeal. Transplanted organically by enthusiastic developers or by order of management, such solutions rarely work, because changing just a few inputs of a complex process won't deterministically give a better result. And mediocre results of a messy process make for a different genre of writing.

Maybe there's a class divide after all: between developers who are privileged enough to cook up all sorts of clever, custom solutions to challenging problems and can write about it, and those who spend their whole workday surviving the onslaught of nonsensical demands and deliver something mostly working in the end.


I have seen Excel files from end users grow into whole applications. The users wanted more functionality and couldn't get through all the management levels to reach someone in corporate IT, so they just started doing it themselves with the tools they knew, evolving from Excel to macros to VBA to VBA with a SQL database (because the company has a standard process to request a new SQL database, and the data was too big to live in the Excel file directly). And then the company has this whole mess of an "application" that is already business critical, because a lot of other users rely on it as the fastest way to get their job done, and the person who developed it is the only one anyone knows who can handle and maintain it.

At some point the original person isn't available anymore, and the whole setup runs into trouble because of upgrades to its foundation (e.g. Excel) or just performance problems. There are even consulting companies that specialize in optimizing such applications or migrating them to new Excel versions.

Corporate IT has to reach users early, to help them solve their current problems or become more efficient with IT. If nobody helps, they will develop their own solutions with the knowledge and tools they have.


Agreed, 'shadow IT' is very real in the enterprise. And I don't think it's the users who are to blame - corporate IT departments just make everything so bureaucratic, expensive and difficult. Meeting after meeting, and all the while your department's budget is getting hammered for the hours, long before any development even starts.


I worked for a few years in enterprise app development at a Fortune 500, and our bread and butter (probably 75% of new code written) was making replacements for people's homegrown "apps" built on top of Excel or Access that had grown so large that they needed a real SQL back end.

I say this not to disparage those apps (though they were often horrifying to us as devs) but to point out that huge portions of your average large business probably run on stuff like that.


My wife used to work at a credit card company. Someone in her group had taken the Microsoft Access 'Northwind' sample database, changed the labels, and reworked the processes to more or less fit the SQL/tables that existed. No understanding, beyond behavior, of what was underneath at all. There was much 'you've got to be kidding' when corporate mandated that all database applications be migrated to Oracle.


If it reaches that point it becomes an enterprise project with a budget and resources/time. If you try to kill every Excel sheet in the wild, you take on those projects without the resources to support them.

An earlier commenter said Enterprise values process over outcome. That's true at the macro level, but at the micro level bosses need to make decisions, or your new work will receive no credit from above and will be used against you when it fails.


I think you misunderstood something here, corporate IT's main purpose is to prevent people from working efficiently.

https://dilbert.com/search_results?terms=information+service...


The funny thing is that from a purely technological point of view, Java (even the 5-year-old Java 8, and certainly recent versions) is far ahead of most other stuff hyped on HN (as well as less hyped stuff). Virtually no other platform comes close to that combination of state-of-the-art optimizing compilers, state-of-the-art GCs, and low-overhead in-production profiling/monitoring/management. And much of the cutting-edge development and technological breakthroughs on these matters continues to take place in Java (disclosure: I work on OpenJDK full-time). Just in the past few years we've seen the release of open-source GCs with less than 2ms worst-case pause times on terabytes of heap (ZGC and Shenandoah), a revolution in optimizing compilers (Graal/Truffle), and an upcoming release adds streaming access to low-overhead deep profiling (JFR). So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.
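For the curious, the features mentioned are a couple of flags away (flag names as of JDK 11/12; `MyApp` is a placeholder for your main class):

```shell
# ZGC and Shenandoah were still experimental at the time, hence the unlock flag
java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -Xmx16g MyApp
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx16g MyApp

# Low-overhead in-production profiling with Flight Recorder
java -XX:StartFlightRecording=duration=60s,filename=profile.jfr MyApp
```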


This. Technically there is nothing I don't like about the JVM right now; everything that seemed impossible 15 years ago is now solved. AOT used to be a bag of hurt with GCJ (I know I could have used Excelsior, though I'm not sure it was free at the time), but now even that will be a supported option via Graal.

Java the language still isn't pretty, but it has been much improved.

OpenJDK is GPL, and apart from the trademark (?) issues with Java (I didn't read much into the JakartaEE problem), everything should be fine. I am just a little uneasy with Oracle lurking around; I just don't know what they are going to do next.


> Technically there is nothing I don't like about JVM right now, everything that seems impossible 15 years ago is now solved.

Value types are a major missing piece in the JVM stack right now. It's at least on the roadmap, but it keeps getting pushed back and back and back. I'd also argue runtime generics is another one, and perhaps more depressingly one that is unlikely to ever get fixed.

.NET has both of them and also has the same core strengths JVM does, so given the choice I'd go with .NET over JVM 100% of the time as a result. JVM's GC & JIT seem to be on a never ending improvement cycle, but the actual language & core libraries are incredibly slow to react to anything.


> .NET has both of them

I'll give you value types, but reified generics in .NET were a mistake. It really makes interop and code sharing among languages hard, in exchange for a rather slight added convenience. This means that if you're a language implementor and you're targeting .NET, you'll get much less from the platform than you would if you target Java, which makes .NET not a very appealing common language runtime. And not only is Java a good platform for Kotlin and Clojure, but thanks to Truffle/Graal it's becoming a very attractive, and highly competitive, platform for Ruby, Python and JavaScript. All of that would have been much more difficult with reified generics.

Also, I don't think value types in Java are being "pushed back." The team is investing a lot of work into them as it's a huge project, but AFAIK no timeline has been announced.

> and also has the same core strengths JVM does

I don't think so. Its compilers are not as good, its GCs are not nearly as good, and its production profiling is certainly not as good (have you seen what JFR can do?); and its tooling and ecosystem are not even in the same ballpark.


I see it as a double-edged sword.

On the one hand, reified generics means that it's .NET's object model or the highway.

On the other hand, .NET maintains a much higher standard of inter-language interoperability. When I was on .NET, I didn't have to worry much about the folks working in F# accidentally rendering their module unusable from C#. Now that I'm on the JVM, I've accepted that it's just a given that the dependency arrows between the Java and Scala modules should only ever point in one direction.


It's not .NET vs. Java so much as whether the development of the two languages you mention is coordinated. For Kotlin and Clojure, interop works both ways; Scala doesn't care much about that, so Scala developers write special facades as Java-language APIs. There are lots of different languages on the JVM, and they care about different things. The Java team itself develops only one language, although at one time there was another (JavaFX Script). Some language developers (Kotlin, JRuby) work closely with the Java team, and others (Scala) hardly ever do.


Dumb question. Why do reified generics make interop challenging? Or is it reified generics plus value types that don't inherit System.Object? Couldn't the language implementations basically pass around ICollection&lt;Object&gt; in .NET, somewhat similar to how they do in Java?


> Why do reified generics make interop challenging?

Suppose you have types A and B, such that A <: B. What is, then, the relationship between List<A> and List<B>? This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.

> Couldn't the language implementations basically pass around ICollection<Object> in .NET somewhat similar to how they do in Java?

They could, but then this adds significant runtime overhead at the interop layer. For example, a popular standard API may take a parameter of type List<int>. How do you then call it from, say, JavaScript, without an O(n) operation (and without changing JS)?


> This question is called variance, and different languages have very different answers to this, but once generics are reified, the chosen variance strategy is baked into the runtime.

Which, realistically, is probably the only principled way to do things if you want to be doing much with variant generics in a cross-language way.

The Java way, "I pick my variance strategy, you pick yours, and we'll both pass everything around as a List<Object> at runtime and just hope that our varying decisions about what their actual contents are allowed to be never cause us any nasty surprises for each other at run time," is not type-safe and can lead to nasty surprises at run time. It's easier, sure, but easier is not necessarily better.
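A minimal, self-contained sketch of that surprise in plain Java (no libraries involved; the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Erased generics: List<String> and List<Integer> are the same class at
    // runtime, which is what makes cross-language interop on the JVM cheap.
    public static boolean sameRuntimeClass() {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();
        return strings.getClass() == ints.getClass();
    }

    // The flip side: nothing stops an Integer from ending up in a
    // "List<String>" via a raw reference. The nasty surprise only appears
    // later, when some caller finally does `String s = strings.get(0);`
    // and gets a ClassCastException.
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static boolean smuggle() {
        List<String> strings = new ArrayList<>();
        List raw = strings;          // erasure allows the raw alias
        raw.add(42);                 // heap pollution, no error yet
        return raw.get(0) instanceof Integer;
    }
}
```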


Except, realistically, of all the polyglot runtimes, the ones that have good interop erase and the one that doesn't reifies.


The problem is that your GenericClass<T1> and GenericClass<T2> are really more like GenericClass_T1 and GenericClass_T2 with their own distinct type definitions and interfaces. From the perspective of a different runtime/language trying to interop with these types, you have to somehow understand and work with this mapping game. It's much easier from inside the .NET runtime than outside.

The general solution is, like you suggested, to avoid using reified generics in the module interface where the interop happens.


The solution I remember from the last time I dealt with Python on .NET (which was admittedly a long time ago) was the opposite - you did use the reified generics, and there were facilities to create an instance of GenericClass<TWhatever> from within Python. There's a whole dynamic language runtime that is purpose-built for smoothing over a lot of that stuff.

What wouldn't work would be to, e.g., create a Python-native list and try to pass it into a function that expects a .NET IList<T>. Which doesn't feel that odd to me - they may have the same name, but otherwise they're very different types that have very different interfaces.

That said, the Iron languages never took off. My personal story there is that all the new dynamic features that C# got with the release of the DLR pretty much killed my desire to interact with C# from a dynamic language. The release that gave me Python on .NET also turned C# itself into an acceptable enough Python for my needs.


.NET Value types do inherit from System.Object.

See: https://docs.microsoft.com/en-us/dotnet/api/system.valuetype...


Is a value type different from a value class?

https://github.com/google/auto/blob/master/value/userguide/i...


Yes, those value classes are still heap/GC allocated objects.

Value types are a generalisation of `int`, `long`, `float` (etc.), where values are stored inline, not allocated on the heap. For instance, a `Pair<long, double>` that isn't a pointer, but is instead exactly the size of a long plus a double, the same as writing `long first; double second;` inline.
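For context, a sketch of the status quo that value types aim to fix; in today's Java, anything that flows through generics must be a boxed heap object with identity (`BoxingDemo` is an illustrative name, not any Valhalla API):

```java
public class BoxingDemo {
    // Two boxes holding the same number are still two distinct heap objects.
    @SuppressWarnings("deprecation")
    public static boolean boxesHaveIdentity() {
        Long a = new Long(1000L);
        Long b = new Long(1000L);
        return a != b && a.equals(b); // distinct identities, equal values
    }

    // Primitives are pure values: no object header, no pointer, no identity.
    public static boolean primitivesAreValues() {
        long a = 1000L;
        long b = 1000L;
        return a == b;
    }
}
```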


.NET is tied to Microsoft, so I'd avoid it 100% of the time.

Yes, yes. I know that Microsoft theoretically open sourced and ported it. However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Otherwise I agree that it is a better Java.


They didn't "theoretically" open source it - they actually open sourced it.

I get why people used to shit on Microsoft, but Microsoft has demonstrated over a number of years that its changed under Satya.

> However the way that this always works is that there is a base that can be written in, but anything non-trivial will have pulled in something that, surprise surprise, is Windows only.

Outside of desktop GUIs, this is simply not true. I'm writing complex, cross-platform systems that work just fine on Windows and Linux (and would on MacOS if I chose to target it).

Hell, even a lot of the tooling is now cross-platform: Visual Studio Code, Azure Data Studio, even Visual Studio and Xamarin run on MacOS!


No, Visual Studio does not run on macOS. Visual Studio for macOS is a fork of MonoDevelop.


So because it's not the same code base, even though it's produced by the same company for the same purpose, it's not "Visual Studio"? Was Photoshop not Photoshop on Windows when the assembly-language optimizations differed between PPC and x86?


I don't think this is true anymore.

The .NET 5 announcement was very clear that .NET Core is the future, and it's been a while since you've needed anything Windows-only to build a non-trivial .NET Core application.


Please double check my wording.

Yes, you can write a non-trivial .NET application on Linux. But if you take a non-trivial .NET application that runs on Windows, the odds are low that it can easily be ported to Linux. And there are almost no non-trivial .NET applications that weren't originally written for Windows.

The result is that if you work with .NET, you're going to be pushed towards Windows.


In the wording of the announcement (taking them in good faith), that applies to applications using .NET Framework. .NET Core should be 100% portable to Linux/Mac/wasm.

.NET 5 should supersede both Core and Framework IIRC


I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

In the years since, I've encountered many stories that attempted to make use of that portability. All failed.

I've seen the promise of portability with other software stacks, and know how hard it is. I also know that taking software that was not written to be portable, and then making it portable, is massively harder than writing it to be portable from scratch.

So, based on both the history of .NET and general knowledge, I won't believe that .NET will actually wind up being portable until I hear stories in the wild of people porting non-trivial projects to Linux with success. And I will discount all announcements that suggest the contrary.


> I have been hearing announcements about how Microsoft was working to make .NET code portable ever since Mono was first started in 2001.

Microsoft didn’t have anything to do with Mono until 2016.

If you start from scratch with a greenfield .Net Core project there aren’t really any issues getting it to work cross platform.


You are no more "pushed toward Windows" with new .NET Core code than you are with any other cross-platform language.

While I’ll agree that anything that uses any of the advanced features of Entity Framework and MVC is not trivially ported.


When you're trying to write new code, you run into a problem and look for a library that solves it. But all of the libraries that you find are Windows First, and it is not until after you're committed to them that you sometimes discover how they are Windows Only.

So yes, even in a new project, there will be a pull back to Windows. Because virtually nothing is truly written from scratch.


It’s no more of a problem with .Net Core than it is with Python modules that have native dependencies, or Node modules with native dependencies.

You’re not going to mistakenly add a non .Net Core Nuget package to a .Net Core project. It won’t even compile.

Of course you can find Windows only nuget packages for Windows only functionality like TopShelf - a package to make creating a Windows Service easy. But even then, I’ve taken the same solution and deployed it to both a Windows server and an AWS Linux based lambda just by configuring the lambda to use a different entry point (lambda handler) than the Windows server.

You can even cross compile a standalone Windows application on Linux and vice versa.

I use a Linux Docker container to build both packages via AWS CodeBuild.

Would you also criticize Python for not being cross platform because there are some Windows only modules?

https://sourceforge.net/projects/pywin32/


Looking at the announcement, it seems they're basically folding all of the Windows-specific stuff back into .NET Core. Isn't that just going back to a compatibility minefield?


> the actual language & core libraries are incredibly slow to react to anything.

async/await being the obvious example. Still no sign of it in the Java language, nor any expectation of it, unless I missed something.


https://wiki.openjdk.java.net/display/loom/ which is superior to async/await, if I may say so myself (I'm the project lead)


That's very nice, but saying it's strictly superior to async/await is a stretch. Fibers/stackful coroutines are a different approach with its own tradeoffs.

On the plus side, fibers offer almost painless integration of synchronous code, while async/await suffer from the "colored functions" problem[1].

The price you pay for that is the higher overhead of having to allocate stacks. If you don't support dynamic stacks that can be resized, you basically don't have much better overhead than native threads. There are two solutions I'm aware of, both of which Go has used at different times: segmented stacks, and re-aligning and copying the stack on resize. Both carry some memory overhead (unused stacks) and computational overhead (stack resizing).

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...
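The colored-functions split is easy to see in today's Java with `CompletableFuture` (a toy fetch that returns 42 stands in for real I/O; the names are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class Colors {
    // Synchronous ("blue") version: callable from anywhere, blocks its thread.
    public static int fetchSync() { return 42; }

    public static int useSync() { return fetchSync() + 1; }

    // Asynchronous ("red") version: the CompletableFuture leaks into the
    // caller's signature, so every transitive caller must change color too.
    public static CompletableFuture<Integer> fetchAsync() {
        return CompletableFuture.completedFuture(42);
    }

    public static CompletableFuture<Integer> useAsync() {
        return fetchAsync().thenApply(n -> n + 1);
    }
}
```

With fibers, only the `useSync` shape is needed; blocking becomes cheap, so the runtime rather than the type system absorbs the scheduling.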


> Fibers/stackful coroutines are a different approach with its own tradeoffs.

The only tradeoffs involved, as far as I'm aware, are effort of implementation. There are no runtime tradeoffs.

> The price you pay for that, is the higher overhead of having to allocate stacks.

You have to allocate memory to store the state of the continuation either way. Some languages can choose not to call the memory required for stackless continuations "stacks" but it's the same amount of memory.

> Both carry some memory overhead (unused stacks) and computational overhead (stack resizing). Their "advantage" is that, because they're inconvenient, people try to keep those stacks shallow.

Stackless continuations have the same issue. They use what amounts to segmented stacks. "Stackless" means that they're organized as separate frames.


Great article, thanks. Some perhaps-silly questions:

1. Is it possible to inspect the state of a 'parking' operation, the way you can in .Net with Task#Status?

2. So fibers run in 'carrier threads'. Is there a pool of carrier threads, or can any thread act as a carrier? I'm thinking of .NET, where this is configurable (ignoring that .NET 'contexts' aren't exactly threads) by means of Task#ContinueWith() and the Scheduler class. I take it from the following snippets that fibers can only run on the thread where they were created:

> starting or continuing a continuation mounts it and its stack on the current thread – conceptually concatenating the continuation's stack to the thread's – while yielding a continuation unmounts or dismounts it.

And also:

> Parking (blocking) a fiber results in yielding its continuation, and unparking it results in the continuation being resubmitted to the scheduler.

On a non-technical note, how do OpenJDK projects feed back into the Java spec and Oracle Java?


1. Yes.

2. Configurable. A fiber is assigned to a scheduler, which is, at least currently, an Executor (so you can implement your own).

> I take it from the following snippets that fibers can only run on the thread where they were created

No. A fiber has no special relationship to the thread (or fiber) that created it, although now there can be a supervision hierarchy thanks to structured concurrency: https://wiki.openjdk.java.net/display/loom/Structured+Concur...

> OpenJDK projects feed back into the Java spec and Oracle Java?

OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license). Projects are developed in OpenJDK (for the most part by Oracle employees because Oracle funds ~95% of OpenJDK's development, but there are some non-Oracle-led projects from time to time[1]) and are then approved by the JCP as an umbrella "Platform JSR" for a specific release (e.g. this is the one for the current version: https://openjdk.java.net/projects/jdk/12/spec/)

[1]: E.g. the Shenandoah GC (https://wiki.openjdk.java.net/display/shenandoah) is led by Red Hat, TSAN (https://wiki.openjdk.java.net/display/tsan) is led by Google, and the s390 port (http://openjdk.java.net/projects/s390x-port/) is led by SAP.


Very neat. So it preserves the virtues of .Net's task-based concurrency, but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Does it impact things from the perspective of the JNI programmer?

> OpenJDK is the name of the Java implementation developed by Oracle (Oracle JDK is a build of OpenJDK under a commercial license).

Ah, of course. I'd missed that.

> there are non-Oracle-led projects from time to time[1], and are then approved by the JCP as an umbrella "Platform JSR" for a specific release

How do they handle copyright?


> but is even less intrusive regarding the necessary code-changes to existing synchronous code.

Yes. All existing blocking IO code will become automatically fiber-blocking rather than kernel-thread-blocking, except where there are OS issues (file IO; Go has the same problem). Fibers and threads may end up using the same API, as they're just two implementations of the same abstraction.

> Does it impact things from the perspective of the JNI programmer?

Fibers can freely call native code, either with JNI or with the upcoming Project Panama, which is set to replace it, but a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

> How do they handle copyright?

Both the contributors and Oracle own the copyright (i.e. both can do whatever they want with the code). This is common in large, company-run open source projects.


> a fiber that tries to block inside a native call, i.e., when there is a native frame on the fiber's stack, will be "pinned" and block the underlying kernel thread.

Doesn't this boil down to the native function blocking the thread?

How about the C API/ABI of JNI? Will there be additions there for better supporting concurrency (i.e. not simply blocking)? Or can that be handled today, with something akin to callbacks?


If the native routine blocks the kernel thread, it blocks, and if not, it doesn't. While something could hypothetically be done about blocking native routines, we don't see it as an important use case. Calling blocking native code from Java is quite uncommon. We've so far identified only one common case, DNS lookup, and will address it specifically.


Nice. So going 100% async is a real possibility?


I'm not sure what that means.


> Fibers are user-mode lightweight threads that allow synchronous (blocking) code to be efficiently scheduled, so that it performs as well as asynchronous code

Sounds a lot like Erlang BEAM processes.


Yep; or Go's goroutines. Except that the fibers are implemented and scheduled in the Java libraries; the VM only provides continuations.


How does it compare to Windows fibers?


I find this really interesting. Care to provide some comparisons?


I think that the approach done in https://wiki.openjdk.java.net/display/loom/Main is better than the async/await infrastructure.


Well, other than the fact that it only supports Linux and MacOS.


Is that true? The build instructions are for a Posix-like evironment, but I haven't actually looked to see if the actual implementation supports Windows yet.

As someone who runs Windows and Linux about equally, in differing proportions over time, I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet.


It's a prototype. It will support Windows when it's released, and probably sooner. We're literally changing it every day, and it's hard and not very productive to make these daily changes on multiple platforms, especially as none of the current developers use Windows (this is changing soon, though).


> I do find it disappointing that some (b)leading edge JVM and Java features don't support Windows yet

Seems understandable though. Java is primarily a tool for heavyweight Unix servers, after all. (This is of course an empirical claim, and I have no source, but I'd be surprised if I turn out to be mistaken.)

Makes good sense to go with the strategy of building an industrial-strength technology before investing the time to handle Windows.


I like Java. Java 8 streams are particularly interesting. It's fast, too. I took a Hadoop class (which taught Java 8 and, ironically, discouraged Hadoop use except in exceptional cases).

The hardest part that everyone struggled with was getting a Java environment up and running. Gradle, Maven, Ant... you almost need an IDE. It's almost like they don't want people using it. I stopped when I didn't have to.

Plus the acronyms. Ones I didn't know from your post:

AOT, GCJ, Excelsior, Graal, OpenJDK, JakartaEE


It’s funny, I feel the same way about web development right now.


Except web development is almost all bootstrapped from a simple npm library these days. You generally npm install and you've got all your dependencies whether it's Angular, Vue, React or pretty much any modern web frameworks. The time for a new developer to get the tooling out of the way and start looking at code is dramatically shorter for web apps than Java in my experience.


It seems like you just don't know how to properly use Maven. In my experience it is always as simple as mvn {compile, test, package, install, deploy}.
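For anyone put off by the setup, a minimal pom.xml sketch (the coordinates here are placeholders) is usually all Maven needs for those commands to work:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Placeholder coordinates for the example -->
  <groupId>com.example</groupId>
  <artifactId>demo</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <!-- Target Java 8, per the article -->
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>
</project>
```

With just this and the standard src/main/java layout, the default lifecycle phases work without further configuration.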


>Plus the acronyms.

GCJ and Excelsior are really niche; even people familiar with the Java ecosystem might not know them, as they were mostly used for AOT (Ahead of Time) compiling Java into a single redistributable binary in the early 2000s. I was writing an RSS web server application then and was looking into how to do client-side desktop apps.... The UI toolkit situation was a bag of hurt for Java, and I gather that is still the case today.

I think JakartaEE is really just a rebranded JavaEE.

I know Graal only because I follow TruffleRuby, a Ruby implementation built on Graal/Truffle and written in Java. And it has proved that optimising Ruby, once thought impossible due to its dynamic nature, is in fact possible.


How is this any different than python or javascript? NPM, Babel, Webpack, TSC, PIP, VENV, PyPy, CPython, etc. They all have their learning curves and if you weren't in the ecosystem you wouldn't know what they meant.


> I am just a little uneasy with Oracle lurking around, I just don't know what they are going to do next.

What do you mean by "lurking"? Oracle is the company developing OpenJDK, and it will continue to do so. All our projects are done in the open, with lots of communication.


By "lurking" people mean that the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

You may not remember incidents like the one that led to Jenkins being forked from Hudson, but Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.


> the executives who care nothing about open source are firmly in control, and some day may try to assert their control in ways that nobody else likes.

I have no idea what Oracle may do tomorrow, but Oracle has been in control of Java for a decade, and what it has actually done is 1. significantly increase the investment in the platform and 2. open source the entire JDK. So I don't know about the next ten years, but the past ten years have been really good for Java (well, at least on the server).

> Oracle is run by people driven by what they think they can get away with, and not by what is good for the projects that they have power over.

I don't share your romantic views of multinational corporations. Corporations are not our friends, and while they're made of people, they're not themselves people, despite what some courts may have ruled. But like people, different corporations have different styles, and it would be extremely hard to call any of them "good." I have certainly never heard of one that is driven by caring (although what you do when you don't care may differ; some may be aggressive with licensing, some are in the business of mass surveillance, some help subvert democracy, others drive entire industries out of business through centralization, and others still drive kids to conspiracy theories). When you look at what Oracle has actually done so far for Java, I think it has been a better and more trustworthy steward than Microsoft and Google have been for their own projects (Java's technical leadership at Oracle is made up of the same people who led it at Sun). And people who bet on Java ten years ago are happier now than those who bet on alternatives (well, at least on the server). This despite some decisions that made some people unhappy. You can like the good stuff and be disappointed about the bad stuff without some emotional attraction to, or rejection of, these conglomerate monsters.


Your opinion is your opinion.

My opinion of the company and its products is more consistently negative than of any other large company. And while you think that the non-Java world is suffering, I think you have some tunnel vision.

Let's just say that I am personally happy with my decision to stay away from Java. And the brief periods where I had to work with Java were misery. Languages have personalities as well as companies, and there is a reason that the startup world stays away from Java in droves.


> and there is a reason that the startup world stays away from Java in droves.

You may have your own case of tunnel vision. I mean, sure, there are "droves" of startups that stay away from Java (many only to turn to it later), but there are also "droves" that adopt it from the get-go.


Why don't we look for some concrete information?

According to the statistics reported by https://www.codingvc.com/which-technologies-do-startups-use-..., Java looks like the fourth most commonly used language in the startup world, and its usage is not particularly well correlated with success.

Both Ruby and Python are more popular than Java, AND are better correlated with how good the company is. Your odds of being in a successful startup are improved if you are in those languages INSTEAD OF Java.

What about from the individual programmer level? Triplebyte did an article about how programming language and hiring statistics correlate. My impression is that their programmers are mostly being hired into relatively good startups, so it is a pretty good view of the startup world. That article is at https://triplebyte.com/blog/technical-interview-performance-....

Long story short, Java was the #2 language that programmers chose, behind Python. Not so bad. But choosing Java REDUCED your odds of actually getting to a job interview by 23%. And for those who got to an interview, it reduced the odds of actually getting hired by 31%. By contrast, Python IMPROVED those same odds by 11% and 18% respectively.

Apparently the startup world doesn't like Java developers either. You'd be far better off with Python.

Now I'm sure that you can trot out every successful Java startup out there. And there will be quite a few. But based on available data, not opinions, I did NOT express tunnel vision when I said that the startup world stays away from Java in droves.


If you truly believe any of the conclusions you've drawn from the numbers in the links you posted, then your favorite programming language REDUCES statistics skills.


Startups don't use Java because Java is for large-scale, stable, long-lived enterprises, not for prototyping simple small web apps that might be thrown away in a couple of years.


You often hear this, but what does it actually mean? Why is Java for one but not the other?

Here is my understanding.

Java was designed to limit the damage that any developer could accidentally do, rather than maximize the potential for productivity. Which is an appropriate tradeoff for a large team.

It is hard to get good statistics on this, but the figures that I've seen in books like Software Estimation suggest that the productivity difference is around a factor of two.

This matters because it turns out that teams of more than 8 people have to be carefully structured based on the requirements of communication overhead. (A point usually attributed to The Mythical Man-Month.) This reduces individual productivity. Smaller teams can ignore this problem. The result is that measured productivity for teams of size 5-8 is about the same as a team of 20. But the throughput of large teams grows almost linearly after that. An army does accomplish more, but far less per person.

Limiting damage matters more for large teams. Which are more likely to be affordable for large enterprises. However being in such an environment guarantees bureaucracy, politics, and everything negative that goes with that.

By contrast startups can't afford to have such large teams. Therefore they are better off maximizing individual productivity so that they can get the most out of a small team. And using a scripting language is one way to do that.


Today, I go back and forth between three languages at work: .NET, JavaScript, and Python. For simple prototype web apps, or more realistically REST microservice APIs to feed front-end frameworks, I really don't see any of them being slower to develop.

For larger applications with multiple developers working in the same code base, the compile-time checking of static languages is a godsend. I would at least move over to TypeScript instead of plain JavaScript.


Oracle has a long track record of Sales & Marketing tactics which we can use as a reliable benchmark to predict outcomes.

Oracle will likely pursue the most aggressive strategy they can get away with for Java.

I don't believe Sun would have sued Google, but Oracle did.

The fact that Google is switching to Kotlin is mostly a means to absolve themselves of the 'Oracle risk' - it's a big change surely, a decision not taken lightly.

The future of Java under Oracle is hard to predict but there's legit concerns Oracle will make things hard.


Kotlin uses the same VM and API, so it makes no difference in this regard. It's not a big change – it's fully interoperable with Java. You can easily take a single class in a Java application and rewrite it in Kotlin, and everything continues working just as before.

Google adopted it because, as they more or less said in the announcement, it was already being adopted by the community and it hugely improved development experience.


But you're splitting your developer base:

* There will be people better at Kotlin

* There will be people better at Java

This is a problem when you are looking at hiring new people, etc. This fragmentation is going to cause issues just because people are hedging against Oracle's future decisions.

In a perfect world Google would have bought Sun, and the current version of Java would look a lot like Kotlin.


Kotlin is a light syntax for a coding style. It's as easy for a Java dev to learn Kotlin as it is to learn Spring or Hibernate or whatever library or framework the team at your new job uses.


It's a whole other domain of things to learn, and now we're stuck with context switching all of the time.

Even if Kotlin were 'better' (and I don't think it is), it'd have to be quite a jump better.

It's a little nicer for getting ideas down quickly, but beyond that to me it's just 'different' and now a whole other bag of things to support.

If I have to choose between Kotlin+Java or just Java, I'll take just Java.

Going back to Java from Kotlin there's really nothing I miss.


I think Sun might have wanted to sue Google?

https://news.ycombinator.com/item?id=10951407


Yes, thanks for that, it stirred my recollection, as I actually bumped into Jonathan Schwartz by accident just in that era.

I don't think it was money, so much as the established culture at Sun (i.e. James Gosling: "Sun is not so much a company as a debating society"). A more aggressive CEO/leadership/culture would maybe have raised the money to take on Google, or taken another tack.

So while you are right - and thanks for the reference - the issue here is what Sun was about, vs. what Oracle is about.


Whatever Sun was "about", sadly, it didn't work, and damaged some excellent technologies, like Java and Solaris, that Sun couldn't invest a lot of resources into because it no longer had them. Oracle managed to save one of them and make it thrive. Sun, as a big, impactful company, was a product of the dot-com bubble. It certainly made more lasting contributions than other bubble-era companies, but its strategy couldn't survive the crash. Maybe great ideas can be born in companies like Sun but need companies like Oracle to sustain.


I've been saying for years that Pivotal Labs is a debating club that produces code as a by-product. But now I'm wondering if I read the Gosling quip and then forgot I had.


Google made billions from Java while Sun went nearly bankrupt, and Google is now among the top-10 wealthiest companies in the US. Oracle trying to get money from Google on behalf of the former Sun is a different issue from your company's risk.


I do understand Oracle is paying the bill, as well as for the team working on Graal and TruffleRuby, so I am grateful for that. Thank you.

>What do you mean by "lurking"?

Referring to the API copyright case a while ago, and the JakartaEE problem which has blown up on my Twitter feeds. I understand why Oracle is trying to charge money, and I am perfectly fine with that; I just don't like that they are using API copyright as the tool. And whatever the problem is with JakartaEE this time around, I don't have time to follow.


In a lawsuit, Oracle pushed for APIs to be copywritten, not just their implementations. They also have paid lobbyists. They're also greedy assholes. The combo of greedy assholes and the ability to rewrite the law is a dangerous one.

So, I don't use a language unless it's open with patent grants and has a non-malicious owner. At this point, Wirth's stuff is probably legally the safest.


I'm curious about the last part. Are you using Modula 2 or something?


I don't have any public projects to release right now, so I don't have to worry about getting sued. Modula 2 was nice, but you could use any of Wirth's languages with low risk. Although Lisps had lots of companies involved, Scheme is probably safe, with PreScheme aiming for low-level use. A Racket dialect with C/C++ features like the ZL language had could be extremely powerful and safe.

Rust, with Mozilla backing it, is probably not going to get you sued. Nim has potential given that their commercial interests are paid development and support so far. As in, the less greedy they are, the better. Languages controlled by community-focused foundations, such as Python, are probably pretty safe. Although it was risky, the D language now has a foundation. Although they have no foundation or legal protections, the Myrddin and Zig languages are being run by folks who look helpful rather than predatory.

So, those are a few examples you might consider if avoiding dependencies on companies likely to pull shit in the future. Venture-backed, publicly-traded, growth-at-all-costs, and/or profit-maximizing-at-all-costs companies are examples of what to avoid if you want to future-proof against evil managers turning a good dependency into a bad one.


> copywritten

copyrighted

"Copywritten" probably means nothing, but if it did, it would have something to with copy writing, the act of writing for publication (usually commercial, usually not long-form).

Added: FYI "copyrighting" is not a conscious decision, or an action you can take. Copyright emerges automatically when you create a work, what they've done is defend their copyright in court, and the courts have mixed opinions on the matter.


That is a gross mischaracterization of what Oracle did. They didn't just defend a copyright in court. They pushed to extend copyright to a mostly functional element that copyright law has not traditionally been thought to cover. It's a tremendously harmful viewpoint for interoperability.


Not just "not traditionally been thought to cover", but which existing precedent said DID NOT cover.

Does it surprise anyone that this case was decided by the Federal Circuit? The rogue court most consistently overturned by the Supreme Court, which also is responsible for most of the disastrous software patent cases out there.

The only bright light is that the Supreme Court has reopened the question. Given how often they overturn the Federal Circuit, we have real hope that we'll return to the previous precedent. Which is that since matching APIs is a functional part of how code works, and things that are functional are by law not copyrightable, APIs are not copyrightable.


Yes, copywritten isn't a word, but their point was that Oracle pushed for API's to be copywritable, which was not the case before their suit. It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.


Not to belabor the point, but... "copyrightable".


> It's an incredibly bad result with many shitty implications that are currently mostly being ignored but could lead to legal nuclear war at any time.

I mean, it has been big news, and it has already been nuclear war, with Oracle putting Google in a position to switch Android from Dalvik (and successors) to OpenJDK. I agree that it could become a pretty horrible precedent (imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place).


> imagine if Microsoft forbade Sun from implementing Excel functions in StarOffice, or for that matter, if MS were prevented from producing Excel in the first place

The things you're talking about are already protected by patents, and the copyrightability of APIs has nothing to do with them. At the very least, for something to be copyrightable it must be some specific fixed expression (a piece of text, image, video or audio). So the O v. G ruling applies only to actual (code) APIs; not to protocols (or REST "APIs") and certainly not to stuff that's already protected by patents (the distinction between the two may not always make sense to programmers, but it is what it is; for example, algorithms are patentable but not copyrightable, while programs are copyrightable but not patentable).


You should do some research on the Oracle v. Google case.


It's literally no different. Excel has APIs just like Java; it's just that the code goes in cells rather than on lines.


The licensing has gotten super onerous.


The licensing has gotten far better. First, Oracle has just open sourced the entire JDK for the first time ever, and second, instead of offering a mixed free/paid, open/proprietary JDK (with -XX:+UnlockCommercialFeatures flags), it now offers the JDK under either a completely free and open license (under the name OpenJDK) or a commercial license for Oracle support subscription customers (under the name Oracle JDK).


Thanks, that clears things up a bit.


What licensing? OpenJDK is licensed under GPLv2 plus Classpath exception, the same as ever.


Support for native code is very bad. The JNI is a pain to use and very slow, IPC is often faster. High performance numerical code often suffers because of poor vectorization. Not to mention tuning the JVM is often needed for critical tasks. Modern GC'd languages like Go have much better memory footprints and the penalty of fast numerical code is much smaller.


Panama (https://openjdk.java.net/projects/panama/) will be replacing JNI very soon, and I don't think you're correct about vectorization. While I think Go has some good features, nothing about it is more "modern" than Java except in the most literal chronological sense; Java is more modern in almost every other meaningful sense. While you may need to tune the VM for critical tasks, in Go you don't need to tune, but rather just run slower.


I thought everyone was using JNA [1] for native access these days. JNA overhead is pretty low, and it’s much simpler to use versus JNI.

[1] https://github.com/java-native-access/jna


Respectfully I'm not sure that's true.

> State-of-the-art optimizing compilers.

LLVM provides coverage for many of the new languages that are hyped these days (with the notable and truly unfortunate exception of Go). LLDB provides an excellent debugger infrastructure.

> State-of-the-art GCs.

True, there's some good work there but I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

> Low-overhead in-production profiling/monitoring/management.

Correct me if I'm wrong but a lot of that is available in dtrace is it not?

This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages. It allows compiler engineers to focus on bringing what they do better and differently to the stack. It allows them to leverage the years of work and bundles of PHDs worth of research that went into the rest. All that time Go wasted developing assemblers for platforms should have been spent focusing on Go and letting LLVM handle it.

Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

Think about it, macOS is basically just various front-ends for LLVM. Apple's implementations of OpenCL and OpenGL, the Metal Shading Language and Core Image are all wrappers for LLVM. Swift is a wrapper for LLVM. Clang makes C and C++ fancy wrappers for LLVM. Rust is a wrapper for LLVM. Almost everything your mac does is a fancy front-end for LLVM. Even Safari was (until B3 backend) a fancy front end for LLVM.


As I said in another comment, there can be more than one thing that's cutting edge, especially as the JVM and LLVM operate at completely different levels. Never mind GCs -- LLVM isn't even safe, and neither is WASM; and that's good at that low level. In fact, LLVM is employed by Falcon, the Zing JVM's JIT, and OpenJDK's HotSpot is compiled with LLVM on Mac.

> I'm excited about languages that don't need GCs at all, so that we can finally stop tricking the memory wizard into complying with our workloads and focus on determinism, power efficiency and memory efficiency.

That's fine, but it seems like giving up GCs comes with its own non-negligible costs, so much so that the two hardly compete in the same domains.

> Correct me if I'm wrong but a lot of that is available in dtrace is it not?

They have similarities (closer analogs would be https://github.com/btraceio/btrace and http://byteman.jboss.org/ than JFR), yes, but operate at different levels with somewhat different capabilities.

> This is actually why I'm so excited by LLVM. I think it's the biggest advancement in computer science in ages and ages.

LLVM is popular, extremely useful, and quite cutting edge in its domain, but there isn't much of a "computer science" advancement as there is, e.g., in Graal/Truffle.

> It allows them to leverage the years of work and bundles of PHDs worth of research that went into the rest.

It certainly does, but there's a lot that LLVM doesn't give language designers that a JVM does, like GCs and high-level language interop.

> Over time the delta between what Java has and what LLVM has will shrink which in turn narrows the gap between Java and every other LLVM based language.

I don't know if LLVM wants to operate at Java's level or vice versa. While LLVM may offer some basic GC and Java may offer "safe" LLVM with things like Sulong, I believe their main focus will be their own respective levels.


Java itself is fine. It's very popular--and that's the biggest problem. Not the language as such but the developer ecosystem around it. I am dead tired of seeing 80-character camel-case function and variable names, annotations that hide critical functionality of the code, stack traces dozens or hundreds of lines long because of the amount of indirection and poorly conceived configuration "languages" used to support dependency injection so developers can save a few characters of typing or mask dependencies and complexity, etc.

The language may be fine, but it has been appropriated into the worst sort of terrible programming paradigms one might imagine.


The terrible programming paradigms are workarounds for the terrible language.

I read an old blog post about Spring's @Required annotation versus plain constructor injection and typed out a small example to get a feel for it – a simple class with one required and one optional dependency – which resulted in

23 lines of plain Java with constructor injection, or

16 lines of Java with Spring @Autowired magic, or

7 lines of Kotlin – no magic needed.
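For concreteness, here's a rough sketch of the plain-Java version (the class and dependency names are invented for the example; the original blog post's code isn't reproduced here):

```java
// Hypothetical example: one required and one optional dependency,
// wired via plain constructor injection as described above.
interface MailService { void send(String msg); }
interface AuditLog { void record(String msg); }

class Notifier {
    private final MailService mail;  // required
    private final AuditLog audit;    // optional, may be null

    // Convenience constructor: required dependency only
    Notifier(MailService mail) {
        this(mail, null);
    }

    Notifier(MailService mail, AuditLog audit) {
        if (mail == null) {
            throw new IllegalArgumentException("mail is required");
        }
        this.mail = mail;
        this.audit = audit;
    }

    void notify(String msg) {
        mail.send(msg);
        if (audit != null) {
            audit.record(msg);
        }
    }
}
```

The boilerplate (fields, null check, overloaded constructors) is exactly what annotation magic or a terser language shrinks away.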


We aren’t talking about Java the language. We are talking about the JVM.


I used to be a Java hater (I'm much more neutral now -- I even find aspects of it pretty pleasant). To me, it was never about the technology of Java (always been pretty impressive), or even the language (a little verbose -- but so is C++ and C# and I like both of those), it was really just about the ecosystem. For whatever reason, 2000s-era Java had so many libraries that were just insanely over-engineered and had these absurdly deep object taxonomies, and trying to figure out how to do something came down to figuring out how 5 or 6 different classes interrelated. It was such a nightmare, especially since the IDEs back then weren't as smart as they are right now. I think it's become a lot better now though; modern libraries seem to have learned a lot of those lessons, having proper closures makes a huge difference, and things like IntelliJ are really great at making the verbosity much easier to deal with.


A lot of the improvements in the ecosystem came from improvements in the language. GoF (Gang of Four) patterns are in part shaped by language.


Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby). Where do you turn? Most developers pick C++, Java, or C# (not Haskell or F#, for example), and of these, Java and C# are both great for productivity and not daunting for someone just starting out, unlike C++.

There was a period after 200x where developers managed to throw out a bunch of babies along with the bathwater when they rejected Java, UML, SQL, and everything else that was used in 199x. After all, these are all literally from the last century, so they must be "uncool", amirite? /s

The preference for dynamic or duck typing in the name of productivity has always rubbed me the wrong way. To keep it brief, my defense / elevator pitch for static typing is: "Slow is smooth. Smooth is fast" :-)


> eliminated the majority of HN darlings (JS, Python, and Ruby)

Those were the darlings ten years ago (along with CoffeeScript and Clojure). The pendulum has swung back towards types and now the hip ones are Go, Kotlin, Swift, TypeScript, and (to a lesser extent) Haskell, Hack, Reason, and OCaml.

> Java, UML, SQL, and everything else that was used in 199x.

There was the whole no-SQL fad, but lately, even here, I see a lot of people re-discovering and advocating the relational model. Postgres seems to be hotter than MongoDB today.

Much of what is good about Java lives on in Kotlin and Swift. I agree it is unappreciated with today's eyes. Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

UML is garbage. A visual language designed by non-artists with no aesthetic expertise. It deserves to be forgotten.


> Few remember that it was Java that introduced much of the world to garbage collection, memory safety (!), optimizing JIT compilation, runtime reflection, dynamic loading, high quality static analysis IDEs, etc.

Java is still continuing to lead on technology. In recent years it has introduced the world to low-latency, concurrent copying garbage collectors (C4, ZGC), partial-evaluation Futamura-projection optimizing compilers (Graal/Truffle), and low-overhead continuous deep profiling in production (JFR). I think that lots of people remember that, and care about those things, because Java is not only leading technologically but also in market share.


Asking as someone who cares about aesthetics a lot more than most developers, what does UML have to do with aesthetics? UML is for modeling systems and relationships through agreed upon conventions, and sometimes generating code stubs based on those diagrams. As long as you adhere to the conventions, the aesthetics can be changed as necessary.


It's a visual language. It exists entirely to be consumed by human eyeballs. Aesthetics are the user interface for that process.


But you can customize its aesthetics.


> Great points, but it's actually much simpler than that: if you want static types, you have already eliminated the majority of HN darlings (JS, Python, and Ruby).

Well, if you ignore mypy and consider TypeScript distinct enough from JS not to count, sure, though TS is if anything more of an HN darling than bare JS. (Really, none of those languages has been an HN or developer-community darling in years, though Python, due to ML and data science, is less uncool than Ruby and JS right now. And, yeah, Ruby is left out of the "but I have static type checking available" list... till Sorbet is available later this year.)

> Most developers turn to C++, Java, or C#

I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.


>> Most developers [from context: who want static types] turn to C++, Java, or C#

> I'd be surprised if more than half (most) developers using static typing use those languages and not others outside that set.

Really? I'd be surprised if it was less than 90%. What is the competition? Go, TypeScript, Swift, Scala, Kotlin, Rust (in rough order of my gut sense for how widely used they currently are)? These are all still very small relative to C++, Java, C#.

If you argue that C has static types, then I guess you're probably right. But it isn't really true that C has static types in the contemporary sense (which I think is perhaps better referred to as "strong" type safety).


> If you argue that C has static types, then I guess you're probably right.

C (like C++) is both statically and weakly typed; strong and static typing are orthogonal axes (dynamic but strongly typed languages are common.) If you mean strong and static when you say static, you need to take C++ off your list.


No, it's (mostly) not about that.

It's about C not having any generics or similar things (like C++ templates), so you need to fall back way more often on void*.

So on that axis Go is roughly as statically typed as C, albeit more strongly typed.

On the other hand C++ is more statically typed than C, but is about as strongly typed (afaik).


Yeah, I kind of wondered about that as I was writing my comment, seems like a fair point.


According to Github: https://github.blog/2018-11-15-state-of-the-octoverse-top-pr...

As it happens, among the statically typed languages, C++, Java, and C# are the top three, with C and TypeScript filling out the top 10 lineup.

That's not saying "more than half", but if you include C on that list, I would definitely bet against your statement.


While I still like Python, Python and Ruby haven't been darlings of anything in probably the last 10 years. Go, Rust, Swift, and Kotlin are newer statically typed languages that lots of devs on here like though.


This timeline is a bit aggressive for Ruby. The core of my career using Ruby was 10 years ago, and I would still call it "darling" at that point, though the rumblings were beginning. Node.js was first released almost exactly 10 years ago, and (from my perspective) over the next few years marked the first real exodus from people who had previously been all in on Ruby. I missed out on that one because I thought node.js seemed like an immature answer to a question I wasn't asking. But over the next few years, I became increasingly disillusioned with writing big software in a language without good static analysis, and was on board with the mindshare (if not actual employment) exodus toward languages like Go and Rust. So I would say it was more like 5-7 years ago that you started seeing more criticism than love for Ruby in places like this.

And yeah, as other commenters have said, Python is back due to data science / machine learning. (Though I'm hoping there will be a wave of adoption of tools like Julia and Swift for this in the near future; good static analysis will be nice for data science for all the same reasons it is nice for other kinds of software.)


Python's relevance got renewed by the ML frameworks (Tensorflow) and notebooks (Jupyter).


This. Jupyter notebooks, Matplotlib, Numpy, Spyder...all that made Python big again as ML and AI became the new hotness recently and the killer app for Python.

Outside AI, Python is a really good scripting language for both Linux and Windows. My entire industry seems to run off of Python for process automation and analysis. It really is a lingua franca in this space.


I agree. During my Maths + CS studies I mainly used MATLAB and R, and Andrew Ng's famous Machine Learning intro was also taught in MATLAB/Octave. But using just one "proper" general-purpose language like Python instead makes way more sense, especially when building software from scratch. Need a custom ERP? Build one with Django. A web server? Go with Flask. Integrating some ML pipelines? Easy.


> So Java is not only the safe choice for serious server-side software; it's also the bleeding edge.

Sure, for core Java (OpenJDK), but what about the future of JEE licensing and development? Even non-"enterprise" apps typically make use of some JEE stuff like servlets and JDBC. Is it really the "safe" choice when, apparently, the trademark agreements have just fallen apart?

https://headcrashing.wordpress.com/2019/05/03/negotiations-f...


JDBC is not JEE - it is core Java. Servlets are the original web-container Java spec, independent of JEE. Both are great specs supported by several stable and performant implementations - rock solid tech compared to a lot of the flaky stuff you find advertised today.


How are Java programmers implementing RESTful APIs these days?


Many developers don't want to think too much and generally choose Spring Boot as a stable, highly popular and well-documented framework. There is nothing wrong with this choice, but I find it too bloated for lean container-based microservices.

Dropwizard and Micronaut are pretty good for getting a smaller and saner footprint. If you need more than REST, say you want a lean MVC-based web-framework, then Blade is also a good choice.

You can also choose not to use a framework and instead, say, use code generation from a Swagger/OpenAPI REST document and implement the remaining bits on your own. If your deployment target is AWS, you can also choose to leverage Netflix's fantastic set of libraries.

Java has tonnes of options, so you usually need to spend some time evaluating what goes into your stack.
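To make the "no framework" option concrete, here's a minimal sketch using only the JDK's bundled `com.sun.net.httpserver.HttpServer`. The class name, route, and hand-rolled JSON string are all illustrative; a real service would generate handler interfaces from its OpenAPI document and use a proper JSON library like Jackson.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A tiny framework-free REST endpoint on the JDK's built-in HTTP server.
public class NoFrameworkRest {

    // Hand-rolled JSON body for brevity; use a JSON library in real code.
    static String healthJson() {
        return "{\"status\":\"UP\"}";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = healthJson().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // serves GET /health until the process is killed
        System.out.println("Listening on :8080");
    }
}
```

This is roughly what frameworks like Dropwizard wrap for you, minus routing, validation, metrics and content negotiation.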


I'm curious to hear what specifically is too bloated in Spring Boot in your opinion for container based microservices. I've been writing microservices that run in docker in Java for a few years, then moved to Kotlin recently, all using Spring Boot, and I haven't run into anything that made me feel like they were bloated. I've also written Go and Node microservices for contrast.


Spring Boot has a large startup time, a heavy dependency trail and a lot of dynamic class-loading driven by auto-configuration. This makes it unsuitable for some areas: say you want fast-scaling microservices that respond quickly to incoming load, or you want to turn your microservice into a serverless function. One can use GraalVM to compile a microservice to a single binary with microscopic start-up time; however, native-image tends to fail more often than not with Spring Boot apps. (I haven't tried this recently, though, so my knowledge could be out of date.)


Spring Boot does what it's asked to do, which is to load everything it finds. The dynamic class work has basically nil effect on load time. The core performance constraint is that classes are loaded essentially linearly. If you ship with something enormous like Hibernate, that's thousands of files read individually from a zipfile.

Spring folks are actively involved with Graal. I see a lot of excitement internally at Pivotal. Watch this space.


I'm reminded of the crazy stunts from the 1980s where people would start some big, slow program (Emacs was huge!), dump core, and then "undump" to make a pre-initialized binary that would start as fast as the OS could read it. It actually worked, as long as it wasn't relying on open file descriptors, or signal handlers, or env vars, or …


I’m pretty sure Emacs still does this. There was a patch committed to change it to use a more portable method of dumping state, but I don’t think that change made it into the 26.2 release.


You should check out Red Hat's new framework, [Quarkus](https://quarkus.io/). It leverages Graal to create native images that are very small and optimized. For example, one of the Quarkus developers showcases the size of a native image; spoilers: it's [19MB](https://youtu.be/BcPLbhC9KAA?t=103) and takes 0.004s to start.

In [this](https://youtu.be/7G_r1iyrn2c?t=3104) session, a Red Hat developer shows how a Quarkus application scales. Compared to Node, it's both faster to respond to the first request and has a smaller memory footprint (half the size of Node).


Looks very interesting, although it's unfortunate they went with Maven over Gradle.



I can't speak for others, but I work in games and we've spun up a bunch of servers and we use Netty/Jersey for REST and sockets. It's been incredibly stable and a joy to program for compared to our previous Node servers. Granted we were using bleeding-edge Node 5 years ago which is not the Node of today.


https://www.dropwizard.io/

Quoting from the site: Dropwizard is a Java framework for developing ops-friendly, high-performance, RESTful web services.

It's a nice microframework which comes with built-in health checks and metrics, and it's also ops-friendly because it deploys as a single JAR file.


Not really related to Dropwizard but... this:

> mvn archetype:generate -DarchetypeGroupId=io.dropwizard.archetypes -DarchetypeArtifactId=java-simple -DarchetypeVersion=[REPLACE WITH A VALID DROPWIZARD VERSION]

Why is it that every time I read something about Java I get the feeling it was not meant for humans? Not as bad as C++ project setup, though, I'll give it that.


I don't use Maven archetypes too often; the rest of Maven seems pretty good to me, but maybe I'm just used to it.

I have done a lot more Java than Javascript, so I feel the same about npm error messages.


Don't get me started on js/npm stuff...


Spring Boot with WebFlux and any Rx-supporting libraries you can get your hands on


Thanks.


I use javalin.io, which was written by one of the maintainers of sparkjava (not to be confused with Apache Spark).

I'm a big fan.


Spring Boot + jOOQ for RDBMS support.


Pardon my ignorance, but what does any of that trademark stuff have to do with JDBC?


I know little about EE (and I'm certainly not speaking for anyone but myself), but I believe Java EE lost its dominance not because of any corporate decision, but because it simply started losing ground to unstandardized open source projects [1], as opposed to EE's JCP. So people who liked EE wanted to ditch the slow-moving JCP in favor of a faster process, and one question was whether the new project would be able to change specifications of namespaces that are traditionally reserved to, and associated with, the JCP. In the end it was decided that no, they will not be able to change JCP namespaces (but can choose to maintain them until the end of time, in addition to any innovation they do outside of JCP namespaces).

The decision obviously disappointed some, but I don't think it's viewed as catastrophic (although some may think so). And I don't think the "negotiations have failed" so much as that neither side was able to achieve what it believed was the best outcome for it, but an agreement has been made. You also need to realize that it wasn't a negotiation between Oracle and some grassroots project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex. But now everyone can move on.

I, for one, am curious to see how Jakarta EE's current approach of what seems to be an internet-based, democratic semi-standard would work out, and if it can be better than both the JCP as well as more common centrally-controlled open-source projects.

Anyway, this is what Eclipse's director said on the matter:

https://twitter.com/mmilinkov/status/1125213654775889921

[1]: That post you linked to reminds me of those who blame Oracle for killing Solaris. Solaris is a terrific operating system that was sadly killed by Linux long before Oracle acquired Sun. After a few years of trying, I guess Oracle decided they could no longer save it, and there was no point in continuing to throw good money after bad.


Your revisionism in your footnote is astounding, frankly. Oracle closing the source for OpenSolaris was not because of Linux - it was because of a choice made inside Oracle. Remember that development then continued for years in a closed source fashion, and is still ongoing with a skeletal staff.


> You also need to realize that it wasn't a negotiation between Oracle and some grassroot project, but among multi-billion-dollar corporations, some of whom have fought Java standards for over a decade, so the process was both legally and politically complex.

Yes, exactly. I think that is what makes it feel like not the safest choice for many companies.


Java EE had stopped being the preferred choice for many companies long before this issue, which, I guess, is at least part of the reason why Oracle gave up those projects. I think Spring is the leader, but I am really not too familiar with that entire domain.


That's right that _pure_ JEE has not been preferred, but part of my original point is that simple things like servlets are actually technically part of JEE - so if you are using Spring for web, cloud, etc then you are actually using JEE at least a little bit.


I was never super into Java development. I started working in 2014 and was introduced to Weblogic, Jenkins, huge Maven POMs and all the rest, then went into Cloud consulting. When I sit down to do anything I get so wrapped up in all the stuff that comes along with Java and feel like I have to use some huge IDE like Eclipse (I hate) or IntelliJ (I <3 U Jetbrains) to do anything "real".

If I could just write code and have a simple package manager like NPM or even Go packages and not be hindered by trying to get VScode to work I would never look back. I just waste so much time trying to understand the ecosystem. I know that isn't a great excuse (not wanting to take the time to become an expert on tooling), but people who came into the professional space in the same time frame likely also just see it as an obvious path to just avoid all the cruft.

What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere). Maybe something like Spring Boot would solve that but I have not investigated that yet.


Java-the-language is a fairly crummy thing to work with; it's Java-the-ecosystem that's genuinely good — the JVM, the tooling, etc. The quality of the available IDEs is definitely a very large component of that. Doing Java in a lightweight editor seems like a way to pay the price without reaping the benefits.


I use TextMate (!) to maintain an Android SDK that's a mix of Java, C and C++, avoiding Android Studio like the plague; to maintain an iOS SDK that's mostly Objective-C, similarly avoiding Xcode's editor; and to maintain the backend written in Python that these SDKs talk to. I've got enough going on in my head w/o having to deal with the complexity of two different IDEs.

Anyway, for me Java-the-language is fine. The tooling, at least as it comes with Android (looking at you gradle and all its Android plugins) is what makes me want to pull my hair out.


I've not done any Android development but how is IntelliJ for it? While it can be slow on big codebases and obviously never as responsive as vim in a terminal, I think the tooling is excellent.

But again, you have to spend a month or so learning the shortcuts.


IntelliJ has an excellent vim keybinding plugin which I use (having come from Linux/scripting/Vim background to Java). Intellij also makes setting shortcuts pretty easy so you can customize to your needs. Having used both Eclipse and IntelliJ, I prefer the latter.


I haven’t used IntelliJ, only Android Studio, which I know is based on IntelliJ but don’t know how much they differ. I find it to be a clunky UI, and it easily eats all the CPU on a 2015 3.1 GHz MBP doing things like self-updating, downloading newer SDK versions, re-indexing a small repo, etc. I personally find it pretty unusable. There are also parts of the UI you can’t even access (like the SDK manager and the AVD manager) unless you have a project open. It’s pretty obviously not a Mac-native tool. I assume (hope, anyway) it’s much better on Linux.


To a couple of those -

Compared to the other options, Gradle (IMO) significantly reduces the package & project management overhead. That said, it's still relatively high insofar as you are still asked to keep track of things like whether a dependency is needed at compile time, run time, or test time.

Java-the-language has a symbiotic relationship with heavyweight IDEs. The language's development is as influenced by the popularity of IDEs in its community every bit as much as the popularity of IDEs in its community is motivated by the language's characteristics. If you're looking for a good lightweight editor experience, I'd suggest looking at alternative JVM languages. Any of {Groovy, Clojure, Kotlin, Scala} will give you a better non-IDE experience while still giving you full access to the ecosystem. (That said, everyone still codes those in IntelliJ, too. It's still not gonna feel like working in Go.)

Lastly, it's totally OK to tune out the ecosystem when you don't need to be plugged into it. Yes, there are a bazillion JSON libraries out there. And you can easily spend more time and energy agonizing over their differences than you could possibly save by choosing the right one. Similarly, go ahead and ignore Spring. The whole Spring Experience™ is designed around developing applications a certain way. If you like to develop applications that way, you will know in your soul that Spring is right for you, and be attracted to it like a cat to an open can of tuna, and you would already have been a deeply devoted Java developer for years now.


Maven doesn’t have a lot of overhead if you don’t overengineer your POM and use a shared parent between multiple projects. It requires some DevOps thinking, but in the end a typical project POM will be just a list of dependencies and basic metadata.


> What I really want is to get a "lite" Java project like this up and running with all of the smart people's opinions in place that I can develop entirely in a lite editor (preferably without XML anywhere).

That's pretty much the original value proposition for Spring Boot. "I just want to get to work".

I used Spring Boot before encountering a Spring 3 project. The difference is phenomenal.

Disclosure: I work for Pivotal, which sponsors Spring.


Spring Boot still requires Gradle, which is much more complex than naive use of npm or Python's pip.

In the long run Java build tools are better, but due to the learning curve a lot of folks balk (leave) and use a different stack.

Whoever is in charge of OpenJDK should just adopt Kotlin as Java 14, even if it was Not Invented Here.


Spring Boot works with both Gradle and Maven.

https://start.spring.io/


Try a Spring Boot app using Gradle as the build tool.

Weblogic and all those app servers were a product of a different time. They solve problems that are largely solved other ways now (you could argue for some of their features). Spring Boot ships the web server in the application which is more akin to Rails, node etc.

Maven itself is also showing its age (its 1.0 release was 2004). I won't say that most of the industry has moved to Gradle, because the truth is so many workflows and projects are using Maven that it will be around for a long time. The good thing is that other build tools like Gradle, SBT etc. interop with maven the package repo just fine.

There is nothing stopping you from developing Java in vim. Syntastic and other plugins will help though.


For your web apps / services / APIs, check out Dropwizard. It is not what I would call "bleeding edge", but it provides a reasonable base and has always allowed me to easily work with the rest of the awesome JVM ecosystem. My Dropwizard projects are usually cruft/magic free, start in less than 5 seconds, consume predictable resources and are easy to reason about, troubleshoot and improve.

I used to be a big fan of Spring(-boot) with MVC, but since I tried Dropwizard I never looked back. I'd still do Spring but perhaps for non-mvc/web needs.


The answer you are looking for is called "Clojure".


It’s the first time I’ve seen someone willing to trade Nexus/Maven for npm. There isn’t even a standard way to build and package a TypeScript library, to start with. For small projects, maybe, it can work, but for enterprise needs JS/TS is not even close.


It takes a lot of power to push something that large. A lot of brain power to grok the ecosystem.

Outside of the Java bubble, the view is quite a bit different.

All that sophistication looks like a wasted effort.

Take something as simple as administering the garbage collector. Java has a big selection of GCs, and each has its own bunch of knobs for tuning. And you have to pay attention to that stuff.

After working with Go for several years, at large scale, we never once had to touch any knobs for GC. We could focus on better things. Any of the Java stuff we deploy and deal with we have this extra worry and maintenance issue.

And that's just the GC.


Funnily enough, Go just chose to solve the problem the other way around: while the JVM tackled GC with the code equivalent of lightsaber-equipped drones, Go's GC is almost embarrassingly simple in comparison (although it's pretty decent by now).

The major difference is that the whole Go language and stdlib are simply written around patterns that avoid allocations almost magically. The simplicity of the Reader and Writer concepts is so elegant, yet powerful, and doesn't allocate anything but a tiny reused buffer on the stack. There are lots and lots of other examples, but if you have to collect ten times less garbage, you'll be better off, even if your GC is 2x slower.


The byte buffers that Go's Reader reads from and that Go's Writer writes into cannot in general be allocated on the stack because they are considered to escape. Because Reader and Writer are interfaces, calls are dispatched virtually, so escape analysis cannot always see through them. This is now fixed for simple cases in Go, but only very recently: https://github.com/golang/go/issues/19361

Ironically, Java HotSpot handles the use case of Reader and Writer better than Go does, since when it's not able to allocate on the stack it has fast allocation due to the use of a generational GC with bump allocation in the nursery. By virtue of the fact that it's a JIT, HotSpot can also do neat things like see that there's only one implementation of an interface and thereby devirtualize calls to that interface (allowing for escape analysis to kick in better), something Go cannot do in general as it's an AOT compiler.


> like see that there's only one implementation of an interface and thereby devirtualize calls to that interface

Oh, the JIT devirtualizes and inlines even if there are many implementations, but only one or two at a particular callsite. This has been generalized in Graal/Truffle so you almost automatically get stuff like this (https://twitter.com/ChrisGSeaton/status/619885182104043520, https://gist.github.com/chrisseaton/4464807d93b972813e49) by doing little more than writing an interpreter.


I agree it's impressive that Go manages to be not all that much slower than Java while having a much simpler runtime, but much of the simplicity is gained from lacking features that are very important in many cases of "serious software" like deep low-overhead continuous profiling and ubiquitous dynamic linking.


I’ve seen you mention the superior operability of Java and the JVM in high load production environments and I think this is a really important, often overlooked, and commonly misunderstood point.

Would you be up for writing a short post or blog post going into some anecdotal comparisons and sharing some resources?


Exactly. Go is benefiting from years of hard lessons learned in other stacks such as Java. Having super-experienced GC builders involved early on resulted in a language and libraries that work much more synergistically with the GC.

It's better to dodge a problem than to have a baked-in problem that requires lots of really smart people to make workarounds.


Java HotSpot's garbage collector is significantly better for most workloads than that of Go, because it takes throughput into account, not just latency. Mike Hearn makes the point at length in this article: https://blog.plan99.net/modern-garbage-collection-911ef4f8bd...

I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has. Not having generational GC was an interesting experiment, but I don't think it's panned out: you end up leaning really heavily on escape analysis and don't have a great story for what happens when you do fail the heuristics and have to allocate.


> I predict that over time Go's GC will evolve to become very similar to the modern GCs that Java HotSpot has.

At which time the talking point will become "look how advanced and sophisticated it is!"


> Java has a big selection of GCs, and each has their bunch of knobs for tuning. And you have to pay attention to that stuff.

You never have to touch them in the Java world either, unless you like making performance worse that is.

I've never ever seen a case where fiddling with garbage collection parameters didn't make things slower.

I worked with a guy who worked on a popular Java compiler, and he says the same thing. He never twiddles GC parameters either.


What sort of scale are you working at? At our scale, it is normal to look into these sorts of things. Reading and understanding and applying articles like the following is absolutely necessary.

http://clojure-goes-fast.com/blog/shenandoah-in-production/


Medium scale. It's not like I need to pimp each of my servers like it's a Honda Civic.

If the load gets too high, I just add another instance.

It's way more economical than wasting developer time fiddling with GC params.


Are you serious? You have to mess with the maximum heap size all the time with Java processes. Also, the JVM's default settings basically assume it is the only process running. It will keep hogging a huge amount of memory even if the heap is 60% empty, unless you configure the GC properly. Wasting memory and stealing it from other processes, which potentially causes swapping or crashing, is far worse than any increased time spent garbage collecting. Unless you're as stupid as the Minecraft developers, you will rarely suffer from GC pressure with heaps below 10GB.


Cassandra has benefitted from so many GC tweaks that I find this view hard to believe.

I can only imagine someone having this view if they have never cared about latency. How have you never experienced a multi-second GC pause on default CMS settings?


> Outside of the Java bubble, the view is quite a bit different.

I know that outside the "Java bubble" less advanced platforms are often good enough. (While Java's GC ergonomics are getting better, I agree there may be more of a paradox-of-choice issue; but while you may not need to touch any knobs elsewhere, you're also not getting the low-overhead, deep production profiling and many of the other amazingly useful things you get with Java.) Even inside the Java bubble we don't think Java is always the best choice for everything, but that doesn't mean that Java isn't leading in technological innovation, which was my main point.


I'm not saying I disagree, but for a lot of businesses (even ones with relatively high traffic) it is not unheard of to deploy with almost no tuning of the GC (aside from setting a heap min/max of 2, 4, or 8GB) and have no issues.


I'm sure this all depends on use-cases, but I'll chime in to agree. I work on web services that do high (not Google high, but you've-heard-of-it high) levels of traffic, we run on the JVM, and GC pauses are not something that cause us to lose any sleep using out-of-the-box settings + explicit heap min/max.


Tuning GC is hard, and the nondeterminism is worrisome, but hitting a pathological case and rewriting a bunch of code hoping to avoid it is even harder.


For such an advanced JVM, having to prewarm it by calling code 5,000 times, or the JVM being much more memory-hungry than non-JVM languages, doesn't feel like the cutting edge.


If you're not calling your code 5000x then it's likely not important for performance. I worked in compilers for 5 years and people's intuition for what parts of the code are the bottleneck is generally not very good. This includes me, I've guessed wrong a WHOLE lot.

If you're running microbenchmarks and having issues, then you're likely not using JMH, the Java Microbenchmark Harness, which used to be a 3rd-party library but is now built into Java 12.

Is Java memory hungry? Maybe. If you're just writing a small routing server, probably not the right use case for Java. If you're writing a big and complicated game server with hundreds of routes and you need to talk to MySQL, Redis, and Memcache then I'd say the memory overhead of Java is really quite good. Don't forget that you can tune the min/max heap and other things like that, especially with the module system in Java 11. However if very low memory is a requirement, then you're probably running in an embedded environment and you probably shouldn't be using Java.
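To make the warmup point above concrete, here's a deliberately naive timing sketch (not JMH; the workload and iteration counts are illustrative): the first timed call includes interpreter and JIT compilation work, while a call after thousands of invocations mostly runs compiled code. This is exactly the trap JMH's warmup iterations exist to avoid.

```java
// Naive before/after-warmup timing. Real measurements should use JMH;
// this only illustrates why the harness insists on warmup iterations.
public class WarmupSketch {
    static long sumTo(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    static long timeOnce(int n) {
        long t0 = System.nanoTime();
        long r = sumTo(n);
        long dt = System.nanoTime() - t0;
        if (r == 42) System.out.println(); // keep the result live so it isn't dead-code eliminated
        return dt;
    }

    public static void main(String[] args) {
        long cold = timeOnce(10_000); // includes interpreter/JIT compilation cost
        long sink = 0;
        for (int i = 0; i < 20_000; i++) sink += sumTo(10_000); // drive past the C2 threshold
        long warm = timeOnce(10_000); // now mostly compiled code
        System.out.println("cold=" + cold + "ns warm=" + warm + "ns sink=" + sink);
    }
}
```

The printed numbers vary by machine and JVM, but the cold measurement is typically much larger than the warm one.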


> If you're just writing a small routing server, probably not the right use case for Java.

General purpose languages have "use-cases"?


Yes. I wouldn't use Haskell for scripting, and I wouldn't use JavaScript for a 3D game engine.


I think in this context Java means "the platform" i.e. the JDK


The usual C2 kick-in threshold is 10,000 invocations, not 5k. But that statement about warming up is true only for microbenchmarks.

C1, which is a dumb (but not terrible) compiler, tends to kick in at 1k, and if you cannot get your code to 1k invocations, it likely never needs compilation, so there's no point spending time and space on performing said compilation.

The 'prewarming' does include perf counters, so it's profile-guided compilation to boot.


What?


I suspect that the parent is talking about the JVM's JIT, where it compiles java bytecode into machine instructions after loading the application. This is why the first few requests on a JVM are usually considerably slower than the rest.

Parent is obviously exaggerating with the 5000x, and he could have made his point in a different way, but there's some truth to it.


5000 refers to the number of times a method needs to be called before the JIT decides the method is "warm" and needs to be compiled by the expensive-to-run C2 compiler, which produces good-quality machine code.

If you're benchmarking java code and the method was not called enough times before you measure, you're measuring code compiled by C1 (or even interpreted).

The performance gap between Java and C is on the order of x2 to x5.


Why not just pass -Xcomp? From the docs[1]: "You can completely disable interpretation of Java methods before compilation by specifying the -Xcomp option."

[1] https://docs.oracle.com/javase/8/docs/technotes/tools/window...


At the same time, the JIT can outperform native code in some specific circumstances, since there are certain optimizations which can be proven safe at runtime but cannot be guaranteed to be safe at compile time.


This is a thing people say a lot, but it is not something that seems to often result in an actual Java program or benchmark being faster than a program compiled AOT by an equivalently smart compiler (LLVM etc.).


Actually, one of the lesser-known features of the JIT is the ability to deoptimize/recompile based on CHA (class hierarchy analysis) or on pushed compilation requests via invokedynamic & friends.
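For illustration (the type names here are invented, and the optimization itself isn't observable from plain Java code), this is the shape of call site that CHA helps: while only one `Shape` implementation is loaded, HotSpot can devirtualize and inline the interface call, and deoptimize later if a second implementation ever appears.

```java
// A monomorphic interface call site that HotSpot can devirtualize via CHA.
interface Shape {
    double area();
}

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class DevirtDemo {
    static double total(Shape[] shapes) {
        double sum = 0;
        // If Circle is the only loaded Shape, this virtual call can be
        // compiled as a direct (and inlined) call to Circle.area().
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Circle(2) };
        System.out.println(total(shapes));
    }
}
```

An AOT compiler has to assume any implementation might exist at runtime, which is the asymmetry the parent comments are debating.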


Isn't Minecraft written in Java? Many Fortune XX companies have their entire product lines written in it. Frankly, I don't understand the hate it gets on HN.


Most of AWS, a lot of Google - as far as I know.


Yes, and I had to pay for a lot of RAM on servers because of it ;(

Minecraft's creator, Notch, chose Java because when he started Minecraft he thought it would let the game run in web browsers. He wouldn't choose Java if he could do it over again.


Do you have a link to back up the statement about running in browsers? Notch is super talented. MC was released in 2009; no way he didn't know about browsers and Java.


Paying for RAM on servers is one of the cheapest ways you can pay for performance.


It is also one of the poorest-performing games in the history of video games.


> low-overhead in-production profiling/monitoring/management

This is a joke, right? The management overhead for the JVM in production environments is huge. It's really hard to get it right.


Not sure what you are talking about. Can you give me some examples?


Tuning the heap size, GC tuning, etc. are often needed to avoid huge GC spikes out of the box. Not to mention performance is pretty meh compared to something like Go for applications where non-trivial compute is needed.


GC tuning should be trivial for every Java shop. Yeah, I guess you can find use cases where Go is faster than Java, but in that case, if it really matters, I would go for Rust, because I like it more than Go.


>stuff hyped on HN

HN is going to be inclined to take a look at new-ish stuff.


Indeed. For better or worse, that's what the "N" in "HN" stands for - "News".


Yes! Java 8 is good. And Excel is by far the best spreadsheet out there. I don't know enough about Sharepoint but it seems like the normal world has actually chosen very well when it comes to technology.


Despite its title, the article was not about Java 8.


You can try appending "in Go" or "in Tensorflow" to see if the title still makes sense.


I had cause to use a bit of Java recently, so I could wrap an existing library that did exactly the thing I wanted. I haven't used it since university—back in the 1.4 days—and I was actually pleasantly surprised. Performance was great, concurrency was easy, features like type inference and streams made the experience much more pleasant, and obviously the development tools are still first-rate.

The absence of a package manager like Bundler or Cargo was frustrating for someone coming from that environment – as was the effective requirement to use an IDE. But on the whole, the platform feels like something that totally-hip-and-edgy-clique developers like myself are too quick to discount.
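For anyone else returning from the 1.4 era, here's a tiny sketch of the features mentioned above (the words and method name are made up for illustration): local-variable type inference (`var`, Java 10+) and the Stream API (Java 8+).

```java
import java.util.List;
import java.util.stream.Collectors;

public class ModernJavaDemo {
    // Stream pipeline: keep longer words, upper-case them, collect to a list.
    static List<String> shout(List<String> words) {
        return words.stream()
                .filter(w -> w.length() > 5)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // 'var' is local-variable type inference; the right-hand side still
        // fixes the static type at compile time.
        var words = List.of("legacy", "stable", "boring", "fast");
        System.out.println(shout(words)); // prints [LEGACY, STABLE, BORING]
    }
}
```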


>The absence of a package manager like Bundler or Cargo was frustrating

Why didn't you use Maven or Gradle?


I think everything you mention here is actually related to the JVM and not Java specifically, and while I agree the work is impressive, it’s also true that languages such as Kotlin and Scala benefit from that work while also offering some very nice modern improvements.


You are describing the JVM there, mostly. The JVM is great, and Java is pretty tired even in its latest incarnations. I would have nothing against working full time with a JVM stack, so long as Java isn’t involved.


C#?


This - Java 8 'feels' legacy, but in so many ways it's miles ahead of upstarts.

Sometimes we forget that the JVM (almost) = Java and that thing is a beast.


How does this compare to the .NET ecosystem?


C# is a better language. Java 8 started to catch up with some of the quality-of-life niceties, but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff, which the consortium is not willing to do. C# and .NET were designed with the wisdom gained from the early days of Java implementation and sidestepped a lot of these messes.

Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the .NET world. Until very recently you developed in Visual Studio, you ran on Windows in production, and that was that. They're working on Linux support and a broader ecosystem, but they are probably a decade behind what is available on Java. Even stuff like package management is crude; NuGet is a joke compared to Gradle.


The JVM has multiple languages such as Kotlin, Scala and Groovy, each of them arguably a better language than Java. That most people still prefer to build projects in Java shows that other things, such as backwards compatibility, how many developers know the language, the tool chain and so on, matter more than the isolated technical merits of the language.


Cannot agree more. Syntactic sugar and certain more advanced constructs end up having very low ROI for experienced developers, who can make good use of the existing language features and extend their expressive power with internal DSLs. At that point, the costs associated with tooling, hiring people with the necessary expertise, etc. are more important.


> but Java has a lot of mistakes baked into the language and interfaces that can't be removed without potentially breaking a lot of stuff

Absolutely true, but so does C#. In the end it turned out that .NET's reified generics were a mistake (which makes language interop on .NET painful), and more recently async/await.


> In the end it turned out that .NET's reified generics were a mistake

Do you have any resources that explain this point further?

Reified generics have often been praised as the thing that CLR got right (and JVM got wrong). I never understood fully why that is, especially since other languages with generics (e.g. Haskell and OCaml) don't have anything reified (although in all honesty also don't have RTTI so it's really not necessary).


Erasure makes it easy for Java, Kotlin, and Clojure to share code and data structures without costly runtime conversions. Languages like Scala and F# have had trouble implementing features on the CLR because of reification, and take a look at what Python and Clojure on the CLR have to go through for interop.

I know some people say they like reified generics in C#, but those are mostly people who aren't aware of the cost the entire ecosystem pays in exchange for what is a minor convenience in C#.

BTW, reification of reference-type generics is not to be confused with specializing collections for value types ("arrays-of-structs"), an extremely important feature that the CLR indeed has, and Java is now working on getting.
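To make the erasure/reification distinction concrete, here's a tiny sketch of what erasure means on the JVM:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> ints = new ArrayList<>();

        // With erasure, both have the same runtime class: type arguments
        // vanish after compilation, so other JVM languages can pass raw
        // lists around without conversion.
        System.out.println(strings.getClass() == ints.getClass()); // true

        // On the CLR, by contrast, List<string> and List<int> are distinct
        // runtime types; that is what "reified generics" means.
    }
}
```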


"Mistake" is a strong word, and I would disagree with it. But when building extensible libraries and systems I do often find myself wanting to pass around a List<Something<?>>, which isn't doable without writing a second generic interface or the like. On the flip side, in Java (or, these days, Kotlin), a type-erased generic can be mediated more easily because passing a Class is easier than adding a second interface.

Part of it is that a lot of that stuff is gamedev-related, for me. It's not the biggest thing in the world, but for a lot of the stuff I find myself writing in C#, I find myself wishing for type erasure to make throwing data around a little easier. On the other hand, though, when writing web stuff on the JVM--I absolutely will not waste my time doing this in .NET, ASP.NET Core is not very good and EF Core is awful--I often wish for type reification, so it's just a horses-for-courses thing.
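The pattern described above, a wildcard list plus a `Class` token to recover the erased type, might look roughly like this. All names here are hypothetical, invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// A generic component interface; the names are made up
interface Handler<T> {
    Class<T> handles();
    String describe(T value);
}

public class Registry {
    // Under erasure, one wildcard list can hold handlers with different
    // type arguments -- no second non-generic interface needed.
    private final List<Handler<?>> handlers = new ArrayList<>();

    void register(Handler<?> h) { handlers.add(h); }

    // The Class token recovers the type information that erasure discarded
    @SuppressWarnings("unchecked")
    <T> Handler<T> lookup(Class<T> type) {
        for (Handler<?> h : handlers) {
            if (h.handles() == type) return (Handler<T>) h;
        }
        return null;
    }
}
```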


Generics interop just fine on .NET, and async/await had become a starting point for similar designs in many other languages. I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.


> Generics interop just fine on .NET

They really don't. They have posed severe restrictions on features in languages like Scala and F#, and take a look at Clojure, Python and JS interop on the JDK as opposed to .NET.

> and async/await had become a starting point for similar designs in many other languages.

And it's a mistake in most of them. Java has copied some of C's mistakes, C# has copied some of Java's, and others may copy some of C#'s. Every language/runtime both repeats others' mistakes and adds a good helping of its own.

> I don't think you'll find many people who actually write code in that ecosystem agreeing with either of those claims.

I can only express my own opinions (and I think reified generics, as implemented in the CLR, are a far bigger mistake than async/await, which only affects the C# language), but I am far from alone in holding them. Also, I have no doubt that async/await is an improvement on the previous situation, which is why people like it, but I nevertheless think it's a mistake, as there are alternatives that are more convenient, more general, and have less of an adverse impact on the language (e.g. Go's goroutines).


What are the restrictions on F# that were posed by them? Given that Don Syme was the one who originally designed them, specifically with a mind for cross-language use (which is why they got stuff like variance long before C# supported it), this is a surprising claim. In fact, I recall Don saying something along the lines of, if CLR did generics with erasure like Java, F# probably wouldn't be where it is today.

I saw the link to Clojure page you posted in another comment, but I don't see any fundamental problem with generics there. Yes, if you're invoking a statically typed language from a dynamically typed one, you have to occasionally jump through hoops when dealing with things like method overloads that require types to distinguish. The same goes for Python. I have actually used Python embedded in C# more than once, with interop both ways, and in practice it "just works" most of the time.

Conversely, I don't see why statically typed languages should surrender valuable type information (and associated perf and expressivity gains) for the sake of convenience of dynamically typed ones, especially on the platform where static typing is the norm, and libraries are expected to be designed around it.

As far as async/await vs Go's goroutines - since we're talking about language interop, how many languages can Go coroutines interop with? Async/await easily flows across language boundaries, as you can see in WinRT - any language that has the notion of first-class function values can handle that pattern. Goroutines are essentially a proprietary ABI. And the worst part is that once the language has them, all FFI has to pay the tax to bridge to the outside world, regardless of how much you actually use them. It may be an argument for standardizing some form of green threads on platform ABI level, so that all code on that platform is aware of their existence and capable of handling them. Win32 tried with fibers, without much success. Perhaps it was too early and the design was too flawed, but it's not encouraging.


> What are the restrictions on F# that were posed by them?

It does make it harder to add features to the language that do not map to the current reified "generics" spec, for example higher-kinded polymorphic types.

Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations, or switch-on-type, for functional languages.


If such features can be implemented via runtime type erasure on JVM, you can still do that on CLR - it's not like it prohibits that technique, it just doesn't use it for generics. You can even have different languages agree on how they would implement it, so that they could fully interop. With modopt/modreq, you can capture it all in metadata, as well.

It wouldn't interop with C# generics (although I don't see why it couldn't interop with C# by other, less convenient means). But if it can't be properly mapped to them when they're reified, why is there an expectation that it should? It seems to me like the gist of the argument here is that we can conflate two features into one, if only we remove all the conflicting bits of one of the features - which also happens to be the one much more broadly used at the moment. It's a strange trade-off.


> Of course, the JVM has plenty of issues supporting alternative languages too, for example lack of tail-call optimisations,

That's true (and will be addressed), but that's a problem that can be fixed by adding a feature, not removing a central one, which is why I said that both Java and .NET have made mistakes, and reified generics was one of .NET's.

> or switch-on-type, for functional languages.

There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes. Perhaps you mean switching on A&lt;Foo&gt; and A&lt;Bar&gt;, where Foo and Bar are reference types (possibly with a subtype relationship between them); well, even Haskell can't do that, and if there were a language that thought this was a good idea, it would be able to do it quite easily.


> That's true (and will be addressed)

Glad to hear it! It's probably the biggest issue trying to do functional programming on the JVM.

> There is no difficulty supporting that. In fact, the Java language is about to get that without JVM changes.

The technique to implement ADTs in functional languages on the JVM has often been to add an integer tag to every subtype and switch on that, but it's ugly enough for interop that IIRC Scala doesn't do it (and is thus less efficient).
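The integer-tag encoding described here might look roughly like this hand-rolled sketch (not Scala's actual scheme; type names are invented):

```java
// A hand-encoded ADT: Shape = Circle | Rect, with an integer tag per case
abstract class Shape {
    static final int CIRCLE = 0, RECT = 1;
    final int tag;
    Shape(int tag) { this.tag = tag; }
}

class Circle extends Shape {
    final double r;
    Circle(double r) { super(CIRCLE); this.r = r; }
}

class Rect extends Shape {
    final double w, h;
    Rect(double w, double h) { super(RECT); this.w = w; this.h = h; }
}

public class Adt {
    static double area(Shape s) {
        // Switch on the tag instead of an instanceof chain; the JVM can
        // compile this to a jump table, but the downcasts are ugly for interop
        switch (s.tag) {
            case Shape.CIRCLE: return Math.PI * ((Circle) s).r * ((Circle) s).r;
            case Shape.RECT:   return ((Rect) s).w * ((Rect) s).h;
            default: throw new IllegalStateException("unknown tag " + s.tag);
        }
    }
}
```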


> C# is a better language.

Yep, as a seasoned Java developer I agree. There are times when I miss a part of the Java syntax but they are few and far between.

> Java has a substantially better ecosystem. The tooling is just miles ahead of what exists in the DotNet world.

Also true, although it is getting closer fast.

Possibly more important: they are both in another league compared to most other languages. (Of the other languages/stacks I have some production experience with, Angular/TypeScript is the only one that I feel has anything close to the tooling support that C# and especially Java has.)


Paket is a great alternative to the NuGet client.

Tooling might be slightly behind Java's, but honestly, there's not much in it. Dotnet Core might only have been available on Linux for a few years, but Mono was around for many before that.

The dotnet CLI also recently got the concept of 'global tools', which are akin to npm's global packages.

A lot of work is also underway on better cross-platform performance analysis, crash dump handling and the like for the runtime in production[0].

Is there anything in particular you miss?

[0] https://devblogs.microsoft.com/dotnet/introducing-diagnostic...


Maybe not much that the OP misses. But then again, there's nothing to miss from Java either, since Java fulfills his requirements (maybe better than .NET can).

If I'm a Java developer who has been using Maven for such a long time, I don't see the allure of switching to .NET just because NuGet exists, or whatever Paket offers.

The big data ecosystem is all JVM/Java.

Android started off as Java. Sure, they have Kotlin now, but I don't know whether developers are switching to Kotlin in droves.

Basically, the ecosystem of Java is already _there_ while MSFT is still trying to catch up, so experienced devs (a.k.a. people who are already comfortable with the tooling) do not see a reason to switch.

As for me, I used to intern at MSFT back in the mid-2000s, drinking the .NET kool-aid. One day I woke up and realized that the part of the hi-tech world that interested me relied a lot on OSS, and a large portion of that OSS was Java. Fast forward to today: FAANG companies are mostly Java shops. Java seems like safer ground for a longer career.

That's just me and my 2c. I'm too lazy to switch to .NET since there's no added value at the moment, unless I want to do back-office in-house web apps.


According to the roadmap, Mono's JVM interop used for Xamarin Android will be making its way more directly into .NET Core and will light up on almost every platform, which would give .NET greater, direct access to the Java ecosystem as well.


What about Kotlin?


Why downvote ? C#/Visual Studio is one of the best languages out there.


Lightyears behind it. There's no reason Java is still prevalent besides inertia. .NET/.NET Core is going to slowly but surely overtake it, with Java shooting itself in the foot with the new licensing terms, and with .NET/C#'s far better feature set and design (lessons learned from Java/JVM's mistakes were fixed in C#/.NET).


.NET Core is winning a lot of benchmark comparisons now. And one might consider Rust to be more bleeding edge.


Java and Rust are both bleeding edge in their respective domains, which are quite disjoint. I like them both, and Rust may well dominate its domain one day as much as Java dominates its own (although Rust's domain is even slower-moving than Java's, so that process may well take decades). I don't know of .NET Core winning any quality industry benchmarks that people actually pay attention to, let alone a lot of them.


TechEmpower and the Benchmarks Game.

And SIMD is coming soon in .NET Core 3 which should be a big jump for many workloads.


True, but java carries a lot of legacy.


"Legacy" includes good things too, not just bad things.

I grew up promising myself that I'd never touch Java, but when I did, it was a revelation. Everything just works. The debugger was just "attach and go" -- no recompilation with different flags, no fishing for the IDE / plugin version in which it wasn't broken, no glaring missing features (conditional breakpoints, stack traces...). The frameworks were refreshingly complete; I always felt that extensibility mechanisms were available so that I didn't have to gamble that my requirements fell into the "just works" subset rather than the "just doesn't" subset. I never had to wonder if integrating with some bog-standard protocol would mean maintaining my own open source project; Maven always had something workable available.

Java wasn't born like this, but the legacy it accumulated made it formidable.

Early-adopting a hip language nets you a few sexy wins at the cost of completely eliminating a gigantic Brontosaurus-size long tail of important functionality and ecosystem maturity.


You're right. I never said otherwise.


Java, or projects in Java? Java pretty much has to, in order to preserve the most important feature a runtime can have: backwards compatibility.

Projects in Java are cursed from the beginning: the language has great constructs for boxing complexity (modifiers, inheritance, interfaces, generics, design patterns allowed by performant virtual calls, etc.). This means that when projects in other languages become unmanageable, Java will keep chugging business requirements like no other. Accumulating complexity is the fate of every successful system; and Java makes you go a long way.

Now, a newcomer to the job market will face systems with a pile of convoluted business logic a few century-humans high. The first ticket will be: half a month getting accustomed to the language/framework, a month of archeology, producing 50 lines of code and 150 of tests, breaking a few other tests, half a month of corrections; all for something that would take one day on a greenfield project. The job might pay well, but it won't be very gratifying.

Now the sound enterprise strategy is to keep using Java: the platform allows for deeper and finer integration into the business processes. But the sound individual strategy for newcomers is to stay the hell away from it. Oh, you will only work with Rust/Go/Elixir/etc? Here, have a greenfield project.

Clojure/Scala/Kotlin/Ceylon/Frege/Groovy might be the best of both worlds.


It sounds like you're basically saying it's difficult and slow to work on large projects, and the solution is to not work on large projects. And that new languages implicitly mean smaller and thus easier projects, because they haven't had time to grow.

The weird thing I always wonder about the always-greenfield developers is: how do you know you aren't just creating even larger messes tomorrow than the messes you're avoiding today? I guess it isn't your problem though if you're always working greenfield - it's someone else's.


Yeah, there is the legacy of the thousands of mature libraries, frameworks and development tools.

Legacy isn't always a bad thing.


Having moved to a stack with a much more competent standard library, it's much nicer to have that great standard library than to have to wade through a rich set of libraries all the time to achieve the same thing. I'm talking about Go, of course. It's just nice getting right to work coding up an http server without having to think about what "web framework" I should need to select, or which logging library to use.


What stack?


I never said it was bad, just wanted to point out that it's not all newness.


To me Java is like a diesel engine, old but proven technology that can keep on chugging for hundreds of thousands of hours.

(Not to discount the recent improvements in GC, etc which are also amazing...)


Once you write code, it is legacy.


So do the frameworks within Java.


[flagged]


No, you don't need to reverse your opinion at all.

Personally, I voted you down because your opinion was stated in a sort of offhanded manner that does not do the subject justice.


The manner is neutral. It's only a couple words. You want me to write a whole essay on the pros and cons of legacy? In that case I would invite a negative vote because in those cases you can disagree. In this case it's just a statement that is true.


Legacy is a poorly defined term anyway. Is COBOL legacy? How about jQuery?

It's mostly a derisive term that seems to mean: "Old technology I don't like."


Poor start-up time, terrible application bloat, uninspiring language with poor concurrency support, massive RAM requirements, everything XML, complex tuning required, what a nightmare.

No wonder the world is running towards Python and serverless as fast as they can...


Then they discover that even though Python is quicker to develop with, it actually runs much slower and has poor threading support. Since async only gets you so far, you try multiprocessing. It turns out running all those Python worker processes actually takes up more memory and CPU than the equivalent Java app...

Also, most of the things you complain about are not Java: they are the fault of JEE (aka J2EE) and "app servers."


CPython is 100 times slower and has no real multi-threading support (CPython is effectively single-threaded). JVM code can outperform native C code. The JVM supports many advanced languages (Scala, Clojure, Kotlin) and it can even run Python code (Jython) faster than CPython. For concurrency it has this ultra-powerful library https://akka.io (up to 50 million msgs/sec on a single machine; small memory footprint, ~2.5 million actors per GB of heap).


Jython seems to be much, much slower than CPython, actually: https://pybenchmarks.org/u64q/jython.php

The only benchmark where it's faster is the one involving threads, which makes sense. CPython definitely has multi-threading support, but it also has the Global Interpreter Lock preventing threads from actually executing bytecode at the same time. Jython doesn't. People have written patches that successfully remove that GIL, but those patches make single-threaded performance worse and thus haven't been accepted.


I consider that basically no multithreading support; CPython multithreading is useless in comparison. Those benchmarks indicate that the Jython compiler is far from optimal. The equivalent code rewritten in Java would be an order of magnitude faster. https://julialang.org/benchmarks/ (note the single Python outlier there is the result of calling a C library for matrix calculation). Anyway, the JVM and CPython are fundamentally not comparable: one is an interpreter, the other a JIT compiler. There are Python JIT compilers that would make a better comparison. However, as a language, Python is not optimal for large codebases. Comparing the languages only (not their implementations), Python is good for small scripting tasks; Java is better for very large scale projects.


All that might very well be true, but I was responding to your assertion that "The JVM ... can even run Python code (Jython) faster than CPython".


Somewhere I heard that was the case, but I didn't verify it. These benchmarks show Jython performing hundreds of times worse. spectral-norm: 0.12 secs for CPython and 4.96 secs for Jython. How can the JVM be 40 times slower than an interpreter? Maybe they didn't warm up the hotspot compiler (amateur mistake), or the Jython compiler is suboptimal. However I compared the code: This https://pybenchmarks.org/u64q/program.php?test=spectralnorm&... versus this https://pybenchmarks.org/u64q/program.php?test=spectralnorm&...

The py3 version is calling numpy, which calls a highly optimized C library, so this benchmark suite is worthless. They failed to warm up the JVM HotSpot compiler, and they are comparing interpreted Jython with a highly optimized C library called from Python 3.

The lack of a warmup phase is confirmed by the raw command line and output logs on various benchmarks. Amateurs. This is a highly misleading set of results on multiple levels.

https://wiki.python.org/jython/JythonFaq/GeneralInfo#How_fas...
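For what it's worth, the warmup effect discussed above is easy to demonstrate in plain Java: the first run executes interpreted, and the same method runs much faster once HotSpot has compiled it. A crude, machine-dependent illustration (timings will vary, so none are claimed here):

```java
public class Warmup {
    // A deliberately simple hot loop for the JIT to optimize
    static long work(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    static long time(int n) {
        long t0 = System.nanoTime();
        work(n);
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        long cold = time(5_000_000);               // likely interpreted
        for (int i = 0; i < 50; i++) work(5_000_000); // let HotSpot compile it
        long warm = time(5_000_000);               // likely JIT-compiled
        System.out.printf("cold: %d ns, warm: %d ns%n", cold, warm);
        // A benchmark that only measures the 'cold' run penalizes the JVM unfairly.
    }
}
```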


The funny thing is that from a purely technological point of view, Java (even the 5-year-old Java 8 and certainly recent versions) is far ahead of most other stuff hyped on HN (as well as less hyped stuff).

The technology is certainly impressive, but it isn't equipped to deal with the modern world. It is trivial to run 10 Node.js applications in the memory of one JVM application. This never used to be a problem, but now that the cloud forces apps to pay for the resources they use, it's the kiss of death.

There is also the move to serverless and the slow start of the JVM which makes it literally the worst choice out of all the alternatives.

Oh, and let's not even start on the train wreck of Oracle's stewardship -- Java 11 is competing with Java 8.

Java will survive but I cannot see it having any place in a modern software stack.


What's wrong with Oracle's stewardship? I am not a fan of Oracle, far from it, but Oracle's stewardship of Java has been exemplary.


Currently:

- The licence changes are causing a premature move to Java 11 or OpenJDK out of fear of an audit. Today a colleague was investigating a segmentation fault in the JVM because we moved to OpenJDK. This is not productive work.

- The future of j2ee is uncertain: https://headcrashing.wordpress.com/2019/05/03/negotiations-f...

How did you get to "exemplary"? Have you got any examples?


I haven't tried serverless, but shouldn't that be Java's cup of tea?

The JVM can load code dynamically and adapt the runtime to the workload. Maybe there is a framework missing, so those providing serverless start a new JVM per function call, but they shouldn't really have to do that.
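A toy sketch of that idea: one warm JVM resolving and invoking "functions" by name on demand, instead of paying a fresh JVM start per call. The class and method names here are invented for illustration:

```java
import java.lang.reflect.Method;

public class WarmDispatcher {
    // Resolve a handler class by name and invoke its static handle(String)
    // method; the JVM loads the class lazily, on first use, and keeps it
    // warm (and JIT-compiled) for subsequent calls.
    static String invoke(String className, String payload) {
        try {
            Class<?> fn = Class.forName(className);
            Method handle = fn.getMethod("handle", String.class);
            return (String) handle.invoke(null, payload);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("no such function: " + className, e);
        }
    }
}

// One example "function" that the dispatcher can load on demand
class EchoFunction {
    public static String handle(String payload) {
        return "echo: " + payload;
    }
}
```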


> And what Hacker News was teaching me was that the entire IT industry was working on brand-new Macbooks, using Go and Rust, Docker and CI, to build amazing products.

The problem is, some companies are using Go and Rust, K8s, and Docker on brand-new Macbooks. They are not the majority, but they are there. You may care about these things, or you may not. But if you do, working at a "normal" company is infuriating. I had to use CVS while using Git for my personal projects...

Now I am typing this on a Macbook (although not brand new) and deploying Go microservices in K8s.

The thing to keep in mind is that companies are schizophrenic. In another corner of the same company, there are people arguing against Lua for an embedded system because "no one uses it in production"...

So go find the corner that makes you happy.


The converse problem is that some companies end up building or rebuilding everything the most resume-driven way possible and then run headfirst into tooling and frameworks that are either unnecessary to their actual requirements, not mature enough, or was just a dumb fad all along. The actual developers get to put stuff on their resumes and leave for a new company after two years and their replacements end up wanting to rewrite their "legacy codebase" in the newest fashions, and the cycle continues. I don't know what the solution is, other than just exercising good technical judgment.


I went from a software dev role at a B2B SaaS company to a consulting role about a year and a half ago. It has been eye opening to say the least.

One of my first projects was helping a Fortune 500 company run analytics on AWS with data sourced from their Db2 mainframe, which had been in production for multiple decades. Most clients are very uncomfortable with Linux, and Microsoft stack is definitely the default. IT usually keeps the corporate network and individual workstations under VERY strict control, so getting new tech onboarded or new processes implemented can be a multi-week territorial battle.

It truly is a different world at most places.

ETA: I forgot to mention that things at FAANG companies may not be as different as you think. I am currently working with a business team at one which is using a few tools based entirely on Excel and VBA.


It's a bit disturbing how much people in this industry are focused on tooling.

It's a byproduct, IMO, of working on boring problems or supporting morally ambiguous corps: that thing you need to keep yourself busy and keep going in the rat race.


Tooling is great for job security. You can be absolutely shit at writing code and designing systems, yet make yourself indispensable because you know Framework X and most other people don't. All it takes is time to memorize trivia of the framework and a bit of trial-and-error to figure out how to work around framework's pitfalls.

Unfortunately, this creates a whole host of long-term issues. For example, it creates a situation where complex tools have fervent advocates while simple tools do not (because they don't provide job security). It also creates an incentive to start a sort of complexity Ponzi scheme: you add tools to manage tools to manage other tools, and so on. Each new layer reinforces the job security of people working on the previous one.

It's important to keep in mind that stuff like Java 8 and C are also tools, susceptible to the same problem.


> you add tools to manage tools to manage other tools and so on. Each new layer reinforces the job security of people working on the previous one.

The first time I saw a job title of "Kubernetes Keeper" I died a little inside. It's not that these jobs didn't exist before, and sysadmin is a perfectly good role which I personally don't have the skills or desire to do, but we keep pretending that we've built a way of removing some of the effort when it just moves it around a little.


Yeah, it's much better to list a dozen tools/frameworks that you must be an expert at than to just be clear about the actual requirements.


The more experience I have, the more I appreciate good tooling.

Why? Because the tooling is what ends up being used every day in so many ways. A minor improvement in tooling can lead to drastically better quality of life. It's like getting a fancy but expensive office chair - no, I don't need it, but my back will thank me at the end of every day.

The catch here is that this applies across the board. For example, many new languages try to sell you on a better language design that is more convenient - and it may well be, but it doesn't matter when there's no good IDE, no good debugger, no smooth deployment story etc. PL and API design is important, but it's not more important than everything else. The languages that are the best for "quality of coding" are the ones that balance it all, and usually they have to dial some advanced aspects down to enable other areas to work. Or at least move slower with language evolution, so that new fancy features get full support across the board. It's no coincidence that languages like Java and C# - which lag behind on bleeding edge language features - have the best code completion and refactoring in the industry.


I use great software with subpar tooling all day long, and it is not a good feeling. It feels like sawing off your own arm to feed your hungry customers.


> It's a bit disturbing how much people in this industry are focused on tooling.

That strikes me as such an odd critique, and I can't make it work for any other industry; you certainly wouldn't apply it to something like dentistry, or oil refinery, or plumbing.

If I went in for a root canal and the dentist said "We're gonna kick it old school today, sniff this rag of ether, I'm bored with all these tools!" I would grab the nearest scalpel and slowly back away. But yeah, if you're working on boring problems the tools won't do much to mask it.


>>That strikes me as such an odd critique, and I can't make it work for any other industry; you certainly wouldn't apply it to something like dentistry, or oil refinery, or plumbing.

A software developer's job is to create software. Almost like an author. Some more than others. At the end of the day, no matter how good your pen is, it will not write a great book for you.


I see your point, but I think you can't ignore the power of tools.

Most software created itself is a tool, a tool which typically leverages the speed of computing to perform a task hundreds, thousands, or millions of times faster than a human could. Think making a bank transfer - I can do this in 5 seconds on my phone now, which would take me over an hour to physically head into a branch and do it in person, and even then we are relying on computing power at the branch itself.

Why would we, as software developers, act like we aren't going to use tools ourselves? For productivity, for a more pleasant (or even just less frustrating) experience?

I initially was a vim + printf debugging stalwart, but having been in a Java environment grew to realise the immense power of an IDE like IntelliJ and how beneficial it is. You can detect errors in the code almost immediately, so spelling mistakes and missed semicolons aren't a thing. You can fix them in a fraction of a second. If I write a function that could be written in a clearer way, or I'm not sure what type I should write for a variable, it will suggest it immediately and I can follow that suggestion with a press of a key.

Yes, it cannot write great software for me. But if you have worked with software where there is no autocompletion, poor debugging and profiling tools, then I find it hard to believe that you don't think that the addition of these tool can help you write at the least MORE software, and probably even BETTER software.

It is not the same as writing a novel, as much as I would like to romanticise that it is. It is about organising and coordinating components to work together, and for this tools can be a tremendous help.


Eh, I'm not really seeing the analogy to an author, whose final product is the words on the page, so in that you are right that the kind of typewriter won't help much. But a developer's code isn't the final product, it's an input to an output, and good tooling can potentially make a mediocre dev better (within limits): by catching compile-time bugs, by enforcing constraints, by exposing functionality (autocomplete), by enabling profilers or debuggers, etc.


Geeze... the contemporary Internet, man. Ten years ago, we flamed each other over our personal preferences, and others' views were "stupid". Today, we flame on about our personal preferences, and others are implied to be "immoral". Both exercises are admittedly immature, but at the least the former was more cheeky and not so up its own posterior.

Yes, tooling is a huge deal in day-to-day life. When I think about drains on my time and productivity, I've never been too setback by having to deal with a "for" loop instead of something more monadic. However, I HAVE spent days doing painful refactoring work, that could have been done in minutes with a few right-clicks in an IDE. Build tools and CI/CD pipelines, profiling and troubleshooting tools, etc etc etc.

I can't believe that any of this stuff requires defending oneself from having "sold out to The Man".


> When I think about drains on my time and productivity, I've never been too set back by having to deal with a "for" loop instead of something more monadic. However, I HAVE spent days doing painful refactoring work that could have been done in minutes with a few right-clicks in an IDE.

I think your examples betray your inexperience with what you're criticizing. A for loop doesn't somehow correspond to "something monadic", and safe refactoring (actual refactoring, not just renaming things, though that's obviously trivial as well) is extremely easy to do and produces great results in languages like Haskell, OCaml, etc. Primarily this is because moving grouped things around is nowhere near as sensitive as it is in a language like Java.

But yeah, you can right-click rename all day, for sure... Never mind that other languages can let you safely re-architect the very bones of the solution with almost zero fear of coming out on the other side in failure.

Java has great tools, but let's not bring up refactoring as a real strength. It's kid's level stuff in comparison to languages that allow you to do that and more without tooling.

Haskell et al. have crap tooling and that has real consequences, but refactoring isn't one of the casualties at all. They're still better at refactoring, re-architecting and repurposing than any of the big name languages.

At least bring up debugging, system interaction or something.


I think the focus on tooling is more because: (1) developers understand it, since they use it full-time; (2) they're motivated to write it, since it will make their life easier. Understanding business needs takes work, and writing software you will never use is uninspiring.


I'm not sure I disagree with you, but I think the counterpoint is worth stating: that a compiler is the beginning, not the end, of a productive software engineering toolkit, and that there is no software stack that has achieved perfect tooling—quite on the contrary, I think it might be fair to say that most languages lack a great deal of tooling.

There's every reason to think that software engineers and system administrators do their job much more quickly and correctly when they have the support of rich editor support, understandable compiler errors, usable interactive debuggers, profilers, dependency management that won't set your hair on fire, etc. For whatever reason, it seems like businesses often aren't interested or successful in creating these products, so frustrated developers step in to create the tools they dream of in their own work after too many days spent refactoring with regex.

It's possible that people might be drawn to tooling projects because of some kind of internalized revulsion in response to the social impact of their employer, but I think you'd expect to see developers working on tooling regardless of whether or not this is the case because of how a lot of development tooling just wouldn't exist if it weren't for individual FOSS contributors' efforts.


It's also a product of ambitious orgs. One way to out compete a competitor with more resources is to have better development, ops, and release tooling. It's not a silver bullet but it can make a large positive impact when done properly.


Not sure what specifics you mean by tooling, but in the context of build/deploy tooling, I really enjoyed learning about it. Beyond the knowledge needed to develop at both the front end and back end, setting up the CI/CD pipeline and seeing it all work seamlessly is useful. Understanding it end-to-end is quite satisfying too.


It seems more likely that an industry that makes and uses software tools would have a stronger than average appreciation for software tools.


Or maybe we don't understand the field at a level where we can objectively say which tool is right for a particular problem domain.


HN is basically Reddit for tech founders, so it tends to skew toward hype. And it's a small place, which makes for great echo chambers. Anything that makes it to the front page without being killed by admins, bots or users will probably be over-hyped.

Wrt over-using new tech: I've mostly seen this with hype directed at C-levels. Maybe one or two people on a team will pick up Rust or Go because they saw it on HN, but a year-plus slog will be initiated to implement a single Kubernetes platform that everyone must use, because a jagoff in a suit read a Wired article that said Kubernetes is the future of the internets. (Meanwhile, most of the devs don't use containers, pipelines are stood up in one-off Jenkins on shitty infra, the new "microservices" are really "distributed monoliths", and the security team is a guy nobody knows whose main occupation is writing drafts of best practices that nobody reads)

I think this will continue as long as we build tech by winging it, rather than doing case studies, analyzing solutions, and setting industry standards. And I'd argue another problem is cobbling together our own tools rather than paying for well-made ones. Because we're so interested in either getting things for free or writing them ourselves, really solid tools are rare.


This touches on something I've been thinking about a bit recently. It seems like software is undergoing a crisis of delivery. That's probably not new, but it still feels like there has not been much progress in delivering software that matters where it's needed.

I'm thinking specifically about the Crossrail project here in the UK where signalling has completely screwed the project https://www.google.com/amp/s/amp.theguardian.com/uk-news/201...

Of course such systems tend not to be taking advantage of the newest tech but it feels like there's a lot of talk about how to achieve Netflix scale for simple websites with a few 100,000s of users and not much progress on delivering tangible, socially useful, software. Maybe it's selection bias and Rust is perhaps promising in this space but a lot of hyped technologies feel ephemeral or not useful outside of Web.


All I know is that reading HN and responding to what I've learned has made my salary go up, up, up in the last 5 years. Partially by recognizing there are so many cool things out there that pay well. You can be defeatist and say "I'll never learn all of this," or you can be clever and figure out the things that are worthwhile to learn.

I went from working in a boring insurance shop to one of the fastest growing tech companies in America. Had I not paid attention to HN, that wouldn't have happened.


Would love to hear more details about your story.

What were you doing before exactly?

What tech specifically do you feel helped you?


I was at an insurance company upgrading old systems to use less old technology. I had an information science degree and was a pretty lousy programmer.

HackerNews helped to demystify working at a software company. It just seemed more and more in reach as I kept reading and contributing. The most important thing though, is it taught me to get my shit together before making a comment. That extended to being more rigorous in all the work I did, leading to success. I also came across tons of articles about how to level up, and cool resources where I could practice coding challenges. Looking back, it was like having a rich community at tech-oriented company without being able to work at a place like that.

It's really hard to draw a line, but I think I just gained familiarity with the industry and learned to tell when a technology idea is bullshit or legit. All that led to more confidence, and that shit is gold when it comes to leveling yourself up.


I mean, Java 8, that's great! From an enterprise point of view it was released basically yesterday. I'm sure a large minority of systems still runs on Java 5 or 6.


There's very little reason not to upgrade to Java 8; generally things just work. With Java 11 things generally just break, so it's a completely different matter. I think Java 8/11 will repeat the Python 2/3 story.


> There's very little reason not to upgrade to Java 8

Depending on a pre-compiled jar that is not compatible with Java 8 :(


We are moving to Java 8.

There is a special place in my heart for hating the shiny new lambdas. They make code too difficult to read, and they're not at all compatible when you need to drop back to Java 7 every now and then.

As for the article, I would just disagree on Sharepoint; I'd say Confluence has finally replaced that one.


I'm curious, do you really prefer anonymous classes over lambdas? To me, anonymous classes are much more annoying to read.

Or are you just saying that lambdas are as bad as anonymous classes?
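To make the comparison concrete, here's the same Comparator written both ways; a minimal self-contained sketch (class and variable names are my own):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class LambdaVsAnon {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Carol", "alice", "Bob");

        // Java 7 style: an anonymous inner class, five lines of ceremony
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareToIgnoreCase(b);
            }
        });

        // Java 8 style: the same comparator as a one-line lambda
        names.sort((a, b) -> a.compareToIgnoreCase(b));

        System.out.println(names); // prints [alice, Bob, Carol]
    }
}
```

Both versions do exactly the same thing; the only difference is how much of the intent survives the boilerplate.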


IntelliJ IDEA actually just collapses them into lambdas by default. And it also auto-generates them for you so you don't have to type them. At least that was the state of things a couple of years ago, can only have gotten even better since then.


Do you really prefer a language without higher-order functions to one with them?

Quick, what does this do (in Scala)?

(1 to 100).map(x => x * x).reduce(_ + _) ?

Or if you don't like the underscores we could make it more explicit:

(1 to 100).map(x => x * x).reduce {case (x, y) => x + y }

Let's look at the iterative alternative:

var (i, sum) = (1, 0)

while (i <= 100) { sum += i * i; i += 1 }

You could argue about performance or whatever, but closures are safer, more powerful, easier to read, and more concise.

Using a language without anonymous closures is like programming blind-folded and with one hand behind your back. It's pretty much the definition of Paul Graham's "Blub" languages[1]

Of course, classes are closures and closures are classes. But classes without closures are verbose, stupid, and ugly.

[1] http://www.paulgraham.com/avg.html


  sum = (1 to 100).map(x => x * x).reduce {case (x, y) => x + y }

  sum = 0
  for i in 1..100 { sum += i * i }
I personally find the second one easier to understand. Performance is much better unless the compiler does some magic to convert the first version to the second one.

Adding more logic to the loop also doesn't make the code more complex.
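For what it's worth, Java 8 itself can write both versions; a minimal sketch using only the standard library (class and method names are mine):

```java
import java.util.stream.IntStream;

public class SumOfSquares {
    static int sumOfSquares(int n) {
        // declarative version: squares of 1..n, summed
        return IntStream.rangeClosed(1, n).map(x -> x * x).sum();
    }

    static int sumOfSquaresLoop(int n) {
        // iterative version of the same computation
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(100));     // prints 338350
        System.out.println(sumOfSquaresLoop(100)); // prints 338350
    }
}
```

Both compute the sum of squares of 1..100, i.e. 100·101·201/6 = 338350; which reads better is largely what this subthread is arguing about.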


Yeah, I agree as far as reading it goes when you switch between different projects using different versions. I do like using the Stream API though.


I work with Java 7


Using Java 8 in 2019 is pretty normal. It's "only" 5 years old (released at the beginning of 2014), nothing like the COBOL horror stories. Five-year-old Java is not very different from the software you get from Debian stable. The next LTS, Java 11, was released at the end of 2018 and does break some things, so you need time to update. Java 8 was supported by Oracle until Jan 2019, while other vendors have pledged support until 2023 and beyond.

And most importantly java8 is still a decent experience.


what cobol horror stories?


I don't know why, but Scott's comments in here came off pretty condescending.

If you're a developer on Twitter, it's really easy to fall into the trap of believing that the only developers who matter are front-end engineers. Helping develop the kernel, or integrating a new product on an existing, proven stack that runs Angular on the front and Spring in the back, doesn't produce shiny things to look at, but that doesn't make the work less important, and it really comes off as if he's saying those developers should feel bad about that.

I don't know, maybe it just rubbed me the wrong way.


Sometimes it can be mind-blowing the lengths front-end devs will go to avoid writing a little html file with self-contained script tags for a one-off throwaway page.


I think the stuff that surfaces on places like HN is mostly out of the ordinary.

I've worked in enterprise shops churning out staid Java code on weblogic servers in the past, if anything HN was kind of an escape hatch for me to see what else is going on in the tech world, even if sometimes it's driven by fads.

Most enterprise software engineering is either very complex (due to over-engineering, complex business rules, or organisational structure) or very mundane CRUD development. Not to say it isn't challenging, because often it is, with the usual scaling and data-model problems that need to meet the needs of the corporate hierarchy, but a lot of it is very bespoke.

The only commonality is maybe the frameworks and tooling (C#, Java, Spring etc) and maybe some industry level patterns that map to business processes (shopping carts, reporting systems, invoice processing etc) but the web has endless resources around these that take a while to dig through but usually you find something to solve your problem.


I would argue that Hacker News is actually not about the latest in cutting edge technology.

Rather, it derives from the spirit of hacking: fundamentally, making computers do things other people thought were not possible or never tried. It looks like the latest cutting-edge tech, but really it's a bunch of people just experimenting.

It does not show what the norm is, but what is possible.


That would be cool if that's how things worked, but it seems like nearly every post here (certainly most comments) isn't considering the shiny new tools in that context. Rather, most comments are in the vein of "this seems fun to play with, may have marginal benefit in x business scenario, how do I convince my bosses to let me spend resources implementing it?"


But when does hacking just become needlessly reinventing the wheel?


At work, we have some customers who want to use WebLogic and Websphere, and last time I checked those don't have versions that support anything beyond Java 8.

I still use Java 8 even on personal projects. I'm migrating those to Kotlin, so I'm at least not held back with regard to programming language features. I could upgrade those to Java 11 (or 12), just haven't prioritized taking the time to do that over other tasks, and the benefit would be minimal for these particular projects.

When using the Java 8 language, I'm just happy to have lambdas and streams. Java 8 was the biggest improvement to the Java language since 5, and I'd say it's a bigger leap than 5 was.
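A typical filter/map/collect pipeline shows why streams pull their weight; a small self-contained sketch (the example data is made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> langs = Arrays.asList("Java", "Kotlin", "Scala", "Clojure");

        // Names longer than four characters, upper-cased, collected to a list
        List<String> result = langs.stream()
                .filter(l -> l.length() > 4)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(result); // prints [KOTLIN, SCALA, CLOJURE]
    }
}
```

Pre-Java-8 you'd write the same thing as a loop with a temporary list; the stream version states the intent directly.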


I don't use Java at work, but at home I inevitably install it for some kind of tooling. Always Java 8. I've accidentally installed Java 9 or newer a couple times and always ended up with something broken.


I disagree. Addition of generics in Java 5 was bigger.


I'd rather keep lambdas and live with runtime polymorphism, than keep parametric polymorphism and lose lambdas.

Generics, in terms of scope, is a bigger feature, for sure. And generics make lambdas more usable. But the distance between generics and their alternative - manual casts throughout - is larger than the distance between lambdas and anonymous inner classes.
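For anyone who hasn't had to live with the "manual casts throughout" alternative, a quick before/after sketch (illustrative only; names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsDemo {
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        // Pre-Java-5: raw collections; every read needs a cast,
        // and a wrong element type only blows up at runtime
        List raw = new ArrayList();
        raw.add("hello");
        String s1 = (String) raw.get(0);

        // Java 5+: the element type is checked at compile time
        List<String> typed = new ArrayList<String>();
        typed.add("hello");
        // typed.add(42);          // would not compile
        String s2 = typed.get(0);  // no cast needed

        System.out.println(s1.equals(s2)); // prints true
    }
}
```

The distance between those two styles is exactly what the parent comment is weighing against lambdas vs. anonymous inner classes.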


Lambdas on untyped streams? I think I'll pass...

Speaking of dark matter coding, story time: a spare time project of mine involves bringing an actual, pre-generics codebase into the not quite as distant past. That beast still has real, living users! Surprisingly, a frequent complaint is running out of heap memory, so they are happily sharing their magic -Xmx incantations with values that would have been outrageously high when the code was written.


I’ve developed in environments where Java 8 was still the latest available version because of upgrade security issues.

I have a friend who, until very recently, was working on a system based on Java 6, which coincidentally was the latest version back when the both of us were in college.

I'm currently in a not-that-old project (2 years old) whose front-end was built using Angular 1.x.

I have to admit I like this, because my biggest gripe with this framework was that the API changes from version to version were so intense that you couldn't find a working tutorial for some stuff.

Now that it's not being updated that fast it's much easier.


I agree with a lot of what this article is saying, but here's a counterpoint: http://www.paulgraham.com/pypar.html (Although I think you'd have to call it something different from the python paradox nowadays).

I know personally I prefer working for companies that have these modern tech stacks, not because I'm a tech hipster or because I think these newer technologies are implicitly better than the old ones. Rather, I think it signals that the people there care about their profession and are interested enough to read blogs, keep up to date, etc., and those are the people I want to work with.

That's not a criticism of the people that want to work with what they know and go home and not think about tech at all. Just a personal preference.


Hehe... Cool, I'm a "dark matter developer" and I like it. Edit: Because of downvotes, I'll expand a bit for more interesting comment. I work in a pretty large corp. as a developer, and solve a lot of smaller day-to-day business problems - let's say it begins with Excel and ends with a web form. I also set up the server, database and anything else I need on Azure in co-operation with our (very big) IT provider. I find it quite interesting, but my workday ends at 16.


Patterns and tools exist that many a company could use and benefit greatly from.

A lot of this actually gets described in various HN posts (ex just today “use a pixel to log traffic and ditch google analytics.), and this is something I really enjoy about this community!

There’s a lot of competent posters and as long as you’re just enough competent yourself to make the distinction of “out of my league, google scale” and “could make this thing I’m building simpler and better” all is good.

The key here is to use whatever's “meeting your requirements”, and after 20 years in both pure tech and enterprise IT let me tell you: many “dark matter” devs could benefit from stepping out of their comfort zone and questioning whether there are _simpler_ as well as _better_ ways of meeting reqs.

At many places politics and culture have stood in the way, but this is slowly eroding.

I personally believe this “out of comfort zone” check, or evaluation, should be done at regular intervals, be it regarding infra, providers, tooling, or languages and their patterns.

It’s how you learn, and more important _unlearn_!

My last few years at enterprise dev shops seem to point at “we’ve won” at the floor, when it comes to embracing open source and a sharing mindset (at least here in Sweden).

For me personally — happy times! Finally I'm actually allowed to use that pgsql or open source monitoring tool, and no one raises a brow.

This gives you magnificent opportunities but also an incredible amount of potential foot-guns!


Probably one of the most true things I have ever read on this site.


It's a sentiment that is posted on here just about monthly, and this is surely not the first time you've seen it. Usually it yields a lot of cynical posts with clever new phrases for whatever is new, as if there are a lot of people feeling really defensive about whatever they're using.

HN is about the interesting parts of tech, pushing at the edges. Yet the vast majority of developers are developing vanilla IT processes in shops where they want it to be the most benign, easily transferable solution possible. The differential between the two isn't surprising or particularly interesting. It's an obvious outcome.

And in ten years the stuff people talk about on HN will become the normal stuff at those IT shops, and HN will be onto something else. And in your "legacy" shops you'll be using frameworks that incorporated the best of those ShinyNewThings. Or do people really think the Java ecosystem today (or the .NET ecosystem, etc.) hasn't changed over that time?

And that's okay. It's okay to wait until it has matured. What isn't okay is that it's usually couched in derisive, defensive terminology that denigrates whatever is "trendy". That part isn't cool.


In your last sentence, the converse also applies: would-be trendsetters denigrating mature technologies. I think the key problem is that people in this industry are quick to play the denigration card to justify their own beliefs, instead of fostering an environment of mutual respect. Remember, a lot of the Hacker News audience, for better or for worse, is impressionable green management that isn't exactly qualified to start powering their homes with miniaturized fusion reactors, and would be better off sticking with things like relational databases and COBOL if the ends justify the means.

Does this problem play out with respect to experimental medicine, I wonder?

j2ee+oracle == aspirin? isomorphic reactjs+node == S100A9 vaccine?


The direction most comments here are taking kind of proves her point.


But it's also so blindingly obvious that I don't know why she bothered. It's not like it's news.


The article explains that.


This is my favorite quote:

> For better or worse, the world still runs on Excel, Java 8, and Sharepoint, and I think it’s important for us as technology professionals to remember and be empathetic of that.


Incremental improvements to an infrastructure or rather the software ecosystem of a company isn't something many people talk about, indeed. Shiny new tools are interesting and flashy to look at.

For example, I'm very happy with the development of the company I work for over the last 2 or 3 years. I've pushed them from old unmanaged systems to mutable systems managed through Chef. There are still some really ugly things: it's mutable, it's not shiny, it's VMs running Tomcat running Java 8. I like to call it really 2000s and/or vintage. It took time to migrate all systems, to gain the trust of stakeholders, to educate and convince the ops team. And I guess in some cases we're running a really ugly mess of a system, but it's a reproducible mess.

But this was a massive improvement and value gain for the company. Suddenly we have an ops team with a lot of leverage and competence to manage an ever growing SaaS setup. And overall, the mindset of most people involved has changed over time towards standards, automation and the value of this. Automation has improved the cost efficiency of some default projects dramatically. We're still old school overall, but we're generating value.

And now the wheel of time has turned some. We've been bought, now there's 8 more development teams, now there's new products being brought in. At this point we're picking up containers at a larger scale because we have to move faster than the config management can handle with the current manpower. So now we're handling some stuff we can using the config management, some stuff with containers.

Overall, moving slow and deliberately in an infrastructure is a very valuable skill. Solve the important problems. Sometimes a trusty, ugly, old java application server isn't your important problem.


Sure it runs on Java 8. Some companies prefer stable and battle-tested software and languages over whatever is currently The Hype™.


Java usually offers one of the easier paths to finding decent developers.


Great commentary, but I disagree with this point:

> we are listening to complicated set-ups and overengineering systems of distributed networking and queues and serverless and microservices and machine learning platforms that our companies don’t need

A lot of these technologies are talked about not because they're cool or hip, but because they actually increase productivity quite significantly. You can build complex architectures with any stack. But what many new technologies enable is building a company with fewer engineers and fewer resources than what was required before.

That said: it's clearly possible to be very productive with old technologies as well. However, I just don't see new companies which need to interact with "legacy" software being run without using some sort of virtualization or containerization technology.


That was an enjoyable read. Just funny. Alright, I might not be a "dark" employee but we do run our shop on... FileMaker! Since 1993 as best I can tell.


I had no idea FileMaker was still around. Thirty-four years old and still being developed and used. Seems out-of-character for Apple to keep it going.


It's pretty sweet. I just fielded a full online order system and did it in 30 calendar days flat. It wins five-star reviews from my users and raving compliments.


Now that you put it that way, I am a "dark" employee. ha ha


I work at a company that employs 1.5k+ engineers and is currently stuck on Guava 16 (released in 2014); we upgraded to Java 8 about two years ago.

P.S.

> This is the reason many government agencies return data in PDF formats, or in XML.

Equating returning data in XML with returning data in PDF is crazy. I'd take XML over human-readable PDF 100 times to 1.


> This piece of the puzzle is the one that worries me the most. What I’m worried about is that places like Hacker News, r/programming, the tech press, and conferences expose us to a number of tech-forward biases about our industry that are overenthusiastic about the promises of new technology without talking about tradeoffs.

That's because these are marketing websites, meant to show off the newest and shiniest things, with an agenda. We like them because by our nature we like new and shiny things, but their content isn't representative of real life, like at all.

> That the loudest voices get the most credibility, and, that, as a result, we are listening to complicated set-ups and overengineering systems of distributed networking and queues and serverless and microservices and machine learning platforms that our companies don’t need, and that most other developers that pick up our work can’t relate to, or can even work with.

This is also part of human nature and my experience in the tech world confirms to me that the loudest voices are usually somewhere in the top of the bell curve of correctness or usefulness. Make friends with people who are working on real products every day and stay grounded in reality by talking to them, like knives that sharpen each other. The less they want to talk about tech or share their opinions, the more likely they are to have sane, reasonable and useful ones.


According to https://builtwith.com most of the world's top websites are built with PHP or ASP.Net, ie. pretty old technologies, yet all we hear about in the tech press and job boards is React.js and Node.


>Java 8 is still the dominant development environment, according to the JVM ecosystem report of 2018.

>If you think that’s bad...

Why would I think that's bad? The replacement for Java 8 was just recently released, does anybody really expect the world to upgrade so quickly?


Seriously. The replacement is maybe 6 months old and breaks a lot of things— the Java code I work on doesn't work with Java 11. While we can (and at some point will) figure out why and fix it, there's other things to do in the meantime. And just migrating from Oracle JDK to OpenJDK has been enough of a pain in the ass.


I'm currently searching for a replacement JAX-WS implementation so that we can migrate to Java 11, and so far I haven't been able to find one without old-ass CVEs which were patched in core Java.


Reminds me of the place I work. We have many projects for many clients: ASP.NET Web Forms, ASP.NET "old" MVC. We don't have webpack, we use BundleTransformer. We don't have MongoDB, we have SQL Server and NHibernate or Entity Framework. .NET Core is not even on the calendar yet (maybe a little sooner now that .NET Framework is essentially deprecated). We use Knockout, though a replacement is being researched. Our code is stored in SVN. We use LESS and TypeScript 2.x. Modern, eh?

If you have a single product or a small number of products to maintain, because that is your core business, you can afford to upgrade and experiment, and revert if necessary. But not if you have many projects, because those projects are by definition smaller. So you take smaller, less risky steps. You can't move that fast; it simply cannot be done unless you can create a business case for it.

The real challenge is to resist, as the article states, all the new stuff you get slapped with every day. It is not harmful to stay on somewhat older technology, technology which may _seem_ to be of another era. It is still useful, and as long as it isn't a business risk, it is often not a business case.


HN is about possibility, inspiration, and also practical/tactical insights. Writing about the past and the maintenance of old systems isn't all that interesting, so most don't want to write or read about it. We all agree there is a lot of old and once-very-cool tech in the wild, but I for one use HN to be inspired and jazzed about what the community on the edge is doing or wanting to do. :)


HN might be pointing the way to the future.

Or it might be overly focused on things that will never take the world by storm.


What I think we see are

1 languages or tech promoted by companies like Google, MS or FB

2 developers that self promote by creating a side project using the latest buzzwords and coolest languages of the month and sharing it, some do it for the CV but others do it for learning to see what is all about.

HN can't predict the future; good tech can die because it's not supported by a company with money, or because the company that controls it is incompetent and kills it. I still think C# and .NET could have been more popular if MS had open sourced them from the beginning; maybe we would now have a way to build cross-platform WPF apps and run them in the browser with an open source Silverlight.


HN is a bit of an echo chamber. A very interesting and worthwhile echo chamber, but an echo chamber nevertheless.

This is not a criticism, but probably an unavoidable by-product of how it works and the audience it attracts.


Like some sort of colourful bird mating ritual, you have to append so many explanations so as to not offend someone's trigger-happy finger and lose karma.


Yep. And for a long while the future was Ruby, then Node.js, then it was Go. Now it's Elixir? Rust? WASM? Nim?


HN is pointing the way to the future...

...and all of the failed futures as well.


There is nothing that «most people do». When will people realize this? Yes, the trends on HN are different from the trends in, say, the oil prospecting industry, and yet they overlap sometimes. You also have vastly different needs. Some need to optimize for execution speed and some for development speed. Some need high parallelization, some need single-thread performance. Analytics demands writing code in minutes or preferably seconds, while aircraft software may require years of testing for correctness. Even the level of experience will impact how well you can perform at any level with any tool. I mean, stop discussing how many teeth a horse has and just count them. We should spend time defining what problems current and old tools can solve, not discussing which tools solve one problem better, because no two people will ever have the same exact problem, let alone the same exact understanding of it.


I was on the hype train for quite a while, jumping between different systems and burning out a lot. It's mainly because I entered web dev when JavaScript was beginning to eat the world. It's the only language I know and understand well (learning Rust and Dart now). Lots and lots of ideas and frameworks keep coming up. Now when I read stuff about the backend it's pretty calm and steady, especially in the Java stack. I used to laugh at people writing Java. I complained that the JVM takes a lot of memory. But I didn't see that the JVM still completed >98% of requests whereas my Node app was completing only around 65% of them. The memory mattered on my laptop but not on the server.

Now we rapidly prototype features in Node and move them to Java once consumers like those features. Even though the stability of an application depends on the programmers, it's a good decision to use well-tested and stable stacks.


Is the story about Tesla's infrastructure true? I seem to remember it was posted before, and the consensus then was that it was a great work of fiction.


See also: “Why NASA’s newest space shuttle uses a computer chip from 2002”

https://qz.com/317406/why-nasas-newest-space-shuttle-uses-a-...


Effectively, HN has a strong bias toward interesting things.


At some point, you have to stop "looking for the next magical/better thing".

You pick something mature enough, that works well, and you ignore new cool trends (anyway there will be cooler things soon enough).

https://www.youtube.com/watch?v=ecIWPzGEbFc

What I am trying to do with my framework, https://www.spincast.org , is to ignore new cool trends as much as possible and focus on real use cases. Comments welcome.


Isn't this what the Bimodal IT strategy is all about? You want to keep current with the cutting edge (through sites like this one) so you know which future looks promising, and then incorporate the changes into your organization's systems when the time is right.

I enjoyed the article and too have felt the urge to think "why aren't we doing that?" but the business still needs to happen while you're also experimenting with tensorflow emoji recognition.


IT runs on a wide array of technologies. In desktop games and 3D graphics, which is the area I focus on and obviously a massive part of the software market if we include consoles and mobile devices, C++ is still king. Most games nowadays use either Unity or Unreal, which means C# or C++. Java is almost completely absent apart, of course, from Android, and native iOS is still Objective-C and Swift. AI, another field that interests me, is dominated by Python. Also, if we venture outside the commercial part, around 40% of software is written in languages you will never hear anything about, mainly because these are small projects where the choice of language is not an issue. Generally, the vast majority of IT is definitely not Java but a wide group of highly popular languages, mercilessly bombarded by thousands of unknown languages struggling to gain a fraction of a percentage in terms of growing their community. Essentially strength in numbers. Software is a chaotic field of countless technologies. Even something as big as Java has no hope of dominating, because it's impossible for one language to excel in a billion different scenarios. So don't worry: Java won't be conquering the software world any time soon.


I'm right now writing code for integration with an information system using a SOAP web service with digital signatures. Good old Java 8, wsimport, etc. Nothing wrong with it; it gets the job done. I'm still reluctant to throw XML out of my Spring projects: I don't like the new annotation-based approach, and XML works better for me.


HN is an early adopter community. If you want technical community, there are places like https://www.reddit.com/r/systems/.

Edit: Disclaimer: I am one of moderators and this is a blatant advertisement.


Speak for yourself, man. Lots of proud dinosaurs here, when it comes to deployed production systems.


Or even new ones. I’ve been given js+mongo apps by partners that I had to modify to use standard postgres before becoming responsible for herding them in prod.

Not everyone chases the hype cycle.


When does one start counting oneself as a dinosaur? I'm pushing 15 years.


This is so true that I can relate it to myself and my co-workers. We work on the JavaScript side most of the time, and daily there will be a set of developers talking about new bundlers, React features, hooks, etc., planning to use them in production. The thing I see that relates to this article: a few teams adopted Flow, stating it was a great tool for static typing in the JS world, and now they are forced to change to TypeScript. The same developers who introduced Flow into the codebase are now in a position to say Flow has huge drawbacks compared to TypeScript. So they are moving to TypeScript, but who knows, TypeScript can change too. We never know. I always fight against these tech updates made without understanding their cost, but in most cases I fail.


What's funny about this is that Java 8 is no longer supported by Oracle, so unless you are using openjdk or some other equivalent, IT is running on a potentially insecure platform.

Not being cutting edge is one thing, but being insecure...


There is no longer free Oracle support for Java 8. But Amazon's Corretto, led by James Gosling, is reviewing updates and even backports: https://docs.aws.amazon.com/corretto/latest/corretto-8-ug/pa...


That's if you are using Corretto, not Oracle's JDK.


Large companies have support deals with Oracle (or other providers, like Red Hat, IBM or Azul). My client has one for Java 8 until 2026 I think. One of the larger obstacles in upgrading to Java 11 is that the libraries which replace SOAP clients in JDK are not patched (unlike the JDK), and have a ton of known CVEs to them.


Those are probably all the companies using Java 8 right? ;-)


Blindness to "old", established tools and technologies is one thing, but let's not pretend Sharepoint, Java 8, Excel, et al don't have their fair share of suckiness and operational challenges.


Is anyone pretending that?


I don't think there's anything super surprising about running on Java 8. I would be more worried if she said Java 4! Java 11, the next long-term support release after Java 8, came out late last year, and there were a lot more breaking changes between 8 and 11 than there were between 7 and 8. I had to update a lot of dependencies, especially the ones that use reflection heavily (Mockito and Lombok come to mind!).

Meanwhile there are still young Java programmers who think that since 8 is the newest and sexiest thing, everything has to be written using lambdas and streams.


I'm not sure I can fully agree with the author's point: 'Don't feel bad that your company is running outdated and expensive technology with known vulnerabilities!' I'm not sure how else to read an article seeking to highlight folks running telnet and ColdFusion. Complacency here isn't serving your customers' best interests.

It's absolutely the case that HN / tech social media induces FOMO and Resume Driven Development, but it's very easy to become complacent if you adopt such a viewpoint.


>'Don't feel bad that your company is running outdated and expensive technology with known vulnerabilities!'

I don't think that is the point of the article.


But to be fair, Java 8 to Java 12 (2014 to 2019) happened a lot faster than Java 6 to Java 8 (2006 to 2014). Being on Java 8 is not a bad thing because it's actually not that old.


Java 12 won't have long-term support, so you need some compelling reason to choose it for prod. Java 11 (the next LTS release after 8) has only been out since September, and the module system (which I find pretty pointless for backend) did break some stuff, so I only feel a little bad about not being on it yet.


Agree with the article. My company probably has 95% dark matter programmers, plus we write C++ code as C with classes and keep Java 5 (!!!) and Java applets still in development for customers who don't want to leave Windows XP as their desktop OS.

IT might run on Java 8 if it's lucky, but much of it is even further behind.

Imagine that Jenkins CI officially moved to Java 8 as its JVM requirement only last year.

A ton of products are still on Java 7. And if we talk about the EE standard, I have bad news for you.


Java 8? That’s great! It’s got streams and was fun to learn. Anything older than that would be rough. We’ve still got code that runs on node 1.x so java 8 is awesome.


Hacker News is for the people that are leading their companies into tomorrow. Everything else is for the people settled in today or fighting to stay in yesterday.


Devs using the newest tech at start-ups, in hobbies, and in their side work leads to the menagerie of today's Node microservices containers (or whatever) becoming tomorrow's Java 8 on Windows 2012.

Interestingly, I work at a medium sized company that is updating existing tech to microservices and the cloud because 5 years later, it seems like those are good investments that return good value in ease of configuration, deployment, etc.


Hits home here. I started reading HN as a college freshman. At the time, a lot of the titles were gibberish to me. Over the next few years I sucked in articles and comments until I had (what I thought was) a pretty good mental model of the industry.

Well that model was... quite a bit off.

But the filter bubble has its merits. A little daily exposure to HN keeps the ideas churning when you're mostly doing mundane stuff.


this should be like, pinned, to the top of HN


Just replace HN with this static file.


Salient points, but even Java is a regional thing. In Texas, if you aren't using .NET then you are in the minority.


There is nothing that wrong with Java 8, honestly.

Have we done better since? Debatable, but maybe. There certainly hasn't been a quantum leap, though. Two steps forward and one step back, more like (and sometimes the reverse).

Just the title shows you how much maturity the article can muster, and the rest doesn't really hold up better.


I always thought it was common sense that most of the industry is still running on legacy stuff. I also thought it was common sense to critically examine others' stories and opinions and to evaluate them in your own context before forming your judgment. But it seems it's not.


Nice article. As someone who has worked at a company that DOES operate on the bleeding edge of technology, it's also hard to go the other way.

Only after applying for other jobs have I realized that essentially none of my skills are transferable to the majority of businesses out there.


> Java 8 is still the dominant development environment, according to the JVM ecosystem report of 2018.

At the time the cited survey was conducted, Java 11 had not yet been released, so Java 8 was still the current LTS version.

> There’s a lot to dislike about the [HN] commenters ...

That seems uncalled for.


We have some java 1.4 code, which is running on OC4J app server version 10.1.3 with apache tapestry version 4.1 app framework. I have to look up some official documentation on the internet archive because much of it is 404d. Can someone please top this? :)


Yup still trying to migrate a big repo to git, and have been at it on and off for two years. Once your products have a decade or more of history and 100 man years or more of effort, they always carry a bit of momentum.


I love it because it is so true. I live in the bubble of the Valley, and I cannot even begin to recount the number of arguments I have had with people about why a big enterprise just cannot do/move/etc. to X.


> I’ve finally come to realize that most businesses and developers simply don’t revolve around whatever’s trending on HN.

shocking.

but truth be told, HN has a share in my hatred of any IT job in my country, and in my past professional decisions.


It's a great argument. But at the same time it is important to see where a lot of new investment is going. Enterprise is changing too. So the next 10 years will look a lot different, in my opinion.


I work on an Android codebase. I wish I could use all of Java 8...


I think the author might not realize the substantial difference between working in a tech company (the HN audience) and in supporting roles in non-tech businesses. I have worked in both worlds, and it really just comes down to choosing the right tools for the job. In tech you're supposed to be on the cutting edge. That's part of what makes it fun and rewarding.

However, if anybody feels lesser for working in non-tech they're probably thinking the wrong way about the value they're creating. It can be immensely rewarding to build something unsexy and relatively simple that moves the needle by millions or even billions of dollars.


> I think the author might not realize the substantial difference between working in a tech company (the HN audience) and in supporting roles in non-tech businesses.

I too underestimated this -- a tech internship at ExxonMobil was such a vastly different experience from a tech internship at Facebook that they could barely both be considered the same field.


The biggest tech companies in the world run on mountains of C and Obj-C code that are old enough to buy a beer or rent a car without paying extra.

The thing about the bleeding edge is that it's too easy to cut yourself and bleed out. I see lots of people bogging themselves down in the details of their sexy new framework without understanding the principles of what it is doing - which is rarely different from using cool new thing n-1 or n-2 correctly.


I think you're overestimating how much development at tech companies is done using newer languages. Java is the dominant language at both Amazon and Google.


Yep, if your company's core business is not tech, be prepared to work on a shitload of legacy and outdated technologies.


Java 8? I wish! We're mostly stuck with Java 6 and 7! :(


Honestly I would choose Java and not Go, Elixir etc. if I was writing software for an insurance company or other customers that demand complex business logic.


Everything is fine with the HN front page - technology enthusiasts should talk about bleeding edge technologies.

You expect them to discuss VB6 or Sharepoint? Well, the problem is not with them.

If you don't learn new things, your skills will soon be outdated - the world is moving forward and technologies are racing ahead. You shouldn't rewrite everything with a new tool/language, but you should constantly read about what new techs are around and what is interesting or useful for your business, or just for you personally.


I was right with the author until the last line where it was suggested that I just need to live with sharepoint. :(


A third of all websites run on WordPress; it's not trendy, but it works and has a huge library of plugins and themes.


>> Not me, but my mom's a COBOL programmer for a Fortune 50.

Smart girls learn COBOL.


> IT Runs on Java 8 in Go


Actually, it runs on COBOL. But whatever, self-research is always amusing.


> IT Runs on Java 8 using Go

> IT Runs on Java 8 using Rust

> IT Runs on Java 8 using Tensorflow

Still makes sense


Maybe it's wrong to see startup tech as the cutting-edge part of IT.

IT may just be another industry, similar to bio or agriculture, and not part of startup tech.

Maybe the IT crowd should simply avoid sharing common forums like HN with startup and SV people.


I hope not! HN is for everybody who is intellectually curious.


Most of the comments here are missing the point. Very HN.


HN is biased towards startups. Startups are mostly very short lived operations. The lucky ones end up re-implementing their core technology several times.

That stimulates a pattern of behavior where people emphasize engineering things that differentiate their business from what others are doing. You don't build a tech startup on doing basically what companies that already exist have been doing for years; if you copy those, you'll be exactly like them and unable to differentiate. Also, startups don't have to worry about supporting still-relevant software inherited from way back. It's a clean slate by definition, and that means picking current versions of whatever technology is relevant to you. So this is simply a form of selection bias.

It's actually interesting to see what startups are doing because many normal businesses end up cherry picking what happens there some time later; particularly when those startups survive (most of course don't).

The title mentions Java 8, which is not that old yet. It was basically the current version until last year. 9 and 10 were non LTS releases with only 6 months support. v11 was only released summer last year.

If you use Scala or Kotlin or most other JVM languages, v11 brings very little new to the table that actually matters. It's a very minor release unless you actually use the Java language. GraalVM, which is a popular new related technology, is still stuck on v8. Personally, I'm a big Kotlin fan. It looks exactly the same on JDK 11 as it did on JDK 8, and there's no big need to update. Actually, because of Android, Kotlin is still biased towards Java 7.

I always look at technology platforms from a point of view of risk management. In a startup, you need to take risks but not across the board. If you go for some new funky storage layer (nosql, event stores, etc.), pair it with some new language nobody is using yet, and a bunch of frameworks that got released on github 3 weeks ago in a pre 1.0 state, then you are taking a lot of risk and the chance increases that some of it won't work out as you hope. It's fine if your startup is about that technology stack but otherwise, you may want to be more conservative and pick your risks more carefully.

If, for example, you need to have some microservice with a bog standard REST API running, pick something that you are comfortable with or plan to take a calculated risk by e.g. trying out a new framework but pairing it with a DB and language you already know.

If your startup is building what is a glorified web shop or something similarly mundane but necessary, there are a few good arguments for just sticking with proven technology. This is why e.g. Spring Boot is pretty popular with some startups. It's simple and new enough that you can get away with it without looking too old fashioned. And if you need to scale an engineering team around that in a hurry, it's kind of nice to have something reasonably widely used, understood, and mature.

People stick with Ruby on Rails for the same reason, even though lots of people now turn their noses up at stuff like that. A lot of fintech startups tend to stick with stuff like this. They'll use current versions and might bring in e.g. Scala or Clojure, but overall they are definitely biased towards the more enterprisey stuff.

Funnily enough, node.js is now old enough that you see it creeping into the enterprise world as well. Personally, for me that's still in the high-risk category due to the high amount of change and turnover with many npms. I have a few legacy node.js projects, and updating their dependencies is an enormous PITA because basically everything breaks if you do that to a node.js project that is more than a few months old. It's like a snapshot of what was fashionable a while back. Most of my remaining Java projects on Github I update once every year or so (or whenever I need to touch them), typically without any drama whatsoever. Going from v7 or v8 to v11 is mostly just a matter of bumping a few versions. Of course v11 can use them as is, so there's no big need to do this, but I like to stay on top of it generally.

If you run a company that still needs to exist in two years, resilience over time is a good thing.


Perfect!


Great post


Oracle


"And, if the tech is, in fact old and outdated, and the tradeoff from replacing it is lower than the tradeoff of keeping it, we shouldn’t be jumping to replace it with the latest and greatest. While we should evaluate new tools evenhandedly, most times, Postgres works just fine."

Bravo! Buzzword fanboy club here on "Hacker" "News", please take notice: outside of this massively biased webshit / GNU/Linux / GPL / Rust / JavaScript echo chamber, "Hacker" "News" is the butt of jokes. Try to guess why.


Dunno what hole she is in, but Excel and Sharepoint are things that new-age companies don't work with; rather, they're the tools of choice for all things at the legacy companies still chugging along.


You may be very surprised how much very serious stuff runs in and is still being written on top of Excel. Trading models dealing with tens of millions of dollars a day for example.


Yup, can confirm, Excel is everywhere. And it's even applied where it's not appropriate (VBA macros running for hours and hours on datasets way too large for them, completely inefficiently).


Also key components in calculations that a major investment bank would use to compute numbers for a major deal.

The list is very long.


"Legacy ones" as in almost all companies. The fact is that the economy, in large part, runs on Excel.


And in addition, Visual Basic.

Oh, don't forget COBOL.


The part where they said they spent a lot of money to port to C/C++ but couldn't get the same efficiency/speed: that's the problem with most porting endeavours. You need people who grok both the language you're porting from and the one you're porting to, or who at least understand the specs and business logic. That COBOL program has probably been optimised so much that the code makes no sense to people without an understanding of the intricacies of that language.

My team spent a whole quarter converting R code into Python, because we wanted to use TensorFlow for machine learning (*). When they finally got the thing running, they found out it wasn't performing as fast as the R version. I thought that couldn't be possible: they use pretty much the same linear algebra libraries. So I peeked into the code to see what went wrong and found out they were writing it the wrong way: (1) they calculated on the pandas DataFrame directly instead of extracting the values first when doing matrix calculations, and (2) they were using matrix instead of a plain ndarray, which is slower. Both are things someone without experience in Python wouldn't have known.

(*) In hindsight, did it have to be TensorFlow? Besides, there's already an interface for R [1]. Maybe the team decided on Python anyway in case they wanted to try out the plethora of ML libraries available for that language.

[1] https://tensorflow.rstudio.com/
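The two pitfalls above can be sketched in a few lines. This is a minimal illustration with synthetic data, not the actual code in question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 50))

# Pitfall 1: doing the linear algebra through the DataFrame drags pandas
# overhead (index alignment, dtype checks) into the numeric hot path.
slow = df.dot(df.T)

# Fix: extract the underlying ndarray once, then stay in NumPy.
values = df.values            # df.to_numpy() on pandas >= 0.24
fast = values @ values.T

# Pitfall 2: np.matrix is a legacy subclass that NumPy's own docs
# discourage; a plain ndarray with the @ operator is the recommended path.
m = np.matrix(values)
via_matrix = m * m.T          # np.matrix multiplication semantics

# Both routes compute the same product.
assert np.allclose(slow.values, fast)
assert np.allclose(via_matrix, fast)
```

The performance gap usually comes from repeating DataFrame-level operations inside the hot loop; extracting `.values` once before the heavy math tends to put NumPy back on par with R, since both delegate to the same underlying BLAS routines.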


I have been part of a couple of migration projects from COBOL (and the accompanying mainframe tech) to newer tech. COBOL is a very simple language. What makes porting next to impossible is the functional knowledge of the application: how it interacts with upstream/downstream systems, all the special-case handling, and how business users expect the new system to behave (hint: the same as the existing system). The biggest challenge is that you can't port an entire system at once (it takes years), so when you start migrating part by part, each piece has to interact with its neighbouring systems exactly the way the old system did. And that is the most difficult part of porting, IMHO.


The hole she is in is called "long actual experience in our industry".


And what would be the future replacement of Excel? Specially for non-tech people.


yep, they run whatever version of Confluence was released 4 years ago


Dunno what hole she is in, but Excel and Sharepoint are things that small companies struggling to make a profit don't work with; rather, they're the tools of choice for all things at the hugely profitable organisations that will pick the bones of those smaller companies in a few years' time.


IT runs on Java 8. IT in 2030 will not run on Java 8.


Are you talking about Java 8 specifically, or are you saying that for some reason there is going to be a sea change and corporate IT is going to start using all the buzzwords and tearing down their old systems?

If it's the latter, and seeing how much enterprise has shifted in the last 30 years... if I were a betting fellow I'd say big systems will still be on Java 8 in ten years' time. Mayyyybe a freshly discovered catastrophic flaw in the JVM causes them to upgrade a version!


They won't tear down anything. But there will be more and more systems that were built using newer things, some which we consider cutting edge right now. Some companies will get acquired, some will go bankrupt and their old systems deprecated, etc.


You might be surprised. I know companies still using Java 5 almost 15 years after release...


It will be Java 15 then.


Or maybe Keystra


Plenty of enterprise technologies have lasted more than 16 years. People are still paying good money to get help with software that's been running since the 60's. Less IT will be running on Java 8 by 2030. Maybe.


It will run on twigs and pebbles.



