
It's more like JITs got good.

I never understood why AOT never took off for Java. The "write once, run anywhere" argument quickly faded; the number of platforms a software package needs to support is rather small.

Because developers don't like to pay for tools.

https://en.wikipedia.org/wiki/Excelsior_JET

https://www.ptc.com/en/products/developer-tools/perc

https://www.aicas.com/products-services/jamaicavm/

It is now getting adopted because GraalVM and OpenJ9 are available for free.

Also, while not being proper Java, Android has done AOT since version 5 and mixed JIT/AOT since version 7.

EDIT: Fixed the sentence regarding Android versions.


Developers pay for tools gladly when the pricing model isn’t based on how much money you’re making.

I’m happy to drop a fixed €200/mo on Claude, but I’d never sign paperwork that required us to track user installs and deliver $0.02 per install to someone.


Especially not if those kinds of contracts don't survive an acquisition, because then your acquisition is most likely dead in the water. The acquirer would have to renegotiate the license, and with a bit of bad luck they'd be screwed over because they have nowhere else to go.

I have seen worse, where people updated the EULA 6 months after being paid $14k/seat.

Now it is FOSS all the way... lesson learned... =3

https://www.youtube.com/watch?v=WpE_xMRiCLE


That is something I never understood: how that's even legal. You enter into an agreement (let's call it a contract, because that's how the other side treats it) and then, retroactively, they get to pull the rug right out from under you.

I made the 'FOSS all the way' decision somewhere in '96 or so but unfortunately our bookkeeping system and our own software package only worked on Windows (this was an end-user thing) so we had to keep one windows machine around. I was pretty happy when we finally switched it off.

The funny thing is that I wouldn't even know where to start to develop on/for mac or windows, Linux just feels so much more powerful in that sense. Yes, it has some sharp edges but for the most part it is the best thing that could have happened to the world of software development.


I have done native cross-platform projects in https://wxwidgets.org/ and https://quasar.dev/ . Fine for basic interfaces, but static linking on Win64 gets dicey with LGPL libraries etc. YMMV. For iOS targets, one must use a macOS environment with a non-free Apple developer account.

Personally, I like Apache 2.0 and standard quality-of-life *nix build tools. Everything Windows now runs off a frozen VM backing image (a KVM COW file), as even Microsoft can no longer resist the urge to break things. =3


Depends on the use case; anyone who has seen the commercial host-scaling cost of options like MATLAB has usually ported to another language. Lesson learned...

Commercial licensing is simply a variable cost, and if there is another FOSS option, most people will make the right call. Some commercial licenses are just Faustian bargains that can cost serious money to escape. =3


I think what they do is correct. We also need to get paid this way.

You could do AOT Java using gcj; it didn't need commercial tools.

If we ignore that gcj was never production ready, and that basically the only good use case Red Hat sponsored was compiling Eclipse, which was usually slower than using the JIT anyway.

And that around 2009 most of the team left the project (some went to OpenJDK, others elsewhere), while GCC kept it around because the gcj unit tests stressed parts of GCC that weren't exercised by other frontends, until the decision came to remove it completely.

As a side note, I expect a similar outcome for gccgo, abandoned since Go added generics support.


You don't have to pay for dotnet AOT.

Actually you do, indirectly: via Windows licenses, Office, Azure, Visual Studio Professional and Ultimate licenses, and the C# DevKit.

Also, you are forgetting that AOT first came with NGEN and .NET Native, both commercial, and on the Mono side, Xamarin had some price points for AOT optimizations, if I recall correctly.

However this is a moot point, you also don't pay for GraalVM, OpenJ9, or Android.


> I never understood why AOT never took off for Java.

GraalVM native images certainly are being adopted, the creation of native binaries via GraalVM is seamlessly integrated into stacks like Quarkus or Spring Boot. One small example would be kcctl, a CLI client for Kafka Connect (https://github.com/kcctl/kcctl/). I guess it boils down to the question of what constitutes "taking off" for you?

But it's also not that native images are unambiguously superior to running on the JVM. Build times definitely leave something to be desired, not all third-party libraries can easily be used, not all GCs are supported, the closed-world assumption is not always practical, and peak performance may be better with the JIT. So the way I see it, AOT-compiled apps are currently seen as a tactical tool by the Java community, utilized when their advantages (e.g. fast start-up) matter.

That said, interesting work is happening in OpenJDK's Project Leyden, which aims to move more work to AOT while being less disruptive to the development experience than GraalVM native binaries. Arguably, if you're using CDS, you are using AOT.


Well, one aspect is how dynamic the platform is.

It simply defaults to an open world where you could just load a class from any source at any time to subclass something, or straight up apply some transformation to classes as they load via instrumentation. And defaults matter, so AOT compilation is not completely trivial (though it's not too bad either with GraalVM's native image, given that the framework you use (if any) supports it).

Meanwhile most "AOT-first" languages assume a closed-world where everything "that could ever exist" is already known fully.


Except when they support dynamic linking, they pay the indirect-call cost that JITs can remove.

Dynamic class loading is a major issue, and it's an integral feature. Realistically, there are very few cases where AOT and Java make sense.

Coincidentally on the front page, https://news.ycombinator.com/item?id=45989650

ARM also used to have opcodes for Java: https://en.wikipedia.org/wiki/Jazelle


Haha, I was the one who submitted it after going down a rabbit hole from this :)

It's the same blind spot people have with Java's checked exceptions. People commonly resort to Pokemon exception handling and either blindly ignore exceptions or rethrow them as runtime exceptions. When Rust got popular, I was a bit confused by people talking about how great Result is; it's essentially a checked exception without a stack trace.

"Checked Exceptions Are Actually Good" gang, rise up! :p

I think adoption would have played out very differently if there had only been some more syntactic sugar. For example, an easy syntax for saying: "In this method, any (checked) DeepException e that bubbles up should immediately be replaced by a new (checked) MylayerException(e) that contains the original one as a cause."

We might still get lazy programmers making systems where every damn thing goes into a generic MylayerException, but that mess would still be way easier to fix later than a hundred scattered RuntimeExceptions.
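
For comparison, here's the hand-written boilerplate that sugar would replace, a sketch using the hypothetical DeepException/MylayerException from above:

    // The wrap-and-rethrow boilerplate the proposed sugar would replace.
    // DeepException and MylayerException are the hypothetical types above.
    class DeepException extends Exception {
        DeepException(String msg) { super(msg); }
    }

    class MylayerException extends Exception {
        MylayerException(Throwable cause) { super(cause); }
    }

    class MyLayer {
        void doWork() throws MylayerException {
            try {
                deepCall();
            } catch (DeepException e) {
                throw new MylayerException(e);  // keep the original as the cause
            }
        }

        void deepCall() throws DeepException {
            throw new DeepException("something failed down here");
        }
    }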


Exception handling would be better than what we're seeing here.

The problem is that any non-trivial software is composition, and encapsulation means most errors aren't recoverable.

We just need easy ways to propagate exceptions out to the appropriate reliability boundary, i.e. the transaction / request / config loading, and fail it sensibly, with an easily diagnosable message and without crashing the whole process.

C# or unchecked Java exceptions are actually fairly close to ideal for this.

The correct paradigm is "prefer throw to catch" -- requiring devs to check every ret-val just created thousands of opportunities for mistakes to be made.

By contrast, a reliable C# or Java version might have just 3 catch clauses and handle errors arising below sensibly without any developer effort.

https://literatejava.com/exceptions/ten-practices-for-perfec...
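
A minimal sketch of what such a boundary can look like (all names illustrative, not a definitive implementation): everything below it just throws, and a few catch clauses fail the one request sensibly without taking down the process.

    import java.util.logging.Logger;

    // "Prefer throw to catch": business code below the boundary just throws;
    // a handful of catch clauses at the request boundary do the recovery.
    public class RequestBoundary {
        private static final Logger LOG = Logger.getLogger("app");

        String serve(String request) {
            try {
                return handle(request);              // just throws on failure
            } catch (IllegalArgumentException e) {
                return "400 " + e.getMessage();      // caller error: fail the request
            } catch (Exception e) {
                LOG.severe("request failed: " + e);  // diagnosable, process survives
                return "500 internal error";
            }
        }

        private String handle(String request) {
            if (request.isBlank()) throw new IllegalArgumentException("empty request");
            return "200 OK";
        }
    }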


I'm with you! Checked exceptions are actually good and the hate for them is super short sighted. The exact same criticisms levied at checked exceptions apply to static typing in general, but people acknowledge the great value static types have for preventing errors at compile time. Checked exceptions have that same value, but are dunked on for some reason.

The dislike probably comes down to a few reasons.

1. In most cases they don't want to handle `InterruptedException` or `IOException`, and yet they need to bubble them up. In that case the code is very verbose.

2. It makes lambdas and higher-order functions incompatible. So e.g. if you're passing a function to forEach, you're forced to wrap its checked exceptions in a runtime exception (sketched below).

3. Due to (1) and (2), most people become lazy and do `throws Exception`, which negates most of the advantages of having checked exceptions in the first place.
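
Point 2 in practice: Consumer.accept declares no checked exceptions, so the lambda body has to smuggle the IOException out. A small sketch (the file names are made up):

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    // Consumer.accept has no "throws" clause, so a checked IOException can't
    // escape a forEach lambda directly; the stock workaround is to wrap it.
    public class ForEachWrap {
        public static void main(String[] args) {
            List<Path> paths = List.of(Path.of("a.txt"), Path.of("b.txt"));
            paths.forEach(p -> {
                try {
                    System.out.println(Files.readString(p).length());
                } catch (IOException e) {
                    throw new UncheckedIOException(e);  // point 2's forced wrap
                }
            });
        }
    }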

In line-of-business apps (where Java is used the most), an uncaught exception is not a big deal. It will bubble up and get handled somewhere far up the stack (e.g. by the server's logger) without disrupting other parts of the application. This reduces the utility of having every function declare InterruptedException / IOException when those hardly ever happen.


Java checked exceptions suffer from a lack of generic exception types ("throws T", where T can be e.g. "Exception", "Exception1|Exception2", or "never"). This would also require union types and a bottom type. Without generics, higher-order functions are very hard to use.

> 2. it makes lambdas and functions incompatible.

This is true, but the hate predated lambdas in Java.
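
For what it's worth, Java can express a single generic "throws E" today; it's the union/bottom part that's missing. A sketch (ThrowingFunction is a hypothetical interface, not a JDK type):

    // A single generic "throws E" already works; what's missing is union
    // types ("E1|E2") and a bottom type ("never") to compose several of them.
    @FunctionalInterface
    interface ThrowingFunction<T, R, E extends Exception> {
        R apply(T t) throws E;
    }

    class HigherOrder {
        // The lambda's checked exception flows through the signature...
        static <T, R, E extends Exception> R applyOnce(
                ThrowingFunction<T, R, E> f, T t) throws E {
            return f.apply(t);
        }
        // ...but composing two functions with different checked exceptions
        // has no "throws E1|E2" spelling, which is where this breaks down.
    }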


You could always manually build the same thing as a lambda with a class, and you had the same problem.

> an uncaught exception is not a big deal

In my experience, it actually is a big deal, leaving a wake of indeterminate state behind after stack unwinding. The app then fails with heisenbugs later, raising more exceptions that get ignored, compounding the problem.

People just shrug off that unreliability as an unavoidable cost of doing business.


Yeah, in both cases it's a layering situation, where it's the duty of your code to decide what layers of abstraction need to be bridged, and to execute on that decision. Translating/wrapping exception types from deeper functions is the same as translating/wrapping return types in the same places.

I think it comes down to a psychological or use-case issue: people hate thinking about errors and handling them, because it's the hard stuff that always consumes more time than we'd like to think. Not just digitally, but in physical machines too. It's also easier to put off "for later."


Checked exceptions in theory were good, but Java simply did not add facilities to handle or support them well in many APIs. Even the newer APIs in Java (Streams, etc.) do not support checked exceptions.

There is also the problem that they decided to make all references nullable, so `NullPointerException`s could appear everywhere. This "forced" them to introduce the escape hatch of `RuntimeException`, which of course was way overused immediately, normalizing it.

It's a lot lighter: a stack trace takes a lot of overhead to generate; a Result has no overhead for a failure. The overhead (panic) only comes once the failure can't be handled. (Most books on Java/C# don't explain that throwing exceptions has a high performance overhead.)

Exceptions force a panic on all errors, which is why they're supposed to be used in "exceptional" situations. To avoid exceptions when an error is expected, (eof, broken socket, file not found,) you either have to use an unnatural return type or accept the performance penalty of the panic that happens when you "throw."

In Rust, the stack trace happens at panic (unwrap), which is when the error isn't handled. IE, it's not when the file isn't found, it's when the error isn't handled.
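
In Java terms, the "unnatural return type" for expected errors might look like this sealed Result sketch (illustrative only, not a JDK type; requires Java 17+), which costs no stack trace on the failure path seen by the caller:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Expected failures (file not found etc.) become plain values instead
    // of thrown exceptions. This Result type is a sketch, not a library API.
    sealed interface Result<T> permits Ok, Err {}
    record Ok<T>(T value) implements Result<T> {}
    record Err<T>(String reason) implements Result<T> {}

    class SafeReader {
        static Result<String> read(Path p) {
            try {
                return new Ok<>(Files.readString(p));
            } catch (IOException e) {
                return new Err<>("cannot read " + p + ": " + e.getMessage());
            }
        }
    }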


> Exceptions force a panic on all errors

What do you mean?

Exceptions do not force panic at all. In most practical situations, an exception unhandled close to where it was thrown will eventually get logged. It's kind of a "local" panic, if you will, that will terminate the specific function, but the rest of the program will remain unaffected. For example, a web server might throw an exception while processing a specific HTTP request, but other HTTP requests are unaffected.

Throwing an exception does not necessarily mean that your program is suddenly in an unsupported state, and therefore does not require terminating the entire program.


> Throwing an exception does not necessarily mean that your program is suddenly in an unsupported state, and therefore does not require terminating the entire program.

That's not what a panic means. Take a read through Go's panic/recover mechanism; it's similar to exceptions, but the semantics (with multiple return values) make it clear that panic is for exceptional situations. (I.e., panic isn't for "file not found"; it's for when code isn't written to handle "file not found.")

Even Rust has mechanisms to panic without aborting the process, although I will readily admit that I haven't used them and don't understand them: https://doc.rust-lang.org/std/panic/fn.resume_unwind.html


> Throwing an exception does not necessarily mean that your program is suddenly in an unsupported state

When everyone uses runtime exceptions and doesn't account for exception handling in every possible code path, that's exactly what it means.


Sure, but the same is true of any error handling strategy.

When you work with exceptions, the key is to assume that every line can throw unless proven otherwise, which in practice means almost all lines of code can throw. Once you adopt that mental model, things get easier.


Explicit error handling strategies allow you to not worry about all the code paths that explicitly cannot throw -- which is a lot of them. It makes life a lot easier in the non-throwing case, and doesn't complicate life any more in the throwing case as compared to exception-based error handling.

It also makes errors part of the API contract, which is where they belong, because they are.


I would respectfully disagree with most of what you said. I guess individual perspective depends a lot on the kinds of code you work on.

The point about being explicitly part of the API stands, though.


> a stack trace takes a lot of overhead to generate

Can't HotSpot skip generating the stack trace when it knows the exception will be caught and the stack trace ignored?


It can and that optimization has existed for a while.

Actually it can also just turn off the collection of stack traces entirely for throw sites that are being hit all the time. But most Java code doesn't need this because code only throws exceptions for exceptional situations.
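
(The HotSpot behavior behind "hot throw sites" is the on-by-default OmitStackTraceInFastThrow optimization, which applies to the built-in implicit exceptions.) User code can also opt out of stack-trace capture itself, via Throwable's protected four-argument constructor, making a throw roughly as cheap as returning an error value:

    // Opting out of stack-trace capture explicitly; the last argument is
    // writableStackTrace. Useful when an exception type is thrown on a hot,
    // expected path and the trace would be ignored anyway.
    class CheapException extends Exception {
        CheapException(String msg) {
            super(msg, null, false, false);  // no suppression, no stack trace
        }
    }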


> it's essentially a checked exception without a stack trace

In theory, theory and practice are the same. In practice...

You can't throw a checked exception in a stream, and this fact actually underlines the key difference between an exception and a Result: a Result is in return position, while exceptions are a side effect with their own control flow. Because of that, once your method throws an exception, or you are writing code in a try block that catches one, you become blind to further exceptions of that type, even if you might be able to, or be required to, fix those errors. Results are required to be handled individually, and you get syntactic sugar to easily propagate them back.

It is trivial to include a stack trace, but stack traces are really only useful for identifying where something occurred. What is generally superior is attaching context as you propagate back, which happens trivially with judicious use of custom error types with From impls. Done that way, the error message uniquely identifies the origin and the path it passed through, without intermediate stack noise. With exceptions, you would always need to catch each exception and rethrow a new one wrapping the old to add contextual information, and to avoid catching too much you need variables that are initialized inside the try block to be declared outside of it. So stack traces are basically only useful when you are doing Pokemon exception handling.
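
That scoping awkwardness looks like this in Java (a sketch; the path and messages are made up):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class NarrowCatch {
        static String loadConfig(Path p) {
            String config;  // must be declared outside the try to stay in scope
            try {
                config = Files.readString(p);
            } catch (IOException e) {
                // add context, then rethrow wrapped
                throw new IllegalStateException("while loading " + p, e);
            }
            return config.strip();  // used outside the (deliberately narrow) try
        }
    }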


Checked exceptions failed because, when used properly, they fossilize method signatures. They're fine if your code will never be changed, and they're fine when you control 100% of the users of the throwing code. If you're distributing a library... no bueno.

That’s just not true. They required that you use hierarchical exception types and define your own library exception type that you declare at the boundary.

The same is required for any principled error handling.


> When Rust got popular, I was a bit confused by people talking about how great Result is; it's essentially a checked exception without a stack trace.

It's not a checked exception without a stack trace.

Rust doesn't have Java's checked or unchecked exception semantics at the moment. Panics are more like Java's Errors (e.g. OOM error). Results are just error codes on steroids.


And Nvidia has a target on its back right now. It's priced like it's the only game in town, but AMD, Google, Meta and Intel have varying degrees of competing chips for AI use.

Even the shoeshine boy is giving stock tips about the AI bubble and selling NVDA.

I think we are in an insane LLM/compute bubble, but I just put on long trades in Palantir and Nvidia over the past few days.

Sentiment is just so one sided right now.


> Meta and Intel

?


Meta: https://ai.meta.com/blog/next-generation-meta-training-infer...

They don't sell it, but they could if Nvidia started charging too much, and some of these are going into Meta datacenters instead of Nvidia.

Intel makes the Arc. It's probably the least viable competitor on that list, but the fundamentals are there, so some amount of focus and investment could result in a viable competitor.


Do you not just shred them and send them to a scrap metal processor?

Only pennies before 1982 are worth scrapping as they are made of copper.

The newer pennies are not really worth the effort as they are mostly zinc.

Ironically, if they are no longer illegal to melt down (IANAL, but I would think this is true?), they would actually be more worth scrapping because the legal risk is gone.


No law in relation to pennies has changed. The executive branch has simply taken the law stating the mint should create as many pennies as necessary, and decided that the necessary amount is 0.

The practicalities of their illegality then come down to enforcement. Given the current executive branch's behavior related to enforcement of laws, that can mean anything from "melt them all down" to "don't do it" to "if our friends start doing it, it'll be legal; if our enemies start doing it, we'll enforce".


> The newer pennies are not really worth the effort as they are mostly zinc.

They're still worth $1 per lb., and you have to destroy them, anyway.


It's their mix with copper, I believe, that makes them less valuable than their raw value in zinc, if that's what your number is based on, because the cost of separating out the copper is greater than simply sourcing other materials.


I read this as a joke ($1/lb because 100 pennies weigh about a pound, although online sources make it sound like it's closer to 200 pennies per pound).

Someone producing brass (copper-zinc alloy) could presumably use them, as they only need to add extra copper.

We can turn them into suntan lotion!

hahah ok actually I love that.

I think, however, the problem would be the trouble of separating the zinc from the copper; you would likely still operate at a loss, but this is just a guess.


It's called Coppertone for a reason

I've been listening to Marketplace less because of stories like this. The half cent went away, the penny went away, other countries have discontinued currencies. You keep accepting pennies and you round when people pay in cash. At some point, your register will do the rounding for you. There isn't really a story here.

The register might already do the rounding if it was designed to work in Canada, which got rid of the penny over a decade ago.
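
The arithmetic the register does is tiny. A sketch of Canadian-style rounding to the nearest five cents, applied only to cash tenders (illustrative, not a description of any particular register):

    // Cash totals round to the nearest 5 cents; card amounts stay exact.
    public class CashRounding {
        static long roundCashCents(long cents) {
            return Math.round(cents / 5.0) * 5;  // 1092 -> 1090, 1093 -> 1095
        }

        public static void main(String[] args) {
            System.out.println(roundCashCents(1093));  // prints 1095
            System.out.println(roundCashCents(1092));  // prints 1090
        }
    }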

There's a bunch of regulations that need tweaking. AIUI, it's illegal to charge SNAP customers more than other customers, and someone who pays cash and gets rounded down technically pays less than what the government got charged. It's only on the order of pennies; I don't think the law cares about that at all.

That one is easy without regulatory changes: just round the SNAP transactions. The SNAP equal treatment rule only requires charging SNAP customers the same price as cash purchases, not the same price as credit or debit card purchases.

Is that a federal law or a state law? Whichever jurisdiction it is, surely you'd only need a one-clause amendment to add an exception for rounding cash transactions by up to two cents to account for the discontinuation of pennies... I just can't imagine that taking more than a few weeks to resolve; surely your political systems in the US haven't become so dysfunctional that this couldn't be fixed that quickly?

> surely your political systems in the US haven't become so dysfunctional that this couldn't be fixed that quickly?

In America this can be done - by 2028 or thereabouts :)


I don't buy the SNAP argument because there's already rounding when taxes are applied, and half cents are still legal tender, so you could go into a store, tell them they should have charged half a cent less, and then they'd be in a similar trivial violation of SNAP.

Yeah this is the kind of objection dreamed up by an engineer, who thinks that law is mechanically applied. In reality, if there are no other factors this spends two seconds in front of a judge, who then throws it out with prejudice for wasting the court’s time.

The problem is these small customers never drive enough sales to bother with—you're better off investing in a feature for a large customer. And by the time small customers get large enough to need things like complex permissioning, they've outgrown Heroku and will be onboarding anyway. Giving startups credits really might be the most effective way to handle rough edges for small shops.

As a startup, I'd probably bite the bullet of one-time setup pain for a database, blob store, load balancer, and service hosting at a major cloud provider because those systems will be rock-solid with well-understood APIs. Full disclosure: I work for a major cloud provider.


I researched this a bit and found there really is no standardized "cup of coffee" for research purposes. Even for volume, I've seen it range from 6 to 12 fl oz. The main mechanisms of action are caffeine and flavonoids, and there's so much variation across beans and brewing methods that you'd think researchers would try to include that in their data to normalize it.

Caffeine, flavonoids and monoamine oxidase inhibitors.

Route 53 is one of the few intuitively named services they offer.

Only if you're already familiar with DNS.

Whether or not this is the case for this particular study, I wouldn't be surprised if they end up being miracle drugs that reduce everything from heart disease to liver disease to cancer through weight loss and reduced alcohol consumption.
