
In the code I write where it matters, the issue is allocation and deallocation time. Thus you don't do those things on the hot path, in either GC'd or manually memory-managed environments.

Given that, the overhead becomes zero in either.

Is it sometimes harder to write zero-alloc code in GC'd languages? Sure, but it's not impossible.

In Java, for instance, the difference in performance compared to C++ comes from the ability to control memory layout, not from memory management.
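
As a rough sketch of that discipline in a GC'd language (Go here, with made-up names), it's the same as in C++: allocate the buffers once, outside the hot path, and reuse them:

    package main

    import "fmt"

    // process reuses a caller-provided scratch buffer instead of
    // allocating inside the loop, so the hot path does zero allocations.
    func process(samples, scratch []float64) float64 {
        var sum float64
        for i, s := range samples {
            scratch[i] = s * s // reuse the buffer, never append/allocate
            sum += scratch[i]
        }
        return sum
    }

    func main() {
        samples := make([]float64, 1024) // allocated once, up front
        scratch := make([]float64, len(samples))
        for tick := 0; tick < 1000; tick++ { // hot path: no allocation
            _ = process(samples, scratch)
        }
        fmt.Println("done")
    }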




> In Java, for instance, the difference in performance compared to C++ comes from the ability to control memory layout, not from memory management.

We need more modern languages with AOT compilation, the ability to allocate on the stack, and value types.
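
Go is one such language: AOT-compiled, with value types and stack allocation via escape analysis. A minimal sketch (my illustration, not from the parent):

    package main

    import "fmt"

    // Vec3 is a value type: stored inline, not behind a pointer.
    type Vec3 struct{ X, Y, Z float64 }

    // add returns its result by value; with escape analysis the
    // compiler can keep everything on the stack, no heap allocation.
    func add(a, b Vec3) Vec3 {
        return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z}
    }

    func main() {
        // A slice of values is one contiguous block of memory,
        // comparable to a C++ std::vector<Vec3>.
        vs := make([]Vec3, 3)
        for i := range vs {
            vs[i] = add(vs[i], Vec3{1, 2, 3})
        }
        fmt.Println(vs)
    }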

I like C++ and have spent quite a few years using it, before settling on memory-safe languages for my work.

Many C++ presentations seem to deliberately forget that there are memory-safe programming languages which offer similar capabilities for controlling memory layout, presenting C++ as the only language capable of it.

Modula-3 is one example, or Swift and D for something more current.


The reality is that there aren't any straight replacements, though: C++ has very mature tools, move semantics take away a significant amount of memory-management pain, D without a GC is still exotic, and Rust is still in its early stages.


It is a matter of which market is relevant for a developer.

There are many use cases where C++ is still used even though the problem at hand doesn't actually have any real need for it.

For example, in my area of work, C++ has been out of the picture since around 2006. We only use it when Java or .NET need a helping hand, which happens very seldom.

On the mobile OSes, for example, C++ is only relevant as a possible language for writing portable code across platforms, but not so much for those who decide to focus on just one.

There, Swift, Java, and C# are much better options.

For those in HPC, languages like Chapel and X10 are gaining adherents.

Also, as an early adopter of C++ (1993), I remember C developers telling me something similar to what you are saying regarding tool maturity.

Now, around 30 years later, their compilers are written in C++.


I'm not trying to claim C++ is the only language anyone will ever need. I've tried hard to find alternatives, but until the D community goes full force on getting the garbage collector out and starts caring about the tooling instead of just the language, it seems like Rust will be the only contender (and future C++, and maybe even Jai). I wish a language called Clay had taken off; it was pretty well put together as a better C.


Actually, I would rather they improved the GC with more modern algorithms, instead of the basic one it uses.


SaferCPlusPlus[1]: C++. Memory safe. No GC. Quite fast.

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus


I'll mention a significant case that doesn't have to do with allocation. Large graph-like data structures (lots of small nodes linked by reference) are effectively prohibited by the GC: they make every marking phase take far too long, at least until the whole structure eventually gets promoted into the long-lived generation. A mainstream runtime having such an impactful opinion about my data structures (not my allocation patterns) is something I find deeply offensive. Imagine if C++ programs stuttered because you had too many pointers!

They could have avoided that whole problem by providing an API like DontCollectThisRoot, but instead they (and a lot of programmers) chose to pretend the GC was good enough to treat as a black box.
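
To make the cost concrete, a contrived Go sketch (node count arbitrary; Go's collector is mostly concurrent, so the work shows up as CPU time rather than one long pause, but the mark phase still has to chase every pointer):

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    type node struct {
        next, prev *node // every pointer field is traced on each mark phase
        payload    int64
    }

    func main() {
        const n = 2_000_000
        nodes := make([]*node, n)
        for i := range nodes {
            nodes[i] = new(node)
        }
        for i := 1; i < n; i++ { // link the nodes into a graph
            nodes[i].prev = nodes[i-1]
            nodes[i-1].next = nodes[i]
        }
        start := time.Now()
        runtime.GC() // forces a full mark over all 2M live nodes
        fmt.Println("mark over live graph took", time.Since(start))
        runtime.KeepAlive(nodes) // keep the graph live through the GC
    }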


Huh? Are you talking about a particular GC? Because every object-oriented program I've ever seen could be described as "Large graph-like data structures (lots of small nodes linked by reference)".


Any GC that walks the graph and isn't mostly concurrent. You will know when the graph becomes complex enough, because thereafter the GC will not let you ignore it. In my experience, as few as several hundred thousand objects can become a problem. Imagine trying to write a responsive 3D modeling app with vertices, edges, and faces all bidirectionally pointing to each other. You the programmer would think very carefully before fully traversing such a data structure (much of the point of having thorough internal references is to avoid traversal!), and yet the runtime helpfully does it for you, automatically, and there's nothing you can do to stop it.

https://samsaffron.com/archive/2011/10/28/in-managed-code-we...

http://stackoverflow.com/a/14277068
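
One standard workaround (my sketch, not from the linked posts) is to stop using pointers for the internal references: keep the nodes in one flat array and link them by integer index. In Go, for example, a slice whose element type contains no pointers is never scanned during marking:

    package main

    import "fmt"

    // Pointer-free graph: nodes live in one flat slice and refer to
    // each other by index. Since the element type contains no pointers,
    // the GC has nothing to trace inside the slice.
    type node struct {
        next, prev int32 // indices into the nodes slice, -1 for none
        payload    int64
    }

    func main() {
        nodes := make([]node, 2_000_000)
        for i := range nodes {
            nodes[i].next = int32(i + 1)
            nodes[i].prev = int32(i - 1)
        }
        nodes[len(nodes)-1].next = -1 // terminate the chain
        fmt.Println(nodes[1].next, nodes[1].prev)
    }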


FWIW, Go has value types, so there's less referencing than in Java, etc. It's also worth noting that these are actually used, unlike in C#, which has a reference-type-first culture.
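
A small sketch of the difference that makes to the GC (illustrative numbers):

    package main

    import "fmt"

    type point struct{ X, Y float64 }

    func main() {
        // Idiomatic Go: values stored inline, one allocation, nothing
        // for the GC to trace inside the slice.
        byValue := make([]point, 1000)

        // Reference-first style: 1000 separate heap objects, and 1000
        // pointers the GC must follow on every mark.
        byRef := make([]*point, 1000)
        for i := range byRef {
            byRef[i] = &point{}
        }
        fmt.Println(len(byValue), len(byRef))
    }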



