
The major problem I have with electron apps is that they eat memory for breakfast, lunch, and dinner.

This happens with JVM applications as well, but there you can limit the max heap size and force the garbage collector to work harder, trading off speed for the ability to run more apps side by side.
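With the JVM that's just a couple of flags, e.g. (the jar name is made up):

    # Cap the heap at 256 MB; the GC runs more often, but the app
    # can't balloon past that.
    java -Xms64m -Xmx256m -jar some-app.jar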

AFAIK, you can't cap the memory used by Electron apps, and each child process gets its own heap rather than sharing one. With enough extensions to make it usable, vscode easily eats gigabytes of memory.

I like LSP, but I don't need vscode to do the rendering.



The majority of the problems with "Electron" are actually just problems with the development style used by the types of people who publish and consume packages from NPM.

We've gone from a world where JS wasn't particularly fast but still powered apps like Netscape, Firefox, and Thunderbird just fine (on machines nothing like what we have today), and most people didn't even know it, to the V8 era where JS became crazy fast, to the world we're in now, where people think web-related tech is inherently slow just because of how poorly most apps written in JS are implemented.

If you want to write fast code, including for Electron, the first step is to keep your wits about you as a programmer, and the second step is to ignore pretty much everything that anyone associated with NPM and contemporary Electron development is doing, or anything that would lead you to think you're supposed to be emulating them.


I agree, and it's something I've also seen in my own Electron project, where great care was taken to write fast and memory-efficient code. It really doesn't use that much memory compared to native applications when running; I've done comparisons.

I also feel the problem lies more with the style of JavaScript development rampant these days, where not a lot of care is taken to write memory-efficient, or even just efficient, code.

Of course, this has a lot to do with the sharp rise in people studying to become (mostly) web developers without any deeper CS degree or understanding of how computers really work.


This isn't entirely the fault of Electron, though, so much as of the convenient data types exposed in a web environment. Beyond the baseline memory of running Chromium, you can use various tricks to keep memory very low, such as minimizing GC pressure (e.g. declare variables up front, not within loops), using typed arrays/ArrayBuffers extensively, using SharedArrayBuffer to share memory with workers, etc., as sketched below.
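A rough sketch of the typed-array/shared-memory part (the ./worker.js file and the element count are made up for illustration):

    // Preallocate one flat typed array instead of millions of small objects;
    // backing it with a SharedArrayBuffer lets a worker read and write the
    // same memory without a copy being made on every postMessage.
    const N = 1_000_000;
    const shared = new SharedArrayBuffer(N * Float64Array.BYTES_PER_ELEMENT);
    const samples = new Float64Array(shared);

    const worker = new Worker('./worker.js'); // hypothetical worker script
    worker.postMessage({ buffer: shared });   // shares the buffer, no copy

    // worker.js (sketch):
    //   self.onmessage = (e) => {
    //     const samples = new Float64Array(e.data.buffer);
    //     for (let i = 0; i < samples.length; i++) samples[i] = Math.sqrt(i);
    //   };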


> e.g. declare variables up front, not within loops

I have trouble believing modern JS engines wouldn’t optimize this to the same thing.


Behaviorally they aren't the same thing, so it's not a straightforward mechanical transformation. For example, if the variable is an object instance and escape analysis can prove it doesn't outlive the loop, then it can be put on the stack, and in that case there would indeed be no benefit to the suggested change. But deoptimization makes stack allocation more complicated, so JS engines are more conservative here than, say, JVM runtimes.

But it's really easy for escape analysis to fail, since it has to be conservative, so you can quite easily end up heap-allocating a temporary object on every loop iteration.
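A contrived sketch of the two versions (the point data is made up):

    // Allocates a fresh temporary object each iteration; if escape analysis
    // fails, every one of these lands on the heap and feeds the GC.
    function sumDistancesNaive(points) {
      let total = 0;
      for (let i = 1; i < points.length; i++) {
        const delta = { dx: points[i].x - points[i - 1].x,
                        dy: points[i].y - points[i - 1].y };
        total += Math.sqrt(delta.dx * delta.dx + delta.dy * delta.dy);
      }
      return total;
    }

    // Reuses one object declared up front, so there is no per-iteration
    // allocation regardless of what the optimizer manages to prove.
    function sumDistancesReuse(points) {
      let total = 0;
      const delta = { dx: 0, dy: 0 };
      for (let i = 1; i < points.length; i++) {
        delta.dx = points[i].x - points[i - 1].x;
        delta.dy = points[i].y - points[i - 1].y;
        total += Math.sqrt(delta.dx * delta.dx + delta.dy * delta.dy);
      }
      return total;
    }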


Is there a good guide to these techniques in modern JS? How likely are they to remain viable long-term?


Not sure about any particular guide, but I learned a lot from the old #perfmatters push from Chrome. Getting a deeper understanding of what the JS engine does when you create an object, where it lives, how it interacts with the garbage collector, and so on, would be a good thing to learn about. Also, it's generally only worth considering this kind of optimization for things that store a lot of data, like arrays/maps. I don't see why these techniques wouldn't remain viable long term.

I definitely agree that it's easier to make webapps that consume much more memory than it is with a lower-level language like C++, unless you're being careful.


Just in the past month I upgraded my main work laptop from 16 GB to 40 GB (8 GB soldered + 32 GB SODIMM). So your point is granted, but on the other hand, DDR4 prices have collapsed about 50% since 2018 (I couldn't believe it either, given all of the other semiconductor issues).


Nice, but you are lucky to have a laptop with a spare RAM slot. Not everyone can do that.


I specifically bought this laptop with that in mind.


Luck has nothing to do with it; MacBook Pro users know what they're getting into.


Solution: 64GB RAM. Never look back.


This is a first-world solution.



