I'm the author of the post and I should also say that I don't love Electron. It's a practical choice for getting an app off the ground, but I want something more lightweight eventually.
I see it as a stop-gap until I can invest more resources into something leaner. But I'm focused on the product right now, not rewriting the world and wasting time over engineering ideals.
I agree with your statement but just have never really seen it work that way in real life.
Slack is the best example: an app that just 'works' in Electron, where the stop-gap became the permanent solution for eternity, et voilà - still a huge valuation and an IPO.
I am afraid once you go Electron, most will stay Electron.
Whatever it ends up being it has to be web-based. I'm definitely not going to rewrite the app.
However, if something were to come along that uses the user's local webview instead and provided a way to bundle a node server with it, I could absolutely switch to it.
In fact, I've played with the idea of providing an app that only sits in your menubar and runs the server, and opening the app just opens a tab in your browser. But most users don't care as much about all this as we think and would find that odd.
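A rough sketch of that menubar idea using Electron's own tray API - the icon path, port, and server.js entry point here are placeholders, not from my actual app:

  // Tray-only app: keep the server running, open the UI in the default browser.
  const path = require('path');
  const { fork } = require('child_process');
  const { app, Tray, Menu, shell } = require('electron');

  let tray; // hold a reference so the tray icon isn't garbage-collected

  app.whenReady().then(() => {
    const server = fork(path.join(__dirname, 'server.js')); // placeholder entry
    tray = new Tray(path.join(__dirname, 'trayIcon.png'));  // placeholder icon
    tray.setContextMenu(Menu.buildFromTemplate([
      { label: 'Open app', click: () => shell.openExternal('http://localhost:3000') },
      { label: 'Quit', click: () => { server.kill(); app.quit(); } },
    ]));
    if (process.platform === 'darwin') app.dock.hide(); // menubar-only on macOS
  });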
You're right though - if something is working well enough it probably will end up sticking. Slack just goes to show that it's probably Good Enough despite what we feel about it.
Someone tried that with Electrino, but there hasn't been any activity in 2 years.
That only deals with memory and disk use though.
CPU and battery use under 'load' will still be through the roof compared to native apps. I put load in quotes because even entering text or scrolling creates massive energy consumption spikes on Electron.
Oh and your UI will still be sluggish too.
To be clear: I'm not hating on your app or your choice of Electron; just be aware that most of the drawbacks are here to stay.
Even Microsoft with all their engineering prowess can't seem to fix them (see VS Code vs Sublime), and they are already using a heavily customized and optimized Electron.
You can ship the node binary, then have the "app" open up in "chrome-less" mode (no browser bars, like with Electron). It's supported in IE and Chrome. Firefox also supports "chrome-less", but only for URLs that start with file://, while some browser features require https://, and http://localhost/ is not good enough. So while it is possible to build a local web app, browsers could make it easier to do so.

Also take a look at feature phones that run forked versions of FirefoxOS - it's a growing market, where apps are basically packaged local web apps. There's also Chrome apps, but they have been deprecated (never build something on top of Google tech :P)
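A minimal sketch of that pattern, assuming Chrome is invokable as google-chrome (the binary name and path differ per platform):

  // Serve the app locally, then open Chrome in "app" (chrome-less) mode.
  const http = require('http');
  const { spawn } = require('child_process');

  const server = http.createServer((req, res) => {
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>Hello from a local web app</h1>');
  });

  server.listen(0, '127.0.0.1', () => {
    const url = `http://127.0.0.1:${server.address().port}`;
    // --app opens a window without tabs, URL bar, or other browser chrome.
    spawn('google-chrome', ['--app=' + url], { stdio: 'ignore', detached: true });
  });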
There have been multiple webview-using equivalents through the years.
A big part of why Electron-likes have succeeded so widely where those did not is because they suffered from absolutely massive fragmentation, both in features and in performance/bugs. Electron (comparatively) does not, because it bundles its own full stack. You have to deal with upgrades to it when you choose to do so, but not "everyone simultaneously has a different version of their webview, many of which you have never seen and cannot reproduce, and you always have to deal with them all".
I used a similar api-based approach to have Electron host C# and Python backends[0][1]. The main difference I think is that I went with GraphQL for the api layer instead of Electron's ipc mechanism, including random port selection and a simple bit of security handshaking to prevent malicious sites from just hitting an exposed port (the goal being to strike a balance between showing that security matters and not over-complicating what is intended to be a simple boilerplate example).
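Not the actual code from those repos, but a minimal sketch of the random-port-plus-token part, assuming a plain Node http server in the backend process:

  const http = require('http');
  const crypto = require('crypto');

  const token = crypto.randomBytes(32).toString('hex');

  const server = http.createServer((req, res) => {
    // Reject any request (e.g. from a malicious page) lacking the secret.
    if (req.headers['x-app-token'] !== token) {
      res.statusCode = 401;
      return res.end();
    }
    // ...hand the request off to the GraphQL layer here...
    res.end('{"data":{}}');
  });

  // Port 0 lets the OS pick a free port; bind to localhost only.
  server.listen(0, '127.0.0.1', () => {
    const { port } = server.address();
    // The UI side is handed { port, token } out-of-band (argv, IPC, etc.)
    // and attaches the token to every request.
  });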
Given their BUILD talk, with its comparisons against Electron's hungry resource usage, it wouldn't surprise me if VS Code eventually migrates to React Native.
Yeah, here's the problem: eventually the thought of rewriting the Electron app native becomes too daunting. It's grown too large. Then you're shipping bloatware for life.
That’s the right mentality. I'd say don't change it if it works (and is nowhere near as bad as Slack). If the difference in practice is marginal, it'd provide no real value.
Thanks for sharing. What is the advantage of using socket based IPC to communicate between client and server, instead of enabling nodeIntegration on the client process and using the remote library?
As long as you don't block in the main process and take advantage of Promises and even WebWorkers, why is another dedicated process needed?
It'd be nice if there were a better abstraction of Qt for cross-platform usage. Something like how React components work: have a library of common layouts, could even specialize in Material to keep the scope narrow. Basically, pick your frame layouts and tie logic into those possible interactions.
Yes and no: it's the right level of abstraction, but it's still Qt. I think the issue here is unfamiliarity. If it had a layer on top where you could use JSON config files, style with Sass files, and add logic in JS, it'd make all the JavaScript boys come to the yard.
> If it had a layer on top where you could use JSON config files, style with Sass files, and add logic in JS, it'd make all the JavaScript boys come to the yard.
QML is already JS (ES7), so you can code your logic in it (though 95% of people opt for doing it in C++, both for the speed and the static typing). Sure, there's no Sass, but I would argue that the QML offering is much clearer than CSS - http://qmlbook.github.io/ch04-qmlstart/qmlstart.html.
It's 100% unfamiliarity with C++. Because you always end up doing logic in C++. JS is there in small bits to tie things together - the original purpose of JS in the old Web.
Not relevant, but I'm really glad you started blogging again.
I would often point other people to your blog as a good resource for learning javascript.
I don't actively use typescript, the example is focused solely on the architecture for adding a background process. Typescript seems like a separate concern?
The problem I see with many Electron applications isn't just that they're bloated (they probably are, in about the same relation to average RAM as Tcl/Tk apps once were, or early Gnome/Mono ones). [Edit: To clarify, I mean the average RAM at the time they were accused of bloat]
It's that they're quite non-native. And not just in the "button borders are 1 pixel wider than usual" sense -- they're usually styled to resemble web applications, with big data views, Material UI, etc.
Going from this to something more lightweight/native doesn't just require a different API; it requires a complete redesign.
In other words: The Big Rewrite That We're Definitely Going To Do Someday(tm).
I agree with this too but this problem can be addressed by caring. I care. You can find a bunch of non-native interactions in my app but every time I work on it I make it better and implement more of a native feel (like disabling selecting text when you shouldn't be able to, etc).
It's a balance - personally I think apps that focus on a specific platform are exclusionary. So accepting some non-native feel means more people can use my app.
> like disabling selecting text when you shouldn't be able to
This is the worst thing in computers. Why would you EVER _block_ the ability to select text? I am engaged enough to want to select something you made! Embrace that, don't stop it!
Think of all the thousands of tiny snippets of text and single Unicode characters that make up a UI. You definitely don't want to be able to select snippets within most controls, because selection interferes with using the control.
Examples: the text in a menu item, the text in a button, the Unicode x in a close icon, the bar between menus.
This is really noticeable in a few web applications where you can select the wrong thing and it interferes severely with using the UI (I have experienced this with Windows and Android).
Yes, you usually want to be able to select the main text. However, many mobile UI frameworks just disable all selection, because that is the easiest way to also disable selection within UI controls (managing this issue is actually quite difficult, from my experience writing an HTML UI framework).
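The blunt approach is only a couple of lines; here's a minimal sketch written as JS setting inline styles, with made-up .app-chrome/.message-body selectors:

  // Disable selection everywhere, then re-enable it only on real content.
  document.querySelector('.app-chrome').style.userSelect = 'none';
  document.querySelectorAll('.message-body').forEach((el) => {
    el.style.userSelect = 'text';
  });

The hard part is everything beyond this: drag handles, nested controls inside content, and so on.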
So long as you don't want an OS native UI, yes I think so.
For example I use Visual Studio Code every day and the UI seems pretty good to me (panels, menus, tabs, check boxes, combos etc). I am presuming it uses a component framework (although I admit I haven't looked at the source).
I got one. Because my electron app is a game. I don't want you interacting with text as text, it's supposed to be more of a "texture" like one would expect from a native DirectX implementation
To simultaneously answer many of the "usually you don't want X to be selected":
So don't always allow selecting everything. If you start your selection inside "content", constrain to content, don't select labels. If you start on a label, select labels and maybe content too.
Native apps do this all the time in small degrees, people are used to it: "select all" selects content, not chrome, and it's context-sensitive in many cases (e.g. select all in a folder doesn't select parent folders even if they're visible).
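A hedged sketch of that "constrain by where the selection starts" idea for a web UI, with made-up .content/.label class names:

  document.addEventListener('selectstart', (e) => {
    const startedInContent =
      e.target instanceof Element && e.target.closest('.content');
    // Started inside content: make labels unselectable so they can't be
    // swept up. Started on a label: allow selecting labels (and content).
    document.querySelectorAll('.label').forEach((el) => {
      el.style.userSelect = startedInContent ? 'none' : 'text';
    });
  });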
Having encountered some buttons with significant amounts of text: nothing is off-limits IMO. Frequently though, yeah, "OK" isn't useful to allow to be selected.
On the other hand, getting people to copy what they see rather than read it out loud incorrectly can, pretty often, cut several rounds of back-and-forth out of remote tech support. It seems insane because it is, but yes - copying text in a button rather than having them say "I clicked the yes button" (when no such button exists) is a useful feature.
On the other hand I hate apps that have user interface elements where you can select the text by accident instead of interacting with the element that contains the text.
Imagine what it would be like if a click and drag on a window title bar moved the window but a click and drag on the text part of the title bar just selected text.
Back in the Amiga days there was a nice little app that let you go into selection mode, draw a rect around anything, and it would pseudo-OCR it. Surely something like that exists on current machines, but I haven't seen it.
I know what you’re saying. Our company settled on a well known cloud app for ERP/CRM and daily I deal with copying and pasting something from it that invariably brings along extra text/ui elements that I didn’t want. Sometimes it’s just extraneous space, sometimes it’s extra text I don’t even realize was highlighted.
Another example is tables that have selection boxes for records or “action” links in each row, like print or view. If I could select the table without those columns coming along for the copy ride, I'd cut at least one if not a few steps out of every copy/paste.
I'd argue that despite all cries about non-native styling and behavior, MacOS Classic, OS X, Windows 95, OS/2, Next, Swing etc. had more in common with each other than they do with webapps (unless we're talking about ExtJS enterprise apps or the like).
This won't matter as much for rich-text chat apps or the like, which always looked horribly non-native, over-styled, skeuomorphic (cf. Trillian)…
But let's picture a database client or IDE. Those used to resemble regular desktop apps. Nowadays?
Note that I'm not arguing about your article or even approach, I was just chiming in on the dislike for Electron in general, where I think bloat is the least of our worries. All this wasn't something new, the downturn of desktop UIs started way before, as people are now used to mobile and webapps and expect that kind of look and feel everywhere. The OS manufacturers definitely seem to approve and comply.
You saw this happening in other places, before. Just look at enterprise Windows applications that were targeted to people used to terminal UIs (3270, VT220).
Current UI trends are the same, just with spurious rounded corners instead of closely-grouped form elements…
Yes, major desktop GUIs 20 yrs ago had more in common with each other than they have with web apps. They also had more in common with each other than with iOS and Android phones. Yet today, most people in the more-developed economies use desktop apps, web apps on desktop, phone apps and web apps on phone every day. Some even have GUI apps in their cars and on their wrists. And among the new desktop apps, they have apps that work with touch UI or keyboard and/or mouse, that are available on multiple platforms with significant UI differences, and that have mobile versions.
The result for most users is that they use a whole medley of different UIs daily and are far more flexible about UIs than back in the day when you were either a Mac person or a Windows person exclusively and all of your apps were native apps on your one machine. As clients continue to diversify, as Web apps spread, as companies like Apple keep bumping the Mac UI to lower and lower priority and claim Mac (which mustn't have touch)/iPod/iPhone/AppleWatch all need different UIs and all must have some sort of web support, and Google, Facebook, Twitter, and Amazon apps in their wide and ever-changing variety are used more and more...the demand for consistent UI in the fine details is not nearly as big an issue for most people as it used to be.
Also, as a user, I expect the UI of my apps to be somewhat inconsistent. If the app is perfectly consistent with the native guidelines, then that's a small plus. But any slightly inconsistent behavior can be quickly learned, and I don't consider it a big deal (again, as a user).
Of course, if every app out there was perfectly consistent, then perhaps it would be a different story.
I think it's quite possible that the majority of users at the moment have no idea how to recognize "native UI" and won't know the difference either way.
I do believe that a well-designed UI makes a difference, whether users realize it consciously or not.
I am not sure how much sticking to OS "native" idioms actually contributes to a well-designed UI in 2019; I think users don't spend enough time off the web to be used to those idioms anyway.
Can we, as an industry, please stop pointing to some strawman "majority of users" to justify our poor decisions? I'm kind of sick of everything sucking too, but can we at least admit that they suck because of us?
I know what you're saying, but I think there's something here that isn't just BS or an admission of defeat.
What if, given the way most people use their computers/devices now, sticking to a standard "OS native" design language simply doesn't benefit them?
Then if it takes you more time/money to do so, that is time/money spent without actually helping anyone, when you could have been working on something that did. I'm not even talking about "helping your startup succeed", I mean literally, helping the users.
Design matters, I'm a big believer in that. But you've got to design for your actual users in the actual world they are in, designing for the world you wish you had with the users behaving how you wished they behave... is the programmer's fallacy.
I think it's a legitimate question, does an app "behaving native" (as far as UI/UX elements and the OS design patterns) actually help the users? Or do we just imagine/wish it would? Does it actually matter? Does it actually benefit real people users in the actual world?
Maybe. I'm not sure it doesn't. But I'm suspicious enough to ask.
> I think it's a legitimate question, does an app "behaving native" (as far as UI/UX elements and the OS design patterns) actually help the users?
Hmm... does an application behaving according to the UX language of the rest of the system actually benefit the people using that system? Let me think about that.
Nothing about the current trends in UI design is about a better experience for the user. Nothing. Ask anyone who relies on accessibility features, for instance. I'm sure we all love hijacked scrolling, pop-in content, pop-up boxes, etc., which is why they're there, right?
No, as usual users are just resigned to putting up with the bullshit that crappy developers deliver to them, and it is getting easier over time as they forget that things used to be better.
You're not designing for users, you're designing for money, so at least take responsibility for making shit suck so you can put food on the table.
An article written by a blind programmer[1] was posted on HN a while ago, and he said he uses Notepad++ because it's a native application and plays nice with screen readers.
Notepad++ may be "native" in some sense but it certainly does not "behave according to the UX language of the rest of the system". It uses tabs rather than multiple windows, its icons are not the standard windows ones for new/open/..., and its menu bar, file selection dialogue and so on all look slightly "off".
> does an application behaving according to the UX language of the rest of the system actually benefit the people using that system?
The answer is, of course, no. Unless your target audience is already technically minded, most users will struggle the same either way; they simply do not internalize the "UX language of a system", only noticing when the departures are huge.
Examples of such departures that can actually make a noticeable dent on the average users' productivity with software are:
- the move from classic menu -> ribbon menu
- single desktop -> multiple desktop
- stacked windows -> tiling by default
Examples of changes that do not affect anyone's productivity but annoy UI purists:
- Input doesn't glow the same way native input does when selected
- OK and cancel swapped places or are aligned to the other side (I'll grant you the importance of swapped buttons if they pretend to look native)
- The menus are behind the titlebar instead of using the global menu
- Using a custom set of icons for standard behavior
- Hierarchy of background colors is not respected
- It's using the horrible Qt file picker again
All of this is coming from a UI purist who has seen what cross-platform development looks like. Having a design language that is good enough for all targets takes work to arrive at, but it is well-trodden ground (and there are many already out there you can just copy). Alternatively, having 3 codebases for the same app multiplies the cost of frontend development by anywhere from 1.5x to 3x, depending on the feature and architecture, and that is simply absurd for the overwhelming majority of applications given all the options.
Which is why "design" is mostly visual, and user interaction is at its poorest since the inception of the WIMP interface, never mind how many friggin' "UX" experts we've got now strutting around.
Most of the time, I don't see an issue with non-native looking UIs. Oftentimes, non-native UIs look better.
VSCode and Atom both look great and they're non-native. I also think Slack looks pretty good too. Except for MacOS, native UI widgets generally seem to be pretty ugly.
Even Microsoft is embracing non-native UI for its own Microsoft Office platform.
> Most of the time, I don't see an issue with non-native looking UIs. Oftentimes, non-native UIs look better.
I think there's a threshold for this.
Throw 1 or 2 non-native designs (and UX) at me, I'm fine. I'll appreciate the aesthetics. Make every app have its own UX/design and I'm going to get lost very easily.
And there's a hidden context switch cost there that can accumulate.
Microsoft has never had a consistent look and feel across its suite of applications. Windows Live Mail, Zune media player, Windows media player, the Office suite, etc all looked different and they all looked different than other Microsoft applications.
Microsoft doesn’t even have a consistent look and feel within its operating system. Windows 8 forward has been an inconsistent mess of a half implemented tablet interface.
Microsoft Teams, Skype, VS Code, Internet Explorer, System Center. Not sure you could argue these are non-business applications, and none of them seem very consistent to me.
Interesting point, but that wasn't my experience. The office suite was consistent, but SharePoint differed markedly (at least in ~2010 when I last used it). IIS was different yet. I'm trying to think of other business-oriented MS products that I used (it's been a while)...
The normal tclkit runtime with Tk is 6.1 Mb of disk space and 10-15 Mb of ram.
The most bloated Tcl/Tk runtime I could find (undroidwish tcl/tk running on SDL + OpenGL and Anti-grain backend) is 23 Mb of disk space and 42 Mb of ram (including all the shared libraries).
Hard to compare with a web engine in functionality and ram usage.
I just tried to get started with tclkit to try it out. The first result for "getting started tclkit" has links with 404. I was able to download a prebuilt binary for macOS which does run, but I can't find any clear tutorials for how to start building apps (the top result for "getting started" is from 2008).
I decided to try some example apps instead, so downloaded some .kit files and tried to run them:
  % ./tclkit-8.6.3-macosx10.5-ix86+x86_64 fractal.kit
  2019-06-25 12:14:12.541 tclkit-8.6.3-macosx10.5-ix86+x86_64[18970:1942703] -[TKWindow setCanCycle:]: unrecognized selector sent to instance 0x10023a910
  2019-06-25 12:14:12.542 tclkit-8.6.3-macosx10.5-ix86+x86_64[18970:1942703] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[TKWindow setCanCycle:]: unrecognized selector sent to instance 0x10023a910'
It crashes.
Hard to compare with the ecosystem and maturity of the web.
I was comparing it to the average RAM of computers when Tcl/Tk was still in its hype cycle and you heard bloat arguments against it, which must've been the 90s. When Tk was still yellow-ish.
My fault, could've expressed it better. And as someone who actually like(d) Tcl/Tk, I understand the urge to defend it perfectly.
If we're comparing UI implementation languages, it doesn't look good from a historic perspective. PostScript was pretty much better than all its successors. Tcl/Tk had a decent approach as something Unix-y, and I don't even want to know what will come after JS/CSS/HTML if we're continuing that arc.
Please don't use Tk for new applications, unless they're just for your own use. Tk is completely inaccessible to blind users, and probably people with some other disabilities as well. As much as people may hate Electron, it is better in this area, or at least can be if the app uses good HTML.
To be fair, nobody cares. And I'm not just being snide.
One of the social apps proudly wore its "hard to use" interface as a way to scare off the grownups.
And, personally, I'd rather have an app that scales its interface elements nicely than the "detents" we get at particular monitor resolutions. Try scaling something like Reaper (they're not the only ones) to anything other than 2x on a 4K monitor - umm, if I wanted a 1920x1080 monitor, I'd have bought one, thanks. What I want is for you to scale to about 150% so that I get more screen elements AND they're larger.
Add the fact that "screencasting" is much easier if the application renders all its elements, and you've got a lot of reasons to go non-native and never look back.
I think people do care, but they don't know what they dislike.
I mean, there are certain things we tolerate on the web, like a UI taking a while to load. We don't tolerate that in native apps — even if the content is missing, I still expect the interface to show up instantly.
It's not a question of native vs web apps. It's a question of "does this fulfil the expectations I set for native apps?"
My position is that frameworks like Electron and React Native don't lend themselves to fulfilling those expectations, but they are capable of doing so if the developers put in the extra work. It's just a shame that so many developers don't, and their sloppy efforts become the poster children for these web apps.
At the same time, my position is also that if you're going to put in that extra work to make web apps feel right natively, you could probably have done native UIs to start with.
So, I love the role that Electron fills, but boy do I hate how much we discuss Electron itself.
What we should be discussing is providing the functionality of Electron without the cost: a protocol that behaves like React Native. Ideally a spec, which any language could implement to provide a native UX to the end user.
Some projects try (ProtonNative, for example), but I've not seen one go quite far enough. I don't want to be forced to write a JS app to get benefits here. After all, with React Native, it's basically a process RPC to a bunch of widgets: `process <-> UI`. So why be forced to write JS? An open spec would do wonders here.
I've not invested cycles on solving this problem, of course. Yet I can't help but wonder why so many people are focused on solving UI issues with very narrow solutions like Electron, ReactNative, ProtonNative and so forth. They're great projects, but a slightly wider vision would make them amazing projects.
Shameless plug: We're developing the Boden Framework [0] which kind of shares that vision.
I like the idea of a protocol and that is pretty much what Boden attempts to achieve in the end. Having said that, the more I work on this project the more I come to believe that, at a certain point (level of detail), attempts at the unification of real native UX on different platforms become virtually impossible. To achieve real native cross-platform UX, you need to embrace and take care of the different aspects, behaviors, and quirks of each platform. Boden attempts to make that as easy as possible (and without forcing you to write JS).
> To achieve real native cross-platform UX, you need to embrace and take care of the different aspects, behaviors, and quirks of each platform.
I 100% agree. My recent comment[0] mentions the same thing.
I'll look into Boden, sounds promising! With that said, I don't see any mention of desktop builds. Is that on Boden's roadmap? Would love to try it with Rust some time.
Yes, Desktop builds are on the roadmap. We don't have a fixed release date for those yet. An experimental macOS build is already available in the mainline repo.
>I don't want to be forced to write a JS app to get benefits here.
Although most major Electron apps are written in Node, it's pretty easy to have a backend in another (compiled) language. You can simply use Node's child_process module and build your own bridge, for example to a Go CLI that accepts events from Electron and does whatever you want it to do. It's a workaround for sure, but the point is you could do it in a pretty stable way.
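A minimal sketch of such a bridge, assuming a hypothetical ./backend binary that speaks newline-delimited JSON over stdin/stdout:

  // In Electron's main process.
  const { spawn } = require('child_process');
  const readline = require('readline');

  const backend = spawn('./backend'); // hypothetical compiled Go CLI
  const lines = readline.createInterface({ input: backend.stdout });

  // Backend -> Electron: one JSON object per line.
  lines.on('line', (line) => {
    const event = JSON.parse(line);
    // ...forward to the renderer, e.g. win.webContents.send('event', event)...
  });

  // Electron -> backend.
  function send(msg) {
    backend.stdin.write(JSON.stringify(msg) + '\n');
  }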
>After all, with ReactNative, it's basically a process RPC to a bunch of widgets
>Yet I can't help but wonder why so many people are focused on solving UI issues with very narrow solutions like Electron, ReactNative, ProtonNative and so forth
I've used React Native in the past and I'm working on a Flutter app now. The problem with RN is that every platform has its own unique widgets. It's neat that you have one codebase, but you still need to build a separate UI (at least for certain pages) for Android and iOS. It just doesn't look right otherwise.
Flutter gets away with this because it owns the drawing layer as well as the widget layer. I know that something drawn in Flutter is going to look exactly the same no matter where I run it. This means that widgets fail gracefully by default - in RN, things just straight up break or look weird. The worst that can happen in Flutter is that the user has to deal with an iOS picker on Android or vice versa.
Electron is more akin to Flutter than RN in this way. It controls the drawing/rendering because of Chromium. Slack on Mac looks exactly like Slack on Windows. That's more enticing than needing to build new UIs for every platform you want to support.
The only way to solve this is to basically invent another cross platform drawing layer. But why bother? HTML/CSS already exists. There's effort and tooling and docs and community and everything else you'd need for a successful framework. It's not worth the effort.
I think the best solution will be just embedding Electron at the OS layer, so apps don't have to bring along their own Chromium runtime. This way it runs closer to the metal, which should help alleviate RAM and storage concerns (these seem to be the only major concerns most people have with it).
> The only way to solve this is to basically invent another cross platform drawing layer. But why bother? HTML/CSS already exists. There's effort and tooling and docs and community and everything else you'd need for a successful framework. It's not worth the effort.
That's assuming we want that though. TBH I'm tired of web apps. I pay for a bunch of OSX native applications because it just feels better. I want to minimize friction for creating truly native apps. Not just for performance, but for visual consistency, OS features, etc.
I disagree that RN just breaks or looks weird. It does, yes, if you're trying to ignore platform widgets and make your own unicorn. I however am mainly speaking as a user: I want applications that feel like they are on my platform of choice, not some UI designer's HTML page thrown into every OS.
You're right in cases where app devs are trying to make cross-platform apps without actually making what I view as an application - they just want to put their webpage onto the user's computer. However, I'm not wanting that. Luckily for them, they already have that: it's Electron, and it works great for them. You're right that we can minimize how awful Electron is, but I want native apps to feel good, feel native. I pay for apps that are more than just HTML.
Lastly, I think you sell RN's design short. Making two separate apps with RN involves a lot of code reuse: it's the same codebase, with two different binaries effectively. You can abstract a lot of logic into a unified layer; the majority of the duplication ends up being layout and widget choices. You're in the same language the entire time, using all your helper libraries, and in general it feels identical (speaking from my experience, at least). Compare this to writing an Android app and an iOS app in their native languages - RN's developer overhead is minuscule by comparison. This gives users a native feel with a minimal amount of developer work.
I'm saying we need a RN-like solution to minimize effort for developers who desire to give users a native experience. I also greedily hope that in a world where this exists, HTML on my desktop won't be used as much.
> Flutter gets away with this because it owns the drawing layer as well as the widget layer. I know that something drawn in Flutter is going to look exactly the same no matter where I run it.
Isn't this pretty unfriendly to users? As an iOS user, I occasionally encounter apps written with Material guidelines in mind, and it's a bit jarring because they don't fit the guidelines of the device I'm using.
These are all great solutions for developers, but for consumers, it all kind of sucks to varying degrees.
> Isn't this pretty unfriendly to users? As an iOS user, I occasionally encounter apps written with Material guidelines in mind, and it's a bit jarring because they don't fit the guidelines of the device I'm using.
Personally I care a lot more about what app I'm using than what OS it's running on; I'd far rather have Slack look the same on my phone as it does on both my computers than have Slack on one computer look like apps on that computer but different from Slack on the other computer. Sometimes we forget that the OS exists to facilitate the apps, not the other way around.
> Sometimes we forget that the OS exists to facilitate the apps, not the other way around
For OS where the base operating system didn't offer much functionality to apps beyond the basic API for building controls and so on, this might be true.
For Apple's systems, especially macOS and iOS/iPadOS, this isn't quite true. On iOS, apps need to be built according to system conventions and standards in order to take advantage of a lot of the services that the OS provides (such as accessibility, automation, and i18n).
When a macOS or iOS app ignores system conventions, the result can range from completely unnoticeable to a slight inconvenience to the app being next to unusable after an OS update, due to underlying features changing without the developer having updated their (needless) custom code to account for it.
I notice this with the Sensibo app which follows Material design guidelines. It took me a long time to figure out how to edit a particular bit of information because the icons that the app used were not what I was expecting. On iOS, an editable table view should be accompanied by a button at the top right that says "Edit"; Sensibo instead provided a pencil icon. After I finished editing, I wasn't sure what to do — on iOS, I expect to see a "Done" or "OK" button where the Edit button was; instead, Sensibo required me to click on the pencil again even though it was greyed out and looked inactive.
When an app doesn't conform to the guidelines and I have a choice about whether or not to use that app, I'll generally prefer to find something else where I'm not expected to memorise how to use it.
Sometimes we forget that apps exist to facilitate getting tasks done, not forcing us to remember their deliberately-included inconsistencies compared to every other app on the system (including third party ones).
The problem with HTML/CSS is the meteoric levels of baggage they bring. If Electron was based around a super lightweight web engine that outright dropped support for anything older than HTML5, CSS3, and latest versions of JS it would be a dramatically better solution than it is now.
I really loved XUL from Mozilla.
I built a lot of tiny apps with it; unfortunately it's no longer supported.
I also used it to build the kind of bridge you mention: it dynamically built GUIs for CLI apps that did job processing (though it was a basic bridge with little functionality), and also some CRUD forms from the database schema.
That too is relative. My copy of Thunderbird with 6 open emails from six open mailboxes is consuming about 170MB of memory right now, and that's hosting a fully-featured browser to view emails.
Granted, that's reserved memory, but that's the only memory you really need to worry about anyway.
Of course, native doesn't make a program automatically use less RAM. It does start with a lower footprint though, meaning that you can keep an eye on memory usage without relying on ugly hacks in your code.
Is it minimized? IIRC on Windows, when an application is minimized the working set is trimmed which can result in a significant reduction - but it jumps back up when you start using the application again.
Yes, this is Office 365 on macOS. Neither the computer nor Outlook have been restarted in weeks, which is surely contributing to its current RAM footprint.
Amazing. You must be 100 times more productive than the parent above whose mail app uses 2300 MB. You must be able to do soooo much more with such efficient tools.
It's not a judgement either way. I'm just noting that apps like Outlook and Chrome can eat GB of RAM, so Electron's overhead isn't a huge deal for me personally.
Did you read the article? It says 230MB for starters - you're clearly trying to paint it negatively. And the first thing it says after that is "still not ideal". Looking around at other native apps, at rest they consume between 150MB and 200MB.
It goes on to say that there's a memory-intensive piece that I haven't even optimized yet and will rewrite into Rust which should bring the memory down a lot more.
"Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that.""
>Did you read the article? It says 230MB for starters - you're clearly trying to paint it negatively. And the first thing it says after that is "still not ideal". Looking around at other native apps, at rest they consume between 150MB and 200MB.
Making Electron (the entire point of which was to lower the barrier to entry for app development) even vaguely reasonable in terms of memory use by having to code in Rust, which is very much not low barrier to entry, is an irony that strikes exactly at the heart of the point the person was making.
I don't get what you are saying. Using 230MB of memory for a very data-heavy app is already somewhat reasonable, and more advanced optimizations are going to require more advanced engineering.
It's the same thing with any app, as you go further along development you will want to optimize in more advanced ways.
Every single transaction on every account they have, for starters; plus, I would imagine, a data point for the total in the account at the end of each day; plus all the metadata you could ever want to associate with a transaction.
Electron lowers the barrier-to-entry for frontend app development. Or, more cynically: Electron allows you to hire cheaper, less-experienced frontend engineers.
A background worker in an Electron process, however, can be thought of (depending on what it’s used for) as more of a “local ambassador of the server”, and thus isn’t really “frontend” at all. It lands on the client, sure, but it’s no more frontend than the client-side half of the Firebase library is frontend. It’s plumbing, and so it’d be the responsibility of a backend or infrastructure engineer to write and maintain.
If you take this perspective, you still receive the main practical advantage that the system’s product manager was probably seeking by choosing Electron in the first place: the ability to hire inexperienced engineers to do UX tweaks, freeing up the senior engineers’ time to handle the hard stuff.
UX tweaks won’t generally bubble down to the level of needing to modify the background worker (again, depending on how you’ve written the thing), so your [junior] frontend engineers won’t need to touch the background worker, so it can be written in a language more on the “good engineering” side of the spectrum than the “low barrier to entry” side—like Rust—without that causing you any headaches of the “everybody who knows how to fix it is busy” variety.
> Electron allows you to hire cheaper, less-experienced frontend engineers.
Do you mean that the front end developers are cheaper or you mean even the less experienced can code in Electron? I have issue with both but I need to understand what you mean first.
I mean that you can hire developers who, due to their inexperience, know just enough to make one very specific kind of thing, and that thing is a certain type of SPA frontend (e.g. "React developers", sort of like "Java developers"). Because they only have this one skill, and because "web SPA frontend developer" is in-demand enough for a decent supply pipeline of them to have formed (of "code bootcamps" et al), the junior ones are relatively cheap to hire.
You can then have someone with more experience "box up" that web SPA frontend in an Electron wrapper, in a way that will generally last a good while.
(This is, AFAIK, precisely the approach taken by teams like Spotify, where the Electron app has no extra functionality beyond what you'd get from the web app.)
Of course, developers generally age out of only having one skill, so this is sort of the programming equivalent of an "intern job": not something anyone would stay in for more than the length of an internship, and something that you just constantly pump new people through.
Aha, sorry, I totally misinterpreted that then - here's an up-vote :) I sort of get super sensitive when it comes to people underrating the huge amount of skill required to be a front-end developer.
It's interesting that Slack takes 1GB+ of memory. Slack still uses the OS-provided webview, right? That seems like the right way to do it (last time I used Slack, it was a 3MB download - glorious), but how is it using so much RAM? Maybe it's leaky?
As someone who uses Slack only in the browser and not as a native app, Electron is not the reason Slack is slow.
Electron doesn't help, of course.
But Slack seems to either cache a lot of stuff, or have memory leaks, or something. It's weird, because Slack seems to re-fetch data whenever I switch rooms, so I'm not sure what all that RAM is being used for.
Electron ships its own Chromium-based browser view. Using the OS provided one does sound like an efficient option, but then you have to deal with all the same compatibility testing (different browsers, different releases, running under different OSes and OS releases) that makes web development suck. It's trying to provide a web-based stack while abstracting away most of those headaches.
It also provides some Chrome specific functionality (loading Chrome extensions) that wouldn't work in an engine-agnostic system, though I don't imagine this is a commonly used mechanism. Extensions are more useful for shoving your own functionality into someone else's webpage, if you control the whole codebase surely it's not the best way to do anything.
> but then you have to deal with all the same compatibility testing
Which means they chose the wrong way to go about creating an app. If you have to bundle a huge secondary app just to run your main application, you've done something wrong.
My guess is Slack likes to keep pages around, and each workspace is a separate page that are individually pretty heavy. So if I'm logged into 4-5 workspaces that adds up quick.
Slack does not use the OS webview, it is Electron. I don't think it's a 3MB download - if it is it must just be an installer that installs the real thing? I haven't looked recently.
It has been years, so you're probably right. Bummer. That explains the RAM usage.
I do prefer the webview approach because I don't need/want my chat app to bundle a browser :-/ I've been a web and native app dev for a decade and do not agree that developer convenience could ever justify these sorts of choices. My webview-backed native apps are all sub-1MB and I have no trouble developing them. I guess I just don't get it.
Electron discussions often focus around the battery and memory impact, one other thing I'd mention is the poor access to native features. You get the stuff a web browser can do, but for anything else offered by the OS you might need to wait a while.
For example, Apple shipped their first TouchID laptop in 2016, and Electron's support for that landed in v6.0.0 beta 1 just last month.
I'd looked at switching from 1Password to Bitwarden, and that's a pretty major feature to be missing for three years.
Surprisingly, for as much as everyone craps on the touchbar, it got supported by Electron a lot faster than TouchID did. I guess that's because not all apps need authentication and can rely on passwords instead, while it's really obvious when an app doesn't have touchbar support.
There are system-feature absences that bleed over from Chrome, too. One that comes to mind is the total exclusion of the numerous bits of functionality that macOS provides in text fields, a great example of which is how Chrome has lacked support for macOS text substitution for coming up on a decade now.
If electron apps' claimed memory causes your computer to go slow due to swapping, you should complain. But most complaints I see on this topic stem from a discomfort over the reported figure by the operating system. With SSDs and modern virtual memory systems the claimed memory figure is not a sufficient measure to infer the resource utilization of a process. For example, if slack has a bunch of allocated memory that is rarely accessed, it's going to (at worst) land on your SSD and sit there, having no impact on your user experience.
If you want to complain about applications' engineering, you should complain not about claimed memory but about things that are directly affecting resource utilization or user experience like power draw or excessive CPU utilization due to poor programming. Claimed memory must be combined with a profile to show that cold reads are common enough for the claimed memory figure to be correlated with a performance hit. But of course, this often isn't readily available without profiling for all but the most extreme cases, and so engineering types fixate on easily visible measures like claimed memory that likely have no actual resource utilization impact in most cases.
>If electron apps' claimed memory causes your computer to go slow due to swapping, you should complain. But most complaints I see on this topic stem from a discomfort over the reported figure by the operating system.
I tried that once. But I'm sick of having to fight that my usecase is valid, that my laptop isn't too old, and that it isn't broken/misconfigured in some other way. It's 4 years old, and I can't run 2 electron applications and firefox at the same time without swapping so hard I can't get anything done. Fuck me for asking so much from a computer, right?
No idea why you're getting downvotes. I think this actually is a problem in the dev world. They often have overpowered systems compared to the average user and as a result might not realize or simply don't care how their software performs on lower end systems.
One example that's driving me mad is Shotwell (photo-managing software). For some reason, whenever you start it, it crawls your entire collection of photos again. There are countless rejected tickets from people asking for an option to disable this. I personally manage my photos on an older laptop with a spinning 2TB drive, for when I travel... So I'm somewhere without a power outlet in reach, battery sitting at 12%, and would like to quickly show some photos to a friend, but opening up Shotwell will make sure that the remaining battery is drained in an instant. But the neckbeards developing that software, sitting in their basements with 4 SSDs running in RAID 0, just fail to understand what my problem is. Sadly I haven't found a good replacement so far, feature-wise.
I very specifically cited someone like you as being a person who has a valid complaint. I'm calling out people who get upset at the idea of a computer doing "wasteful" work, when that wasteful work has no meaningful impact on latency, responsiveness, power draw, etc. This is common and is a form of "visibility bias". For example, computers do not report their CPU cache miss rates in a highly visible place. I'd imagine if they did, you'd have countless people agonizing over them, despite the fact that in most cases a higher cache miss rate would not result in perceptible difference when using the computer.
> I'm calling out people who get upset at the idea of a computer doing "wasteful" work, when that wasteful work has no meaningful impact on latency, responsiveness, power draw, etc.
But who gets to make that call?
For every person out there who complaints that using voice chat in Discord on a MacBook Pro on battery causes the chassis to heat up, fans to kick into turbo boost, and the battery to dwindle — but only when using the .app, not when using the website through Firefox, there's another person who says "oh, but my computer doesn't do that".
So then everybody who has a similar complaint as the first person is branded a liar and whiner.
We used to boot entire operating systems full of applications off HDDs into <1GB of ram. Waste is waste, and it is a shame that developers think it is totally ok just because their company-bought workstation is top of the line and replaced every year.
That's a very cool project (I've used OCaml before) and I hope it takes off. I should try it, but I admit I'm a little skeptical because there are so many details to get right like focus management, accessibility, and more.
Unfortunately since I can't rewrite my project entirely, I'm going to need something that reduces bloat by using a local webview instead, so it's still web-based.
Why? Because you picked a particularly bad one or because you have 200 open tabs? Usually the browser is running anyways so adding another open tab or window increases resource usage much less than another program shipping its own browser engine.
Without a breakdown of what you're doing in that browser, what you have customized in terms of config and extensions, etc. there is literally no information for others in a statement like that; this is an anecdote.
It's not even clear if you're being disparaging or whether you intended to follow this up with "and that's why I'm glad people reuse that stack instead of adding yet more bloat to my already struggling machine in the form of home-rolled, sub-par UI rendering stacks".
Sorry, I'm uncertain what you're trying to say -- do you actually think that the only two choices are browsers and home rolled rendering, or are you deliberately constructing a straw man?
You mean like what you just did, turning the comment about your statement having no information into a comment about a binary choice?
No.
I am saying that your original statement is of no use because it has no information: it is an anecdote about "some browser" running on "some machine" behaving "in some way", none of it quantified or even further described, and so your claim can't be used to draw any conclusions about anything at all.
What did you even intend to imply? Did you say it because you wanted to thumbs-up the idea of reusing already installed browsers (because it can certainly be read that way) or did you want to thumbs-down the idea of using the already installed browser? (because it can certainly ALSO be read that way).
Adding a browser component to applications would be a pretty big regression.
You want details?
19 tabs. ublock origin, tab search.
The browser process alone (no tabs included) is sitting at about 615 megabytes of RSS. The GPU process is using another 325 megabytes. Ublock origin is using 115 megabytes on top of that. The tabs are varying between 20 megabytes (for the runit documentation, http://smarden.org/runit/), 100 megabytes (browsing a private github repository), 200 megabytes (for a Jupyter notebook with about 10 paragraphs and 3 images shown), to 1.2 gigabytes of memory (Slack tab).
The next fattest program I have currently running is Wireshark. It's at around 200 megabytes of RSS while dealing with a capture containing about half a million packets in the current list view. MPV is surprisingly large, sitting at 110 megabytes -- guessing some of that is buffering the video, but it's still a bit surprising. Sylpheed is also significant, at well over 150 megabytes. And all of the 20-odd gvim and terminal emulator processes add up, each one at close to 20 megabytes of RSS. Evince is also significant, although I've only got 7 or 8 of them running; each is using about 60 megabytes of RSS.
Still, all non-browser programs combined running on my machine come out to about 20% of the RSS of Chrome, in spite of Chrome processes being 25% of the total number of running processes.
But I suppose you're technically right: Chrome isn't the heaviest thing on my machine -- Those tend to be spark jobs that I spin up. Those are capped to 30 gigabytes. But I don't do much spark these days, so...
There's also https://github.com/zserge/webview, gives you the system browser engine in a window, and you can evaluate Javascript from the native side, or call a native function from the Javascript side with a string argument (usually a JSON payload).
The "hello world" which opens a webpage in a window is just a single line of C code and produces a 26 kilobytes executable on macOS.
I've used Etcher a bunch of times. It's pretty and gets the job done.
But it's also heavily recommended within the Raspberry Pi community, and used by Balena's customers for starter IoT applications. We can assume most of the users of this application are beginners.
So, do you think these beginners would care about the size of this "app" that they'll probably delete as soon as it finishes? Do you think they'd care about 200MB of RAM when they're almost certainly running it on any modern PC instead of some ancient EeePC from hell? As a beginner, would you feel more comfortable in front of Etcher or dd?
The crux of your argument basically boils down to "a GUI takes up more RAM than a CLI." Which is obviously true, but it doesn't invalidate the need for more GUIs in the world. Could Etcher have been built with a native GUI? Sure, but it's free software, there are no expectations. If you wanted to, you could clone the repo and build it natively yourself.
But why would you? Etcher already exists, and it would save you from hundreds of hours of debugging and learning new frameworks. Knowing HTML/CSS/JS, you could build Electron Etcher in a fraction of the time it would take you to build Qt4 Etcher, and the compromises are acceptable given the target audience.
And now you understand why the Etcher devs chose Electron.
> And now you understand why the Etcher devs chose Electron.
No one is claiming that the choice to use Electron is without merit or that it is a mysterious choice. It's just heavily disappointing that we've gotten to the point where 300MB and 4 processes is acceptable for the cruft developed with it.
Why is that disappointing? Worrying about how much RAM your program is using feels similar to worrying about how much paper you're using to take notes - at best it's a distraction from what's actually important.
If you want disappointing, how about the fact that we're still finding buffer overflow vulnerabilities? IMO we shouldn't even begin to talk about software performance while we have such difficulty getting software to behave correctly at all.
If I wanted disappointing, I would accept that some people in this world will never see the big picture and are so out of touch that they try 1-upping others on a message board by throwing out buzz-words.
In the author's defense, it's just an example he was using to explain a development setup and some techniques. You can also do a factorial on a sheet of paper with 0MB of RAM.
The 230MB (not 300MB) is from my real app, Actual. The example project uses much less because it does nothing except start Electron, which looks like it uses about 80MB as a baseline.
Obviously bundling nearly all of Chromium with an app is inherently inefficient, but out of curiosity, has there been any objective analysis of exactly what makes this use a seemingly disproportionate amount of resources? Are JavaScript runtimes inherently inefficient? Is it Electron's node.js part? Is it the huge amounts of code needed to parse and render HTML and CSS? Is it the multimedia features, like WebGL and the video and audio players? Is the problem actually just badly written code running on top of Electron? And if so, what exactly are most Electron apps doing wrong that one could avoid? Is it, as this blog post seems to suspect, that they aren't sufficiently taking advantage of Electron's node.js integration?
In addition, once the actual bottleneck has been identified, doesn't this open the door to fix things on the WebKit/Blink engine side of things? Before, when Electron was managed by GitHub, one could say this would be unreasonably difficult, but now that GitHub is owned by Microsoft, isn't tweaking Blink to be more suitable for deployment environments outside of Chrome exactly the type of thing they would have a vested interest in doing, and the resources to actually pull off? After all, with VS Code, they are one of the highest-profile Electron users, and using Blink in Edge presumably involves working with the Blink code to some degree anyway.
It's always struck me that a lot of Electron apps likely never use a single line of WebGL, while a lot of WebGL apps likely only use DOM APIs to pretend the DOM doesn't exist by plastering everything in a huge <canvas> element, and both likely don't want their binaries to include the Chrome devtools, but Electron includes all of these things anyway. I'm not sure how much space or resources it would actually save, but is Chromium/Blink refactoring their code and build process to facilitate Electron building different Electron "profiles" that include different sets of features even remotely a possibility?
Memory isn't the only problem with Electron - there's also how the apps feel.
Skype is the only Electron app I have left. Comparing it to a native desktop messaging app (Telegram):
1. Skype is unresponsive. Opening a chat takes up to a few seconds.
2. I can see the opened chat literally updating - messages, avatars, and emoticons emerge from the dust.
3. Selecting messages as messages in the native app vs. selecting messages as text with surrounding artefacts (the chat is an HTML page, with all that comes with it).
Summarizing: Electron apps just feel like foreigners.
I have never had an issue with electron apps being terrible because of networking - although clearly there are apps that do push more work than is necessary over the network, there’s plenty of native software that does too.
My problem with Electron apps is that they use way more memory and CPU than they should - the architectural fix for that would be to not be a full browser.
The problem I have is they lack basic app behaviors from the platform. My experience is mostly for the Mac, but my experience using vscode on Linux has been similar.
But here we go, for a Mac:
* cmd-e/f/g are OS-wide, not just app-wide. VSCode goes even further and seems to achieve per-view behavior.
* document editors and viewers are expected to have an icon for the file being edited, either in the window or in the tab. That is a draggable link to the file, and a drop down on right click for the directory hierarchy.
* drag and drop of files into and out of the windows doesn't work
These are basic behaviors that even the simplest apps manage to get right on OS X, yet no electron apps seem capable of meeting this low bar.
Qt. Supports Linux, macOS, and Windows. Supports at least one high-level programming language very well (Python). Has good tooling and a lot of support.
It's LGPL, which is fine for most cases. But you could always buy a commercial license.
I use VS Code and it has never felt sluggish. But it's probably one of the best Electron apps out there.
Honestly, if I had to develop a Desktop app today and wanted it to be cross-platform, I’d probably use Electron. No need for hacky workarounds like in a web app to make it run on every major browser.
I use VS Code too and it's extremely sluggish compared to a text editor like Sublime. I use it only because the plugin experience is much better than Sublime and the responsiveness is infinitely better than JetBrains' products.
I don't find it fair to compare VSCode to JetBrains' products: one of them is an IDE, the other is an editor. Different features and different targets!
I disagree. VSCode is really a hybrid product. With plugins, it does intellisense and debugging, which are really the two major value-add features of an IDE for me. It's still light years faster than IntelliJ (and light years slower than Sublime). I switched from IntelliJ to VSCode and it minimally affected my workflow, so they're objectively not different targets in that regard.
I honestly don't know why people waste time building Electron apps.
It's the worst possible combination. All of the downsides of web apps (poor performance, memory use, poor integration, etc.) and native apps (security problems, hard to update, etc.), without the advantages of either one.
Why would I want to download your specially packaged browser, without adblock or uMatrix or any of that, just to basically view a website anyway?
Ah, but you don't waste time, that's the whole point of the idea. You just repackage your existing web app and distribute it as a "native" app, no effort required. You also get a cross-platform application with no extra work (the auld "Write once, run anywhere" slogan), and can use web developers instead of people who know Qt/Java/what have you.
Then the user can run your chat/music/text editor in the background, have it appear in the taskbar, and start at boot, at the cost of requiring a few gigabytes of RAM and 25% CPU utilization while idling.
Personally I'd rather have Progressive Web Apps but for some reason, those are still in an awkward stage.
> I honestly don't know why people waste time building Electron apps.
Yes you do. Because some people understand UI design and know how to program in javascript/html/css and have a need to make a desktop app.
For example, a small budget indie startup has enough to make ONE version of their app to get funding. They can use JS with the same budget and launch on Web, PC, Mac, Linux, iOS and Android instead of just making a single Windows native app.
There is 100% a place for electron. You can argue that after funding they should convert to native, sure. But if you're reading HN you are definitely smart enough to know why a lot of us use electron :)
tldr; Smart use of electron is NOT for the end user. It's for the developer.
I've never noticed Electron apps negatively impacting my computer. I use Slack all the time and I don't notice a difference when I have it running vs. when I don't. Do most people not like Electron because the number seems high, or does it negatively impact their computer in some way? (I have an MBP with 8GB of RAM, which doesn't seem out of the ordinary.)
As for impact, it depends. Slack is not that bad, except that it is slow (for example, switching teams was sluggish when I was using it). But when I ran Slack, Skype, Insomnia and VS Code at the same time, I definitely saw the impact. I had 16GB of RAM, but half of that was eaten by a VM all the time.
It definitely negatively affects the battery life of my 2018 MBP. Using only native applications, I can get a full day's work done. If I have Slack open, I can usually only work until noon.
I don't like Slack for that reason, but it's a Slack issue, not Electron. Other Electron apps (vscode, keeweb for example) aren't that resource hungry.
I am an electron hater but this is a really good setup. Server reloading with that debugger is rad. I'm going to use these methods with a platform I don't like.
Well, to me it's a thing that should not be done at all, rather than done diligently.
I'd put this in the same category as "the secret of good user tracking on the web", "the secret of doing quality journalism on an ad-supported model", or "the secret of nuanced political debate on Twitter", etc.
For Electron haters, here are the alternatives: OpenJFX (some people hate Java), Qt5 (it can be pricey in some situations), wxWidgets (not popular enough?).
However, I have a lot of hope for Flutter on desktop. I hope the developers of the Flutter framework can pull off the challenge.
In my case, for opensource projects, I definitely use Qt5 (using PySide2). But for proprietary projects, I have to admit that most likely I will choose Electron.
I'm building an application using Electron and I've found the majority of memory consumption occurs when instantiating new components. When using a UI framework like Material, this can happen frequently, such that the simple act of hovering over a menu and showing an indicator can cause an increase of 3 MB of RAM. I love Electron, it allows me to create applications in an intuitive way using a language meant for building UIs, rather than slapping a library on a native codebase, and I can deploy it wherever Chromium is supported, so I'd like to make it efficient, wherever possible.
Two solutions I have found during my development:
1) Keep your components light. Most of what I used from Material can be done completely with raw HTML and CSS in a simpler fashion, without much going on in terms of state, so I created my own components that serve my purposes (see the sketch after point 2). A lot of libraries and frameworks are heavy because they are used on webpages viewed by many different browsers and require complex logic to work across different environments. Electron sandboxes your code in Chromium + node, so you code for only one browser engine; I hope more devs take advantage of that fact.
2) Load what you know you need in the beginning, and keep your cache to a minimum. Most Electron apps do not have view transitions, they are built as SPAs. One of the downsides of this is that you need to be mindful of your cache since it is not cleared when you display a new component. Clearing the things that don't matter to the view or data has been essential in keeping my memory footprint small.
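To make (1) concrete, this is the kind of hand-rolled component I mean - plain DOM plus a CSS class, instead of a framework widget dragging in state and lifecycle machinery (a made-up sketch, not code from my actual app):

    // Minimal menu-item "component": raw DOM plus a CSS class,
    // with no framework lifecycle or state machinery behind it.
    function createMenuItem(label, onClick) {
      const item = document.createElement('div');
      item.className = 'menu-item'; // hover styling lives in plain CSS
      item.textContent = label;
      item.addEventListener('click', onClick);
      return item;
    }

    // Usage, assuming a #menu container element exists in the page:
    document.getElementById('menu')
      .appendChild(createMenuItem('Settings', () => console.log('clicked')));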
With a large app that does many things and loads many views, I'm able to keep the RAM usage under 200MB, most times around 150MB, which, while larger than something like Excel running idle, is a small price to pay for intuitive design features and access to a large and constantly growing library of reusable components and packages.
The type of developer that chooses Electron isn't going to care about squeezing out the best performance possible. They already made a conscious trade-off to sacrifice performance and memory usage in exchange for productivity. They won't do some hacky things like use Rust instead of nodejs.
It sounds like a generally good approach for keeping the UI process responsive at all times. I wonder if the serialization cost behind all those IPC calls becomes an issue at some point, especially when passing large structures back and forth. Have you found this to be an issue for your use case?
I haven't noticed an issue - the only alternative would be to block the renderer and do it in the same process. The IPC uses a local domain socket which is very high performance (better than a web socket) so any overhead is quickly made up for with UI responsiveness. There may be specialized cases where it is a problem and you'll have to figure something out.
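If you're curious what that looks like, it's essentially Node's built-in net module pointed at a filesystem path instead of a TCP port. A bare-bones sketch (a real setup adds message framing, error handling, and a named-pipe path on Windows):

    const net = require('net');
    const os = require('os');
    const sockPath = os.tmpdir() + '/demo.sock'; // hypothetical path

    // Server side (the background node process):
    const server = net.createServer((conn) => {
      conn.on('data', (buf) => {
        // Echo a JSON reply; a real app frames and routes messages.
        conn.write(JSON.stringify({ reply: buf.toString() }));
      });
    });

    server.listen(sockPath, () => {
      // Client side (the UI process) connects to the same path:
      const client = net.connect(sockPath, () => client.write('ping'));
      client.on('data', (buf) => console.log(buf.toString()));
    });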
The MIT-SHM shared memory extension was added pretty early on. And nowadays (in GTK etc.) everything is drawn on the client side and sent over as a bitmap.
The old-style serialized drawing commands model is only responsive when you have a very simple UI that doesn't do much drawing.
I feel like the key idea here is somewhat unrelated to writing Electron apps. It's that you can set up any Node server app to run inside an Electron window during development and then you get cmd+r and all the devtools for free. Kind of like a node inspector on steroids. Pretty nifty!
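The whole trick is basically pointing a BrowserWindow at your server, something like this in the main process (hypothetical sketch, assuming a server already listening on localhost:3000):

    const { app, BrowserWindow } = require('electron');

    app.on('ready', () => {
      const win = new BrowserWindow({ width: 1200, height: 800 });
      // Load the already-running Node server; cmd+r now reloads
      // the app and the full Chromium devtools are available.
      win.loadURL('http://localhost:3000');
      win.webContents.openDevTools();
    });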
If you need a background process to call Node APIs because "it's the only place where it's safe to access them", what unsafe way is this meant to contrast with? Does Electron normally (but unsafely?) call Node APIs from the browser process?
You can enable "node integration" for any renderer process and call node from there. But you definitely should not enable it in your primary renderer process if any untrusted code is running there (i.e. if you pull in any JS from anywhere else).
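It's a per-window flag when you construct the BrowserWindow. Roughly (a sketch - the defaults have shifted across Electron versions, so check the docs for yours):

    const { BrowserWindow } = require('electron');

    // Primary UI window: may display remote content, so node stays off.
    const ui = new BrowserWindow({
      webPreferences: { nodeIntegration: false, contextIsolation: true },
    });

    // Hidden background window: runs only your own local code, so
    // enabling node integration here is the accepted pattern.
    const worker = new BrowserWindow({
      show: false,
      webPreferences: { nodeIntegration: true },
    });
    worker.loadFile('background.html'); // hypothetical local file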
Thanks for sharing! Your post is full of things that might seem obvious in hindsight, but all too likely to be ignored absent a compelling writeup like yours. Bookmarked the linked repo, will revisit w/ my next Electron app.
But I love the addition of defaulting to a node server for lower memory usage.
The Atom editor (the first Electron app) has slowly been rewriting a lot of its modules in C++ for speed and lower memory usage, binding to them through Node.js. Kind of ironic, since GitHub created Electron to build desktop apps using web technologies.
> IMHO, good electron apps feel and behave like native apps.
I second this, but there are a few reasons why they can't. For example, Blink does not respect the macOS highlight color setting, so no Electron app will either.
I should not be able to select text in parts of the interface (e.g. text inside a button), but it is pretty common that a cmd+a selects everything, including the interface.
Buttons should be in the correct order and have correct highlights depending on the platform.
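The selection issue at least is fixable per app, though few Electron apps seem to bother. A sketch of the standard CSS fix, applied from the renderer (selectors are made up):

    // Renderer-side: make chrome-like UI unselectable so cmd+a
    // only grabs actual content. Selectors are made up.
    const style = document.createElement('style');
    style.textContent = `
      button, .toolbar, .menu-item {
        -webkit-user-select: none;
        user-select: none;
      }
    `;
    document.head.appendChild(style);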
I completely agree! Are you referring to the "ctrl-r" to reload the server in the article? That's only for development - the user never has to refresh the app like that, of course!
I think you’re saying “a user should never have to refresh the app”. Because ctrl-r has brought completely borked applications back to working order. Specifically, I’ve had to use it almost daily to make Slack recover from a freeze.
I'm not sure what the point is here. Is it that Electron apps can lock up or crash? That they can be refreshed seems pretty immaterial, given that any application in any language can crash.
My simple rule of thumb for reducing the memory usage of Electron apps is to not run any of them. Problem solved. It's like why I never, ever installed Java, which was subpar garbage. Gosling has a lot to answer for.
I find the problem lies with the trade-off you are making.
You are making your life as a developer easier in exchange for massively increasing the resource requirements and battery usage on end-user machines.
It's just bad practice and Electron encourages it.
I feel like if we ever want to be taken seriously as a profession the way structural or mechanical engineers are, for example, then efficiency and reliability need to be primary considerations; otherwise we are basically making toys.
You can have that already - use the web, the real one, not Electron. Then everyone can use their favorite browser for your app, rather than being forced to use Chromium.
And if you want to make an efficient desktop app, look at telegram source code.
The web's offline support sucks, and don't tell me to use service workers. Actual is a truly offline 100% local app. Most users love that they can simply launch a desktop app and it always runs no matter their network connection.
I would discourage libui… The bus factor is 1 right now. There is no guarantee it's going to be around in a few years. Qt5, on the other hand, has a large community around it.
Some people seem to think that unused RAM is good; quite the contrary, actually. If you have 50 of these apps running doing work, while playing a game, on a PC from 10 years ago, then I can actually see a problem. Laptops and battery-driven devices are a whole other matter though.
I like to think the reusability and speed of development with electron is just terrific and is a trade-off that is worth it in the end.
"Some people seem to think that unused ram is good, quite the contrary actually."
Bupkis. Using RAM is an inherent negative that is justified by saving CPU time or disk access.
All things being equal, if you could do the same job with a fraction of the resources, the user would be better off, because someone else's bloated app could use that RAM, or it could be used to cache files. It certainly isn't "wasted".
Now, using RAM to cache files is unequivocally good, because since the file exists on disk we can evict it from memory at any time and read it again later.
"Laptop and battery-driven devices is a whole other matter though." Wherein a whole other matter is the majority of computers.
"If you have 50 of these apps running doing work while playing a game while having a pc from 10 years ago, then I can actually see a problem"
Complete strawman. There are probably more people that are poor or cheap running slow devices than fast, This is by no means limited to a tiny number of very old devices. Likd 80% of computers are constrained in terms of battery or ram.
"electron is just terrific and is a trade-off that is worth it in the end"
What you are actually saying is that you value one developer's time over a million users' time. The challenge is that it doesn't scale very well. If the user runs 10 apps that use 1GB each, a typical 8GB machine will be swapping when switching apps, and a 4GB machine (which still exists) will have stopped working.
Unused RAM is good. It lets the OS do things like pre-fetching and caching disk reads. That is, not using all of your RAM speeds everything up.
Plus, when you do fill your RAM, you have to start contending with swapping, which slows everything down significantly.
Swap is not enabled by default by any OS that I know of. And I doubt any OS requires you to have several gbs free in order to work smoothly.
My point is, we are not stuck with 2GB of RAM anymore, and, like games, apps will scale to use more resources for ease of development or better features.
I care as much about your ease of development as you care about the ease of assembly of your car. If someone told you that your next car was going to get 15 mpg and that gas stations are plentiful nowadays, you would presumably buy a different car.
But not Android, iOS or iPadOS. (And for good reason - these OSes run on devices that use bottom-of-the-barrel flash storage and can't sustain the wear-and-tear that comes with using swap.) That's not "literally every major OS"!
They don't "suspend applications". They can prompt an application to serialize an image of its "core" memory for storage on the flash media and to subsequently restore its state from the serialized memory dump, but very few applications are written to do the job properly.
As far as I am aware, both Windows and Linux (several distros) use swap by default. On Windows it will be a file on the C drive, and on Linux it will be a swap partition.
Of course they are not going to swap unless they have to, but they will swap in the default configuration.