I gave up trying to use native libraries for UI on cross-platform desktop programs.
I'm playing with the idea of creating a Web UI and launching it automatically from the Rust server by opening a browser and pointing it at localhost. No Electron bloat.
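Roughly, the whole launcher is a few dozen lines. A minimal sketch of what I mean, assuming the `open` crate for the "launch the default browser" part (everything else is plain std, and the HTML is obviously a placeholder):

    // Serve a trivial page on a loopback-only port, then point the user's
    // default browser at it. `open::that` comes from the third-party `open`
    // crate; the HTTP handling here is deliberately bare-bones.
    use std::io::{Read, Write};
    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Bind to 127.0.0.1 only; port 0 lets the OS pick a free port.
        let listener = TcpListener::bind("127.0.0.1:0")?;
        let url = format!("http://{}/", listener.local_addr()?);

        // Hand the URL to whatever browser the user has set as default.
        let _ = open::that(&url);

        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut buf = [0u8; 4096];
            let _ = stream.read(&mut buf)?; // ignore request details in this sketch
            let body = "<h1>Hello from the local Rust server</h1>";
            let resp = format!(
                "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
                body.len(),
                body
            );
            stream.write_all(resp.as_bytes())?;
        }
        Ok(())
    }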
The article seems to consider this hacky and insecure:
> Web browser communicating with a Rust local server: too hacky, insecure? (DNS rebinding attacks) and does not support native features like tray icons.
Personally I don't agree it's hacky, and while DNS rebinding attacks are feasible, doesn't your application just need to check the request's Host header against a whitelist to protect itself?
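For what it's worth, the check itself is tiny. A rough sketch of the Host-header whitelist idea (a rebound request arrives carrying the attacker's hostname in its Host header, so refusing unknown hosts is enough to drop it):

    // Reject any request whose Host header isn't a known local name.
    // A DNS-rebound request carries the attacker's domain here, not localhost.
    fn host_is_allowed(host_header: &str) -> bool {
        let host = if let Some(rest) = host_header.strip_prefix('[') {
            // IPv6 literal such as "[::1]:1234" - keep what's inside the brackets.
            rest.split(']').next().unwrap_or("")
        } else {
            // "localhost:1234" or "127.0.0.1:1234" - drop the port if present.
            host_header.split(':').next().unwrap_or("")
        };
        matches!(host, "localhost" | "127.0.0.1" | "::1")
    }

    fn main() {
        assert!(host_is_allowed("127.0.0.1:1234"));
        assert!(host_is_allowed("localhost"));
        assert!(!host_is_allowed("rebound.attacker.example:1234"));
    }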
My suggestion is to avoid cookies completely, since they'd be shared with all services on the same IP/hostname (cookies ignore ports). I'd also add a random "key" as the first thing in the URL path, so you'd end up with something like "http://127.0.0.1:1234/Lxk8gE7qnClf/actual/path/here", and have everything else tell the user to open the app from your icon or something.
This prevents malware from accessing your app while avoiding leaking authentication cookies to other HTTP services on localhost.
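Something like this, assuming the `rand` crate for the secret (the helper names are just illustrative):

    // Mint a per-launch secret and require it as the first path segment.
    // No cookies involved, so nothing leaks to other localhost services.
    fn make_token() -> String {
        let bytes: [u8; 16] = rand::random(); // from the `rand` crate
        bytes.iter().map(|b| format!("{:02x}", b)).collect()
    }

    // Expects paths shaped like "/<token>/actual/path/here".
    fn path_is_authorized(path: &str, token: &str) -> bool {
        path.strip_prefix('/')
            .and_then(|p| p.split('/').next())
            .map(|first_segment| first_segment == token)
            .unwrap_or(false)
    }

The URL you hand to the browser at launch then starts with that token, and anything that fails the check just gets the "open the app from its icon" page instead.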
It at least used to be the case that this could be gotten around with Flash, though that may have been fixed, and many people won't run strange Flash content anymore anyway.
Another approach, if you're using WebSockets: you can use pings (with cookies) to establish that the latency is unrealistically low for the traffic to be crossing a switched physical network.
Consider the following task: you have a UI widget and you need to draw something on it, something that cannot be trivially expressed by CSS means.
What would you do in a native application?
You would put custom drawing code (C/C++/Rust/Go) in the widget's WM_PAINT/onPaint/onDraw handler. In Sciter[1] you can do event_handler::handle_draw(graphics* pgfx) - Sciter applications are native ones.
That is the fastest and most lightweight way of doing such things.
Now, what would you do on the Web platform (browser or Electron)?
Calling the server (over TCP in the browser, RPC in Electron) to provide the drawing is clearly not an option.
So you would make a <canvas> element and draw that thing there. <canvas> is bitmap-based, so that bitmap gets allocated both in CPU memory and in GPU memory. Plus there is some nontrivial setup needed to position that <canvas> where you need it.
So at the very least your app's memory requirements go up (compared with a native/Sciter app).
Note that a modern browser uses a separate process for each tab, so you will have at least three processes running your application - with the corresponding memory and CPU consumption needed for RPC between them all.
And what if the thing you need to draw is not that trivial, and is too heavy for script to handle...
Then you start adding a JIT to your script engine, which needs more memory and CPU (at least for bootstrap). Or some smart people propose you use WebAssembly for that, so you load a WebAssembly VM into your application...
See where it goes?
To create obstacles for yourself and then heroically overcome them, right?
If it were just for you personally, you could do whatever you want. But you want to put that burden on your users...
And so: as many users, that many machines converting that needless payload into heat, without doing anything beyond what native applications can already do...
To the point where users need (another) datacenter to access/view/use even trivial apps with “reasonable” responsiveness. There are already services being offered to this end [1].
I enjoy noticing the system tray icon spin on cue when I boot up a laptop or plug a phone in as it syncs over WLAN. It's quite a cool feeling seeing the automatic file sync through the air as if by magic, no separate Dropbox/Google/OneDrive/Nextcloud service needed.
It also supports a super bloat-free option where entire Desktop Apps can be run from Gists. All Apps run from and share the same executable (and re-use its dependencies), so the Gists only need to contain their App-specific scripts and dependencies, giving each App a tiny footprint.
I have. In my opinion it is actually much more secure than Electron since you rely on the browser directly. (And this is why I did it instead of using Electron.)
You do have to add authentication to protect against DNS rebinding, but I solved that by adding a random token to the GET parameters when I launch the page. (You then cache this inside a cookie.)
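A sketch of that handshake (the header plumbing around it is left out; only the matching logic is shown): the launch URL carries the token as a GET parameter, the first authorized response stores it in a cookie, and later requests accept either one.

    // Accept a request if the launch token shows up either in the query
    // string (first visit) or in the cookie set afterwards (reloads).
    fn request_is_authorized(query: Option<&str>, cookies: Option<&str>, token: &str) -> bool {
        let query_ok = query
            .map(|q| q.split('&').any(|kv| kv == format!("token={}", token)))
            .unwrap_or(false);
        let cookie_ok = cookies
            .map(|c| c.split(';').any(|kv| kv.trim() == format!("auth={}", token)))
            .unwrap_or(false);
        query_ok || cookie_ok
    }
    // On the first authorized request, answer with something like
    //   Set-Cookie: auth=<token>; Path=/; SameSite=Strict
    // so reloads keep working without the token in the URL.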
Yes and no - it likely has more extensions and stuff. But since Electron is essentially Chromium + Node.js, I would guess Electron would be "heavier".
Anyway, the reason this would be desirable is that the user's browser is likely already open (and doesn't have to be Chrome), and one of the critiques of Electron is that each Electron app is essentially another instance of Chromium running; they don't share any resources.
But a web browser that's already open just has to open a new tab, and you can close it when you don't need it, allowing the otherwise lightweight daemon to run with minimum performance impact.
I'm thinking of using the user's default browser, not downloading another one for my program. In most cases the default browser is already running - no need to launch another instance the way Electron does.
By using the user's browser, you do have to handle all of the different possible browsers somehow, even if that's only to do annoying browser detection and say "use a different browser". Electron and similar ideas at least limit you to designing / developing / testing for a single browser. At least AFAIK; I haven't done any Electron work.
I mean... it's no different than making any website work in multiple browsers. Which really isn't too hard nowadays unless you're using bleeding-edge features or obscure non-standard ones.
Supporting multiple browsers is always somewhat more difficult than just one browser.
You are right that your choice of features plays into it, but I don't believe that's all there is. Even in the best case (all of the features you want are available in all the browsers you care to support), you should still be testing in those browsers (ideally on all of the platforms / devices you support).
You also have to make the initial decision about what browsers/platforms/devices you support in the first place, and then choose when to re-evaluate your support. None of that is free.
Please don't get me wrong: I think it's a fine path (and one I have chosen myself), but it's definitely not without downsides.
I suppose you could, but I think you'd need to write version detection for a browser, the ability to figure out the user's chosen browser, the logic to download a reasonable version of Chromium, install it, get it past virus scanners, etc.
Using the preinstalled browser would mean bloating your app with polyfills to support all the different browsers. It's one of the reasons modern apps are bloated. Electron solves this.
I'm playing with the idea of creating a Web UI and launching it automatically from the Rust server by opening a browser and pointing it at localhost. No Electron bloat.
Has anyone tried this, any thoughts?