One of the things Sal Mercogliano stressed is that the crew (and possibly other crews of the same line) modified systems in order to save time.
Rather than going through the process of purging the high-sulphur fuel that can't be used in US waters, they had it set up so that some of the generators were fed from US-approved fuel, which compromised redundancy and automatic failover.
It seems probable that the wire failure would not have caused catastrophic overall loss of power if the generators had been in the normal configuration.
I've gone off Kyle Hill after a lot of people pointed out that he was promoting a scam (BetterHelp) in his video about fraud, and his response was just to tell people to deal with it.
This may be good for the selfhoster who is running more than a couple of sites.
But a GUI to manage enterprise-level SSL fleets? Doubtful.
Not when a change/configuration management system (Puppet, Chef, Ansible, etc.) driven by git commits gives you a single source of truth, peer review, and automatic creation/monitoring/renewal of certificates.
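As a taste of the monitoring half, here's a minimal sketch that checks how many days each host's served certificate has left; the host list and threshold are made up for illustration, and in practice they'd come from the same git-managed inventory the CM tool reads.

    import socket
    import ssl
    import time

    HOSTS = ["www.example.com", "api.example.com"]  # hypothetical inventory
    WARN_DAYS = 21                                  # arbitrary renewal threshold

    def days_until_expiry(host: str, port: int = 443) -> int:
        """Connect to the host, grab its certificate, return days until notAfter."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
        return int((expires_at - time.time()) // 86400)

    for host in HOSTS:
        remaining = days_until_expiry(host)
        flag = "RENEW SOON" if remaining < WARN_DAYS else "ok"
        print(f"{host}: {remaining} days left ({flag})")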
You're absolutely right: at the enterprise level, managing an SSL fleet goes far beyond just issuance, and you can't assume the certificates you're issuing are the only ones that exist.
Shameless plug: if you need to cut through the noise of thousands of certs across thousands of hosts, there's https://sslboard.com
To be honest, it's rather difficult and costly to run, with a 1.5B-row database of indexed unexpired certificates and a scanning job that took weeks running from dozens of IPs.
The CT log scanning infrastructure is cloud-based (bare metal, actually); the application DB, service, and host scanning can be on-prem. An exceptional enterprise customer could convince me to offer a 100% on-prem solution.
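To illustrate the "certificates you didn't issue" problem on a small scale, here's a rough sketch pulling CT log entries for one domain via crt.sh's public JSON output (field names as crt.sh currently returns them; an index at the scale described above would read the CT logs directly rather than lean on crt.sh).

    import json
    import urllib.parse
    import urllib.request

    def ct_entries(domain: str) -> list[dict]:
        """Fetch CT log entries for a domain (and its subdomains) from crt.sh."""
        query = urllib.parse.quote(f"%.{domain}")
        url = f"https://crt.sh/?q={query}&output=json"
        with urllib.request.urlopen(url, timeout=30) as resp:
            return json.load(resp)

    for entry in ct_entries("example.com")[:10]:
        # name_value may contain several SANs separated by newlines
        print(entry["not_after"], entry["name_value"].splitlines()[0])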
Hey! Appreciate the concern w/ web requests. If you look closely at the URL you'll notice that all the requests are to *.localhost addresses. This is just how Tauri's asset handling & IPC systems work - nothing to worry about :)
I've gone ahead and created a GitHub issue about adding a checkbox to the installer so that it doesn't force Electro as the default app. It's something I wanted to add pre-announcement but figured it wouldn't be as much of a requested feature as it clearly is, my bad.
Oh bloody hell of course Microsoft is pulling off stunts like this. Thanks for raising this, I'll have a look at what my options are to turn this off.
Tauri 2.0 was initially chosen because it would let me get an MVP out quickly to start getting user feedback (like your own).
My end goal is to move to a custom renderer so I'm not relying on Chromium / WebView2. This will take many months of work I suspect (balancing with my FYP @ university & other projects).
This? Finding outbound/inbound requests to an app?
Not sure it's worth an entire post. But:
The application in question is NetLimiter for Windows https://www.netlimiter.com/
(I'm sure there are others, btw)
It acts as a per-application firewall. It can also block internet access completely, as well as set priorities (bandwidth allocation) per application.
By default it will pop up a window every time an application makes web requests, either inbound or outbound.
You have the option to Deny or Allow the operation, and to make that temporary (the next x minutes) or permanent.
After it's first set up, an alarming number of applications will cause NetLimiter popups, but very soon everything will either be allowed or blocked.
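NetLimiter itself is driven through its GUI, but for comparison, the block-everything case can be approximated with the built-in Windows Firewall. A rough sketch (the program path is a placeholder, and this gives you none of NetLimiter's interactive popups or bandwidth priorities):

    import subprocess

    def block_outbound(rule_name: str, program_path: str) -> None:
        """Add a Windows Firewall rule that blocks all outbound traffic for one exe."""
        subprocess.run(
            [
                "netsh", "advfirewall", "firewall", "add", "rule",
                f"name={rule_name}",
                "dir=out",
                "action=block",
                f"program={program_path}",
                "enable=yes",
            ],
            check=True,  # raise if netsh rejects the rule (e.g. shell not elevated)
        )

    block_outbound("Block ExampleApp", r"C:\Program Files\ExampleApp\example.exe")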
So I spoke to one of the contributors of Tauri who got back to me with the following response:
"
It's not a thing tauri controls and the telemetry settings are an operating system setting since you are running windows this kind of telemetry is not completely avoidable."
This quote from the issue stood out in particular:
"
WebView2 is considered a Windows component, and the data collection consent is governed by Windows Diagnostic setting on Windows 10 as a centralized switch.
End users are empowered to control the data collection of WebView2 and can do so via toggling the Windows Diagnostic setting on Windows 10. This is also what the Edge browser does. On Windows 7/8.1, because there is no Windows Diagnostic setting, we treat this as no consent for optional data. There is very limited required data that the OS always collects, unless you're on some specific SKUs. Developers are definitely welcomed to convey that to their end users and ask them to use the OS toggle.
"
I've been meaning to move away from Tauri + WebView2; this might be the best call to make (not only for this reason, of course).
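For anyone curious where that centralized switch sits on their own machine, here's a minimal sketch reading the machine-wide policy value; it assumes the standard DataCollection policy key, and if no policy is configured, the Settings-app toggle is what applies.

    import winreg

    POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

    def allow_telemetry_level() -> int | None:
        """Return the AllowTelemetry policy value (0-3), or None if not configured."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
                value, _ = winreg.QueryValueEx(key, "AllowTelemetry")
                return value
        except FileNotFoundError:
            return None

    level = allow_telemetry_level()
    print("AllowTelemetry policy:", "not configured" if level is None else level)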
Configuration Management of one form or another is a way to ensure consistency across a fleet of servers and reduce administration overhead. VMs aren't going away any time soon, despite SaaS companies' best efforts.
Companies with 100s to 1000s of hours of investment in software like Puppet aren't going to rearchitect without being forced to, whether by the application becoming unsuitable for current needs or by cost.
Broadcom bought out VMware and jacked the price up by unworkable amounts. Puppet is now owned by a venture capital company, and there's a non-zero possibility they'll follow Broadcom's playbook. That's why Puppet is being forked.
Why not contribute to Ansible or Salt? What individual programmers do in their spare time is irrelevant to the majority of users of those products.
VMs can be run as immutable prebaked images, with no configuration management needed. They may not be going away, but the way the tech is used is changing.
Configuration changes. With an immutable image, every time any parameter changes you have to rebuild and re-deploy the image, which looks like a waste of resources: to change even 1 byte you need to re-build and re-push an image (which could be multiple GB), then restart a whole cluster, which depending on the application can be either costly or disruptive (in the case of long-lived network connections, or if you have a database). And sometimes you need to change configuration parameters many times per day.
What developers tend to do when forced to use immutable infra is move configuration from on-disk files into RAM, queried over a network API from a central system. The problem is that this makes systems less reliable. If a VM/server restarts, it can practically always read a config from disk, but if your service relies on an external system for its runtime configuration, it won't work if that external system is down, overloaded, misconfigured, returning the wrong config because of a bug, etc. And that does happen in practice, even when the system's designers say their configuration API is very reliable (in theory). After seeing such systems fail, I like simple on-disk configs more and more.
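A common middle ground is to keep the last-known-good copy on disk so a restart still works when the central system is unreachable. A rough sketch (URL and path are placeholders):

    import json
    import urllib.request

    CONFIG_URL = "http://config.internal/myservice"   # hypothetical central config service
    CACHE_PATH = "/etc/myservice/config.cache.json"   # hypothetical on-disk fallback

    def load_config() -> dict:
        """Prefer the central service, fall back to the cached on-disk copy."""
        try:
            with urllib.request.urlopen(CONFIG_URL, timeout=2) as resp:
                config = json.load(resp)
            with open(CACHE_PATH, "w") as f:   # refresh the fallback for next time
                json.dump(config, f)
            return config
        except OSError:
            # Central system down, overloaded, DNS broken, etc.
            with open(CACHE_PATH) as f:
                return json.load(f)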
Sure. And some of us may want or need to do live changes at runtime across the fleet without needing a full rebuild and redeploy. It can certainly be more green. It's fine if you find it useful to be dogmatic and consistent but don't expect that to be the right approach everywhere or for everyone.
CM is useful here. And usually in the bootstrapping of such architectures.
I don't know if I'd describe the approach as dogmatic so much as deterministic. Live patching is certainly faster than the alternative, but you have to make sure you do things like restart services when the underlying libraries get updated. Otherwise a naive vulnerability scanner might see that the OS package for e.g. openssl is up to date, while the version loaded by nginx, which has since been deleted from disk, is still vulnerable.
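One quick way to catch that situation on a Linux host is to look for processes still mapping shared objects whose on-disk files have been replaced. A rough sketch (Linux-only; run as root to see every process):

    import os

    def processes_with_deleted_libs() -> dict[int, set[str]]:
        """Map pid -> shared objects that are still mapped but deleted on disk."""
        hits: dict[int, set[str]] = {}
        for pid in (p for p in os.listdir("/proc") if p.isdigit()):
            try:
                with open(f"/proc/{pid}/maps") as maps:
                    libs = set()
                    for line in maps:
                        fields = line.split(None, 5)
                        if len(fields) == 6:
                            path = fields[5].rstrip()
                            if ".so" in path and path.endswith("(deleted)"):
                                libs.add(path)
            except OSError:
                continue  # process exited or maps not readable
            if libs:
                hits[int(pid)] = libs
        return hits

    for pid, libs in sorted(processes_with_deleted_libs().items()):
        print(pid, sorted(libs))

Anything it prints is a process that needs a restart to actually pick up the patched library.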
Er, and how do you make the images? Or a family of tweaked images for different use cases? And make sure security standards, access to centralized services, etc. are working?
I've used both. Ansible is abysmally slow compared to Puppet.
Losing Puppet would be bad -- I don't want to go back to Ansible (I used it from the beginning, when Michael DeHaan was still on board; I don't like the crude mixture of programming language and YAML it became).