Take a look at the MCAP file format (https://mcap.dev). We invented it for the robotics industry, but it’s a generic write-optimized container format for time-series data. Since it’s a container format, you also need to choose a serialization format, such as FlatBuffers or Protobuf.
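To make the container-vs-serialization split concrete, here's a minimal Python sketch. This is not the real MCAP format (real MCAP adds schemas, channels, chunking, indexes, and CRCs; the record framing here is invented for illustration); the point is just that the container stores timestamped opaque blobs and never interprets the payload, while serialization is the caller's choice:

```python
import json
import struct
import time
from io import BytesIO

def append_record(stream, timestamp_ns: int, payload: bytes) -> None:
    # record = 8-byte timestamp, 4-byte payload length, then the opaque payload
    stream.write(struct.pack("<QI", timestamp_ns, len(payload)))
    stream.write(payload)

def read_records(stream):
    # yield (timestamp_ns, payload) pairs until the stream is exhausted
    while True:
        header = stream.read(12)
        if len(header) < 12:
            return
        timestamp_ns, length = struct.unpack("<QI", header)
        yield timestamp_ns, stream.read(length)

# Usage: JSON as the (swappable) serialization layer; could just as well
# be FlatBuffers or Protobuf, the container doesn't care.
buf = BytesIO()
append_record(buf, time.time_ns(), json.dumps({"temp_c": 21.5}).encode())
buf.seek(0)
for ts, payload in read_records(buf):
    print(ts, json.loads(payload))
```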
It’s not just you; this is a fundamental challenge in programming. I think Peter Naur’s paper “Programming as Theory Building” lays out why it’s so difficult: software is a lossy representation of a theory held in one or more individuals’ heads. The original author had a model of how a problem could be solved by written code, and of how that code might be extended or refactored in the future to solve related problems. No amount of API design or naming conventions or documentation can perfectly capture those ideas.
Verdant Robotics (https://www.verdantrobotics.com/) does targeted herbicide applications using weed detection and a two-axis turret, with a laser-based variant in the works.
My company is remote only, and the money saved on office space has been reallocated to additional team outings and a longer runway. Constant outings sound a bit overwhelming, personally.
Our company would happily fork over the $600, but it looks like all of the EV cert options require a hardware signing key. Putting a human in the loop for our otherwise fully automated release process is a non-starter.
Worse still, the SafeNet software that my cert vendor recommends (for interacting with the hardware key) doesn't even work in Remote Desktop sessions!
It somehow detects that you're in an RDP session and, if so, simply reports that no hardware tokens are attached. No message or warning whatsoever. My only Windows PC is headless, and I lost several hours debugging this.
The entire EV cert process is such an outrage. My cert vendor advertised that the validation process would take 2-3 business days if all docs were in order, DUNS info correct, etc. I spent a lot of time ahead of the order ensuring the docs were indeed in order, and the process still inexplicably took 9 business days.
It's not about virtualisation. RDP sessions are actually marked as remote login sessions, and any app can easily check the session type (or just run `query session` to see it yourself).
If TeamViewer is controlling an already logged-in local session, it should work fine.
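For reference, a process can detect a basic RDP session via the documented `GetSystemMetrics(SM_REMOTESESSION)` Win32 call, which is presumably roughly what the SafeNet software checks. A small Python sketch (Windows-only; returns False elsewhere, and it won't catch every edge case like shadowed sessions):

```python
import ctypes
import sys

SM_REMOTESESSION = 0x1000  # documented GetSystemMetrics index for RDP sessions

def in_rdp_session() -> bool:
    """True when the current session is a basic RDP session (Windows only)."""
    if sys.platform != "win32":
        return False  # the API only exists on Windows
    return bool(ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION))

print("remote session:", in_rdp_session())
```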
Back in 2014 I was working at AltspaceVR (a social virtual reality startup) and we had Mac and Windows versions of the product. I set up a Mac Mini at the office to do the Mac builds, and it also ran a Windows VM under Parallels to do the Windows code signing. (The actual Windows builds ran in the cloud and we sent them down to the Windows VM for signing and then it sent them back up to the cloud.)
We had a Digicert code signing certificate that used a hardware key connected to the Windows VM. Unfortunately it required a password to be manually entered each time the code was signed.
To automate this, I wrote a little AutoHotkey script that watched for the password dialog and entered the password.
There wasn't any RDP issue because we didn't use RDP, just a Windows VM that didn't need any user intervention. (It could have been a separate physical machine, but since we had the Mini anyway and it had the capacity, it was convenient to have it do both the Mac builds and the Windows code signing.)
I sometimes think there are few problems that AutoHotkey cannot solve.
Also, Microsoft is working on a code signing service called Azure Code Signing, where Microsoft issues and manages the certificates and keys; you simply upload binaries/app packages to Azure, which does the signing.
That sounds like abuse of a monopoly position to me. They keep the horrendous status quo as bad as possible so their new product looks good by comparison.
Of course, there's a kinda reasonable reason for the hardware token requirement: the widely publicised 2010 worm Stuxnet carried a signed driver, using a stolen copy of Realtek's driver-signing certificate. [1]
And stolen certificates make the whole code-signing house of cards fall apart: you can't trust something signed by Realtek if it was not, in fact, signed by Realtek!
Of course, hardware tokens aren't a panacea: Some malware authors simply set up a shell company and get a certificate issued to that company.
One of my clients has strict requirements for an automated build process, and we managed to use an EV code signing cert on a YubiKey w/ PIN, so it’s definitely possible with a little legwork.
After having gone through it, I agree with other posts that the main annoyance is the verification process and weeks of delays/back-and-forth. That, and the inconvenience of now having a single point of failure in the build process (unless multiple certs are purchased).
Correct me if I'm wrong, but when a fully preconfigured YubiKey is shipped to you as part of the EV cert fulfillment, there's no way to do this after the fact.
You need the key, but there are ways to get a .pfx out of it, which I unfortunately don't remember; it's probably documented by whoever issued your key. Otherwise, signtool can be used with the key directly, though it's not always trivial to get working.
I had the opportunity to go down to JPL and speak with team members about this design decision. The space-hardened processors are not fast enough to do real-time sensor fusion and flight control, so they were forced to move to the faster Snapdragon. This processor will experience bit flips on Mars, possibly as often as every few minutes. Their solution is to hold two copies of memory and cross-check operations as much as possible, and if any difference is detected they simply reboot. Ingenuity will start to fall out of the sky, but it can go through a full reboot and come back online in a few hundred milliseconds to continue flying.
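The duplicate-and-compare idea can be sketched in a few lines. This is a toy Python illustration, not JPL's flight code: run each control step on two independent copies of the state, and treat any disagreement as a detected bit flip that forces a reboot.

```python
class BitFlipDetected(Exception):
    """The two redundant copies disagree; we can't tell which is corrupt."""

def redundant_step(step, state_a, state_b):
    # Run the same control step on two independent copies of the state
    # and cross-check the results: a software stand-in for rad-hard memory.
    out_a = step(state_a)
    out_b = step(state_b)
    if out_a != out_b:
        raise BitFlipDetected
    return out_a

def climb(state):
    # one toy "control step": mutate the state, return the new output
    state["altitude"] += 0.5
    return state["altitude"]

# Usage: both copies agree, then a simulated bit flip corrupts copy B.
state_a = {"altitude": 10.0}
state_b = {"altitude": 10.0}
redundant_step(climb, state_a, state_b)   # both copies agree
state_b["altitude"] += 2 ** -20           # simulate a bit flip in copy B
try:
    redundant_step(climb, state_a, state_b)
except BitFlipDetected:
    print("mismatch detected, rebooting")  # reload known-good state, continue
```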
In the far future where robots are exploring distant planets, our best tech troubleshooting tool is to turn it off and turn it on again.
I'm a little surprised they didn't go for three separate computers and compare them for every operation, or something like that, but I'm sure they have their reasons.
I've never seen an off-the-shelf processor that has hardware support for doing that kind of cross-checking on every instruction. And doing it in software would probably add so much overhead that the error-checking would be much more likely to fail than the application code.
If you're willing to relax your real-time constraints a bit, and risk a brief period of incorrect behavior before the error is caught, the problem becomes vastly easier and cheaper to solve.
>off-the-shelf processor that has hardware support for doing that kind of cross-checking on every instruction.
It is usually done with COTS CPUs, either by running the CPUs in lockstep (the simpler approach, used in earlier CPU generations) or by inserting hardware checkpoints at various points: branches, instruction counts, etc. A recent commercial system of this kind was the triple-Itanium NonStop from Tandem/HP.
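The cross-checking itself needs hardware support to be cheap, but the voting logic at each checkpoint is simple. A toy triple-modular-redundancy sketch in Python (illustrative only, not how NonStop actually implements it):

```python
from collections import Counter

def majority_vote(results):
    # Triple-modular redundancy: accept the answer at least two replicas
    # agree on; if all three disagree, the fault is unrecoverable.
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: all replicas disagree")
    return winner

# Three replicas compute the same value; one suffers a bit flip.
replica_outputs = [42, 42, 42 ^ (1 << 3)]  # third output has bit 3 flipped
print(majority_vote(replica_outputs))       # the faulty replica is outvoted
```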
"This is the first time we’ll be flying Linux on Mars. We’re actually running on a Linux operating system. The software framework that we’re using is one that we developed at JPL for cubesats and instruments, and we open-sourced[0] it a few years ago. So, you can get the software framework that’s flying on the Mars helicopter, and use it on your own project. It’s kind of an open-source victory, because we’re flying an open-source operating system and an open-source flight software framework and flying commercial parts that you can buy off the shelf if you wanted to do this yourself someday. This is a new thing for JPL because they tend to like what’s very safe and proven, but a lot of people are very excited about it, and we’re really looking forward to doing it."
The worst is when I see someone using vim or Emacs over SSH. If you think Electron apps are bad, imagine adding a network round trip between every keystroke!
Suggestion for anyone who's annoyed by SSH latency: try out mosh[0]. It watches to see whether your keystrokes are echoed back verbatim, and if they are, it starts echoing your keystrokes locally without waiting for the network round trip.
That, plus the ability to rejoin a session even from a different IP, makes working over SSH doable even from airplane Wi-Fi.
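The local-echo trick can be sketched roughly like this (hypothetical Python, not mosh's actual protocol; the two-confirmation threshold is invented for illustration): track whether the server has been echoing keystrokes verbatim, and once confidence is established, display keystrokes immediately and reconcile against the server's echo later.

```python
class PredictiveEcho:
    """Toy model of mosh-style speculative local echo."""

    def __init__(self, confirmations_needed=2):
        self.confirmed = 0
        self.needed = confirmations_needed
        self.pending = []   # keystrokes sent but not yet echoed by the server

    def keystroke(self, ch):
        # Returns the char to display immediately, or None if we must wait
        # for the server (not yet confident it echoes verbatim).
        self.pending.append(ch)
        return ch if self.confirmed >= self.needed else None

    def server_echo(self, ch):
        # Reconcile: a verbatim echo raises confidence, a mismatch resets it
        # and we fall back to plain server-side echo.
        sent = self.pending.pop(0)
        self.confirmed = self.confirmed + 1 if ch == sent else 0
        return ch

# Usage: after two verbatim echoes, keystrokes appear with zero round trips.
term = PredictiveEcho()
assert term.keystroke("h") is None    # not confident yet: wait for server
term.server_echo("h")
term.keystroke("i"); term.server_echo("i")
assert term.keystroke("!") == "!"     # now echoed locally, no round trip
```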
Terminal Emacs over ssh is, well, just like anything else in a terminal over ssh. Can't say I notice the latency unless the datacenter is on the other side of the country.
idk, vim across ssh (rn from my place on the west coast to a vps iirc on the east coast) feels a lot snappier and less frustrating than my typical interaction with slack.