C++ with Boost has let you grab a stacktrace anywhere in the application for years. But in April 2024 Boost 1.85 added a big new feature: stacktrace from arbitrary exception ( https://www.boost.org/releases/1.85.0/ ), which shows the call stack at the time of the throw. We added it to our codebase, and suddenly errors where exceptions were thrown became orders of magnitude easier to debug.
C++23 added std::stacktrace, but until it supports capturing a stacktrace from an arbitrary exception, we're sticking with Boost.
The US federal government issues its employees smart cards (Common Access Cards) that contain digital certs. Government employees can use these to send and receive S/MIME encrypted emails. That's a couple million users!
Our small company has been encrypting all emails by default with S/MIME for 15-20 years. A company can generate its own certs for free from a company root cert, use a provider like Sectigo for $20/year, or get US Government ECA certs for about $100/year.
You can read encrypted emails on company-managed mobile devices that have Knox chips to secure access to the certificate. We're careful to back up all our old keys so we can always read old emails.
Some drawbacks are:
- Email "search" features only see the subjects, not the contents, of encrypted emails.
- You can't read encrypted emails via web email.
- Few others have S/MIME certs. Most major government contractors seem confused when we ask about encrypting emails with them...
Johnny may not encrypt, but every business really can.
> It's unfortunate that so many people end up parroting fanciful ideas without fully appreciating the different contexts around software development.
Of course that's true of both sides of this discussion too.
I really value DRY, but of course I have seen cases where a little duplication is preferable. Lately I've seen a steady stream of these "duplication is ok" posts, and I worry that newer programmers will use them to justify copy-paste-modifying 20-30-line blocks of code without even trying to create an appropriate abstraction.
The reality of software is, as you suggest, that there are many good rules of thumb, but also lots of exceptions, and judgment is required in applying them.
> it's much easier to remove and consolidate duplicative work than unwind a poor abstraction that is embedded everywhere.
It's not easy to deduplicate after a few years have passed, and one copy had a bugfix, another got a refactoring improvement, and a third copy got a language modernization.
With poor abstractions, at least you can readily find all the places that the abstraction is used and improve them. Whereas copy-paste-modified code can be hard to even find.
With poor abstractions I can improve abstractions and ensure holistic impact because of the reuse. Then I’m left with well factored reusable code full of increasingly powerful abstractions. Productivity increases over time. Abstractions improve and refine over time. Domain understanding is deepened.
With duplicated messes you may be looking at years before a logical point to attack across the stack even becomes available, because the team keeps duplicating and producing duplicated efforts on an ongoing basis. Every issue, every hotfix, every customer request, every semi-complete update, every deviation adds pressure to produce, with duplication as the quickest and possibly only method available. And each copy-and-paste exercise has its own genealogical nuances that often have rippling effects…
The necessary abstractions often haven't been conceived of even in immature form. Domain understanding is buried under layers of incidental complexity. Superstition around troublesome components takes over decision making. And a few years of plugging the same dams with the same fingers drains and scares off proper IT talent. Up-front savings transmute into tech debt, with every incentive for every actor at every point to make the collective situation worse by repeating the same short-term reasoning.
Learning to abstract and modularize properly is the underlying issue. Learn to express yourself in maintainable fashion, then Don’t Repeat Yourself.
I feel AI does a decent job of finding duplicates and consolidating them into one instance. Abstractions can have far deeper connections and embeddings, making them really hard to undo and reform, but to each their own on what works for them.
I've been writing complex scientific UIs for more than two decades and still don't feel like I always get it right. We aim for "gradual reveal" and making the most common options easy to find and use, but it's hard to get that right for everyone.
Microsoft tried hiding less commonly-used menu options a decade or so ago with Office, and it was so terrible they abandoned it - only to try the same approach with the Windows 11 Explorer menu.
I absolutely hate that rigid "Basic" vs. "Advanced" distinction, but one of our image processing UIs was so complicated a customer really pressed us to add that. We tried and tried and couldn't come up with something better, so we settled on an approach that I still feel is suboptimal.
So I welcome seeing what AI/LLMs may be able to contribute to the UI design space, and customizing per user based on their usage seems like an interesting experiment. But at the same time I'm skeptical that AI will really excel in this very human and subjective area.
The AI spreadsheet example in the linked article is interesting, but that occurs within a more specialized and constrained environment than general GUI design. I think of good UI design as involving a lot of human factors understanding combined with 2D spatial reasoning and layout, which I don't think AI is good at. (Today's article about Claude failing to reproduce an HTML layout to match a screen capture is one example of this: https://news.ycombinator.com/item?id=46183294 )
But my pessimism may be unfounded or based on ignorance. At some point AI will probably get better at these things as well, either with better LLMs or by augmenting LLMs with outboard spatial reasoning modules that they can interact with.
Stellar Science | Hybrid (USA) Albuquerque NM, Washington DC (Tysons VA), Dayton OH | Full time, interns/co-ops | U.S. citizenship required | https://www.stellarscience.com
Company: We're a small scientific software development company that develops custom scientific and engineering analysis applications in domains including: space situational awareness (monitoring the locations, health and status of on-orbit satellites), image simulation, high power microwave systems, modeling and simulation, laser systems modeling, AI/ML including physics-informed neural networks (PINN), human body thermoregulation, computer vision and image processing, high performance computing (HPC), computer aided design (CAD), and more. All exciting applications and no CRUD. We emphasize high quality code and lightweight processes that free software engineers to be productive.
Experience: Except for interns, we currently require a Bachelors degree in physics, engineering, math, computer science, or a related field. Masters or PhD is a plus. (Roughly 25% of our staff have PhDs.)
Technologies: Mostly C++23, Qt 6.9, CMake, git, OpenGL, CUDA, Boost, Jenkins. Windows and Linux, msvc/gcc/clang/clangcl. AI/ML and analysis projects use Python and C++. Some projects use Java or Typescript/React.
Exactly. Unlike Java where every object inherits from Object, in C++ multiply inheriting from objects with a common base class is rare.
Some older C++ frameworks give all their objects a common base class. If that inheritance isn't virtual, developers may not be able to multiply inherit objects from that framework. That's fine, one can still inherit from classes outside the framework to "mix in" or add capabilities.
I've never understood the diamond pattern fear-mongering. It's just a rarely-encountered issue to keep in mind and handle appropriately.
> in C++ multiply inheriting from objects with a common base class is rare.
One example is COM (or COM-like frameworks) where every interface inherits from IUnknown. However, there is no diamond problem because COM interfaces are pure abstract base classes and the pure virtual methods in IUnknown are implemented only once in the actual concrete class.
We paused hiring fresh grads, but still hire interns, and those who prove themselves get full-time offers. We've found internships to be a great pipeline to great hires over the years.
We've had several candidates with completed bachelor's degrees apply for internships, prove themselves, and get full-time jobs that way. This "back door" job hiring pathway might work elsewhere as well.
Same here. We pay interns pretty well and we invest a lot in them during their internship. It doesn't make sense for us (and I imagine others) to take in interns and then not hire the good ones. That's the entire reason we do internships to start with.
Stellar Science | Hybrid (USA) Albuquerque NM, Washington DC (Tysons VA), Dayton OH | Full time, interns/co-ops | U.S. citizenship required | https://www.stellarscience.com
Company: We're a small scientific software development company that develops custom scientific and engineering analysis applications in domains including: space situational awareness (monitoring the locations, health and status of on-orbit satellites), image simulation, high power microwave systems, modeling and simulation, laser systems modeling, AI/ML including physics-informed neural networks (PINN), human body thermoregulation, computer vision and image processing, high performance computing (HPC), computer aided design (CAD), and more. All exciting applications and no CRUD. We emphasize high quality code and lightweight processes that free software engineers to be productive.
Experience: Other than interns, we currently require a Bachelors degree in physics, engineering, math, computer science, or a related field, plus preferably 3+ years of work experience, or a Masters or PhD in lieu of work experience. (Roughly 25% of our staff have PhDs.)
Technologies: Mostly C++23, Qt 6.9, CMake, git, OpenGL, CUDA, Boost, Jenkins. Windows and Linux, msvc/gcc/clang/clangcl. AI/ML and analysis projects use Python and C++. Some projects use Java or Typescript/React.