This is cool. I wonder if your VM could work in conjunction with an LLM? Have you tried making this optimizer available as an MCP server, or exposing some of the calculated invariants as well?
This matches my experience with AI agents. Wiring up the right feedback, and paying attention to make sure they actually use it, is important. Tests and linters are great, but there's usually much more that human devs look at for feedback, including perceived speed and efficiency.
I agree; it is absolutely a matter of judgment and heavily dependent on the stage and specific threats a particular organization faces. It is difficult to balance product velocity against the need to protect the growing "something to lose" that the company is accumulating.
I think one of the best things we can do as security professionals is to identify or work to create security measures that have outsized ROI and advocate for those. Using battle-tested software is one, as are, I believe, measures like MFA.
I'd also submit that one of the most important things is recognizing that ROI requires a net positive return. It's not just the time required to implement a control; you also have to factor in the opportunity cost of the increased friction. Way too many times, I've seen infosec organizations completely ignore that this loss outweighs the actual risk being mitigated. A hyperbolic analogy, but it's like forbidding driving delivery routes to avoid parking tickets.
Hey all, we wrote rpCheckup after seeing the Endgame post (https://news.ycombinator.com/item?id=26154038) the other day. A lot of the issues it surfaces are things we routinely check for with our customers, but we're unaware of any open source tools that help AWS customers run these checks on their own.
We would love to get your feedback and we hope you find it useful.
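For anyone curious about the category of issue this looks for, here's a minimal sketch of one such check. This is not rpCheckup's actual implementation, and it covers only one resource type and one misconfiguration pattern: flagging S3 buckets whose resource policy grants access to the world.

    import json
    import boto3
    from botocore.exceptions import ClientError

    # Flag S3 buckets whose resource policy Allows access to everyone.
    # Assumes AWS credentials are already configured for boto3.
    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            policy = json.loads(s3.get_bucket_policy(Bucket=name)["Policy"])
        except ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchBucketPolicy":
                continue  # no resource policy attached to this bucket
            raise
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") == "Allow" and stmt.get("Principal") in ("*", {"AWS": "*"}):
                print(f"{name}: policy allows everyone to {stmt.get('Action')}")

rpCheckup goes well beyond this, but it conveys the flavor of "who can touch this resource from outside the account?" questions we kept answering by hand.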
My company, Gold Fig (https://goldfiglabs.com), is working on something just adjacent to annotating webpages: we enable you to annotate changes you make on the web. Since so much of our work is powered by online platforms, we wanted a way to keep track of changes we were making in the tools we were using. Similar to how you would annotate a code change with a commit message, you should be able to annotate a CMS change, a platform settings change, or anything else you might be tweaking as you go about your work.
We'd love feedback from anyone who is interested in adding annotations, and especially those working on teams who could benefit from being able to share annotations.
We definitely understand that a lot of data is currently siloed, so, to that end, we're also interested in which annotation formats we should be looking at. We currently export our own JSON format, but we would love to interoperate with existing tools.
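To make the format question concrete, here is a hypothetical sketch of what a single exported annotation record could look like. The field names are invented for illustration and are not our actual schema:

    import json

    # Hypothetical annotation record: a commit-message-style note attached
    # to a settings change made on a third-party platform.
    annotation = {
        "id": "a1b2c3",
        "timestamp": "2021-02-19T14:32:00Z",
        "author": "dev@example.com",
        "service": "cloudflare",      # platform where the change was made
        "target": "dns/example.com",  # the setting that was changed
        "message": "Add TXT record for GSuite domain verification",
        "diff": {
            "before": None,
            "after": {"type": "TXT", "name": "@", "value": "google-site-verification=..."},
        },
    }
    print(json.dumps(annotation, indent=2))

If there's an existing standard (the W3C Web Annotation model, for instance) that maps cleanly onto records like this, we'd love pointers.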
To be clear, at Firebase, we held these kinds of discussions in addition to the technical assessment. A portion of my section of the interview process was usually to do a deep dive on a recent interesting project that the candidate had worked on, either sourced from their resume or just by asking. You're right that you can often tell their level of involvement by seeing how quickly the well of details dries up.
There are tradeoffs, but I found it to be a net positive overall in addition to the technical assessment. Doing this effectively requires that the interviewer have pretty broad exposure to lots of different technology; otherwise you can't distinguish between someone BSing their role and a topic you simply don't know enough about to ask intelligent questions. It's also incredibly subjective, which would make me hesitant to rely on it without more objective criteria.
That is what we're aiming at. We're starting by capturing deltas, but we'll eventually be able to backfill the entire existing configuration, whether it has changed or not. That can then live alongside the configuration for your other services. This is where you get a lot of power: you'll be able to write tests or constraints across different services, for instance requiring that DNS settings include ownership markers for your GSuite domain (a sketch of such a check follows).
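As a hedged sketch of what such a constraint could look like in practice (using dnspython; the domain and helper name are placeholders, not our actual API):

    import dns.resolver

    # Check that a domain's TXT records include the Google site-verification
    # marker that indicates GSuite ownership of the domain.
    def has_gsuite_ownership_marker(domain: str) -> bool:
        try:
            answers = dns.resolver.resolve(domain, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return False
        for record in answers:
            txt = b"".join(record.strings).decode()
            if txt.startswith("google-site-verification="):
                return True
        return False

    assert has_gsuite_ownership_marker("example.com"), "missing GSuite ownership marker"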
Currently, any changes made without the extension won't be tracked. For some services we will eventually be able to capture the delta anyway, provided they offer API access to their own configuration. That is still sub-optimal, however, because you lose the commit message and the opportunity to capture the intent behind the change at the time it's being made.
Our hope is that, much like code review policies (you often can commit directly to master, but by policy you don't, and instead request a review first), your coworkers will see the value in everyone contributing, so that everyone is kept aware and the log is maintained.