There’s a difference between being required to perform the normal functions of the government and being required to espouse a political philosophy. The Hatch Act makes it clear that you can have a political opinion, but that expressing it happens on your own time. So the rationale of the court is “nobody is allowed to use their office for politics” and “by putting words in government employees’ mouths, their right to free speech is being abridged.”
5 U.S.C. § 7323(a)(1): “An employee may take an active part in political management or in political campaigns, except an employee may not — (1) use his official authority or influence for the purpose of interfering with or affecting the result of an election.”
The same way that Eisenhower served under the Democrats FDR and Truman then was elected to the presidency as a Republican?
It's a job. With particular job duties. You do those duties regardless of who's in charge. It's just that under one administration those duties are oriented toward one larger purpose, while under another administration they're oriented toward a different one. That still doesn't change the vast majority of jobs, and for the few it does change, aren't most of those political appointments already?
This is phenomenal advice. The post itself strikes me as lifted directly from reddit (experienceddevs or another sub), but your advice is generally great for any reader curious about how to handle this type of problem employee.
From the perspective of a line manager, your point about not coddling and directly confronting the issue intuitively sounds correct. If it's possible to address behavioral issues in this type of high-talent, high-friction engineer, it doesn't hurt to bruise their ego a little--if anything, doing it respectfully means they listen to and value the feedback more than usual.
Edit: also, took a look at your profile--couldn't tell, what type of org are you VP of eng at? (Private, equity-funded, late-stage, early-stage, fintech, biotech, SaaS, etc.) Curious, as the advice rings sound, but I only saw your consultancy work.
Maintain a good agents.md with notes on the code grammar/structure/architecture conventions your org uses, then for each problem, prompt it step by step, as if you were narrating a junior engineer's internal monologue.
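What goes in that file depends entirely on your org's stack and conventions; every detail below is made up, just a minimal sketch of the kind of notes that help:

```markdown
# agents.md -- hypothetical example

## Code conventions
- TypeScript strict mode; no `any` without a short justifying comment.
- Small pure functions; side effects live in `src/services/` only.

## Structure
- `src/api/` holds route handlers, no business logic.
- `src/domain/` holds business logic, framework-free.
- `tests/` mirrors `src/`, one spec file per module.

## Workflow
- Run `npm test` and `npm run lint` before considering a change done.
- Never edit generated files under `src/generated/`.
```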
e.g. as I am dropped into a new codebase (rough prompt phrasing is sketched after the list):
1. Ask Claude to find the section of code that controls X
2. Take a look manually
3. Ask it to explain the chain of events
4. Ask it to implement change Y, so that X behaves the way we want
5. Ask it about any implementation details you don't understand, or want clarification on -- it usually self-edits well.
6. You can ask it to add comments, tests, etc., at this point, and it should run tests to confirm everything works as expected.
7. Manually step through tests, then code, to sanity check (it can easily have errors in both).
8. Review its diff to satisfaction.
9. Ask it to review its own diff as if it was a senior engineer.
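To make those steps concrete, the prompts might look roughly like this (hypothetical phrasing; X and Y stand in for whatever you're actually touching; numbers match the steps above, and the manual steps are yours, not the model's):

```
1. "Where in this repo is the logic that controls X? List the files and functions involved."
3. "Walk me through the chain of events when X happens, from entry point to side effects."
4. "Implement change Y so that X does <desired behavior>. Keep the diff minimal and follow agents.md."
5. "Why did you change <file>? Is there a simpler approach that doesn't touch the public interface?"
6. "Add comments and tests for the new behavior, then run the existing suite."
9. "Review your own diff as if you were a senior engineer reviewing a junior's PR. List anything you'd push back on."
```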
This is the method I've been using during my first week onboarding onto a new codebase. If the codebase is massive and the READMEs are weak, AI copilot tools can cut overall PR time by 2-3x.
I imagine the benefit dips as developer familiarity increases. From my observation, it's especially great at automating code-finding and logic tracing, which often involve a bunch of context-switching and open windows--human developers often struggle with this more than LLMs do. It's also great for creating scaffolding/project structure. It's weak at debugging complex issues and at less-documented public API logic, and it often fails in junior-engineer ways.
Great walkthrough, I might send your comment to my coworkers. I use AI to write pretty much 100% of my code and my process looks similar. For writing code, you really want to step through each edit one by one and course-correct as you go. A lot of the time it's obvious when it's taking a suboptimal approach, and it's much easier to correct it before the wrong thing is written. Plus it's easier to control this way than trying to overengineer rules files to get it to do exactly what you want. The "I'm running 10 autonomous agents at once" stuff is a complete joke unless you're a solo dev just trying to crap out something that works.
I use Sonnet 4.5 exclusively for this right now. Codex is great if you have some kind of high-context tricky logic to think through. If Sonnet 4.5 gets stuck I like to have it write a prompt for Codex. But Codex is not a good daily driver.
As usual with people describing their AI workflows, I’m amazed at how much complexity and hand-holding the whole process involves. It sounds like you’re spending the time you would otherwise spend on the task struggling with AI tools instead.
An apparently very common CloudFront misconfiguration that has spawned a thousand articles and StackExchange Q&As on how to fix it. Randomly chosen one:
It happens all the time; I have seen it in NYC. Usually it's an early-stage thing: a cofounder leaves after a year, etc. It's much harder to do with a complicated cap table. Investors I could name have even suggested it.
The juice has to be worth the squeeze. There's no sense fighting over fiduciary duty, minority shareholder oppression, and so on unless there is some sort of value at stake, which usually means waiting for a successful exit before taking action.
I think we're saying the same thing --- that none of this matters, just walk away with the vested shares and be a friend to the company. Diluting his founder shares in subsequent rounds is going to be a nonevent, and diluting him to zero in an acquisition --- unless it's a seller's market or a bidding war --- may be as well. It's just not worth worrying about; I think the only real question here might be "do I take a buyout if offered", and this person is nowhere near that yet.
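For anyone wanting to see why round-by-round dilution is routine rather than something to fight: it's just multiplication, and it hits every existing holder by the same factor. A toy sketch with made-up numbers:

```python
# Toy numbers, purely illustrative: a 20% fully vested stake
# diluted by two later rounds that each issue 20% new shares.
stake = 0.20
for new_shares in (0.20, 0.20):
    stake *= (1 - new_shares)  # every existing holder shrinks by the same factor
print(f"{stake:.1%}")          # 12.8% -- smaller, but pro rata with everyone else
```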
I think you bring up an interesting tangential point that I might agree with--that the people doing the misaligning are the reason architecture astronauts remain employed.
But the core of Joel Spolsky's three posts on Architecture Astronauts is his frustration with engineers who don't focus on delivering product value. These "Architecture Astronauts" build layer upon layer of abstraction, so high up that what results is a "worldchanging" yet extremely convoluted system that no real product would use.
> "What is it going to take for you to get the message that customers don’t want the things that architecture astronauts just love to build."
> "this so called synchronization problem is just not an actual problem, it’s a fun programming exercise that you’re doing because it’s just hard enough to be interesting but not so hard that you can’t figure it out."
I don't think this is tangential at all. This whole conversation is exactly the same as Spolsky's point about Napster: it's hard to know what to say to someone who thinks the reason the web was successful was REST, rather than HTML letting you make cool web pages with images in them. And this has played out exactly as you'd expect: nobody cares at all about REST, because it's pure architecture astronaut stuff.