There are jobs (e.g. database engine development) where knowledge of distributed systems algorithms comes in handy. Most likely not nearly at the level of your theoretical PhD, but also much more stimulating than adding fields to JSON.
I've never worked on database engine stuff, so I cannot speak to it, but I've worked a million jobs, many of which have titles like "Distributed Systems Engineer".
My frustration comes from the fact that "Distributed Systems Engineer" seems to broadly translate to "we use Kafka to glue two services together" at most companies. While that can still require some level of clever engineering, it doesn't really require a ton of theory. At multiple companies, when I do on rare occasion come across a project that requires more elaborate planning, I'll suggest using TLA+ to test things out, and management will look at me like I suggested kicking a puppy or something. "Math" is sort of a dirty word in industry, at least in the places I've worked. Maybe I just don't do a great job selling it. In every single job I've ever had, I've been "that weird math guy", despite the fact that I really don't think I suggest anything that arcane; at least it shouldn't seem arcane to a software crowd.
Again, it's hardly something to complain about; "Woe is me, I'm making yuppie money but I don't feel fulfilled all the time". It's just something that annoys me.
Lots of us are increasingly interested in formal methods coming out of the labs and ivory tower and seeing more casual usage in industry.
But pitching this sort of thing to management is hard; if they've heard of it at all, they associate it with big-brain critical-path shit at the highest levels of huge orgs like NASA or AWS.
Can’t blame people for not seeing the value yet though... we need to make things easier/cheaper. Plain old testing is often seen as a cost/effort sink that dev teams and management resent, so of course model checking looks like a whole different layer of ambiguously-worthwhile effort.
Mere advocacy here won’t win people over, because they care more about costs than “correctness” as an abstract virtue. We probably need progress on things like generating code from models and models from code if we want to see more adoption
I agree, and part of the issue is that if the formal methods work as intended, then by design nothing interesting happens, making their success less visible to managers who aren't paying close attention. Ideally something like TLA+ catches a bug you wouldn't otherwise notice for years (e.g. infinite money glitches), but that makes its value harder to see: you're waiting years for the payoff, and the payoff is "nothing happened", which is hard to measure.
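To make the "infinite money glitch" concrete, here's a minimal Java sketch of the kind of check-then-act race a model checker would flag immediately but ordinary testing might not hit for years. The account, amounts, and the latch that forces the bad interleaving are all invented for illustration; in real code the interleaving would just occur rarely and nondeterministically.

```java
import java.util.concurrent.CountDownLatch;

// Toy "infinite money" bug: two threads both pass the balance check
// before either deducts, so an account holding 100 pays out 200.
// A TLA+/PlusCal model check explores this interleaving exhaustively;
// a test suite may never schedule it.
public class Glitch {
    static int balance = 100;   // shared, intentionally unsynchronized
    static int paidOut = 0;

    static void withdraw(int amount, CountDownLatch checked) throws Exception {
        if (balance >= amount) {   // step 1: check
            checked.countDown();
            checked.await();       // force BOTH threads past the check first
            balance -= amount;     // step 2: act (too late, check is stale)
            synchronized (Glitch.class) { paidOut += amount; }
        }
    }

    public static void main(String[] args) throws Exception {
        CountDownLatch checked = new CountDownLatch(2);
        Thread a = new Thread(() -> { try { withdraw(100, checked); } catch (Exception e) {} });
        Thread b = new Thread(() -> { try { withdraw(100, checked); } catch (Exception e) {} });
        a.start(); b.start(); a.join(); b.join();
        // Both withdrawals succeed: 200 paid out of a 100 balance.
        System.out.println("paidOut=" + paidOut);
    }
}
```

The latch is only there to make the bad schedule reproducible in a demo; the point is that a model checker finds this schedule without any such scaffolding.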
I've debated trying to write a PlusCal->Java transpiler [1], and I haven't really ruled that out yet; while I don't think it should be necessary to have that for PlusCal to be useful, I think it would potentially sweeten the sales pitch.
[1] I think PlusCal would translate better to a sequential Java style program than raw TLA+ would.
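To illustrate the footnote, here's a hand-written sketch of the kind of Java a hypothetical PlusCal->Java transpiler might emit for a trivial spec. The spec, class name, and label are all invented for illustration; no real tool produces this.

```java
// Hypothetical source spec (PlusCal, shown as a comment):
//
//   --algorithm Transfer
//   variables from = 10, to = 0;
//   begin
//     Move: while from > 0 do
//             from := from - 1;
//             to   := to + 1;
//           end while;
//   end algorithm
//
// Because PlusCal is already written as sequential, labeled steps,
// it maps onto ordinary Java control flow far more directly than
// raw TLA+ actions (which are unordered state predicates) would.
public class Transfer {
    int from = 10;
    int to = 0;

    void run() {
        while (from > 0) {   // Move:
            from = from - 1;
            to = to + 1;
        }
    }

    public static void main(String[] args) {
        Transfer t = new Transfer();
        t.run();
        System.out.println("from=" + t.from + " to=" + t.to); // from=0 to=10
    }
}
```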
Why don't you try using PlusCal/TLA+ on some portion of the code at work (if you still care to stay there) to prove its effectiveness? Whittle down the code to something basic to show how it'd work so your manager/co-workers can see for themselves on code that they're reasonably familiar with.
Unless your manager/colleagues are intimately familiar with such tools, touting their usage at work is probably wishful thinking.
Just do the math and don't worry about management. If you are generally productive you can take a quarter here and there to focus on a thing only you know the value of.
Yeah, I've taken to occasionally testing some of the simpler stuff I write in PlusCal without telling managers. If they think I'm dicking around with "theoretical" math stuff they might get annoyed but if I submit code that works correctly within a deadline I suspect they'll be alright.
My long term plan on this is to submit a bunch of code that works correctly the first time, and eventually once I am established at this job as "clever" I'll reveal the PlusCal/TLA+ specs and hopefully have a decent sales pitch.
What you’re describing are just run-of-the-mill backend jobs. They aren’t very sophisticated in terms of algorithms or theory. What I had in mind is working on, say, Cassandra or ElasticSearch, or internally within FAANG on one of their database engines. If you’re high enough there (which, granted, may take multiple years) to decide on the future direction of the engine, you get to ponder a lot of theoretical or at least very complex stuff.
I did try applying to MongoDB a few times, and even made it relatively far in the interview process both times, but eventually got the "We regret to inform you..." emails. I also failed a phone-screen for CockroachDB, though I'm actually not 100% sure why. ElasticSearch never got back to me. I certainly applied for these kinds of jobs (and many, many more), but there's not much more I can do than wait for the companies to recognize my brilliance.
When I worked at Apple I did an internal interview for a team that worked on a distributed-RAID-style thing, did really well on the interview, almost got to the offer stage, but then they saw the previous year's review where I got a "Needs Improvement" for (I think) purely bureaucratic reasons that I've talked about on HN before, and they decided not to move forward.
It's tough, because a lot of those jobs are in pretty high demand, so a lot of really qualified people apply and the companies can be picky, and I don't really blame them for that. There are a lot more generic, unsexy backend jobs.