Multiple denied promo applications. Warm, caring language but no attempt at retention on resignation. Other companies unsure of hiring candidate even after 10+ interviews.
The simplest explanation of these datapoints is simply that this person is not operating at the staff level in a way that is fairly obvious to others, yet hard to articulate in a way that this person can emotionally receive and accept.
None of this means they aren’t or can’t be a highly valuable and skilled engineer. Higher levels are more about capacity for high-level responsibility and accountability in a way that makes executives feel comfortable and at ease. “Not enough impact” means that even if this person is involved in high-impact projects, executives do not ascribe the results or responsibility for those results entirely to them.
While this is painful, it is not a bad thing, and it is not a disfavor. People who aren’t ready for great responsibility often underestimate the size of the gap. Watching a talented engineer get eaten alive because they were given executive-adjacent accountability that they weren’t ready for is not fun for anybody. Anyone who has operated in true staff+ or director+ roles at huge companies here knows just how brutal the step up in expectations is. It is far from trivial, and it simply isn’t for everyone.
Author here. I do agree to an extent. But getting datapoints from the other people in the company at those higher positions is important: asking what you can do to improve and how you can make a bigger impact. In my situation, many people at those levels agreed that I should be up-leveled. Some people did say I could work on different projects, but they have also seen people get up-leveled for way less. Some of it is luck as well.
It's also a horrible swe job market out there. Haha
But the biggest thing is to never feel like it's a disfavor. You are worth it and there is always room to grow; I just didn't know how else to grow at the company anymore.
Don’t listen to defeatist BS. If a candidate needs to grow, the response should be to give them small projects to lead and grow. A few university classes in missing subjects, coaching, etc. Not keep them in purgatory.
> You can't take denied promos at face value, honestly.
This was my experience as well.
Maybe your manager didn't push hard enough for you at the level calibration meeting. Maybe your director didn't like the project you were on as much as the one another manager's engineers worked on, so they weren't inclined to listen to your manager push for you. Maybe the leadership team decided to hire a new ML/AI team this fiscal year, so they told the rest of the engineering org that they only have the budget for half as many promos as the year before.
And these are the things I've heard about on the _low_ end of the spectrum of corporate/political bullshit.
There is an argument to be made that playing the game is part of the job. Perhaps, but you still get to decide to what degree you want to play at any given company, and are allowed to leave and get a different set of rules. And even so, there will always be a lot of elements that are completely outside of your control.
Yeah. I really feel for this guy. I'm at a bigco too and at my yoe, I would really like to be officially "senior".
But if I'm being honest with myself I have a bit of growing to do before I am there. The limiting factor is definitely me. I am improving every year but my peers are excellent.
I'm not "senior", but I'm enjoying my work, I'm making more than I ever have, and I'm improving as an IC.
I can't quite tell from the OP's account whether he really is the one being wronged in this situation. But I also think places like Google are not for everyone. At least from this post, I think they'll be happy with the new opportunity.
I mostly disagree with this. Lots of things correlate weakly with other things, often in confusing and overlapping ways. For instance, expertise can also correlate with resistance to change. Ego can correlate with protection of the status quo and dismissal of people who don't have the "right" credentials. Love of craft can correlate with distaste for automation of said craft (regardless of the effectiveness of the automation). Threat to personal financial stability can correlate with resistance (regardless of technical merit). Potential for personal profit can correlate with support (regardless of technical merit). Understanding neural nets can correlate both with exuberance and skepticism in slightly different populations.
Correlations are interesting but when examined only individually they are not nearly as meaningful as they might seem. Which one you latch onto as "the truth" probably says more about what tribe you value or want to be part of than anything fundamental about technology or society or people in general.
Human code review does not prove correctness. Almost every software service out there contains bugs. Humans have struggled for decades to reliably produce correct software at scale and speed. Overall, humans have a pretty terrible track record of producing bug-free correct code no matter how much they double-check and review their code along the way.
So the solution is to stop doing code reviews and just YOLO-merge everything? After all, everything is fucked already, how much worse could it get?
For the record, there are examples where human code review and design guidelines can lead to very low-bug code. NASA published their internal guidelines for producing safety-critical code[1]. The problem is that the development cost of software when using such processes is too high for most companies, and most companies don't actually produce safety-critical software.
My experience with the vast majority of LLM code submitted to projects I maintain is that it has subtle bugs that I managed to find through fairly cursory human review. The copilot code review feature on GitHub also tends to miss actual bugs and report nonexistent bugs, making it worse than useless. So in my view, the death of the benefits of human code review has been wildly exaggerated.
No, that's not what I wrote, and it's not the correct conclusion. What I wrote (and what you, in fact, also wrote) is that in reality we generally do not actually need provably correct software except in rare cases (e.g., safety-critical applications). Suggesting that human review cannot be reduced or phased out at all until we can automatically prove correctness is wrong, because fully 100% correct and bug-free software is not needed for the vast majority of code being produced. That does not mean we immediately throw out all human review, but the bar for making changes for how we review code is certainly much lower than the above poster suggested.
I don't really buy your premise. What you're suggesting is that all code has bugs, and those bugs have equal severity and distribution regardless of any forethought or rigor put into the code.
You're right, human review and thorough design are a poor approximation of proving assumptions about your code. Yes bugs still exist. No you won't be able to prove the correctness of your code.
However, I can pretty confidently assume that malloc will work when I call it. I can pretty confidently assume that my thoroughly tested linked list will work when I call it. I can pretty confidently assume that following RAII will avoid most memory leaks.
Not all software needs meticulous careful human review. But I believe that the compounding cost of abstractions being lost and invariants being given up can be massive. I don't see any other way to attempt to maintain those other than human review or proven correctness.
I did suggest all code has bugs (up to some limit -- while I wasn't careful to specify this, as discussed above, there does exist an extraordinary level of caution and review that if used can approximate perfect bug-free code, as in your malloc example and in the example of NASA, but that standard is not currently applied to 99.9% of human-generated and human-reviewed code, and it doesn't need to be). I did not suggest anything else you said I suggested, so I'm not sure why you made those parts up.
"Not all software needs meticulous careful human review" is exactly the point. The question of exactly what software needs that kind of review is one whose answer I expect to change over the next 5-10 years. We are already at the point where it's so easy to produce small but highly non-trivial one-off applications that one needn't examine the code at all -- I completely agree with the above poster that we're rapidly discovering new examples of software development where output-verification is all you need, just like right now you don't hand-inspect the machine code generated by your compiler. The question is how far that will be able to go, and I don't think anybody really knows right now, except that we are not yet at the threshold. You keep bringing up examples where the stakes are "existential", but you're underestimating how much software development does not have anything close to existential stakes.
People who've spent their life perfecting a craft are exactly the people you'd expect would be most negative about something genuinely disrupting that craft. There is significant precedent for this. It's happened repeatedly in history. Really smart, talented people routinely and in fact quite predictably resist technology that disrupts their craft, often even at great personal cost within their own lifetime.
I don't know that I consider recognizing the limitations of a tool to be resistance to the idea. It makes sense that experts would recognize those limitations most acutely -- my $30 Harbor Freight circular saw is a lifesaver for me when I'm doing slapdash work in my shed, but it'd be a critical liability for a professional carpenter needing precision cuts. That doesn't mean the professional carpenter is resistant to the idea of using power saws, just that they necessarily must be more discerning than I am.
Yes, you get it. Obviously “writing code” will die. It will hold on in legacy systems that need bespoke maintenance, like COBOL systems today. There will be artisanal coders, like there are artisanal blacksmiths, who do it the old-fashioned way, and we will smile and encourage them. Within 20 years, writing code syntax will be like writing assembly: something they make you do in school, something your dad brings up when reminiscing about the good old days.
I talked to someone who was in denial about this, until he realized he had conflated writing code with solving problems. Solving problems isn’t going anywhere! Solving problems: you observe a problem, write out a solution, implement that solution, measure the problem again, consider your metrics, then iterate.
“Implement it” can mean writing code, as it has for the past 40 years, but it hasn’t always meant that. Before coding, it was economics and physics majors who studied and implemented scientific management. For the next 20 years, it will be “describe the tool to Claude code and use the result”.
But Claude can't really code on its own; it's going to shit the bed, and it only learns from human coders, which is the only way it can even tell that an example is a solution rather than malware...
Every greenfield project uses claude code to write 90+% of code. Every YC startup for the past six months says AI writes 90+% of their code. Claude code writes 90+% of my code. That’s today.
It works great. I have a faster iteration cycle. For existing large codebases, AI modifications will continue to be okay-ish. But new companies with a faster iteration cycle will outcompete old ones, and so in the long run most codebases will use the same “in-distribution” tech stacks and architecture and design principles that AI is good at.
I can't speak for the other poster, but I actually recently "abandoned" PC gaming. For me, it wasn't a deliberate decision but more of a change in behavior that occurred over time. I suspect the key event was picking up a PS5 Pro. For me, it's the first console that's felt powerful enough to scratch a similar itch as PC gaming -- except I could just plug it into our Atmos-equipped "home theater" set up and have it not only work flawlessly but be easily accessible to everyone, not just me. Since picking it up, between the PS5 Pro and handheld gaming devices, I just have not played a game on my gaming PC a single time and am currently planning on retiring it as a result.
There may be a connection here with age and the type of games I play too. I'm in my mid-30s now and am not interested in competitive twitch shooters like Call of Duty. In many cases, the games I've been interested in have actually been PS5 exclusives or were a mostly equivalent experience on PS5 Pro vs. PC or were actually arguably better on PS5 Pro (e.g., Jedi Survivor). In some cases, like with Doom: The Dark Ages, I've been surprised at how much I enjoyed something I previously would've only considered playing on PC -- the PS5 Pro version still manages to offer both 60 FPS and ray tracing. In other cases, like Diablo IV, I started playing on PC but gradually over time my playtime naturally transitioned almost entirely to PS5 Pro. The last time I played Diablo IV on my PC, which has a 4090, I was shocked at how unstable and stutter-filled the game was with ray tracing enabled, whereas it's comparatively much more stable on PS5 Pro while still offering ray tracing (albeit at 30 FPS -- but I've come to prefer stability > raw FPS in all but the most latency-sensitive games).
One benefit of this approach if you live with someone else or have a family, etc., is that investments in your setup can be experienced by everyone, even non-gamers. For instance, rather than spending thousands of dollars on a gaming PC that only I would use, I've instead been in the market for an upgraded and larger TV for the "home theater", which everyone can use both for gaming and non-gaming purposes.
Something else very cool but still quite niche and poorly understood, even amongst tech circles, is that it's possible to stream PS5 games into the Vision Pro. There are a few ways of doing this, but my preferred method has been using an app called Portal. This is a truly unique experience because of the Vision Pro's combination of high-end displays and quality full-color passthrough / mixed reality. You can essentially get a 4K 120"+ curved screen floating in space in the middle of your room at perfect eye level, with zero glare regardless of any lighting conditions in the room, while still using your surround sound system for audio. The only downside is that streaming does introduce some input latency. I wouldn't play Doom this way, but something like Astro Bot is just phenomenal. This all works flawlessly out of the box with no configuration.
(0) Be exceptionally intelligent and capable of applying that intelligence to people, not just code or math — necessary for everything that follows.
(1) Keep attention diversified as long as possible until the winning path becomes obvious.
(2) Focus on fringe bets, but pursue many simultaneously until one clearly dominates (see (1)).
(3) Extreme social manipulation — people-pleasing, control- and power-seeking, selective transparency, skillful large-scale dishonesty, and a willingness to hurt or betray when it serves (1) and (2) and the relational cost is acceptable.
(2) brought him into the startup ecosystem and the first YC batch in the first place (he had to start somewhere); combined with (3) he made an early fortune from a failed startup. (3) also ingratiated him with PG and others in those years. (1)+(2) ensured he always had exposure to every plausible frontier of the industry; when he was effectively fired from YC for (1)+(3), (2) made the OpenAI pivot the next obvious move — and a better one. (3) almost cost him his career a second time when the board fired him from OpenAI temporarily, but he survived because (3) also ensured he had enough to offer everyone else that he leveraged his way back in.
My company’s been through layoffs/reorgs every 3–6 months for three years. One thing is true: performance management happens faster. Many chronic low performers were laid off, and a few “too many cooks” problems were resolved. Those benefits are real and genuine.
But it’s a mistake to assume the remainder is automatically high‑performer‑only. Three patterns I’ve seen:
1) People with options leave first. If you can find a stable, supportive org at similar pay, you go. That’s often your top performers. We've lost some truly amazing people who left because they were simply not willing to tolerate working here anymore. Being absolutely ruthless in getting rid of low performers is honestly not worth it when you also lose the people who truly move the needle on creating new products, etc. If you make a mistake and get rid of some people who were talented high-performers, trust is instantly gone. The remaining high-performers now know that they may also be a target, and so they won't trust you and they'll leave whenever they can. And when you're axing 10k+ people, you're absolutely going to make mistakes.
2) The survivors change. Trust and empathy plummet. Incentives tilt toward optics and defensiveness, and managers start competing on visible ruthlessness. You can keep the lights on, but actually trying to innovate in this environment is too scary and risky.
3) In an atmosphere of fear, people who are willing to be genuinely dishonest and manipulative -- and who are good enough at it to get away with it -- have a serious competitive advantage. I've seen good, compassionate leaders go from a healthy willingness to make tough decisions on occasion to basically acting like complete psychopaths. Needless to say, that's extremely corrosive to meaningful output. While you could argue that skillful dishonesty is an individual advantage regardless of climate, an environment of repeat layoffs strongly incentivizes this behavior by reducing empathy, rewarding "do whatever it takes to win" behavior, etc.
Your comment made me wonder if there is a social/economic phenomenon tied to your characterization. I'd be really curious whether there is any academic work further elucidating it.
Edit: Did some research with ChatGPT and found the following papers if anyone else is interested in the above concepts.
At companies where decimation is a given... IME, (3) or variations of it are already predominantly in play.
The most nefarious kind I saw was using tenure capital to influence peers (above and below) into over-engineered complexity that improves one's own longevity (or simply to flex on the basis of tenure), and this is a gameable closed loop: the longer someone has been in a position, the better placed they are to stay even longer via influence, and no one questions it.
The up-levels explain this away as "trust", which is probably slop/laziness or a pure lack of time, given how busy the up-levels are managing up the chain (and working towards their own longevity).
The below-levels are probably afraid to question or oppose strongly, for obvious reasons. This gets worse if the tenured person in question is already a "celebrated hero" or "10x-er".
No, but we own one of their vehicles and in years of ownership have never experienced a recall that involved physically bringing the vehicle in. This one doesn't apply to us, but if it did, that alone would immediately make it stand out compared to every other recall we've experienced with the product (which have never had any effect on us whatsoever).
I think consumer personal finance is hard to disrupt with AI for two reasons: (1) mistakes, hallucinations, etc., aren't acceptable; (2) there really aren't any major secret insights to derive from the data -- just spend within your means, avoid debt or use it sparingly and wisely, target high-interest loans first if you have them, maintain a sufficient emergency fund, max out tax-advantaged accounts, invest in low-fee index funds as much as you can after all of the former. There just isn't much more to it.
If you're already doing all those things, the best way you can help your personal finances is to make good decisions and work hard and with integrity in other areas of your life: grow your career and income, position yourself smartly to lower your odds of layoffs / termination / etc., take care of your health to avoid health-related income disruptions and costs (this includes mental health!), and make smart relationship choices (i.e., don't end up in divorce).
Personal finance is pretty much a solved problem. Breakthrough results generally come from personal and moral discipline sustained consistently over a long period of time. Simple tools like this can help a lot -- I like it.
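As a rough illustration of the "target high-interest loans first" point above, the ordering really is just a sort by rate (the debt-avalanche approach); the names, balances, and rates below are made up purely for illustration:

```python
# Sketch of the "target high-interest debt first" ordering (debt avalanche).
# All balances and APRs here are made-up illustrative numbers.

def avalanche_order(debts):
    """Sort debts so the highest-interest balance gets extra payments first."""
    return sorted(debts, key=lambda d: d["apr"], reverse=True)

debts = [
    {"name": "mortgage", "balance": 250_000, "apr": 0.045},
    {"name": "credit card", "balance": 4_000, "apr": 0.24},
    {"name": "car loan", "balance": 12_000, "apr": 0.07},
]

for d in avalanche_order(debts):
    print(f"{d['name']}: {d['apr']:.1%} APR on ${d['balance']:,}")
# credit card: 24.0% APR on $4,000
# car loan: 7.0% APR on $12,000
# mortgage: 4.5% APR on $250,000
```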
Being able to weigh the benefits of different places to put money is important too. I had an ex who was big on Ramsey and really wanted to pay down our 1.8% rate mortgage and put "rainy day" money into a savings account that also yielded 1-2%.
I argued that the first priority should be maintaining a consistent $5k in the main chequing account and calling that the rainy day money, since doing so would get the $20/mo service fee waived and we'd avoid the semi-regular NSF fees we were getting dinged with from preauth utility bill debits.
Interest of 2% on $5k is only about $8 a month; the savings rate would have needed to be 5%+ before it was worth earning that instead of first getting the account fee waived (or, of course, just switching to a bank that offered fee-free chequing accounts).
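A quick sanity check of that arithmetic (same numbers as above):

```python
# Fee-waiver vs. savings-interest trade-off from the comment above.
balance = 5_000        # kept in the chequing account to waive the monthly fee
savings_rate = 0.02    # the savings account's annual yield
monthly_fee = 20       # service fee waived by keeping the balance

interest_forgone = balance * savings_rate / 12       # ~$8.33/month
print(f"interest forgone: ${interest_forgone:.2f}/mo vs fee saved: ${monthly_fee}/mo")

breakeven_rate = monthly_fee * 12 / balance           # savings rate where interest matches the fee
print(f"breakeven savings rate: {breakeven_rate:.1%}")  # 4.8%, roughly the 5%+ figure above
```

And that's before counting the avoided NSF fees, which only tilt things further toward keeping the buffer in chequing.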
Untrue - there isn't a personal finance solution that doesn't have mistakes. Not one. I've literally tried them all. It's mostly because syncing with accounts is very, very brittle, and a lot of things like stock plans aren't supported well, so your daily view will always be somewhat off.
There are insights to derive from the data, like how much you really spend. But again, it's really hard to get there because the numbers are always off, and most people don't actually want to know.
You are severely underestimating the average person's agency with their money.
I use an app but I also have an excel sheet where I track everything very carefully every week or so.
Trust my excel sheet much much more.
Honestly, this is how everyone I know does it. There is one guy who built his own app, and his is perfect because he has solved for his specific bank accounts.
He knows every $ coming in and going out; it's pretty impressive.
I think the lack of friction AI has is a real problem.
AI models' output is always overly confident. And when you correct them, they will almost always come up with something like "Ah, you're totally right" and switch the output around (unless there are safeguards / deep research involved).
AI doesn't push back, so more often than not you don't second-guess your own thoughts. That pushback is, in essence, the most valuable part of discussions with other humans.
There are some databases with spending habits, so you can compare your spending to folks with similar households, and some prudent guidelines. But it's not rocket science and a bit silly. You don't really need to remind sneakerheads they spend more than average on shoes, or pennypinchers that they spend (too?) little.
For most people, just keeping tabs on spending helps them rein it in. Setting up auto savings and pension contributions also helps. The next step is using cash to pay for discretionary spending, not more automation.
So I’m doing something like this for a legal case I am dealing with for giggles. AI is absolutely giving me bogus answers that I have to fact check. Not sure I would trust it to reconcile items.
Do you have to trust it? Have the AI build a list of all the invoices it can find in your email, then have a separate application reconcile that against your statements. The AI might miss something or invent something, but the mismatch list you get back should be short.
It wouldn't be perfect, but maybe it would be better than having to do it all manually.
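To make that concrete, here's a minimal sketch of the deterministic half of that idea: the AI produces a candidate invoice list (which may have misses or hallucinations), and plain code reconciles it against the statement so only the mismatches need human attention. Field names and matching rules are assumptions for illustration:

```python
# Reconcile an AI-extracted invoice list against bank/credit-card statement lines.
# Only unmatched items on either side get surfaced for human review.

def reconcile(invoices, statement_lines):
    """Return (invoices with no matching statement line, statement lines with no invoice)."""
    unmatched_invoices, remaining = [], list(statement_lines)
    for inv in invoices:
        match = next(
            (s for s in remaining
             if abs(s["amount"] - inv["amount"]) < 0.01
             and inv["vendor"].lower() in s["description"].lower()),
            None,
        )
        if match:
            remaining.remove(match)
        else:
            unmatched_invoices.append(inv)
    return unmatched_invoices, remaining

invoices = [{"vendor": "Acme Hosting", "amount": 42.00},    # found in email
            {"vendor": "Imaginary Co", "amount": 13.37}]    # hallucinated by the AI
statement = [{"description": "ACME HOSTING LLC", "amount": 42.00},
             {"description": "COFFEE SHOP", "amount": 6.50}]

missing, unexplained = reconcile(invoices, statement)
print(missing)      # the hallucinated invoice shows up here for review
print(unexplained)  # statement lines no invoice was found for
```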
Good advice except for this. In today’s society you’ll lose 10/10 times to somebody who’s working hard and doesn’t have any integrity. See Sam Altman, Larry Ellison, Elon Musk, Jeff Bezos, etc.
There are always outlier examples, but for most folks, sacrificing integrity too much will come back to bite them -- marital and family problems, divorce, or personal crises that end up eating through money or having ramifications that affect your career performance are the most common examples.
Living with integrity doesn't mean you can't be highly competitive or make emotionally difficult decisions in a professional context. But it does mean avoiding the kind of personal debt that can detonate your life in a way that most people, unlike Musk or Altman, can't truly get away with.
Said differently, taking moral shortcuts in your career might seem to work in isolation, but doing so in all aspects of your life tends to accrue real consequences that compound over time. And most folks who take moral shortcuts aren't capable of constraining that behavior to a single domain of their life over time. It's like an infection that spreads and is hard to control. Add in any kind of latent substance use risk -- common throughout the adult US population in general -- and successful, talented people end up in the second half of their life nowhere near where their financial potential was.
It's exactly the kind of thing that would be easier to automate and use AI for. Your rationalization would apply to everything; heck, the front page has medical-diagnosis slop on it.
The point is, seen critically, it's obvious that it's not even a replacement for a calculator. Yet the hype pushes on with infinite exceptions: "oh, it's not the best for this thing that I know about, but for everything else, where I'm not an expert, I'll claim AI is perfect."
I must admit that I disagree; I believe AI will only be used more as it improves. There is already a big difference between GPT-4 and GPT-5 in terms of hallucinations. If AI can do your tax accounting in 20 seconds with a 95% probability of accuracy (perhaps better than most accountants; after all, who can really understand tax legislation?) and with some clear checksums, who wouldn't use it instead of doing it themselves? In addition, AI is really excellent at inferring meaning from data (and at seeing relations).
For most folks, filing your taxes is already mostly automated and requires a time investment of maybe 45 to 60 minutes per year. Would it be nice to reduce that down to 20 seconds? Sure, but it's not going to materially change anything in the financial life of most people. I'm all for it if it could be done reliably at scale. Outlier cases where taxation is complicated enough that it's possible to engineer substantial and meaningful financial outcomes often count more as business finance than personal finance. I'm sure artificial intelligence will indeed be more disruptive to business and corporate finance. For true personal finance, though, I suspect it may be more predatory than helpful by selling people into bad decisions that sound smart (which banks, etc., already try to do with so-called financial advisors).
Keep in mind a 5% failure rate would likely be >10 million incorrect tax filings per year solely due to AI errors, not inclusive of additional incorrect filings due to human error as well.
Better idea: the U.S. could adopt the income tax practices of most other countries, where the revenue service tells you what you owe, instead of making you guess. No need for A.I. or Intuit.
FWIW, IIRC Intuit has to offer a free version of their tool with the same capabilities, but it is hidden behind so much stuff that it's hard for people to ever reach it.
Honestly, yes, there shouldn't really even be a discussion about this; the US should definitely do it. I think the US and India are the only two major countries still stuck on something like this, and I'm not sure there are really any advantages to it.
For the 95% of people who get all their income from W2/1099 compensation, bank accounts, and brokerage accounts and who either take the standard deduction or whose only itemised deductions are dependents, mortgage interest, and SALT, there's really no need for filing tax returns at all. But the tax accounting and tax software industries lobby to prevent it.
In the current AI = LLM world, why have a language model do taxes? Why not instead have AI help you adapt your non-AI tax planning platform to local tax laws by reading and comprehending them at scale, a language task?
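A minimal sketch of that division of labour, assuming the OpenAI Python SDK (the model name, prompt, and bracket format are illustrative assumptions, not a real tax product): the LLM only does the language task of turning a statute excerpt into structured brackets, and ordinary, auditable code does the arithmetic.

```python
# LLM handles the language task (statute text -> structured brackets);
# deterministic code handles the math. Output should still be human-reviewed.
import json
from openai import OpenAI

client = OpenAI()

def extract_brackets(law_text: str) -> list[dict]:
    """Ask the model to return tax brackets as JSON for later human review."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{
            "role": "user",
            "content": "Extract the income tax brackets from the text below as a JSON "
                       'list of {"threshold": number, "rate": number}. Text:\n' + law_text,
        }],
    )
    return json.loads(resp.choices[0].message.content)

def tax_owed(income: float, brackets: list[dict]) -> float:
    """Plain progressive-tax calculation over the reviewed brackets."""
    owed, ordered = 0.0, sorted(brackets, key=lambda b: b["threshold"])
    for i, b in enumerate(ordered):
        upper = ordered[i + 1]["threshold"] if i + 1 < len(ordered) else income
        if income > b["threshold"]:
            owed += (min(income, upper) - b["threshold"]) * b["rate"]
    return owed
```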
Try turning off memory. I've done a lot of experiments and find ChatGPT is objectively better and more useful in most ways with no memory at all. While that may seem counter-intuitive, it makes sense the more you think about it:
(1) Memory is primarily designed to be addictive. It feels "magical" when it references things it knows about you. But that doesn't make it useful.
(2) Memory massively clogs the context window. Quality, accuracy, and independent thought all degrade rapidly with too much context -- especially low-quality context that you can't precisely control or even see.
(3) Memory makes ChatGPT more sycophantic than it already is. Before long, it's just an echo chamber that can border on insanity.
(4) Memory doesn't work the way you think it does. ChatGPT doesn't reference everything from all your chats. Rather, your chat history gets compressed into a few information-dense paragraphs. In other words, ChatGPT's memory is a low-resolution, often inaccurate distortion of all your prior chats. That distortion then becomes the basis of every single subsequent interaction you have.
Another tip is to avoid long conversations, as very long chats end up reproducing within themselves the same problems as above. Disable memory, get what you need out of a chat, move on. I find that this "brings back" a lot of the impressiveness of the early version of ChatGPT.
Oh, and always enable as much thinking as you can tolerate waiting on for each question. In my experience, less thinking = more sycophantic responses.
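For what it's worth, the same "no memory, fresh context per task" workflow is trivial to reproduce against the API, where nothing enters the context window unless you put it there. A minimal sketch using the OpenAI Python SDK (model name and prompts are just examples):

```python
# One self-contained exchange per task: only what you pass in is in the context,
# so there is no accumulated memory to clog or bias the response.
from openai import OpenAI

client = OpenAI()

def ask(question: str, context: str = "") -> str:
    messages = []
    if context:
        messages.append({"role": "system", "content": context})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# Get what you need out of one question, then move on; don't accumulate history.
print(ask("Summarize the trade-offs of denormalizing this schema.",
          context="Postgres, ~50M rows, read-heavy analytics workload."))
```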
Totally agree on the memory feature. You just have to look at the crap it tries to remember to see how useless it is and the kind of nonsense it will jam into the context.
“Cruffle is trying to make bath bombs using baking soda and citric acid and hasn’t decided what colorant to use” could be a memory. Yeah well I figured out what colorant to use… you wanna bet if it changed that memory? Nope! How would it even know? And how useful is that to keep in the first place? My memory was full of useless crap like that.
There is no way to edit the memories, decide when to add them to the context, etc. and adding controls for all of that is a level of micromanaging I do not want to do!
Seriously. I’ve yet to see any memory feature that is worth a single damn. Context management is absolutely crucial and letting random algorithms inject useless noise is going to degrade your experience.
About the only useful stuff for it to truly remember is basic facts like relationships (wife name is blah, kid is blah we live in blah blah). Things that make sense for it to know so you can mention things like “Mrs Duffle” and it knows instantly that is my wife and some bit about her background.