Hacker News | PragmaticPulp's comments

> You know what I haven’t seen? not once in 15 years?

> A company going under.

What a wild assertion: The OP hasn’t personally seen a company fail, and therefore software quality doesn’t matter? Bugs and slow delivery are fine?

It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience. Startups fail all the time because their software isn’t good enough or isn’t delivered before the runway expires. The author is deliberately choosing to ignore this hard reality.

I think the author may have been swept up in big, slow companies that have so much money that they can afford to have terrible software development practices and massive internal bloat. Stay in this environment long enough and even the worst software development practices start to feel “normal” because you look around and nothing bad has happened yet.


For what it's worth, Mozilla was nearly killed at least twice by code quality.

Once, at startup. The code inherited from Netscape was... let's charitably say, difficult to maintain. Turning it into something that could actually be improved upon and tested took years, without any release of the Mozilla Suite.

Once when Chrome emerged. Not because the code was particularly bad, but because its architecture didn't reflect more modern requirements in terms of responsiveness or security. See https://yoric.github.io/post/why-did-mozilla-remove-xul-addo... for a few more details.


> For what it's worth, Mozilla was nearly killed at least twice by code quality.

> That’s not to say that I now think that “High quality good! Low quality better!”.

> As many commenters on Cupać’s post observed, while high quality isn’t necessary for success, it improves the chances of it.

_Nearly_ killed. But Mozilla is still kicking around. Maybe not doing the kind of internet-shaping work they once did (here's hoping they get back there), but they continue to exist.


I was in a successful company that nearly died in 2017, when our entire production system corrupted itself due to a subtle scaling bug introduced when we ported the system to a new design. The problem was that the migrated system had been running with the bug for 3 months, so it was no longer possible to revert to the earlier working design. We were down for a week during which no clients could run, and we spent the next 12 months purely digging ourselves out of that hole, with all new development paused and all hands on deck keeping the ship limping along. I would say that bug came very close to ending us. Luckily, we have never disappointed in a similar way since.


What I’m hearing here is that even a software problem so bad it forced you to focus on nothing except trying to fix it, for a full year, wasn’t bad enough to make the company go under.


“My company didn’t go under” is pretty much the lowest bar you can shoot for.

This blog post is saying “Staying healthy doesn’t matter because neither I nor anyone I know died so far.”


This analogy is a bit tortured, but it's honestly pretty good anecdata that something probably doesn't generally kill you if you've been around people doing it for 15 years and none of them have died yet. It can kill you, but that doesn't mean it generally does.

I'd love to come to the conclusion that it's wrong, but it's right. Most companies absolutely 100% can afford to ship bugs. The proof? They're doing it, they have been doing it, and they will continue to do it. That doesn't mean that every company can, always, forever. A single really bad bug can tank a company. In the right conditions, simply being buggier than a superior competitor can tank a company. However, those are mostly just things developers fantasize about. The market is by and large not decided by how elegant and well-designed your software is, more so in some verticals than others. In fact, this is basically true for almost anything that isn't explicitly developer-oriented in the first place.

Just look at enterprise software. Jesus Christ.


People who start smoking at 15-20 still don't die in their 30s. That doesn't mean it's not bad for your health. This is the same thing: it's bad for the health of the company. High dev turnover, poor working conditions, and productivity issues can all lead to death by a thousand cuts, and even if the company doesn't actually go under, you can't say it didn't suffer because of it.


I'm going to be honest, this analogy is just not that good. You can draw some parallels to cancer and bad workplace culture, but it's a very skin-deep/awkward comparison in my opinion.


Well, what is your goal?

Sure, healthy and happy chickens are nice, but if you are into industrial farming and your goal is to make money, you will make their life as miserable as necessary to extract profit. You don't care if they are barely alive as long as the health standards are met.

It's not about you being happy as an individual, it's about the company making money. If your lawyers and sales are good enough to build a captive market of miserable customers, your company can still make a ton of money and be very successful.


From personal experience with early stage startups, it is quite a low bar. I've seen companies "acquired" for pennies on the dollar, after almost the entire staff is laid off. Somehow, this is considered a success even though you would've had a better outcome investing your time and money in almost anything else. Employees lost, investors lost. The founder often gets a cushy job they could've had anyway.


"and I've never met any dead people either"


I think that it is (a little bit) more subtle: the importance of quality OF A PRODUCT (project delivery is another beast) is relative to:

- the customer: either B2B or B2C

- the market share: minimal (< 1%... including all startups) or dominant (> 30%)

- B2C is really dynamic, and a few bad versions/products can make the customers fly away (except with strong dominance - like Windows - or no equivalent product) and shut down a company. Price can be a strong factor, and the cost of migration/switching is usually not considered

- B2B is more conservative: hard to enter the market (so small market shares will need a lot of time to take off... if there's no competitor), but once you're in, the cost of change for a company is usually high enough to tolerate more bad versions (and even more if there are few competitors, incompatibilities between products, legal requirements to keep records, a lot of "configuration", or strong training required for a lot of people...). Companies as customers don't see a switch of software as a technical problem (replacing one editor by another) but as a management problem (training, cost to switch, data availability, availability of people already trained, cost of multi-year/multi-instance licences...)


> It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience.

Is it? What I've seen is the opposite.

Businesses can be terrible top to bottom, slow, inefficient, and painful for customers, and still keep going for years and years. It's more about income/funding than product.

> I think the author may have been swept up in big, slow companies that have so much money that they can afford to . . .

That's what I'm talking about. They are legion! They could be companies that serve a niche that no one else does or with prohibitive switching costs (training is expensive). They could also be companies that somehow got enough market share that "no one gets fired for buying IBM."

Also, you know what those "big, slow companies" have in common? They are successful businesses. Unlike most startups.


> It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience. Startups fail all the time because their software isn’t good enough or isn’t delivered before the runway expires. The author is deliberately choosing to ignore this hard reality.

While I personally haven't seen a company going under due to bad code, one can also definitely make the argument that software that is buggy or doesn't scale will lead to lost profits, which could also eventually become a problem, or present risks to business continuity.

I still recall working on critical performance issues for a near-real-time auction system way past the end of my working day, because there was an auction scheduled the next day and a fix was needed. I also recall visiting a healthcare business which could not provide services, because a system of theirs kept crashing and there was a long queue of people just sitting around and being miserable.

Whether private or public sector, poor code has a bad impact on many different things.

However, one can also definitely make the distinction between keeping the lights on (KTLO) and everything else. If an auction system cannot do auctions for some needed number of users, that's a KTLO issue. If an e-commerce system calculates prices/taxes/discounts wrong and this leads to direct monetary losses, that's possibly a KTLO issue. If users occasionally get errors or weird widget sizing in some CRUD app or blog site, nobody really cares that much, at least as far as existential threats go.

Outside of that, bugs and slow delivery can be an unfortunate reality, yet one that can mostly be coped with.


Agree.

For example, pg said that ViaWeb was successful because they had put care into their code, which allowed them to iterate quickly and integrate new features that customers requested. Whereas competitors were held back by their cumbersome code and slow cadence of releasing features.


pg would say that, though, because it was his company and it worked out for him.

But maybe he was actually wrong about quality being a differentiator, and his competitors could've shipped features with shitty code but had other problems.


Friendster is a memorable example. It did not scale well, and fed-up users flocked to MySpace and Facebook, which came later.

More generally, it depends on how competitive the space the product operates in is, and whether quality is something the buyer values and is able to evaluate. Enterprises, for example, infamously don't appreciate quality as much as consumers do, because the economic buyer does not use the product.


It should have been "going under due to bugs / low quality and slow delivery".


The listing says it's compatible with Pololu pinout. That has become a common term in stepper motor control.

It's little more than a standard IC on a simple board with a standard pinout.


As a counterpoint, my YouTube feed is great these days. When I open it up, I get more of what I want to watch. The hardest part is choosing which video to watch in my limited time.

I think the key is that I subscribe to channels I want to watch and I use the like button on videos I want to see more of.

> I've noticed over the past few years though, that no matter how much I try to tweak the algorithm, I'm just getting mindless junk. And shorts are the worst of it! They're deliberately designed to hook you in, so they're very hard to ignore.

If you're actually clicking the shorts ("very hard to ignore") then you're going to get more of them, period. I get an occasional shorts line in my feed but I scroll right past it.


> I have never in my career had to do anything like designing a large scale system.

Giving large scale system design interview questions for a role where someone never has to work with large scale systems would be a weird cargo cult choice.

However, when a job involves working with large scale systems, it's important to understand the bigger picture even if you're never going to be the one designing the entire thing from scratch. Knowing why decisions were made and the context within which you're operating is important for being able to make good decisions.

> I've worked with the Linux kernel, I've written device drivers, I've programed in everything from Fortran to Go, and that's what I want to keep doing. Why put me through this?

If you were applying to a job for Linux kernel development, device driver development, and Fortran then I wouldn't expect your interviewers to ask about large scale distributed web development either. However, if you're applying to a job that involves doing large scale web development, then your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.


Oddly, knowing the limitations of last year's designs can just as often limit you to last year's solutions. That is to say, the reasons things were done the way they were in the past almost always come down to resourcing constraints.

Yes, it is good to understand constraints. It is also incredibly valuable to be respectful of the constraints that folks were working under before you got there. Even better to be mindful of the constraints you are working under today, as well. With an eye for constraints coming down the line.

But the evidence is absurdly clear that large systems are grown far more effectively than they are designed. My witticism in the past was that none of the large companies were built with the architectures that we now seem to claim are required for growth and success. Worse, many of them were actively built with derision for "best practices" coming from larger companies. Consider: do you know all of the design choices and reasons behind such things as the old Java GlassFish server?

Even more amusing, is to watch the slow tread of JSON down the path that was already covered by XML. In particular the attempts at schemas and namespaces.


> large systems are grown far more effectively than they are designed

It's easy to bake in poorly scaling technical decisions at an early stage that take an obscene amount of engineering effort to undo once the scaling problem become obvious. I've seen intern-days of "savings" turn into senior-years of rework and the scale in my corner of the world is tiny by SV standards.

I always assumed that SV companies experienced similar traumatic misadventures, multiplied up by scale, and baked "thinking at scale" into their technical interviews as a crude (but probably somewhat effective) countermeasure. Even if you only ever use the knowledge one time, indirectly and accidentally, by peer-pressuring your buddy into thinking before coding and therefore avoid a $10M landmine, it was all worthwhile.


It is just as easy to bake in large maintenance and runtime costs in early stage development. Worse, it is easy to bake aspirational growth ideas into the architecture that make it difficult to adjust as you go.

It is akin to thinking you need a large truck when a very cheap pickup will do. Will the pickup scale to larger jobs you may grow to take on? Of course not. But it will be far cheaper to operate and own at the start, so that you can spare the resources to get there.

Now, oddly, this can be taken in several directions. ORM is the poster child that folks love to hate on for how rigid it can be in a mid sized project. And it is also the poster child for how rapidly you can get moving with a database. Which is more important for a project? Really really hard to say, all told.


In contrast, there are a lot of systems out there designed to scale up really quickly, but never achieve the product-market fit to ever need this.

All that engineering for scalability would have been better applied toward finding the right product-market fit.

It’s hard to strike the right balance of engineering in all aspects of a product. But I’d rather be at a company forced to pour hours of senior engineering effort into fixing scalability than one where things can scale to hundreds of millions of users, but you never attract more than a few thousand.


If they know the code base well, it shouldn't be that hard to undo intern-level shortcuts.

There's another failing here which is that quality wasn't gated well enough.


Hmm. This view works except where it doesn't. For example, if you don't pick the right ID/account/object # scheme so you can shard later on, good luck figuring out how to distribute and/or scale the issuing of said IDs years down the road. Some things will never need to be sharded. Some things will kill you if you can't. Every bit of your code is going to make assumptions about this, and you're maybe going to end up with a hot key that's hard to fix, or have to do weird contortions to split your infrastructure by country or region where there are laws or regulations about data residency.

Here are a few others, without explanation of how they'll blow up on you: not being careful about the process around managing feature flags; doing all your testing manually; not doing system-level design reviews, including the outline of a scaling plan, prior to building systems; not building and testing every check-in; doing system releases when it seems good or by marketing requirements instead of on a regular cadence; not having dev/test/production built via IaC or at least by scripts that work in all envs. Not having runbooks.
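The "right ID scheme" point can be made concrete: Snowflake-style IDs embed the shard in the ID itself, so routing a record to its shard never needs a central lookup. Here's a minimal sketch; the field widths and function names are my own illustration, not any particular system's format:

```python
import time

# Hypothetical Snowflake-style layout (field widths are an assumption):
# 41-bit millisecond timestamp | 10-bit shard id | 12-bit per-shard sequence
SHARD_BITS = 10
SEQ_BITS = 12

def make_id(timestamp_ms: int, shard: int, seq: int) -> int:
    """Pack the shard into the ID itself, so any record can be routed later."""
    assert 0 <= shard < (1 << SHARD_BITS) and 0 <= seq < (1 << SEQ_BITS)
    return (timestamp_ms << (SHARD_BITS + SEQ_BITS)) | (shard << SEQ_BITS) | seq

def shard_of(obj_id: int) -> int:
    """Recover the shard with pure bit math -- no lookup table needed."""
    return (obj_id >> SEQ_BITS) & ((1 << SHARD_BITS) - 1)

oid = make_id(int(time.time() * 1000), shard=7, seq=42)
print(shard_of(oid))  # 7
```

If your IDs are a plain auto-increment column instead, recovering this property later means rewriting every ID in every table and every foreign key, which is exactly the years-down-the-road pain described above.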


>This view works except where it doesn't.

It only stops working in the worst-case scenario, though: LOTS of hastily written code (by interns?) that suddenly needs to scale and will take senior-level people years to fix.

If given that situation, most folks here would run the other way. That's years of toil for little career payoff, and a company in this situation is unlikely to be willing to pay for the best people to do it since they didn't want to pay for that in the first place.

It's very likely something like this will just die or get rewritten and it's probably for the best.


But some things are obvious once you build up scar tissue from previous experience.

And scaling could mean "it might work on a developer's PC with 50 rows of data, but it won't work with our current production load because he didn't index a table".
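That missing-index failure mode is easy to demonstrate. A toy sqlite3 session (the table and column names here are made up for illustration) shows the planner switching from a full table scan to an index search once the index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 1000) for i in range(10_000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
# The detail column of the plan row describes the access strategy.
plan_before = con.execute(query).fetchone()[3]   # a full table scan

con.execute("CREATE INDEX idx_customer ON orders (customer_id)")
plan_after = con.execute(query).fetchone()[3]    # an index search

print(plan_before)
print(plan_after)
```

With 50 rows the scan is invisible; with production row counts the same query goes from O(n) per lookup to a b-tree search, which is the whole gap between the developer's PC and the production load.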


To me this is an entirely separate problem.

I’ve noticed that when less experienced people try to solve a problem, they have to look up how other people do it first.

But someone more experienced has a strong understanding of technologies at an abstract level, so they can whiteboard a solution without even involving any specific software (then compare it to how others do it). When you think that way, you're not worrying about JSON or XML. You become neither tied to last year's tech nor too eager to try new tech. You just build something solid that's reliable and long-lasting.

Knowing about different tech used in different designs expands the pool of legos that you can snap together and so it can’t hurt.


There is a similar learning style. Basically, guess the answer and then compare to the other answers. Even before you know anything about it, all told.

That said, I have as often fallen into the trap of trying to build it myself first. So-called "first principles" thinking. That works far less often than folks think it does.


You missed the key statement in the commenter's post:

"If that's a requirement just say so"

Clearly the roles they're applying for are not concerned with the ab initio design of large-scale systems. Which is why they said what they said. They're not whining for the sake of whining.

> Your experience writing Linux kernel code and device drivers obviously isn't a substitute for understanding these large scale system design questions.

A drop-in substitute, no. But an engineer who has the wherewithal to truly master the grisly low-level stuff can easily ramp up reasonably quickly in the large scale stuff as well, if needed. To not understand this is to not understand what makes good engineers tick.

We get the fact that, yeah, sometimes, for certain roles a certain level of battle-tested skills are needed in any domain. Nonetheless, there's an epidemic of overtesting (from everything to algorithms, to system design, to "culture fit") coursing through the industry's veins at present. Combined with a curious (and sometimes outright bizarre) inability of these companies to think about what's truly required for the roles -- and to explain in simple, plain English terms what these requirements are in the job description, and to design the interview process accordingly.


The problem is that the system design interview somehow became a necessary component of the FAANG hiring process.


FAANG and similar companies typically subscribe to something like the "T shaped engineer" philosophy. They're making a conscious choice that their engineers should be comfortable in discussions about distributed systems, performance tradeoffs, etc. regardless of whether they do such things on a regular basis.


Certainly not at the FAANG I work at. We hire specialized engineers to work on device drivers and OS kernels and absolutely do not ask them questions on how to design distributed web services.

I encourage you to apply: https://www.apple.com/careers/us/


Why would you interview for a role at a FAANG company in the first place?


They want to exchange the most money possible for their labor ?


This isn’t true, and even if it were, “most money possible” isn’t a meaningful metric.


So what companies pay more on average than FAANG[1] for developers?

The most money possible is far from meaningless. If I work for a company, which one will deposit the most in my bank account in a year and/or the most in my brokerage account when my stock vests?

[1] not literally FAANG, the most profitable public tech companies


Most Fortune 100 companies are competitive now, and the most money is extremely meaningless.


I assure you that the other Fortune 100 companies are not paying even in the same ball park as Facebook, Apple, Amazon, and Google.

https://finasko.com/fortune-100-companies/

What do you think the average compensation of those companies are?

I’m well aware of the comp levels at at least three of those companies because they are based in my former home town - Delta, Home Depot and Coke.

They pay their senior devs about the same amount as an intern I mentored got as a return offer (cash + stock).


I can assure you they are absolutely paying in the same ballpark as Facebook, Apple, Amazon, Google, you just don't know how to limit your searches to the tech roles.


So give some numbers of what the f100 companies make compared to FAANG?

I’m very familiar with at least three of the companies that are based in Atlanta - Delta, Coke and Home Depot.

Seeing that I’ve worked for corp dev for almost 25 years before joining BigTech.


Depends on the role. $500k total comp is common on the tech scale of the companies I'm familiar with. While not the $800k+ you might see at some FAANG in specific roles, it absolutely is enough to no longer consider FAANG if you're bothered even one iota by their hiring process.


Name a company. You keep being obtuse


Fortune... 3. not FAANG but in retail. Can hit $500k total comp easily, if you live on either coast.


Not according to Levels

https://www.levels.fyi/companies/walmart/salaries

The highest one in retail is Walmart.

Level 5 comp for Walmart is about the same as a mid level developer at Amazon and Amazon is in the middle of the pack for tech compensation.

Again you still refuse to name a company and level and numbers.


If you knew anything about this you'd know "Walmart" isn't what you search.

But if you knew anything about English you'd know "ballpark" doesn't mean "exactly equal".


You said top 3 company in retail. Walmart is the highest in the F100 in retail. Again, name names. If you can't, you're obviously full of it.


Like I said, if you don't understand what's going on here, you're not qualified to have this conversation.


You’re right, I’ve only been in the industry for 27 years across 8 companies including my current one working at BigTech where I do cloud consulting for other large enterprise companies.

What do I know?

And yet you still haven’t been able to prove your claims that “most” pay about the same.


You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

But yeah, all those 27 years of experience totally weren't just you sitting in a chair somewhere, being "part of the team". Got it.

> prove your claims that “most” pay about the same.

first prove I claimed that (hint: I didn't)

I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.


> You linked to a site that literally showed Walmart's non-tech arm (hint: the majority of the tech jobs at Walmart aren't under 'Walmart' on levels.fyi) paying comparable salaries (you yourself compared them)...

Now I’m still waiting for you to prove your claims which were “Most Fortune 100 companies are competitive now” and you haven’t provided a single link.

> I'm starting to think you got fooled into thinking BigTech was the only option, and are now discovering how untrue that actually was.

Well

A) seeing that when I started working, only one of the current FAANG companies existed, I know that FAANG isn’t my only option.

B) seeing that I specialize in cloud architecture and modernization + dev - ie “system design”. I think I’m at the right “F100” company.


I don't owe you any links, I am not here to do your research for you. You already found one company that pays similarly, levels.fyi will give you dozens of others, you even named a few who do pay in a similar ballpark as FAANG.

Besides, it's not actually at all clear if you care about this topic genuinely or are just being a jackass online, so you can forgive me if I don't take your demand for my effort more seriously.


They pay a lot?


In retrospect it was a horrible mistake.


They do some impressive stuff


Salary, I suppose?


My understanding is that this is no longer the case for the more junior software engineer positions at Google and Amazon, where engineers are expected to learn system design on the job before being promoted. If you are applying to a more senior position, then yes, there should be a question about system design, and yes, you will probably be doing system design in your work, so it's completely fair game.


And the second part of this is that just as all non-rich people tend to consider themselves as soon to be millionaires temporarily down on their luck, most startups (especially the VC funded ones making big promises to investors) tend to consider themselves soon to be FAANGs temporarily in the early phase of their inevitable hockey stick growth.


Is this a problem? I would argue that this style of interviewing is much more realistic to day-to-day activities than leetcode


You'd be arguing wrong, imo. No one sits down solo and has to design a system to scale in isolation, and if you do, then something up the chain from that moment went very wrong.

It's a pointless academia-by-proxy situation that encourages filling out teams with the kinds of people who can architect and tinker forever but have no capability to actually deliver software anyone wants to use. This becomes more clear when you look across the last 10 years in FAANG and list out what products have actually been delivered that are improvements for users and customers vs. what's just infrastructure padding or bought in through acquisition.


In both FAANG jobs I had I was expected to design systems solo, and then review them with my team. If the system is complex enough, I would probably whiteboard it first with some teammates while I was designing it.

It is something that was asked of me in interviews, and comes up often in my day to day job. And being able to design systems, and to help review systems others are designing, is probably the single biggest impact thing I do regularly.

It is more useful day to day than the algorithmic knowledge that was also asked of me during interviews. While there are people who use complex algorithms in both companies, most software at Google is converting one protocol buffer into another protocol buffer, and at Amazon it's the same thing but with JSON. If you are a frontend engineer, you might convert into HTML by plugging the values into a template engine.


I have really good analytical skills which I leverage to tackle issues in large scale system piecewise. I have to suspend my skeptical mind and switch to blue hat thinking to come up with something from scratch. Then I take it apart and iterate over it. I don't think large scale system design is a straightforward process and pretending it is so may very well lead to living in interesting times.


8K is four times the pixels and therefore four times the bandwidth of a 4K monitor.

It took us a long time to go from 1080p to 4K. It has taken even longer for 4K at 120-144Hz to be practical.

It’s more likely that you’ll end up with intermediate steps of 5K or 6K than getting 8K 120Hz.

The other limitation is lack of demand. You need a gigantic monitor for 8K to be worth it, and you need a powerful video card to drive it. The number of people who would buy such a monitor is very, very small.
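The 4× figure is pure pixel arithmetic. Here's a back-of-the-envelope sketch, assuming 10-bit RGB (30 bits per pixel) and ignoring blanking intervals and Display Stream Compression:

```python
def gbps(width: int, height: int, hz: int, bits_per_pixel: int = 30) -> float:
    """Raw uncompressed video bandwidth in gigabits per second."""
    return width * height * hz * bits_per_pixel / 1e9

bw_4k = gbps(3840, 2160, 144)   # ~35.8 Gbps
bw_8k = gbps(7680, 4320, 144)   # exactly 4x the 4K figure
print(bw_4k, bw_8k)
```

Even the 4K number already exceeds the roughly 32.4 Gbps that DisplayPort 1.4 carries uncompressed, which is part of why high-refresh 8K remains impractical without compression or newer links.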


>You need a gigantic monitor for 8K to be worth it

I have a 4k 24" monitor that I can still see aliasing on with AA disabled.

8K 32" would give me more real estate and should, in theory, completely eliminate the need for AA.


Which makes me wonder what the article's author's point is. 4K vs 6K on a 32-inch monitor is already far into diminishing returns; 8K on 32 inches is just numbers for numbers' sake.


I don’t know if that’s necessarily true. I use a 32” 4k 144hz monitor at 100% scaling just fine. I’d loooove to replace it with an 8k monitor with similar refresh rate to run at 200% scaling and keep the same amount of workspace I have now


> I do not know how many developers use VS code, but all of them are using electron and it seems to be fast enough for them.

At this point, I think the debate about slow apps is more ideological than reality.

I also think a lot of people are mistaking backend/network latency for front-end slowness. Slack isn’t going to load your scrollback history any faster if the backend is spending all of that time searching the database. People are too quick to blame the front end.

Either that, or some of these posters are running 10-year-old hardware and wondering why it’s slow


> Of course this is just a toy example. But suppose you had a different task that is I/O bound, like processing terabytes of jsonlines files.

If this task was the bottleneck in a large scale system then it would definitely get hand optimized after a proper analysis.

But if this is an occasionally run task or something otherwise not business critical that doesn’t bottleneck anything, spending orders of magnitude more time hyper-optimizing it would be a waste of time and money.

Match the solution to the job. Optimizing everything is one of the age-old mistakes in computer science.


You still need the skills to do it.


My best managers have been umbrellas, but with transparency. If something was happening in the company we would be informed, but could rest assured that our manager would do their best to work the issue for us while keeping us informed.

The worst managers I’ve had were umbrellas, but to such an extreme that they kept us on an isolated island separate from the rest of the company. We didn’t know what was going on in the company and had no chance to integrate that context into our work. It felt good at first, but over time I realized that the umbrella manager was trying to keep us in the dark so they could keep exclusive control over our work and neutralize any possibility of us competing with them among management. The last manager I had like this went so far that they would praise us for our work and give nothing but positive feedback, right up until he cut people for low performance. It felt like everything he did was equal parts performance (looking like the ideal, happy, positive manager) and control (keeping us isolated from the rest of the company so he was always in full control).

Ironically, that manager now posts frequent leadership thoughts on LinkedIn and has a newsletter.


> Ironically, that manager now posts frequent leadership thoughts on LinkedIn and has a newsletter.

Interesting. It may well be that the best management types work on refining their management skills -- not on writing blogs about them.

I'm curious if he ever mentions that rule number one for managers should be "to be humble."


I think it's a really hard balance to strike. There's so much company politics/etc that isn't beneficial for a team to experience, but equally, without being exposed to that noise you won't develop an intuition that helps you navigate the organisation.

Whatever the situation I think it's crucial you can trust your manager to be straight with you and give their unvarnished opinion of things if you ask them directly. That helps you trust they're communicating things accurately to you, which makes you feel more comfortable relying on them to provide you with the information instead of trying to seek it out some other way.


My current boss is this way and it’s almost certainly to prevent competition/control the narrative to upper management and other teams/save his own ass. As a result, I usually don’t know who I’m building software for or how they’re going to use the software. He’s a real detriment to the company and I hope he gets the ax.


I’d be more surprised if they didn’t make low effort feel-good posts on LinkedIn


It’s not really a significant safety difference. The visibility difference isn’t that big.

It’s nowhere near on the level of removing airbags.


There are massive safety differences in terms of visibility, especially at night. I don’t think they should be banned, but heavily restricted, and definitely not allowed to drive after dark.


> I tripled my salary in two years by starting off woefully underpaid,

Unfortunately, this is how most "I tripled my salary in X years" stories look when you dig into the details.

I've spent a lot of time coaching people on interviewing and negotiating. With some people, half the battle is detaching them from their original compensation anchor point and re-centering on real market data.

On the other hand, I've also had to gently convince a lot of eager students that they can't expect $300K full-remote FAANG offers right out of college, despite whatever they heard on Reddit and Blind.


> Unfortunately, this is how most "I tripled my salary in X years" stories look when you dig into the details.

The only unfortunate thing is how many people don’t find their way to the same story. That’s why I shared it the way I did. I thought I was asking a lot, and many people will too. When I just… got it… I realized my negotiating position.

