dabfiend19's comments | Hacker News

Pretty much any public company with salary plus RSUs.


The comment I'm replying to is pretty grayed out as I write this, but is it really that far off?

I know a number of folks who work either as developers or on an IT team for a publicly traded company. 300k+ is a stretch, but a lot of them pull 30-80k / year more than they would at a non-publicly traded company simply because of the stock options they get every year as part of their compensation.

Having publicly traded stock is nice because once you're past the vesting period you can hang onto the shares for >= 1 year and sell them on demand at the 15% long-term capital gains rate instead of the 35%+ you would pay in income tax. An extra 25k to 68k of post-tax income every year on top of an average dev salary is a big deal.
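
(Rough math, assuming the whole 30-80k qualifies for the 15% rate: 30k x 0.85 ≈ 25.5k and 80k x 0.85 = 68k.)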


Technically, electric heaters are nearly 100% efficient at turning electrical energy into heat.


Heat pumps can be 300% efficient.
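
That 300% is the coefficient of performance: a heat pump moves heat rather than generating it, so a COP of 3 means roughly 3 kWh of heat delivered indoors for every 1 kWh of electricity consumed.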


A huge bong rip at 7pm every day really works wonders for me. While it can increase anxiety for some, and even for me sometimes, it always causes a "head change" where I'm like "why am I just staring at my screen right now?"


as opposed to?


Optionals would have been a way to solve this problem


You know what's fun?

Getting a nil where you are supposed to have an Optional.


Afaik this is impossible in Swift and Kotlin; only optional values can contain nil.


Try Core Data with Swift and you will see it happen. Lazily loaded objects (faults) are bridged from Obj-C into Swift and will happily crash on something like a = b where neither side is optional.


Is this happening in the Obj-C code or in the Swift part? I'm not terribly surprised though; my one experience with Core Data was miserable once we strayed even a little from the happy path, and I ended up rolling my own since we didn't need the full functionality anyway. And this was for an internal app, at Apple ┐( ∵ )┌


It's possible in Java.


Right, because the language has the "billion dollar mistake" of nullable references by default, which you cannot change without breaking code. And the original comment was bemoaning that Go chose to have nullable references by default too.


So... just make that impossible. It's not like this is unprecedented at this point. It's a standard feature that even C++, of all languages, supports.


Somebody has used Scala


More likely a Java lib used from Scala than Scala as such.

In "pure" Scala (not in the FP sense, but just without mixing with Java) something like that is almost impossible.


Unless something drastically changed in Scala 3, there is nothing to protect you from null in Scala. In fact even Java is effectively safer thanks to all the null checking done by IntelliJ


Null is basically non-existent in idiomatic Scala. So technically you're right, but besides calling Java libs there is only an infinitesimally small chance of getting NPEs from Scala code. (Scala's NPE is the MatchError ;-))

For Scala 3 there are improvements. It's "null safe" as long as you opt in (modulo Java libs, and of course doing stupid things like casting a null to some other type).

https://docs.scala-lang.org/scala3/reference/other-new-featu...
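
A rough sketch of what that opt-in buys you (compiler flag from memory, so double-check the docs above):

    // With explicit nulls enabled (-Yexplicit-nulls), plain reference types
    // are no longer nullable:
    //   val s: String = null            // compile error
    // Nullability has to be spelled out in the type and checked before use:
    def greet(name: String | Null): String =
      if (name != null) s"hello, $name"  // flow typing narrows String | Null to String
      else "hello, stranger"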


Idiomatic Scala, yes, but while onboarding, engineers hit many cases where they ended up returning null in places they shouldn't.


So you're actually complaining about people who don't know what they're doing? How is this related to Scala?

When you have clueless people on the team, no language will save you. You can also crash Haskell programs by throwing exceptions or just using List.head…


I'd say safety implies safety from incompetent developers. Similar to the use of the word in memory safety.


So you get an Optional which hasn't been set instead of a nil pointer. What's better about that?


The type system knows about it and you're forced to check it
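
Something like this (Scala here, but the same idea applies to any Option/Maybe type): the compiler won't hand you the value until you've written down what happens when it isn't there.

    // Map#get returns Option[Int], not Int, so the "not set" case can't be ignored:
    def port(config: Map[String, Int]): Int =
      config.get("port") match {
        case Some(p) => p
        case None    => 8080  // you're forced to decide what "missing" means
      }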


Right, but my point is, the code which would raise an error because the pointer is nil now raises an error because the Optional is not set.

Is there really that much of a difference between those cases?

I agree though that in an interface, Optional conveys a more explicit meaning than something pointer-like, which is always a good thing.


In practice it does make a big difference, because if the type system knows about it then it can enforce handling of the missing value. (Or more generally, it can just force you to pattern match on the optional type and make sure you handle the empty case.)

Go programs crash at runtime. In general, program failures and bugs should surface as soon as possible. Ideally no later than compile time. Instead, Go makes you wait until the app is running.

For an app that has a lot of configuration options, for example, there can be a latent bug that crashes the binary for some options. And that bug may not be detected for months because nobody was using that combination of config options.

The only real defense of this is to pepper your code with a bunch of nil checks. But these nil checks are also hard to test, so Go devs just learn to ignore missing code coverage. In fact, your code coverage metrics look better if you don't check for nil.

I'm sure at some point Go or a library will offer a version of optional types that is well-adopted. But my point is that by the time Go was designed, null references were already widely considered a bad idea and the source of a huge class of computer bugs. Go still deliberately designed them into the type system.


Some people think Optional/Either/etc are the absolute cure for a certain sort of problem and -- I speak from experience -- it is impossible to convince them that it's not.

Well, of course, if your end users, the business users, are okay if you present them an "optional result" which might or might not be a result, then Optionals/Either _are_ the cure. Unfortunately most end users are pissed if you tell them that you optionally shipped their purchase or that the refund will either be credited to their CC or not.


It's about handling those error cases or failing the build, rather than getting a null pointer exception at runtime that moves execution to some higher part that has no context and little ability to correct the problem, or just crashes. You can still handle it wrong, but you are forced to handle it rather than just, in your example, charging the card and crashing the thread when updating the database that you charged them.


One has to be addressed at compile time and the other is a runtime error. Sure, you can address it wrong, but it's better than any reference anywhere being able to throw and in my experience really does cut down on application crashes.


Interesting. I've been coding in languages which heavily rely on pointers for decades, primarily C, C++, C# and Turbo Pascal/Delphi, and in my experience nil pointer errors are quite rare.

I agree that it's nice to be explicit about optional stuff, but overall it's not been a huge deal in the projects I've been involved with.


Optional doesn't give you that automatically. In C++, you can happily dereference a std::optional without checking it; the compiler won't stop you, and if it happens to be empty that's undefined behaviour.


As opposed to some modern type system feature that would catch such bugs at compile time rather than let them happen at runtime.


Most modern languages have solved the problem. There are a variety of ways to do it.


Can you list the variety of ways?


Basically if you need to manipulate pointers, use safe pointers and keep track of pointer ownership. This is how modern C++ and Rust work.

If you don't need to manipulate pointers then nil really just represents a degenerate or optional value. For optional values these can be encoded any number of ways depending on the type system. One common pattern is an optional type. Another is to annotate the type to indicate that it might be null.

The idea is that if a programmer doesn't check whether a nullable or optional value is missing, then the program should fail at compile time instead of crashing at runtime. Golang chose to crash programs at runtime.

So for whatever reason, Go has decided that null pointer dereferences are not a big deal. But God help you if you try to comment out a variable use without assigning it to "_". Then the program fails to compile.


> So for whatever reason, Go has decided that null pointer dereferences are not a big deal. But God help you if you try to comment out a variable use without assigning it to "_". Then the program fails to compile.

I think that's a good explanation of why people are so frustrated with this. There are lots of features like go fmt, go vet, the compiler checking that you use all variables and all imports that can feel a bit restrictive. But for something like null pointers, there is nothing. It's incoherent.


I don't know about "variety of ways". You just make it so pointers can't be null, then provide a mechanism for opt-in nullability, requiring an explicit check / conversion to get a non-nullable pointer (which you can dereference) and allowing free / cheap / implicit conversion from non-nullable to nullable. This can be:

* separate pointer types (e.g. C++ pointers v references)

* a built-in sigil / wrapper / suffix e.g. C#'s Nullable / `?` types

* a bog-standard userland sum type e.g. Maybe/Option/Optional

In modern, more procedural languages the third option will often have language-level (non-userland) facilities tacked on for better usability, but that's not a requirement.

For cases (2) and (3) it can (depending on language and implementation) also provide a mechanism for making other value types optional without necessarily having to heap-allocate them.


Usually it's forcing the programmer to handle the null case statically, by wrapping the underlying value in something like an optional type and defining the operations that access the underlying value. Think of Swift's optional unwrapping in "if let" statements.


Why does a person's interest in the application of their work serve as a signal for whether you should hire them? I mean, maybe for someone in a product role, but how is it relevant to hiring an individual contributor?

Not hiring someone just because of their internal philosophies feels like gatekeeping to me.

If someone is a cynic and realizes most startups aren't out there "making the world a better place"... that doesn't really have any bearing on their potential output.


It is gatekeeping. That's the point. I don't want to work with people who feel it's reasonable to be unaware of or agnostic to the effect their work is going to have on actual people. I don't believe in the meme that someone's role ought to dictate whether they need to consider the consequences of their creative efforts on other human beings.

I have a lot more respect for people who consider these things, and just have different opinions than I about what they consider worthy applications, than those who just consider it unnecessary to think about these things. We have an obligation, if we are going to call ourselves "engineers", to consider what we are working on from an ethical perspective.


I think there is a bit of goal post moving happening here:

- You started with ad systems as an example of evil: they patently aren't. They are more a result of the deeper cause: folks don't want to pay for things if they can avoid it. So the bill gets moved to a different table, that's all. All the humanitarian efforts (if any) stand on the shoulders of the money generated from ads.

- If someone says ‘I just want to solve hard problems’, it is quite a leap from there to assuming they don't care about social problems. Maybe they don't feel empowered/qualified to tackle the big social questions and are just trying to make a living and possibly be productive doing so. Or they don't want to tackle a social conversation in a workplace setting.

I am very wary of the forcing that’s happening of making everyone involved in social/philosophical questions whether they like it or not. A lot of people just want to make it through the day/build expertise in something and make it through their life. They’d prefer to pay taxes and let other entities / experts deal with those. This doesn’t mean apathy, it just means a lack of time and ability. I think that’s worth respecting.


> goal post moving

You're being silly. The guy is explaining his perspective. He's explaining what he believes and why he believes it. He's not writing a thesis or constructing some logical argument. This isn't a debate. Applying the term "goal post moving" to this makes absolutely no sense.

I just feel like you're taking a confrontational approach rather than just trying to understand his position. Nothing he says is inherently contradictory.


Lol isn’t it odd you consider the defense confrontational while the op started with calling a bunch of folks morally challenged?

Fwiw - I don’t work on ad systems. I was just stating my opinion about how borderline ethical considerations from misuse are pervading engineering and science today. What about intent?


I didn’t say anyone was morally challenged. I said there are a lot of people I’ve encountered in my career that are ethically apathetic. I highlighted ad systems (not all ad systems, just some) as the kind of thing I personally consider toxic and where I have encountered people who check themselves out from caring about the ethical dilemmas involved in developing such systems, focusing instead on the fun puzzles involved.

My point isn’t that I won’t hire people who worked on such things, my point is I won’t hire people who are completely disinterested in the ethics of what they are doing. I’m not imagining this, I have worked with many people like this in my several-decades long career. Beyond the ethics, this is just good business, since people plowing ahead on things while being blind to ethics is how people get harmed and lawsuits get filed.

This isn’t a revolutionary concept in other engineering fields: you can lose your license if you violate certain codes of ethics, either maliciously or due to ignorance or apathy towards following them.


I never said all ad systems are evil, yet you are saying no ad systems are evil.

I never said that if someone doesn't care about the purpose of their work, they don't care about social problems.

If you're going to turn this into a debate, at least try not to tear down strawmen.

The point of my post wasn't to make strong claims about ad systems being universally evil. It's just like, my opinion man, that some are. The point was to state that I do not want to work with people who, knowingly, do their work in an ethical vacuum, focused entirely on the technical problems at hand.


No you didn’t call them evil: you just called them

>Very, very toxic things for society, like human behavior modification (ad) systems

You didn’t say those points about people’s intents, you just said you won’t hire them / won’t work with them.

Sorry for paraphrasing. My argument stands.

Yes you’re allowed to have whatever opinions you want to hold. But here you’re proclaiming it in a public space where it can definitely be construed as judgmental.

Finally, you call my arguments strawman-fighting and yet you construct one yourself: ‘folks who work in an ethical vacuum’. My whole point is that's probably a very minuscule number of folks, and something you are refining, no-true-Scotsman style, out of your previous generic statements. My whole response is that most folks do consider it, but file it under fair-use expectations and move on, so it is not a fair opinion. That's all.


You aren’t making any sense.

You sound like you were offended by my characterization of ad systems and extrapolated a ton of imaginary arguments from there you’re attacking. I’m not sure who you are arguing with, but it sure isn’t me.


It's ineluctable. If you are an engineer, your work has a moral and ethical axis that is inseparable from the rest. This is what our professional societies believe, it is what you are taught in school, and it is in many ways no more than taking responsibility for your actions.

What you are describing is apathy. You don't get to stand apart from the work that you do because it is hard.


I see two issues with that way of thinking:

- morality and ethics are a gradient and are fluidly getting defined as we evolve. Are you still immoral or apathetic if you use electricity generated from coal? Or are you saying we are all apathetic but this is the one instance you want to stake your argument on?

- almost all systems get misused over time: are all those makers apathetic? What about the intent of the hustlers using such systems?


Great, you've successfully diluted the statement with questions that are adjacent, if you squint.

Engineering work has an ethical element to it. I do not see how your ‘just asking questions’ intersects with this.


As someone founding an early-stage startup, I really appreciate hearing your thoughts on this and it's very encouraging to know I'm not alone.


Holy Christ, are you seriously asking why ethics and concern for how the systems you design interact with end users and targets of those systems might be a worthy consideration?

Let me give you a concrete example:

Imagine you are a software engineer tasked with building a facial recognition system that helps police identify known criminals and find suspects near the time and location of a crime. It observes nearby people and assigns each a probability of being a known criminal. The police department demands 80% accuracy for the product.

You design such a system using some blackbox facial recognition AI, and you get the following results:

Overall 78% accuracy, with:

6.5% false positive rate
31% false negative rate

Not too bad, you tweak some things, hit your 80% accuracy without messing with the false positives too badly, and you meet the specification provided by the client. Mission accomplished and you're ready to ship right? Makes the company money? No problems?

Cool. Except, because you didn't really care that much about how the technology you deployed would be used or the ethics surrounding its use, you failed to consider the right performance targets despite what your client asked for and your system is nearly 100% racist.

What happened?

You trained on equal numbers of prison mugshots and mugshot-like photos of people with no criminal records. You failed to consider that black people are overrepresented in the US prison system (38% of prisoners but 13% of the US population). Your classifier just learned to label someone a likely criminal if they were black, using essentially no other criteria.

Yet the actual likelihood that the people identified by the system as "criminals" in fact have a criminal history is at most somewhere around 33%, despite the fact your system labels it as 80% likely. Worse, even in a hypothetical situation where black and non-black people are present in their average proportions, there's a near equal number of black and non-black people with criminal histories in the vicinity of the crime! Worse still, since people tend to be more segregated than that, when black people are even more of a minority there will be more non-black people with criminal histories around. And when black people make up a greater proportion, the likelihood of being falsely accused goes up even more.
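
To put rough numbers on that (sensitivity is 1 minus the 31% false negative rate above, the 6.5% false positive rate is from above, and the share of nearby people who actually have a record is a number I'm making up purely for illustration):

    // Back-of-the-envelope positive predictive value
    val sensitivity = 0.69   // 1 - the 31% false negative rate
    val fpr         = 0.065  // the 6.5% false positive rate
    val prevalence  = 0.05   // assumed: ~5% of people near the scene have a record
    val ppv = sensitivity * prevalence /
      (sensitivity * prevalence + fpr * (1 - prevalence))
    // ppv ≈ 0.36, the same ballpark as the ~33% above, despite the "80% accuracy"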

And FYI... such systems with similar flaws have actually been built and deployed in the past. How do you think that plays on trust in the company and the technology in general in the long run? Considering end-use ethics brings value.


It's a very troubling example, but most of it is focused on product failures. Isn't that a bit orthogonal to ethics? Maybe there's some traction between how hard something is to 'get right' and if it should be attempted, but it sure doesn't seem black and white.


Warren Buffett has a great quote on this topic. He says he hires on three criteria: intelligence, energy, and character. He adds, "Those first two will kill you if you don't have the last one. If someone's immoral you want them to be dumb and lazy".

Being a high performer is not a positive when someone's looking to take advantage of you.


Sure we did. That ~90% number came before the South African variant, and many others. That's arguably why the J&J had a much lower rate than Pfizer, as it was tested amongst test pools with different variants circulating.


> That's arguably why the J&J had a much lower rate than Pfizer, as it was tested amongst test pools with different variants circulating.

J&J was tested in populations that didn't have the SA variant, and Pfizer was tested in populations that did (in addition to the other combinations you mentioned). Tests were done worldwide, and we had natural experiments due to different tests at different times in different (relatively) isolated populations.

That effect exists, but only covers a small percentage of the difference in efficacy.


what does this have to do with the linked article?


I work at a smaller company, and here we convert the $ price to a number of shares by taking the 100 day VWAP of the stock from the date of the board meeting where your grant is approved.


I had to google it...

VWAP == Volume Weighted Average Price

https://www.investopedia.com/terms/v/vwap.asp
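
For anyone else who was wondering, it's just the average price weighted by how much volume traded at each price. A toy sketch:

    // Volume-weighted average price over (price, volume) pairs
    def vwap(trades: Seq[(Double, Long)]): Double =
      trades.map { case (p, v) => p * v }.sum / trades.map(_._2).sum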


Either we work at the same place or that’s very common.

When I joined I got a random email that said “your rsu $ to shares conversion was X shares and here’s your schedule” and my schedule is all in # of shares not $


This is very common. It's basically the point of paying in RSUs and not in fixed dollars, to give employees incentive to care about the company/stock performance.


Yes, but then they take that calculation to grant you a set number of shares (ISOs, RSUs, etc.) that doesn't change. The value of your award grows with the growth of the company.

ESPP-like programs, on the other hand, are always dollar denominated and exchanged at a set rate at the end of the offering period.


GP is correct in that some companies (Stripe, MS, others) convert the dollars to shares at vest time, while others do so at grant time.

The first way makes employees lose out on an average of 2 years of stock growth.
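
(Made-up numbers to illustrate: a $480k grant over 4 years at a $120 reference price locks in 4,000 shares at grant, about 83 per monthly vest. If the stock doubles, each vest is worth about $20k. Converting at vest time instead would always hand you $10k of stock per month regardless of price, which is why grant-time conversion captures the growth.)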


I stand corrected; however, my point is that the number of shares is fixed at the time of grant.


All of my friends at Google say the grant is in dollars and stays in dollars at Google. At the end of the quarter you get a variable number of shares based on the current stock price. While this reduces upside, it also reduces downside.


Wasn't true when I was (re-)hired a year ago. The grant is in dollars, it's converted to shares based on the share price when you start, and then you vest that fixed number of shares divided across the number of vesting periods over the term of the grant. The vesting period depends on how big the grant is; low-level employees will likely vest quarterly, mid/high-level monthly. No cliff; they removed it a couple years ago.


I've been at Google fairly recently, that's not how it works.

Your offer states that you will get $X of shares, vesting over 4 years. The $X is converted to a number of shares shortly after joining, based on market price, and is locked in from that point forward.

Your friends are mistaken.


This is correct. I believe the share-price they pick is something like the average daily share price for the next full month after you join. So if you join Jan 28, the $$ value of your stock grant will be evaluated around March 1 to determine your grant-price, and then you'll get your first vested shares monthly (around the 25th, I think?).

I believe they used to do quarterly vesting for folks who didn't get much equity, but now that you can have vesting of fractional shares, ~everybody should be on a monthly vesting schedule.


Googler here. dabfiend19 is correct, around ~2019 I believe new L3 hires were switched to this new grant schedule. Prior to that, it was based on a fixed number of shares.


Fwiw, I asked around and couldn't find a hire where this was the case. The schedules are different, but the conversion to shares happens once, just after you are hired.

This is a bit of a change from e.g. when I was hired and my offer letter said N shares, so when I joined 6 months later my grant had increased in value significantly. But it's still not the situation in these other companies, where you vest 6.25% of the dollar value each quarter, converted to shares or whatever.

Actually that wouldn't really be compatible with Google's vesting schedule, now that I think about it.


you have misunderstood your friends, or your friends have misunderstood their grants


Your friends might be getting mixed up with annual refresher grants. Except for some rounding of fractional shares, your initial grant will vest the same # of shares every month for 48 months


Well if the stock tanks you’ll probably be laid off. I don’t see any upside with this model for the employee.


what do you use?


I’m not the parent commenter, but I use Lichess. It’s free, supported by donations (so their incentives align with mine more than a business’), open source (which I like ideologically), and in my opinion just has a nicer user interface.


I started playing chess online less than a month ago. I started with chess.com, because I knew about it beforehand for some reason. After about a week there I couldn't take their constant nagging about getting a premium account: “Get a free 7-day trial”, “Get a premium account to unlock more analysis”, “You've reached your maximum puzzles for the day, get a premium subscription to unlock more”, etc.

I did some research (mostly on the online-go.com forums) and joined lichess.org and haven’t looked back since. Superior in every way.

Perhaps I should thank chess.com for being so annoying; if it weren't for their constant nagging I would probably have stayed on the platform and never discovered lichess.org.


Lichess is much better in most ways.


Chess.com is much better in most ways.


Maybe lichess.org?


lichess.org, and I'm also a patron

