If the regime survives, it is also going to target (and murder) a whole hell of a lot of civilians it suspects of aiding Israel, and many/most of them will almost certainly be innocent. Due process is not a thing with the IRGC.
By looking the account up with Google's People API - https://developers.google.com/people
They would have to verify the account is active
If I log in using Google oauth, you already know the Google account is active.
AND the id hasn't changed
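For illustration, that pair of checks might look something like this sketch, assuming a Python backend using the google-auth library; lookup_user_by_email and the stored google_sub column are hypothetical:

```python
# Sketch: accept the login only if the Google account is active (the ID token
# verifies) AND its stable "sub" id matches what we stored at first sign-in.
# lookup_user_by_email and user.google_sub are hypothetical.
from google.oauth2 import id_token
from google.auth.transport import requests

def verify_google_login(raw_id_token: str, client_id: str, lookup_user_by_email):
    claims = id_token.verify_oauth2_token(raw_id_token, requests.Request(), client_id)
    user = lookup_user_by_email(claims["email"])
    if user is None or user.google_sub != claims["sub"]:
        # Same email address, different underlying Google account: reject.
        raise PermissionError("Google account does not match the one on file")
    return user
```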
Yes, but that's an additional check, separate from the one you suggested would eliminate the issue:
If, when you logged into Slack via Google Oauth with the email address user@company.com, Slack checked with company.com whether user@company.com was a valid user that should be allowed to login, then this problem would be avoided entirely because the defunct company would no longer report any valid users.
> If I log in using Google oauth, you already know the Google account is active.
You know there is an active Google account but (for the public OAuth integration option) it can be any Google account from any workspace, or no workspace.
"A public application allows access to users outside of your organization (@your-organization.com). Access can be from consumer accounts, like @gmail.com, or other organizations, like @partner-organization.com." [1]
> Yes, but that's an additional check, separate from the one you suggested would eliminate the issue:
If you set up an internal OAuth integration option, no separate check is necessary; it will actually restrict access to users of your workspace.
"An internal application will only allow access to users from your organization (@your-organization.com)." [1]
You can use the SAML integration option as well. [2]
Right, this additional check should not be necessary in a typical OAuth or OIDC flow. This workaround is only necessary in this case because the API Google offers to services has a hole in it.
> because really it comes down to the frequency of data parsing into and out of the protobuf format.
Protobuf is intentionally designed to NOT require any parsing at all. Data is serialized over the wire (or stored on disk) in the same format/byte order that it is stored in memory.
(Yes, that also means that it's not validated at runtime)
Or are you referencing the code we all invariably write before/after protobuf to translate into a more useful format?
You’re likely thinking of Cap'n Proto or FlatBuffers. Protobuf definitely requires parsing. Zero values can be omitted on the wire, so there's no fixed layout, meaning you can't seek to a field. To find a field's value, you must traverse the entire message and decode every tag number, since the last tag wins.
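For illustration, a hand-rolled sketch of that traversal (real protobuf runtimes do this in generated/optimized code and handle more wire types than shown here):

```python
# Sketch: to find field 1 you must walk every tag/value pair in the message;
# a later occurrence of the same field overrides an earlier one.
def read_varint(buf: bytes, i: int) -> tuple[int, int]:
    result, shift = 0, 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, i
        shift += 7

def find_field(buf: bytes, wanted: int):
    value, i = None, 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire_type = key >> 3, key & 0x7
        if wire_type == 0:                      # varint
            v, i = read_varint(buf, i)
        elif wire_type == 2:                    # length-delimited
            length, i = read_varint(buf, i)
            v, i = buf[i:i + length], i + length
        else:
            raise ValueError(f"wire type {wire_type} not handled in this sketch")
        if field == wanted:
            value = v                           # last tag wins
    return value

# Field 1 appears twice (150, then 279); both must be decoded, and 279 wins.
print(find_field(bytes.fromhex("08 96 01 08 97 02"), 1))  # 279
```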
Cap'n Proto (developed by Kenton Varda, the former Google engineer who rewrote/refactored Google's protobuf while at Google and later open-sourced it as the library we all know today) is another example of zero-copy (de)serialization.
> Protobuf is intentionally designed to NOT require any parsing at all
This is not true at all. If you have a language-specific class codegen'd by protoc then the in-memory representation of that object is absolutely not the same as the serialized representation. For example:
1. Integer values are varint encoded in the wire format but obviously not in the in-memory format (see the sketch after this list)
2. This depends on the language, of course, but variable-length fields are stored inline in the wire format (and length-prefixed), while the in-memory representation will typically use some heap-allocated type (so the in-memory representation has a pointer in that field instead of the data stored inline).
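To make point 1 concrete, a small sketch contrasting the wire bytes for field 1 = 300 with a fixed-width in-memory integer:

```python
# Sketch: protobuf wire encoding of field 1 = 300 vs. a 4-byte in-memory int.
import struct

def encode_varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | 0x80 if n else b)   # set the continuation bit if more bytes follow
        if not n:
            return bytes(out)

wire = bytes([0x08]) + encode_varint(300)  # 0x08 = tag for field 1, wire type 0 (varint)
print(wire.hex())                          # 08ac02   -- 3 bytes, variable length
print(struct.pack("<i", 300).hex())        # 2c010000 -- fixed 4-byte little-endian layout
```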
This is still "env vars": it's easy to read /proc/*/environ to see the decrypted secrets from a different process. Versus an in-process-only secret fetch, where you'd need to scan the memory pages of the app, which is a bit harder - especially if you keep the credentials in memory in a scrambled format so a simple scan of process memory for "secret_prefix_" doesn't find them.
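For illustration, a sketch of that read on Linux (entries in /proc/<pid>/environ are NUL-separated; you need to be the same user or root):

```python
# Sketch: the env vars of another process are plainly readable from
# /proc/<pid>/environ by the same user or root on Linux.
def read_process_env(pid: int) -> dict[str, str]:
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read()
    pairs = (entry.split(b"=", 1) for entry in raw.split(b"\0") if b"=" in entry)
    return {k.decode(errors="replace"): v.decode(errors="replace") for k, v in pairs}
```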
If an attacker can read other processes' envs you've pretty much lost, as they're either:
1. Inside your process, which means they can see the decrypted values, or
2. Root, which means they can get into your process to see the decrypted values.
I'm not sure your average dev has a threat model that assumes in-memory scrambling, let alone leaked env vars. After all, we're talking about a world where the standard way to do it is to populate a file with the decrypted secrets and just leave it there. All the security is already kernel security.
I'm honestly not sure who dotenvx is aimed at.
- No one security-conscious is going to be cool with just making the ciphertext available publicly or even internally.
- Someone scrambling in-memory secrets isn't using dotenv to begin with; they're using SecretsManager and the like, and probably don't want to change those to now go through the filesystem. You also get less auditing, because all those secrets are bundled and you only know "they accessed the decryption key."
- And someone using dotenv for secrets doesn't have a threat-model where this meaningfully improves security.
In addition, if I'm not mistaken, child processes inherit the parent's env vars, so if your application forks or uses subcommands, you may be exposing the whole environment trove to third-party scripts, no root needed. Also, most vulnerabilities that enable execution of code will happily leak the env vars, no root access or "being inside the process" required (I know, code execution is technically "inside the process", but without requiring elevated privileges).
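A quick sketch of the inheritance point: by default a child process, including a third-party script, sees the parent's entire environment:

```python
# Sketch: a secret exported in the parent process is visible to any child.
import os
import subprocess

os.environ["API_SECRET"] = "hunter2"  # stands in for a decrypted secret
out = subprocess.run(["env"], capture_output=True, text=True).stdout
print("API_SECRET=hunter2" in out)    # True: the child inherited the whole environment
```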
I’m advocating people use something like SecretsManager, not this thing. In-memory only > env vars > secret files on disk.
I find env vars very precarious because harmless developer debug logging, actions like SSHing into a container and typing `env`, etc., can easily expose them.
A file on disk can be read by an attacker via a directory traversal bug.
It's much less likely for in-process-only secrets to be exposed by common mistakes/bugs.
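As a rough sketch of that in-process-only approach, assuming AWS Secrets Manager via boto3 (the secret id here is made up):

```python
# Sketch: fetch the secret at startup and keep it only in process memory,
# rather than exporting it as an env var or writing it to a file.
import boto3

def load_db_password(secret_id: str = "prod/db-password") -> str:  # hypothetical secret id
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id)["SecretString"]
```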
Having been in this exact position multiple times now (once quite successful, others not), you should probably consider it a wash.
Unless the company hits unicorn AND your shares become liquid (secondaries don't count—you generally won't be able to sell enough shares to make a meaningful dent), you'd make just as much or more at a FAANG firm with way less risk.
Of course, I say this while not at a FAANG firm, because I prefer startup type work.
> So you would get paid like at another company but get equity on top and it's not a good deal? How come?
If it were truly market rate (total comp, not just base salary) then sure, it's a good deal. How likely are you to find that at an early startup? The chance must be pretty close to zero. But if you find it, sure, it's good.
You'll still work harder and be more stressed, but it'll be a different learning experience, which is always nice.
A lot of it comes down to management/team quality. Do you want to spend an awful lot of time with these folks? Do you think you'll learn from each other? Do these folks seem to know what they're doing, and are they building a product that interests you? If you can say yes to most (all?) of those questions, then all in all it's probably a wash. If not, run.
Depends on the options available to the candidate. Someone joining a startup very early typically has the skill to get FAANG salaries with less stress and more free time. There are also hundreds or thousands of mid-size companies that pay very well nowadays; it's not just FAANG.
Yeah, but smaller startups might be more open to non-US applicants; FAANG and other more established companies don't seem to be interested in hiring abroad.
That's what makes the early startup scene the only thing available for some.
How come? Most large companies have big legal/HR departments that are very efficient at the whole visa application process. A small company won't have that expertise/staff. I mostly see startups being more concerned about the visa status of applicants.
Remote + non-US is a less welcome arrangement, so the hurdles are way higher because it doesn't fit the usual process. Startups, meanwhile, have no prior process anyway, so it's easier to convince 1-2 people than to change a whole system (I believe).
Most early stage companies turn out to be poor companies for employees. Long hours, toxic leadership, unclear roadmaps etc. Working at a small firm doesn't guarantee high quality.
We're down hundreds of officers though. And we don't have, and haven't had, a mayor interested in standing up a new system to replace the completely corrupt one we have.
(The latter part reinforcing your argument that we didn't try "depolicing" so much as, uh, "unpolicing"?)
The very next sentence highlights that the same problem existed before the pandemic and the police protests of 2020:
> Covid may have accelerated this trend, but attrition and hiring issues predate the pandemic. In the 2019 budget, Council approved over $700,000 for hiring incentives, citing the police department's difficulty filling positions.
Actually, the very first sentence in the article immediately refutes your claim -- what a bizarre source to 'back up' the argument that Seattle defunded the police:
> "Why has Seattle lost so many police officers?" The answer is not that the Seattle Police Department was defunded.
Yes, I misremembered it and was wrong, which I discovered by googling it. But the number of police is way down, so it had the same effect as defunding. Part of the reason for the reduction is that the Seattle City Council abused them by calling them murderers. The cops felt unsupported by the Council and unwanted, and they left.
(Most) GraphQL clients are optimized for relatively small/simple objects being returned, and you typically pay a cost for every single edge (not node) returned in a response / cached in memory.
It can quickly get to the point where your project is spending more time per frame processing GraphQL responses and looking data up from the cache than you spend rendering your UI.