Huh? Go has its tradeoffs, some people don't like it and that's fine. But nobody can deny that it is one of the most 'readable and reviewable' (the argument in question) mainstream languages.
The praise of Go didn't end with it being just readable, but also 'brutally pragmatic' and all that.
If Googlers think that Rust code is just as readable as Go, even though Rust obviously trades readability for other features, I'd be tempted to chalk that up to Googlers having a culture of being overeager to believe the tools they use are the best.
Why can't I deny that it is readable and reviewable? I don't think it is, basically at all. It gets evangelized as if it is but I have yet to see a compelling argument that can convince me. I've had the displeasure of having to dive into some codebases and I specifically hate it for that reason.
Well, I can't speak for you, of course. I also haven't seen the code you reviewed. And I won't try to evangelize you. In a nutshell, I'd say Go is readable and reviewable because its grammar consists of the C-style basis that nearly every imperative language has, and not much more. It's the lack of features that makes it very 'WYSIWYG'.
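To make the 'WYSIWYG' point concrete, here's a minimal sketch (the `parsePort` helper and its error messages are my own invention, not from any real codebase): every branch and failure path is spelled out at the call site, with no exceptions, overloading, or hidden control flow for a reviewer to chase.

```go
package main

import (
	"errors"
	"fmt"
)

// parsePort is a hypothetical helper. Note there's nothing hidden:
// no exceptions, no operator overloading, no implicit conversions.
func parsePort(s string) (int, error) {
	var n int
	if _, err := fmt.Sscanf(s, "%d", &n); err != nil {
		return 0, errors.New("not a number: " + s)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("out of range: " + s)
	}
	return n, nil
}

func main() {
	for _, s := range []string{"8080", "99999", "abc"} {
		n, err := parsePort(s)
		if err != nil { // the reviewer sees every error branch explicitly
			fmt.Println("error:", err)
			continue
		}
		fmt.Println("port:", n)
	}
}
```

Verbose, sure, but a reviewer can follow the control flow with zero knowledge of the rest of the codebase.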
Implementing merging as a one-keyword feature across all flavors of SQL is almost impossible.
It gets hairy when you have columns with composite types. E.g. depending on database, records can be JSON objects, protobufs, structs, or other composite types, like tuples.
It is possible to define merge semantics for each of these, but they vary, and the merge operation becomes verbose in order to handle the quirks of each underlying type.
Merging is also sensitive to NULL vs 0 vs a DEFAULT, plus in some databases, NULL is still allowed even when a column is marked NOT NULL.
You'd almost need a sub-language for the merging operation, specifying how to handle each corner case.
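To illustrate just the NULL-vs-zero corner case, here's a sketch in Go (the `Row` type and the "incoming wins unless NULL" policy are illustrative assumptions, not any particular database's semantics). A nil pointer models SQL NULL, which is distinct from the zero value:

```go
package main

import "fmt"

// Row models a record where a nil pointer means SQL NULL --
// distinct from the zero values "" and 0.
type Row struct {
	Name  *string
	Count *int
}

// merge applies one possible policy: take the incoming value
// unless it is NULL, in which case keep the existing one.
// Every composite type (JSON, protobufs, structs, tuples) would
// need its own clause here, which is why a real merge operation
// grows into a sub-language.
func merge(existing, incoming Row) Row {
	out := existing
	if incoming.Name != nil {
		out.Name = incoming.Name
	}
	if incoming.Count != nil {
		out.Count = incoming.Count
	}
	return out
}

func main() {
	name := "widget"
	zero := 0
	old := Row{Name: &name, Count: nil} // Count is NULL
	upd := Row{Name: nil, Count: &zero} // Count is 0, NOT NULL
	merged := merge(old, upd)
	fmt.Println(*merged.Name, *merged.Count) // widget 0
}
```

Even this toy version has to make a judgment call (is an incoming NULL "no change" or "set to NULL"?), and that's before composite types and DEFAULTs enter the picture.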
Some databases like ArangoDB (https://www.arangodb.com/) allow you to use Javascript instead of SQL.
However, using a weakly typed, Turing-complete language introduces the usual problems we know and love, such as infinite loops, runtime type errors, exceptions, and the like.
Personally, I'm looking forward to a WASM runtime for databases -- running WebAssembly on the database itself. This COULD be carefully designed to be statically checked and, possibly, to make it really hard to write runaway loops.
Many databases support other languages as well (e.g. PostgreSQL supports many, including Python, by default). One challenge is the lack of standardization. (SQL is a weak standard, but at least it is a standard.)
Weak typing: what about TypeScript?
Slow loops: yes, this is a problem. However, SQL (and even more so GraphQL) also has a problem of large results / operations spanning too many entries: during development the number of entries is fine, but not in production. Especially when indexes are missing, this is a problem in SQL too. (Endless loops are actually less of a problem than slow loops: endless loops are easier to detect during development.)
To process large results in a good way, pagination is often needed; best would be keyset pagination. What if a statement returned a "continuation" statement in addition to the result set? If the client wants to get more results (or process more entries), they would run the "continuation" statement.
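A minimal sketch of that idea in Go (all names here are illustrative; a real system would run something like `WHERE id > ? ORDER BY id LIMIT ?` instead of scanning a slice). Each page carries a continuation key the client feeds back to get the next batch:

```go
package main

import "fmt"

// Entry stands in for a table row with a unique, ordered key.
type Entry struct {
	ID   int
	Name string
}

// Page is one batch of results plus its continuation: the key
// after which the next fetch should resume.
type Page struct {
	Items   []Entry
	AfterID int  // continuation key for the next fetch
	More    bool // whether a continuation exists
}

// fetchPage implements keyset pagination over an in-memory
// "table", assumed sorted by ID.
func fetchPage(table []Entry, afterID, limit int) Page {
	var items []Entry
	for _, e := range table {
		if e.ID > afterID {
			items = append(items, e)
			if len(items) == limit {
				break
			}
		}
	}
	p := Page{Items: items}
	if len(items) > 0 {
		p.AfterID = items[len(items)-1].ID
		p.More = len(items) == limit
	}
	return p
}

func main() {
	table := []Entry{{1, "a"}, {2, "b"}, {3, "c"}, {4, "d"}, {5, "e"}}
	after := 0
	for {
		p := fetchPage(table, after, 2)
		fmt.Println(p.Items)
		if !p.More {
			break
		}
		after = p.AfterID // "run the continuation"
	}
}
```

Unlike OFFSET-based paging, resuming from a key stays cheap no matter how deep into the result set the client is, and each statement touches at most `limit` entries.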
Say a database doesn't provide SQL, but instead a set of low-level APIs (plus a ton of tools). Developers would then (be forced to) write properly modularized, versioned code on top of those APIs.
Gravity is infinite at the singularity (the middle of the black hole). Everything gravitates towards that point. Our best understanding is that no information can exist here.
Black holes "evaporate" over time -- by emitting Hawking radiation. This is probably where the information goes, in my layman understanding.
> Black holes "evaporate" over time -- by emitting Hawking radiation. This is probably where the information goes, in my layman understanding.
No, the Hawking radiation and evaporation are exactly what causes the problem. If black holes lasted forever, we could simply say "they have a structure inside that we can't detect, but that structure preserves the information; since it's past the event horizon, it will be, even in principle, forever beyond the reach of our understanding and experiment".
However, if black holes eventually disappear, you have something like: book => unknowable interior of the event horizon => something observable outside. The problem is that, per Hawking's discovery, the "something observable outside" is random thermal radiation, which by definition carries no information. Hence not just something unknowable, but a paradox (an inconsistency in the formal model).
Go solves 99% of the problem quite nicely. You rarely deref pointers explicitly in Go, because of automatic addressing and dereferencing. E.g. there's no arrow operator as in C; the dot derefs when necessary.
I would say `v := (p.)` could've been the deref operator, but what do I know.
That you don't deref pointers in Go is not what fixes this issue.
What fixes this issue is that, like most other languages which are not C, Go understands that "is a pointer" is a property of the type, not the name / value.
So even if it used C-style declarations, Go would say that `*int a, b` declares two pointers, where C's `int *a, b;` declares one pointer and one plain int.
> We will show, by induction, that [for all sets] of n horses, every horse in [each] set has the same color.
> Now [assume that] for all sets of n horses, every horse in [each] set has the same color.
QED.
This is just a tautology. No further analysis needed, really.
(If your proof includes your hypothesis as an assumption, then it must be a proof by contradiction.)
EDIT: Before you refute, read again carefully. The assumption _is_ the hypothesis. It is not existentially quantified or set in the base case. It is the entire, universally quantified hypothesis.
This is how proofs by induction work. You prove that something is true for a base case (n = 1), then you prove that if it is true for n, it has to be true for n + 1. Therefore it is true for any n.
The flaw is that the second part of the proof requires n >= 2.
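Concretely, here is where the inductive step silently uses n >= 2 (writing out the standard form of the horses argument):

```latex
% Inductive step of the "all horses are the same color" argument.
% Take a set of $n+1$ horses $H = \{h_1, \dots, h_{n+1}\}$ and form
% two sets of $n$ horses each:
\[
  A = \{h_1, \dots, h_n\}, \qquad B = \{h_2, \dots, h_{n+1}\}.
\]
% By the inductive hypothesis each of $A$ and $B$ is monochromatic,
% and the argument glues them together via their overlap:
\[
  A \cap B = \{h_2, \dots, h_n\}, \qquad |A \cap B| = n - 1.
\]
% For $n \ge 2$ the overlap is nonempty, so the shared horses force
% $A$ and $B$ (hence all of $H$) to be one color. But for $n = 1$
% (the step from 1 horse to 2), $A \cap B = \emptyset$: nothing
% links $h_1$'s color to $h_2$'s, and the induction breaks.
```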
I understand and agree, but read carefully. The assumption _is_ the (entire) hypothesis. It is not existentially quantified or established in the base case. It is the entire hypothesis.
You should read “n” as “some arbitrary n” in the quote you posted. It’s not an assumption for all n.
edit: sorry, you’re right that the wording is a bit sloppy. The “n” in the first quote is “for all n” and the n in the second quote is “some specific n” or “some arbitrary n”. They’re not meant to be the same statement. I don’t think it’s a very carefully written article.
The assumption is that the case holds for n, while the goal is to show that the case holds for n+1. It is not well worded enough to make clear the intent of making an assumption about modal possibility. However, to see it as begging the question requires imposing a statement of modal necessity that simply isn't there.
I thought the same thing as you at first, but you need to read more carefully.
The proof is showing that it is true for n=1, and then showing that if it is true for n (the part where we "suppose it is") then it is true for n+1, proving by induction that it is true for all n >= 1.
Yet, that assumption is not needed for the proof and can simply be removed. I think OP just wanted to say "what follows is the proof for the induction step".
Proofs by induction often involve showing that P(n) implies P(n+1). That is, that a statement's truth for n implies its truth for n+1. That's what's being done here, and it's a perfectly valid part of this type of proof.
That's only if you equate the recruiter to a minimum wage worker and yourself to their customer. There's a superiority bias in that rhetoric. The recruiter is a well-employed tech worker. And you're not their client or superior.
You can easily find the salaries of recruiters at FB. They're half of the base of an L3, with no RSUs. So, sure, recruiters are tech workers who are better paid than, e.g., the service staff, but let's not pretend there isn't a two-class system at FAANG.
Recruiters at FB seem to earn ~50-60% of the total compensation of Engineers at equivalent levels, doing a bit better at the bottom of the ladder (I don't see a single IC3 recruiter earning under $100k in the US; they'd have to be earning ~60k to be earning "half of the base of an L3 with no RSUs").
>Recruiters at FB seem to earn ~50-60% of the total compensation of Engineers at equivalent levels
...
>they'd have to be earning ~60k to be earning "half of the base of an L3 with no RSUs"
The people who email you are not L3s. They're actually often contractors. When they're not, they're definitely L1s (or maybe L2s, since I think L1s are actually service staff).
If they're contractors I can't "easily" find their salaries, since they aren't listed on levels.fyi (and other sources e.g. glassdoor are grossly unreliable). And, uh, I think the proper point of comparison there would be to line them up with the compensation of contractor devs; from what I hear they also earn substantially less than FTEs.
Personal opinion: Google carefully treads the gray lines of legality and morals/ethics. Facebook leadership, on the other hand, decided a long time ago that they care about legality only. They seem very proud of that decision, too.
Google might still have a better reputation in certain circles, but it is unclear which company is actually worse for the world at this point (by the way, if you have a link where somebody has actually tried to quantify it, it would be an interesting read).