
Console hardware is vastly underpowered compared to PCs and yet frequently competes with mid-tier graphics cards in terms of performance and graphics quality.

The reason is that game devs are able to optimize their games specifically for the GPUs in the console hardware. Google is presumably betting that if they get enough market share, devs will start to optimize for _their_ hardware, and reap similar rewards.


Also, unavailable to most people.

Yes, I suppose it was kind of nice when the internet was only available to the tech-savvy, to those who didn't mind maintaining a second job as a Unix administrator. But for people who just wanted to _write_, it was not welcoming.

Yes, you can go to wordpress.com and sign up for your own blog, but then you're just on the wordpress platform, and subject to their whims. If your plan works, then wordpress.com becomes so big that they're the bad guys now.

Facebook, Medium, and others are here because we, the tech world, didn't give the rest of the world any other options. We didn't settle on a standard, containerized server platform that was simple enough for users to drag "Wordpress.server" onto "SomeHostingProvider.com" and get a working server. The tech ceiling was _always_ too high, and it was our own hubris that we weren't willing to build a more welcoming environment.


I think the problem is not with being tech-savvy but with not wanting to spend anything. It isn't difficult to create a ready-made self-maintaining decentralized publishing tool in the form of a VPS image that people could run. But then people would need to pay for hosting the VPS (single dollars per month) and probably for the software, so that somebody maintains it (also single dollars per month).

I think people are too cheap for that and would rather throw their work into one of the black holes (like Facebook) than pony up those dollars.


Also, what happens when I'm not around to pay those peanuts? All gone? Not acceptable to most people.


This is a weird line to argue from... What guarantees that any platform you use (yes, even including Wordpress) won't delete your data or terminate your service agreement? Even more laughable: if it was unfairly terminated, how can you argue when you are dead?

If you want longevity -- and want a VPS -- just load up your VPS wallet (in the case of DO), pre-purchase your domain name a few years in advance, and sit back. That's the best you are going to get without paying somebody specifically. If you are popular enough, your stuff will be archived anyway.

But for heaven's sake, if you actually have a need for post-life longevity for your content, put it in a will. Plan for death.


Your data should be local and in your backups. It's then posted somewhere publicly when you want to share it.


Didn't people use to say that sort of thing about music? That people are too cheap, so they'll just pirate everything?

Maybe it's convenience that is required.


I agree that it's not money that is the obstacle. But I don't know if it's convenience either - there are platforms out there now that will let anyone host a website with less effort than signing up for Facebook and learning the platform.

Maybe the problem is that, unlike buying music, making a website requires people to be creative, and the majority of people are just not creative and don't wish to be.


I agree that convenience is required, but hosting has an inherent cost to it, which can't be ignored. Similarly, maintaining software (any software!) has a cost to it, which we do our best to ignore, but which in the long term needs to be taken into account, otherwise we end up giving our data to corporations which offer "FREE" hosting and software.


I don't recall it being hubris. What happened when Web browsers appeared is that there was a brief period when a university (usually you were at a university) would host your pages, and you could believe the decentralized dream (if you heard that part), but there was a sudden commercial gold rush, and motivations switched to greed (not hubris).

You didn't just want to see X happen; you wanted to be the one who got the money for X happening. And maybe that reduced to you wanting the money, with X merely a path to it, and the actual X didn't matter.

Also, there were relatively few people at the time who already understood the Internet, being online, or software development. Perhaps the majority of people pitching Web startups were new to all of it.

CS department culture never recovered from the gold rush, and a lot of the gold rush ideas were institutionalized.


This is a very interesting angle; care to expand upon it? Which gold rush ideas do you feel were institutionalized?


I've not fleshed out his angle on this, but the first thing that came to mind was the gamification of every social interaction; "ratio-ing" on Twitter, for example. There was a time when we measured threads by the level of social engagement (responses) rather than likes/shares, and it was a good thing to have hundreds of replies and sub-conversations.


The real reason this isn't available to most people relates to today's DDoS-supporting Internet.

Today everyone has to use a CDN to even try to defend against such attacks, and all a CDN does is bulk-filter the attack out while degrading the end-user transparency of the service. Under 'load', some websites have to load an active filter page and execute code on the client to authenticate that it's a valid client rather than an attacker.

The proper solution is to identify compromised devices and isolate them from the Internet. Hosts under attack would use a side channel to the ISPs routing the packets to ask them: "Please do not send anything from X to me for a bit, unless they satisfy you that a user is in control." The request should be 'signed' by an end-user key, authenticated by their ISP, and filtering should begin at the edge of that ISP. If that ISP feels it necessary, it too can send a request to its upstream, until this escalates to the backbones. Then it can press further back, down to the compromised node. That would allow infected end users to be quarantined, informed, and allowed to download security updates and perform some other limited website interactions (manufacturer websites for updated firmware, some after-market firmware/tool sites like OpenWRT/DD-WRT/Linux distros, etc.).
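
To make that concrete, here is a minimal sketch (in TypeScript, purely illustrative; no such protocol exists, and every field name here is my own invention) of what such a signed filter request might carry:

```typescript
// Hypothetical filter request a victim host would send over the side
// channel to its ISP. Entirely speculative -- a sketch of the idea above.
interface FilterRequest {
  victim: string;      // host asking for relief, e.g. "203.0.113.10"
  offender: string;    // source to suppress, e.g. "198.51.100.0/24"
  ttlSeconds: number;  // "for a bit" -- the filter expires automatically
  allowlist: string[]; // hosts a quarantined device may still reach
                       // (OS vendors, firmware sites like OpenWRT, etc.)
  signature: string;   // signed with the victim's end-user key; the ISP
                       // authenticates it before filtering at its edge
}

// Example request as described above (all values illustrative):
const request: FilterRequest = {
  victim: "203.0.113.10",
  offender: "198.51.100.0/24",
  ttlSeconds: 3600,
  allowlist: ["updates.example-os.org", "downloads.openwrt.org"],
  signature: "<base64 signature>",
};
console.log(JSON.stringify(request, null, 2));
```

Each ISP along the escalation path would verify the signature and either install the filter at its edge or forward the request upstream.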

Fix the DDoS issue, also fix the home upload bandwidth issue, and you too can host your own family photos/videos.


> fix the home upload bandwidth issue

The “home upload bandwidth issue” is “it's not a thing consumers demand, and we have business-class service for people who do have a need for it.”

I'm not sure what there is to solve...


> also fix the home upload bandwidth issue, and you too can host your own family photos/videos.

Not possible without investing literally dozens of billions of dollars into laying fiber. And no matter where you look, actual physical infrastructure like roads, bridges, and public transport is outright decaying, so where should that money come from? And where in the world do enough actual digging crews exist to lay all that fiber?

DSL is simply physically unable to do symmetric high speeds, and for coax/cable-TV internet there always remains the problem of oversubscription.

This is the core fuck-up of our time.


Is there any other physical reason for asymmetric speeds besides the asymmetric spectrum allocation, either for coax or ADSL?

People don't usually use much upload and providers don't want you to upload, so you get lower upload speeds vs. download speeds, even in hardware and standards.


I assume you are talking about the US. Seems like a pretty reasonable investment if you cut some military funding or put a small tax on the richest Americans?


IMHO, there is no shortage of ways the US government's spending allocation could be improved; doesn't mean any of them are politically viable.


Or maybe, ask those consumers to pay $200/month for the Internet that they use, instead of stealing other people's money?


Creating essential infrastructure is stealing but having military spending higher than the next seven countries COMBINED is business as usual.


Well, the United States Constitution obligates the government to do a fair number of things. Providing IT infrastructure for people wanting to self-host family photo albums is not one of those things.

There is a mechanism for amending the constitution of the United States if enough people want to elect representatives to force other people to pay for their upload bandwidth.

Military spending is a different bucket. If you object to Military Spending (and I do, as you appear to do), take it up over at the counter of not-false-equivalences.


I am aware of the reality, I was just making a facetious comment.

I'm not in the US. We have our own problems here in Australia. We did all pay for IT infrastructure but the government completely fucked it up as expected.


The whole Telstra privatisation, split-up-ification, semi-not-really privatisation, going-public-with-monopolist-rights thing was a little weird to watch from over here. But, hey! At least some "very important people" made a lot of money!


If you get your own domain for that wordpress.com hosted site, you're free to move to other hosting if you need to.

One option is to learn enough to do it yourself on a barebones server, but that's not the only option. A google search for "wordpress hosting" turns up several turnkey solutions that a person without a lot of tech background could move to if they were sufficiently displeased with something wordpress.com did.

I think this is a good model for the hosting/platform piece of the puzzle. I'm not sure anyone has a great answer for discoverability yet.


So I have been putting together a way for people to more easily make their own blogging platform. It would kind of mimic a blog or social media platform, but since everything is committed to a repository using the JAMstack, it could easily be converted to a fully managed platform. Everything is owned by the end user; this is only providing a recipe for people to use. Any feedback would be wonderful: https://your-media.netlify.com/post/make-your-own-media/

I will also mention that https://www.stackbit.com/ is doing basically the same thing, but more from a "make life easier for website designers" perspective.


The problem these platforms solve is more discoverability than it is actual hosting.

A blog hosted somewhere on some user-owned server is not going to be easily discovered.


It never was too high; rather, the bar for what constitutes an education, in the US at least, has been far too low for far too long.


>The tech ceiling was _always_ too high, and it was our own hubris that we weren't willing to build a more welcoming environment.

I hate this meme and other criticism like it. It belittles the work accomplished (asserting that nothing was done), ascribes intent from that perceived outcome (that nobody even tried), and ascribes motivation for that intent (hubris).

I haven't found a better way to demoralize people from trying in the future than this sort of quip.


> The tech ceiling was _always_ too high, and it was our own hubris that we weren't willing to build a more welcoming environment.

Building a more welcoming environment is a mountain of work, and most people would prefer to get paid for it.

And monetizing your work becomes a lot easier when you are a company selling an end-to-end product than when you're a lone dev working off donations.


I seem to remember being able to use something like GeoCities, or cheap/free /~user/ hosts with FrontPage or similar, to generate badly formed HTML that let me share a lot of ideas before I was deeply familiar with programming and specialty knowledge.

Yeah, I had to learn about FTP, and not much more. There was a lot of availability, and diversity without a lot of overhead to it.


Well, we already have numerous explanations of why things are the way they are. Is there any proposal for changing the status quo that people can get behind? That is always the stumbling block. Only a handful of people seem to be bold enough to let their convictions guide them.


Jesus, man, why are the top comments on posts like this always like this? Is everyone on HN that apathetic and jaded?


While beautiful, most of these pieces are primarily ceremonial or intended for generals who would rarely involve themselves on the battlefield†.

Almost all of these pieces date from after the unification of Japan under the Tokugawa shogunate (c. 1600). From 1600 to 1850 Japan experienced a stable period marked by very little real armed conflict. During this time, the samurai transitioned from soldiers to what were effectively mid-level bureaucrats. However, unlike most bureaucrats, they managed to retain all of the trappings of a martial lifestyle, including ornate armor, beautiful swords, and the occasional mortal duel. It was during this time of relative peace that these (sometimes ridiculous) fashion pieces developed, somewhat complicated by the tradition of incorporating pieces of much, much older helmets into the "core" of the helmets (one of the helmets in the OP has a core dating from the 14th century, but was significantly embellished later on).

†This is generally true of what arms and armor have survived from around the world. The stuff that was actually used rusted away long ago; the highest chance for survival was to have been so valuable that no one dared to actually take it onto a battlefield.


> and the occasional mortal duel.

Duels under the Tokugawa were forbidden and punishable by the death of both opponents. The only fighting samurai could see was the terrorizing of unarmed peasants.

Samurai were not warriors in the European sense but more like mob enforcers: good for terrorizing peasants, not really fit for fighting in any military sense.

During the Meiji era, when peasants got professional military training and leadership, the samurai became toothless.

[BUSHIDO: WAY OF TOTAL BULLSHIT] https://www.tofugu.com/japan/bushido/

[1] https://en.wikipedia.org/wiki/Siege_of_Kumamoto_Castle

[2] https://www.historynet.com/satsuma-rebellion-satsuma-clan-sa...


That was during the Shogunate. Before the unification, the samurai were involved in actual battles in the field.


Mostly using arrows, not swords. But the yumi is an inferior weapon compared to the reflex composite bow used by Chinese, Korean, Mongol, or Turkic soldiers.

Also, if you look at any Japanese castle (Himeji was well preserved by US Bomber Command for navigational purposes), you will quickly realize that any continental army would take at most a week to dry its moats and dig mines under its walls. Fortunately, Japan never saw an invading army on its soil.

[1] https://en.m.wikipedia.org/wiki/Yumi

[2] https://en.m.wikipedia.org/wiki/Composite_bow

This samurai/bushido hype is blown way out of proportion and simply untrue.

Meanwhile, the real history of Asia is full of military classes of exceptional value: the Indian Rajputs, the Islamic Gunpowder Empires, the Malays, the Mongols, to name just a few.

https://en.m.wikipedia.org/wiki/Gunpowder_Empires

https://en.m.wikipedia.org/wiki/Rajput

https://en.m.wikipedia.org/wiki/Malacca_Sultanate

https://en.m.wikipedia.org/wiki/Qing_dynasty

https://en.m.wikipedia.org/wiki/Hwarang


I am not going to compare medieval continental technology to medieval Japan's, because that would be against the spirit of comparison. Japan was a pretty closed society even when it traded with the continent. While it is true that the continent saw cutting-edge tech as a result of unbalanced battle situations, we can surely appreciate the Japanese pieces for their aesthetic vibe.

If Japan had suddenly been on the continent, I am sure it would have had to compete and modify its war society like the rest did. But the root aesthetics would have been the same.


This is the use case they highlighted in the keynote. It doesn't appear to be a system designed for shipped games.


No, in fact one of the bug comments refutes this sarcasm:

> The default driver on the distribution we support is broken.

They do test the default driver on the distributions they support. The tests show that the driver is unacceptably unstable. It may work better on other distros/versions, but they don't have time to test those. They're happy for others to help out in testing these unsupported combinations, however:

> Again, if someone wants to spend the time to test thoroughly to narrow down the blacklist, we will accept patches.


There's a decent argument that current desktop OSes are a poor man's OS. The permissive-by-default approach of traditional OSes is deeply problematic, and attempts at reining that in (the Mac App Store sandbox, capabilities work on *nix) have largely failed to gain any traction.

It's certainly not elegant that we build a proper security model from within a browser rather than an OS, but it might be the only practical approach (short of some large-scale migration to iOS/Android/Fuchsia etc., which seems... unlikely).


> There's a decent argument that current desktop OSes are a poor man's OS. The permissive-by-default approach of traditional OSes is deeply problematic, and attempts at reining that in (the Mac App Store sandbox, capabilities work on *nix) have largely failed to gain any traction.

What if they've failed because they're a poor idea that damages the reason why computers have become a ubiquitous tool and a driver of innovation? In Apple's little golden garden, Linux, Chrome, and hundreds of other things you've come to understand as required features of an OS would not exist. If Microsoft had banned competing browsers like Apple did, we'd never have dug out of the cesspool of the IE6 internet.

And to build such innovative and updated software, you NEED the ability to modify parts of the system, not a sandbox.

(Disclaimer: This does not mean the security approach does not need to be updated. Sandboxes aren't a general solution though.)


Some perspective from the interviewer's side may help here:

- A Google interviewer's (and I would assume any interviewer's) primary goal is to come out of the interview with enough confidence to give a positive or negative score. If they sit down to write feedback and have to give a neutral score, the interview wasn't productive. This means that the interviewer is just as eager to find evidence for a positive score as a negative one -- there isn't an incentive to "gotcha" with cheap or tricky questions.

- Doing interviews at Google is volunteer work. You are not interacting with a professional interviewer, you're interacting with someone whose day job is being an engineer. They don't have an evil agenda; they are doing this because they want to help Google hire the best candidates, and by inference make sure their future coworkers are good people to work with.

- Interviewers overwhelmingly _want_ their candidates to succeed. It's a true joy when I have a candidate who glides through a question (or finds a solution that was even better than mine). When candidates struggle, it's not a pleasant experience for the interviewer either.

- In the end, the point of technical interviews is to avoid the terrible experience that is working with an incompetent or uncooperative teammate. Interviewers are trying to find people that (a) can work well with others and (b) can get the work done.

- The system is _highly_ prejudiced towards suppressing false positives. This is the right decision, but it comes at the cost of a high rate of false negatives. Were I or any of my colleagues to re-interview for our jobs, I would expect about a 60% hire rate. This is not even taking into account the constant ebb and flow of hiring demand. Sometimes there just isn't any headcount. And sometimes you just happen to get questions that you don't click with. This is also the reason that recruiters are so eager to bring you back to interview 6 months later.

- Recruiters and interviewers have very different incentives. Recruiters want to maximize the number of people they get hired; interviewers want to hire people they want to work with. This can lead to behavior that seems schizophrenic from the outside: the recruitment side of the pipeline constantly pestering people to interview, but once the candidate enters the interviewing pipeline the process is slow, deliberate, and careful.


>The system is _highly_ prejudiced towards suppressing false positives. This is the right decision,

This is textbook Google propaganda that has been repeated at least since I last worked there 5ish years ago. It's bullshit though because the ratio of competent to incompetent engineers was the same as at FB, MSFT and NFLX (with the latter tending to prune the fastest).

Just because you create a system that spits out a lot of false negatives doesn't mean it has done anything to reduce false positives. This should be immediately obvious given that the relationship between the questions asked in G interviews and actual software engineering is nonexistent.

Don't repeat the trope that Google's hiring system is actually better at eliminating false positives. There is no evidence of it and if it truly was better, everyone would adopt it in a heartbeat and we wouldn't be working with bad engineers who spent a few months on leetcode to get into jobs way over their heads.

The reason Googlers never care to critically question the sorting hat is because it picked them.


"The reason Googlers never care to critically question the sorting hat is because it picked them."

I think this is a truism about the quality of most organizations - people who thrive in a specific organization coalesce within it and reinforce the qualities of the organization that are specific to them.

I presume the fact that Google uses non-professional interviewers makes the recruitment process more about cultural alignment than it strictly needs to be to gauge a candidate's capability to add value in a software engineering process.


All the FAANGs operate under a very similar hiring model, because they all get far more applicants than they can hire and can afford to have a high rate of false negatives. "Everybody" can't adopt it even if they wanted to, because your average business doesn't get a million applications a year.


That isn't really what the comment you're responding to was about. That comment was specifically about the fact that Google purports to target eliminating false positives, and the trade-off is accepting a high rate of false negatives. In fact there is not necessarily a relationship between the two, or at least not one that Google's process measures.


> "Everybody" can't adopt it

Oh, but they can and do. The difference between 100 and 1000 resumes is not material—both too many to look at. What I've found is those in the second bracket simply throw out those without a degree and then cargo-cult common practice.


> There is no evidence of it and if it truly was better

Year after year the Googlegeist survey finds that one of the things Googlers most enjoy about working at Google is their fellow employees

> if it truly was better, everyone would adopt it

Google has an abundance of money and an abundance of applicants who would like to work there. Companies with fewer applicants per position or lower salaries relative to the industry average may need to be more open to false positives if they want to be able to hire anyone at all. Smaller companies also have the advantage that they can usually fire people more easily than larger companies, which helps lower the cost of false positives for them.


>Year after year the Googlegeist survey finds that one of the things Googlers most enjoy about working at Google is their fellow employees

Hiring has very little to do with that. Perf review, feedback mechanisms, and work environment are orders of magnitude more critical to that. I've worked two startups with completely different hiring processes from Google and the other employees were amazing to work with there too. The key is feedback to correct issues and a quick PIP/fire process for folks not cutting it.


>This is textbook Google propaganda that has been repeated at least since I last worked there 5ish years ago. It's bullshit though because the ratio of competent to incompetent engineers was the same as at FB, MSFT and NFLX (with the latter tending to prune the fastest).

But those companies use hiring practices that approximate, to a high degree, what Google does. Facebook and Netflix certainly do, and Microsoft has a high enough rate of bad hires that you are required to re-interview to switch teams, so that high-performing teams can keep rejecting bad candidates who are already at Microsoft.


I don't work for Google, so I don't know how much this applies to you folks. In my experience, while what you are describing is ideally correct, it's often just that -- an ideal.

For example, you'll have people insisting (and consciously agreeing) that their objective is to come out of the interview with enough evidence to give a positive or negative score. In the back of their minds, though, they will often be projecting their own insecurities -- about their expertise, about their career, about their job, about their team. Halfway through the interview, things end up being about something else altogether, like interviewers trying to reassure themselves that they're better than who they're interviewing (it's especially hard not to fall into this if it's been years since you last had to implement a red-black tree and you're interviewing a fresh graduate who dreams this stuff in their sleep).

It's very hard to get past these things. I struggle with them every time I interview someone, and it's very hard to know when to chalk it up to "the system" and when to chalk it up to your own baggage. Pretending that it's only the former only perpetuates this stuff -- and empowers the ones who actively enjoy abusing candidates and making them feel like crap just for the heck of it. Which is very common everywhere -- including, from what I've heard among my peers, at Google.

So far, the most relevant compass I've found for these things is made out of two questions:

1. If I were a candidate and had gone through this interview, how would I feel about it?

2. If I were to go through this exact interview today, would I still get hired?

If the answer to #1 isn't too good, there's probably some individual-level things you can change, but if the answer to #2 is bad, the problems tend to be more systemic in nature.


> The system is _highly_ prejudiced towards suppressing false positives

I wonder if the algorithm-centric interview style at Google can really achieve that. From my experience, algorithm-centric questions plus whiteboard coding are biased towards academic people. (Maybe that's fine for Google.) However, the way to crack that kind of interview is really just to practice like hell on leetcode. Just take a minute and think: who is most motivated to do that? Good, experienced engineers have no trouble finding jobs in the Bay Area, so why would they waste their time on leetcode for skills that are mostly going to be useless in real work? New grads and engineers who have trouble finding jobs are the most likely to spend a hell of a lot of time on leetcode. I think the interview style at Google is in fact increasing false positives instead of suppressing them. It also has a high false-negative rate, for sure.

There are tons of other ways of doing interviews in which the interviewer gets many more, and more relevant, signals from candidates while keeping the pressure on candidates low and not wasting their time, but Google is not doing them: asking practical questions, letting candidates write and test their code on their own computers, having a debug session, etc.


I'm a UX Engineer at Google. The tasks we ask you to complete in an interview are very practical - sketching out the same kinds of UIs that you might build in the real world.

The impractical part is that you'll probably be coding in a Google Doc rather than a text editor. It can be a bit disorienting.

I've also interviewed at Airbnb, where I was asked to code in Codepen for UI and in Node for algorithms. I felt more comfortable in a more realistic coding environment, but one of their computers crashed mid-interview and the other's network access was broken. Coding in a doc and hand-waving when necessary is better than working in a more realistic environment if the hardware isn't reliable. (Realistic doesn't necessarily mean unstable, but if you're expecting candidates to write working code in a fixed period of time, you need to make sure your communal interview machines are well maintained.)


Do you (and the company) try to find new ways to interview such that e.g. mid-career candidates don't need to practice in advance?

The current interview process at Google and probably at other companies as well seems to be frozen in time - one needs to be a fresh grad or prepare weeks/months in advance or be "into" competitive/sports programming, which in most cases has nothing to do with the daily job.

So no plans to have a fresh look on this?

Or maybe you keep these practices (and other companies follow) so that it is harder for engineers to change jobs easily?


I’m going to write a blog post about it one day.

I believe it is inappropriate to ask an engineer to prepare in advance for interviews.

Last time, I got contacted by their recruiter and was sent links to coding websites. I replied, "Great for someone just out of college."

Google, you are a boring company with an insane interview process. When I worked there 8 years ago, I met many people who thought passing the interview made them better than others. I regret I didn't tell them that they should check their heads.


I got interviewed by Google twice, both times started by a direct invitation from their HR team, as I never applied to Google. Naturally, I bombed both times, as I am not the PhD kind of developer they are after.

Every single time, their recruiters were telling me how wonderful my CV was and how they definitely wanted to have me there, naturally with a selection process totally unrelated to the kind of positions I was applying for.

The third time I got a direct invitation from their HR team, I made it clear I wasn't interested if it was going to be the same old way again. Never got contacted again.


A recruiter's job is to get you there at any cost. I've been told before that cool teams A, B, and C wanted to talk to me, only to find that some boring team D was interviewing me.

Google stands on feet of clay.


> This means that the interviewer is just as eager to find evidence for a positive score as a negative one -- there isn't an incentive to "gotcha" with cheap or tricky questions.

This is only true if the interviewer is uninterested in what happens after the hiring process. If they want to build a reputation for doing great interviews that produce more good hires than bad hires, then there's an incentive to be cautious, and that caution could well manifest as trying to catch out anyone who might be 'gaming' the interview process. The false positives reflect badly on the interviewer; the false negatives don't, because they might have been real negatives.


Such a feedback loop -- of identifying which interviewers give the "most accurate" scores -- doesn't exist. Nor is it clear how you would build such a system (how do you quantify a "bad hire" or "good hire" in such a way that isn't lost in the noise?). Interviewers are trusted to do the best job they can.

Remember, interviewing is volunteer work, not something that will advance your career. The results of the interviews and committee deliberation are confidential, so there's no way to gain a reputation for being a "great interviewer".


Ah, but you are woefully naive if you think no interviewers slip in who enjoy watching people struggle with problems so they can stroke their own egos. There are also the ones who have seen the quality of engineers decline significantly over the last 6 years of massive expansion and just want to gatekeep.

One of the major flaws in Google's process is assuming that the engineers are incentivized to find good hires.


I hope that at least engineers are given courses in "how to be a good interviewer".

It's not a natural skill, dare I say it, especially for an engineer.


> Nor is it clear how you would build such a system (how do you quantify a "bad hire" or "good hire" in such a way that isn't lost in the noise?).

Sounds like a good interview question.


Yes and No. You want to build your reputation as a good interviewer, but that doesn't only mean you are tough and let only amazing candidates pass. That also means that you are usually aligned with the interview committee, and if you constantly are a NO when 90% of the committee is a YES and the candidate ends up being hired, then you'll end up building this reputation of being too tough or just not getting the right signals.

I've done 300+ interviews at Uber where the process is somewhat similar to Google, and OP's points are true. As an interviewer all you really want is get good signals either good or bad. And yes, an interview is much nicer when the candidate is doing great.


It doesn't work that way. At Google, the only people who can see your ratings are the people directly involved in that candidate's hiring process. As a hiring manager, you do learn which of your reports/fellow interviewers take a tough line when scoring, but that doesn't make them great interviewers and doesn't factor at all at performance review time.

Source: I work & hire at Google. Opinions are my own.


I don't think that interviews really generate reputation in a large company... First, multiple people interview each candidate, so credit/blame is always distributed. Second, and more important, ain't nobody got time for that.


I agree that everyone has good intentions, but at the end of the day these interviews can be gamed really easily.

Before interviewing at Google I spent ~3 weeks doing leetcode style problems on a whiteboard I bought just for this purpose. Did not make me any better as a SWE, but definitely helped me clear my interviews. Without the practice I would have failed my interviews.

Having said that, I don't think I have any better alternatives; any interview process is ultimately going to be game-able in some manner.


I was confused about interviews at big companies before, especially since you sometimes have interviewers who are not from the hiring team. I was also confused about how leetcode questions are used, and sometimes felt that justifications for hiring or not hiring were not based on evidence. Then one day I changed my view. I now see interviews as a way to evaluate how much candidates want the job, and their effort in trying to achieve something. If someone can invest time in leetcode, they can definitely learn and do well at any task. We are humans, and we can improve. So the interview does its job. Nothing is perfect, of course.


The issue with Leetcode and HackerRank, as I see it, is that you're re-proving to various companies, each time and each interview, that you know how to code. However, the only alternative to this seems to be an SAT for programmers, which isn't better.


To be honest, coding is not difficult. Also, many jobs do not require 'that' much low-level knowledge anyway. Especially nowadays, for most positions you don't need to know that re-ordering lines of code can affect the cache; you use a hash table whenever possible and do not need to implement your own data structures. If you do need something in your work that you currently don't know, googling and reading will definitely teach you. Programming is not a special power only a few people can have. Actually, if you are consistently learning, you will probably do well at any task. If people are not willing to put some time into getting the job they want, maybe they don't want the job enough.


Aren't recruiters paid a commission for successful hires? Also, do interviewees who choose to interview a second time have a higher pass rate?

Consider their financial incentives first before ascribing some altruistic motives.


The engineers doing the interviews certainly have no financial incentives either way... Recruiters make contact, do a preliminary "do you have a pulse" conversation, and then shepherd the process, but they don't have input into the decisions along the way. Their incentive is thus to find great candidates who they think will make it through the process.


I think that depends on whether you're talking about internal or external recruiters. At least everywhere I've worked, our in-house recruiters don't get paid a bonus when they hire someone; it's just their job. If you're working with a recruiter who doesn't work for the company hiring you, that person may have a more direct financial stake in your employment.


If you're not a professional interviewer, or at least trained, you should not be interviewing.

At British Telecom you had to pass a 3-day course before you were allowed to sit on internal review boards.


You mean if you work at google you can get out of interviewing duty? That is actually a nice perk if they can really afford it.


+1. This is exactly my experience doing 300+ interviews at Uber.


I think it's a year later now, for a return interview?


I swear every time I read things written by people at Google my opinion of the company drops.

Mostly I just can't picture people from companies like Amazon or MS posting such stuff.


This article is just a recapitulation of the old pure functional programming vs. imperative programming argument but wrapped up in new packaging. Most of these issues, such as needing to validate types at codebase boundaries, functions that have side effects, and unsound static type systems, could just as easily be leveled at most imperative languages (C++, Java).

So yeah, go use a proper functional language (many of which compile to JS!). Complaining that an imperative language isn't a soundly-typed functional language is tautologically true I guess, but who cares?


It's not really, though. Because even pure FP folks read this article and say, "This is just nonsense and ill-considered propaganda."

For example, the idea that you can't trust Typescript because it might call untyped code but you CAN trust Purescript is just wrong. It's not even just wrong, it's utterly misrepresentative of what actually happens in Elm and Purescript. It's misinformation.


Could you elaborate? Why is it wrong/misinformation?


Not OP, but I'd guess the idea is that calling untyped code in TS is basically Pushing the Big Red Button, or Launching the Nukes™. When doing that in TS, you do it knowing it's your own responsibility to test and establish trust in the code. And Elm and PureScript are the same. In other words, Elm and PureScript won't save you from screwing it up the way you can with TS.

All pure/statically typed languages have escape hatches. It has been leveraged as a counter-argument to using them. But but but, they will say, Haskell has unsafePerformIO!!! Well, yeah, sure it does. You can subvert the borrow checker of Rust, too.

But you have to do it explicitly. You can grep your code base for "unsafe" (literally). You can lint for that. You can code review for that. You can document it. And that's what makes it OKish. In a way, types are like big fat oven mitts, and you're the lead programmer of a bakery, where you pop buns in the oven all day. You can discard the oven mitts for a time, though, and do whatever delicate work you need your actual fingers for. Just don't touch the hot stuff, lest you get yourself burned!
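
To make the "greppable escape hatch" point concrete, here's a minimal TypeScript sketch (the `unsafeCast` helper is my own illustration, not a standard API): instead of scattering `as any` across the code base, every boundary crossing is funneled through one loudly named function that you can grep, lint, and code review for.

```typescript
// Hypothetical helper: the single place where typed code meets untyped
// code. Grep for "unsafeCast" to find every escape hatch in the code base.
function unsafeCast<T>(value: unknown): T {
  // No runtime check here -- the caller takes responsibility, exactly like
  // unsafePerformIO in Haskell or an `unsafe` block in Rust.
  return value as T;
}

interface User {
  id: number;
  name: string;
}

// Data from an untyped source (e.g. JSON.parse) enters through the hatch:
const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
const user = unsafeCast<User>(raw); // explicit, greppable, reviewable
console.log(user.name);
```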


Purescript code calls into untyped code via its FFI for expediency or async integration constantly, for example.


No, the author is showing off how smart they are, and in an unprofessional and nasty manner to boot.

It's kind of you to gift the article with some genuinely thoughtful conclusions, but you're doing all the work there, not the original.

As the grandparent says, it's easy to poke holes in something, especially when disregarding important requirements that influenced its design. The fact that the OP can't point to any implementation of the "right" way to do things is telling.


I mean, all the arguments supporting my conclusions are their arguments. If someone spends 10 pages giving you facts and anecdotes and other types of data, and then uses them to support a "dumb" conclusion, they've still done useful work. They don't realize what exactly their data proved, but they did do the work required to prove something. Those facts and anecdotes support a conclusion, whether or not they feed it to you at the end/in the abstract.

And, that being said, I think they're right about the conclusion, too. A shorter way to say "Protocol Buffers are a bad choice of common ESB-bus format, even though being an ESB-bus format is the primary thing they're for and what everyone tries to use them for" is "Protocol Buffers are a wrong design." If something doesn't work when used to do the thing it's advertised to do, then it's broken, even if it can do something else.

The only difference between "Protobuffers Are Wrong" and my conclusion is that I'm making the implicit context of their argument explicit. Read between the lines of their argument—they are talking about the use of protocol buffers (specifically, gRPC) in an ESB-bus common-format scenario. None of their arguments make sense if they aren't.


How is the author showing off exactly? Because he said "coproduct" and "Prism" (both of which have a more well-defined meaning than protobufs, btw, lmao)?


The poster might be being kind, but their conclusions are spot on.


So Citibank and NTP will need to change their domain structure. That's okay: those are confusing structures.

This is a tradeoff -- continuing to support already-confusing differences is not worth the loss of the ease-of-use gain referenced in the grandparent.


It seems like the Internet is moving more to a "centralized" design where certain actors have decided "well, here's how we're going to do things now, deal with it".

The golden age of the Internet is already dead, guys. We're unfortunately over the hump.


EXACTLY!

