We're probably closer to the beginning than the end. Look at the numbers and you'll see things are worse than they were a year ago, in terms of cases and deaths. Imagine if we didn't have the vaccines.
> It just seems harder to meet people _in general_ than it did even two years ago. Like where is everybody? Does everyone stay home literally all the time now?
The pandemic is probably a bigger deal to others than it is to you.
From what I'm seeing it's kind of a chicken and egg problem. A lot of people are happy/eager to return to normal, but there aren't a lot of events, there are still restrictions in place, and people are no longer used to seeing people as much. I include myself in that. I'd like to go to events, but I don't really spend much time looking for them, because I assume said events aren't happening.
> I’m gonna go against the HN grain here and, despite having been fully for remote work for the first year or so of the pandemic, come out and say that life has really gotten a lot more repetitive and frankly disappointing since covid and the death of the office. I kind of miss meeting people in the office.
I'm sorry you're having a tough time with it. That sounds really unpleasant.
> My company is hybrid at the moment and I’m actually writing this from the very empty office. Things just aren’t the same.
Ignoring the pandemic, it sounds like the people at your company just don't agree that in-person work is necessary for them to get what they need out of their employment. The issue is either side attempting to force its preferred working conditions on the other. I.e., the solution isn't to make all of the people who don't want to be in an office come in again, but for you to find companies/circles with people who share your interests. Being around people who don't want to be around you is unpleasant for both sides.
The thing with the needles is crazy scary - I've been seeing a lot of them recently in Seattle, and not only downtown.
Just the other day I visited Capitol Hill around 15th Avenue: an entire park is full of tents, and a person defecated in an alley by Kaiser Permanente in broad daylight. Reminded me of the Tenderloin district...
Feels like the city is going downhill - especially if you have family, I'd stay away.
(Unrelated to this post) Hello! I've sent two emails to hn@ for help with my account over the past few weeks. Is there a better way for me to contact you/whoever's on the other end of that email for help?
> Managed services offer big benefits over software.
TF can be used as a managed service.
> Managed service providers offer big benefits over software. With CF and AWS support, help with problems are a support ticket away.
The same is true with TF, except 100000% better unless you're paying boatloads of money for higher tiered support.
> I only run workloads on AWS, so the CF syntax, specs and docs unlocks endless first party features.
CF syntax is an abomination. Many of CF's constraints are dogmatic and unhelpful.
> I have seen Terraform turn into a tire-fire of migrations from state files to Terraform enterprise to Atlantis that took an entire DevOps team to care for.
CF generally takes an entire DevOps team to care for, for any substantial project.
Sure, but I've never seen that myself. Where TF was used, it was always self-managed infrastructure at best.
> The same is true with TF, except 100000% better unless you're paying boatloads of money for higher tiered support.
Again, all the places I've worked had enterprise support and even a rep assigned. I think I only used support for CF early on; I don't know if it was buggier back then or I just understood it better and didn't run into issues with it.
> CF syntax is an abomination. Many of CF's constraints are dogmatic and unhelpful.
I would agree with you if you were talking about JSON, but since they introduced YAML it is actually better than HCL. One great thing about YAML is that it can be easily generated programmatically without using templates. Things like Troposphere make it even better.
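To make the "generate it programmatically" point concrete, here's a minimal sketch in plain Python: build the template as ordinary data structures and serialize it (CF accepts JSON as well as YAML; Troposphere essentially wraps this pattern with typed resource classes and validation). The `ArtifactBucket` resource and its names are hypothetical, just for illustration.

```python
import json

def make_template(env: str) -> dict:
    # Build a CloudFormation template as plain Python data.
    # "ArtifactBucket" and the bucket naming scheme are made up for the example.
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ArtifactBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": f"artifacts-{env}",
                    "Tags": [{"Key": "env", "Value": env}],
                },
            }
        },
    }

# Serialize for deployment; no string templating involved.
print(json.dumps(make_template("staging"), indent=2))
```

The appeal is that loops, conditionals, and helper functions replace template languages entirely; Troposphere adds type checking on top of the same idea.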
> CF generally takes an entire DevOps team to care for, for any substantial project.
Over nearly 10 years of experience, I've never seen that be the case. I'm currently at a place that has an interesting approach: you're responsible for deployment of your app, so you can use whatever you want, but you're responsible for it.
So now I'm working with both. And IMO I see a lot of resources that aren't being cleaned up (because there's no console page like CF has, people often forget to deprovision stuff), and I'm also seeing bugs, like TF needing to be run twice (the last failure I saw was it trying to set tags on a resource that wasn't fully created yet).
There are also situations where CF is just plain better. I mentioned in another comment how I managed to get the Datadog integration through a single CF file deployed through a StackSet (this basically ensured that any new account is properly configured). If I had used TF for this, I would likely have had to write some kind of service that listens for events from Control Tower whenever a new account is added to an OU, then runs Terraform to configure resources on our side and makes an API call to DD to configure it to use them.
All I did was write code that generated CF via Troposphere and deploy it to a StackSet in a master account, once.
Right, your post is mostly "I like the thing that I've used, and I do not like the thing I haven't used". They're apples and different apples.
> Again, all places I worked had enterprise support and even rep assigned
So, again, you've worked at places that were deeply invested in CF workflows.
> but since they introduced YAML it is actually better than HCL. One great thing about YAML is that it can be easily generated programmatically without using templates.
Respectfully, this is the first "YAML is good" post I think I've ever seen.
> Over nearly 10 years of experience, I've never seen that be the case. I'm currently at a place that has an interesting approach: you're responsible for deployment of your app, so you can use whatever you want, but you're responsible for it.
I'd love to hear more about this.
> And IMO I see a lot of resources that aren't being cleaned up (because there's no console page like CF has, people often forget to deprovision stuff), and I'm also seeing bugs, like TF needing to be run twice (the last failure I saw was it trying to set tags on a resource that wasn't fully created yet).
I guess we're just ignoring CF's rollback failures/delete failures/undeletable resources that require support tickets then?
> There are also situations where CF is just plain better. I mentioned in another comment how I managed to get the Datadog integration through a single CF file deployed through a StackSet (this basically ensured that any new account is properly configured). If I had used TF for this, I would likely have had to write some kind of service that listens for events from Control Tower whenever a new account is added to an OU, then runs Terraform to configure resources on our side and makes an API call to DD to configure it to use them.
Again respectfully, yes, the person that both doesn't like and hasn't invested time into using Terraform at scale probably isn't going to find good solutions for complicated problems with it.
While this is true, and AWS support is very responsive and useful, it doesn't mean they solve every problem. Sometimes their help is: "I'll note that as a feature request; in the meantime you can implement this yourself using Lambdas."
I'd much rather my engineers have spent their time practicing their actual jobs and not brainteaser interviews. To wit, the interview questions I ask are usually in line with the role they're assuming.
The author wrote this post in the context of hiring at a "hard technology startup". I interpret that as a place where engineers are expected to be able to do cutting edge computer science research (it's not a "well-established [role] to do specialized tasks"). It's not clear to me what sort of questions you would ask "in line with the role they're assuming" for this scenario.
One idea here is to give candidates an open problem in CS and ask them to solve it. Which is a nice idea, but one of the other things I learned from administering hundreds of interviews is that open-ended questions aren't great interview problems, because they depend a lot on creativity/lateral thinking/insight, which benefits greatly from being in a relaxed frame of mind. So these questions end up being a test of how relaxed the candidate is.
Stress creates tunnel vision, which is terrible for generating interesting research ideas but OK for solving these kinds of leetcode problems. So checking for a very high level of programming aptitude, as a proxy for CS research ability, is an approach that is a bit fairer to candidates who are stressed out by interviews. (I also think it's a pretty decent proxy, because doing great CS research requires you to quickly and fluidly generate and evaluate algorithms/data structures that might solve your problem, which is a big part of what a great leetcoder does.)
If you're targeting a very high level of generic programming aptitude, it is arguably most fair to make use of a standard method of measuring it. Leetcode problems are the industry standard for measuring programming aptitude. People know to practice them a lot and they know what to expect. If you came up with your own unique way to measure programming aptitude, that would create an even greater burden on candidates to practice (they'd have to do a whole different sort of practice in order to succeed in your interview), and also create anxiety due to an unexpected interview format.
I think a lot of readers are overreacting because they didn't pay enough attention to this bit: "It's applicable if you're building an extraordinary team at a hard technology startup." The vast majority of companies in SV are not doing hard technology and don't need an extraordinary team. People should not feel inadequate if they aren't capable of improving the state of the art in technical areas of CS such as databases. This is ordinarily the domain of PhDs, and filtering for demonstrated algorithmic aptitude (as opposed to academic credentials) is actually a pretty egalitarian approach.
So let's be clear. You're homing in on this block:
> "It's applicable if you're building an extraordinary team at a hard technology startup."
And you're accepting that a timed question about tic tac toe is enough to prove you're capable of being on an "extraordinary team" at a "hard technology startup"?
Yes, I think to a large degree being able to build hard technology is a matter of being extremely fluent with writing code and reasoning about data structures and algorithms, as I stated. Being able to solve this problem in 10 minutes seems like a hard-to-fake demonstration of such fluency. When I think about CS open problems and when I solve leetcode problems with algorithmic content to them like this one, it feels like I'm using the same part of my brain (or at least there is significant overlap).
If you don't agree with me that the fluency I described is a significant asset to advancing the state of the art in CS, what do you think a significant asset is?
> Yes, I think to a large degree being able to build hard technology is a matter of being extremely fluent with writing code and reasoning about data structures and algorithms
Sure. Again, I don't really think that an algorithm which shows up in the introduction to an algorithms textbook proves much more than that a person has read "Intro to Algorithms". So again, a timed introductory problem proves some elite technical skill?
> If you don't agree with me that the fluency I described is a significant asset to advancing the state of the art in CS
You're building a cute lil strawman. I think the question is totally out of line with the stated goal. If a college sophomore can answer a question, you're not really assessing much of anything. Also, working at a "hard startup" has nothing to do with "advancing the state of the art in CS".
> what do you think a significant asset is?
If I'm handling hiring for a "hard startup" and am in search of engineers fit for an "extraordinary team", I'm probably going to spend more time finding applicable skills than opening up to Chapter 1 in the closest algorithms book.
Perhaps you and I just have different views about what the pool of programming talent looks like. I think of your average CS sophomore as someone who will most likely struggle with Fizzbuzz in an interview setting. I'd say if they're a college sophomore, they just read Intro to Algorithms for the first time, and they can solve this problem in 10 minutes, that means they have an extraordinary ability to deeply and rapidly master material, and they'll be able to handle whatever challenge you throw at them. (Either that or they started programming way before they started college.)
In my observation the default state of a student reading a textbook is it goes in one ear and out the other. Most students temporarily acquire a superficial understanding of the concepts which allows them to answer test questions and get a decent grade. To see something in the wild and instantly recognize that it's isomorphic to a concept you studied years ago requires a level of mastery/passion well beyond what it takes to get an A. (I'm talking about school in general here, of course the fact that interviews index so heavily on data structures/algorithms ends up distorting things a lot from the baseline. Still, if you solve this problem in 10 minutes you're one helluva sophomore.)
>Also, working at a "hard startup" has nothing to do with "advancing the state of the art in CS".
I think of "hard technology" like RethinkDB as being exactly equivalent to cutting-edge work that advances the state of the art in some way... again, maybe there's just been a misunderstanding/miscommunication here.
When you use a front-end framework like React, there are UI changes you can make that must hit the server for information and other UI changes that can be done locally without interacting with the server. LiveView really shines in cases where you’re hitting the server anyway, but JavaScript can be more efficient for client-side-only changes. So, if you want, you can combine LiveView and JavaScript to optimize your site for both types of UI changes. You don’t have to. You can use LiveView exclusively, but the choice to combine LiveView and JavaScript is there if you want it.
Personally, I think React is a bloated choice if you’re going to use both LiveView and JavaScript. In cases where I use both, I typically use a small framework like AlpineJS or vanilla JavaScript for client-side-only changes.
> Personally, I think React is a bloated choice if you’re going to use both LiveView and JavaScript. In cases where I use both, I typically use a small framework like AlpineJS or vanilla JavaScript for client-side-only changes.
Good to know! That's probably why this boilerplate generator only has a handful of (what I think are) smaller JS libs and not Vue/React?
No, the purpose of LiveView is to act as a "smart server-side caching layer" so you don't throw everything away and start from scratch each time you hit the server.
That’s really not what it is. LiveView is a replacement for most of what you can do with JS, but it isn’t great for everything. The browser connects to the server via a WebSocket, and a process is started to manage the DOM state. Diffs are sent to the browser in response to various events, such as clicks on elements or PubSub broadcasts.
> Take note of the time and let them do their thing. Answer any questions they might have as they go. The moment the program outputs the correct answer, take note of the time again.
Gross.
> If you decide you want to hire the candidate, the interview must last at least six hours (with an hour break for lunch). Have your engineers interview the person, one by one, for about 45 minutes to an hour each.
Gross. The hardest pass.
> First, the candidate needs to feel they've earned the privilege to work at your company.
Gross. Equally hard pass.
> Second, your engineers need to feel they know the measure of whoever they're going to be working with.
Gross. Random engineers aren't qualified to assess talent or fit. Random engineers _may be_ qualified to pick candidates that match their gender/race biases though.
> and more importantly spend time having a little trepidation about how they did.
I would go so far as to say "Fuck you" to this author.
Ha, I read your comment first and thought you were somehow overreacting, but then I read the piece and I agree with you. The author doesn’t seem to realize that the flavor of interview he’s running is deeply trainable, and that one can become very, very good at tic-tac-toe style questions after a year or two of intense study. But who would do that? Well, a whole lot of IOI/ICPC contestants did. So, he’s mostly just hiring for people who joined the same club as, I assume, himself.
It's funny. The older I've gotten and the more code I've written, the slower I've become at writing code.
When I was fresh out of college the 'obvious' approach just appeared, and I was off to write it.
Now, with any problem, I'm slow: I peel it apart in my mind and decide what the best way to approach it is, based on the language I'm using, what I'm feeling right now, readability, maintainability, etc.
Tic-tac-toe? Interesting; should I store board state as a 2D array or a single array? The obvious approach would be to try to play every possible game with backtracking, but perhaps it would be simpler to just generate every possible permutation of 5 Xs and 4 Os. Would that work? On the one hand, a badly playing O might lose after just 3 Xs, but we could still fill out the board if we 'kept playing', so it maps one-to-one, so maybe! Of course, then you risk situations where both X and O win, so maybe not. Also, what about rotations? We could reduce our work by ensuring we don't solve cases that are just rotations of one another, but is the bookkeeping of that more work than just brute-forcing it, given the constrained nature of the problem? Etc., etc.
Writing working code matters, yes. But if you're looking for people who think just a few seconds before typing anything, and then seem frustrated they can't type fast enough, you're optimizing for juniors.
Which is touched on; you don't need to evaluate game states, you merely need to count them. A game that ended with X or O before the board filled out could still have additional Xs and Os written on it; while that would "lose" the winning state, it's immaterial since you aren't interested in the state, just the outcomes (or, possibly, the ways to get there).
I wasn't actually asking or pondering over it (the answer would in part be determined by what is meant by a "valid game"); I was just presenting a train of thought akin to what I would have in such a situation, examining the question in various lights as to ways to solve it, and generating clarifying questions to ask or ideas to explore.
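For reference, the brute-force version of the problem being discussed fits in a few lines of Python. This is a sketch assuming the question is "count the number of complete tic-tac-toe games," i.e. move sequences that stop as soon as someone wins or the board is full (the actual interview question isn't quoted in the thread, so that interpretation is an assumption).

```python
# The 8 winning lines on a 3x3 board, indexed 0-8 row by row.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def count_games(board, player):
    """Count move sequences from this position until a win or full board."""
    if winner(board) or ' ' not in board:
        return 1  # game over: this sequence is one complete game
    total = 0
    for i in range(9):
        if board[i] == ' ':
            board[i] = player
            total += count_games(board, 'O' if player == 'X' else 'X')
            board[i] = ' '  # undo the move (backtracking)
    return total

print(count_games([' '] * 9, 'X'))  # 255168 complete games
```

255168 is the well-known count when games are distinguished by move order; counting only distinct final boards, or quotienting out rotations/reflections as mused above, gives smaller numbers and noticeably more bookkeeping.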
IOI contestants (and people that can memorize lots of difficult algorithms) tend to have high intelligence. If you're looking to hire really smart people and you don't care about false negatives, it's a good way to approach the problem.
The entire point is to hire people that 1) are interested in exploring algorithms on their own time (i.e. they're interested in more than just getting paid for a job, they're interested in interesting problems too) 2) are competitive and want to win 3) are hard working and intelligent (i.e. they had to study for these competitions and do well in them).
You might miss a lot of smart people but it's pretty likely if you ask really hard algorithm questions that you'll filter out any not smart people. If you're reacting badly to this style of interview, that's because you're not their target talent pool.
Actually I’m a fan of algorithms questions! Of course, getting an engineering staff full of IOI medalists is a wonderful situation and will attract more talent. But I and others here are recognizing the role played by practice and grinding on this specific flavor of problem. The author states that intelligence is unchangeable and divides candidates into talent bands - but actually, his test is significantly impacted by factors other than intelligence. His framing is just weird, to be honest. “People who think faster than they can type” isn’t really a meaningful category, yet he’s turned it into a theology of interviewing.
> So, he’s mostly just hiring for people who joined the same club as, I assume, himself.
Not so. From the article:
> First, I cannot myself pass this interview. Last time I tried, I got the correct answer after about forty minutes or so. I could get it down with practice, but it doesn't matter-- I think slower than I type. That's a no hire. The point of the interview is to hire extremely talented engineers, not engineers as talented as me.
I think this is mostly a sign that he’s out of practice - he’s a former PhD student working on database tech, the man has spent plenty of time writing algorithms. I’m claiming that he’s under weighting the effects of practice on how well (specifically, how fast) candidates perform here. When people are coding at typing speed, it usually means they’re not really thinking at all - rather recalling a similar template from memory.
He even says "I could get it down with practice"; it's quite astounding that he doesn't get the importance of preparing for this specific type of test. I did competitive programming in high school and was decently good at it (got to IOI, won some medals). In my first year of uni they wouldn't allow us to compete in ACM; by the second year, I felt like I was competing with my hands tied due to the lack of practice. One year of pause is all it took.
There's _some_ truth in some of what he says, though, even if HN doesn't appreciate it. Like, anecdotally, I have a friend who refused a Google offer (when Google was much smaller, but still a big-ish name) and went for a startup, because the interview problems at the startup were very difficult and he figured they were going to have interesting problems to solve. I took note of that and made sure my interviews were as difficult as the candidate could handle: go progressively harder until the candidate is stuck, then back off. This works fairly well, especially with just-out-of-university hires; there's a certain type of person who notices it and likes being challenged. And they're often very good employees. (Of course, what the author suggests in the article is WAY over the top; I'd agree his process is broken.)
Serious question... how did these kind of fifth grade playground insults become normalized on HN? It's a blatant violation of the guidelines https://news.ycombinator.com/newsguidelines.html and yet somehow comments like this get upvoted.
You're welcome to flag my post if you think it's inappropriate.
The author of this post is either ignorant or malicious. Some of their behavior is so ignorant or malicious that it warrants direct adversarialism. This blog post is gross.
Curious, what about my post included a "playground insult"?
Would you please not post like this to HN? It's not what this site is for. No matter how strongly you disagree with someone about $topic, it's not ok to dump poison into the ecosystem.
Doesn't read as fifth grade to me. It just has a little spiciness to it, and I appreciate that. Think about it: the person could have linked and linked and linked to other articles to back up some point of disagreement. But instead, they shared their own opinion: relatable, easy to follow, and refreshing.
Please don't post like this to HN. It's directly against the site guidelines, which ask you not to fulminate or call names. Maybe you don't feel you owe people who you consider wrong about engineering interviews better, but you definitely owe this community better if you're participating here.
I'm sure you can make your substantive points thoughtfully, so please do that instead.