iamsomewalrus's comments | Hacker News

I use Go as my preferred language and I think the author is mostly right.

There’s no way for me to know, or even check, what errors a given function can return. Sure, sometimes a comment in the library is illuminating, but sometimes it isn't.

I agree that errors as values rarely feel useful to handle at the call site.

The `errors.Is`/`errors.As` ergonomics have improved, but damn if coding agents don’t love to just `fmt.Errorf` every error in the code they write, hiding the useful information about the error inside a string.
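
A tiny example of what I mean. The only difference below is the verb (the sentinel error is made up):

    package main

    import (
        "errors"
        "fmt"
    )

    var ErrQuotaExceeded = errors.New("quota exceeded")

    // %v flattens the sentinel into a string; the caller can only log it.
    func saveOpaque() error {
        return fmt.Errorf("saving widget: %v", ErrQuotaExceeded)
    }

    // %w wraps the sentinel; the caller can still react to it.
    func saveWrapped() error {
        return fmt.Errorf("saving widget: %w", ErrQuotaExceeded)
    }

    func main() {
        fmt.Println(errors.Is(saveOpaque(), ErrQuotaExceeded))  // false
        fmt.Println(errors.Is(saveWrapped(), ErrQuotaExceeded)) // true
    }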


The article claims:

"The user now has an interface value error that the only thing they can do is access the string representation of."

This is false. It used to be true, but that was many years ago.


>this is false.

It's mostly false.

Technically, if you use `fmt.Errorf` without the `%w` verb, the caller can't get anything useful out of your errors beyond the string.

Types are promises. All the `error` interface promises is a string representation. I acknowledge that most of the time there is more information available. However, from the function signature _alone_, you wouldn't know. I understand that this is more of a theoretical problem than a practical one, but it still holds (in my opinion).
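
To make that concrete (hypothetical names): the signature of Lookup below promises nothing beyond a string, and the caller can only get more out of it by already knowing, out of band, which concrete type to ask `errors.As` for:

    package main

    import (
        "errors"
        "fmt"
    )

    // NotFoundError carries structured information, but nothing in the
    // signature of Lookup advertises that it might be returned.
    type NotFoundError struct {
        Key string
    }

    func (e *NotFoundError) Error() string {
        return fmt.Sprintf("key %q not found", e.Key)
    }

    // Lookup's signature only says "error", i.e. "has an Error() string method".
    func Lookup(key string) (string, error) {
        return "", &NotFoundError{Key: key}
    }

    func main() {
        _, err := Lookup("user:42")

        // Without out-of-band knowledge, this is all the type promises:
        fmt.Println(err.Error())

        // With it, errors.As recovers the structured value:
        var nf *NotFoundError
        if errors.As(err, &nf) {
            fmt.Println("missing key:", nf.Key)
        }
    }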


The subtlety missed in these conversations is that almost all the information there is for typical error handling --- that is, all the information that would be present in a typical Rust error handling scenario as well --- is encoded in the type tree of the errors.

(Rust then improves drastically on the situation with pattern matching, which would simply improve Go with no tradeoffs I can really discern, just so we're clear that I'm not saying Go's error story is at parity with Rust. But I'd also point out that Rust error handling at a type level is kind of a painful mess as well.)


>that is, all the information that would be present in a typical Rust error handling scenario as well --- is encoded in the type tree of the errors.

it's only present when you downcast though?


It's not super ergonomic, but the information is there, in about the same density as it would be in a Rust app, supporting the same error handling strategies (ie, discriminating between transient and durable errors, &c).
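
For example, a retry helper that only retries transient failures needs nothing more than a (made-up) marker type and `errors.As`; this is a sketch, not battle-tested code:

    package retry

    import (
        "errors"
        "time"
    )

    // TransientError marks failures that are worth retrying.
    type TransientError struct{ Err error }

    func (e *TransientError) Error() string { return "transient: " + e.Err.Error() }
    func (e *TransientError) Unwrap() error { return e.Err }

    // CallWithRetry retries only while the error tree contains a TransientError.
    func CallWithRetry(do func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = do(); err == nil {
                return nil
            }
            var t *TransientError
            if !errors.As(err, &t) {
                return err // durable error: give up immediately
            }
            time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
        }
        return err
    }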


I feel that, in general, over the past 20-odd years there has been an overemphasis on complex control flow around errors.

Lots of different fine-grained error types, with complex logic spread out over several call layers.

IME it's better to aim for simpler handling, which seems to match Go.


Genuinely curious: can you steelman stored procedures? Views make intuitive sense to me, but stored procedures, much like metaprogramming, need to be used sparingly IMO.

At my new company, the unchecked use of stored procedures has really hurt part of the company's ability to build new features, so I'm surprised to see what seems like sound advice, "don't use stored procedures", called out as a cargo cult.


My hunch is that the problems with stored procedures actually come down to version control, change management and automated tests.

If you don't have a good way to keep stored procedures in version control, test them and have them applied consistently across different environments (dev, staging, production) you quickly find yourself in a situation where only the high priests of the database know how anything works, and making changes is painful.

Once you have that stuff in git, with the ability to run automated tests and robust scripting to apply changes to all of your environments (I still think Django's migration system is the gold standard for this, though I've not seen that specifically used with stored procedures myself) their drawbacks are a lot less notable.
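
For what it's worth, a minimal sketch of what "in git, tested, applied everywhere" can look like with plain `database/sql` against a throwaway Postgres database (the table, function, and env var are all made up):

    package sproc_test

    import (
        "database/sql"
        "os"
        "testing"

        _ "github.com/lib/pq" // Postgres driver; any driver would do
    )

    // In practice this lives in a versioned migration file applied by your
    // migration tool; it's inlined here only to keep the sketch self-contained.
    const createOrderTotal = `
    CREATE OR REPLACE FUNCTION order_total(p_order_id bigint)
    RETURNS numeric LANGUAGE sql STABLE AS $$
        SELECT COALESCE(SUM(quantity * unit_price), 0)
        FROM order_items
        WHERE order_id = p_order_id;
    $$;`

    func TestOrderTotal(t *testing.T) {
        // TEST_DATABASE_URL points at a throwaway database in CI.
        db, err := sql.Open("postgres", os.Getenv("TEST_DATABASE_URL"))
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        setup := []string{
            `DROP TABLE IF EXISTS order_items`,
            `CREATE TABLE order_items (order_id bigint, quantity int, unit_price numeric)`,
            `INSERT INTO order_items VALUES (1, 2, 5), (1, 1, 3)`,
            createOrderTotal,
        }
        for _, stmt := range setup {
            if _, err := db.Exec(stmt); err != nil {
                t.Fatal(err)
            }
        }

        var total float64
        if err := db.QueryRow(`SELECT order_total(1)`).Scan(&total); err != nil {
            t.Fatal(err)
        }
        if total != 13 {
            t.Fatalf("order_total(1) = %v, want 13", total)
        }
    }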


> My hunch is that the problems with stored procedures actually come down to version control

Git? (and migrations)

> change management

Again. Just like any other code.

> and automated tests.

Just write an automated test like you write any other kind of test?


That's exactly what I'm saying. If you do those things stored procedures stop sucking.


It's also about being able to scale your business logic separately from the data layer.


You give no reasons why you think it's sound advice.

My experience is the following:

1) Transactions are faster when they are executed as a SQL function, since you cut down on network roundtrips between statements (see the sketch after this list). It also prevents users from doing fancy shenanigans with the network after calling `startTransaction`.

2) It keeps your business logic separated from your other code that does caching/authorization/etc.

3) Some people say it's hard to test SQL functions, but since pglite it's a non-issue IMO.

4) Logging is a little worse, but `raise notice` is your friend.
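
To make point 1 concrete, here's a rough fragment (error handling elided; `transfer` stands in for a hypothetical Postgres function that does both updates inside one transaction):

    // Four network roundtrips (BEGIN, two UPDATEs, COMMIT), with locks
    // held on the rows while each roundtrip is in flight:
    tx, _ := db.Begin()
    tx.Exec(`UPDATE accounts SET balance = balance - $1 WHERE id = $2`, amt, from)
    tx.Exec(`UPDATE accounts SET balance = balance + $1 WHERE id = $2`, amt, to)
    tx.Commit()

    // One roundtrip: the whole transaction runs inside the database.
    db.Exec(`SELECT transfer($1, $2, $3)`, from, to, amt)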

> At my new company, the use of stored procedures unchecked has really hurt part of the companies ability to build new features

Isn't it just because most engineers aren't as well-versed in SQL as they are in other programming languages?


Stored procedures are great for bulk data processing. SQL natively operates on sets, so it's pretty silly to pull a dataset over the wire, process it iteratively in a less efficient language, and then transfer the result set back to the database.

Like any tool, you just have to understand when to use it and when not to.
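
A toy illustration of that point, assuming a hypothetical `prices` table and with error handling elided; the set-based statement could just as well live inside a stored procedure:

    // Row by row: pull every row to the client, then push every change back.
    rows, _ := db.Query(`SELECT id, price FROM prices`)
    for rows.Next() {
        var id int64
        var price float64
        rows.Scan(&id, &price)
        db.Exec(`UPDATE prices SET price = $1 WHERE id = $2`, price*1.1, id)
    }

    // Set-based, inside the database: one statement, no data crosses the wire.
    db.Exec(`UPDATE prices SET price = price * 1.1`)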


It’s about what you want to tie to which system. Let’s say you keep some data in memory in your backend: would you forbid engineers from putting code there too, and force it a layer out to the front end - or make up a new layer between the front end and this backend, just because some blogs tell you to?

If not, why would you then avoid putting code alongside your data at the database layer?

There are definitely valid reasons to not do it for some cases, but as a blanket statement it feels odd.

Stored procedures can do things like smooth over transitions by having a query not actually know or care about an underlying structure. They can cut down on duplication or round trips to the database. They can also be a nightmare like most cases where logic lives in the wrong place.
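
As a made-up example of smoothing over a transition: callers keep issuing the same query while the function body changes underneath them.

    // Application code only ever does this, before and after the migration:
    //   row := db.QueryRow(`SELECT name, email FROM get_user($1)`, id)
    //
    // v1 of get_user read a single users table. After emails move to their
    // own table, v2 keeps the same signature, so no caller has to change:
    const getUserV2 = `
    CREATE OR REPLACE FUNCTION get_user(p_id bigint)
    RETURNS TABLE (name text, email text) LANGUAGE sql STABLE AS $$
        SELECT u.name, e.address
        FROM users u
        JOIN user_emails e ON e.user_id = u.id
        WHERE u.id = p_id;
    $$;`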


off the original topic, but on topic for this -

Yes! Gibson recognized they were getting cut out of the vintage market and started making not only the reissues (RIs), but also the limited edition copy-of-famous-person's-guitar. What gets me is that Reissues these days are priced so close to vintage instruments. It's so hard to justify the purchase.


ADHD-haver here. I do the same thing but with music. I created a playlist of albums I've heard hundreds of times. The songs are in album order and when I play the list I play from the start of an album somewhere in that list.

It keeps that part of my brain occupied, but not my focus, while I work on the task at hand.


I go through really weird phases with music. Music is a huge part of my life; sometimes I play piano or guitar, or produce.

There are points where I'll realize I've gone 6+ months without opening Spotify except when I'm driving. Just months and months of the news or YouTube as background, and no music.


^ I can vouch for this. Worked at Goodreads from about 2013 to 2017 ish. I’m sure my name haunts git blame now.


Oh man, thousands of up votes for a Led Zeppelin reference. This mirrors my experience as a musician. Learning to play gave me the confidence to go into different scenarios and recognize that I had to tolerate being bad first and then I would get it. It's a journey. It's a sacrifice. Strangely, it never felt like a sacrifice at the time.


i helped build this! it's not what i do anymore, and i don't want to go into too much detail, but the TL;DR is that, yes, just walk out (JWO) really does use machine learning models to identify which products customers are picking.

This video from a year ago goes into more engineering depth than promotional videos: https://www.youtube.com/watch?v=S5t6aYhj6pU

the non-obvious challenges of working in this space are that you have to deal with real world constraints that you just don't have to think about in 100% cloud based software. hardware goes down! internet connections go out! electricity goes out! how much processing can you do in store vs out of store?

that's just the tech in the store. what about humans? humans are now part of your program state, moving shit around. in-store associates mis-stock items. kids do kid shit. people don't place things back exactly where they found them.

Then you have interesting distributed problems: how do you handle late data? what should be a massively parallel problem is really a graph of interactions that have to be resolved in just the right order so you can generate an accurate receipt.

and you know what's crazy? the vast majority of the time it works! exactly how they say it does: with machine learning models.

bonkers.


This is my experience as a line manager but also a manager of managers.


Hire-to-fire feels like an urban myth. We do a lot of work to find you, interview you, hire you, and train you. Managing someone out is also a lot of work.

That being said, a company as big as Amazon will always have edge cases.


The myth I heard was: Manager has 10 employees who are absolute rock-star engineers. They love their team, work great together, and deliver amazing results.

This manager obviously doesn't want to hurt this awesome team or lose any members, but Amazon wants them to churn out the bad engineers (which this team may not have).

So instead of finding faults where they don't exist, our manager hires 2 engineers with little to no chance of working out on the team long term, gives them some busy work, puts them on a PIP, then eventually fires them, only to soon replace them with two new victims.

It sucks to be a victim, and it sucks to knowingly do that as the manager, but it does work out great for the team and the manager - so I think it could happen (maybe not super common, but it isn't unrealistic in my mind).


That's the problem with applying a curve as a rule. In a large org it's probably true across the board. But there will be outlier teams. How does an org handle those?

You can critique Amazon for basically saying "don't worry about it": managers may invent plans like hire-to-fire to game the system, and someone on that team is going to get screwed. But it's a tough problem to solve - otherwise, what's to stop every manager from simply claiming "no, my team's a special case"? Introduce special cases and the system will just be gamed through that mechanism.

I'm a firm believer that if you want to avoid things like that, you have to avoid large organizations. A large organization is highly incentivized to be bureaucratic, so as to minimize the effects of unexpected losses and keep the money machine running.


>otherwise, what's to stop managers from all simply claiming "no, my team's a special case"? Introduce special cases and it'll just be gamed through that mechanism.

You lost me. Why should we want to stop managers from saying that?


You got the myth part right: having a single team with 10 rock-star engineers.

1. Amazon isn't so special that even they can control the distribution curve to only have rockstars,

2. Ten people is too big for a single team in the first place, let alone 10 rockstars,

3. You don't want any rockstars, let alone only rockstars on your teams,

4. Managing low performers is an order of magnitude more work than managing solid performers; no manager would do this to themselves on purpose. It would be easier to cut their 2 least rockin' stars.


I hate that some people say rockstars and mean ‘really great developers who quickly write beautiful, working, well-documented code and work great as part of a team’.

And other people say rockstars and mean ‘assholes that churn out new features really fast with incomprehensible code and then leave others to maintain their monstrosity as they move on to the next thing”.

I assume Amazon is large enough to have a team somewhere with 6-10 of the ‘good’ version, and also a team somewhere with 6-10 of the assholes.


> Managing low performers is a magnitude more work than solid performers; no manager would do this to themselves on purpose. It would be easier to cut their 2 least rockin' stars.

Then what do you do a year (and a half for Amazon) later for the next cycle?

One of the many problems with "stack rank and cull N% of your employees every year" is the "every year" part. At whatever level it's mindlessly applied, if the company wants to keep the same head count, you by definition have a steady churn of hires (which, I'll grant you, won't always be good) and fires.


If 2 pizzas can’t feed 10 people, the chances are there are some health questions to be asked about a given team.


> Hire for fire feels like an urban myth. We do a lot of work to find you, interview you, hire you, and train you

Your unspoken assumption here is that recruiters, HR, and managers have perfectly aligned incentives. It is trivial to find instances where this is not the case - e.g. a high-performance team being forced to fire their "least effective" member[s] - a policy that relies on the theory that talent is uniformly distributed across Amazon.


Amazon manager here: I just did a mental count of our URAs while I have been at Amazon. More than 3/4 are new hires fired in less than a year.

Hire-to-fire is neither a conspiracy nor a conscious tactic. It's the natural consequence of the position we managers are put in.

If HR wants you to URA someone out, who would you rather lose: the experienced SDE who is productive and responsible for projects you don't want delayed, or the new hire who's still trying to learn the company tools?

That's why hire-to-fire is a thing.


It's cheaper to allow an employee on the low end of HV to transfer in from another team.


I think this is too pessimistic. Via FBA they've also enabled thousands (hundreds of thousands?) of small businesses, and a cottage industry of others has formed around them.

