scott_w's comments | Hacker News

True unlimited PTO can exist in the UK, though it can only be unlimited upwards: employees can't agree to less than the statutory minimum.

That said, unscrupulous employers try to get around this by putting stupid requirements on taking PTO that, in practice, mean taking your legal allocation isn't possible. Things like "we need minimum staff levels for cover" and, shockingly enough, there never being enough staff to actually grant everyone's PTO. Combine this with long lead times to book it, managers "forgetting" you had PTO booked, and insecure job contracts, and you have a recipe for grinding your staff to dust.


> First: it's not the best use of our time.

I want to push back strongly on this. I think this attitude leads to more bugs as QA cannot possibly keep up with the rate of change.

I understand that you, personally, may not have exhibited this, based on your elaboration. However, that doesn't change the fact that many devs do take exactly this attitude with QA.

To take a slightly different comparison, I would liken it to the old school of “throwing it over the wall” to Ops to handle deployment. Paying attention to how the code gets to production and subsequently runs on prod isn’t a “good use of developer time,” either. Except we discarded that view a decade ago, for good reason.


It’s entirely dependent on the situation. In some areas, additional charges work best. In others, it’s possible/necessary to redesign road and street layouts to prioritise higher-density modes of transport and physically discourage low-density modes like cars. This might mean priority lights for public transport, lowering speed limits, and narrowing streets. In some contexts, it’s necessary to disallow cars entirely, with things like bus lanes, bike/pedestrian-only areas, and separated tram/metro lines.

Most of this infrastructure, in practice, also aids emergency vehicle use as they can usually fit down bike lanes and are obviously able to fit in bus lanes.


Cut him some slack, he might have been having a heart attack at the time and in need of one of those ambulances!

> Yes, I actually do think if Sanjay Ghemawat were instead Wojciech Przemysław Kościuszko-Wiśniewski, white European but otherwise an equal engineer, and I chose to elevate Jeff Dean over him, I would later feel equally bad about it.

You need to take a breath, read what people write, and stop trying to win the argument.


> stop trying to win the argument

Mr Rayiner is a lawyer by profession ;) https://news.ycombinator.com/item?id=11340543


Thanks! This explained to me very simply what the benefits are in a way no article I’ve read before has.

That’s great to hear! We are happy it helped.

I was at a podiatrist yesterday who explained that he's trying to "train" an LLM agent on the articles and research papers he's published, to create a chatbot that can answer the most common questions more quickly than his reception team can.
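
The rough shape of what he described sounds like retrieval-grounded generation. A minimal sketch, assuming an OpenAI-style API; the model names, chunks, and prompt here are all illustrative guesses, not his actual setup:

    # Minimal retrieval-grounded FAQ bot sketch. All model names,
    # chunks, and prompts are illustrative assumptions, not the
    # podiatrist's actual setup.
    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    # In practice: chunks of his published articles and papers.
    chunks = [
        "Plantar fasciitis is usually managed conservatively with ...",
        "Custom orthotics are indicated when ...",
    ]

    def embed(texts):
        resp = client.embeddings.create(
            model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    chunk_vecs = embed(chunks)

    def answer(question):
        # Retrieve the most relevant chunk by cosine similarity.
        q = embed([question])[0]
        sims = chunk_vecs @ q / (
            np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
        context = chunks[int(sims.argmax())]
        # Generate an answer constrained to the retrieved context.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content":
                 "Answer patient FAQs using only the context provided. "
                 "If it isn't covered, say to contact reception."},
                {"role": "user", "content":
                 f"Context:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

The point of a setup like this is that answers stay anchored to his own published material rather than whatever the model absorbed in training.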

He's also using it to speed up writing his reports to send to patients.

Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant. Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign.


> Longer term, he was also quite optimistic on its ability to cut out roles like radiologists, instead having a software program interpret the images and write a report to send to a consultant.

As a medical imaging tech, I think this is a terrible idea. At least for the test I perform, a lot of redundancy and double-checking is necessary because results can easily be misleading without a diligent tech or critical thinking on the part of the reading physician. For instance, imaging at slightly the wrong angle can make a normal image look like pathology, or vice versa.

Maybe other tests are simpler than mine, but I doubt it. If you've ever asked an AI a question about your field of expertise and been amazed at the nonsense it spouts, why would you trust it to read your medical tests?

> Since the consultant already checks the report against any images, the AI being more sensitive to potential issues is a positive thing: giving him the power to discard erroneous results rather than potentially miss something more malign.

Unless they had the exact same schooling as the radiologist, I wouldn't trust the consultant to interpret my test, even if paired with an AI. There's a reason this is a whole specialized field -- because it's not as simple as interpreting an EKG.


> I've gotten a lot of value out of reading the views of experienced engineers; overall they like the tech, but they do not think it is a sentient alien that will delete our jobs.

I normally see things the same way you do; however, I did have a conversation with a podiatrist yesterday that gave me food for thought. His belief is that certain medical roles will become redundant and disappear. In his case, he mentioned radiology, and he presented his case thus:

A consultant gets a report + X-Ray from the radiologist. They read the report and confirm what they're seeing against the images. They won't take the report blindly. What changes is that machines have been learning to interpret the images and are able to use an LLM to generate the report. These reports tend not to miss things but will over-report issues. As a consultant will verify the report for themselves before operating, they no longer need the radiologist. If the machine reports a non-existent tumour, they'll see there's no tumour.
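
To put rough numbers on the over-reporting side (all invented for illustration), the tradeoff he's describing is just sensitivity vs. specificity:

    # Back-of-envelope: why a highly sensitive AI reader over-reports.
    # All numbers are invented for illustration.
    prevalence = 0.01     # 1% of scans actually show a tumour
    sensitivity = 0.99    # flags 99% of real tumours (rarely misses)
    specificity = 0.90    # but also flags 10% of healthy scans

    n = 10_000  # per 10,000 scans:
    true_pos = prevalence * n * sensitivity               # ~99 real finds
    false_pos = (1 - prevalence) * n * (1 - specificity)  # ~990 false alarms

    ppv = true_pos / (true_pos + false_pos)
    print(f"Flags that are real: {ppv:.0%}")  # ~9%

With those (made-up) numbers, roughly nine out of ten flags are noise, and discarding them becomes the consultant's job, which is exactly the workflow he described.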


I've seen this sort of thing a few times: "Yes, I'm sure AI can do that other job that's not mine over there." Now maybe foot doctors work closer to radiologists than I'm aware of. But the radiologists I've talked to aren't impressed with the work AI has managed to do in their field. Apparently there are one or two incredibly easy tasks it can sort of do, but they comprise a very small amount of the job of an actual radiologist.


> But the radiologists I've talked to aren't impressed with the work AI has managed to do in their field.

Just so I understand correctly: is it over-reporting problems that aren't there or is it missing blindingly obvious problems? The latter is obviously a problem and, I agree, would completely invalidate it as a useful tool. The former sounded, the way it was explained to me, more like a matter of degrees.


I'm afraid I don't have the details. I was reading about certain lung issues the AI was doing a good job on and thought, "oh well that's it for radiology." But the radiologist chimed in with, "yeah that's the easiest thing we do and the rates are still not acceptable, meanwhile we keep trying to get it to do anything harder and the success rates are completely unworkable."


AI luminary and computer scientist Geoffrey Hinton predicted in 2016 that AI would be able to do all of the things radiologists can do within five years. We're still not even close. He was full of shit, and now, almost 10 years later, he's changed his prediction, though still pretending he was right, by moving the goalposts. His new prediction is that radiologists will use AI to be more efficient and accurate, half suggesting he meant that all along. He didn't. He was simply bullshitting, bluffing, making an educated wish.

This is the nonsense we're living through: predictions, guesses, promises that cannot possibly be fulfilled and which will inevitably change to something far less ambitious with much longer timelines, and everyone will shrug it off as if we weren't being misled by a bunch of fraudsters.


"History doesn't repeat itself, but it often rhymes", except in the world of computer science where history does repeat.

I doubt this simply because of the inertia of medicine. The industry still doesn't have a standardized method for handling automated claims the way banking does. It gets worse for services that require prior authorization: they settle this over the phone! This might sound like irrelevant ranting, but my point is that they haven't even addressed the low-hanging fruit, let alone complex ailments like cancer.

IMO prior authorization needing to be done on the phone is a feature, not a bug. It intentionally wastes a doctor's time so they are less incentivized to advocate for their patients and this frustration saves the insurance companies money.

Heard. I do wonder why hospitals haven't automated their side though. Regardless, the recent prior auth situation is a trainwreck. If I were dictator, insurance companies would be non-profit and required to have a higher loss ratio.

2 quibbles: 1) a more ethical system would still need triage-style rationing given a finite budget, 2) medical providers are also culpable given the eye-watering prices for even trivial services.


I would love to know how much rationing is actually necessary. I have literally 0 evidence to support this but my intuition says that this is like food stamps in that there is way less frivolous use than an overly negative media ecosystem would lead people to believe.

Radiology has proven to be one of the most defensible jobs in medicine; radiologists have beaten AI once already!

https://www.worksinprogress.news/p/why-ai-isnt-replacing-rad...


> Punishing in public

Honestly, the "praise publicly, reprimand privately" truism that people learn is, along with the shit sandwich, one of the most harmful maxims in management.

There are situations where, as the leader, the team needs to see you act. Let's take an example of someone speaking to another team member in an inappropriate way. If you reprimand privately, nobody knows you did that. Now, you have a team that thinks it's ok (or is raging that you think it's ok) to talk to each other in that way. If you call it out publicly, now everyone knows it's not.

It is a double-edged sword, though. I'd not put a junior on full blast for introducing a bug, or a team member for missing an issue in a code review. That would send completely the wrong message.


It's not one or the other, but should align with the culture. Like the old board chair of Starbucks said: if you're going to be an asshole, be a really good one.


> It's not one or the other

I'm not sure if you're disagreeing with me or not, given that was the gist of my comment.


Not disagreeing, but adding cultural context. If your culture is a boiler-room trading floor, sure, shame publicly all day long. Maybe not at a cancer nonprofit.


I think there's more nuance to what I'm saying: it's not just based on your company culture but on the situation you're faced with. Let's make my example more concrete. Let's say a member of your team calls another team member stupid for an honest mistake, in a public setting (so was witnessed by the rest of your team). Telling that person there and then "that's a disrespectful way to speak to a colleague and is against our values" will:

a) Demonstrate to all witnesses that the behaviour is not in line with your values

b) Make the victim of the behaviour feel seen and know they aren't alone

c) Make clear to the person receiving the feedback that you're unhappy

To achieve this same result in private conversations is monumentally more effort, if not impossible. If you pull them into a private conversation, you're still publicly reprimanding them, just without giving clear communication to everyone. Do you wait until later to reprimand in private? Then you need to speak to everyone about what was said and repair the damage that the delay in speaking up caused.

However, there are plenty of situations where calling out something so publicly would be the wrong thing to do, like pushing bugs to production, as you'd likely be seen to be overreacting. You still want to give the feedback if, say, the team member was ignoring processes. It's just usually better done in private.


I like the demo.

For people looking to transition to management, one thing I’ve learned is that a big part of my job is getting everyone to do only one thing at a time. Every stakeholder (including engineering managers) is obsessed with the idea of “sneaking a bit more work in,” and I’ve never seen it work. I will actually go as far as refusing to estimate work if I have something more important for the team to focus on. After all, estimation is work, and we have a higher priority!

The benefit is that you’ll often find nobody actually estimated the business value/priority before asking for the work estimate, so you end up wasting less time overall. The hard part is resisting the pull of your boss asking you to do something.


'Estimation is work' is a maxim I wish more organizations understood!

You really highlighted the core tension here: The theory of management (WIP limits, focus) is logical and easy to understand. But the practice—actually looking at your boss in the eye and saying 'No, we won't even estimate that right now'—is pure emotional friction.

That specific 'hard part' you mentioned—resisting the pull of authority—is exactly the muscle I'm trying to help people build without burning bridges. It’s the difference between knowing the rule and having the stomach to enforce it in a balanced way.


It's often better to say nothing at all rather than to reply with an LLM generated response.


Fair check! I use it to polish my phrasing (especially trying to keep up with this thread volume), but the 'scars' and the management experience behind the comment are 100% human. Point taken though—I'll try to keep it rawer


Deeply, deeply insulting.

This is clearly another LLM response, bud. Stop using it to communicate; it’s too obvious.

