Hacker News | jaabe's comments

I think this is horrible advice, and while my evidence is anecdotal, I’ve seen nothing to suggest that not writing code comments leads to higher quality code.

I once managed a developer who at one time could have written this very blog post. He was also a big fan of the SOLID principles, and I think he generally wrote good code. I actually also agree with parts of the article. If you name your properties and functions in a reasonable way, then you don’t need a comment telling future readers what you put in firstName on an employee object or what getEmployees() does. When it comes to business logic and intent, however, you need to help your future readers. My developer learned this the hard way when some business logic changed and he had to fix a system he had written a couple of years earlier. Suddenly the once so clear code wasn’t clear anymore, and he had to spend a week reintroducing himself to exactly what intent had gone into making the system. Maybe he was just bad at practicing what he preached, but he started writing code comments after that.
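To make the distinction concrete, here is a minimal Python sketch (invented names, and the union-agreement rule in the comment is made up for illustration): the field needs no comment because its name says everything, while the business rule needs one because the code alone can’t tell you *why* the number is what it is.

```python
from dataclasses import dataclass


@dataclass
class Employee:
    first_name: str      # no comment needed: the name says it all
    years_employed: int


def severance_weeks(employee: Employee) -> int:
    # Business intent the code alone can't convey: the (hypothetical)
    # 2019 union agreement grants one week of severance per year of
    # employment, capped at 26 weeks.
    return min(employee.years_employed, 26)
```

Two years from now, `min(x, 26)` without the comment is exactly the kind of "once so clear" code that costs a week to reverse-engineer.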

And that’s someone unable to get back into their own mindset; it’s much, much worse if the code is handed off to someone else. If I task two developers with solving the same problem, as clearly as they possibly can using self-explaining code, I’ll get two very different results, because what counts as clear code is very subjective.


If actual usage reflected front-page position, we’d all be talking about C, C#, Java or PHP.

I follow a few job agents because I do part-time work as an examiner for applied CS students, and I occasionally advise on what languages they should teach to make sure their students are job-ready. Nothing has really changed for a decade. Python demand is really the only area that has seen a real increase, but not for developers, as you need some sort of degree in either statistics or math to get the ML/BI jobs.

It’s always a little sad to see students who spend a lot of time buying into the hype and building their final projects in something like Rust or Go, and before that node/express, because they’ll have a much harder time finding a job than the ones who didn’t. Not because learning Rust is bad, but because it’s safer (and cheaper) for employers to choose their competition, and it’s not like there is really a lack of freshly educated developers anymore.


I think it’s a lot easier to be anonymous if you keep your accounts but never use them for anything real. I quit using Facebook two years ago, but I didn’t stop posting the occasional picture of some event I attended.

I simply uninstalled all their apps and deleted anything that I had ever posted which wasn’t completely irrelevant. I also deleted almost every picture of myself and prohibited people from ever tagging me.

This way I look like a normal boring guy who likes board games if you glance over my profile, but you couldn’t actually get anything to use against me.

It’s very Orwellian I guess, but it means I never have to explain why I’m not on social media.


We’re required to have password expiration by law in the public sector of Denmark. So I’m sure we’ll continue to have it for at least some years to come.

I must admit I never really understood the function of it. Obviously lifetime access is more damaging than 3 months of access, but the truly devastating thing is the unauthorised access itself, not the length of it. The policy also results in really bad practices, like people using summer2019 as their password or writing their current password down on Post-its. We tried blocking stuff like summer2019, but people get really creative. People also forget to renew their passwords, costing hundreds of hours in the process.
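A minimal sketch of the kind of "summer2019" filter described above (the pattern list is illustrative; real attacker wordlists and real users are far more creative, which is exactly the problem):

```python
import re

# Matches the season/month-plus-year passwords people fall back on under
# forced rotation, e.g. "summer2019", "Winter_2020!", "oct2018".
SEASONAL = re.compile(
    r"^(spring|summer|fall|autumn|winter|"
    r"jan(uary)?|feb(ruary)?|mar(ch)?|apr(il)?|may|jun(e)?|"
    r"jul(y)?|aug(ust)?|sep(tember)?|oct(ober)?|nov(ember)?|dec(ember)?)"
    r"[._-]?(19|20)\d{2}[!?.]?$",
    re.IGNORECASE,
)


def looks_seasonal(password: str) -> bool:
    """Reject the obvious season+year patterns forced rotation produces."""
    return SEASONAL.match(password) is not None
```

As the comment notes, this is a losing game: block "summer2019" and you get "Summer2019!", leetspeak variants, or the company name plus the quarter instead.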

We have 2FA now, which will soon be required under our adoption of the GDPR, but you have to wonder why we didn’t get that decades ago instead of password expiration.


Writing passwords on paper is recommended by security professionals, in the common case where your physical security is far more trustworthy than your digital security, because it supports the use of long, strong passwords. A 2FA device is very similar to a Post-it note.


Actually most security professionals have a serious downer on writing passwords down.

I can see some circumstances where it could make sense, as you say where physical security concerns are less of an issue.

That said I wouldn't say a 2FA device is like a post-it note really.

Assuming you're thinking about TOTP like Google Authenticator, access to the codes is protected by the device's security, which adds a bit more to it than a post-it under a keyboard.
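There is another difference from a post-it: a TOTP code is derived from a shared secret that never leaves the device, and each code is only valid for one 30-second window. A minimal sketch of the RFC 6238 algorithm (SHA-1 variant, 8 digits as in the RFC's test vectors):

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 8) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1 variant)."""
    counter = unix_time // step                # current 30-second window
    msg = struct.pack(">Q", counter)           # counter as 8-byte big-endian
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: with the ASCII secret
# "12345678901234567890", at unix time 59 the SHA-1 code is "94287082".
```

So shoulder-surfing a code buys an attacker at most half a minute, whereas a password on a post-it is good until the next forced rotation.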


For example, Bruce Schneier recommends writing down the password and keeping it in a relatively safe place like your wallet (where people keep other sensitive information like credit card numbers).

https://www.schneier.com/blog/archives/2005/06/write_down_yo...

I don't think anyone recommends writing down the password on a post-it note and putting it on the computer screen at work.


Even then, if it's an OS password (drive encryption n/inc) and they have physical access to the disks containing assets then it's already game over.


I briefly worked at a place that enforced quarterly password changes and I literally used <Season><Year> as my password. I am not good at remembering passwords and I don't think I'm that unusual. Writing them down seemed worse than using a poor password that I can at least remember.

Probably these days if forced I would use <Prefix><Season><Year>. I don't know how much better that is. But luckily now I work for myself.


How often have you had information stolen off a credit card, passport, driver's license, insurance card, or other item with sensitive information printed on it that you routinely carry around in your wallet?

For most people, the answer is "never".

We are actually quite good at safely keeping secrets on paper in our wallets, and so generally writing down a password and keeping it there is fine, especially if the choice is between doing that with a strong password or using a weak password that you memorize.


Plus, people usually have a better memory than they give themselves credit for. With a reasonably short random password (say, 10-12 chars: uppercase, lowercase, digits) that you use often, you will memorize it after a week, at which point you can simply destroy the post-it note you carried in your wallet.
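For what it's worth, generating such a password is a one-liner with Python's `secrets` module (a sketch; the 62-character alphabet gives about log2(62) ≈ 5.95 bits per character, so 12 characters is roughly 71 bits of entropy):

```python
import secrets
import string

# Uppercase, lowercase and digits, as suggested above.
ALPHABET = string.ascii_letters + string.digits


def random_password(length: int = 12) -> str:
    """Generate a random password using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

`secrets` (rather than `random`) matters here: it draws from the OS's CSPRNG, so the result is suitable for credentials.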


Plus if your wallet gets stolen, you will know someone potentially has your password, and change it.


Writing passwords down is much better than using a guessable password. Your physical location is more secure than a weak password that shows up in a rainbow table.


There is Runbox, a Norwegian Fastmail equivalent, if you’re looking for something non-American, but G Suite e-mail offers privacy.


Why does it need refactoring? Because you think you can write it better or because you can actually improve its efficiency?

If it’s the former, don’t. If it’s the latter turn it into a task and have it prioritised.


What high quality software?

I work in the public sector in Denmark. We operate 300-500 systems from private suppliers, and none of them work; none of them are particularly cheap either.

Our medical software on life supporting machinery is about the only software that actually always does what it’s supposed to, but it goes decades without changes. Everything else is a broken mess, regardless of what principles of development the companies adhere to.

I think the only software that we operate which is high quality, stable, secure and capable of adding/removing features when we ask is our dental software, and that’s actually some of the cheapest software that we buy. It’s not made by a tech/development house though; it’s made by a couple of former dentists who do it as a side product of their main business, which is selling dental equipment.

So maybe the real issue lies with the development houses? But our experiences are obviously anecdotal so it’s hard to say.


Obviously anecdote != data, and you don't say what the software does. But your story would seem to imply that domain knowledge is more important than software development knowledge. There's a certain logic to it: someone who deeply intuits what the software is trying to model will naturally gravitate towards the correct abstractions, even if they're ugly and ad hoc, while a professional software developer with no domain experience will happily build a shiny tower of abstraction that doesn't fit the problem domain, and chaos will ensue.

There's a reason we emphasize nailing down requirements before committing code. But what if it goes a level deeper than that? Perhaps what we actually need to do is understand the mindset that is generating those requirements. Perhaps, for some types of software, that's equivalent to being a domain expert.


> But your story would seem to imply that domain knowledge is more important than software development knowledge.

I think the story suggests an alternative explanation: that the product is a side project for the people who make it, probably even considered a marketing expense. So they don't have the usual software-house incentive to fleece the government while delivering the worst possible product. I wouldn't be surprised to discover that these dentists don't consider it a high-pressure project, so they actually take the time to do it right and be proud of their work.


The real issue is that you pay them to deliver bad software.

I know it sounds like a weird thing to say. But had you as a customer demanded and been willing to pay for something different, you would get that.

Think about how the public sector buys a software development project: what sort of process the supplier has to go through, how they qualify, how they bid, how the requirements are formed, how the software is tested, delivered and so on.

Had the public sector prioritised internal quality, it could have done so. But it chooses not to.

In a public sector IT project the actual software development is only a small fraction of the cost. The other parts (sales, legal, management, testing, documentation, ...) have a much bigger impact on the supplier's ability to make money. Thus those are the parts you get, and that is what drives the cost.


Yes! Some organizations are incapable of buying software.

Buying yet-to-be-developed software is easy with the right software company: you just need to provide your problems and priorities, keep an open mind, and let them manage the process. We do that for our customers, and we have happy customers.

But if you're incapable of choosing a good partner or you let your internal politics dominate the process, then it is extremely difficult. Even with a good development company, a dysfunctional buyer can easily be a factor of 200-500% in lost productivity.

Off-the-shelf software should be easier - you can just try it out. But the wrong organization can easily be incapable of that too, bundling everything up to save money without understanding how much more complex it makes everything and how ill-equipped they are to handle that complexity, never trying things out in practice, writing long spec lists instead, bikeshedding over unimportant implementation details, prioritizing development contract minutia over working systems, putting too many layers between the developers and actual users, going for a big bang.

There are many ways to screw it up.


Software is not about code. Software is a reflection of culture. The code is a means to a social end.

And if everything about a culture is broken - the relationships, the management insight, the goals (collective effectiveness and pride-in-professionalism vs individual ego and greed), the hiring and HR systems, the procurement, the sales - any software that crawls out of the swamp is going to reflect all of that.


When you buy big enterprise systems you enter contracts that aren’t easy to exit. You also bind so much money into those contracts that you don’t really want to leave them either, even if the company sucks at delivering. Maybe you’ll fight them in the courts for a few years and maybe they’ll compensate you a few hundred million, but once you enter these deals you’re basically in them until the law dictates that you have to do another round of bidding.

I’ve done this with a lot of different companies and a lot of different development and project management philosophies though, and they all fail.

We’ve gone full waterfall, we’ve gone full agile and everything in between. We’ve done long, detailed requirement specifications, and we’ve invited companies into the heart of our business, letting them literally work inside our offices, sitting shoulder to shoulder with our domain experts. None of it produces high quality software.

The highest quality software we have, aside from that of a few small suppliers, is the software we build ourselves. It’s anecdotal again, but it’s the same story I hear in my network of digitalisation managers across the country’s public sector and banking.


With the way government contracts work, you'll almost never get really high-quality software that way. The contractor simply does not have any incentive to do so, as it isn't in the contract. Instead, the contract usually gives them the incentive to drag things out as long as possible and make sure development costs are as high as they can get away with; "cost plus" contracts are notorious for this.


I’ve worked in the private sector though, and things weren’t better there.


Does humanity even know how to make quality software of the size where it costs on the order of hundreds of millions?


It’s quite an interesting question. Our national tax ministry has had almost nothing but expensive scandals over the past 15 years.

Two years ago they set up a focused devops team inside their organisation. I don’t know the exact details of it because my knowledge is from a 45-minute summit talk, but apparently this team managed to build a national-scale system in 3 months that actually works. That would have cost them billions on the private market, and it would likely never have worked, yet they did it with a relatively small team.

Maybe the problem is scale. I mean, sometimes I wonder why our contracts include numerous product owners, key account managers, groups of business analysts, project managers and God knows what else.

This is a little unrelated to buying big systems, but when we wanted to build an RPA setup, one of the consultant agencies had an offer which included 6 business-side people and one technician. I mention it because sometimes buying enterprise systems feels exactly like that.


Quality software is not just a matter of the customer demanding it.

Medical software usually comes with medical devices. You'd need a manufacturer that is good at developing both the devices and the accompanying software, and a medical organisation that is good at its core business (being doctors) and knows enough about medical equipment and software to choose the manufacturer that has good quality in both. Even though another manufacturer may have superior or more affordable equipment and not be as good on the software side, etc. (if that can even be judged before using the stuff for a while).

And all sides need to stay profitable while doing this.

Who says going for the manufacturer with the quality software is even worth it? Maybe it's better to go for the one with the better MRI scanner and make do with the crap software, etc.


The software market has strong information asymmetry: typically the seller knows far more about it than the buyer. Buyers struggle to assess quality and must rely on other signals ("nobody fired for buying IBM", "everyone uses Giant GloboConsultingCo", "I really like the font" etc).


Sounds like it's actually a team-size issue. Two guys in their spare time can't create a codebase big enough to have severe agility or maintenance issues. But the moment you ask for a feature they can't handle in that very limited funding envelope, there's going to be an issue.


Or they have simple ways to get the data in and out of their software so the features don't need to be in there. The best software stays small and resists having all those features added.


I work in the Danish public sector, and we currently have 300 systems from various contractors. Almost all of them are moving toward a C#/Java API-based backend and an Angular front end. Only one of those systems uses Vue. None of them use React.

Outside the public sector React is a bit more popular, but it’s still mainly used outside of the enterprise, and Vue rarely sees any use.

The job-market doesn’t seem to follow the tech hype cycles much, at least not when you live in a country like mine.


I can confirm this for the Dutch public sector as well. I'm saying this as a longtime React contractor. There are some React contracts, but very few, and no Vue that I know of.


Maybe just afraid to use tools developed by someone you don't know?


You don’t actually have to set up your web-development environment like the hype dictates. I mean, I work in a Danish municipality with more than 5k employees and 60k citizens who directly use our, mainly web-based, solutions daily. We build around 50 a year, and maintain a few hundred of them, and almost none of them are built with a JavaScript MVVM framework.

Not because Vue/React/Angular aren’t nice, but because we don’t have to. If it doesn’t need to run offline, then an MVC framework with Ajax will do just fine, and they are extremely productive. Both because errors are server-side, but also because ASP.NET MVC hasn’t really seen radical changes for almost a decade. We also rarely put things in containers, we don’t do automated CI/CD, and we certainly don’t orchestrate things with Kubernetes. Not that there is anything wrong with doing that, but we’re not Netflix; we can publish a new build directly to an IIS server during a slow period and no one will notice the service being gone for 5 seconds.

Sometimes I think it pays to do a little JOMO instead of all that FOMO, especially if you don’t want to burn out. Change is a constant in our business, but you need to make absolutely certain that adopting those changes makes sense, and it has to make sense in the real world, not on Hacker News. Our business doesn’t care one bit about the front-end tech stack; as long as it performs like they want, it could be written in ASP WebForms.


What do JOMO and FOMO stand for? I think of FOMO as "Fear of Missing Out", the controversial motivator driving retention for free-to-play games like Fortnite, but that doesn't seem to be correct in this context.


"Joy of Missing Out"


It’s completely anecdotal but I do hire people and I work as a part time examiner for CS students, so I do come into contact with what motivates people. Unfortunately I see a lot of fear driving people to push forward, picking up technologies because they think they have to, not because they actually need them.

I mean, if I do a quick search on the Danish job agents, no one is looking for someone who knows GraphQL. Yet I see people desperately trying to learn it in their free time because they are afraid of becoming obsolete.

I wonder where this fear comes from. It’s a fairly recent thing in my experience, and it’s not like it’s a buyers market. We still desperately need more developers.


It probably comes from the internet and how the web development media is heavily dominated by FAANG companies and trendy startups.

If your reference point is these companies and their developers, then it can appear like you're getting more obsolete by the day, since they're always talking about the hottest new tech and you're seemingly not using it.

It's turned web development and software engineering into a game of keeping up with the joneses. And I suspect just like with social media as a whole, it's made a lot of people feel like they're inferior to the rest of the population/their peer group.

But I've got an article about that planned at some point, about how the Silicon Valley-focused tech media and internet have given people the wrong impression about what most such work is like in most industries.


I would guess "Joy of Missing Out"


They’re productive because they’re stateless, isolated and have a super short edit-compile-run cycle. Depending on the architecture of your React/Angular app, things can become too big, fragile and slow.


But you don't work for a business, you work for the government. I think that makes a big difference in terms of what's expected and also what the consequences of non delivery are.


Having worked in both the public and private sectors, I think it’s actually the exact opposite. If we miss a deadline, or if a system goes down when it really wasn’t supposed to, the consequences can be as severe as someone dying because life-critical data wasn’t available. The only sector which is as focused on security, on-time delivery and risk management as we are is banking, which is frankly rather similar to the public sector, except their business focus is much narrower and they generally have a lot more resources.

This is actually part of the reason we try to think about what technology we use and how we use it. We have too many critical systems and too few people, so we need to be extremely efficient in our choices. If something is cool, but adds complexity, then it’s just not worth it.

Because we are the public sector we do a lot of benchmarking, though, and we actually have ASP WebForms legacy apps that are rated as highly as your favourite mobile app by our citizens. Which was my point: your users don’t actually care about your tech stack when it delivers.


Where does the non-delivery part come from, since he said that they are productive?


Yes, I know he says that, but in general if there are no real incentives to achieve delivery and no consequences for non-delivery, as is usually the case for public servants, then the results tend to be not very good.


Who benefits from anti-vaccination or flat-earth?

Conspiracy theories are probably a lucrative industry if you get enough of a following.


Just like any other negative influence: it's influencers who get paid to spread anti-vaccination beliefs, even if they don't believe them.

See: far right/left, and other unusual controversial topics.

