
The team I joined uses feature flags like this a lot.

> When are those conditionals deleted?

For us, almost never. A few weeks after a flag becomes True everywhere, we simply stop looking at what would happen if it were False. This does risk confounding between multiple flags, but thankfully our flags usually don't collide; they're all fairly specific changes.

> What strategy do you use to decide when to permanently enable a feature and simplify the codebase?

When I'm refactoring/cleaning-up other stuff.

The really big benefits I see in feature flags are:

- You can merge into master faster. Often my feature has added some utility function, refactored something, or added some CSS that the other engineers could (and should) use, so it's great to have that code rejoined sooner.

- Our tester can test on prod, for the few situations where our QA server is not an exact enough replica of prod.

- It makes it really easy to show marketing/business/the board new features that are _almost_ ready but whose bugs weren't quite fixed before the board meeting / sprint end / etc.
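The merge-early benefit boils down to gating unfinished code paths behind a conditional. A minimal sketch (the flag name and render functions are hypothetical, and a real system would read flags from a config service rather than a dict):

```python
# Hypothetical in-process flag store; in practice this would be backed by
# a config service or database.
FLAGS = {"new_pricing_page": False}  # off in prod, flipped on for testers


def render_new_pricing():
    # Half-finished feature: safe to merge because it's unreachable
    # while the flag is False.
    return "new"


def render_old_pricing():
    return "old"


def render_pricing_page():
    # The flag check is the only place the two code paths meet.
    if FLAGS["new_pricing_page"]:
        return render_new_pricing()
    return render_old_pricing()
```

A tester (or the board demo) just flips the flag for their session and sees the new path while everyone else stays on the old one.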




>> When are those conditionals deleted?

> For us, almost never.

That's dangerous. I remember an incident in which a team accidentally emptied their production experiment config (which controls which features are on: during ramp-up as a whitelist of enabled users and/or a percentage in (0%, 100%); most experiments in the file had long since reached 100%). Suddenly all their experiments reverted to 0% and they were serving live traffic in a totally untested state. Some of these experiments had been on for literally years and now suddenly were off. The combination of experiments had never been tested. There had also been many changes to the code since most of these had last been disabled. As you might expect, this didn't go well.
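The failure mode is easy to see if you sketch how that kind of config is typically evaluated: a missing entry defaults to off, so emptying the file silently reverts every experiment to 0%. This is an illustrative sketch, not the actual system from the incident; the experiment names and config shape are made up:

```python
import hashlib

# Hypothetical experiment config: each entry is a whitelist of enabled
# users and/or a rollout percentage, mirroring the scheme described above.
EXPERIMENT_CONFIG = {
    "new_checkout": {"percent": 100},              # ramped to 100% long ago
    "beta_search": {"whitelist": {"alice", "bob"}},
    "dark_mode": {"percent": 25},
}


def is_enabled(experiment: str, user_id: str) -> bool:
    # An experiment missing from the config defaults to off -- which is
    # exactly why an accidentally emptied config reverts everything to 0%.
    cfg = EXPERIMENT_CONFIG.get(experiment)
    if cfg is None:
        return False
    if user_id in cfg.get("whitelist", set()):
        return True
    # Stable per-user bucketing: hash (experiment, user) into 0..99 so a
    # given user gets a consistent answer across requests.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < cfg.get("percent", 0)
```

With this default-off behavior, `EXPERIMENT_CONFIG.clear()` instantly puts all traffic on code paths that may not have run together in years.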

(That incident could also be part of a long and sad cautionary tale about the dangers of global configuration files, btw.)

The best practice I've seen is for the configuration to specify an end date for each experiment. If the experiment is still in the config on that date, a ticket is automatically filed against the defined "owner" of the experiment. The ticket reminds the owner to finish the experiment, remove the conditionals from the code, do a release, wait until they're confident there will be no roll-back to prior to that release, and then remove the experiment from the config.
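The core of that practice is a periodic sweep over the config for overdue entries. A minimal sketch, assuming a hypothetical config shape with `owner` and `end_date` fields (the actual ticket-filing call is stubbed out as a returned list):

```python
from datetime import date

# Hypothetical config: every experiment carries an owner and an end date.
EXPERIMENTS = {
    "new_checkout": {"owner": "alice", "end_date": date(2024, 1, 31)},
    "dark_mode": {"owner": "bob", "end_date": date(2030, 6, 1)},
}


def expired_experiments(today: date):
    """Return (name, owner) pairs for experiments still configured past
    their end date -- the entries a daily job would file tickets for."""
    return [
        (name, cfg["owner"])
        for name, cfg in EXPERIMENTS.items()
        if today > cfg["end_date"]
    ]
```

A daily cron job would call this and open one cleanup ticket per pair, nagging the owner until the conditional is gone from both code and config.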





