
Any favorite strategies for achieving this in practice, e.g. across site infrastructure migrations (changing CMS, static site generators, etc.)?

Personally, about the only things that have worked for me are UUID/SHA/random-ID links (awful for humans, but relatively easy to migrate along with a database) or hand-maintaining a list of every page hosted and hand-checking it whenever something changes. Neither is a Good Solution™ imo: one is human-unfriendly, and the other is impossible to scale, has a high failure rate, and rarely survives a handoff between maintainers.




At the personal level: for every markdown article/README I write, I pipe it through ArchiveBox / Archive.org and put (mirror) links next to the originals in case they go down.
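Roughly, that habit can be sketched like this (assuming the Wayback Machine's public "Save Page Now" endpoint and the requests library; the link-extraction regex and mirror-link format here are illustrative, not my exact pipeline):

    # Sketch: extract markdown links, ask the Wayback Machine to capture
    # each one, and print a "(mirror)" link to place next to the original.
    import re
    import sys
    import requests

    LINK_RE = re.compile(r'\[([^\]]+)\]\((https?://[^)\s]+)\)')  # [text](http://...)

    def archive_links(markdown_path):
        text = open(markdown_path, encoding="utf-8").read()
        for title, url in LINK_RE.findall(text):
            # "Save Page Now": a GET to /web/save/<url> triggers a fresh capture.
            requests.get(f"https://web.archive.org/web/save/{url}", timeout=60)
            # /web/<url> redirects to the newest snapshot, which works as a mirror link.
            print(f"[{title}]({url}) ([mirror](https://web.archive.org/web/{url}))")

    if __name__ == "__main__":
        archive_links(sys.argv[1])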

At the organizational level: my company keeps a dedicated “old urls” section in our urls.py routing files where we redirect old URLs to their new locations. On critical projects we also append every URL we have ever served to a tracked file and run a unit test in CI that checks each one still resolves or redirects on new deployments. Any 404 on a legacy URL is treated as a release-blocking bug.
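In a Django project that section can be as simple as a list of RedirectView entries; a rough sketch (the old and new paths here are made up for illustration):

    # "old urls" section: every URL that was ever public gets a permanent
    # redirect to wherever the content lives now.
    from django.urls import path
    from django.views.generic import RedirectView

    legacy_urlpatterns = [
        path("blog/2019/launch.html",
             RedirectView.as_view(url="/posts/launch/", permanent=True)),
        path("docs/v1/getting-started/",
             RedirectView.as_view(url="/docs/getting-started/", permanent=True)),
    ]

    urlpatterns = [
        # ... current routes ...
    ] + legacy_urlpatterns

The CI check is roughly this shape (a sketch assuming pytest and requests; the tracked filename and staging base URL are placeholders, not our actual setup):

    # Replay every URL we have ever published against the new deployment;
    # redirects are fine, 404s block the release.
    import pathlib
    import pytest
    import requests

    BASE_URL = "https://staging.example.com"  # placeholder deployment URL
    KNOWN_URLS = pathlib.Path("known_urls.txt").read_text().splitlines()

    @pytest.mark.parametrize("path", KNOWN_URLS)
    def test_legacy_url_still_resolves(path):
        resp = requests.get(BASE_URL + path, allow_redirects=True, timeout=30)
        assert resp.status_code == 200, f"{path} broke ({resp.status_code})"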



