
We've been using a blend of our own patterns and the ones you mentioned, but you're 100% spot on.

The reason we have a product today with 100% uptime and reasonable tech debt is that we front-loaded the cost of data: we made sure we normalize (which is non-trivial in maritime), and we clean and consolidate against other sources. We also have our own internal consensus algos that pull from multiple sources where possible and pick the most likely value (a rough sketch of the idea is below).
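
To give a flavour of what I mean by a consensus pick, here's a minimal illustrative sketch, not our actual algorithm: the source names, reliability weights, and tolerance are all made up. It just groups near-identical readings and returns the value backed by the most source reliability.

```python
# Illustrative only: hypothetical sources and weights, not our production code.
from collections import defaultdict

# Hypothetical per-source reliability weights.
SOURCE_WEIGHTS = {"ais": 0.9, "port_feed": 0.7, "manual_entry": 0.4}

def consensus(readings, tolerance=0.01):
    """readings: list of (source, value) tuples for one field, e.g. vessel draft."""
    clusters = defaultdict(float)  # representative value -> accumulated weight
    for source, value in readings:
        weight = SOURCE_WEIGHTS.get(source, 0.1)
        # Attach the reading to an existing cluster if it is within tolerance.
        for rep in clusters:
            if abs(rep - value) <= tolerance:
                clusters[rep] += weight
                break
        else:
            clusters[value] = weight
    # Return the value with the most reliability weight behind it.
    return max(clusters, key=clusters.get)

# Example: three feeds disagree slightly on a vessel's draft in metres.
print(consensus([("ais", 11.2), ("port_feed", 11.21), ("manual_entry", 9.8)]))  # -> 11.2
```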

Hopefully this is a useful anecdotal data point for someone: we've been running a pretty performant system that serves 200,000 ports, 50,000 airports, 50,000 crew, 5,000 vessels and millions of miles of GeoJSON routes in real time on Postgres alone. In my experience a well-organised relational db can outrun most document stores in most applications.

The only caveat I'd add (if there is one) is not to treat normalization as a hard rule. We denormalize some fields when we need to squeeze performance out of a commonly accessed metric (essentially caching, but through the db, which I think makes it denormalization), and it's been quite helpful. Of course you need stronger checks and balances in code to make sure things don't desync (something like the sketch below), but it can really speed up large, common queries.
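
For the "checks and balances" part, here's a minimal sketch of one kind of consistency check; the table and column names are hypothetical, not our schema. It periodically compares a denormalized counter against the normalized source of truth and flags any drift.

```python
# Illustrative only: hypothetical schema (ports.call_count mirrors a count over port_calls).
import psycopg2

CHECK_SQL = """
    SELECT p.id, p.call_count AS cached, COUNT(v.id) AS actual
    FROM ports p
    LEFT JOIN port_calls v ON v.port_id = p.id
    GROUP BY p.id, p.call_count
    HAVING p.call_count <> COUNT(v.id);
"""

def find_desynced_ports(dsn):
    """Return ports whose denormalized call_count no longer matches the real count."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(CHECK_SQL)
        return cur.fetchall()  # [(port_id, cached, actual), ...]

if __name__ == "__main__":
    for port_id, cached, actual in find_desynced_ports("dbname=maritime"):
        print(f"port {port_id}: cached={cached} actual={actual}")
```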

I've linked some technical write-ups on how we use Postgres and PostGIS in [1] and [2] below, if anyone's interested.

[1] - https://hrishioa.github.io/large-geospatial-queries-and-opti...

[2] - https://hrishioa.github.io/subqueries-and-ctes-an-example-of...
