The best way to discuss any speed thing is with anecdotes, right? /sarcasm
I had a system that required parsing large JSON chunks. The system pulled the JSON from an API, pushed the data into a json-type column, then sorted the data into normal form.
I originally tried using straight Python to pull and process the data, but decided that I ought to keep the original data for record keeping; plus, testing was a lot faster without constantly calling the API.
When all was said and done, the whole operation took about a minute to complete. I then decided to try a trigger, which brought the entire process, from the API call to printing "done," down to less than a second.
In this case, the trigger was significantly faster.
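For what it's worth, the shape of that setup was roughly the following. This is a minimal sketch, assuming psycopg2 and Postgres 11+ (for the EXECUTE FUNCTION syntax); the table and column names are made up for illustration, not the real schema.

    import psycopg2
    from psycopg2.extras import Json

    # One-time setup: a staging table holding the raw JSON, a normalized
    # table, and a trigger that explodes each inserted payload into rows.
    SETUP_SQL = """
    CREATE TABLE IF NOT EXISTS raw_api_data (
        id      serial PRIMARY KEY,
        payload jsonb NOT NULL
    );

    CREATE TABLE IF NOT EXISTS items (
        id     serial PRIMARY KEY,
        raw_id integer REFERENCES raw_api_data (id),
        name   text,
        value  numeric
    );

    CREATE OR REPLACE FUNCTION normalize_payload() RETURNS trigger AS $$
    BEGIN
        INSERT INTO items (raw_id, name, value)
        SELECT NEW.id, elem ->> 'name', (elem ->> 'value')::numeric
        FROM jsonb_array_elements(NEW.payload) AS elem;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    DROP TRIGGER IF EXISTS normalize_after_insert ON raw_api_data;
    CREATE TRIGGER normalize_after_insert
        AFTER INSERT ON raw_api_data
        FOR EACH ROW EXECUTE FUNCTION normalize_payload();  -- Postgres 11+
    """

    def load_chunk(conn, chunk):
        """Insert one raw JSON chunk; the trigger normalizes it in the same transaction."""
        with conn.cursor() as cur:
            cur.execute("INSERT INTO raw_api_data (payload) VALUES (%s)", [Json(chunk)])

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=example")  # placeholder connection string
        with conn:
            with conn.cursor() as cur:
                cur.execute(SETUP_SQL)
            # In the real system the chunk came from the API; this literal stands in for it.
            load_chunk(conn, [{"name": "a", "value": "1"}, {"name": "b", "value": "2"}])
        conn.close()

With the trigger in place, the Python side only does the API call and a single INSERT; the normalization happens entirely inside the database.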
The danger of this anecdote, and all stories about databases, is that everything has to be qualified with "in this case." Any time you read that triggers, aggregation, CTEs, etc., are fast or slow, consider that the claim is almost always made in a vacuum. There are so many variables that the term "fast" is wholly useless.
I have no real experience using the json column in Postgres (well, sort of, but it was just a plain json type, back before the json column was a real thing).
For me, I try to do as much as possible on the database side, like sorting, which does require good schema and table design. This is part of why folks often criticize MongoDB: one of its big draws was the convenience of “schemaless”.
When Mongo was first introduced, I think a lot of developers, including me, saw it as an excuse to move away from relational databases, so we began dumping in all kinds of shit. But doing fancy stuff on the Mongo side isn't possible without a good design either.
What people probably did was just pull data from multiple collections and do the filtering and “joins” on the application (client) side. I would find myself writing a for loop instead of letting the database do the work. Yikes.
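To make that anti-pattern concrete, here's a hedged sketch with pymongo; the collections (orders, customers) and fields are invented for illustration. The first half is the for-loop "join"; the second pushes the same work to the database with an aggregation and $lookup, which, to be fair, didn't exist in Mongo's early days.

    from pymongo import MongoClient

    client = MongoClient()  # assumes a local MongoDB instance
    db = client.example_db

    # Client-side "join": one extra round trip per order.
    joined = []
    for order in db.orders.find({"status": "open"}):
        customer = db.customers.find_one({"_id": order["customer_id"]})
        joined.append({**order, "customer": customer})

    # Database-side version: one aggregation with $lookup (MongoDB 3.2+).
    joined_db_side = list(db.orders.aggregate([
        {"$match": {"status": "open"}},
        {"$lookup": {
            "from": "customers",
            "localField": "customer_id",
            "foreignField": "_id",
            "as": "customer",
        }},
    ]))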
Of course there are other criticisms of MongoDB, but ultimately developers like myself did not (and probably still don't) have a decent clue how to use databases well. Learning to use databases right is something I really want to be good at.