I've often wanted to write a compiler that would accept a schema definition like protobufs or graphql as input and generate the required SQL to update the database if a schema change was committed. It would also be read by the application layer so its model definitions were kept up to date.
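Roughly, I picture something like this (a toy sketch only: the dict-based schema and the type mapping are stand-ins for a real protobuf/GraphQL parser, and all names are invented):

```python
# Hypothetical sketch: derive CREATE TABLE statements from a declarative
# schema definition. A real tool would parse a .proto or GraphQL SDL file
# instead of this hand-written dict.

TYPE_MAP = {"string": "TEXT", "int64": "BIGINT", "bool": "BOOLEAN"}

schema = {
    "User": {
        "id": "int64",
        "email": "string",
        "is_active": "bool",
    }
}

def to_ddl(schema: dict) -> str:
    statements = []
    for table, fields in schema.items():
        cols = ",\n  ".join(f"{name} {TYPE_MAP[t]}" for name, t in fields.items())
        statements.append(f"CREATE TABLE {table.lower()} (\n  {cols}\n);")
    return "\n\n".join(statements)

print(to_ddl(schema))
```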
As you say, there's a tremendous advantage in using triggers to enforce data invariants, but then the triggers themselves become part of the schema's state, and they can be error-prone depending on their complexity. It's definitely worth using SQL's schema inspection to verify and test them. Perhaps even driven by schema definitions from protobufs or whatever.
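For the inspection/testing part, something along these lines works (a minimal sketch using SQLite so it stays self-contained; on Postgres you'd query information_schema / pg_catalog instead, and the table and trigger names here are made up):

```python
# Illustrative only: verify that an expected trigger exists by inspecting
# the database catalog, then test its behaviour.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL NOT NULL);
    CREATE TRIGGER orders_no_negative_total
    BEFORE INSERT ON orders
    WHEN NEW.total < 0
    BEGIN
        SELECT RAISE(ABORT, 'total must be non-negative');
    END;
""")

# Schema inspection: assert the trigger we rely on is actually present.
row = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'trigger' AND name = ?",
    ("orders_no_negative_total",),
).fetchone()
assert row is not None, "expected trigger is missing"

# And test what it does, not just that it exists.
try:
    conn.execute("INSERT INTO orders (total) VALUES (-1)")
except sqlite3.DatabaseError:
    print("trigger enforced the invariant as expected")
```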
I understand the appeal, and you're not the only person to want it: I've seen people and companies implement db-schema-to-graphql, db-schema-to-models, and models-to-db-schema generators.
Based on personal experience, this is a bad idea though. You do not want a 1:1 correspondence between your database schema, your backend models, and your graphql schema, because the way you organise information in each layer should be different.
Database schema needs to be performant for expected queries. That means de/normalisation decisions; sometimes the same data will be stored in multiple locations.
Backend models need to express the domain, because this is where your business logic is. (There's a reason people bitch about ORMs: when you get to complex enough use cases they're not flexible enough in either direction and you need extra models wrapping THAT.)
Graphql schema is a view of your backend; sometimes several fields will be fulfilled by the same model, sometimes your model should not be reflected in the graphql schema at all (because you do not want to expose that data to the frontend/the world), and sometimes your graphql schema will be full of deprecated fields because client apps have not been updated (see Facebook's policy of never removing anything).
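To make the divergence concrete, here's a rough sketch of the same "user" concept taking three different shapes, one per layer; all names are invented:

```python
# Illustrative sketch: a 1:1 generated mapping would fit none of these well.
from dataclasses import dataclass

# 1. Storage shape: denormalised for query performance -- the order count is
#    duplicated onto the user row so a common listing query avoids a join.
db_row = {"id": 7, "email": "a@example.com", "order_count": 42, "deleted": 0}

# 2. Domain shape: expresses business rules, hides storage details.
@dataclass
class User:
    id: int
    email: str

    def can_place_order(self) -> bool:
        return "@" in self.email  # stand-in for real business logic

# 3. API (GraphQL-ish) shape: only what clients may see, plus a deprecated
#    field kept alive because old app versions still request it.
def to_api(user: User) -> dict:
    return {
        "id": str(user.id),        # exposed as an opaque string, not an int
        "email": user.email,
        "username": user.email,    # deprecated: old clients still read this
    }

user = User(id=db_row["id"], email=db_row["email"])
print(to_api(user))
```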
Everything you say is so true. In a similar vein, all those ready-made REST libraries that help you shoehorn your business models into 1-to-1 mappings with your REST resources/endpoints have no reason to exist. And yet you still see people battling with them.
REST is still just HTTP + a few more exotic verbs + headers + more serializing/encoding options. You can build a small library specific to your project's needs in a matter of 3-5 days. And on top of that, you don't forfeit any possible future optimizations, which you most certainly will by choosing any ready-made library.
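As a rough illustration of how little is actually needed, here's a hand-rolled verb-plus-path dispatcher using nothing but the standard library (the /items resource and its payloads are made up):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory "resource" purely for illustration.
ITEMS = {"1": {"id": "1", "name": "example"}}

class Handler(BaseHTTPRequestHandler):
    def _send_json(self, status, body):
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def do_GET(self):
        parts = [p for p in self.path.split("/") if p]  # e.g. ["items", "1"]
        if len(parts) == 2 and parts[0] == "items" and parts[1] in ITEMS:
            self._send_json(200, ITEMS[parts[1]])
        else:
            self._send_json(404, {"error": "not found"})

    def do_DELETE(self):
        parts = [p for p in self.path.split("/") if p]
        if len(parts) == 2 and parts[0] == "items" and ITEMS.pop(parts[1], None):
            self.send_response(204)   # no body on 204
            self.end_headers()
        else:
            self._send_json(404, {"error": "not found"})

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

From there you add exactly the headers, content negotiation, and serialization your project needs, instead of working around a framework's opinions.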
It's feasible to make such a tool update the database schema so that it matches a changed definition, but I don't think it's feasible for it to migrate the data already in the database.
To do that, you need knowledge about how and why the schema was altered, and schema definitions like protobufs or graphql don't carry that. For example, how would you distinguish between a renamed column (where you need to keep the data) and the deletion of one column plus the addition of a different one?
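Concretely, both sequences below produce a table that matches the new definition, which is all a diff of two schema definitions can check, yet only one of them keeps the data (illustrative SQLite snippet; RENAME COLUMN and DROP COLUMN need a reasonably recent SQLite, 3.35+):

```python
import sqlite3

def fresh_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    return conn

# Interpretation 1: the column was renamed, existing data must be kept.
a = fresh_db()
a.execute("ALTER TABLE users RENAME COLUMN username TO login")
print(a.execute("SELECT login FROM users").fetchall())   # [('alice',)]

# Interpretation 2: one column was dropped and another added, data is gone.
b = fresh_db()
b.execute("ALTER TABLE users DROP COLUMN username")
b.execute("ALTER TABLE users ADD COLUMN login TEXT")
print(b.execute("SELECT login FROM users").fetchall())   # [(None,)]
```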
There's a reason why migrations are the standard level of abstraction: they package schema changes together with scripts that keep the data consistent throughout the whole version history.
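A minimal sketch of that idea, with invented names (real projects would reach for Alembic, Flyway, Rails migrations, etc.): each migration pairs the schema change with the data step that keeps existing rows consistent, and they're applied strictly in order.

```python
import sqlite3

MIGRATIONS = [
    ("0001_create_users",
     "CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT);"),
    ("0002_split_out_profiles",
     """
     CREATE TABLE profiles (user_id INTEGER REFERENCES users(id), bio TEXT);
     -- data step: give every existing user an empty profile
     INSERT INTO profiles (user_id, bio) SELECT id, '' FROM users;
     """),
]

def migrate(conn: sqlite3.Connection) -> None:
    # Record which migrations have already run, apply the rest in order.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_version")}
    for name, sql in MIGRATIONS:
        if name not in applied:
            conn.executescript(sql)
            conn.execute("INSERT INTO schema_version (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)
print([r[0] for r in conn.execute("SELECT name FROM schema_version")])
```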