That’s only the case if you spend all day rerunning deployments. If your work is more often transitioning the cluster config from A -> B, the distinction blurs: you go from maybe a 10:1 ratio between the different classes of state to something like 3:2, at which point it feels like splitting hairs.
Especially if the locals vary between prod and pre-prod, and worse still if dev sandboxes end up with per-user instances. For us that was mercifully only needed for people working on the TF scripts, so we could run our tests locally.
We have multiple separate environments per application. For environment specific inputs we use variables.
The distinction is very clear on our team: locals are used as constants (like an application name), variables are for more dynamic user/environment inputs, and data is for fetching dynamic information from other resources.
Zero problems. If a local becomes more environment-specific, a quick refactor fixes that. You can also have locals that derive from variable or data values if necessary.
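The split described above might look like this in practice (the names here are illustrative, not from the thread):

```hcl
# locals: constants that don't vary by environment
locals {
  app_name = "billing-api"
}

# variables: per-environment inputs, supplied via tfvars files or the CLI
variable "environment" {
  type = string
}

variable "instance_count" {
  type    = number
  default = 2
}

# data: dynamic lookups against resources that already exist
data "aws_vpc" "main" {
  tags = {
    Name = "${local.app_name}-${var.environment}"
  }
}
```

A local that later needs to vary per environment just moves from the `locals` block to a `variable` block with a default, which is the quick refactor mentioned.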
One big win we also had is that we stopped using modules except for one big main module. We noticed in previous projects that as soon as we introduced modules, everything became a big problem. Modules that were version pinned still required a lot of maintenance to upgrade; modules that weren't version pinned caused more destruction than we planned; module outputs and inputs caused a lot of cycle problems, and so on. Modules always seem too deep or too shallow.
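For context, the version pinning being described is the `version` constraint on a module block; every pinned module like this needs a manual bump and a re-plan on upgrade (the registry module shown is just a common example, not necessarily what this team used):

```hcl
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pinned: upgrading means editing this line and re-planning

  name = "billing-api-vpc"
  cidr = "10.0.0.0/16"
}
```

Leaving `version` off avoids the maintenance churn but means a routine `terraform init -upgrade` can silently pull in breaking changes, which is the "more destruction than we planned" trade-off.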