In general, these models are approximations of an ideal, or some kind of statistical summary across systems that are too complex to completely model.
There's a wide gap between "this algorithm is crap" and "this algorithm stops working if we publish the whole thing publicly and people can explicitly tune data to make number-go-up." That's like claiming a machine learning algorithm is crap because it's possible to build bespoke counter-inputs that maximize badness in the output; that's possible with most ML algorithms, but when someone's not trying to break the machine on purpose, those algorithms often work great.
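A minimal sketch of that "bespoke counter-input" idea, using a toy numpy logistic regression (everything here is illustrative, not from any real system): push each input feature in the direction that most hurts the model's score, fast-gradient-sign style.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "trained" model: fixed weights for a two-class logistic regression.
    w = rng.normal(size=10)
    b = 0.1

    def predict_proba(x):
        """Probability the model assigns to class 1."""
        return 1.0 / (1.0 + np.exp(-(x @ w + b)))

    # An ordinary input, scored honestly.
    x = rng.normal(size=10)
    print("clean score:", predict_proba(x))

    # Explicitly tuning the input to break the model: move every feature in the
    # direction that most increases the loss for the true label (here, label 1),
    # i.e. the sign of the loss gradient w.r.t. the input.
    eps = 0.5                              # perturbation budget (illustrative)
    grad = -w * (1.0 - predict_proba(x))   # d(-log p)/dx for label 1
    x_adv = x + eps * np.sign(grad)
    print("perturbed score:", predict_proba(x_adv))

The same model that scores ordinary inputs sensibly gets pushed around by inputs crafted against it; that's the gap between "works in practice" and "survives an adversary who knows the scoring function."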
>but when someone's not trying to break the machine on purpose, those algorithms often work great.
To be fair, that's exactly what's wrong here. Creative tools for professionals can assume good faith; no one is trying to break an IDE unless their job is QA for said IDE.
Tools for advertising almost always have bad-faith actors, and those actors are often the dominant presence. The problem becomes untenable when the tool creator has a symbiotic relationship with the bad actor.