I realize that you responded to a specific statement, not necessarily the entire context of the thread. However:
It’s true that saying a person’s credit score is entirely due to their own financial decisions is overly simplistic, although the main factor is that person’s behavior (whether that behavior is their fault or not is a different story). The score can also depend on circumstances specific to the person but not directly related to their own actions (e.g. their credit provider revises credit limits across the board due to external factors, so their credit utilization changes too, without them having used any more or less of it).
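To make that utilization mechanism concrete, here is a minimal sketch in Python; the balances and limits are made-up numbers, purely for illustration.

```python
# Minimal sketch of the utilization arithmetic; all figures are
# hypothetical, not drawn from any real scoring model.

def utilization(balance: float, limit: float) -> float:
    """Credit utilization: the share of the available limit in use."""
    return balance / limit

balance = 2_000           # the cardholder's spending never changes
old_limit = 10_000
new_limit = 5_000         # the provider cuts limits across the board

print(f"before the cut: {utilization(balance, old_limit):.0%}")  # 20%
print(f"after the cut:  {utilization(balance, new_limit):.0%}")  # 40%
```

The person’s behavior is identical in both cases, yet the utilization figure a scoring model sees has doubled.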
In addition, and this is what you’re alluding to, these models are continuously revised. A set of behaviors and circumstances that leads to a higher score in one economic environment may not do the same in another.
Credit scores as implemented in, for instance, the US are not a direct reflection of a person’s moral character, nor are they intended as a reward for good behavior. They’re uncaring algorithms optimized solely for determining how risky it is to lend you money, so that financial institutions can more accurately spread that risk across their customers and maximize their profits. This also enables credit providers to give out more credit overall, based on less biased criteria (not unbiased, because models are never perfect and financial circumstances can be proxies for other attributes).
One can feel however one wants about whether this system is good or not. But it’s definitely different in kind from “social credit” systems like the one China has implemented, which directly takes into account far more non-financial factors and determines far more non-financial outcomes, effectively exerting much more control over many facets of people’s lives.
> although the main factor is that person’s behavior (whether that behavior is their fault or not is a different story).
This is the whole crux of the situation, so burying it in a disclaimer misses the point.
Every lender and background investigator I’ve ever interacted with has treated the credit score as a social credit marker, but sure, your mileage might vary.
> They’re uncaring algorithms optimized solely for determining how risky it is to lend you money, so that financial institutions can more accurately spread that risk across their customers and maximize their profits.
This is a fallacy. Algorithms are “uncaring” in an anthropomorphic sense, yes; they lack a psychological capacity to care. But their designers are very much not, as you admit in the very next sentence.
> But it’s definitely different in kind from “social credit” systems like the one China has implemented, which directly takes into account far more non-financial factors and determines far more non-financial outcomes, effectively exerting much more control over many facets of people’s lives.
We entirely disagree on this point. Probably because we have different definitions of “non-financial factors” and “non-financial outcomes.”
> This is the whole crux of the situation, so burying it in a disclaimer misses the point.
It maybe doesn’t address the point you’re interested in, but it doesn’t miss the point I was making: that the goals and mechanisms revolve around how well a person manages credit. For the credit provider, everything else is secondary or irrelevant, including whether it’s because you’ve made poor decisions or because external factors have screwed you over.
> Every lender and background investigator I’ve ever interacted with has treated the credit score as a social credit marker, but sure, your mileage might vary.
This is probably the crux of why we’re not on the same page, because I don’t understand what this means. I’m genuinely asking: what do you mean when you say that they treated it as a social credit marker? What business did you have with them (or they with you) that didn’t involve whether or not to extend credit? What does the term “social credit marker” mean to you?
> This is a fallacy. Algorithms are “uncaring” in an anthropomorphic sense, yes; they lack a psychological capacity to care. But their designers are very much not, as you admit in the very next sentence.
I don’t see where you explain how it’s a fallacy, and I don’t think it is, but I concede that it was a confusing word choice; I should probably have just omitted the word “uncaring”. My point was, once again, that their sole goal is determining the risk of extending a person credit; whether that would be a nice or moral thing to do doesn’t factor into it.
> We entirely disagree on this point. Probably because we have different definitions of “non-financial factors” and “non-financial outcomes.”
I assume here that you mean that people’s financial status, including their access to credit, also determines a lot of aspects of their lives (correct me if I’m wrong). I don’t think any reasonable person disagrees with that. I do, however, think that you underestimate how constraining it can be when additional variables are factored in to more directly control what you are and aren’t allowed to do, and how.