Thousands of engineers at Facebook (and Google, and probably Twitter, and ...) work on stopping this kind of abuse. There are many very hard problems in this area; it is not remotely easy to "solve" a constantly evolving adversarial landscape.
There is, in all likelihood, a way to build an upvote system that is resistant to factory automation. The problem is that such a system wouldn't have the exact feature set of the current one, which means you have to give someone (probably several someones) bad news whether they want to hear it or not. A rough idea of what that tradeoff looks like is sketched below.
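To make the tradeoff concrete, here's a minimal sketch of one such approach: trust-weighted votes, where accounts without age, history, or a verified device count for a fraction of a vote. Every field, threshold, and weight here is an illustrative assumption, not any platform's real logic. The bad news is baked in: votes no longer count equally, and legitimate new users lose influence.

```python
# A minimal sketch of trust-weighted voting; fields, thresholds, and
# weights are made-up assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class Account:
    age_days: int          # how long the account has existed
    prior_votes: int       # votes cast before today
    verified_device: bool  # e.g. passed a device or phone check


def vote_weight(account: Account) -> float:
    """Return how much this account's upvote counts toward a score.

    Brand-new or unverified accounts still get to vote, but their votes
    carry little weight, which blunts bulk-registered bot farms.
    """
    weight = 1.0
    if account.age_days < 30:
        weight *= 0.1   # new accounts are cheap to create in bulk
    if account.prior_votes < 10:
        weight *= 0.5   # accounts with no history are suspect
    if not account.verified_device:
        weight *= 0.25  # unverified devices are easy to automate
    return weight


def score(votes: list[Account]) -> float:
    """Aggregate weighted votes into a single score."""
    return sum(vote_weight(a) for a in votes)


if __name__ == "__main__":
    organic = [Account(age_days=400, prior_votes=250, verified_device=True)] * 3
    bot_farm = [Account(age_days=2, prior_votes=0, verified_device=False)] * 100
    print(score(organic))   # 3.0
    print(score(bot_farm))  # 1.25 -- a hundred fresh bots barely move the needle
```

The point isn't these particular weights; it's that any scheme like this changes the product. Scores move more slowly, new users feel ignored, and whoever owns "engagement" metrics has to hear that their numbers will drop.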
Delivering that kind of news is not a strong suit for software devs in general, and when the "right thing" runs up against financial concerns, we almost always lose. Even when we're right.
So rather than losing millions by taking away a feature that someone got promoted for shipping, we lose millions by investing multiple man-decades into palliative care.
Given that fake likes are arguably fraudulent and detrimental to the paying customers, I feel these companies should be the ones openly demonstrating their attempts to stop them. Otherwise, I'll default to "they allow it".