The parts of CS that are the most math-like (which include fundamental algorithms) don't have a replication crisis, but the parts that are the most social-science-like probably do, or would. I would bet large sums of money that much of the literature on questions like "does OOP lead to better code?" or "does IDE syntax highlighting lead to fewer bugs?" would fail to replicate if anyone bothered trying.
The thing is, the general sense I get is that people in CS already have so little confidence in these results that it's not even considered worth the time to try and refute them. Which doesn't exactly speak well of the field!
I worry about ML papers in particular. Models are closely guarded, and they're often impractical to train independently: the training/test sets are proprietary, the computing power required is out of reach, or key details are left out of the paper. There's no way to mathematically prove any of it works, either. It's like social science done on organisms we've designed.
Some measurements are interesting and valuable without being replicable. For example, the number of online devices, or the number of websites running WordPress. Take the same measurement at a later point in time and the results are different. Yet I wouldn't call those fields maths-like.
Research into this stuff is very young and so I think it's fair to be skeptical of the results. I'm hoping we'll eventually come up with more rigorous, reproducible results.