GOFAI largely consists of inference and reasoning techniques, some of which cease to work well when you scale them up (computational complexity) or when uncertainty is involved.
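As a concrete illustration of what those techniques look like, here is a minimal sketch of classic forward chaining over Horn rules (the facts, rules, and helper names are made up for this example, not taken from any particular system). The naive grounding step tries every combination of facts against every rule body, which is exactly the kind of combinatorial blow-up that bites at scale.

```python
from itertools import product

# Hypothetical toy knowledge base: facts as tuples, rules as (head, body),
# with variables written as strings starting with "?".
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
rules = [
    (("ancestor", "?x", "?y"), [("parent", "?x", "?y")]),
    (("ancestor", "?x", "?z"), [("parent", "?x", "?y"), ("ancestor", "?y", "?z")]),
]

def substitute(term, binding):
    """Replace variables in a term using the current binding."""
    return tuple(binding.get(t, t) for t in term)

def forward_chain(facts, rules):
    """Apply rules until a fixed point. Each pass matches every combination of
    existing facts against every rule body -- exponential in body length."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for combo in product(facts, repeat=len(body)):
                binding, ok = {}, True
                for lit, fact in zip(body, combo):
                    if lit[0] != fact[0] or len(lit) != len(fact):
                        ok = False
                        break
                    for t, v in zip(lit[1:], fact[1:]):
                        if t.startswith("?"):
                            if binding.setdefault(t, v) != v:
                                ok = False
                                break
                        elif t != v:
                            ok = False
                            break
                    if not ok:
                        break
                if ok:
                    new_fact = substitute(head, binding)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

print(forward_chain(facts, rules))  # derives ancestor(alice, carol), etc.
```

This runs fine on a handful of facts, but the `product(facts, repeat=len(body))` loop grows combinatorially with the knowledge base, which is the scaling problem in a nutshell.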
There have been efforts to push reasoning to larger scales (description logics) and to handle uncertainty (ILP, Markov Logic), but these have been de-emphasized or forgotten in recent years, because end-to-end deep learning gets you a lot of mileage: hidden state within the network essentially deals with the uncertainty on its own, and the additional compute overhead plus rule-engineering effort no longer seems warranted.
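To make the Markov Logic side concrete, here is a toy sketch of the core idea: soften hard logical rules by attaching weights, then score a possible world by the exponentiated weighted count of satisfied groundings. The predicates, weights, and people below are invented for illustration; a real Markov Logic system also has to normalize over all possible worlds, which is where the compute overhead comes in.

```python
import math

people = ["anna", "bob"]

def smokes(world, x):
    return ("smokes", x) in world

def cancer(world, x):
    return ("cancer", x) in world

# (weight, formula over one person): higher weight = stronger, but still soft, constraint
weighted_formulas = [
    (1.5, lambda w, x: (not smokes(w, x)) or cancer(w, x)),  # smokes(x) => cancer(x)
    (0.8, lambda w, x: not smokes(w, x)),                    # most people don't smoke
]

def unnormalized_prob(world):
    """exp of the weighted number of satisfied groundings (one grounding per person)."""
    total = 0.0
    for weight, formula in weighted_formulas:
        total += weight * sum(formula(world, x) for x in people)
    return math.exp(total)

# A world where anna's smoking comes with cancer satisfies more weighted groundings
# than one where it doesn't, so it gets a higher (unnormalized) score.
w1 = {("smokes", "anna"), ("cancer", "anna")}
w2 = {("smokes", "anna")}
print(unnormalized_prob(w1), unnormalized_prob(w2))
```

The contrast with end-to-end deep learning is visible even in this toy: someone has to write the formulas and tune the weights by hand (or learn them with expensive inference in the loop), whereas a trained network absorbs that uncertainty handling into its hidden state without explicit rules.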