Note that genetic programming (GP) is a specific subset of genetic algorithms that searches for "programs" (hence the name), typically encoded as a tree structure similar to an AST, though other representations exist. In theory you could use GP for almost any situation where you want to synthesize a mathematical expression or piece of code and have a criterion to optimize (i.e. a "fitness" function). In practice, since GP is basically a semi-random search through a huge combinatorial space, it tends to work best on low-dimensional problems, ideally with a fitness function that's cheap to evaluate. Symbolic regression - finding a nonlinear closed-form formula that fits data - is a classic good fit. There have been other neat results over the years too; Hod Lipson, for example, has published some striking robotics work with GP.
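To make that concrete, here's a minimal sketch of tree-based GP for symbolic regression in Python. Every specific here (the operator set, truncation selection, population size, the x^2 + 2x + 1 target) is an illustrative assumption rather than a reference implementation, and it deliberately skips things a real GP system needs, like bloat control:

```python
import random

# Toy tree-based GP for symbolic regression. All parameters and design
# choices here are illustrative, not any particular system's defaults.

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    # Leaves are the variable 'x' or a constant; internal nodes are ops.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, samples):
    # Cheap-to-evaluate fitness: mean squared error on the samples.
    return sum((evaluate(tree, x) - y) ** 2 for x, y in samples) / len(samples)

def subtrees(tree, path=()):
    # Enumerate (path, node) pairs so we can pick random crossover points.
    yield path, tree
    if isinstance(tree, tuple):
        yield from subtrees(tree[1], path + (1,))
        yield from subtrees(tree[2], path + (2,))

def replace(tree, path, new):
    # Return a copy of `tree` with the subtree at `path` swapped for `new`.
    if not path:
        return new
    op, left, right = tree
    if path[0] == 1:
        return (op, replace(left, path[1:], new), right)
    return (op, left, replace(right, path[1:], new))

def crossover(a, b):
    # Subtree crossover: graft a random subtree of `b` into a random spot
    # in `a`. No depth cap, so trees can bloat; real systems limit this.
    path, _ = random.choice(list(subtrees(a)))
    _, donor = random.choice(list(subtrees(b)))
    return replace(a, path, donor)

def mutate(tree):
    # Point mutation: replace a random subtree with a fresh random one.
    path, _ = random.choice(list(subtrees(tree)))
    return replace(tree, path, random_tree(depth=2))

def evolve(samples, pop_size=200, generations=40):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, samples))
        parents = pop[:pop_size // 2]  # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return min(pop, key=lambda t: fitness(t, samples))

if __name__ == '__main__':
    # Hidden target x^2 + 2x + 1; the GP only ever sees these samples.
    samples = [(x / 4.0, (x / 4.0) ** 2 + 2 * (x / 4.0) + 1)
               for x in range(-8, 9)]
    best = evolve(samples)
    print('best tree:', best)
    print('mse:', fitness(best, samples))
```

The point of the toy is how blunt the search is: there's no gradient, no understanding of the expressions, just random variation plus selection pressure from a fitness function that has to be evaluated thousands of times. That's why a cheap fitness function and a small search space matter so much.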
Until a few years ago, the popular deep learning methods like CNNs weren't great at that kind of program synthesis, but LLMs definitely changed that: it's fair to say that, by drawing on huge amounts of training data, LLMs are now much better at most practical programming tasks. They're not necessarily a complete replacement, though; there's still room for hybrid approaches, e.g. https://arxiv.org/abs/2401.07102 or https://www.nature.com/articles/s41586-023-06924-6.