The thing about GAs is that they're among the easier tools in Machine Learning to understand; even someone with almost no math background can successfully implement one. Knowing when to use them well, however, requires an understanding of the rest of ML. Essentially, if your optimization problem is convex, or "convex enough" for gradient-based methods to work well (as with the cost functions of linear/logistic regression and SVMs, and in practice even the non-convex losses of neural networks), GAs aren't the best solution. But when you have a really ugly optimization problem on your hands — nondifferentiable, discrete, or with no useful gradient signal — they are a good last resort.
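To show how little machinery a GA actually needs, here is a minimal sketch. It maximizes the toy "OneMax" objective (count of 1-bits in a bitstring) using tournament selection, single-point crossover, and bit-flip mutation; all the parameter values are illustrative choices, not tuned recommendations.

```python
import random

random.seed(0)  # fixed seed so runs are reproducible

GENOME_LEN = 20
POP_SIZE = 50
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax: the number of 1-bits; a toy objective for illustration
    return sum(genome)

def tournament(pop, k=3):
    # Pick the fittest of k randomly sampled individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover between two parent genomes
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with probability MUTATION_RATE
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    # Random initial population of bitstrings
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Build the next generation: select two parents, cross, mutate
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Note that nothing here uses a gradient: the GA only ever calls `fitness` as a black box, which is exactly why it remains usable on problems where gradient descent does not apply.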
There are plenty of applications, but you'll never see GAs everywhere, because they won't beat optimization techniques like gradient descent, hill climbing, or backpropagation in the cases where those techniques work.
So while GAs are very easy to understand, finding an ideal use case for them takes a bit of knowledge. It's much more natural to start from a hard problem and realize a GA might help than to go looking for problems to solve with a GA.