LLMs can propose candidates in chemistry, medicine, coding, and essentially any domain where there is enough data to train a model but searching for solutions is exponentially hard. They can serve as the core module of an agent that learns by RL or evolutionary methods, so the quality depends mainly on how much data the agent can generate. We know that when an agent can explore properly, as AlphaGo Zero does, it can reach superhuman levels: it builds its own specialised dataset as it runs.
For example: Evolution through Large Models - https://arxiv.org/abs/2206.08896
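The loop the paper describes can be sketched roughly as follows: the LLM acts as the mutation operator in an evolutionary search, proposing variants of candidates that are then scored and selected. This is a minimal toy sketch, not the paper's implementation; `llm_propose` is a hypothetical stand-in for a real model call, replaced here with a random character mutation so the code runs on its own, and the string-matching fitness is purely illustrative.

```python
import random

def llm_propose(candidate: str) -> str:
    # Placeholder for an LLM call: a real system would prompt a model
    # to produce a mutated candidate. Here: one random character change.
    chars = "abcdefghijklmnopqrstuvwxyz "
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(chars) + candidate[i + 1:]

def fitness(candidate: str, target: str = "hello world") -> int:
    # Toy fitness: number of characters matching the target string.
    return sum(a == b for a, b in zip(candidate, target))

def evolve(generations: int = 500, pop_size: int = 20) -> str:
    target = "hello world"
    chars = "abcdefghijklmnopqrstuvwxyz "
    population = ["".join(random.choice(chars) for _ in range(len(target)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, let the "LLM" propose mutated offspring.
        # Each round of selection adds to the agent's own dataset.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        offspring = [llm_propose(p) for p in parents]
        population = parents + offspring
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the real setting the fitness function would be a domain evaluator (unit tests, a simulator, a property predictor), and the selected candidates would also be fed back to fine-tune the proposal model, which is what lets quality scale with generated data.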