It is more visible with image synthesis. Sure, the style can be switched freely in a few seconds, and fitting compositions from trained concepts is very impressive.
But there is still no real creativity. No emergent concepts, aside from complete accidents that cannot be replicated. The same is true in the other direction, where models like CLIP can produce an interpretation of generated images. It is impressive, but there are still clear limitations. You cannot expect linear growth here; it could be that the current AI approaches are wrong, that we have hit a plateau and need entirely new approaches for significant improvements. What we have now is an insane amount of data and more powerful hardware, so it could be that years of slow, iterative improvement lie ahead while people fine-tune their models.
I think LLMs have the same problem overall, it is just more difficult to notice.