The previous models were either (1) limited in their capacity to create something that looked very cool, or (2) gigantic models that needed clusters of GPUs and lots of infrastructure to generate a single image.
One major thing that happened recently (about two weeks ago) was the release of a model (with weights) called Stable Diffusion, which runs on consumer-grade hardware and needs only about 8 GB of GPU RAM to generate something that looks cool. This has opened up these models to a lot more people.
Example outputs with prompts, for the curious: https://lexica.art/