
The previous models were either (1) limited in their ability to produce something that looked very cool, or (2) gigantic, needing clusters of GPUs and lots of infrastructure to generate a single image.

One major thing that happened recently (about two weeks ago) was the release of an algorithm (with weights) called Stable Diffusion, which runs on consumer-grade hardware and needs about 8GB of GPU RAM to generate something that looks cool. This has opened these models up to a lot more people.
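For a sense of what "runs on consumer hardware" means in practice, here's a minimal sketch using Hugging Face's diffusers library (the model ID and fp16 setting are assumptions; check the docs for your setup):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the released weights (assumes you've accepted the license
    # on Hugging Face); fp16 keeps usage within roughly 8GB of VRAM.
    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Generate a 512x512 image from a text prompt and save it.
    image = pipe("photo of a cat").images[0]
    image.save("cat.png")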

example outputs with prompts for the curious: https://lexica.art/



Is Lexica returning previously computed results, or generating them on the fly? I could only get it to work with very simple queries like "photo of a cat".


It's just a database of submitted works, I think. You can try scrolling down on the landing page to see random prompts and outputs.


It's ~1.5 million entries submitted by users during the beta period on Discord.


A lot of prompts and results aren't included, though. Not sure what the criteria were.



