Hacker News

One of the other things I've been noticing is that diffusion models are starting to get quite good at UI design. They're still only well-tailored for landing pages (since most of the training data comes from portfolio sites like Dribbble), but the output is already at a point where I'd start with a few AI riffs before jumping in myself on design.

Once we're at a point where these outputs can be automatically turned into usable workflows, it's going to be incredible how quickly you can develop your ideas. I'm really excited for it.

Some examples of outputs:

https://image.non.io/cd90cc33-4a6a-41d8-abd2-045d3a272010.we...

https://image.non.io/5a0c3fc7-37f8-4e72-aba9-cd61f3c18517.we...

https://image.non.io/920adf7c-a554-41bd-a29c-77bebed1cdad.we...



Are there any particular models you’re using for this, or are they equally good at this in your opinion?


Flux is infinitely better than the others from what I've found, but I haven't tried Stable Diffusion 3.5 yet as that just launched. The output you're seeing above is a combination of two LoRAs I've trained for this purpose.

The other important pieces are img2img and inpainting flows, which give the model more context for generation.
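For anyone curious what a flow like that looks like in practice, here's a minimal sketch using Hugging Face's diffusers library. The model ID, LoRA path, prompt, and parameters are all illustrative assumptions on my part, not the commenter's actual setup:

```python
# Hypothetical img2img sketch with a custom LoRA via Hugging Face `diffusers`.
# Assumed: FLUX.1-dev as the base model, a locally trained UI-design LoRA,
# and a rough layout image to refine. None of this is the commenter's setup.

def generate_ui_mockup(prompt, init_image_path, lora_path,
                       strength=0.6, guidance_scale=3.5):
    """Refine a rough layout image into a polished UI mockup.

    `strength` controls how far the model departs from the init image:
    lower values preserve more of the original layout.
    """
    # Heavy imports are deferred so the module can be inspected
    # without torch/diffusers installed.
    import torch
    from diffusers import AutoPipelineForImage2Image
    from PIL import Image

    pipe = AutoPipelineForImage2Image.from_pretrained(
        "black-forest-labs/FLUX.1-dev",  # assumed base model
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    # Apply a custom-trained LoRA (e.g. one tuned on UI/landing-page data).
    pipe.load_lora_weights(lora_path)

    init_image = Image.open(init_image_path).convert("RGB")
    result = pipe(
        prompt=prompt,
        image=init_image,
        strength=strength,
        guidance_scale=guidance_scale,
    )
    return result.images[0]


if __name__ == "__main__":
    mockup = generate_ui_mockup(
        prompt="clean SaaS landing page, hero section, pastel palette",
        init_image_path="rough_layout.png",   # hypothetical input sketch
        lora_path="ui-design-lora.safetensors",  # hypothetical LoRA file
    )
    mockup.save("mockup.png")
```

The img2img `strength` parameter is what makes this useful for iterating on a design: you can sketch a rough layout, run it at a low strength to keep the structure, then inpaint individual regions to rework specific components.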



