
Is there a way to have current AI tools maintain consistency when generating multiple images of a specific creature or object? For example, if there are multiple images of 'Dr. Venom', they need to look similar, and the same goes for multiple images of the same spaceship.



Yes, right now you have 3 options:

- Dreambooth: ~15-20 minutes of finetuning, but it generally produces high-quality and diverse outputs if trained properly.

- Textual inversion: you essentially find a new "word" in the embedding space that describes the object/person. This can produce good results, but it's generally less effective than Dreambooth.

- LoRA finetuning[1]: similar to Dreambooth, but you're finetuning low-rank weight deltas to achieve the look. It's faster than Dreambooth and the output file is much smaller. (A short usage sketch follows the link below.)

1: https://github.com/cloneofsimo/lora
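For reference, here's a minimal sketch of what using one of these looks like at inference time with the Hugging Face diffusers library (not the only way to do it); the file paths and the "<dr-venom>" trigger token are placeholders for whatever you actually train:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Textual inversion: load a learned embedding as a new "word".
    pipe.load_textual_inversion("./dr_venom_embedding.bin", token="<dr-venom>")

    # LoRA: load small weight deltas on top of the base model instead.
    # pipe.load_lora_weights("./dr_venom_lora")

    image = pipe("a portrait of <dr-venom> on the bridge of a spaceship").images[0]
    image.save("dr_venom.png")

Dreambooth produces a fully finetuned model rather than a small add-on, so there you'd just point from_pretrained at the finetuned checkpoint directory.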


> Is there a way to have current AI tools maintain consistency when generating multiple images of a specific creature or object?

...but none of these can maintain consistency.

All they can do is generate the same 'concept'. For example, 'pictures of Batman' will always produce images that are recognizably Batman.

However, good luck generating comic panels; there is nothing (that I'm aware of) that will maintain consistency across images; every panel will have a subtly different Batman, with a different background, different props, different lighting, etc.

The image-to-image (and depth-to-image) pipelines will let you generate structurally consistent outputs (e.g. here is a bed, here is a building), but the results will still differ completely in their details and lack consistency.
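To make that concrete, here's a rough img2img sketch with the diffusers library (file names and prompt are placeholders): the init image pins the overall composition, but the details still drift from run to run.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    init = Image.open("previous_panel.png").convert("RGB").resize((512, 512))

    # Lower strength keeps more of the init image's structure; higher strength
    # gives the model more freedom (and more detail drift). guidance_scale is
    # the usual CFG knob for prompt adherence.
    out = pipe(
        prompt="batman standing on a rooftop at night, comic style",
        image=init,
        strength=0.55,
        guidance_scale=7.5,
    ).images[0]
    out.save("next_panel.png")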

This is why all animations made with this tech have that 'hand-drawn jitter' to them: it's basically not possible (currently) to say, "an image of Batman in a new pose, but otherwise like this previous frame".

So... to the OP's question:

Recognizable outputs? Yes sure, you've already been able to generate 'a picture of a dog'.

New outputs? Yeah! You can now train it for something like 'a picture of Renata Glasc the Chem-Baroness'.

Consistency across outputs? No, not really. Not at all.


From my experience playing around with Dreambooth over the last few weeks, generating images of a specific person or pet (not just a generic concept) works surprisingly well. But you have to feed it enough pictures, label the images properly, use a small learning rate, use prior-preservation loss, make sure not to overfit, etc.
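For what it's worth, the prior-preservation part is conceptually simple: the training loss adds a second reconstruction term over "class" images generated by the original model, so the model learns your subject without forgetting the generic class. A rough sketch (not a full training loop; the function name and default weight are just illustrative):

    import torch
    import torch.nn.functional as F

    def dreambooth_loss(noise_pred_instance: torch.Tensor,
                        noise_instance: torch.Tensor,
                        noise_pred_prior: torch.Tensor,
                        noise_prior: torch.Tensor,
                        prior_loss_weight: float = 1.0) -> torch.Tensor:
        # Instance term: fit the model to your specific subject's images.
        instance_loss = F.mse_loss(noise_pred_instance, noise_instance)
        # Prior term: keep predictions on generated "class" images close to
        # the original model's behavior, which limits overfitting/drift.
        prior_loss = F.mse_loss(noise_pred_prior, noise_prior)
        return instance_loss + prior_loss_weight * prior_loss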

For the animation use case, where you need frame-to-frame consistency, the new diffusion-based video models show that it's possible [1][2]. These are not open source yet as far as I know, but it's highly likely that we'll get them within a few months.

1: https://arxiv.org/pdf/2212.11565.pdf

2: https://imagen.research.google/video/paper.pdf


> generating images of a specific person or pet (not just a generic concept)

There's no difference between those things. It's a specific label that directs the diffusion model; it doesn't matter whether your label is 'dog' or 'betty' (i.e. my own dog). Anyway...

> it's highly likely that we'll get them within a few months.

Yep! It's certainly not a fundamental limitation of the technology; but the OP asked:

> Is there a way to have current AI tools ...

...and right now you can't do it with the current AI tools that are publicly available.


I think you can get the effect you're looking for by using the previous panel as an init image and only repainting the character.
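Something along these lines, sketched with diffusers' inpainting pipeline (paths, mask, and prompt are placeholders): everything outside the mask is kept from the previous panel, and only the masked character region is regenerated.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    panel = Image.open("previous_panel.png").convert("RGB").resize((512, 512))
    # White pixels in the mask mark the character region to repaint.
    mask = Image.open("character_mask.png").convert("L").resize((512, 512))

    out = pipe(
        prompt="the same character, now waving, comic style",
        image=panel,
        mask_image=mask,
    ).images[0]
    out.save("next_panel.png")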

As for consistency of character details, I think that will depend on how many images you use to train Dreambooth etc., and how varied those images are.[1]

[1]: https://www.youtube.com/watch?v=W4Mcuh38wyM


Consistency isn't really that difficult with more or less static images. I haven't tried "same outfit, many poses" yet, because I don't really know what poses are called, and there's no guarantee that the humans who trained/tagged the input images knew either. I've been messing around with "batch img2img" and I sort of like the jank; I'm wondering whether a more aggressive CLIP would help at all, but I think it boils down to there not being enough detailed tagging to make this worth messing with too much.

What I mean is: assuming this technology moves forward, GPUs keep increasing VRAM as they have, and enough people are interested in doing extremely detailed tagging of small shapes, the sorts of issues you're talking about will go away over time. Alternatively, someone could develop a way to scan hundreds of outputs and collate them by similarity, letting a human pick batches that are similar enough for something like short comics. As it stands, when I do txt2img or img2img I'll run off 20-40 images. I'm also wondering how much could be done with seed fiddling: when I first got "Anything v3.0", every image was some person sitting at a dining table near a window with food in front of them, dozens in a row. I have no idea how it happened, but there was enough global cohesion between images that I thought it was trained on just that for the first hour or so.

Each of the images below is a set of 4 (I think generally called a grid in SD), so each image is a set of 4 "2-panel comic strips". They aren't really intended to flow between the grid squares, but you'll notice that the clothing, hairstyles, etc. between strips match, even if they don't match between individual images. My personal favorite, and the one I used for something online, is the top-left set in the first .png: https://i.imgur.com/BWek3YI.png https://i.imgur.com/LHchsj5.png

P.S. if anyone knows what the source art could possibly be, let me know?


This is the next frontier for AI art, as it will let you build a series, graphic novel, or even video with consistent objects. There are techniques like textual inversion that let you associate a label with an object, but they rely on having multiple images of that object already, so they won't work for an image you just generated. To get around that, some people have tried using other tools to generate multiple images of a synthetic object, e.g. Deep Nostalgia, which can animate a static portrait photo.

So in theory you pick one image from the AI image generator, create variants of it with separate image tools, then build a fine-tuned model based on some cherry-picked variants.

I think this will get easier as AI image tools focus more on depth and 3D modelling.

The “aiactors” subreddit has some interesting experiments along these lines.


Check out this video by Corridor Crew: they're able to use Stable Diffusion to consistently transfer the style of an animated film (Spider-Verse) onto real-world shots.

https://youtu.be/QBWVHCYZ_Zs


The concept of “similar” is AI-complete (i.e., only you know what seems acceptably similar to you), so basically, no.

You can force a model to generate nearly the same actual pixels with DreamBooth, which can be interesting for putting people’s faces in a picture, but otherwise I’d call it overfitting.


Is AI-complete an actual complexity class? Genuinely curious, I’ve never heard of it.


I think it's just an informal term for things that seem to require human-level AI.


Ah, don’t care for it in that case. Seems like it’s cashing in on the formality associated with algorithms research.


I think you’re insulting all of philosophy there.

But there is a paper about it: https://www.aaai.org/Papers/Symposia/Spring/2007/SS-07-05/SS...


Two key parameters in the Stable Diffusion webui are denoising strength (how far the output is allowed to drift from the source) and CFG scale (adherence to the prompt). img2img does what it sounds like, and inpainting allows masked modifications to a base image with a great deal of control and variability.

I'd recommend giving it a shot if you have an Nvidia GPU with ≥4GB VRAM.

Edit: There are also textual-inversion training and hypernetworks, but they require a body of source material, keywording, and significantly more time and compute, so I haven't attempted either.


Seems like there is some way. There's a startup[1] that I've been seeing around on Twitter[2] which makes it easy to create in-game assets that are style-consistent. I haven't tried it yet, but it looks promising!

[1] - https://www.scenario.gg/

[2] - https://twitter.com/Beekzor/status/1608862875862589441?s=20


Textual inversion can kind of do this, but I haven't been impressed by examples I've seen. It seems more suited to "Shrek as a lawnmower" than "Shrek reading a book".


Hugging Face has everything you need to get started with Stable Diffusion textual-inversion training here. It's awesome to get it running, but as others have said, it has limitations if you're trying to produce multiple images for a narrative, etc.

https://huggingface.co/docs/diffusers/training/text_inversio...


Midjourney lets you upload a reference image. For a portrait of someone's face, this consistently produces the same person.


I would recommend extracting a depth map from the source material and then generating off the resulting depth map. That will keep the structure the same, so things don't pop in and out. Then use the Dreambooth or textual-inversion suggestions to get the colors etc. right.
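A rough sketch of that depth-conditioned step, using diffusers' StableDiffusionDepth2ImgPipeline (it estimates the depth map from the source image internally; file names and prompt are placeholders):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionDepth2ImgPipeline

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
    ).to("cuda")

    source = Image.open("source_frame.png").convert("RGB")

    # The depth map pins the large-scale structure; strength controls how
    # much the surface detail is allowed to change.
    out = pipe(
        prompt="the same scene, repainted as a moody comic panel",
        image=source,
        strength=0.7,
    ).images[0]
    out.save("restyled_frame.png")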


You can teach the AI a new item/character. https://www.youtube.com/watch?v=W4Mcuh38wyM


There is a way that someone on Reddit recently discovered works great.

In the automatic1111 UI you can alternate between prompts, e.g. "Closeup portrait of (Elon Musk | Jeff Bezos | Bill Gates)". The final image will be a face that looks like all three. See this: https://i.redd.it/8uq52mnausu91.png

Now do the same with two people but invert the gender. The female version of the example I gave won't look like anyone you know, and it will remain consistent.

It kind of works.


You can work with embeddings.


Yeah, look up textual inversion



