Show HN: Scribble Diffusion – Turn your sketch into a refined image using AI (scribblediffusion.com)
373 points by zsikelianos on Feb 28, 2023 | 89 comments



This looks like it's just a quick front-end to the ControlNet scribble model (perhaps with a customized underlying model instead of base Stable Diffusion), with canned settings, presumably a canned negative prompt, and maybe some common stuff added beyond the user input to the positive prompt. Which is not to be dismissive; it's a very nice demo of what SD + ControlNet scribble can do.

But for people who like it, the ControlNet scribble model (and the other ControlNet models: depth-map based, pose control, edge detection, etc.) [0] are supported by the ControlNet extension [1] to the A1111 Stable Diffusion Web UI [2], and probably by similar extensions for other popular Stable Diffusion UIs. It should work in any current browser, and at least the A1111 UI, with ControlNet models, works on machines with as little as 4GB VRAM. (If you'd rather skip the UI entirely, see the Python sketch after the links.)

[0] home repo: https://huggingface.co/lllyasviel/ControlNet but for WebUI you probably want the ones linked from the readme of the WebUI ControlNet extensions [1]

[1] https://github.com/Mikubill/sd-webui-controlnet (EDIT: even if you aren’t using the A1111 WebUI, this repo has a nice set of examples of what each of the ControlNet models does, so it may be worth checking out.)

[2] https://github.com/AUTOMATIC1111/stable-diffusion-webui
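
And the promised sketch: a minimal (untested) example of driving SD + ControlNet scribble from Python with the Hugging Face diffusers library. The model ids are the standard public checkpoints; the input file name is illustrative:

    # pip install diffusers transformers accelerate torch pillow
    import torch
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

    # A scribble map: white lines on a black background, like the demo produces
    scribble = Image.open("scribble.png").convert("RGB")

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    )
    pipe.enable_model_cpu_offload()  # trades speed for VRAM on low-memory cards

    image = pipe("a goofy owl", image=scribble, num_inference_steps=20).images[0]
    image.save("out.png")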


I recently added support for it on ArtBot [1], and it supports additional options like models, steps, control type, etc., as well as drawing on a canvas [2] and generating input from that. It's pretty wild!

Free and open source [3].

Bonus: it works on Firefox (unless you're using private mode, because Firefox doesn't make IndexedDB available to web apps in private mode and things end up breaking).

[1] https://tinybots.net/artbot/controlnet

[2] https://tinybots.net/artbot/draw

[3] https://github.com/daveschumaker/artbot-for-stable-diffusion


You probably don't need a canned positive prompt. ControlNet has a "guess mode", which in practice just sends the control to the positive side with an empty prompt, while not sending the control to the negative side (also with an empty prompt).

Edit: nvm, this particular demo does require you to type in a positive prompt.


> You probably don’t need a canned positive prompt.

With more playing, I’d say this probably doesn’t have a canned positive prompt, just the user input.


Hey, Scribble Diffusion author here.

Thanks for your kind words and feedback. I love seeing all these links to your scribbles.

I'm an engineer at Replicate, which is a place to run ML models in the cloud. [0] We built Scribble Diffusion as an open-source app [1] to demonstrate how to use Replicate.

This is all built on ControlNet [2], a brilliant technique by Lvmin Zhang and Maneesh Agrawala [3] for conditioning diffusion models on additional sources, in this case human scribbles. It also allows for controlling Stable Diffusion using other inputs like pose estimation, edge detection, and depth maps.

ControlNet has only existed for three weeks, but people are already doing all kinds of cool stuff with it, like an app [4] that lets you pose a stick figure and generate DreamBooth images that match the pose. There are already a bunch of models [5] on Replicate that build on it.

I see a few bits of feedback here about issues with the Scribble Diffusion UI, and I'm tracking them on the GitHub repo. If you want to help out, please feel free to open an issue or pull request.

[0] https://replicate.com

[1] https://github.com/replicate/scribble-diffusion

[2] https://github.com/lllyasviel/ControlNet

[3] https://arxiv.org/abs/2302.05543

[4] https://twitter.com/dannypostmaa/status/1630442372206133248

[5] https://replicate.com/explore?query=controlnet



What a simple, but excellent concept! I kind of expected it to crash after being posted to HN, but surprisingly it is still going strong. What does it cost to host something like that? What does it cost to generate each scribble image?



If you haven’t played with Stable Diffusion based stuff before, note that as well as telling it what (in terms of subject) you want in the prompt, you can also tell it style things. Compare:

“a goofy owl”: https://scribblediffusion.com/scribbles/oymg4kadgvezxppvwkf5...

“a goofy owl, realistic photograph, depth of field, 4k, HDR”: https://scribblediffusion.com/scribbles/va5l24amjnb55g62renz...

“a goofy owl, pointillism”: https://scribblediffusion.com/scribbles/5dfnru4f6zguphjvvdrl...


Looks awesome! I think this is a great mix with prompt engineering.

It seems this does not work on Firefox? I could only draw on about half the canvas and it was pretty buggy. Don't support the Chrome monoculture!


Hey, Scribble Diffusion author here.

Sorry about the trouble. The Firefox incompatibility is the result of a bug in the underlying npm package we're using to render the drawing tool and canvas.

The issue is being tracked here: https://github.com/replicate/scribble-diffusion/issues/17#is...

We may need to wait for a fix to that, or consider swapping out the package we use for scribbling on a canvas.


Maybe put up a banner notifying users about this? It was a pretty bad experience while I struggled to make it work and thought you'd shipped something totally broken, lol.


You can run it locally with a different (much better) WebUI; see https://github.com/Mikubill/sd-webui-controlnet for example.



Agreed. I nearly just closed the tab when it didn't work on Firefox.


wow, yeah, very frustrating experience


There's a solution linked at the bottom of the page: https://github.com/replicate/scribble-diffusion


Is there?


I think he is referring to “fork the repo and fix the bug yourself” as the solution, but given the existence of far more featureful publicly available web UIs for Stable Diffusion + ControlNet, if you just want something that works on Firefox, you don't need to expend that much effort, provided you are willing to host it yourself rather than using something someone already has up on the web.


Really shows how specific you need to be in prompts. Same awful scribble of a bird and a flower:

"hummingbird drinking from tulip" disaster https://scribblediffusion.com/scribbles/n2ekqcs7vnegdcezcdvg...

"hummingbird on left drinking from tulip on right" perfect https://scribblediffusion.com/scribbles/ri5y2kzanzcs7dvxhlgy...


This is so great for mockups where you have a faint idea of what you want.

[0] Treasure map: https://scribblediffusion.com/scribbles/4unwo6iq2jhwlmuqpu6v...

[1] “treasure map in the style of lord of the rings maps” https://scribblediffusion.com/scribbles/t36as45npjapxfekcuuy...


It would be nice to be able to upload a picture/sketch instead of having to sketch something out in the scratchpad. I tend to sketch block diagrams from time to time and hate having to use some tool to draw/drag/align things. Very cool idea though!


Hey there. Scribble Diffusion author here.

If you already have your own images, you can use the Replicate model directly: https://replicate.com/jagilley/controlnet-scribble -- you can upload your image using the Replicate web UI or do it programmatically using the model's HTTP API.
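
For the programmatic route, roughly like this with the Replicate Python client (a hedged, untested sketch: the input names and prompt are assumptions, and replicate.run may want an explicit ":<version>" pin, so check the model page for the exact schema):

    # pip install replicate; set REPLICATE_API_TOKEN in your environment
    import replicate

    output = replicate.run(
        "jagilley/controlnet-scribble",  # may need a ":<version>" suffix
        input={
            "image": open("my_sketch.png", "rb"),  # your own sketch/diagram
            "prompt": "a tidy block diagram, flat vector style",
        },
    )
    print(output)  # typically a list of URLs for the generated image(s)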


How much does an image cost you?

I don't quite understand why some image generators are free and others charge credits, assuming the non-free ones aren't just goldrushing.

Is it just free until oops it gets too popular?


I run a custom Stable Diffusion bot for a small community that has generated many tens of thousands of images. The community wanted to know what it was costing me, so I dug into it a bit and have a fairly "literal" answer to that question (generally). With the RTX 3060 I'm running it on, using a Kill A Watt meter, I very roughly calculated that generating a 512x640 image drew about 170 watts for about 6 seconds on top of the baseline power consumption of the PC when idle. That comes out to a little over 1,000 watt-seconds, or roughly 0.0003 kWh per image. I'll leave it as an exercise for the reader to look up their current cost per kWh and work out what that comes to (sketched below). This is extremely, extremely rough, but it helped me wrap my head around the amount of energy we were using to generate all these images, and roughly how much of my power bill I had to thank my friends for, haha.
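
For concreteness, here's the "exercise for the reader" as a few lines of Python (the electricity rate is just an example figure; substitute your own):

    # Rough per-image energy cost, using the numbers above
    extra_watts = 170        # draw above idle while generating
    seconds_per_image = 6    # time per 512x640 image
    price_per_kwh = 0.15     # example rate in USD; substitute yours

    kwh_per_image = extra_watts * seconds_per_image / 3_600_000  # W*s -> kWh
    print(f"{kwh_per_image:.6f} kWh/image")               # ~0.000283 kWh
    print(f"${kwh_per_image * price_per_kwh:.8f}/image")  # ~$0.00004 at $0.15/kWh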

Of course, running on rented/hosted GPUs, it's a simpler but much more expensive story: basically however much you're paying for GPU instances to run Stable Diffusion, divided by how many images you generate. :)


Uploading my own images was one of the reasons I created my own version of this. It works great with kids' pictures! It's so fun to see my kids smile every time they see the AI version of their drawings!

Examples of uploaded photos: https://imgur.com/a/jeWgRvH
Website: https://crappydrawings.ai


This is great. I presume it's made with Replicate? It looks like it's being updated, as I was getting endless loading indicators a moment ago.

Now I'm trying to decide if this is healthy for kids, who may lose motivation to ever pursue detailed art and drawing because "AI can do it for me from a rough sketch". But at the same time, it may motivate them to make more rough drawings with interesting ideas.


Yes, it does use Replicate (scribble-diffusion). What I find is that kids (at least mine) are motivated to create even more drawings on their own and then see what they look like with AI, which gives them more ideas, so they create even more drawings. So, so far, I'd say it has a positive impact. But I'm also pretty curious how this will play out in the long term.


Good to hear. Hopefully kids learn the value of original and thoughtful concepts that inform not only AI renderings, but the quality of whatever creative work comes after raw ideas, such as script writing.

I can see storyboarding for film making use of this, although having the characters and background remain consistent from scene to scene is something I don't know how to achieve if every submission is a roll of the dice.


Get ready to see many images with memes (Loss, Dickbutt) hidden in them. Might be good or bad, depending on your stance on wasted time.


this is neat! I'll have to try this out.


This is cool. At first I was skeptical that the sketch had nothing to do with anything and it was just feeding the prompt into an AI. But using the example "sun behind mountains"[0] it seemed to work alright. What really blew my mind was when I paired the sketch with a prompt that made no sense at all[1]. Somehow the AI cobbled together a picture that used the prompt, but also has all the same geometry of the sketch.

[0]: https://scribblediffusion.com/scribbles/elun6gwkxrcr7eqy5jao... [1]: https://scribblediffusion.com/scribbles/tpsty6qcxjfrxbxz3n6b...


I love how it synthesized Hide-the-Pain Harold to satisfy the prompt.




Really captures the feeling of a stroll through the Tenderloin


Best ever


The drawing doesn't work for me on Firefox.


I'm surprised; it actually did pretty well with a prompt I've been trying on all the generative AIs: "princess volcano kitty cat"

https://imgur.com/hVfCIwU

I use that prompt because it's something my preschooler draws on a daily basis with no qualms at all, but it can often be hard for generative AI to imagine. So it's getting close to the imaginative abilities of a 3-year-old! Halfway there.


Looks like the ControlNet scribble model, which is super fun to play with. I've posted some examples of what you can get with more detailed sketches: https://twitter.com/P_Galbraith/status/1625842298914471938?c...


I opened the page and didn't change anything in the given example. After multiple tries, I still failed to produce the prompted image:

https://i.imgur.com/08o9zkG.jpg

https://i.imgur.com/KkBtTyd.jpg

Eventually I did get something which looked like a partial success, but with low resolution and not something I’d consider appealing:

https://i.imgur.com/SJWkwBp.jpg

https://i.imgur.com/BpSUD4j.jpg

This is not a dig on the author. I have yet to see a simple prompt give a good result with Stable Diffusion. Are there examples of it?


Literally my first result: https://scribblediffusion.com/scribbles/omrpgn4kfnh3nh3lz62q...

are you generally an unlucky person? :D


I drew three dots and wrote "spinning symbol", and it even animated the result.



https://scribblediffusion.com/scribbles/l2jnivkxlvbt7amhc4oq...

Well, it guessed which character I drew...

https://scribblediffusion.com/scribbles/vaoxhqknfrdxpb2osym2...

Not bad. Even came close with the headband. But it figured her ear was an AirPod. (Maybe a small persocom earpod? Is this the AI's waifu-sona?)


In Firefox there's something wonky going on with the scribble area; the bottom of the image doesn't seem to show up until after you've drawn a lot more at the top of the image.


Yeah sorry about that. Tracking the issue here: https://github.com/replicate/scribble-diffusion/issues/17#is...


Very cool.

Naturally does better with what I imagine are in-distribution[0] doodles/words than out-of-distribution[1], but still very cool (and fun!).

[0] https://scribblediffusion.com/scribbles/h373dd42xbduzerzhlos...

[1] https://scribblediffusion.com/scribbles/23xdfz5mtffwfgib32t7...


This would be so much fun as a feature in Drawful.


I can't wait for a tool that would allow me to draw architecture diagrams, and maybe even process animations to explain how my software works.

https://scribblediffusion.com/scribbles/nca3ivborbebhipju6jg...


That use case is very LLM-unfriendly, because you want a very specific and precise structure in the output, but the LLM knows lots of stuff that is absolutely not what you want at all.


This seems to be broken now: as soon as I've drawn a line, it vanishes, and I can't draw anymore. Using Firefox on Linux.


“squirrel with acetylene torch attacking a tank”

Didn't get the squirrel part quite right, but maybe it's my drawing :-)

https://scribblediffusion.com/scribbles/bxc3jaofkzdh5nyjlff5...


If you expanded the prompt to describe the squirrel in redundantly verbose detail it would probably pick it up just fine. The more synonymous descriptors you can work in, and the more naturally you can phrase it, the better it will work.

But IMHO the interesting thing about ControlNet is being able to use pre-rendered production art as a basis while having SD respect the original proportions/model/etc. For rough sketches I prefer img2img without ControlNet, as it gives the algorithm leeway to fix, reinterpret, or "be inspired by" my input image without being too attached to it (since it's full of imperfections anyway).


Yeah, the results I got out of this immediately told me it was stable diffusion.

I have no idea how this (having to spend hours just to get a prompt that looks semi-decent) can be considered state of the art when something like Midjourney (where any prompt will produce something great) exists.


https://scribblediffusion.com/scribbles/pearx2rswrbzhktoimoe...

I feel like Stable Diffusion is mocking my scribbling skills.


I drew the first thing I saw in front of me: https://scribblediffusion.com/scribbles/mv7bqwabzrchnfkwzdnt...


Smurf vs Predator

(using Firefox, hence the very poor drawing)

https://scribblediffusion.com/scribbles/i3p6uxpzf5cmbeety5nw...





It misread your prompt as melted coffee flavored ice cream.



I would love a wireframing tool that converts crappy sketches (ideally on an iPad with Apple Pencil) into low- and high-fidelity app mockups, and possibly other things like mind maps. Does this exist anywhere?



Can you place more than two objects in the sketch? I'm trying to sketch a house and a tree side by side, but it always draws the tree in front of the house.


I just tried to draw exactly the same thing. I'm no artist, so I thought a house, a tree, the sun, and some clouds were my best bet (like I drew when I was 5; I'm 55 now). :-)


Yes, I drew a gecko on a rock with a palm tree and it was perfect.


I'm entering everything with my mouse, but the results are great, even though my mouse-drawing skills are not: https://twitter.com/_glass/status/1630856704714743809?s=20


I would love to be able to paste in an image, resize it, and do other basic stuff.

I.e., let me stage a mockup, because I can't draw a cat.


This older one does pretty well with cats: https://affinelayer.com/pixsrv/



Wow, that's great, look at the level of detail: https://scribblediffusion.com/scribbles/pnj5qy3libhjvp3ybjaf...


It's buggy on Firefox for me. Can't draw anything.


got some cool images by keeping the owl drawing and changing the prompt to "batman on drugs"


Is it possible to do the opposite?


Yes, the scribble annotator (included with the A1111 WebUI ControlNet extension) does image -> scribble.


Needs an eraser tool


If we are using words, then how is the scribble being used?


Technically or practically?

Practically, they are combined. (Triangle) + 'ice cream' = cone; (Triangle) + 'mountains' = peak; (Circle) + 'ice cream' = scoop; (Circle) + 'mountains' = sun.


How are they combined though?

Is the graphic used as a graphic?


So: technically, and beyond that, theoretically, and even anatomically.

In the brain's visual processing, the left hemisphere was found to handle details and the right hemisphere structural relations. So a whole is composed of elements plus their relative positions.

In convolutional neural networks, similarly, "near, direct" layers contain analytic detail and "far, abstract" layers contain synthetic shapes.

So, implementation-wise, you can take e.g. the text description as the abstract and a "pre-acquired" memory of details as the «graphic».

Edit:

About the "combination", well that the whole purpose of this new technology proposal,

"ControlNet"

- i.e., formerly you may have had some "transformer" from input to output, and now "conditional controls" are added (through a "zero-convolution" technique) - see Adding Conditional Control to Text-to-Image Diffusion Models - https://arxiv.org/abs/2302.05543
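
For the curious, here is a toy PyTorch sketch of that zero-convolution idea (illustrative only, not the actual ControlNet code; the class and argument names are made up): the control branch is wired in through 1x1 convolutions whose weights and biases start at zero, so at initialization the controlled model behaves exactly like the frozen original.

    import torch
    import torch.nn as nn

    def zero_conv(channels: int) -> nn.Conv2d:
        # 1x1 conv initialized to all zeros: contributes nothing until trained
        conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(conv.weight)
        nn.init.zeros_(conv.bias)
        return conv

    class ControlledBlock(nn.Module):
        def __init__(self, frozen_block: nn.Module, trainable_copy: nn.Module, channels: int):
            super().__init__()
            self.frozen = frozen_block
            for p in self.frozen.parameters():
                p.requires_grad = False      # original weights stay locked
            self.copy = trainable_copy       # trainable copy of the block
            self.zin = zero_conv(channels)   # injects the condition (e.g. scribble features)
            self.zout = zero_conv(channels)  # gates the copy's output into the main path

        def forward(self, x, condition):
            # Frozen path is unchanged; the control path adds a residual
            # that is exactly zero before any training has happened.
            return self.frozen(x) + self.zout(self.copy(x + self.zin(condition)))

The zero init means fine-tuning starts from the unmodified model, so the conditioning is learned without wrecking what the base model already knows.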


wow, so amazing, I like this


I think it is biased toward the text and not the scribble. I scribbled a car and tried it with the text "car" and then "bar". The first one resulted in an SUV and the second in a home/kitchen bar. I don't even think the scribble matters.



You went to the trouble of registering a domain but you didn't go to the trouble of even testing on Firefox? Even if you didn't have the time to fix it, a message acknowledging that it's broken on some browsers would be a nice gesture.


> You went to the trouble of registering a domain but you didn’t go to the trouble of even testing on Firefox?

How much trouble do you imagine registering a domain is?


How much trouble is firing up a browser and pasting in a URL?


Installing a browser is more trouble than buying a domain (I've done each several times).



