Nice, although keywords and photos are more practical query methods than sketches, which are harder to input and thus less expressive (e.g. a sketch of a bear returning teddy bears rather than real bears).
Great site, but the title is all wrong: these aren't photorealistic images generated from drawings. This is taking a sketch, classifying it, and trying to find a similar photo.
From the site:
Abstract
We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.
In fact the original title of the page is:
The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies
Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, James Hays
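For anyone curious what "embed sketches and photographs in a common feature space" looks like in practice, here is a minimal sketch of the idea: two CNN branches and a triplet loss pulling a sketch toward its paired photo. This is my own rough illustration assuming PyTorch, not the authors' code; the backbone choice and hyperparameters are placeholders.

    import torch
    import torch.nn as nn
    import torchvision.models as models

    class BranchEncoder(nn.Module):
        """One branch (sketch or photo) mapping images to a shared embedding space."""
        def __init__(self, embed_dim=256):
            super().__init__()
            backbone = models.resnet18(weights=None)  # any CNN backbone works
            backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
            self.net = backbone

        def forward(self, x):
            # L2-normalise so retrieval can use cosine/Euclidean distance directly
            return nn.functional.normalize(self.net(x), dim=1)

    sketch_enc, photo_enc = BranchEncoder(), BranchEncoder()
    triplet = nn.TripletMarginLoss(margin=0.2)
    opt = torch.optim.Adam(
        list(sketch_enc.parameters()) + list(photo_enc.parameters()), lr=1e-4
    )

    def train_step(sketch, paired_photo, other_photo):
        """Anchor = sketch, positive = its paired photo, negative = some other photo."""
        loss = triplet(sketch_enc(sketch), photo_enc(paired_photo), photo_enc(other_photo))
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

At query time you embed the user's sketch with one branch and rank photos by distance to their precomputed embeddings from the other branch, which is what the fine-grained retrieval benchmark measures.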
You could probably apply the same code. The dataset ("acquire 75,471 sketches of 12,500 objects") sounds adequate, and if not, it can be augmented by first training a CNN to do photo->sketch (throwing away information is usually easier than imagining it) and using that model to generate extra sketch-photo pairs for training sketch->photo.
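A hedged sketch of that augmentation idea, assuming you already have some photo->sketch model (the `photo2sketch` name is a placeholder, not a real library): run it over a pile of unlabeled photos and keep the outputs as synthetic pairs.

    import torch

    @torch.no_grad()
    def synthesize_pairs(photo2sketch, photo_loader, device="cuda"):
        """Yield (fake_sketch, photo) pairs generated from plain photos."""
        photo2sketch.eval().to(device)
        for photos in photo_loader:                # photos: (B, 3, H, W) tensors
            photos = photos.to(device)
            fake_sketches = photo2sketch(photos)   # the "easy" direction: discard detail
            yield fake_sketches.cpu(), photos.cpu()

    # Mix these synthetic pairs in with the ~75k real Sketchy pairs when training
    # the sketch->photo embedding, ideally weighted lower than the real data.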
I believe one came out recently; I saw it on GitHub a few days ago. It mainly did landscapes: it searched for similar images and then combined them to form a new one. It also supported image manipulation, e.g. turning a brown square purse into a green rounded one. I'll see if I can find it and edit the link in.