The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies (gatech.edu)
81 points by mihau on Oct 1, 2016 | 12 comments



So if someone sketched something that hadn't been sketched before, it wouldn't find an image for it. Not nearly as interesting as it seemed at first.


Nice, although keywords and photos are more practical query methods than sketches, which are harder to input and thus less expressive (i.e. a sketch of a bear returning teddy bears and not real bears).


There definitely are a lot of applications for sketch-based retrieval, as some features are hard to describe, e.g. a landmark from a specific angle.


Communication between people who speak different languages that aren't commonly known.


Great site, but the title is all wrong: these aren't photorealistic images from drawings. This takes a sketch, classifies it, and tries to find a similar photo.

From the site

Abstract We present the Sketchy database, the first large-scale collection of sketch-photo pairs. We ask crowd workers to sketch particular photographic objects sampled from 125 categories and acquire 75,471 sketches of 12,500 objects. The Sketchy database gives us fine-grained associations between particular photos and sketches, and we use this to train cross-domain convolutional networks which embed sketches and photographs in a common feature space. We use our database as a benchmark for fine-grained retrieval and show that our learned representation significantly outperforms both hand-crafted features as well as deep features trained for sketch or photo classification. Beyond image retrieval, we believe the Sketchy database opens up new opportunities for sketch and image understanding and synthesis.
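Once sketches and photos live in a common feature space, as the abstract describes, fine-grained retrieval reduces to nearest-neighbor search over the embeddings. A minimal numpy sketch of that retrieval step, with made-up toy vectors standing in for the CNN features:

```python
import numpy as np

def retrieve(sketch_emb, photo_embs):
    """Rank photos by cosine similarity to a sketch embedding, best first."""
    s = sketch_emb / np.linalg.norm(sketch_emb)
    p = photo_embs / np.linalg.norm(photo_embs, axis=1, keepdims=True)
    sims = p @ s                 # cosine similarity of each photo to the sketch
    return np.argsort(-sims)     # indices sorted by descending similarity

# toy 2-D embeddings standing in for learned CNN features
photos = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
sketch = np.array([0.9, 0.1])
ranking = retrieve(sketch, photos)
print(ranking[0])  # photo 0 is the closest match
```

The real system learns the embedding networks; this only shows the lookup that happens afterward.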


In fact, the original title of the page is: "The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies" by Patsorn Sangkloy, Nathan Burnell, Cusuh Ham, and James Hays.


This is a recent paper that actually creates photos from sketches of faces, using deep nets: https://arxiv.org/abs/1606.03073

Figure 4 does this for self portraits of some famous artists.


You could probably apply the same code. The dataset ("acquire 75,471 sketches of 12,500 objects") sounds adequate, and if not, can be boosted by first training a CNN to do photo->sketch (throwing away information is usually easier than imagining it) and using that to boost the dataset for sketch->photo.
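The augmentation idea above can be sketched as a simple loop: run a (hypothetical) photo→sketch model over unpaired photos to mint extra synthetic sketch-photo pairs for training the sketch→photo direction. The `photo_to_sketch` callable here is a stand-in, not part of the Sketchy release:

```python
def augment_pairs(photos, photo_to_sketch):
    """Build synthetic (sketch, photo) training pairs.

    photo_to_sketch is any callable mapping a photo to a synthetic sketch,
    e.g. a CNN trained on the existing paired data.
    """
    return [(photo_to_sketch(p), p) for p in photos]

# stand-in "model" for illustration only
pairs = augment_pairs(["photo_a", "photo_b"], lambda p: f"sketch_of_{p}")
print(pairs[0])  # ('sketch_of_photo_a', 'photo_a')
```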


Would be super cool if someone applied deep generative models to synthesize new natural images from input drawings using this dataset.


I believe one came out recently; I saw it on GitHub a few days ago. It mainly did landscapes, searched for similar images, then combined them to form a new one. It also supported image manipulation, e.g. turning a brown square purse into a green rounded one. I'll see if I can find it and edit the link in.

EDIT: https://github.com/junyanz/iGAN

Here's a good video showing it off: https://www.youtube.com/watch?v=9c4z6YsBGQ0&feature=youtu.be


We've updated the submission title from “Photorealistic images from drawings”.


My vagina drawing skills, while not effective, did not disappoint.



