Two Weeks of Colorizebot (whatimade.today)
130 points by sillysaurus3 on Aug 13, 2016 | 15 comments



Half-joking, half-serious, but ... how long before someone writes a YCbCr 4:0:0 video encoder that runs colorizebot on the decoded frames? The obvious use case being absolutely extreme video compression at the cost of substantial color accuracy. Or is this algorithm hopelessly below the threshold for real-time use? That, and I imagine there'd need to be some strong multi-frame averaging if the colors shift too much between frames that are mostly identical in the original material.

EDIT: ah, videos at the bottom, yay! But yeah, serious color shifting issues, darn. So if this is to work at all, it'd need some color stabilization.

https://www.youtube.com/watch?v=tWdaMKKH5MI
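In case anyone wants to experiment: a minimal sketch of what that color stabilization could look like, assuming you already have colorizebot's RGB output for each frame. The Lab round-trip, the smoothing constant, and the function name are my own choices here, not anything from the article:

    # Toy chroma stabilizer: keep each frame's own luma, but smooth the
    # predicted color channels over time so colors don't flicker between
    # near-identical frames.
    import numpy as np
    from skimage import color  # pip install scikit-image

    def stabilize_chroma(colorized_frames, alpha=0.2):
        """colorized_frames: iterable of HxWx3 float RGB arrays in [0, 1]."""
        smoothed_ab = None
        for rgb in colorized_frames:
            lab = color.rgb2lab(rgb)            # L = luma, a/b = chroma
            if smoothed_ab is None:
                smoothed_ab = lab[..., 1:].copy()
            else:
                # Exponential moving average of the chroma channels only.
                smoothed_ab = alpha * lab[..., 1:] + (1 - alpha) * smoothed_ab
            lab[..., 1:] = smoothed_ab
            yield np.clip(color.lab2rgb(lab), 0.0, 1.0)

A naive running average like this would smear colors straight across hard cuts, so you'd still want a scene-change detector on top (or reset smoothed_ab whenever consecutive frames differ too much).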


Seems like it should be combined with hints from calculating optical flow on previous frames.
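Roughly what that could look like with OpenCV's Farneback flow: backward-warp the previous frame's predicted chroma into the current frame and blend it with the current prediction. The blend weight and the helper name are arbitrary; this is just a sketch of the idea, not what the article does:

    # Sketch: use dense optical flow to carry the previous frame's predicted
    # colors over to the current frame, then mix with the fresh prediction.
    import cv2
    import numpy as np

    def propagate_chroma(prev_gray, cur_gray, prev_ab, cur_ab, blend=0.5):
        """prev_gray/cur_gray: HxW uint8 luma; prev_ab/cur_ab: HxWx2 chroma."""
        # Flow from the current frame back to the previous one, so we can
        # backward-warp prev_ab into the current frame's coordinates.
        flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = cur_gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        warped = cv2.remap(prev_ab.astype(np.float32), map_x, map_y,
                           cv2.INTER_LINEAR)
        # Keep some of the warped history, some of the new prediction.
        return blend * warped + (1 - blend) * cur_ab.astype(np.float32)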


Cool idea, nicely executed and documented. Their other posts are worth a look too.


I am astonished by this because I don't get how it can paint based on an algorithm and get everything so right. How does this even work? How does it know metal is metal and what color to use? Does it paint black people as white?


At a guess, it's looking at texture and shape. There's a lot of information in texture.

Stuff that looks like skin gets pink or sepia

Stuff that looks like grass gets green

Stuff that looks like sky gets blue

Stuff that looks like fabric gets red or blue

Stuff it's not sure about gets sepia

Very cool to see it done automatically. It's about the level of a mediocre human colorizer, which isn't bad for an algorithm...
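To make that concrete, here's a toy sketch of the "texture/brightness -> stock color" idea: classify each patch of the grayscale image by its mean brightness and local variance, then paint it with a canned chroma. The thresholds and colors below are invented for illustration; the actual bot is a trained neural net, not hand-written rules like these:

    # Toy rule-based colorizer: bright smooth patches get a sky-ish blue,
    # dark busy patches get a foliage-ish green, everything else gets sepia.
    import numpy as np
    from skimage import color

    def toy_colorize(gray, patch=16):
        """gray: HxW float array in [0, 1]. Returns an HxWx3 RGB guess."""
        h, w = gray.shape
        lab = np.zeros((h, w, 3))
        lab[..., 0] = gray * 100.0                  # reuse the luma as Lab L
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                tile = gray[y:y + patch, x:x + patch]
                mean, var = tile.mean(), tile.var()
                if mean > 0.75 and var < 0.005:     # bright and smooth: "sky"
                    a, b = -5, -30                  # bluish
                elif var > 0.02 and mean < 0.5:     # dark and busy: "foliage"
                    a, b = -30, 25                  # greenish
                else:                               # not sure: sepia
                    a, b = 8, 15
                lab[y:y + patch, x:x + patch, 1] = a
                lab[y:y + patch, x:x + patch, 2] = b
        return np.clip(color.lab2rgb(lab), 0.0, 1.0)

A real system learns those associations from a large set of color photos instead of three if/else branches, which is why it can get skin, metal, grass, and sky mostly right at the same time.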


Could you point me to where I could learn about these analysis algorithms? And about writing bots in general? I have some pretty cool ideas, and the internet is an infinite playground where you can throw bots in, lie back, and get results.



Would be fun to see an entire B/W movie processed...


They did show some example videos in the article. Judging from the Beatles video, this looks quite a few steps away from working well. When the camera angle changes, walls etc. suddenly change color, since the frames are processed separately.


Sorry. Didn't notice.

Yes, the videos are not as impressive as the stills.


Would it require a lot of changes to repeat the training process mentioned in the article with video instead of stills, aside from the increase in the amount of data being processed?


Schindler's List with all-pink Nazi uniforms...


A lot of the colorizations seem really poor; they don't even look lifelike. What are the technical reasons why some of the 'old' b/w photos seem much more realistic than the others? Is it the type of film or processing that was used to create the originals?


I feel that things like this are very important to create and have, even if they may not be easy to commercialize. It's like pure science within IT. Very cool!


And popular subreddits like /r/pics have banned the bot. :-(



