IG co-founder here: users 1 and 2 were our first two attempts at creating users end to end when getting Instagram v0.1 hooked up to the backend we'd written. There was a bug so they were left in an incomplete state; post bugfix, 3 was my co-founder Kevin and 4 is me.
Hey Aaron, sorry to hear and thanks for posting. We're hiring at Instagram for roles in NYC, SF and Menlo Park; if you or folks on your team want a direct line, feel free to email me and I'll connect you with the right people on the team.
Browsers are all over the place, unfortunately. It's part of why sRGB became the only reasonable color profile for Web use. I think we'll see wide color become common in apps before the Web.
All over the place in what way? Support for different color profiles? Actually handling color spaces at all? The fact that there's no consistency when it comes to untagged images? The mess that is plugins? The ability to specify CSS colors in a specific color space?
We built this in already! We don't have a "1x" or "2x" indicator, but the dual lens camera is fully used in Instagram now and will do the smart transition between 1x>2x optical and 2x+ digital zoom.
I used the same approach as the WebKit image, so the same applies here, too (it's also why we only serve Display P3 photos to iOS clients with wide color screens; most Android devices would treat them incorrectly).
Good to know--I didn't run it through my Pixel. Some devices will do a relative projection from Display P3 to sRGB, which means that it will look "relatively" like it would in Display P3 but projected onto the sRGB color space.
Edited to add: and some devices do something even less fancy, which is to ignore the color profile entirely, assume sRGB, and display the image incorrectly, taking for example what would have been the maximum red point for Display P3 and making it the maximum red point in sRGB.
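To make the two behaviors concrete, here's a rough Python sketch (illustrative only, not Instagram's code; the matrices are rounded versions of the commonly published values): the color-managed path decodes the Display P3 values, converts through XYZ into linear sRGB, clips to the sRGB gamut and re-encodes, while the profile-ignoring path just reuses the numbers as if they were sRGB.

```python
# Illustrative only: the two behaviors described above, for a saturated P3 red.
import numpy as np

P3_TO_XYZ = np.array([[0.4866, 0.2657, 0.1982],
                      [0.2290, 0.6917, 0.0793],
                      [0.0000, 0.0451, 1.0437]])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def decode(v):   # Display P3 and sRGB share the same transfer curve
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def encode(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def p3_to_srgb(rgb):
    """Color-managed path: decode, convert through XYZ, clip to the sRGB gamut."""
    linear = XYZ_TO_SRGB @ (P3_TO_XYZ @ decode(np.asarray(rgb, dtype=float)))
    return encode(np.clip(linear, 0.0, 1.0))

p3_color = np.array([0.9, 0.2, 0.2])           # a saturated red, encoded in Display P3
print("color-managed:", p3_to_srgb(p3_color))  # pushed toward the sRGB gamut edge
print("profile ignored:", p3_color)            # same numbers shown as sRGB: a duller red
```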
Since you brought up your Pixel: what is the point of adding something 1% of your customers maybe can see, instead of fixing that horrible, horrible compression that makes uploads from Android (80% worldwide market share) look like crap?
(Not an Android user, I just want to figure out how a company of your size prioritises between bugs and features.)
I can't speak for OP, but say this does affect 1% of users today: what percentage does it affect in 6 months, or a year? Not bad to be proactive.
And regarding the Android compression issues: although resources are always finite, I imagine in this case the Android team is fairly separate, so they may very well be working on that compression issue while iOS is pushing forward into new terrain.
> I imagine in this case the Android team is fairly separate
This is likely it, right here. So many people forget that larger companies have different teams working on different things. I bet a lot of their "iOS people" that are working on this project have no clue how the Android app works, and Instagram likely has a separate team working on the compression issues.
I don't use IG, so I wasn't aware of that problem or how long it had been around. That said, the general sentiment stands: one of their teams working on one thing doesn't show that they don't have another team working on something unrelated.
Well, given how Instagram has treated its Android users, can you blame them?
I've seen a number of SV companies release ugly and buggy Android apps, then use their shrinking Android user base as proof that Android users don't like their services.
To be honest, things could be worse. You could be a Tinder user on Windows Phone...
We started using OpenGL in 2011. Our CPU-based image filters used to take 4+ seconds with pixel-by-pixel manipulation, and now can render 30+ times per second.
If you have some sample images where the current image pipeline is going wrong, let me know and we can look into improving them.
I also went through that process with my app Picfx. Using OpenGL for filters is much quicker; the only downside I've found is being limited by the maximum texture size. I did set up a way to process images in tiles, but ultimately decided to just limit images to the texture size. Great info on the colour space, I'm sure it will be useful.
Instead of fixing the horrible JPEG encoding, can you please add support for WebP? It's quite a bit smaller and well supported with polyfills, since it's just a single VP8 frame.
You don't need a polyfill to deploy WebP. Chrome automatically sends image/webp in the Accept header for image requests, so at the CDN level you could implement some logic to seamlessly swap in WebP images for the browsers that support it. Imgix does this, for instance.
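For anyone curious what that CDN-level logic looks like, here's a minimal, hypothetical Python sketch (not Imgix's or Instagram's actual implementation; the function and paths are made up):

```python
# Hypothetical sketch of Accept-header-based WebP negotiation at the edge.
# Chrome advertises support by including "image/webp" in the Accept header
# of image requests, so no client-side polyfill is needed.
import os

def pick_image_variant(jpeg_path, accept_header):
    """Serve a pre-generated .webp sibling when the client advertises support."""
    if "image/webp" in accept_header:
        webp_path = os.path.splitext(jpeg_path)[0] + ".webp"
        if os.path.exists(webp_path):
            return webp_path, "image/webp"
    return jpeg_path, "image/jpeg"

# Example request from Chrome:
path, content_type = pick_image_variant(
    "/photos/cat.jpg",
    "image/webp,image/apng,image/*,*/*;q=0.8",
)
# Responses should also send "Vary: Accept" so shared caches keep the
# JPEG and WebP variants separate.
```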
1. Scaling down in linear colorspace is essential. One example is [1], where [2] is sRGB and [3] is linear. There are some canary images too [4]. (A short sketch of this follows the list.)
2. Plain bicubic filtering isn't good enough anymore. EWA (Elliptical Weighted Averaging) filtering by Nicolas Robidoux produces much better results [5].
3. Using the default JPEG quantization tables at quality 75 isn't good enough anymore; that's what people are referring to as horrible compression. MozJPEG [6] is a much better alternative. With edge detection and quality assessment, it's even better.
4. You have to realize that 8-bit wide-gamut photographs will show noticeable banding on sRGB devices. Here's my attempt [7] to reveal the issue using sRGB as a wider gamut colorspace.
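On point 1, here's a minimal Python sketch of what downscaling in linear light means (illustrative only, not Instagram's pipeline; a plain 2x box filter keeps it short, but the same principle applies to better filters like EWA):

```python
# Decode sRGB to linear light, average there, then re-encode.
import numpy as np
from PIL import Image

def srgb_to_linear(v):
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(v):
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * v ** (1 / 2.4) - 0.055)

def halve_in_linear_light(path_in, path_out):
    srgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float64) / 255.0
    lin = srgb_to_linear(srgb)
    h, w = (lin.shape[0] // 2) * 2, (lin.shape[1] // 2) * 2   # trim odd edges
    lin = lin[:h, :w]
    # Average 2x2 blocks on the linear values, not the gamma-encoded ones
    small = (lin[0::2, 0::2] + lin[1::2, 0::2] +
             lin[0::2, 1::2] + lin[1::2, 1::2]) / 4.0
    out = np.round(linear_to_srgb(small) * 255.0).astype(np.uint8)
    Image.fromarray(out).save(path_out)

# Averaging the raw 0-255 sRGB values instead darkens high-frequency detail,
# which is exactly what the canary images in [4] are designed to expose.
```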
"When we started this project, none of us at IG were deep experts in color."
This is pretty astonishing to me. I always thought applying smartly designed color filters to pictures was basically Instagram's entire value proposition. In particular, I would have thought that designing filters to emulate the look and feel of various old films would have taken some fairly involved imaging knowledge. How did Instagram get so far without color experts?
In the "How I Built This" podcast for Instagram, Kevin Systrom specifically says the filters were created to take the relatively low-quality photos early smartphone cameras were capable of and making them look like photographer-quality photos. Filters were about taking average photos and making them "pop" but not necessarily by virtue of having deep domain knowledge of color.
I was never under this impression. I was always under the impression the filters were just created by a designer playing around with various effects until they looked nice.
Because it isn't complicated or novel to make compressed 8-bit jpegs have color filters. There are tools for the job and they've been around for a long time.
Working in a different color space than standard requires a little bit of familiarity and finesse that modifying 8-bit jpegs for consumption on the internet did not require.
Many photographers and printers are familiar with this dilemma in a variety of circumstances, where cameras capture images in a wider color space and at a higher bit depth than most output devices, or the human eye, can fully make use of.
I'm sure the comment you're replying to wasn't thinking of the algorithm that applies a filter to a jpeg, but the process by which that filter is created in the first place. The assumption being that there's some sort of theory to colour that allows you to systematically improve the aesthetic qualities of images.
The creative process isn't novel. Most mobile apps, including Instagram, don't even offer layer masking, compared to pre-existing, more robust tools on desktop (and other mobile apps), which severely limits the 'technical interestingness' to begin with.
Bit of a jump in topic, but I'm kind of curious: I don't use Instagram myself, but I'm sure it resizes images for online viewing, saving bandwidth and such. Does it do so with correct gamma[0]? Since that's a thing many image-related applications get wrong.
Not necessarily on topic - but given your code samples are written in Objective-C, how much of your codebase is still in Objective-C and what's your opinion on porting it over to Swift?
Good q. All of it is still Objective-C and C++; there are a few blockers to starting a move to Swift, including the relatively large amount of custom build + development tooling that we and FB have in place.
It's getting used more and more in our app--a few recent examples are the "Promote Post" UI if you're a business account and want to promote a post from inside Instagram, the Saved Posts feature, and the comment moderation tools we now provide around comment filtering.
Peculiar that this is being downvoted - compiler speed, reliability, and support for incremental builds are all issues with large Swift projects. I've even seen people go as far as splitting up their project into frameworks in order to bring build times down from as long as 10 minutes and restore incremental compilation.
I know there's not much love for the API these days, but is there / will there be a way to access wide color images from the API? iPads and MacBook Pros also have wide color screens nowadays, so it would make sense to display them for specific use cases in third-party clients.
Great article. Question not directly related to your writeup: in your casual usage of a phone/photo app with the new, wider color space, do you notice a difference in your experience of using the app/looking at images? Or, in other words, does the wider color space feel at all different?
Photos only. Apple's APIs only capture in Wide Color when shooting in photo mode, and their documentation only recommends using wide color/Display P3 for images.
Just P3, though the wider gamut available in our graphics operations should benefit photos brought in using Adobe RGB too since iOS is fully color managed.