Because

1) the Mac world has mostly moved on to high-DPI screens,

2) subpixel rendering provides virtually no benefit on high-DPI screens, and

3) subpixel rendering adds a ton of complications across the board (you can't scale anything, for starters), and was always hacky to begin with -- e.g. taking a screenshot and putting it on the internet gives it a rainbow halo effect on everyone else's screen.

Honestly, these days with graphics hardware that can drive Retina resolutions, the "right" way to do subpixel rendering on non-Retina screens wouldn't be with fonts at all. It would be to render everything internally at 3x resolution (3x horizontal by 1x vertical), and then apply subpixel rendering to the whole screen when downscaling for display. But with so few non-Retina screens being used by Mac users, why bother?
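
A minimal sketch of that whole-screen approach, assuming an RGB-stripe panel and a grayscale framebuffer rendered at 3x horizontal resolution (NumPy; names are illustrative). A real implementation would also low-pass filter across neighboring subpixels, as ClearType does, to limit color fringing:

    import numpy as np

    def subpixel_downscale(fb_3x: np.ndarray) -> np.ndarray:
        # fb_3x: (H, 3*W) grayscale framebuffer rendered at 3x horizontal
        # resolution. Each output pixel's R, G, and B subpixels are driven
        # by the three corresponding 1/3-pixel-wide source columns.
        h, w3 = fb_3x.shape
        w = w3 // 3
        out = np.empty((h, w, 3), dtype=fb_3x.dtype)
        out[..., 0] = fb_3x[:, 0::3]  # red subpixels:   columns 0, 3, 6, ...
        out[..., 1] = fb_3x[:, 1::3]  # green subpixels: columns 1, 4, 7, ...
        out[..., 2] = fb_3x[:, 2::3]  # blue subpixels:  columns 2, 5, 8, ...
        return out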



Thank you for your explanations. I particularly like your comment on downscaling the triple-resolution render.

Actually, I am using a Retina display, and I still notice the difference very clearly. Hence I don't think subpixel rendering is only a technique for obsolete devices.

But I don't think subpixel rendering is a performance issue: my MacBook Pro running 10.11 displays everything fast even though it is six years old.

And if the complications were already handled and working perfectly, why remove them? I understand that it is hard to add these features to the iOS libraries, but then the existing macOS behavior should have been kept as is.


> Actually, I am using a Retina display, and I still notice the difference very clearly.

I'm curious why you think you notice a difference? Macs have never had subpixel rendering for Retina, so if I'm understanding you correctly, it's not something you could know from experience. Why do you think it would make a difference you could notice if they did?

(And of course, Retina rendering is far sharper than lo-dpi subpixel rendering.)

I suppose you might be able to take an image of subpixel-rendered text e.g. at 24 pt, scale the bitmap to 50% width/height on a Retina display, and put it next to the same text at 12 pt, and see if you could tell the difference. Although you'd need to be careful to get the same precise vertical baseline alignment.
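
Something like the following would set up that comparison, using Pillow (the file names are placeholders; exact baseline alignment is still the fiddly manual step):

    from PIL import Image

    big = Image.open("subpixel_24pt.png")    # lo-dpi subpixel-rendered screenshot
    half = big.resize((big.width // 2, big.height // 2), resample=Image.LANCZOS)
    native = Image.open("retina_12pt.png")   # Retina screenshot of the same text

    # Stack the two samples for side-by-side comparison on the Retina display.
    board = Image.new("RGB", (max(half.width, native.width),
                              half.height + native.height), "white")
    board.paste(half, (0, 0))
    board.paste(native, (0, half.height))
    board.save("comparison.png")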


Using a macro lens, I took photographs of text on macOS 10.11. I don't know where your statement originated, but here the text is indeed displayed with subpixel rendering – on a Retina display.


Apologies, you're correct -- my information was wrong. [1] Now I'm curious to experiment myself to see if I could tell the difference! And congratulations on your excellent eyesight. :)

The only seemingly authoritative source I could find that explains why it was ultimately removed is from an "ex-macOS SWE" here [2].

[1] https://graphicdesign.stackexchange.com/questions/8277/how-d...

[2] https://news.ycombinator.com/item?id=17477526


Retina and other Hi-DPI displays don't generally use the RGB column layout that ClearType and other subpixel rendering algorithms abuse to provide higher luminance resolution. Using ClearType on a PenTile or AMOLED display is just going to give you color fringing with no actual increase in resolution.


Indeed. On the other hand, all Mac displays and almost all external desktop displays provide the RGB pattern. Otherwise, there is still the option to disable subpixel antialiasing.


> Actually, I am using a Retina display, and I still notice the difference very clearly.

I suspect you are not running at the "correct" 2x Retina resolution, which for whatever reason is no longer the default on Apple MacBooks. Instead, they end up running at 1.75x or 1.5x of the native resolution, which results in slightly less crisp rendering for everything, not just fonts.


Yes, I am running at the "correct" 2x Retina resolution, indeed. I understand your suspicion, and I confirm that deviating from that yields worse results.

Actually, the requirement to stay at 2x is a reason I dislike macOS 11.0 Big Sur (even more than 10.14+), because it increases padding everywhere. Hence, it's effectively a loss of screen real estate. This loss can be mitigated by choosing a <2x Retina scaling, but of course with subpar visuals, unfortunately.


I wish Apple would fix this so 1.75x or 1.5x were equally crisp.

The solution is conceptually simple: just like the iPhone X renders at 3x, Macs should be able to render internally at 3x as well, so that downsampling to 1.75x or 1.5x will still have full detail, zero blurriness.

I wonder if the reason it can't is performance, if it's battery, or if there isn't enough memory on the video card or something.

But seeing as MacBook Pros can support multiple 4K monitors... it seems like the memory and performance are there, no?


What you are describing is essentially what MacOS does. E.g. on a Retina display with a 2880x1800 screen, Apple used to render “2x” assets to a 2880x1800 buffer, so that it has the same amount of space as a 1x 1440x900 screen. If you want more space (which is now the default), it renders using 2x assets to a 3400x2000 or 3840x2400 buffer (these numbers are approximate), then scales it down to 2880x1800. So it’s never scaling anything up, only down. Of course it’s still not as sharp as rendering at the native resolution. Using 3x assets wouldn’t help unless the actual resolution of the screen was higher.
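
In numbers, a sketch of that pipeline for a 2880x1800 panel (the "looks like" sizes are the scaling notches; the helper and factor are mine):

    # macOS renders 2x assets into a buffer twice the "looks like" size,
    # then scales that buffer to the native panel resolution.
    native = (2880, 1800)

    def backing_buffer(looks_like):
        return (looks_like[0] * 2, looks_like[1] * 2)

    print(backing_buffer((1440, 900)))   # (2880, 1800): exact 2x, no scaling
    print(backing_buffer((1920, 1200)))  # (3840, 2400): scaled DOWN by 0.75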


Do you have a source for that?

Because I'd love if that were true, but every explanation I've seen contradicts that.

The default isn't more space, as you write -- it's actually less space, for bigger elements. Which is why you would need to render at more than 2x. It does, indeed, scale up -- there are plenty of articles from when Retina's scaling options came out that state it does lead to a slight amount of blurriness because of this.

To be clear, under Display > Scaled, the "More Space" option is true 2x retina, while "Default" through "Larger Text" are the ones that upscale.

You can actually verify this yourself -- if you take a screenshot of any UX element at "More Space", and then take a screenshot of the same element at "Larger Text", they're pixel-for-pixel identical. For everything less than "More Space", MacOS is scaling up.
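
A quick Pillow sketch of that check (file names are placeholders; both screenshots are assumed to be the same size):

    from PIL import Image, ImageChops

    a = Image.open("element_more_space.png").convert("RGB")
    b = Image.open("element_larger_text.png").convert("RGB")

    # getbbox() returns None when the difference image is all zeros,
    # i.e. the two screenshots match pixel for pixel.
    diff = ImageChops.difference(a, b)
    print("identical" if diff.getbbox() is None else "differs")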


What? The default is most definitely "more space" as of a couple versions of MacOS ago (or maybe it was based on the product, e.g. when they came out with a new version of the MacBook Pro in 2016 or so). I know on my 2018 MacBook Pro 13" the default was one notch over on the "More Space" side vs. exact 2x Retina. And that only makes sense, as running as if you only have the space of a 1280x800 1x screen would make me go nuts; you can hardly fit anything on the screen. I think that's what drove Apple to change the defaults from the exact 2x Retina, despite the minor loss in quality from having to render at the larger size and scale down. On iMacs, which have bigger screens (I'm typing this now on a 5K iMac), exact 2x Retina is the default.

You are correct that if you go down to the "bigger text" side of things it does scale things up, and for those sizes using 3x assets would give a sharper image. I hadn't even considered that, though, because I think most people either find the exact 2x Retina resolution fine or, if anything, want more space. The only people who would use the "bigger text" option are probably people with poor eyesight, in which case it doesn't matter if it's slightly more blurry.

EDIT: see screenshot here: http://imgbox.com/uxcHERt3 -- the default for the MacBook Pro 13" is "Looks like 1440x900", which requires rendering at a resolution of 2880x1800, which is then scaled DOWN to the native resolution of 2560x1600.
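
The same arithmetic also shows why only the "larger text" notches blur, and what hypothetical 3x assets would buy there (the 1024-wide notch and the 3x column are illustrative):

    native_w = 2560  # MacBook Pro 13" panel width

    def backing_width(looks_like_w, asset_scale):
        return looks_like_w * asset_scale

    print(backing_width(1440, 2))  # 2880 -> scaled DOWN to 2560: sharp
    print(backing_width(1024, 2))  # 2048 -> scaled UP to 2560: blurry
    print(backing_width(1024, 3))  # 3072 -> scaled DOWN to 2560: sharp again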


We're in agreement on how it works technically:

> I know on my 2018 MacBook Pro 13" the default was one notch over on the "More Space" side vs. exact 2x Retina

That's what I meant -- it's less space (one notch over, the one labeled "Default") compared to exact 2x which is labeled the "More Space" option.

> The only people who would use the "bigger text" option are probably people with poor eyesight, in which case it doesn't matter if it's slightly more blurry.

I guess that's where we disagree -- my eyesight is great, but I like the text on my screen to be comparable to the size of body text in books, not the size of footnotes. I like a comfortable amount of text information on screen, not crammed. And the fact that this is the default option makes it seem that Apple agrees.

And that's precisely why I wish it didn't add the blurriness from the upscaling, and why 3x internal rendering would be valuable.


No, the one labeled "Default" is one notch higher on the more-space continuum than exact 2x Retina (at least on the 13” and 15” MacBook Pros). On other machines, like my iMac, the default notch is exactly 2x Retina. Check out my screenshot and do the math yourself. “Looks like 1440x900” is the default notch, which means rendering at 2880x1800, which is a higher resolution than the MacBook Pro 13’s 2560x1600 screen.


I stand corrected, thank you.

There were a bunch of articles way back when the scaling options were introduced that claimed anything less than "maximum space" introduced blurriness... but they were obviously wrong.

I just did the math and double-checked with screenshots, and indeed — on my 13" MacBook Pro the default is higher than 2x, not lower. It's only at the leftmost "larger text" that blurriness is introduced.

Thanks so much for the info, and I'm happy to know I am getting maximum clarity out of my Mac after all! Always good to get my misinformation corrected.

And so, never mind about the whole 3x thing... you're right, unless you need text extremely large. Cheers!


It wasn't "perfectly working": a lot of text on translucent views either looked horrible or changed antialiasing type when animated on macOS.


I was not aware of that. On the other hand, I am mainly looking at static text on an opaque background (including PDFs). At least there, subpixel rendering would benefit keen readers' eyes.


If you're reading a ton of PDFs, try comparing Acrobat Reader with Preview -- they use entirely different font rendering. Preview uses unhinted, undistorted macOS rendering that preserves exact letterform positioning and widths, while Acrobat uses hinting, which distorts letterforms but aligns them closer to pixels.

If you're really looking for maximum crispness, you might prefer Acrobat. Its hinting makes subpixel rendering unnecessary for all perfectly vertical and horizontal strokes, since it tries to avoid antialiasing them in the first place.


Indeed, Acrobat's rendering is vastly superior in that respect. Thank you for mentioning that.

Apart from the Acrobat solution, on macOS 10.11 it is possible to turn off text smoothing in Preview.app's PDF settings (which retains subpixel rendering). This gives, to my eyes, the best PDF rendering that macOS has had up to now.


I'm pretty sure that most developers are not using Retina screens as their second display. Maybe things are different in The Valley, though...

But I get it. From their perspective, those aren't Apple screens, so they don't need to support them.



