Look, I like good typography as much as the next person—maybe even a little more. When a CSS property came along with promises to doctor all my type with ligatures and carefully calculated kerning—not some half-assed tracking, but real kerning—I jumped at it. I put `text-rendering: optimizeLegibility` on headings, body copy, navigation links; I slapped it on images just in case a ligature might appear in the background of a photograph, blurred, like an aesthetically satisfying poltergeist.
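Something like this sketch, give or take; the selectors here are illustrative rather than lifted from a real stylesheet:

```css
/* Kerning and ligatures for everything: headings, copy, nav... even images. */
h1, h2, h3,
p,
nav a,
img {
  text-rendering: optimizeLegibility;
}
```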
Truly, these were my web typography salad days; an era of copy littered with fi and ffl and—be still my heart—st ligatures. The letters of “water” were themselves kerned water-tight, and I looked desperately for reasons to type “AVAST” in all caps, as if to spite the jankily-kerned word I once knew.
The differences between `text-rendering: optimizeLegibility` (left) and `text-rendering: auto` (right) are subtle, even for the keenly design-eyed.
In the back of my mind, though, I knew these newfound wings were made of wax. All this typographic power came with a cost: `text-rendering: optimizeLegibility` is slow—and by “slow,” I mean that it can bog down an entire page, from initial render time to repaints. More than that, though, it’s buggy: Android in particular has serious issues trying to render a page that uses `optimizeLegibility` heavily, especially the older versions that are still, sadly, very common today.
The bugs may not make much sense, but the speed issues do. There could be thousands of tiny calculations involved in kerning a long run of text, and that puts a heavy strain on a device’s processor. In a modern desktop browser like Chrome, well, it isn’t great, and on an underpowered mobile device it’s a nightmare—especially when `@font-face` is in play. All the work that goes into `optimizeLegibility` has to take place before type can be rendered at all, meaning a drastically prolonged Flash of Invisible Text or Flash of Fallback Text, depending on the method used to load the fonts.
WebPageTest.org’s timeline view, using a throttled connection. The only difference between these two pages is `p { text-rendering: optimizeLegibility; }` versus `p { text-rendering: optimizeSpeed; }`.
The key, as you might expect, is moderation. Enabling these features on the occasional subhed won’t do any serious performance harm; no noticeable harm, anyway. In fact, the default setting of `text-rendering: auto` doesn’t always leave us completely sans ligatures—pun intended—or with character spacings you could drive a truck through. In Firefox (and possibly WebKit and Blink, eventually), any type with a `font-size` of `20px` or above is opted into `optimizeLegibility`’s features. On type that large the effects are much more noticeable, and a few short runs of text aren’t apt to hurt our performance in any measurable way.
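To illustrate that default, a sketch under the Firefox behavior just described; the selector and size are arbitrary:

```css
/* In Firefox, a computed font-size of 20px or more opts type into
   optimizeLegibility's kerning and ligatures automatically, so this
   heading needs no text-rendering declaration at all. */
h2 {
  font-size: 24px;
}
```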
Of course, if we want to opt out of these features entirely, there’s always `text-rendering: optimizeSpeed`, which does away with `optimizeLegibility`’s features—and costs—entirely, no matter the type size. I’m almost always willing to defer to the browser by keeping the default `text-rendering: auto` intact, but if you find yourself working on a page with a healthy amount of text with a `font-size` larger than `20px`, you may want to finesse things a little with `text-rendering: optimizeSpeed`.
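Put together, the moderate approach described above might look like this sketch; the selectors are illustrative:

```css
/* Body copy: defer to the browser default. */
body {
  text-rendering: auto;
}

/* The occasional large, short run of type can afford the extra work. */
h1,
h2 {
  text-rendering: optimizeLegibility;
}

/* Long runs of 20px-plus text: opt out of the features and their cost. */
.longform p {
  font-size: 22px;
  text-rendering: optimizeSpeed;
}
```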
Comments
I suspect there may be some other variable affecting the results of your test. I ran a similar test using a page with the entire text of Moby Dick (more than 200,000 words), and optimizeLegibility only added about 0.2 seconds compared to optimizeSpeed. Considering that most web pages have much less text than Moby Dick, I’d say whatever extra time is needed for kerning and ligatures is negligible.
Very strange.
I ran tests with the entire text of Moby Dick as well, and the results were consistent with Mat’s findings.
These results are averaged across multiple runs, tested in Google Chrome 42.0.2311.135 (64-bit) on OS X Yosemite 10.10.1:

optimizeSpeed: 275 ms Rendering, 4 ms Painting, 180 ms Other
optimizeLegibility: 9 s Rendering, 96 ms Painting, 7 s Other
Fonts used?
No fonts explicitly set.
I let the browser and OS use the default, which becomes Times.
Running another batch of tests with the body font explicitly set to `sans-serif` (which results in Helvetica being used) yields the same results as before.
Does font choice matter?
I’m guessing things are worse with `@font-face` in play, as it is in the post’s example.
I kinda jotted this post down and fired it off because I was shocked by the perf difference I was seeing, but I’d _really_ like to get a solid set of tests nailed down: using `@font-face`, using plain ol’ `font-family: serif`, different sized runs of text, Chrome v. Firefox v. WebKit, etc. Is that something any of you would be willing to chip in on, if I spun up a repo over the next couple of days?
I could certainly help out; it’s the kind of thing I care a great deal about (hence testing it in the first place).
I would also be interested in seeing *whether* there’s any difference between a font that has had its kerning data stripped out (possible with Fontsquirrel’s Webfont Generator) and one that still has it.
Drop me an email or @ me when you’ve got something set up.
Depending on the amount of text, the font(s) you spec could also have a large impact. Some fonts have significantly more kerning exceptions and a higher UPM value (i.e. resolution) which would presumably add more processing time. A faster option would be something like Verdana (low number of points since it’s a sans-serif, zero kerning exceptions, very few ligatures if any at all, available as a local resource on many machines), while a more intensive face would be something like a serif face from H&Co (more points, extensive kerning, remote fonts, most styles broken up across multiple font files to deter pirating).
If you really want to target optimizeLegibility *specifically*, I’d run the test on only local fonts. Otherwise you are introducing delays that are not related to that setting specifically.
Surely this has more to do with precision than the specific minutiae of each font, Nexii Malthus…
I have not looked into specifics, but it would be fair to assume that interpolating more points at a higher precision could well be where the extra time is taken.
One other thing I am interested in for the tests would be optimizeLegibility on GPU vs. off GPU. Could those talking about testing please link any GitHub repos? I would be quite interested in reading up on this and maybe seeing if I can add to any tests.
Sounds great, Lewis. I’ll get some sort of test repo set up next week, and I’ll keep everyone posted.
Well, I had considered the possibility of a sub-optimal rendering path if a custom font were used, but that doesn’t seem to be the case.
How does optimizeLegibility work under the hood? Are you implying that it actually calculates the kerning for all 200,000 words on load? Or does it only worry about the visible viewport?
I discovered a bug with this property that caused links to be untappable in some versions of Android. Which is nice. It was an issue on A List Apart, which I helped them fix.
Microtypographic features like expansion and protrusion are coming eventually; it’s just really hard to get that stuff right, and to make it performant. It’s further complicated by the fact that there’s a question as to whether the fonts should have to bake in those behaviors, or whether the renderer should add the behaviors itself; see XeTeX vs. LaTeX.
Macro-scale features like ligatures, though? Why are browsers still not getting those right?
I’ve learned from experience that if you’re playing around too much with non-standard CSS features, especially when you’re applying them to the entire document, you’re bound to get burned. They can have lots of unintended side effects, and if you’re working on a major site you’re asking for it. The safest way I’ve found to enable kerning in fonts is the standard, well-supported `font-kerning` property. This only applies to OpenType fonts, however. I’m not totally sure what that means in terms of modern browsers, as in — among SVG, WOFF, or OTF/TTF, I’m not sure which one browsers will prefer. Someone else could probably chime in with some helpful info there.
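A minimal sketch of that suggestion (the selector is illustrative):

```css
/* Enable kerning alone, using the font's built-in kerning data,
   without the rest of optimizeLegibility's work. */
body {
  font-kerning: normal;
}
```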
That’s awesome! I’ll send it to every UX colleague I have as a nice counter to their font-perfection madness.
If you want ligatures, the better approach is to use the standard CSS font-variant-ligatures or font-feature-settings properties. Those enable exactly what you want without all of the baggage.
See https://developer.mozilla.o… and https://developer.mozilla.o…
“The text-rendering property is an SVG property that is not defined in any CSS standard. However, Gecko and WebKit browsers let you apply this property to HTML and XML content on Windows, Mac OS X and Linux.”
https://developer.mozilla.o…
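A minimal sketch of the standards-based route those properties offer (the selectors are illustrative):

```css
/* Turn on standard ligatures directly, without text-rendering's baggage. */
p {
  font-variant-ligatures: common-ligatures;
}

/* Lower-level equivalent for older engines; "liga" is the OpenType
   feature tag for standard ligatures. */
p {
  font-feature-settings: "liga" 1;
}
```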
Hey Mat,
Is the conclusion presented still valid?
After all, five years have passed, and browser support/test matrices have changed.
I’ve tried some light testing via Webpagetest.org, but couldn’t see any performance differences.
Admittedly, I wasn’t testing Android mobile, and I’m using Mac OS locally.
So my result was far from conclusive.
If someone wishes to retest, I set up a few simple pages using Moby Dick as copy with browser default fonts:
sans-serif
sans-serif/optimize-auto
sans-serif/optimize-legibility
sans-serif/optimize-speed
sans-serif/optimize-geometric
serif
serif/optimize-auto
serif/optimize-legibility
serif/optimize-speed
serif/optimize-geometric
I’d be very interested in any new conclusions drawn, or even a better test matrix.