Dan Petrovic’s recent Whiteboard Friday at Moz got me thinking. The premise was his own recent finding that only 16% of people he surveyed read all of an article word-for-word, and this just happens to be the exact same statistic Jakob Nielsen came up with in 1997.
It’s an interesting finding, but Dan’s analysis — that web copy hasn’t improved in 20 years — feels wrong.
Have we still not learned how to write for the web?
“It’s been nearly two decades, and we still haven’t learned how to write for the Web,” says Petrovic, before moving on to discuss content-writing techniques and introduce his expandable text plugin.
To begin with, and with respect to Dan, I have a hard time believing his research is directly comparable to Jakob Nielsen’s. Unless he faithfully reproduced the experimental method (this isn’t made clear, and feels unlikely), arriving at exactly the same percentage is likely just coincidence.
But what really interests me is this: Whether or not we’ve successfully learned how to write for the web, web content has undoubtedly changed enormously in 20 years, and that fact alone casts doubt on his thesis.
20 years in web content
When I first used the Internet in 1997, much of it was dense text in Times New Roman. Not knowing any different, people more or less wrote as they would for print. The rest was clunky attempts at pizzazz (star-field backgrounds, terrible GIFs, scrolling banners, etc.), and well-intentioned but ultimately not-very-usable attempts at “proper” design.
While still far from perfect, most modern websites are much more usable. Lots of written content, even on privately run blogs, is created with at least a peripheral awareness of principles of writing for the web.
So assuming Dan’s method and results are sound, something’s askew. Web content is undoubtedly more usable than it was in ’97, and people are more used to reading it. So how can “engagement” still be so poor?
Reading vs. understanding
In short, I suspect he’s measuring the wrong thing. I’d guess there is a fairly constant proportion of people who don’t read articles word-for-word, no matter how well the copy is adapted to online reading habits. Why? Well, a lot of reading on the web is transactional: we’re looking to get specific information, nothing more.
And that’s fine. What’s important is comprehension. In other words, accepting that more than 80% of people will skim-read your copy for the bits they need, how do you ensure they understand what you want them to understand?
That’s exactly what Nielsen’s (and others’) guidelines are meant to address. The inverted pyramid, short sentences and paragraphs, signpost headlines, etc. They’re all about making information effective for people who won’t read it properly.
In this sense, Dan has missed the point.
All this considered, how do we measure quality engagement? Standard metrics like bounce rate and time-on-page, for instance, aren’t always great proxies. For a truly well-designed page, we might reasonably expect visitors to arrive and leave again relatively quickly. They found exactly what they came for, and went on with their lives.
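To make the point concrete, here’s a minimal sketch of why raw time-on-page can mislead. Everything here is invented for illustration: the `Session` records, the timings, and the `completed_goal` flag (which stands in for whatever success signal you actually track, such as a click on the answer link or a conversion).

```python
from dataclasses import dataclass

@dataclass
class Session:
    seconds_on_page: int
    completed_goal: bool  # hypothetical success signal, e.g. clicked the answer link

# Invented sample data: short visits can be successes, long ones failures.
sessions = [
    Session(8, True),     # found the answer fast and left: a success
    Session(210, False),  # lingered, never found it: arguably a failure
    Session(15, True),
    Session(95, True),
    Session(5, False),
]

avg_time = sum(s.seconds_on_page for s in sessions) / len(sessions)
goal_rate = sum(s.completed_goal for s in sessions) / len(sessions)

print(f"avg time on page: {avg_time:.0f}s")  # looks like "low engagement"
print(f"goal completion:  {goal_rate:.0%}")  # yet most visitors succeeded
```

A dashboard showing only the average time would flag this page as weak, while a goal-based measure tells the opposite story.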
For that reason, and as with so many things in SEO, much depends on context, intent and audience. Product page copy, blog posts, news articles… they all have different audiences and intended outcomes.
So what’s important is this: what action do you expect in response to your content, and how will you measure it?