Quality Content is in the Eye of the Consumer

Yesterday, I had the opportunity to speak on a panel at TC Camp. The topic for the panel was “What is Quality Content?” The first thing I noticed is that we have a difficult time defining the term quality when it comes to content. We all have ideas about the characteristics of quality, but we have a difficult time taking a broad view of the term itself. Here is how Oxford Dictionaries defines quality:

qual·i·ty noun \ˈkwä-lə-tē\

  • the standard of something as measured against other things of a similar kind;
  • the degree of excellence of something

Joan Lasselle, one of the other panel members, was the first to point out that, as a discipline, technical communications (and marketing communications, I might add) has no standard of excellence. I agree completely. We do not have a bar that has been set, a number, a grade, or any set of common standards that we can use as an objective measure of overall content quality.

Our measurements of quality are based largely on a subjective declaration of characteristics that we agree on. For example, Content Science has a Content Quality Checklist that contains a number of attributes, including:

  • Does the content meet user needs, goals, and interests?
  • Is the content timely and relevant?
  • Is the content understandable to customers?
  • Is the content organized logically & coherently?
  • Is the content correct?
  • Do images, video, and audio meet technical standards, so they are clear?
  • Does the content use the appropriate techniques to influence or engage customers?
  • Does the content execute those techniques effectively?
  • Does the content include all of the information customers need or might want about a topic?
  • Does the content consistently reflect the editorial or brand voice and attributes?
  • Does its tone adjust appropriately to the context—for example, sales versus customer service?
  • Does the content have a consistent style?
  • Is the content easy to scan or read?
  • Is the content in a usable format, including headings, bulleted lists, tables, white space, or similar techniques, as appropriate to the content?
  • Can customers find the content when searching using relevant keywords?

This is a nice list of items to keep in mind when you develop content. Unfortunately, when it comes to actually measuring quality, many of these attributes fall short because they are subjective. There is no common measurement that we can use to compare a piece of content against a standard. Let’s look a little more closely at a few.

Meets User Needs / Usability

We all want our content to meet the needs of our consumers and be usable. But how, exactly, can we measure this? Sure, we can look at things like the number of support calls compared to…something. Compared to other products that we have developed content for? Compared to what standard? If our content met ALL of the needs of our consumers, they’d never have to call us.

Relevance

We want our content to be relevant. Just what is relevance? Is it the same for each of our content consumers? No. It is not. We’d need to know what each and every person needs in order to be 100% sure that our content is relevant for each one. That is not possible, because we don’t know what each of our consumers believes is relevant. Relevance is subjective.

Accuracy / Meets Technical Standards

Accuracy is one of the attributes that is measurable. Oxford Dictionaries defines accuracy as “the degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard.” If we say “The phone is 5 inches long,” we are referring to an objective measurement. It either is or is not 5 inches long. Accuracy deals with facts, not feelings or opinions. The same is true with meeting technical standards. If there are standards, we can measure against them.
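Checking a factual claim like that one is simple arithmetic. Here is a tiny Python illustration; the numbers and the tolerance are invented for the example:

    claimed_in = 5.0      # the length the content claims
    measured_in = 4.98    # the length we actually measured
    tolerance_in = 0.05   # the deviation we are willing to accept

    # The claim is accurate if the measured value sits within tolerance.
    is_accurate = abs(measured_in - claimed_in) <= tolerance_in
    print(f"claim holds within tolerance: {is_accurate}")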

Readability

Readability is another measurable attribute. We have scales and formulas that we can use to score readability, such as the Flesch Reading Ease test and the Flesch-Kincaid Grade Level. Here is the formula for the Flesch Reading Ease Score (FRES):

206.835 - 1.015 \left( \frac{\text{total words}}{\text{total sentences}} \right) - 84.6 \left( \frac{\text{total syllables}}{\text{total words}} \right)

Higher scores indicate content that is easier to read; lower scores indicate content that is more difficult to read. Here is how to interpret the score:

Score        Meaning
90.0–100.0   easily understood by an average 11-year-old
60.0–70.0    easily understood by an average 13- to 15-year-old
0.0–30.0     best understood by university graduates
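To make the formula concrete, here is a minimal Python sketch. The sentence splitter and syllable counter are naive heuristics of my own, not part of the published test, so treat the output as a rough estimate:

    import re

    def flesch_reading_ease(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)

        def syllables(word):
            # Count runs of vowels; drop one for a trailing silent 'e'.
            runs = re.findall(r"[aeiouy]+", word.lower())
            count = len(runs)
            if word.lower().endswith("e") and count > 1:
                count -= 1
            return max(count, 1)

        total_syllables = sum(syllables(w) for w in words)
        return (206.835
                - 1.015 * (len(words) / len(sentences))
                - 84.6 * (total_syllables / len(words)))

    print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))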

Another test for readability is the Dale-Chall Readability Formula. It uses average sentence length and the percentage of ‘hard’ words, meaning words that do not appear on a list of roughly 3,000 words familiar to American fourth graders, to calculate a U.S. grade reading level. You can read all about the Dale-Chall Readability Formula here.
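For a sense of how the calculation works, here is a hedged Python sketch. The constants are the published ones for the new Dale-Chall formula, but the familiar-word set below is a tiny stand-in for the real 3,000-word list:

    import re

    def dale_chall(text, familiar_words):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
        # Words not on the familiar list count as 'hard' words.
        pct_hard = 100 * sum(w not in familiar_words for w in words) / len(words)
        score = 0.1579 * pct_hard + 0.0496 * (len(words) / len(sentences))
        if pct_hard > 5:   # adjustment applied to harder text
            score += 3.6365
        return score       # approximates a U.S. grade level

    # Stand-in for the real 3,000-word list.
    FAMILIAR = {"the", "cat", "sat", "on", "mat", "it", "was", "happy"}
    print(round(dale_chall("The cat sat on the mat. It was happy.", FAMILIAR), 2))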

Findability

Findability is an interesting attribute to consider. There are varying degrees of findability. Take the “macro”: Will a search engine (particularly Google) present my content to a consumer when the consumer searches for it? Companies spend boatloads of money on search engine optimization and advertising, trying to get their content to float to the top of the results page. Unfortunately, every time we seem to have figured out Google’s magic potion for raising the ranking of a webpage, Google changes the algorithm, and it changes incredibly frequently. Moz does a great job of tracking all of these changes. That said, to some extent you can measure macro findability based on page ranking. Just don’t expect the results to be the same from day to day, particularly with organic search.

Then there is the “micro”: Assuming my consumer was able to locate my content on a bookshelf, in a search engine, on my website, etc., can the consumer find the exact topic that she needs? How good is my index? How good are my cross-references? How good is the navigation within my media? Does my content even contain the information she is looking for? In both the macro and micro cases, we can measure findability in terms of the time it takes to locate a specific piece of information. For example, in the global content strategy workshop that I teach, one of the exercises is to measure the time it takes to find the Walmart.com site for India. It takes much longer than you’d expect.

In order to have a good standard, we need a baseline measurement. How long does it “usually” take to find an international website? How long does it usually take to find the instructions for how to change the lightbulb in my GE Profile microwave oven? (The answer is “too long.”) If we have a baseline measurement, then we can measure the findability of any particular piece of information against it. I don’t know of a findability metric that is shared across all content. Here is an interesting article in The Usability Blog on the topic.
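To sketch what such a baseline comparison could look like, here is a small Python example; the timings are invented for illustration, not real usability data:

    import statistics

    baseline_times = [42, 55, 38, 61, 47]       # seconds to find a "typical" topic
    observed_times = [180, 150, 210, 165, 195]  # seconds to find the topic under test

    # Compare medians so one outlier tester doesn't skew the result.
    baseline = statistics.median(baseline_times)
    observed = statistics.median(observed_times)
    print(f"baseline: {baseline}s, observed: {observed}s, "
          f"ratio: {observed / baseline:.1f}x the baseline")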

Bottom Line

The definition of quality content is a combination of many factors. Some of these factors are objective and can be measured against a standard. Others can be measured, but we need to create a baseline against which to measure them. The rest of the attributes are subjective and squishy. Subjective attributes have no measure. They only have a “feeling” or a “belief.” While feelings and beliefs can be explored and averages can be created, there is no definitive measurement for subjective attributes. What is relevant for me is not necessarily relevant for you. Organization that I deem logical may or may not jibe with the way you think.

The number of subjective attributes that we use to define quality content makes it impossible to truly measure how any one piece of content ranks. After all, what are we ranking it against?

Val Swisher

Val Swisher is the CEO of Content Rules. She is a well-known expert in global content strategy, content development, and terminology management. Using her 20 years of experience, Val helps companies solve complex content problems by analyzing their content and how it is created.

When not blogging, Val can be found sitting behind her sewing machine working on her latest quilt. She also makes a mean hummus.
Discuss
Blog · Content Development · January 26, 2014
  • http://diyblogger.net/about Dino Dogan

    did I just see a math formula? NOOOOOOOO!!!!! lol

    Great post. I agree.

    • http://www.contentrules.com Val Swisher

      I know – Ewwww Math!! But, that’s what it takes to be measurable! Alas! :-)

  • http://www.sdicorp.com/Resources/Blog/tabid/77/articleType/AuthorView/authorID/24/lkunz.aspx Larry Kunz

    Great minds think alike, Val. I saw your article just after I’d posted a piece called Good, not perfect on my own blog. Bottom line: while I appreciate the importance of all the factors you mentioned — design principles, editorial guidelines, metrics — I’m beginning to realize that quality boils down to just one of the items in your list: Does the content execute those techniques effectively?

    Now how do we define “effectively,” and how do we measure it? That’s something we still need to work on.

    • http://www.contentrules.com Val Swisher

Exactly, Larry! How do we quantify “effectively”? Therein lies the big problem. If we cannot quantify it, how do we measure it? I’m off to read your post! Thanks.

  • Shelley

    There is a book written by John Guaspari in 1985 entitled “I Know It When I See It” that very accurately defines quality by parable. Before any geniuses launch into deep discussions of what quality is, I strongly suggest you get a copy of this book… and maybe just read it!!!!

    • http://www.contentrules.com Val Swisher

1985, eh? The more things change, the more they stay the same. ;-)

  • marielouiseflacke

    Re. Readability formulas.

It might be interesting to check J. C. Redish’s article published in IEEE Transactions on Professional Communication, 1981, vol. 24, entitled “Understanding the limitations of readability formulas”.

(a) “Writers who use a readability formula, however, should do so with caution. Just as an engineer must know the specifications, uses, and limitations of any methodology or tool in the profession, the technical writer needs to understand the origins, uses and limitations of readability formulas.”

(b) “A formula that counts only sentence length and word length or familiarity of the words is not sensitive to the order of the words or the complexity of the grammar. Sentences with misplaced clauses, dangling participles, or misused words will score as well on a readability formula as sentences of equal length that have none of these problems.”

    May I also add that the above formula applies to the English language ONLY? It can’t be used to estimate the readability of any foreign language (i.e. translation).

    • http://www.contentrules.com Val Swisher

You are 100% correct. All readability formulas must be understood and taken for what they are. Things like the Flesch-Kincaid score and others like it tend to look at things like sentence length, number of syllables in a word, and so on.

This is one of the reasons why I happen to like the Acrolinx tool. They use a natural language processor (an artificial intelligence engine) that looks at significantly more things in a sentence than any of the other tools. I have found it to be more accurate in understanding various grammatical constructs, parts of speech, and more.

      Another problem we face is the new, friendly “voice” that companies are using in their content. Companies want their customers to be friends now, not just customers. Text that uses a friendly tone can be difficult for people who have English as a second language to understand (and let’s not even talk about trying to translate it!). We have a long way to go!

      Oh, and YES. This is ENGLISH only. Thanks for pointing that out!

      • marielouiseflacke

Another article that might be of interest: “Evaluating text quality: the continuum from text-focused to reader-focused methods” by Karen Schriver, published in IEEE Transactions on Professional Communication, 32, 238-255, in 1989… aren’t we re-inventing the wheel?????

        • http://www.contentrules.com Val Swisher

          To some extent we might be. But, I think that content is dramatically different today from 1989. Even technical content is written in an entirely different voice/tone, using entirely different tools. The content we create describes products and procedures that we couldn’t even dream of in 1989. But the problem remains. Quantifying quality is tricky business.

  • Mysti Berry

    Love this article. No wonder improving content quality is hard :)

    • http://www.contentrules.com Val Swisher

      Thanks! Don’t we know it!! ;-)

  • Shelley

There is an interesting point of view (at least to me) that I haven’t seen mentioned in this discussion or in any discussion I’ve read about quality. There are some things in this universe that simply cannot be measured. I believe that quality is one of those things. For example, how do you measure the value of happiness? Do you count the number of smiles in your day, or the number of compliments you receive?

    Everything that Val lists in her statement about quality is accurate and correct. But nothing in that list can be quantified. We may answer some of the questions with a “Yes” or “No,” for example, “Is the grammar correct?” But we cannot quantify any of the elements in Val’s list, for example, “Does its tone adjust appropriately to the context?” We cannot measure anything in that list.

We can measure the effects of quality. Do our customers like our products and continue to buy them? Remember, “quality is in the eye of the beholder.” We cannot assign a numeric value to this thing we call quality BEFORE it reaches that “beholder” and he/she renders a decision about how he/she feels about our product.

We can certainly ask our fellow employees and peers to “represent” the end user and render an opinion about the quality of our product. But it is exactly that: an opinion, not a measurement of quality, and that opinion is often overshadowed by the ingrained culture of the company and loyalty to our friends. We may use Val’s criteria to justify our opinions, but we cannot state quantitatively how much or how little the work meets the criteria (for most of those criteria). We never know the level of quality we actually produce until it reaches the ultimate user.

    We simply cannot quantify quality. Just my humble opinion.

    • http://www.contentrules.com Val Swisher

I agree that we cannot quantify quality. We can quantify spelling errors. We can quantify grammar errors. But quality is subjective. What is excellent quality to me might not be excellent quality to you. Agreed.
