Heed your partisans, not the static in Arbitrons
Commentary originally published in Current, Feb. 25, 2002
By Torey Malatia
How will we evaluate success in programming choices if public radio takes the third direction discussed in the accompanying article? I would maintain our most widely held objective: to build the largest possible group of listeners who are station partisans. There are highly useful statistical measures of listener "loyalty," but we must avoid misusing calculations that, like other audience data, rest on an unsteady foundation.
This article accompanies Torey Malatia's commentary on "A Third Direction" for public radio. Malatia is president and g.m. of Chicago Public Radio (WBEZ). He co-founded This American Life with Ira Glass and still provides the program's "management oversight."
Station partisans are the core listeners who seek us out for the unique service and value we provide. They choose us over anything else on the radio. The key is to enlarge this core audience.
The Public Radio Program Directors Association has stressed this idea in its recent refocusing on the development of core listeners. David Giovannoni’s Audigraphics data has tracked "loyalty," a related concept, from its early publications. To a large extent, Audience 98 studied the core listener. Partisans or core listeners are our most likely source of financial support and positive word-of-mouth.
But the new course will require using some evaluative tools, especially Arbitron surveys, differently than we do today. We’ve all been cautioned that Arbitron uses inadequate, highly volatile methodology. The question it attempts to answer—how often and how long do people listen?—can really be answered only by objective observation. Arbitron knows its weaknesses, of course. The company will begin field-testing a new monitoring technology this spring in Philadelphia that may prove vastly more accurate. But its present methodology relies on sample listeners to report their own listening in paper diaries. Extrapolating from this kind of self-reporting is venturing into the land of conjecture.
So Arbitron reports are often puzzling and contradictory. It seems unlikely, for example, that objective observation would find as many "exclusive cume" listeners, people who supposedly never hear another station, as the diaries report. (They would surely be exposed to other stations while sitting in waiting rooms, riding in cabs and so on.)
Beware of micro measures
Though we know the frailties of audience surveys, we nevertheless parse Arbitron’s tenuous data at the microscopic levels where Arbitron’s methodology is weakest.
For example, when Arbitron measures core listeners ("P1"), it simply adds up the quarter-hours in a diary. The station that the diary keeper listens to for the most quarter-hour units is the subject’s primary radio station. Audigraphics plunges much deeper, tracking hour-by-hour behavior, and defining "loyalty" as the listening given to one station as a portion of all radio listening in a time period. For public radio, this is a most valuable analysis. High loyalty (in its general sense as well as in its restricted Audigraphics meaning) is indeed our goal, central to building a healthy audience of partisans.
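The difference between the two calculations described above can be sketched in a few lines. This is a hypothetical toy example, not Arbitron's or Audigraphics' actual processing: the station names and quarter-hour counts are invented, and a real diary would be tallied by daypart and averaged across books.

```python
# Toy diary: station -> quarter-hours of reported listening in one week.
# Stations and values are hypothetical, for illustration only.
diary = {"WBEZ": 28, "WXRT": 12, "WGN": 8}

# Arbitron-style P1: whichever station got the most quarter-hours
# is the diary keeper's primary station.
p1_station = max(diary, key=diary.get)

# Audigraphics-style "loyalty": one station's share of ALL radio
# listening the subject reported in the period.
total_qh = sum(diary.values())
loyalty = {station: qh / total_qh for station, qh in diary.items()}

print(p1_station)                  # WBEZ
print(round(loyalty["WBEZ"], 2))   # 0.58
```

Note that the P1 tally is a blunt ranking, while the loyalty share captures how dominant the station is in the subject's listening, which is why the article treats loyalty as the deeper but more fragile statistic.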
But remember that this yardstick is based on shaky Arbitron surveys. Worse yet, it is based on a multi-level subdivision of Arbitron’s volatile data. Though evened out by Audigraphics’ two-book rolling average, loyalty tracking in any single Audigraphics report is inconclusive and questionable. This metric is extremely helpful, but must be tracked over time.
Though the basic Arbitron number called P1 has limitations as a measurement, it still has some immediate value. P1 counts the cumulative weekly listeners for whom yours is the primary radio station. This broad measurement tells you something worthwhile about your success in attracting core listeners, our key aim.
These P1 data are not offered in all printed reports or processing software; Tapscan, for example, does not offer them. The P1 report can be found in the commercial-radio "Maximi$er" software and, very comprehensively, in "PD Advantage."
Kurt Hanson, founder of Chicago-based Strategic Media Research, considers public radio’s growth of fervent listeners, measured by P1 figures, to be its single most useful calculation of Arbitron data. Hanson, whose background is in commercial broadcasting, is publisher of RAIN: Radio and Internet Newsletter, a daily Internet report about Web radio (www.kurthanson.com).
"I would track P1s over time as the most important thing," Hanson told me a few weeks ago. "Public radio listeners are likely to be disinclined to participate in market research studies [so] even the P1s may be an underestimate. [Nevertheless], P1s tracked over time is a relatively useful measure."
Hanson recommends deriving what he calls the Cume Conversion Rate (CCR), which is merely the percentage of cume that is core, and charting its performance over several surveys. "Typical listeners spend 15 hours weekly listening to their primary station, about five hours to their secondary station, and less than that to a tertiary station." To remain competitive, public radio must have "an extremely high conversion rate," with a goal of a core audience about half the size of the weekly cume.
This isn’t easy to achieve. As the audience becomes more fragmented, the CCR tends to fall off. Large-market stations currently have an average CCR of about 33 percent. Medium- and small-market stations may have higher rates.
Some software will calculate the CCR for you. Audigraphics subscribers can find this simple P1 percentage, based on a two-book rolling average, on page 1 of every report.
Putting ourselves in a creative vise
Misusing audience data—by expecting every hour to build audience within a few months, for example—puts programmers in a creative vise. Opportunities for experimentation and creativity vanish as we look for peak performance in slivers of our schedule, using data that would be more meaningful for measuring longer stretches of the schedule. Some programmers make changes based on three weak Arbitron surveys—about half of the adequate number—even when the incumbent programming passes the test of audience comments and fits into the station's mission and strategy. In cases like that, when P1s in the surrounding daypart are increasing steadily, a program should not be sent to the scrap heap because of a dip in loyalty or average quarter-hour listening. It may not be performing optimally, but it is not a failure.
Knee-jerk reactions to microscopic levels of Arbitron data ultimately prevent us from building distinctiveness, freshness and value. But reasoned, patient evaluation can help us truly advance our effectiveness and enhance our standards. If we are cautious in using audience data, we’ll have the freedom we need to revitalize our services, experiment with new concepts and offer programming of lasting, individual value.
Related article: It's time for public radio to head off in a Third Direction, Torey Malatia writes.
Web page posted March 4, 2002
The newspaper about public television and radio in the United States. A service of Current Publishing Committee, Takoma Park, Md.