This document provides recommendations regarding posting radio schedules against Arbitron ratings, with consideration of both the current diary-based service and the forthcoming deployment of the Portable People Meter℠.

Sampling issues have been addressed extensively for Arbitron's diary service; see the Radio-Schedule Audience Estimate Reliability Final Report (September 1995, republished in 2004). Although dated, this remains the most recent assessment available and thoroughly covers the issues involved in radio posting, albeit without reaching any conclusion regarding an optimal posting methodology. The paper provides a fairly detailed discussion of the statistics behind the Arbitron analysis, which we will not repeat here (the paper is available on the Arbitron site, http://www.arbitron.com). Key findings reflect common caveats in audience research:

  • Sampling error increases with tighter definitions of the period or population measured and can materially affect whether expected results are achieved
    • Individual stations or units vs. entire schedules
    • Single survey books vs. book averages
    • Specific hours vs. dayparts (especially important when hours included in a schedule are not evenly distributed across the entire daypart)
    • Tightly defined vs. very broad population segments
    • Smaller vs. larger sample in-tabs
    • And so on…
  • Forecasts of expected future performance based on historical data may not be achieved for a number of reasons, including real (unanticipated) changes in audiences, a station's failure to air purchased units, or sampling error (manifested as “bounce” in the ratings from book to book).
  • Rolling up ratings rather than impressions can introduce significant rounding error
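The rounding-error point above can be made concrete with a small sketch (all figures hypothetical, not drawn from Arbitron data): summing spot-level ratings that were each rounded to one decimal place can drift away from the GRP total computed once from raw impressions.

```python
# A minimal sketch of the roll-up issue (all figures hypothetical).
POPULATION = 1_000_000  # hypothetical market population base

# Hypothetical audience impressions for ten purchased units.
impressions = [9_600, 7_700, 8_600, 6_700, 11_600,
               12_700, 5_600, 10_700, 9_600, 8_800]

# Method 1: round each unit's rating to one decimal, then sum (ratings roll-up).
rounded_ratings = [round(100 * imp / POPULATION, 1) for imp in impressions]
grp_from_ratings = round(sum(rounded_ratings), 1)

# Method 2: sum raw impressions first, convert to GRPs once (impressions roll-up).
grp_from_impressions = round(100 * sum(impressions) / POPULATION, 1)

print(grp_from_ratings, grp_from_impressions)  # → 9.5 9.2
```

With these ten units the per-unit rounding accumulates to a 0.3-point overstatement; across a large schedule the drift can be material, which is why rolling up impressions is preferable.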

As a practical matter, purchasing a deep list of stations (a reach-oriented approach) in order to increase overall ratings reliability may not meet the needs of advertisers with a strategic emphasis on frequency. Similarly, posting on dayparts only would ignore the bias that might be introduced by “overloading” units in specific hours (e.g., 5-6a as part of a 5-10a rotation).

This poses a challenge for reliable posting of schedules in which an advertiser might reasonably hold individual stations accountable for their own performance. From a statistical point of view, employing a book average provides a simple mechanism for reducing the impact of sample bounce from book to book, but it is not a cure-all. For example, when a station changes format or on-air talent, averaging books would likely mask the resulting difference in audience, since some portion of that difference is real and some portion is sampling error, with no reliable way to separate the two.
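The trade-off can be illustrated with a simple simulation (hypothetical ratings and noise level, not calibrated to any actual market): a station's true rating jumps after a format change; single-book estimates bounce around the truth, while a trailing four-book average smooths the bounce but also lags the real shift.

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

# Hypothetical station: true rating is 2.0 for four books, then a
# format change lifts it to 3.0 for the next four books.
true_ratings = [2.0] * 4 + [3.0] * 4

# Each survey book observes the true rating plus sampling "bounce",
# modeled here as Gaussian noise with a hypothetical spread of 0.4.
observed = [r + random.gauss(0, 0.4) for r in true_ratings]

def four_book_average(series, i):
    """Trailing average of the current book and up to three prior books."""
    window = series[max(0, i - 3): i + 1]
    return sum(window) / len(window)

for i, obs in enumerate(observed):
    avg = four_book_average(observed, i)
    print(f"book {i + 1}: single-book {obs:.1f}  four-book avg {avg:.1f}")
```

In runs of this sketch the four-book average is visibly steadier than the single-book figures, but for several books after the format change it still reflects the old 2.0-level audience, which is the masking effect described above.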
