BARB Explained: How we reduce data volatility

16 April 2019

It is widely acknowledged that panel-based research studies, in which a group of people represent a wider population, are subject to data volatility.

However, BARB’s panel is essential for us to get a real-life insight into people’s television viewing behaviour. Panels are expensive to set up and run, but they produce high-quality data. By contrast, big data generated from online sources are more cost-effective, but less valuable than the people-based data obtained from a panel. We do collect device-based census data on online television viewing from PCs, tablets and laptops, but these data alone cannot tell us about the people watching; for this, we need our panel data. The key, therefore, is how we minimise data volatility.

The BARB panel is composed of 5,300 UK households who, together, represent television viewers across the UK in terms of demography, geography, ethnicity and TV platform. An important indicator of the health of a panel is a low churn rate (the extent to which households leave the panel). The BARB panel has an average annual churn rate of 20%, which includes churn instigated by us in order to keep the panel balanced. A rate of 20% is generally regarded as low for a project of this type, which indicates that our panel data are robust.
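As a rough sketch of how an annual churn rate of this kind might be computed: the 5,300 panel size and the 20% figure come from the article, but the count of leaving households below is a hypothetical number chosen purely to illustrate the arithmetic.

```python
# Illustrative churn-rate calculation for a research panel.
# panel_size is from the article; households_left is a
# hypothetical figure consistent with the stated ~20% rate
# (it includes churn instigated by BARB to rebalance the panel).

panel_size = 5300          # households on the BARB panel
households_left = 1060     # hypothetical leavers over one year

churn_rate = households_left / panel_size
print(f"Annual churn rate: {churn_rate:.0%}")  # → Annual churn rate: 20%
```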

Nonetheless, we are constantly looking at ways to reduce data variability. The design of our panel includes targets for the number of homes of different types. We derive these targets from UK Government census categories and population figures; there is no better source to ensure that our panel properly reflects the UK population. To meet these targets and maintain the stability of our data, we continuously assess the effectiveness of our panel recruitment techniques, and introduce new techniques if required, particularly to help access hard-to-reach groups.

Data volatility is naturally greater for channels that target a smaller subset of the panel, such as channels aimed at specific groups, or those that are only available on one television platform. Bigger samples would help to reduce this variability. One potential option is to work with third-party data sources, such as information collected by television platform operators from set-top boxes. These data are often called return-path data.
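The link between audience size, panel size and volatility can be sketched with the standard error of a sample proportion. The panel size is from the article; the two viewing shares below are hypothetical examples, not reported figures.

```python
import math

# Why small-audience channels show more volatile figures:
# the relative standard error of an estimated viewing share p,
# from a simple random sample of n panellists, grows as p shrinks
# and falls only as 1/sqrt(n) as the panel grows.

panel_size = 5300  # from the article

def relative_se(p, n):
    """Relative standard error of an estimated share p from n panellists."""
    return math.sqrt(p * (1 - p) / n) / p

# Hypothetical shares: a mainstream channel vs a niche channel.
for share in (0.20, 0.01):
    print(f"share {share:>4.0%}: relative SE {relative_se(share, panel_size):.1%}")

# Doubling the panel cuts the relative SE by a factor of sqrt(2) ≈ 1.41.
print(relative_se(0.01, panel_size) / relative_se(0.01, 2 * panel_size))
```

Under these assumptions, the niche channel's estimate is several times noisier than the mainstream channel's, and enlarging the panel narrows both, which is the trade-off the article describes.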

We need to consider the impact of integrating any third-party data source on the audience levels we report across the whole population. Our investigations show that using third-party data from only one platform destabilises viewing figures for channels that are watched across multiple platforms. As a joint industry currency, we would need to work with all platform operators to generate a sample of return-path data that delivers value across the board. We don’t currently have this level of cooperation.

We are therefore focussing on the option of building a larger panel of homes that is representative of the whole UK. A larger panel is likely to mean increased cost – and wouldn’t necessarily lead to higher viewing figures – but it would help us to achieve our objective of reducing data variability overall.