The Upfront Season, Exact Commercial Ratings and the Power of Position

As the networks, advertisers and agencies start the annual waltz of the wallet known as the upfront, I thought it would make sense to take a look at some of the value that is delivered.

It is not a surprise that where an ad is placed has a large impact on how many people see it. As an example, I looked at 1,588 30-second units from a major quick-service restaurant that ran from December 31, 2012 through February 24, 2013. The chart below shows the index of the Exact Commercial Rating™ to the telecast rating for the restaurant’s spots. (The index is the exact-second rating for the ad divided by the telecast rating.) The live indices are the teal columns; the indices for the 3-day DVR audience are shown in orange.

ECR Index
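
For those who like to see the arithmetic, here is a minimal sketch of the index calculation; the function name and the example ratings are hypothetical, not actual spot-level data.

```python
# A minimal sketch of the index arithmetic defined above. The function name
# and the example ratings are hypothetical, not actual Rentrak values.

def ecr_index(ad_rating: float, telecast_rating: float) -> float:
    """Index of an ad's exact-second rating to its telecast rating, base 100."""
    return 100.0 * ad_rating / telecast_rating

# Hypothetical spot: a telecast rating of 2.0, a live exact-second ad rating
# of 1.86 and a 3-day DVR exact-second ad rating of 1.244.
live_index = ecr_index(1.86, 2.0)    # -> 93.0
dvr_index = ecr_index(1.244, 2.0)    # -> 62.2
```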

So, while on average 93 percent of the live telecast audience watched the ad when it was the last ad before a program promotion, only 62.2 percent of the 3-day DVR audience did. In contrast, when the ad was the first paid ad after a program promotion, it actually did slightly better than the telecast average among live viewers, but only 87.9 percent of DVR viewers saw it. Where an ad is positioned in a pod clearly has an impact.

As the pie chart below shows, 61 percent of the quick-service restaurant’s ads ran in the middle of pods. The brand did do well in securing first positions, which accounted for 24 percent of its ads, versus 15 percent in last positions.

Distribution of QSR Ad Copy by Position in Pod

There was also a difference by network. The table below shows the range of the average 3-day DVR ad index to telecast across the 40 networks the restaurant used. The index ranged from 85 down to 53.5. Networks are not identified, since many factors could have affected the scores, including daypart mix, position mix, average rating, genre, etc.

Range of Network 3 Day Ad Indices

In fact, I made several attempts to build sophisticated models to predict what the 3-Day DVR Index would be by including network, daypart, program rating, pod position, and percent of the program that was live. I failed.

Then I tried something simpler. I asked myself: what if each network had the same distribution of ads by pod position as the average network? In short, if you truly created the same rotation across networks, what would happen to the indices? I could only do this with the networks that had ads placed in every position, which cut the list down from 40 to 13 networks. The chart below shows, in teal, each network’s original average across all ad positions; the orange line shows what happens when I weight-average the results to reflect the distribution of units across all networks. With only one exception, every network’s score went up.

Range of 3 Day DVR Ad Index for Networks with QSR Ads in Each Pod Position
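
For the technically curious, here is a minimal sketch of that weight-averaging step, assuming the inputs are each network’s average 3-day DVR index by pod position and its count of units in each position; the data structures and names are illustrative, not the actual workup.

```python
# A minimal sketch of the weight-averaging described above. The input
# structures and names are illustrative only: per-network average 3-day DVR
# indices by pod position, plus unit counts by position.

from collections import Counter

def common_rotation_indices(indices_by_network, units_by_network):
    """indices_by_network: {network: {position: avg_index}}
       units_by_network:   {network: {position: unit_count}}"""
    # Build the all-network distribution of units by pod position.
    totals = Counter()
    for counts in units_by_network.values():
        totals.update(counts)
    grand_total = sum(totals.values())
    mix = {pos: n / grand_total for pos, n in totals.items()}

    # Re-weight each network's per-position indices by that common mix,
    # as if every network ran the same rotation.
    return {
        network: sum(mix[pos] * idx for pos, idx in per_position.items())
        for network, per_position in indices_by_network.items()
    }
```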

So, in other words, if you create a buy where the ads are distributed across networks in the same manner, the holding power of your ads won’t differ as much. A “fair” rotation works. But, paradoxically, an “unfair” buy can work as well. If an advertiser decides to push for those first positions, more people will see its ads. That push for first becomes an interesting item to negotiate. And only Rentrak, with its second-by-second Exact Commercial Ratings™, can provide the granularity and stability to make that type of buy.


In case you don’t know, I am Bruce Goerlich, Chief Research Officer at Rentrak, the global standard in movie measurement and your TV Everywhere measurement and research company. I have been in the research end of the marketing business for more than 30 years, primarily on the ad agency side, with my last stint prior to Rentrak as President, Strategic Resources, Zenith Optimedia North America. Somewhere along the way I morphed from young Turk to old fogey. Now that I have grey hair and am horizontally challenged, I can speak with some authority on advertising and research issues – which I will do from time to time on this blog.

Sandy & TV Viewing – Another Look

In the last blog post, we saw that in New York, TV viewing rose in the afternoon hours and then fell as power was lost throughout the region. I’d like to take a look at other markets: did a similar bump and fall in viewing occur, and did viewing go back to normal after the storm passed? We also saw that affiliates’ share rose during the storm. Did that happen across markets, and did the share fall back afterward?

For those short of attention span (or time), the answer is that across the markets we looked at, including Washington, D.C., Philadelphia, New York and Boston, there was a consistent pattern of higher viewership during the day Sandy hit, which then fell back to normal after the storm. Only New York, which was heavily hit by power outages, had a large drop in Prime viewing during the storm. All the markets had a jump in affiliate share across the day of the storm, which then fell back the week after.

Local news is clearly a place where viewers go to seek information. In addition, viewers quadrupled their ordering of movies On Demand, and all On Demand viewing increased by 60% on average across the markets. People watched what they wanted to. Bottom line, TV is still a connecting, powerful medium – though not as strong as Mother Nature! And only Rentrak, with our “big data” approach, can provide these insights.

Details to follow.

The chart below shows Homes Using Television (HUT) in the New York market by hour for the Monday before the storm, October 22; the day of the storm, Monday, October 29; and Monday, November 5, the week after the storm hit. These projections are based on the approximately 90,000 homes Rentrak has in the market. (Note that our HUTs are higher than the traditional metric because we do not filter out duplicate viewing: when a home views more than one program, we count that home twice in our HUT.) What you can clearly see is an increase in daytime HUT on the 29th, when many more people were at home, versus the 22nd… and then the storm hit, power was lost in many areas, and viewership never grew to what is normally seen in Prime. By the week after the storm, viewership patterns came back to normal. Prime Time viewing was a bit lower on November 5 than on October 22, when the last Presidential debate brought in more viewers.

[Chart: New York HUT by hour, October 22, October 29 and November 5]
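
As a toy illustration of that duplicate-viewing note, the snippet below shows how a home tuned to two programs in the same hour counts twice in our HUT but only once in a unique-homes count; the records are made up.

```python
# Toy illustration of the duplicate-viewing note above (made-up records,
# not Rentrak data): a home tuned to two programs in the same hour counts
# twice in this HUT, but only once in a unique-homes count.

tuning_records = [                    # (home_id, program) pairs for one hour
    ("home_1", "Local News"),
    ("home_1", "Movie On Demand"),    # same home, second program
    ("home_2", "Local News"),
]

hut_with_duplicates = len(tuning_records)                      # 3
hut_unique_homes = len({home for home, _ in tuning_records})   # 2
```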

This pattern, without the sharp loss of viewership in Prime due to power issues, was reflected in Washington, D.C. (with more than 50,000 homes in the Rentrak footprint), Philadelphia (with more than 30,000 Rentrak homes) and Boston (with more than 20,000 Rentrak homes). Of the three, Philadelphia had the greatest drop in Prime, though it was slight compared to New York.

[Charts: HUT by hour for Washington, D.C., Philadelphia and Boston]

What is also interesting is the share of viewing that the major affiliates had across all markets. Their share picked up during the day and held fairly steady, even as the storm raged. Numbers came back to normal across all markets the week after the storm. All the markets had a “bump” in Prime share for the affiliates during the Presidential debate on October 22.

[Charts: affiliate share of viewing by hour for each market]

In terms of Video On Demand, as the table below shows, comparing the number of transactions during the storm to the Monday before the storm, there was a huge spike in viewing across all markets, with the biggest increase, more than 400%, coming in paid transactions for movies. Pay cable On Demand transactions went up by more than 50%, as did viewing of free TV programs. The numbers may even be understated in New York because Time Warner lost reporting information from one of its data warehouses.

[Table: On Demand transactions, day of the storm versus the Monday before]

And by the week after the storm, on November 5, VOD viewership fell back, much closer to the levels of the Monday before the storm.

[Table: On Demand transactions, November 5 versus the Monday before the storm]

So what we first saw with the storm in one market holds true across several markets. TV usage was up, local station viewership was up, and VOD transactions were up. And might I say, only Rentrak, with tens of thousands of homes in these markets and millions across the country, along with a census view of On Demand transactions, can give its clients this sort of robust learning.


Political Ratings – A Nation Divided?

As we head into the closing weeks of the election season, I’d like to highlight the relationship between political orientation and TV program selection. As you fans of this blog know, Rentrak has a political segmentation. We based it on actual program viewership, with the anchors of the “news” networks MSNBC and Fox being the left and right ends of the spectrum. Based on hours of viewership, we can divide our millions of homes along a spectrum from “very liberal” to “very conservative,” with additional “low involvement” and “mixed” groups.

If we look at the two poles of “any liberal” and “any conservative,” we get, at first glance, a very polarized viewing community, as the graph below shows. The horizontal axis is each program’s index among conservative viewers relative to the average, computed for over 7,500 prime-time programs across 230 networks in September. The vertical axis is the index among liberals for those same programs. We have removed all programs from the news networks MSNBC, Fox, PBS and CNN. In addition, to keep the graph readable, we are only showing the top 750 programs, those with a .3 rating or higher.
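
The exact index formula isn’t spelled out here, but as a rough sketch, think of each axis as a segment’s rating for a program divided by the program’s overall rating, times 100; the names and numbers below are illustrative assumptions only.

```python
# One plausible reading of the segment indices plotted above, as a sketch.
# The exact definition isn't spelled out in the post, so treat the formula,
# names and numbers as assumptions for illustration only.

def segment_index(segment_rating: float, overall_rating: float) -> float:
    """A program's index for a segment versus its overall rating, base 100."""
    return 100.0 * segment_rating / overall_rating

# Hypothetical program: 0.50 rating overall, 0.65 among "any conservative"
# homes and 0.40 among "any liberal" homes.
x = segment_index(0.65, 0.50)   # conservative index -> 130 (horizontal axis)
y = segment_index(0.40, 0.50)   # liberal index      -> 80  (vertical axis)
```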

It really does seem like a divided country. There aren’t many shows in the upper right-hand quadrant, appealing to both liberals and conservatives, and there aren’t many in the lower left-hand quadrant, shows that aren’t above average for either group. It looks like a fairly tight line, with shows skewing either liberal or conservative.

Okay, so what is each side watching that the other isn’t? Let’s dive into the liberals first. (No bias intended here!) The graph below “blows up” the liberal quadrant, the upper left from the first graph. We’ve only included programs with an index greater than 110 for liberals and less than 90 for conservatives. The size of each point reflects its total U.S. rating: bigger points have higher ratings. I’ve also called out a few programs by labeling them and coloring them red. (To reflect the “red, white & blue” of our nation’s flag, not for any partisan comment!)

The theme for liberal shows is comedy, Hispanic programming, adult-oriented cartoons and sitcoms. There isn’t a cop show or a western in the bunch!

It looks very different when you apply the mirrored filters and look at the conservative quadrant (the lower right quadrant from the original graph), as shown below. Here we have detective shows, older dramas, NASCAR and religious programs popping up.

So are we doomed to a country without a common cultural heritage (yes, TV is culture!)? There is hope. A lot of TV does sit in the middle, not quite skewing overly conservative and not quite skewing overly liberal. When we look at those shows in the middle, with an index between 90 and 110 for both liberals and conservatives, you get quite a healthy list of shows. The graph below takes the shows right from the middle section of the first quadrant map.

The Simpsons skews a bit liberal, but it doesn’t lose too many conservatives. Vegas and Survivor: Philippines do a bit better with conservatives, but liberals aren’t running screaming out of the room. And the highly rated shows like Sunday Night Football and Dancing with the Stars are getting both Donkeys and Elephants.
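
Pulling the three cuts together, here is a minimal sketch of the filters behind the quadrant blow-ups and the middle band; the 90/110 thresholds come from the text above, while the function and labels are just for illustration.

```python
# A minimal sketch of the three cuts described above. The 90/110 thresholds
# come from the text; the function and labels are illustrative only.

def classify(liberal_idx: float, conservative_idx: float) -> str:
    if liberal_idx > 110 and conservative_idx < 90:
        return "liberal skew"        # upper-left quadrant blow-up
    if conservative_idx > 110 and liberal_idx < 90:
        return "conservative skew"   # lower-right quadrant blow-up
    if 90 <= liberal_idx <= 110 and 90 <= conservative_idx <= 110:
        return "middle"              # appeals to both sides
    return "other"
```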

Three lessons here, I think: 1) If you want to, you can target one side of the political fence on a concentration basis (i.e., just talk to the political beast you want to talk to). 2) You can also talk to both sides at once on TV, and talk to a lot of them at the same time. 3) Knowing which programs to pick for concentration or conciliation isn’t that simple; it requires a finely tuned segmentation tool. And Rentrak has it.


Sticks and Stones

One thing I love about TV (and there are a lot of things besides just veg’ing out in front of one, which I highly recommend) is all the ways it can deliver value to marketers beyond just audience levels. A couple of those ways are captured in Rentrak’s weekly release with Bluefin Labs on Stickiness and Social Media. (Hence the allusion in this blog’s title, for those not quick of mind: “Sticks and stones can break my bones but names can never hurt me.”)

Stickiness equals engagement, and engagement delivers more impact for advertisers. Stickiness is a measurement of time spent viewing. The value of time spent for an advertiser is that the more time a person chooses to spend with a program, the more impactful the ads in that program are (see the footnote for more on this). Because the Stickiness Index is based on time spent, and most programs run no longer than 180 minutes, its values do not go beyond that range.

The Social Media Index measures chatter about TV telecasts. Chatter equals “what’s remarkable” – quite literally, these are the TV episodes and events that evoke remarks from the audience. The Social Media Index is important for advertisers who want to be topical. Bluefin Labs tracks comments made on Twitter and public Facebook accounts about TV programs within +/- 3 hours of the telecast airing window. Programs that generate the most social media comments will have a higher Social Media Index rating. The Social Media Index covers a range from 0 to many thousands; its value indicates the amount of social media “chatter” that a given TV show is generating relative to all the shows that were discussed via social media.
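
We don’t publish the formulas behind these indices in this post, so the sketch below only illustrates the two underlying ideas, a time-spent measure per program and a chatter measure relative to all discussed shows; the math, scaling and names are illustrative assumptions, not the actual Rentrak or Bluefin Labs definitions.

```python
# Illustrative sketches of the two ideas only; the math, scaling and names
# are assumptions, not the Rentrak or Bluefin Labs definitions.

def time_spent_measure(minutes_viewed_per_home: list[float]) -> float:
    """Average minutes spent with a telecast per viewing home (stickiness idea)."""
    return sum(minutes_viewed_per_home) / len(minutes_viewed_per_home)

def chatter_measure(program_comments: int, avg_comments_all_shows: float) -> float:
    """Comments about a telecast relative to the average discussed show, base 100."""
    return 100.0 * program_comments / avg_comments_all_shows
```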

So here we have two metrics, involvement and remarkability. How do these metrics relate to each other? To overall ratings? To DVR recording? In short, how can these be leveraged? To help look at this, we went over 12 weeks of our summer Engagement reports.

The Bluefin Labs Social Media Index doesn’t connect in a simple way with the traditional viewership metrics of ratings and DVR playback lift. (In each chart below, a dot is a program, with the vertical axis being the Bluefin Labs Social Media Index and the horizontal axis being either DVR playback lift or ratings. Some programs are labeled so I can hammer home my points.)

The chart below doesn’t show much more social chatter as DVR playback increases. Why is this? Well, the most talked-about programs were the Olympics and the NBA Finals, along with one special event. And sports (and other live events) aren’t recorded and played back at high levels. (“Don’t tell me what happened!”) But sports are talked about. Consumers tend to tweet about TV when they are watching live: people tweet because they can have a shared TV experience with others; the TV content “syncs” everyone. When consumers watch via DVR, they tend not to tweet because there’s no notion of a shared experience.

When we look at ratings and social media chatter, the same phenomenon continues. There is a bit of a step function: the high-rated events just pop out in terms of “internet water cooler chatter.”

The multiple leverage value of sporting events continues when we look at the interaction of Rentrak’s Stickiness and Bluefin Labs’ Social Media Index.

We can see program involvement scoring high with these mega sports events, both in terms of Engagement – eyes staying on the screen – and the Social Media Index – talking about the game around the virtual water cooler. When you throw in the high ratings, it is no wonder sports events can get those high CPMs.

Just a final note: we will go back later this quarter to do a special look at these metrics for new-season programs. The Olympics and the NBA were so strong this summer that they swamped a detailed look at the power of engagement and chat on regular series – where a lot of the action happens!



Footnote: This goes back to work done at Zenith Optimedia, where the agency showed that if one person chose to watch only the first 15 minutes of a program, a second person watched the full 30 minutes, and both were asked to recall an advertisement that ran in the first 15 minutes, the second person’s recall of the ad was much higher.

‘P’ is for Politics

As we wade knee-deep into the presidential race, often called the “silly season,” I’d like to turn my readers’ attention to the relationship between how people vote and what they watch. One of the cool things about having millions of TV sets to play with is the ability to set up robust segmentation systems that actually work in the marketplace. This is what Rentrak has done in the political arena by creating seven groups of homes: Very Liberal, Somewhat Liberal, Middle of the Road, Somewhat Conservative, Very Conservative, Low Involvement and Mixed. Households were scored on their viewership of 50 programs identified in surveys as being very liberal or very conservative. Low Involvement homes watched hardly any of these shows, and Mixed households watched a lot of both conservative and liberal shows. (See the grid below.)
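
For the technically minded, here is a toy sketch of how a home might land in one of the seven groups; the hour thresholds and cutoffs are hypothetical, since the actual assignment rules aren’t published here.

```python
# A toy sketch of the household scoring described above. The hour thresholds
# and cutoffs are hypothetical; the post does not publish the actual rules.

def assign_segment(liberal_hours: float, conservative_hours: float,
                   low: float = 1.0, high: float = 10.0) -> str:
    """Place a home in one of the seven groups based on its hours of viewing
    of the 50 benchmark 'very liberal' and 'very conservative' programs."""
    if liberal_hours < low and conservative_hours < low:
        return "Low Involvement"
    if liberal_hours >= high and conservative_hours >= high:
        return "Mixed"
    lean = liberal_hours - conservative_hours
    if lean > high:
        return "Very Liberal"
    if lean > low:
        return "Somewhat Liberal"
    if lean < -high:
        return "Very Conservative"
    if lean < -low:
        return "Somewhat Conservative"
    return "Middle of the Road"
```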

This system works really well in showing the alignment between voting and viewing. Going back to the days of yesteryear, i.e., January, we compared the viewership composition of the Republican debate prior to the primary election across counties that Mitt Romney won and counties that Newt Gingrich (remember him?) won. It is clear that viewers in the 34 counties that voted for Romney were more moderate and liberal, and viewers in the 33 counties that voted for Gingrich were more conservative. Perhaps if Gingrich had used Rentrak’s data to place more targeted ads against moderates in the Romney counties, he could have done better?

Being able to identify the political leanings of viewers can be a powerful aid in more efficiently placing the huge amounts of money political campaigns spend. Maybe “silly” can now be replaced with “sensible.”
