Saturday, October 27, 2007

The Price to Pay, to Program Today

Programming music is not getting easier. PPM and the new requirement to manage the positioning of multiple stations create challenges that can’t be met with simple tools and simple programming.

There is a lot to learn. But a series of discussions this month with leading programmers has reminded me that we are well equipped to meet the challenges. At Steve Casey Research, we have the tools to help you learn what you need to learn to be successful.

This learning is the price you must pay, to be a successful programmer today. I will explain what those issues are, why you must deal with them, and how my little company can help.

Today, the successful programmer understands:

  • MusicVISTA
  • Variety Control
  • Music Fit Analysis
  • Pure Core Format Fans
  • Demographic Shift
  • Center Fit Rank
  • Passion
  • Burn Analysis
  • Discrete Music Cluster Prioritizing
  • Lane
  • Core Versus Variety Songs

Are you familiar with these terms? These are some of the issues and technologies that you simply must understand if you are to program in today’s environment. Is it hard? At first, yes it is. But it is the price to be paid, to be a successful programmer today.

We need real analysis for 3 reasons:

1. Execute your music image. Thanks to stations’ desire to manage their “lanes”, music fit analysis is a critical tool. Without it, the “lane” is decided by nothing more than personal opinion, with no component of audience feedback at the song-by-song level.

Without Pure Core Format Fans analysis, Music Fit analysis and Variety Control from Steve Casey, you are flying blind. Your plan may be excellent, but you must create a feedback loop with the listeners.

2. Identify and appeal to true loyalty. Thanks to the problems of P1 stability that have been revealed through the PPM survey, your only solid signpost is the collective group of listeners who think most alike and thus define the format. They represent your most valuable prospects.

Everybody agrees with this, and only Steve Casey Research has Pure Core to let you find and study the opinions of those who define your lane. PPM simply reveals the truth that Pure Core has always been designed for!

3. Manage your song by song flow. Thanks to the PPM, every tune-out is recorded. No “fuzzy memory diary entries” later in the day. Programmers now say, “We have to manage the presentation song by song, quarter-hour by quarter-hour.” Some leading consultants have been saying this for many years. Fit analysis, applied against an agreed-upon center music position for the station, allows you to manage flow. You can switch between core music and variety music, between hub and spoke, between vertical and horizontal.

Everybody now agrees on the need. Variety Control, combined with identifying your quintessential sound, is the analysis tool that gives a programmer the ability to manage the flow. It is your main tool for fighting back in the new world of PPM. Again, Variety Control and Center Fit Analysis were not designed for PPM; PPM simply reveals the truth that many consultants have always told us, the truth I designed Variety Control to help us manage.

The Consulting Role of Steve Casey Research

I am a consultant. I add real analysis to music tests. The value of that shouldn’t be discounted. Right now your “normal” music test is a simple device: A survey of how many people like the songs you test. And that’s all you’re getting. You’re getting a tabulation.

What I add is my specialized knowledge, skill at bridging research and programming, and advanced analysis tools. I draw bigger pictures, make additional distinctions, and warn you of problems with the test itself, the target and the station’s positioning. Now, you get an analysis.

We provide up-front consultation regarding the design of the test. We don’t charge for that. But we want you to gather high-quality data, regardless of which research company you hire to do the data gathering. I work with nearly every radio research firm, and most are great people doing excellent work.

Our MusicVISTA system is the state of the art in music research analysis. It isn’t a jazzed-up spreadsheet. It is a powerful tool.

The 20+ page analysis document I provide captures key indicators, ideas based on my expertise, and critical clustering information. That gives us a solid analytical foundation. From there, we talk.

Why aren’t we doing this for every AMT?

Because it is harder than looking at a simple ranking of songs.

Too bad, but the new world of PPM means that program directors need to learn some new skills.

It is hard. But what is hard is not my analysis, or the tools. What is hard?

  1. Understanding more precisely what your “lane” is.
  2. Understanding which songs are in and out of your lane (or center position).
  3. Knowing how to build intelligent categories to separate and control your balance.
  4. Seeing a song as a tool that helps you create consistency or variety, rather than a simple 76 Pop Index.
  5. Building intelligent clocks that keep you from straying out of your “lane” for too long, and which bring listeners to a comfortable place at key points in the hour (a rough sketch of this kind of check follows this list).
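
My actual tools do far more than any toy example can show, but a rough sketch may help make items 4 and 5 concrete. Assume, purely for illustration, that each song carries a fit score against the agreed-upon center position and that “core” means a fit of 70 or better; the threshold, the scores, and the clock below are invented stand-ins, not the Variety Control or Center Fit Rank math.

```python
# Hypothetical sketch: classify songs as "core" vs. "variety" by an invented fit
# score, then check that an hour's clock never strays out of the lane for too long.

from dataclasses import dataclass

@dataclass
class Song:
    title: str
    fit: float  # 0-100: how close the song sits to the station's center position

CORE_THRESHOLD = 70   # assumption: songs at or above this fit are "core"
MAX_VARIETY_RUN = 2   # assumption: never play more than 2 variety songs in a row

def classify(song: Song) -> str:
    return "core" if song.fit >= CORE_THRESHOLD else "variety"

def longest_variety_run(hour: list[Song]) -> int:
    """Length of the longest stretch of consecutive variety songs in the hour."""
    longest = run = 0
    for song in hour:
        run = run + 1 if classify(song) == "variety" else 0
        longest = max(longest, run)
    return longest

# A toy hour: hub (core) songs anchoring spoke (variety) songs.
hour = [
    Song("Core A", 88), Song("Variety A", 55), Song("Core B", 81),
    Song("Variety B", 62), Song("Variety C", 58), Song("Core C", 90),
]

if longest_variety_run(hour) > MAX_VARIETY_RUN:
    print("Clock strays out of the lane too long -- rework the categories.")
else:
    print("Clock keeps listeners anchored to the center position.")
```

The numbers are beside the point; the habit is not. Each song carries a measured fit, and each hour is checked against an explicit rule rather than personal opinion.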

Programmers Who Achieve, Master This

Steve Casey did not decide that these are necessary skills. PPM says they are. The structure of modern radio groups says they are. Many very smart programmers and consultants say they are. I happen to agree.

Balance Programming and Costs

When you weigh cost control against adequate performance by the station and by the programmer, this is the balancing point.

We must seek to perform well in these “hard” areas. If we don’t, then cost savings will be overwhelmed by low ratings, high staff turnover, products that the sales department can’t count on, and programming initiatives that fail not for lack of a good idea, but because of poor execution or the inability to fine-tune the execution.

There is no way around the need for new tools and new skills. Steve Casey Research exists to make it happen.

To recap my role in making you successful in today’s radio environment:

  • We provide up-front consultation regarding the design of the test.
  • We encapsulate our analysis in a software system called MusicVISTA. It is the state of the art in music research analysis.
  • The 20+ page analysis document I provide captures key indicators, some of my expertise, and critical clustering information.
  • From there, we talk.

Sunday, October 21, 2007

"Pure Core Format Fans": How We Find the Listeners that Matter Most

Memos are flying and blog posts are appearing on the subject of the “limits of P1”. PPM, which gives us more than one week of listening data per panelist, is showing us that many listeners shift their P1 station from week to week.

Some radio programmers and researchers, because of the P1/P2 shift over multiple weeks, are now saying we should give up on “loyalty” altogether, and stop thinking about P1s. But that is not correct.

The problem is not new, and I have been pointing out the limitations of our research screening for at least a couple of decades. This problem of unstable or uncertain P1 (or "most listened to") information has plagued both our weekly current music research and our library studies for years. But as long as a radio station can actually be liked more or less depending on how well it is programmed, we need to do our best to measure satisfaction.

What Is Pure Core?

Pure Core is our patent-pending technique for finding the people who act in concert to define a format through their shared set of opinions about the music. It is, in effect, a more stable "P1".

Pure Core is "P1 to the format".

Pure Core allows us to find the center of the format as constructed by the listeners, not by our personal preference. It is self-evident that the people who are most loyal to the format are our best potential customers. So we need to continue to do our best, try to make our best even better using what we learn from PPM, and then use my proven and now obviously necessary “Pure Core” analysis tools to correct the limitations of our loyalty screening.

  • We should continue to invite our best customers to participate in our studies.
  • We should work on ideas that improve the quality of our “loyalty” questions.
  • We should completely resolve the problem, once the test is complete, by focusing on Pure Core format fans, which specifically solves the problems of P1/Most/Favorite screening questions.

Detail:

We have 3 common measures of satisfaction:

1. “I listen to you most”

2. “You are my favorite station”

3. Giving us the most quarter-hours in a given week in a diary or metered survey (and we can’t use this one for screening research studies)

We know better than ever that none of these work perfectly.

But we have limited budget and sample size, so we need to do our best to recruit listeners who will help our fans stay fans, and hopefully add even more. So we screen people in with some sort of “use a lot” or “like a lot” screener. And it would be great if we could experiment a little and make the effort, no matter how imperfect, as good as possible. I’m sure there is room for improvement.

Still, when experienced researchers like Carolyn Gilbert and Bob Harper at Paragon point out the difficulties, they are right to do so.

Solution:

Today, the only way to clean this up is in the post-analysis. After the music test is complete, the Pure Core format fans can be determined. They are “P1” to the music. And not because they say so. Their pattern of responses to all the tested songs proves they are. We must get our station to appeal as much as possible to the people who create the widest possible consensus.

Pure Core is the tool that accomplishes this.
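
The actual Pure Core math is patent-pending and more involved than anything I would print here, but the general idea can be sketched. In the toy example below, the respondents and their song scores are invented, and a plain correlation with the sample’s consensus profile stands in for the real analysis: the people whose song-by-song pattern tracks the consensus most closely are the closest thing, in this illustration, to the format-defining fans.

```python
# Hypothetical sketch of "P1 to the music": rank respondents by how closely their
# song-by-song scores track the overall consensus, rather than by what they claim
# on a screener. Data and method are invented for illustration only.

import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Invented data: each respondent's 1-5 scores for the same five tested songs.
scores = {
    "R01": [5, 4, 5, 2, 1],
    "R02": [5, 5, 4, 1, 2],
    "R03": [1, 2, 1, 5, 5],   # rates against the grain of the format
    "R04": [4, 4, 5, 2, 2],
}

# Consensus profile: the average score for each song across everyone.
consensus = [statistics.mean(col) for col in zip(*scores.values())]

# Respondents whose ratings correlate strongly with the consensus are, in this
# toy example, the ones who help define the format.
for rid, row in sorted(scores.items(), key=lambda kv: -pearson(kv[1], consensus)):
    print(rid, round(pearson(row, consensus), 2))
```

Nothing here substitutes for the real analysis; it only shows why a screener answer is not required to find the people whose opinions define the format.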

Steve Casey Research is the only research company in radio looking past P1 to the true definitions of loyalty and consensus.

Of course, I'm excited about that. Pure Core is the most powerful single programming tool I have discovered. It is something I hope many more stations will take advantage of.

Not every competitive situation is complex enough, or important enough, for this extra effort. But many are, the cost is very reasonable, and Pure Core is only one of the state-of-the-art tools applied to the analysis. And as we are seeing more clearly than ever, there is otherwise a huge price to pay in terms of reduced clarity and confidence.

Tuesday, October 16, 2007

Stopset Design: Common Sense Thrown Out the Window

Several sources have reported that some Clear Channel music stations are moving from 3 stopsets to 2 stopsets per hour.

Why?

Because PPM data shows that most people are not tuning out during a stopset.

This raises a few questions:

1. When we survey listeners properly, they indicate that they prefer more frequent, shorter interruptions. Isn't that important?

2. When we survey listeners properly, they seem to become far more uncomfortable once breaks are more than 2 minutes in length. Isn't that important?

3. If a wife hasn't actually left for another state and filed for divorce, there is no problem with the marriage, right? Isn't that what these programmers are saying? Could it be that people, once they pick a station that they are mostly happy with, simply are too busy to walk over and change the station every time a commercial break comes on? If the station abuses that decision enough, might they simply pick a different station, rather than keep fussing around with their radio? To me, there is absolutely nothing in the PPM data that suggests anything about whether people are happier with long commercial breaks or short ones. They simply aren't asking for a divorce twice an hour.

4. But by the way, some of them are! It is not true that 100% of the listeners stick around through the commercial break. I'd like to know more about that:

    • How do the numbers break down by listening location? If you look only at in-car listening, where it is far less of an effort, do tune-out rates jump?
    • Has anybody actually gone to shorter, more frequent breaks and measured the effect?

5. Can anybody explain why both research survey results, and the 25+ years of the Drake-type formats seemed to show that 6-8 short breaks an hour worked well?

6. Long breaks weren't thought to be an advantage, ever, for most of commercial radio's history, until certain classic rock stations began creating very long music sweeps as a point of differentiation. But that was the answer to "What unique thing can we do and promote?" It was not the answer to the question, "What will the listeners actually enjoy the most?"

Have we forgotten that this was a promotion, and not a programming philosophy?

Monday, October 15, 2007

P1/Favorite/Most and Music Research

How do we find the people who matter most?

A number of blog entries are focused on this issue. I think most of this is fueled by the fact that the multi-week PPM measurement methodology has revealed something not available through the one week diary survey: People can change their P1 station from week to week.

What does this mean for music research?

We need to begin with the end in mind.

Our goal: Use the results of the music test to fine-tune our music, and make the people who spend a lot of time with us continue to do so.

The process: Only people who listen to us are going to quickly hear and appreciate the changes we make based on a music test.

The answer: We focus on behavior.

I suggest you screen your respondents with the phrase:

"Which radio station do you listen to the most"?

In my experience, there isn't usually very much difference between this and the attitude question "Which radio station is your favorite?" But several programmers and researchers have reported a problem with getting listeners of certain formats (like soft AC) to name the station as a "favorite".

This is worth more thought and study, particularly when we try to get maximum benefit from what are called "perceptual" surveys.

But for screening an auditorium music test, we're probably as focused as possible when we ask people about their listening, rather than their attitude.

Extra credit:

But why not have it all? Ask them about their favorite station too. And when you get a different answer, probe, probe, probe. When behavior and expressed attitude don't match up, how valuable is it for us to understand why? Very!

PPM Evidently Doesn't Solve All The Problems

"NEW YORK -- October 12, 2007: Recruiting 18-34-year-olds to join Portable People Meter panels is still a problem, said Arbitron SVP/Chief Research Officer Bob Patchen at Friday's conference call to discuss the status of the PPM rollout. "

So reports today's Radio Ink email.

Things are not getting better, are they?

For years, we worried about the difficulty of getting 18-24 men to fill out and return diaries. And we still worry. But now, the entire 18-34 demo is a problem? The 18-24 group is coming in at 62% of the target rate. And in some parts of New York, the entire 18-34 group is coming in at 55%.

This is progress?

The answer from Arbitron is to show them the money.

Yes, that will work. But there is a real danger here. It is possible that people who won't carry a harmless little PPM around unless you pay them a lot of money are in some important ways different from the people who cooperate for a lower incentive.

Given the complexity of human motivations, how are we going to understand this? I think it could be argued that every participant should receive the exact same incentive. Instead, Arbitron will raise incentives only where needed to hit the target response rate. But in the world of research, "more quantity" does not mean "more quality".

It is a problem with no easy answer. If it were easy, Arbitron would not still be struggling with it after 30+ years.

What is it about 18-34 year olds? I understand the difficulty of filling out a diary. But the little pager? It seems so easy.

What do you think?

Sunday, October 07, 2007

How Can We Learn Whether the Listeners are Still Happy?

Here is a good example of how you might take one of several approaches to asking a research question.

Listening More or Less

A lot of researchers ask something like "Compared to six months ago, are you listening more, less or the same to KXXX?"

This isn't a great question, because people don't pay that much attention to the amount of time they spend with a station. And they rarely remember accurately how much they listened, in general, several months ago.

What we're really trying to find out is whether they are happier with us or not. And if we've really irritated them, we'd like to know why.

Enjoying More or Less

So we can get closer to this with "Compared to six months ago, do you enjoy listening to KXXX more, less, or the same?"

Sounding Better or Worse

Some researchers use a slightly different question, "Compared to six months ago, has KXXX's programming gotten better, worse, or the same?"

This may be a typical example of left brain versus right brain. In the first version, we're asking for feelings. In the second, we're asking for an intellectual evaluation. In fact, both are probably useful, and if there is a way to include both, you may learn more.

And then, be sure and ask "Why?"

Changes in Preference

If you want to ask about listening more or less, you can, in a limited way. Ask "Thinking back to about six months ago, at that time what radio station did you listen to most?" If the survey participant is going to remember anything, it will be which station was in first place in their hierarchy. Needless to say, that is still far from perfect. But it can be instructive, particularly if your "Why?" question yields a coherent reason as to why their #1 choice has changed.

A System for Tracking Popularity Shifts

The best way to track churn is to make it a discipline:

1. Create an in-house tracking effort: conduct an ongoing telephone survey of station listening and preference. This is often done anyway to provide a screening opportunity for call-out music research.

2. After six months (or perhaps three, which would be more timely and useful), do a follow-up call to respondents to thank them for their participation in the survey and ask them which stations they currently listen to and prefer.

3. After you have accumulated a reasonable number of these interviews, tabulate the changes in listening that have occurred.
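
Step 3 is simple arithmetic once the two waves are matched up by respondent. Here is a minimal sketch of the tabulation, with invented respondent IDs and station call letters:

```python
# Hypothetical sketch of step 3: tabulating preference changes between the original
# interview and the follow-up call. Respondent IDs and call letters are invented.

from collections import Counter

wave1 = {"R01": "KAAA", "R02": "KAAA", "R03": "KBBB", "R04": "KAAA"}  # "listen most" at recruit
wave2 = {"R01": "KAAA", "R02": "KBBB", "R03": "KBBB", "R04": "KAAA"}  # same question at follow-up

# Count every (then, now) pair -- a simple transition table.
transitions = Counter((wave1[r], wave2[r]) for r in wave1 if r in wave2)

for (then, now), n in sorted(transitions.items()):
    label = "kept" if then == now else "switched"
    print(f"{then} -> {now}: {n} respondent(s) {label}")

# Net change in "Most" for one station across the two waves.
station = "KAAA"
gained = sum(n for (then, now), n in transitions.items() if now == station and then != station)
lost = sum(n for (then, now), n in transitions.items() if then == station and now != station)
print(f"Net change for {station}: {gained - lost:+d}")
```

Keeping the tabulation this literal makes it easy to see, case by case, who drifted away and who arrived, before you decide whether the shift is noise or news.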

An example of how this last technique was very useful:

Because it is, in effect, a panel, we can create a "second opinion" when the ratings don't go our way. A radio station was considering whether to renew the contract of the morning show. A recent Arbitron ratings trend was not very good. However, this tracking technique showed a 15% net increase in "Most" for the station, and the station chose not to panic. The next Arbitron trend bounced up higher than the first trend had bounced down. And everybody lived happily ever after.

Had the station's own tracking panel found a loss of preference, we would have invited (and paid) people who had abandoned the station in for a focus group. We would have probed for any common reasons for the loss of excitement about the station. And if the common thread had been the morning show, then the outcome could have been very different.

Saturday, October 06, 2007

PPM Gives Us Yet Another Reason to Focus on Quintiles, not P1, P2, etc.

A study of PPM results by DMR and the University of Wisconsin shows a lot of instability in which station is P1 (first preference) for a given panelist, from one week to the next. It is yet another weakness in the very term "P1".

In the thousands of books I analyzed with InstantREPLAY diary analysis, I ignored P1, P2, etc. For me, the key has always been listening quintiles. Jim Yergin with Westinghouse, when he developed reach and frequency tables in the mid-1970s, showed that there is great consistency in the quintile distribution of listening. From memory, about 66% of all listening tends to come from Q5 listeners (if you consider quintile 1 as light and quintile 5 as heavy). You could have a single P2 listener who contributes more listening than another listener who is P1 to your station.
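
The quintile arithmetic is easy to reproduce. As a minimal sketch (the weekly quarter-hour totals below are invented, not Yergin's tables), here is how you would split diary keepers into listening quintiles and see how concentrated listening is in Q5:

```python
# Hypothetical sketch: split diary keepers into listening quintiles by total
# quarter-hours and measure how much of all listening comes from Q5.
# The quarter-hour figures are invented for illustration.

def quintile_shares(quarter_hours: list[int]) -> list[float]:
    """Return each quintile's share of total listening, Q1 (light) to Q5 (heavy)."""
    ranked = sorted(quarter_hours)          # lightest to heaviest listeners
    n = len(ranked)
    total = sum(ranked)
    shares = []
    for q in range(5):
        segment = ranked[q * n // 5:(q + 1) * n // 5]
        shares.append(sum(segment) / total)
    return shares

# 20 invented diary keepers' weekly quarter-hours of listening to the station.
qh = [1, 1, 2, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 20, 30, 40, 55, 70]

for i, share in enumerate(quintile_shares(qh), start=1):
    print(f"Q{i}: {share:.0%} of all listening")
```

With this invented sample, Q5 accounts for roughly two-thirds of all listening, which is the shape of distribution described above.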

Even before the multi-week survey that is part of the PPM methodology, we knew from our experience with call-out music research and from comparing screen-in interviews for AMTs with what people put on their answer sheets that some people change their P1 station from week to week. It will be interesting to learn how much the amount of listening, as well as P1, P2, etc. choices, vary from week to week. And this will be interesting both on the individual station level and for radio overall.

Appealing to heavy radio listeners has, of course, always been important, since they are the key to building decent TSL, versus only cume. And the work done by myself and Michael Albl (then head of CMM’s Nest Marketing division) in the mid-1990s showed that it is very difficult for a diary keeper to become a Q5 listener unless they pick your station for their at-work listening. This is something else that will be interesting to explore in the new world of PPM.

Unfortunately, for research studies other than ratings, we can’t afford to pre-test people and get a detailed picture of their listening behavior. We have to go with stated station preference, which is a much more legitimate use of the term “Preference One”, to find those who are buying into what we are selling. But as the ratings show, we don’t learn nearly as much about behavior as we wish we did when somebody says we are their favorite station.

The Sky Is Cracking

Mark Ramsey, on his hear 2.0 blog, has posted this short presentation from Ford about the new Microsoft Sync in-car entertainment and communication system.

1. Thanks to Mark, again, because he keeps finding all this great information!

2. I want this! Do I have to buy a Ford? Maybe. They are evidently ready to go - now. I'm picturing a really strong reason to get that iPhone.

3. In the presentation, they use Virgin Radio, London, a station I began working with almost 10 years ago. So for a couple of minutes, I thought "cool!" Then I realized who they did not use as an example. It is sad that no US radio station or organization (can you spell NAB?) figured out how to be involved with this enough to at least get one of our stations used as an example. Wouldn't it have been great if the presenter could have made reference to the "In-Car Enhanced Experience" initiative? You know, the one that the major US commercial broadcasters are working on to bring new benefits to you through additional Internet streams?

4. Oh. No such initiative? You're kidding, right?

5. The sky is not falling - yet. But is that a crack I see up there?

As they say at the end of the presentation: "Enjoy radio from anywhere on the planet".

The only real barriers to that are size and convenience.

The cell phone has solved the first. And, by setting up cars to accept iPods and hands-free operation of cell phones, the car manufacturers and the governments who are legislating the hands-free requirements are taking care of the second.

Soon, it will be normal for your cell phone to use a wi-fi signal for data (including streaming) if available, and switch to 3G if it is not. Maybe OnStar will even flip automatically to satellite when necessary. All of this will be completely automatic.