Monday, March 17, 2008

Web 3.0: Use It. Or Lose It?

Web 3.0

Revolutionary technology that Amazon uses.

You can use it too, and increase ratings. Or ignore it. And lose listeners.

Why does Amazon know more about your listeners than you do? Why are some Internet radio stations programming music better than you?

Give me 2 minutes and I’ll tell you why. Give me 1 minute more and I’ll tell you how to fix it.

Remember Web 2.0? So yesterday. Web 3.0 is today. It is the harnessing of patterns found in user databases to drive collaboration and build optimized products.

They call it Collective Intelligence.

One of the best examples of a Web 3.0 company is Amazon. If you're like me, you've often found their recommendations for other books incredibly insightful. I know I often leave the site having bought a book or two more than I intended! In fact, their 'if you like this, then you may also like that' approach may feel like a recommendation from a friend who knows you very well, but it's actually created using cutting-edge technology that does far more than simply suggest their best-sellers. It analyzes the most popular titles among people who like the same kind of books that you do, so the recommendations are much more likely to 'fit' you.

How we compare to Amazon:

Amazon will send you a notice based on your preferences in the context of other people.

“We noticed you bought Sahara by Clive Cussler. Others who have bought that book have also enjoyed Earthquake by Jack Du Brul.”

How do they know that?

They know it because they built a collaborative intelligence engine based on purchases of books.

What you need to succeed with your listeners is a collaborative intelligence engine based on opinions about songs.

They noticed that people who like Clive Cussler also like Jack Du Brul.

You need to notice that people who like Elton John also like Phil Collins.

An AMT (auditorium music test) doesn't become smart because of how you gather the data. It's about how you analyze it.
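What does that analysis look like? Here is a miniature sketch, with invented respondents, songs, and scores (not the MusicVISTA engine itself), showing the core move: mining a table of song opinions for songs the same people love.

```python
# A toy "people who like X also like Y" engine built on AMT-style song scores.
# All respondents, songs, and scores are hypothetical.
from itertools import combinations
from collections import Counter

# Each respondent's scores: 1 (hate it) to 5 (love it).
scores = {
    "resp1": {"Elton John - Daniel": 5, "Phil Collins - One More Night": 5, "Metallica - One": 1},
    "resp2": {"Elton John - Daniel": 4, "Phil Collins - One More Night": 5, "Madonna - Vogue": 2},
    "resp3": {"Metallica - One": 5, "Madonna - Vogue": 1, "Elton John - Daniel": 2},
}

LIKE = 4  # a score of 4 or 5 counts as "likes it"

pair_counts = Counter()
for ratings in scores.values():
    liked = sorted(song for song, score in ratings.items() if score >= LIKE)
    # Count every pair of songs this one listener likes together.
    for a, b in combinations(liked, 2):
        pair_counts[(a, b)] += 1

# Songs most often liked together are the "if you like this, try that" pairs.
for (a, b), n in pair_counts.most_common(3):
    print(f"{n} listener(s) who like '{a}' also like '{b}'")
```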

Dumb AMTs are like the back page of a trade magazine. Score and rank only. No understanding of common tastes. Primitive. Loser.

You can do better. Much better. And you must.

Two minutes are up. Here’s how to fix it.

I provide leading-edge analysis, collectively called MusicVISTA 3.0, that applies these same Web 3.0 technologies to the music tastes of radio listeners.

Music fit analysis. Pure Core format optimization. Foundation Cluster segmentation. Design based on 35 years of experience. Stations in more than 40 countries win with these tools, provided by Steve Casey Research.

With an AMT, we create, through our music test sample, our own database of “customers”. In the past, we tabulated. Today, we mine the data for meaning. Rather than purchases, we have song preferences.

With an AMT, we present our own list of “books” and determine which they “buy”. And we do something even more powerful than Amazon does. We find out which ones they won’t “buy”. It is as though Amazon sent out free books for everybody to evaluate. Our techniques are actually superior to those of Amazon.
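To see why the negative data matters, consider this sketch (invented scores, not our actual algorithm): map each 1-to-5 test score onto a signed scale so that a dislike actively pushes two songs apart, something purchase-only data can never do.

```python
# Why knowing what listeners WON'T "buy" matters: a purchase-only engine sees
# only positive events, but AMT scores carry dislikes too. Mapping scores onto
# a -1..+1 scale lets a dislike push two songs apart. Numbers are hypothetical.
import math

def signed(score):
    """Map a 1-5 test score to -1..+1: 1 -> -1 (reject), 3 -> 0, 5 -> +1 (love)."""
    return (score - 3) / 2

def similarity(a, b):
    """Cosine similarity between two songs' signed score vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Five listeners' scores for three songs.
song_a = [signed(s) for s in (5, 5, 4, 1, 2)]
song_b = [signed(s) for s in (4, 5, 5, 2, 1)]  # loved and hated by the same people
song_c = [signed(s) for s in (1, 2, 1, 5, 5)]  # the mirror image of song_a

print(f"A vs B: {similarity(song_a, song_b):+.2f}")  # strongly positive
print(f"A vs C: {similarity(song_a, song_c):+.2f}")  # strongly negative
```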

Amazon sells books. What does that have to do with programming music?

We sell an enjoyable experience that you ‘pay for’ by listening to the commercials. The songs we play most often, the sounds we keep coming back to and the artists we play out of stopsets are all “relevant recommendations” to our listeners that make the cost of listening worth it. We say, “If you like this music most, we are the right station for you”.

So we know not to recommend Martha Stewart books to Clive Cussler fans. Or Metallica to Madonna fans.

We can find a valid center of gravity. Are we about Clive Cussler or are we about Martha Stewart?  Or both? The patterns in the data (whether from the Web or from an AMT) can guide us.

Can we play both? Yes, if they both satisfy members of the same “tribe”. By using collaborative intelligence we can accomplish the two things we must do (a sketch follows the list):

1. We know when we're going off on a tangent.

2. We program our station in such a way as to get back to a comfort zone quickly, so people don’t wait very long for what they came to our station to hear.
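A minimal sketch of both jobs, using invented data and a deliberately crude "tribe" rule (real work uses far more listeners and more careful clustering):

```python
# (1) Flag when a song is a tangent for our "tribe", and (2) know which songs
# bring us back to the comfort zone. All data and song names are hypothetical.
CORE_SONGS = ["Cussler-ish A", "Cussler-ish B"]  # hypothetical format anchors

scores = {  # listener -> song -> 1..5 score
    "r1": {"Cussler-ish A": 5, "Cussler-ish B": 4, "Stewart-ish X": 2, "Bridge song": 4},
    "r2": {"Cussler-ish A": 4, "Cussler-ish B": 5, "Stewart-ish X": 1, "Bridge song": 5},
    "r3": {"Cussler-ish A": 2, "Cussler-ish B": 1, "Stewart-ish X": 5, "Bridge song": 3},
}

# The "tribe": listeners whose average score on our anchor songs is 4 or higher.
tribe = [r for r, s in scores.items()
         if sum(s[c] for c in CORE_SONGS) / len(CORE_SONGS) >= 4]

def tribe_score(song):
    """Average score for a song among tribe members only."""
    return sum(scores[r][song] for r in tribe) / len(tribe)

for song in ["Stewart-ish X", "Bridge song"]:
    ts = tribe_score(song)
    verdict = "TANGENT - follow with a core song" if ts < 3.5 else "safe for the tribe"
    print(f"{song}: tribe score {ts:.1f} -> {verdict}")
```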

We have to be cohesive enough to be trustworthy.

If Amazon recommends books that don’t make sense for you, or Match.com sets you up with lousy dating partners, they will quickly lose your trust. Our listeners have to know that we give them the type of music they like without making them wait too long. Or we lose their trust. We must be consistent enough to be credible and worthy of their attention.

To do that for the most people possible, we use some very powerful tools that work just like the collaboration tools deployed across Web 3.0. We look at who is most excited about the music. Where is the greatest agreement? What is the center of the format? These tools work every time. But they work because they build on the following:

We are a recommendation engine for people who like: ____________________.

And we need to learn what gets written in the blank.
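One simple, purely illustrative way to read the blank out of the data: score every song on excitement (how high the average is) and agreement (how tight the scores are). The format's center is where both are strong.

```python
# Rank songs by excitement (mean score) and agreement (low spread).
# Scores are invented; "center index" is one crude way to combine the two.
import statistics

song_scores = {  # song -> list of listener scores, 1..5
    "Song A": [5, 5, 4, 5, 4],   # loved, and everyone agrees
    "Song B": [5, 1, 5, 1, 3],   # polarizing: the same mean tells two stories
    "Song C": [3, 3, 3, 3, 3],   # nobody's excited
}

for song, s in song_scores.items():
    mean = statistics.mean(s)
    spread = statistics.pstdev(s)  # low spread = agreement
    center = mean - spread         # a crude "center of the format" index
    print(f"{song}: excitement {mean:.1f}, spread {spread:.2f}, center index {center:.2f}")
```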

It really is that simple. Looking at the patterns of agreement within user databases is at the heart of the most innovative businesses: Google's PageRank system that you use to search the Internet, Amazon's recommendation engine, Match.com's matchmaking algorithms, music recommendation engines like Last.fm, and more.

In fact, radio stations are even more about affinity groups than these other businesses are. We need to be more focused than Amazon.

I have created powerful tools, proven and refined over a decade around the world, that will help you understand these patterns and use that information to greatly improve the quality of your programming and the size of your audience. Rush to catch up with other leading businesses. Call me at 406.209.1541 or write me at scasey@UpYourRatings.com. I’ll explain, we’ll plan, you’ll succeed.

Friday, March 07, 2008

PPM: Concept Versus Implementation

In every human endeavor two things have to be right before success occurs.

The concept has to be good. A lot of people think that PPM electronic measurement is a good concept. The owners of radio groups have signed off on the concept.

I'll come back to the quality of concept issue. I don't think it is as clear as some believe.

But implementation clearly isn't working out.

1. Sample size. It stinks. Maybe that is because broadcasters are too cheap to pay the needed costs. But when you look at the way they've managed their stations over the past few years it seems very difficult to believe that they would compromise quality just to cut costs. Right.

2. Compliance. Didn't they test this? Did nobody know that people would leave these things in their cradles? Did nobody know that women would lock them up in their office desks when they went out to lunch? Arbitron tells participants to wear the meter "rise to retire". But they don't get it. And so far, we are evidently such sheep that we will say "ok!" to the notion that everything is working when a survey participant only carries the PPM around for 8 hours. That's all Arbitron asks for. What happened to "rise to retire"?

Now, about that concept quality Kool-Aid.

1. To make this thing worth the cost, it was necessary to paint the diary as a dismal failure. But the diary has been tested thoroughly. About 15% of listening is lost with a diary, and most of that is incidental and very short-term listening. Yet the quotes about the "broken and outdated diary" keep coming. Tell a lie often enough...

15% versus 50% when the PPM sits in the cradle or locked up in a desk or buried in a purse.

2. One of the promised benefits was granularity. You could see people tuning out of individual songs. Forget that. The sample size doesn't begin to support it. I don't see that ever changing, do you? It is a nice idea, but we in the radio industry simply don't want to pay for it.
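A back-of-envelope calculation shows why the sample can't support it. The meter counts and the 2.0 rating below are assumptions for illustration; the arithmetic is just the standard error of a proportion.

```python
# Why minute-by-minute "song tune-out" readings are mostly noise: the standard
# error of a rating estimated from n meters. Sample sizes and the 2.0 rating
# are assumed for illustration.
import math

def rating_standard_error(rating_pct, n_meters):
    p = rating_pct / 100.0
    return 100.0 * math.sqrt(p * (1 - p) / n_meters)

for n in (500, 1000, 2000):
    margin = 1.96 * rating_standard_error(2.0, n)
    print(f"{n} in-tab meters: a 2.0 rating is really 2.0 +/- {margin:.2f} "
          f"(95% confidence), minute to minute")
```

With a few hundred meters in tab, the song-to-song differences we were promised are smaller than the noise.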

3. Arbitron likes to show how the PPM can measure the audience for a single football game. I was creating that same analysis for WGN, Chicago as long ago as 15 years, using diaries. Yes, the numbers are bigger for the PPM survey than for the diary survey. But how many of those numbers are from very short listens? Perhaps the sample size is larger; their charts don't show that. But that's only statistics anyway. More important is the actual measurement: what is the TSL distribution curve? A manager I spoke with was told by an Arbitron executive that they are trying to make it possible to answer that in the future. But not yet. Why not? Isn't this thing ready for prime time?
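The tabulation itself is trivial, which makes the silence stranger. Here is a sketch, with invented session lengths, of the distribution I am asking for:

```python
# What the charts don't show: a TSL distribution instead of one big cume
# number. Bucket each panelist's exposure minutes for the broadcast.
# Session lengths are invented for illustration.
from collections import Counter

session_minutes = [2, 3, 3, 5, 8, 12, 15, 45, 60, 90, 120, 180]  # hypothetical

buckets = [(0, 5, "under 5 min"), (5, 15, "5-15 min"),
           (15, 60, "15-60 min"), (60, 10**9, "60+ min")]

dist = Counter()
for m in session_minutes:
    for lo, hi, label in buckets:
        if lo <= m < hi:
            dist[label] += 1
            break

total = len(session_minutes)
for _, _, label in buckets:
    n = dist[label]
    print(f"{label:>12}: {n:2d} sessions ({100 * n / total:.0f}%)")
```

If most of the "bigger" PPM audience sits in the under-5-minute bucket, the comparison with the diary means something very different.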

4. Pursuing that same line, PPM cume levels are silly. They show listening to an average of 6 stations per week. Why? Because the PPM system does not measure listening.

That bears repeating. The PPM system does not measure listening.

PPM measures exposure. Not listening. Apples and oranges.

And turnips. As in turnip truck. As in did we just fall off one?

Why do we care?

Because advertisers want to buy attention. Not a computer chip that picks up an embedded signal that happens to be in the same room.

Because Arbitron cripples our ability to apply the survey to radio programming analysis.

This is a huge problem that could cripple our business.

To work on our programming, we need to understand purposeful behavior. We need to study listening that was done on purpose, not incidental behavior that may have very little correlation to our programming efforts.
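One crude way to get at purposeful behavior, sketched here with an invented exposure log and an arbitrary five-minute threshold (this is my illustration, not anything Arbitron publishes): drop the shortest exposures before computing cume, and watch the "six stations a week" figure deflate.

```python
# Separating purposeful listening from incidental exposure: ignore exposures
# shorter than a threshold before counting stations heard. The exposure log
# and the 5-minute cutoff are assumptions for illustration.
exposures = [  # (panelist, station, minutes) in one hypothetical week
    ("p1", "WAAA", 120), ("p1", "WBBB", 3), ("p1", "WCCC", 2), ("p1", "WDDD", 4),
    ("p2", "WAAA", 45),  ("p2", "WBBB", 90), ("p2", "WCCC", 1),
]

def stations_per_panelist(min_minutes):
    heard = {}
    for panelist, station, minutes in exposures:
        if minutes >= min_minutes:
            heard.setdefault(panelist, set()).add(station)
    return {p: len(s) for p, s in heard.items()}

print("raw exposure:", stations_per_panelist(0))   # the inflated "cume"
print("5+ min only: ", stations_per_panelist(5))   # closer to intent
```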

Consider the insanity of using today's PPM data to draw the conclusion that 2 stopsets an hour is the right way to group spots.

Insane. See my most recent blog.

New oldies stations (after they were dismantled in droves to create Jack). Fewer smooth jazz stations. All because of the hunt for PPM success.

It was bad enough to realize through the diary system that 20% of our cume gives us 65% of our listening. How much harder will it be to find and understand the behavior of our true audience when it is masked by a sample that listens to vastly more stations for incredibly shorter times because of methodology that measures exposure rather than listening by intent?
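The 20/65 arithmetic itself is easy to reproduce. Here it is on invented diary minutes, chosen only so the result lands near that figure:

```python
# "20% of our cume gives us 65% of our listening": sort the cume by minutes
# spent with us and see what the top fifth delivers. Minutes are invented.
minutes = [780, 450, 400, 70, 50, 40, 30, 25, 20, 15]  # one row per listener

minutes.sort(reverse=True)
total = sum(minutes)
top_n = len(minutes) // 5  # the top 20% of the cume
share = sum(minutes[:top_n]) / total

print(f"Top {top_n} of {len(minutes)} listeners (20% of the cume) "
      f"deliver {share:.0%} of all listening")
```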

I have personally tried to gain access to PPM data to start to look for answers. I have tried to reach out to Arbitron and help them. Their lack of responsiveness has been very disappointing. Maybe they don't want too many questions asked. I don't know. But they should welcome with open arms anybody who will invest the time to study how to improve the PPM system.

Because right now, it sucks.

Thursday, March 06, 2008

Stopset Insanity

There are a couple of very interesting blog posts in my inbox today.

First, Jerry Del Colliano writes about the questionable responses radio as an industry has come up with for many of our challenges.

What I want to focus on is his point #7: Too many commercials.

Jerry suggests (along with some other ideas):

"No stop set is longer than one spot. You heard me right -- one spot. You've been wrong -- listeners don't like to listen to commercials crammed into stop sets and advertisers don't like it, either. Drake had it right in his glory days."

Basically, he is right.

At least he is a lot closer to the truth than most of the industry.

The newest fad is dropping down to 2 stopsets an hour.

Insanity.

The rationale: They don't tune out. So no problem. But, as Tom Taylor relates in his blog today:

"This is real Kool Aid”, about 92% of the audience staying through a good-size commercial pod. “A station that has consistently long stopsets and higher than average total commercial load will be a less desirable overall choice for listeners. Most of us have seen in research that the listener is very able to distinguish differences of as little as two minutes in total hourly load if this is consistent over time. The simple comment on this is that ‘if the listener is still using your station, they already made the decision that it is listenable and has a bearable commercial load.’ If you have too many commercials, measuring how long they stay during stopsets is GIGO [Garbage In, Garbage Out]: the listeners who notice this or even perceive this, are just not there to begin with…at the start or at the end of the stopset. They are gone.”

Jerry may have gone overboard to make his point.

The research I've seen in my short 38 year career suggests that (as Jerry asserts) Drake and others did have it right. In simplest terms, you can get away with two units. And than you must also work hard on the production quality, spot placement and separation, compelling promos and jingles to increase the overall "what's in it for me" level, and carefully prioritize spot placement for those times when the log isn't 100% sold out.

I, and other researchers, have suggested time and time again: Ask your listeners. Give them a scenario: some cardboard pie slices to represent songs and more pie pieces to represent commercials. Tell them to become the program director. Have THEM put the pie together. Tell them that if the people who listen love it, they get a raise. Now, go! Build an hour.

How do they group the commercials?
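Tabulating the answers is the easy part. A sketch with invented responses, where each built hour is a sequence of song and commercial slices:

```python
# Tallying the pie exercise: each respondent's hour is a list of 'S' (song)
# and 'C' (commercial) slices. Count how they grouped the commercials.
# Responses are invented to show the tabulation, not real findings.
from collections import Counter
from itertools import groupby

hours = [
    list("SSCSSSCSSSCS"),   # three single-spot breaks
    list("SSSSCCSSSSCC"),   # two 2-unit stopsets
    list("SSSCSSSCCSSS"),   # mixed
]

stopset_lengths = Counter()
for hour in hours:
    for unit, run in groupby(hour):
        if unit == "C":
            stopset_lengths[len(list(run))] += 1

for length in sorted(stopset_lengths):
    print(f"stopsets of {length} spot(s): {stopset_lengths[length]}")
```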