Friday, June 29, 2007

Polishing MusicVISTA (Thanks, LJ)

You might think that, nearly a year after release, our new music research analysis software would be fairly well polished. But three things conspire to keep that from ever happening:

1. I keep adding things. That's not entirely my fault, of course. Radio programmers keep asking for things. And every time I add another weapon to the arsenal, there is a new opportunity for feedback about how to make it better. I see that as a good thing.

2. My clients number, over time, in the hundreds, while Microsoft can count on hundreds of thousands of Word users. They get a lot of feedback, quickly. I bet there are features in MusicVISTA that no client has yet used.

3. I'll never be as quick to notice roadblocks as the programmers who simply want to get their music changes on-air.

And that third point leads me to extend a huge "Thank you!" to LJ Smith of WCOS, Columbia, South Carolina. He pointed out two little irritants that I completely missed.

a. If you right-click on the results spreadsheet, up pops a little dialog box that lets you assign songs to categories and levels. Once that box is up, if you try to select a song with another right-click, focus shifts to the little box (because the computer interprets your right-click as an attempt to make the category dialog box visible). More than one programmer has thought that the category assignment function stopped working!

In fact, you should select songs with the left button. But that is easy to forget if you've just used the right button 5 seconds ago. So now, when the category assignment box is visible, a right-click does absolutely nothing. It doesn't take long to stop trying and flip to the left button, which is the standard for selecting rows in a spreadsheet. Problem solved. Thank you, LJ! (A small sketch of both fixes appears after item b below.)

b. When you change the cluster that the music test center fit rank is based on, you might immediately want to use the scroll wheel on your mouse to look through the results spreadsheet. But as LJ pointed out, focus actually goes to the title search box, and you end up scrolling through that list instead of your results. Again, easily fixed.
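Both repairs boil down to simple focus and event handling. MusicVISTA is not written in Python, so what follows is only a hypothetical sketch of the pattern, using tkinter widgets as stand-ins for the real ones:

    import tkinter as tk

    root = tk.Tk()
    results = tk.Listbox(root)               # stands in for the results spreadsheet
    results.pack()
    category_dialog = None                   # set while the assignment box is open

    def on_right_click(event):
        # Fix (a): while the category box is up, swallow the right-click so
        # focus never jumps away from the results grid.
        global category_dialog
        if category_dialog is not None and category_dialog.winfo_exists():
            return "break"
        category_dialog = tk.Toplevel(root)  # otherwise, pop the assignment box
        tk.Label(category_dialog, text="Assign category / level").pack()

    def on_cluster_changed():
        # Fix (b): after the fit-rank cluster changes, hand focus back to the
        # results grid so the scroll wheel moves the results, not the search box.
        results.focus_set()

    cluster_picker = tk.Spinbox(root, values=("Cluster 1", "Cluster 2"),
                                command=on_cluster_changed)
    cluster_picker.pack()
    results.bind("<Button-3>", on_right_click)
    root.mainloop()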

Every suggestion that might help me make it easier to work with your test, avoid confusion, or eliminate something that you simply would not expect is welcome. Very welcome.

If you have a recent or upcoming music test, I certainly invite you to allow me to work with you. You'll get the benefit of my experience with programming research, and the power of the ever-improving MusicVISTA analysis software. We'll create the most powerful analysis of your listeners' music tastes that you have ever seen. It is all designed to make your ratings increase. And if you're curious why your ratings will grow, contact me. I'll send you some information that will make it clear.

Songs Everybody Likes: Feedback

In a blog post from April ("What Do You Do With A Song That Everybody Likes") I related the story of how one station handled a song that tested with no negatives, but which got few (5%) favorites.

This station felt that favorites equal passion, and concluded that the song could not be a primary song for the station. In other words, they decided that "FAVORITE" as a listener opinion about a song was worth far more than a simple "LIKE" opinion.

I asked readers of the blog to tell me how much more important FAVORITE is compared to LIKE, in their opinion. I thought I'd summarize the responses I got.

Tie Breaker

Most programmers seemed to treat FAVORITE as a tie breaker. If all other factors are the same, the edge goes to the song with more FAVORITE entries as opposed to LIKE.

What is interesting is that nobody seemed to feel that FAVORITE is the primary criterion. Positive feelings (LIKE + FAVORITE) seem to be what programmers focus on.

Burn

One programmer offered the opinion that:

"..focusing on "favorites" ultimately will lead to burn, a great way to make a favorite less so."

Frankly, that seems like a valid concern to me.

Industry Standards

The standard in our industry, when creating an overall score for a song, is to give a FAVORITE score 50% extra credit compared to LIKE. If anything, that may be slightly more aggressive than the consensus of the people who responded to my question. That is the amount of "extra credit" for FAVORITE that I use in my own music research analysis, and after testing various ideas since 1974, I've become comfortable with it.
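To make that arithmetic concrete, here is a minimal sketch in Python. The percentages and the function name are hypothetical stand-ins, purely for illustration:

    def overall_score(like_pct, favorite_pct):
        # LIKE counts at full value; FAVORITE gets 50% extra credit.
        return like_pct + 1.5 * favorite_pct

    # A song that 40% of respondents LIKE and 20% call a FAVORITE:
    print(overall_score(40.0, 20.0))  # 40 + 1.5 * 20 = 70.0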

Other researchers, particularly those using a 1-7 number scale, typically do one of two other things:

1. Combine the top 2 scores (6+7), and simply use that. I'm no fan of that, because it literally takes the 4 and 5 answers out of the equation altogether. And we don't really know what 6 and 7 mean. My guess is that this is almost like counting only FAVORITE. Worse, a "6" score might, for some people, simply mean LIKE IT. We have no way of knowing. Gathering opinions whose meaning you simply don't understand seems unnecessarily primitive. We can do better.

2. Combine the scores according to their 1-7 value, so a 7 counts 7/6 as much as a 6 does. If we think of "4.5" as the center point of LIKE and "6.5" as the center of FAVORITE, then the ratio is 6.5/4.5, or 13/9. That is very close to (slightly less than) the ratio normally used in a semantic scale like mine (anchored by words rather than a simple number rating). So that seems reasonable, if you use a number scale. A sketch contrasting these two approaches follows.
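Here is an equally minimal sketch of those two number-scale approaches, with hypothetical ratings, to show how differently they treat the same responses:

    def top_two_box(ratings):
        # Approach 1: the share of respondents scoring the song 6 or 7.
        return sum(1 for r in ratings if r >= 6) / len(ratings)

    def value_weighted(ratings):
        # Approach 2: average the raw 1-7 values, so a 7 counts 7/6 as much as a 6.
        return sum(ratings) / len(ratings)

    ratings = [7, 6, 6, 5, 5, 4, 3, 7]   # hypothetical 1-7 responses
    print(top_two_box(ratings))          # 0.5   (4 of 8 scored 6 or 7)
    print(value_weighted(ratings))       # 5.375

Notice that the top-two-box number would not move if every 5 in that list became a 3; that is exactly the objection above.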

Importance to Music Test Analysis

This issue is important when we seek to do accurate music research analysis. Even music "fit" analysis requires a smart decision about the importance of FAVORITE responses. In deciding how similar two songs are, we have to compare responses. If one person rates a song "FAVORITE" and another person rates it "LIKE", are those the same opinion? No. But how different are they? Whether ranking songs on popularity or on compatibility, our decisions about what opinions actually mean are what guide the computers that crunch the data.
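MusicVISTA's actual fit calculation is its own, but as a purely hypothetical illustration of the point, here is one simple way the weighting decision feeds a similarity measure: map each verbal opinion to a number (FAVORITE at 1.5 times LIKE, as above) and correlate two songs' per-respondent scores:

    from statistics import correlation   # Python 3.10+

    # Hypothetical numeric mapping; FAVORITE is worth 1.5x LIKE, as above.
    OPINION_VALUE = {"DISLIKE": -1.0, "NEUTRAL": 0.0, "LIKE": 1.0, "FAVORITE": 1.5}

    def song_fit(responses_a, responses_b):
        # Correlate two songs' per-respondent scores; 1.0 = identical pattern.
        a = [OPINION_VALUE[r] for r in responses_a]
        b = [OPINION_VALUE[r] for r in responses_b]
        return correlation(a, b)

    song_1 = ["FAVORITE", "LIKE", "LIKE", "DISLIKE", "NEUTRAL"]
    song_2 = ["LIKE", "FAVORITE", "LIKE", "DISLIKE", "NEUTRAL"]
    print(round(song_fit(song_1, song_2), 3))  # 0.938: very similar patterns

Shrink the gap between the LIKE and FAVORITE values and those two songs look even more alike; widen it and they drift apart. That is the weighting decision doing its work.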

Thursday, June 14, 2007

Listener Control? Yes, But Not Like This

I just read another blog about giving listeners more control. It is far from the first I have come across. But I finally felt a need to comment. It read something like this:

"..part of the Web reality is letting the listener have a greater say in what is programmed. After all, they're doing this now with iPods and TiVos..."

This is wrong in so many ways.

1. Giving the listener a greater say has been at the very center of commercial radio since the beginning. What have we been doing, purposely trying to give listeners the impression we don't care what they want? The methodologies we use have changed over time. We started out looking at record sales. But we discovered that wasn't very representative of most audiences, and certainly no way to measure our library of well-known music, also known as oldies. So we started doing actual listener surveys. And to this day, we debate the best way to choose who to talk to, how to ask the questions, and what to do with the answers. That will continue, and it should.

2. On the other hand, we live in an era where many radio companies have decided that giving the listener a greater say isn't too important. They standardize playlists across many markets. They do little tactical research, like weekly callout or AMT/music library testing. And they do little strategic research, like market studies or focus groups. But that is not a function of iPods and TiVos. That is pure arrogance. They think: if we build it (and own all the competitors, hehehe), they will come.

3. Am I living in an alternate universe, or is it painfully obvious that iPods and TiVos are programmed for ONE PERSON? Somebody wake me up if I'm missing something, but I think we are in the business of negotiating the conflicting desires of THOUSANDS UPON THOUSANDS OF PEOPLE. Much harder job, don't you think? And one I've always been proud to be a part of. Not everybody has that skill. Hell, almost nobody is great at it. But some radio programmers, owners and particularly consultants keep talking and writing about letting rank amateurs program radio stations like they were individual MP3 players. What am I missing?

4. You want people to feel like you hear them? How about:

  • Play well produced commercials instead of annoying junk.
  • Limit interruptions to the music to about 2 minutes.
  • Do regular strategic market research.
  • Do weekly call-out, do it well, and believe it.
  • Research your music library every 3-4 months.
  • Stop burning out your library. Use cluster analysis and music fit information to make your most-played music category more cohesive, not simply shorter.
  • Stop letting recording labels plan your programming.
  • Understand the core music of your station. You'll need better research, like I provide through Steve Casey Research. You need music fit analysis and Variety Control to get your music controlled properly on a quarter-hour basis.
  • Understand your core target audience. You'll need better research. Tools like my Pure Core format fan analysis will allow you to build a strong identity and fine-tune it for the tastes of your most promising listener prospects.

That is listening - and giving more control - to the listeners.