Author: Shyguy
Monday, November 12, 2007 - 1:22 am
|
|
This article is posted on Drudge Report right now. http://www.nytimes.com/2007/11/12/business/media/12radio.html?ex=1195534800&en=1e773c35ca1a60c8&ei=5099&partner=TOPIXNEWS When will this technology make a national rollout? Is the technology flawed, or does it work? Is it a good thing for the industry, or is it, as this article indicates, going to hurt the niche/ethnic formats? I seem to remember that in the last book, Don Coss's stations were in a significant decline from previous books. This sounds as if it will hurt his business even more if and when this makes a national debut.
|
Author: Semoochie
Monday, November 12, 2007 - 2:42 am
|
|
I don't think I would describe it as a national rollout. It's more a matter of unveiling a few markets at a time. I believe it's our turn in 2009 but could be wrong on the date.
|
Author: Tdanner
Monday, November 12, 2007 - 8:26 am
|
|
Portland will join the "meter" crowd in 2009. The technology is not flawed; it does exactly what it was designed to do. It recognizes radio signals, stores how long it (the meter) was within "earshot" of that signal, and downloads that data to Arbitron each night when it is returned to its cradle, which Arbitron installs with a dedicated line in your home.

The methodology has some serious flaws. Where diary keepers were asked to record all their listening during a one-week period, meter-ites must carry the meter with them all day, every day, for up to two years. Dropout rates and decreased cooperation rates are very serious problems for the markets (Philly and Houston) which are currently fully switched to the meter. With cellphones and iPods etc., people just don't want another device they have to carry at all times. And just like with the diary, Arbitron is really having trouble getting and keeping the young (18-34), blacks, and Hispanics.

The meter, which is closer to capturing real listening (rather than recalled listening), is delivering increased cume audiences and seriously lower TSLs. This combination used to be seen consistently in diary days for CHRs and news (NOT talk) stations with a Westinghouse model ("give us 22 minutes, we'll give you the world").

To succeed in a high cume/low TSL world you must tighten playlists, decrease breaks and chatter, decrease spot breaks, and use very blunt marketing. Billboards, busboards, anything that can say listen right now for your favorite song, or for the latest breaking news.

The over-40 demo is better about doing what they say, and has fewer and fewer choices targeted at them on the radio. As a result, the revised and revived '60s and '70s oldies format has made a comeback. I suspect, though there is not enough data yet to examine the theory, that the meter will be deadly for "Jack, Ben, Charlie," which by definition plays a tune-out at least every two or three songs for almost everyone who tunes in.
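To make the cume/TSL trade-off concrete, here is a minimal sketch (Python, with invented session data, not anything from Arbitron) of how the two metrics fall out of meter-style exposure logs:

```python
from collections import defaultdict

# Hypothetical weekly exposure logs: (panelist_id, minutes of listening credited).
# Invented numbers: diary-style recall tends toward fewer, longer sessions;
# meter-style measurement tends toward more listeners with shorter exposures.
diary_style = [("p1", 300), ("p2", 280), ("p3", 310)]
meter_style = [("p1", 60), ("p2", 45), ("p3", 75),
               ("p4", 30), ("p5", 50), ("p6", 40)]

def cume_and_tsl(sessions):
    minutes = defaultdict(int)
    for panelist, mins in sessions:
        minutes[panelist] += mins
    cume = len(minutes)                      # unique listeners reached
    tsl = sum(minutes.values()) / cume       # average time spent listening per listener
    return cume, round(tsl, 1)

print("diary-style:", cume_and_tsl(diary_style))   # lower cume, higher TSL
print("meter-style:", cume_and_tsl(meter_style))   # higher cume, lower TSL
```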
|
Author: Missing_kskd
Monday, November 12, 2007 - 9:55 am
|
|
Seems to me, the changes in methodology are not exactly a bad thing. It's just gonna be different.

Not all that excited about: "To succeed in a high cume/low TSL world you must tighten playlists, decrease breaks and chatter, decrease spot breaks, and use very blunt marketing. Billboards, busboards, anything that can say listen right now for your favorite song, or for the latest breaking news."

...but this: "I suspect, but there is not enough data yet to examine the theory, that the meter will be deadly for "Jack, Ben, Charlie" which by definition plays a tune-out at least every two or three songs for almost everyone who tunes in." seems to be not all bad. I can get this kind of scene on a pod. Interestingly, I get tune-outs on my own damn pod! Sometimes a known good tune just does not work, no matter what is delivering it.

Given this stuff, what's the impact then on establishing daily connections between radio and its listeners? A tighter playlist is not a significant worry to me, given there are value adds (cool people, local happenings tie-ins, etc.). Again, the very broad playlist is for the pod, right? A station doing the blunt marketing could then get to own some trigger, right? E.g.: something bad happens, you are near a radio, tune to KEX for the scoop. Maybe get to know the people there --or the programming, for syndicated stuff-- while you are there.

Doesn't this then mean we get some simple identities, maybe with people working them for value adds, instead of "lotto radio"? Some triggers might be: energy, party, soul, news, etc. Could also do lifestyle stuff: green, moms, dads, men, women, kids, gay, ethnic, etc. The result being that tunes / syndicated content are there to support the main theme, with people / station identity being the main product --where the value is!

Anyone can play the top whatever in whatever genre. Anybody can play it a lot. Anybody could choose to listen or not. Tuning at random, looking for that tune, is something kids do. Their trigger is the tune, more often than not. As we get older, we don't do that as much --or that's what I've seen anyway. Tuning based on some trigger seems to associate that action with some value in return. The better the industry gets at letting people know what that value is, and what trigger it is to be associated with, the more listeners they are gonna get, and the more regular listeners they will get, right?
|
Author: Newflyer
Monday, November 12, 2007 - 9:32 pm
|
|
This just has to be asked sometime... Anyone wonder if they record... everything? Not just radio listening? Every word you say to your kids? Significant Other? Boss? Would being required to have this device on one's person at work violate a non-disclosure agreement and/or company policies regarding 'personal electronic devices' at the workplace, meaning someone risks their job by agreeing to be surveyed?
|
Author: Sutton
Tuesday, November 13, 2007 - 4:50 am
|
|
To follow up on Tdanner's solid info, all media that choose to encode can get rated. So, for instance, you could see how many people listen to a station's web stream if that stream is encoded. If Safeway radio chooses to encode, you could see how many people are listening to that. It means that program directors need to be brand managers now, not just radio geeks.
|
Author: Tdanner
Tuesday, November 13, 2007 - 7:09 am
|
|
Newflyer: Either you have confused Arbitron with the United States Government, or your meds are making you paranoid. It WAS just revealed (CNN website) that Homeland Security has installed devices at AT&T and at least 5 other providers which send copies of every email, voicemail, phone call, web search, website visited, etc. to a government supercomputer. If the device violated an employment contract, Arbitron would first go to bat for the employee and see if the device would be allowed. If not, they'd simply replace the person in the sample. I would wager that there are no employees of the CIA/Langley in the Arbitron meter universe!
|
Author: Egor
Tuesday, November 13, 2007 - 7:37 pm
|
|
And just think, this is just the very beginning!
|
Author: Tadc
Wednesday, November 14, 2007 - 12:56 pm
|
|
Although it's been back in the news recently, the "secret room" at AT&T in SF has been known for some time now. Kudos to Qwest for telling big brother to go pound sand rather than bending over for the man.
|
Author: Missing_kskd
Wednesday, November 14, 2007 - 1:45 pm
|
|
Yep. Qwest overall has been a very good netizen. They allow you, in fact encourage you, to choose your own ISP over either qwest.net (business) or MSN (residential). They're also one of the few providers who encourage home networks. Sad to see them in court over it, when it really should be the other way around.
|
Author: Tdanner
Monday, November 19, 2007 - 7:09 am
|
|
Trouble over the sample sizes is growing for Arbitron. Major clients are really unhappy. So are the agencies. From Mediaweek:

"The roll out of Arbitron's portable people meter radio ratings service may be in trouble. Four of the ratings firm's biggest customers, representing more than a quarter of Arbitron's total revenue, are, in the words of Howard Beale, mad as hell over low PPM samples among young demographics and they're not going to take it any more. In a letter last week to Arbitron's top three execs, including Steve Morris, president and CEO, Clear Channel, Cumulus Media, Cox Radio and Radio One called for Arbitron to take "immediate action" to fix low PPM samples among young demographics.

Agencies, who believe the PPM will bring new accountability to radio, agree with broadcasters that something needs to be done. "We need to work with Arbitron to get better results because the meters are better than diaries and we can't go backwards," said Janice Finkel Greene, executive VP of broadcast strategy for Initiative. Arbitron had no comment.

What this means for the roll out of the PPM going forward is anyone's guess. But if Nielsen's experience with its initial roll out of local people meters is any indication, the growing firestorm could force Arbitron to delay the industry's transition to electronic measurement, especially in New York, which went live last week. "If New York samples aren't up to Houston samples, then maybe Arbitron should just hold off another quarter or two," said Brad Adgate, senior VP and director of research for Horizon Media. "It's a very important market. More ad dollars are spent there than any other metro."

PPM sample performance has been a mixed bag, good in Houston, but weak among 18-34 year-olds in Philadelphia and New York, giving broadcasters reason for concern. Adding to their anxiety, shops such as MindShare are already requiring PPM audience delivery guarantees, as is the practice in TV."
|
Author: Roger
Monday, November 19, 2007 - 7:25 am
|
|
ARBITRON SIGNS DEAL WITH FEDERAL GOVERNMENT TO IMPLANT PEOPLE METER CHIP. Homeland security says chip only for research purposes (NOT) tracking....... Chip also said to be able to be used to make point of purchase debit payments and provide user identification. News at 11, January 3, 2010.
|
Author: Missing_kskd
Monday, November 19, 2007 - 7:59 am
|
|
Hilarious! Love the point of sale bit. I'll bet that sale idea has been pitched at least once.
|
Author: Tdanner
Monday, November 26, 2007 - 4:43 pm
|
|
ARB pulls the plug (for at least 9 months)... everywhere but Houston and Philly. Arbitron announced on Monday (Nov. 26) it will delay the commercialization of its Portable People Meter radio ratings service in nine markets: New York, Nassau-Suffolk and Middlesex will be delayed by nine months; Los Angeles, Riverside and Chicago by six months; and San Francisco, San Jose and Dallas by three months.
|
Author: Sutton
Monday, November 26, 2007 - 4:51 pm
|
|
People are making business decisions based on PPM info. It's not even accredited yet, and as flawed as the diary method is, the diary method is accredited. I would love to see PPM data work, but Arbitron would have been endangering what credibility it still has if it hadn't made this move.
|
Author: Tdanner
Monday, November 26, 2007 - 7:40 pm
|
|
I believe it was accredited in Houston. Then Arbitron decided to go ahead with Philly as meter-only prior to accreditation.
|
Author: Newflyer
Monday, November 26, 2007 - 8:06 pm
|
|
"Newflyer: Either you have confused Arbitron with the United States Government or your meds..." No, I'm sure there are secret government contractors... Also, I'm proud to say that I have never knowingly taken prescription medications for any of these so-called 'illnesses' they've come up with lately...
|
Author: Semoochie
Monday, November 26, 2007 - 8:42 pm
|
|
I'm not paranoid; it's just that everyone's out to get me!
|
Author: Sutton
Tuesday, November 27, 2007 - 5:43 am
|
|
Tdanner, yes, you're right about Houston and Philly. The New York data was looking very scary, though.
|
Author: Tdanner
Tuesday, November 27, 2007 - 7:32 am
|
|
The first wave of advertisers has already announced "pay on delivery"... if they agree to a schedule and the "instant" ratings fail to reach the levels quoted, the advertiser gets a refund. Dropout rates and less-than-diligent use of the meters by those in the sample are much more serious than predicted in both Houston and Philly. The Media Rating Council has still NOT approved Philly, and poor samples with the young and with the black population may delay that accreditation. With advertisers deluged with so many new, cheaper, and "sexier" forms of advertising, radio really, really needs to get this right. If advertisers lose faith in the audience estimates, the medium will die. People meters, like flying cars, are a really great idea. But they're a worthless idea unless they can actually do the job they were created to do.
|
Author: Markandrews
Tuesday, November 27, 2007 - 9:49 am
|
|
Anybody interested in bringing back Pulse or Hooper? (Tongue in cheek, but only half-kidding)
|
Author: Robin_mitchell
Tuesday, November 27, 2007 - 11:02 am
|
|
Pulse surveyors traveled door-to-door in selected census tracts. They actually checked which stations were last tuned on the household's radios. They showed the "one" person in the household being interviewed a roster of market radio stations by frequency, and program listings provided by the stations. Certainly, it was a more expensive process than mailing out self-administered diaries. Arbitron got the upper hand because their radio reports duplicated the format used for measuring TV in the same metros. I understand that initially Arbitron "comped" agencies using the TV reports with copies of the radio reports.

Pulse's personal interviews could be flawed, however. How professionally trained and consistent were the interviewers sent out to canvass the survey tracts? For the research to be pristine, each question must be asked precisely the same way by every interviewer. I recall one book in the early '70s in which KGO in San Francisco showed up 7pm-midnight in the Seattle metro cume tables. Bill Ford was KOL's music director and pulled a late-night airshift. He got a call from a listener, who recounted the whole Pulse interview process they had gone through. The interviewer actually asked, "Do you ever listen to KGO in San Francisco? They come in pretty good nighttime!!!" I'm sure it was innocent. The interviewer was a talk radio fan, but their probe influenced the cume outcome in the daypart.
|
Author: Markandrews
Tuesday, November 27, 2007 - 12:33 pm
|
|
Wow... great story, Robin! So much for Pulse's pitch: "Nothing takes the place of an interview in the home." There are flaws in *any* research methodology... the object is to minimize those flaws as much as possible. That's why there's a plus-or-minus factor, and the results are called "audience ESTIMATES"... The bottom line I see is that comparing the diary method and the PPM is like comparing apples to oranges, MRC accreditation and Arbitron's "assurances" notwithstanding. And agencies will do as they darn well please to crush any given station's rate card, no matter what the results are called...
|
Author: Semoochie
Tuesday, November 27, 2007 - 1:47 pm
|
|
I've mentioned this previously but before there was any talk radio in Portland at night, KGO had a 26 share and sold ads for local businesses. I remember hearing one for NE 82nd Avenue.
|
Author: Tdanner
Tuesday, November 27, 2007 - 2:49 pm
|
|
The MRC, of course, isn't at all interested in comparing the diary vs. meter methods. Diaries measure recalled radio listening, and the meter measures proximity to an audible radio signal. In both cases, we infer that actual conscious listening occurred during the reported periods. But there's no way they are reporting the same thing. The MRC is a group of media research geeks, some of whom have been friends and mentors.

All research (whether ratings, music tests, or political polls) must meet only 2 criteria to be "good" research:

1) Validity: Does the research measure what it purports to measure? If it says it's measuring all radio listening for persons 6+ in a given area, it must be designed in a manner that will measure all radio listening for persons 6+. Badly designed questions and badly drawn samples can kill a study's accuracy before the first questionnaire is completed.

2) Reliability: Is the research reliable? Can it be replicated repeatedly within the reported margin of error? Giving the questionnaire to two (nearly) identical groups over a (nearly) identical time period should yield (nearly) identical results.

An extremely well-designed study with a sample size of 10 is valid but unreliable; the sample size is too small to allow randomness to give repeated results. An extremely badly designed study that is repeatedly administered to similar large samples will yield results that are reliable, but not valid. You get the same answers to badly designed questions over and over again.
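A quick simulation makes the validity/reliability distinction concrete. This is just an illustrative sketch in Python with an invented "true" listening share; the bias term stands in for a badly worded question:

```python
import random

random.seed(7)

# Invented population: the "true" share who listened to Station X this week.
TRUE_SHARE = 0.20
POPULATION = [random.random() < TRUE_SHARE for _ in range(100_000)]

def survey(sample_size, bias=0.0):
    # One estimate of the share from a random sample; `bias` stands in
    # for a badly worded question that inflates "yes" answers.
    sample = random.sample(POPULATION, sample_size)
    return sum(sample) / sample_size + bias

# Valid but unreliable: a good question, but n=10 -> estimates bounce all over.
print("n=10, unbiased: ", [round(survey(10), 2) for _ in range(5)])

# Reliable but not valid: a biased question with n=5000 -> tight, but consistently wrong.
print("n=5000, biased: ", [round(survey(5000, bias=0.10), 2) for _ in range(5)])

print("true share:     ", TRUE_SHARE)
```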
|
Author: Rongallagher
Tuesday, November 27, 2007 - 6:14 pm
|
|
Okay there was Pulse and Hooper. Anyone remember the Barr survey? If it were to be believed, Barr claimed to have a device that could detect listening over a given area. It picked up harmonics of radios in use, I believe stationary radios only. Probably snake oil, but it was an option for small markets until others came along...
|
Author: Jr_tech
Tuesday, November 27, 2007 - 7:58 pm
|
|
Not necessarily snake oil... the local oscillator that is used in any superhet radio can be detected some distance away from the radio. Since this oscillator is equal to the frequency that the radio is tuned to +/- the IF frequency of the radio (usually 10.7 MHz for FM and 455 kHz for AM), the frequency that the radio is tuned to can be calculated. A scan with a spectrum analyzer could possibly reveal the stations that nearby radios are tuned to. I just did a test, using a VHF communications receiver in narrow "CW" mode (sorry, I don't have a spectrum analyzer), and found that I could detect the local oscillator of a small transistor portable radio nearly 100 feet away. I picked up a signal on 79.2 MHz... doing the math (add 10.7 MHz) tells me the radio is tuned to 89.9 MHz... BINGO! Another radio, also tuned to 89.9, produced a local oscillator signal 10.7 MHz ABOVE the tuned frequency, at 100.6 MHz, so there is some potential for error here!
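For what it's worth, here's a minimal sketch of that arithmetic in Python, assuming the usual 10.7 MHz FM IF; the detected frequencies are just the two examples above. Limiting candidates to the FM broadcast band happens to resolve the high-side/low-side ambiguity in both cases:

```python
# Sketch of the local-oscillator arithmetic described above.
# Assumes the common 10.7 MHz FM IF; receivers may use high-side or
# low-side injection, so both LO - IF and LO + IF are candidates.
FM_IF_MHZ = 10.7
FM_BAND_MHZ = (87.5, 108.0)   # nominal FM broadcast band

def candidate_stations(lo_mhz):
    """Possible tuned frequencies for a detected local-oscillator frequency."""
    candidates = (lo_mhz - FM_IF_MHZ, lo_mhz + FM_IF_MHZ)
    return [round(f, 1) for f in candidates if FM_BAND_MHZ[0] <= f <= FM_BAND_MHZ[1]]

print(candidate_stations(79.2))    # [89.9] -- low-side LO from the first radio
print(candidate_stations(100.6))   # [89.9] -- high-side LO from the second radio
```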
|
Author: Robin_mitchell
Tuesday, November 27, 2007 - 9:16 pm
|
|
Programmers tend to complain most loudly about Arbitron when the books are "soft" for their station. Arbitron's latest ruse is the "Electronic Book." One of the changes accompanying it has been grouping demos "for the ease of the user." That's right, now you can look at women 18-34, men 18-34... but the electronic book will not display the 18-24 and 25-34 demos discretely. When Eugene's Maximizer data became available, I think I learned why. The #1 station in the market among 18-34 women showed an INVALID SAMPLE SIZE for women 18-24... so it would not display the data. However, by checking women 18-34, 25-34 could be subtracted and you could learn the results that were being masked by this computerese. Indeed, the station in question had a 60% with an invalid sample size, which contributed to their 18-34 female dominance.

Friends, the sample size is so small... even if there are 30 diaries in any sex/age cell... they're spread over 12 weeks of surveying... and likely did not come back evenly over the 12-week period for which they're being averaged... but even if they did, that's 2-3 diaries per week... representing the entire population in that sex/age cell. Therefore, any week of the survey is represented by a totally inadequate sample... but when they're all combined for the 12-week period... each week's invalid sample becomes validated by hitting the goal for the sweep. What we see are "blurred" results at best. The idea of a metered sample with a daily representation of 30 is somehow more reassuring.

One of the problems with PPM is HIGHER CUMES and LESS TSL. However, if the PPM is measuring reality... that simply means we've been living with a RECALL FANTASY as a standard for many years. Naturally, the bean counters are concerned with anything that might turn their business model upside down and change the way they have to do business.

One of the features of the PPM is a "motion sensor" like that used to trigger your vehicle's airbags in a collision. It's supposed to be hypersensitive. If the PPM is motionless for longer than a threshold... Arbitron may throw out the results... since perhaps the PPM is not being worn, but is detecting audio without a person listening.

One of the most shocking developments of the PPM era has been a serious decline in TSL in morning drive. My written question to Arbitron that has never been answered deals with a 6-10AM reality. Most people have NAKED HYGIENE time in the bathroom while preparing for work. I know I have a radio in the bathroom. Since a PPM CANNOT BE WORN IN THE SHOWER... is Arbitron missing 1-2 quarter hours of AM drive listening each day??? This listening could be included by questioning each PPM respondent upon startup: Do you listen to radio in your bathroom before starting your day? If yes, approximately what time does this listening take place? Instruct them to take their undocked PPM into the bathroom. Then override the "motionless" detection for this period each day. I asked the question. I never got an answer. Then again, I'm not a subscriber.
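Just to put numbers on the two bits of arithmetic above (the masked-cell subtraction and the diaries-per-week math), here's a trivial sketch with made-up figures, not actual Eugene data:

```python
# Made-up figures for illustration only -- not actual Arbitron estimates.

# Recovering a masked discrete cell: the electronic book suppresses W18-24
# when its sample is "invalid", but persons estimates are additive, so the
# broader demo gives the hidden number away.
w18_34_persons = 900    # hypothetical audience persons, Women 18-34
w25_34_persons = 350    # hypothetical audience persons, Women 25-34
print("implied W18-24 persons:", w18_34_persons - w25_34_persons)   # 550

# How thin a "valid" diary cell really is on a weekly basis:
diaries_in_cell = 30
weeks_in_sweep = 12
print("diaries per week:", diaries_in_cell / weeks_in_sweep)         # 2.5
```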
|
Author: Tdanner
Wednesday, November 28, 2007 - 7:31 am
|
|
Robin: Trying to look at a single week of data with diaries (which could easily have 2-10 respondents in a young discrete cell like W18-24 in a medium market like Eugene) is futile. Looking at that small a demo in 1/12th of any survey period is just as futile. At the request of the NY Mets, I once broke out discretely, by individual day, every Mets game of the season. It was like a textbook for "error range" and misuse of data. Arbitron never designed the study to be broken out that narrowly. And of course radio broadcasters have consistently refused to pay for a sample that could. Opening Day of either '81 or '82, the day Tom Seaver returned to the Mets after years away to pitch, listenership was (wait for it) zero. According to ARB, absolutely no one in NY tuned in for the Mets opener. But when you looked at the entire season (using only the QHs of each week when baseball was being played), you got a realistic and respectable rating for baseball. As I have said repeatedly, Arbitron's biggest problem is that radio pays for one survey, then uses the data in ways the survey was never intended for. You don't buy a Honda Accord and ask it to run the Indy 500.
|
Author: Robin_mitchell
Wednesday, November 28, 2007 - 9:08 am
|
|
Terry: I know. It was Arbitron that chose to spread their old 4-week sample over 12 weeks to create "continuous measurement." The concept of a quarterly report based on inadequate weekly samples yields a "blurred" look at what really happened during the quarter, as opposed to a discrete snapshot of "reality" at any given point. It is what it is. We accept trending of that blurred image, and the fact that generally rankings among the players don't change that much... so it is consistently representing something.

I do like the idea of a "passive" measurement PPM in concept. However, Arbitron has never responded to my requests to define the decibel levels required for detection/proximity to audio, so that I'd have a focal point for what can be detected. Then there are the issues of selecting a representative sample... and trying to keep them in the sample for 2 years. If it were possible with a representative sample, "trends" would really mean just that.

"Detection issues" could be drastically affecting TSL. Bridge Ratings published a study showing some 75% of the population now owns a cell phone. We've all seen the rampant use in vehicles. Some 58% "turn down" their radio when on the phone, even though it remains "on." As you've mentioned before, if the sample is not representative, a 2-year panel is a major penalty on reality, and a big hit to somebody's bottom line.

Radio forgets, they are estimates, not reality. Unfortunately, it's a reality that must be dealt with daily in this Wall Street consolidated era. It's a shrinking, fragmented universe for radio TSL. As with TV's experience with cable/satellite, there will still be #1 broadcast stations, albeit with lesser shares. It's been quite a while since there has been a "clear cut" winner in Portland, without the tightly compressed shares we see under consolidation. And with the margin of error, it really is a "toss up" regarding who the winner is, even at the top of the heap. Oh, but there is that consistency via Arbitron's quarterly "blurred" snapshot. We should have lunch one of these days.
|
Author: Robin_mitchell
Wednesday, November 28, 2007 - 9:31 am
|
|
Terry: I just reread your above response regarding Eugene. The inadequate sample for women 18-24 was for the entire 12 weeks, when trying to display the data for the #1 radio station Mon-Sun 6am-Mid for that demo. I determined the share Arbitron was attributing to women 18-24 for the book by looking at women 18-34 and subtracting women 25-34 from the book total. We both know the sample was inadequate. I wanted to see how much the weighted 18-24 data was influencing their outcome for women 18-34 for the entire 12-week sweep. It was huge.

For a while, Arbitron had a feature called "Ask the Doctor" on their website. The question was, "Does Arbitron ever use the same diary respondents in subsequent surveys?" The Doctor's response indicated that they use in-house procedures to ensure that a respondent is never used in a survey immediately following the one in which they participated. They go on to say, "all respondents are selected from the random universe." Might we interpret that to mean, in a market like Eugene with very dicey diary returns, when in trouble hitting goals... might they go back to the well of "previous random universe selected respondents" and grab one that has a history of returning a usable diary to "augment" their sample? It would be most dollar-efficient, and Arbitron answers to Wall Street, too. It would explain the apparent "windshield wiper" effect of numbers seasonally in troublesome demos.

By the way, Eugene's shortfall regularly occurred with men 18-24, women 18-24, men 25-34, women 25-34, men 35-44, and women 35-44. It became very apparent to me that either Arbitron doesn't try very hard in their smaller markets, and/or only 45+ is interested in "paper & pencil" mechanics in this electronic age. Nowhere is this more apparent than with weekend numbers, since there are 60 Monday-Friday days in the 12-week pool, but only 12 Saturdays and 12 Sundays. Weekend numbers were obviously determined for individual stations in their target demos by a literal "handful" of diaries for the 12-week period. Lunch? I'm sure we can fix all this stuff.
|
Author: Missing_kskd
Wednesday, November 28, 2007 - 9:35 am
|
|
What would the value be of such a non-blurred segment of time, say one quarter? By "value," I mean compared to the cost of what is provided now (200 percent, 500 percent?). Roughly, how would the sample size compare? Again, by simple percentage?
|
Author: Tdanner
Wednesday, November 28, 2007 - 10:42 am
|
|
Robin: Without going "public" -- ask around some of the usual suspects. Lunch virtually every Thursday solves these and many other pressing questions. See you tomorrow?!
|
Author: Tdanner
Wednesday, November 28, 2007 - 10:58 am
|
|
kskd... not sure what you're asking. Robin refers to the quarterly reports as blurred because they don't represent any given moment (as a photo would) but rather a sketch or watercolor of what radio looked like during the previous quarter. There would be no cost change to shift from a sample gathered over 12 weeks to a sample gathered over one week (which would put enough sample into each day to give "photos") -- but even when the survey was spread over 4 weeks, broadcasters busted their humps to skew results during that 4-week period. Huge cash giveaways, commercial-free programming, etc. Advertisers rightly assumed that the ratings for the 4 hyped weeks didn't represent the other 22 weeks.

With the meters, in any given week there will be more participants -- but over a 6-month or year period there will be far fewer discrete participants. Participants can be used up to 2 (maybe more) years! Consider this realistic and scary scenario. Looking at the metered data, a PD notices that 3-4 of his most loyal listeners tune out every time a Toby Keith tune is played. All the PD's research tells him that Keith is not fried; his overall and P1 audience loves Keith, but an abnormally large number of his metered P1s hate Keith. All samples have quirks. Too many left-handed folks. Too few 6x-divorced folks. Something. Usually it's something unimportant, like tea drinkers. But it can be a very unrepresentative dislike for something like one superstar performer. Does the PD program to his listeners or to those with meters? (We know the answer!)
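To illustrate that "quirky panel" problem with a toy example (Python, invented numbers, assuming roughly 10% of P1s genuinely dislike the artist): a few guaranteed punch-outs loom large in a small metered panel and wash out in a big one.

```python
import random
random.seed(1)

def apparent_tuneout(panel_size, true_dislike=0.10, quirky_haters=3):
    # quirky_haters always punch out; the rest behave like the real P1 base.
    panel = [True] * quirky_haters + \
            [random.random() < true_dislike for _ in range(panel_size - quirky_haters)]
    return sum(panel) / panel_size

for n in (25, 250):
    print(f"panel of {n}: apparent tune-out rate = {apparent_tuneout(n):.0%}")
# Expect roughly ~20% with 25 meters vs. ~11% with 250 -- the same three
# quirky meters can make a non-issue look like a format problem.
```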
|
Author: Missing_kskd
Wednesday, November 28, 2007 - 7:24 pm
|
|
Thanks! That's a great answer actually.
|
Author: Missing_kskd
Friday, November 30, 2007 - 9:35 am
|
|
After mulling this over, I'm not entirely sure the PD should program to the listeners, or to the meters/diaries. Focus too much on the listeners, and you get this kind of self-selecting dynamic that's gonna get stale over time. Focus too much on blurry metrics, and you lose touch, become distant, difficult to connect with.

I asked the value question because I was wondering what high enough resolution would cost, and what that would mean. IMHO, the cost (12x) would not deliver a value in return (too much self-selection). This is highly likely to be obvious stuff to you pros, but I find the line Arbitron is trying to walk to be an appropriate one. I just wasn't there before; I was convinced it was more of a shell game than anything else.

I do remain convinced that just playing tunes will deliver diminishing returns. The primary driver in this, where my own personal experience is concerned, is flat-out greater availability of music on demand. Pandora-like services, one's own pod, lots of channels able to more tightly focus and match moods and trends, all deliver music that is more closely aligned with one's needs for the day than ever before. Go back a few years and matching up music to one's daily disposition (I don't know what else to call it!) was harder. Carrying around a lot of CDs, tapes, etc., buying music was more difficult, and selection was more limited. All of that combined to focus people on a fairly small subset of music. Today, that dynamic has changed, and it makes broadcasting in all forms (radio, sat, cable) seem limited and more coarse than it was before. I don't think it has really changed; only our perception has changed. We've got more channels, so it's possible to be a bit less coarse, but that's about it.

There are older people and younger people too. The older I get and the more I watch my kids, the more I realize people are very different, with 30 being about the time those differences really manifest. That helps me to see where focusing a broadcast is important. Either one is targeting young, old, female, male, etc. But then I think back to some times, like KGW, where people of all ages listened more than generally happens today. I think that happened because people didn't know any better (non-availability of diverse music selections for most everybody but the total music freaks, and limited media options).

If one could get that 12x precision --the photograph vs. the watercolor-- it probably would not do any real good! Sure, just nailing one specific group would become more consistent, but there are a lot of groups, and people change, slowly rendering the branding efforts worthless as one either follows that group or blends the station identity over to appeal to another one, losing the focus along the way, only to sharpen it again, hoping to have it right... Might as well just have the watercolor, as that's reality most of the time! So then, the value of such data is not equal to its cost, meaning the holy grail of perfect sampling is just that --a grail, something one never finds, no matter how hard one tries.

I go back to daily relevance. Rather than programming TO some body of people, sorted by some factor, the PD needs to be RELEVANT. I was on a business trip to Idaho recently. There is a nice, small-market radio station there that's pretty relevant. They play music that is all over the map; their on-air staff does minor news bits, engages in the occasional commentary, shares a little about their lives. That station is relevant. If they just played music, it would match the overall mood there. However, they play music and they talk about it and around it for context, and that made more sense than just the tunes would otherwise.

Talk stations are relevant too. Every day there is a news cycle. There are issues of the day, dirt to dig, poll results, etc. Same with sports stations and their stats, scores, player profiles, etc. The common element there is the activities of people. That's where relevance comes from. IMHO, the PD really should be given the freedom to find, cultivate, and direct the actions of people who are relevant to the market they serve. From there, all of those metrics make a lot of sense, in that they can provide general indicators of just how relevant those people are. If that's done, then the question of to play or not to play Toby Keith becomes somewhat secondary, does it not?
|