
My thoughts on effectiveness

May 25, 2009

David Bonney

David Bonney is a freelance creative planner based in Berlin. He has worked for, among others, DDB/Tribal DDB and McCann Erickson in London, as well as Plantage in Berlin. One of his oldest dreams is to live and work in Stockholm. Here on Bloggen om effekt, David airs his views on effectiveness awards from an international and somewhat cynical perspective. Published in five parts over five days.

Johan Östlund is a dear old friend and former colleague of mine at DDB London, where we were both planners back in the day.

Johan has asked me to share with you my thoughts on advertising effectiveness awards. But, to be honest, I’m not really sure why – I promise you there are some really good reasons why my thoughts are no more worthwhile than the next man’s.

Firstly, I’m not incredibly experienced when it comes to effectiveness awards – I’m not a patch on the great Les Binet (all hail Les) and I’ve only ever been exposed to one competition, the IPA Effectiveness Awards in the UK, for which I’ve written one entry which, quite rightly, didn’t win a thing.

Secondly, I’m not the most passionate advocate of effectiveness – I’m more of a “front-end”, creative planner, preferring to feel my way along and shed blood for the idea, the inspiration, the intuition, rather than the “back-end” aspects of accountability, econometrics and ROI.

Thirdly, I don’t feel all that educated about effectiveness – ok, not true, I recently completed the IPA Excellence Diploma which is like an MBA for advertising practitioners and is meant to put me amongst the most educated planners in the world. But, I admit I haven’t read an effectiveness paper in two years and I studiously ignore the IPA case studies when they get published (more on this later).

But, despite all this, I am happy to share my views. (Just try shutting me up!)

1) The un-science of effectiveness awards

I trained as a psychologist before going into the seedy world of advertising. Psychology is a science, and in science you set out to disprove a hypothesis – to show that something isn’t true: that male pheromones don’t make females play computer games better, or that a 30” ad from a chocolate company doesn’t increase sales of chocolate. Now, anyone conducting econometrics would argue that their methodology does seek to disprove – to account for all the other factors, besides communications, that could explain a brand’s growth. Fair enough.

But if you take a step back from the statistical methodology employed in the writing of effectiveness awards, the rest of our approach to “disproving effectiveness” is patently wrong. First, the agencies that produce the communications are responsible for conducting the “science” themselves – and it is clearly always in their interests to prove rather than disprove effectiveness. Excited at the prospect of being able to call themselves the most effective agency in the land, these agencies look at all the campaigns they have launched over the previous three years and whittle the list down to only those that can be twisted into plausible case studies of effectiveness. I’ve gone through this process with two leading agencies in London, and each time it was clear that only a small minority of our campaigns were deemed sufficiently effective or provable to be written up as case studies.

There’s our first mistake – surely every campaign that ever ran is evidence for or against the effectiveness of advertising? And surely a scientific method cannot willy-nilly decide which bits of the results to keep and which to throw away? I binned half the results of the research for my undergraduate thesis and I repress the guilt to this day.
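To make the cherry-picking problem concrete, here is a toy simulation – my own sketch, with invented numbers, not anything from a real awards entry. Even if advertising had zero true effect, selecting only the best-looking campaigns would still produce entries with a healthy average “lift”:

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0    # assume, cynically, that the advertising does nothing
NOISE = 5.0          # market noise, in percentage points of sales growth
N_CAMPAIGNS = 200

# Each campaign's measured sales lift is pure noise around the true effect.
measured = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_CAMPAIGNS)]

# The agency whittles the list down to the "provable" minority: the top 10%.
entered = sorted(measured, reverse=True)[: N_CAMPAIGNS // 10]

print(f"Mean lift, all campaigns:     {sum(measured) / len(measured):+.2f}pp")
print(f"Mean lift, entered campaigns: {sum(entered) / len(entered):+.2f}pp")
# Typically prints roughly +0.0pp across all campaigns, around +9pp for entries.
```

The entries look like proof of effectiveness; really, they are just proof of selection.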

To be continued tomorrow.
