[Image: the parts of a red VW Golf laid out before assembly]
Online is a very accountable medium. Of course it is. In fact it's almost too accountable. One campaign or one piece of website traffic analysis can provide an embarrassment of metrics to wade through if you feel so inclined. Impressions, click-throughs, click-paths, cost-per-response, cost-per-action, downloads, frequency-capping, pass-on, geo-targeting, behavioural targeting, dwell-time. You could be there all day. The big question is whether these metrics are currently being used to develop genuine understanding or whether agencies and site owners, to quote Andrew Lang, use statistics "as a drunken man uses lampposts - for support rather than for illumination."
Online stats are not foolproof. Ironically, despite the granularity of the data available, it is still impossible to say definitively how many people have visited a particular website. Different audience measurement sources can vary dramatically. Site-centric measures are victims of the technical complexity with which their data is built: different tools measure different things, so the numbers are never the same.
Take page views. Page impressions have long been a much-watched, much-sought-after metric. They give site owners some big numbers to feel good about, but the internet is increasingly about measures and models which are not purely driven by page refresh. Technologies like Ajax allow users to conduct multiple tasks on a page, including opening new windows, without generating a page impression. And people are simply spending more time doing more things online which don't involve refreshing a page (like watching video).
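To make that gap concrete, here is a minimal sketch (in TypeScript, for illustration only - the endpoint and element ids are hypothetical) of an Ajax-style update. The user sees fresh content, but because nothing reloads, a counter that only fires on page load records nothing.

```typescript
// Minimal sketch: an Ajax-style update that changes what the user sees
// without triggering a page load. A page-impression counter that only
// fires on load never sees this interaction.
// "/api/headlines", "headline" and "next-button" are hypothetical names.

async function showNextHeadline(): Promise<void> {
  // Fetch fresh content in the background: no navigation, no page refresh.
  const response = await fetch("/api/headlines?offset=1");
  const headline: string = await response.text();

  // Swap the content in place. The user has "viewed another page" in any
  // meaningful sense, but the page-impression count stays at 1.
  const target = document.getElementById("headline");
  if (target) {
    target.textContent = headline;
  }
}

// Every click here is a real interaction that page-view metrics miss.
document.getElementById("next-button")?.addEventListener("click", () => {
  void showNextHeadline();
});
```

The more a site works this way, the more its page-impression numbers understate what its audience is actually doing.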
User-centric measures are arguably a more valuable indicator of scale, but different methodologies can throw up different numbers here too. And whilst it is possible for those advertisers driven by customer acquisition to achieve unprecedented levels of transparency as they follow their prospect from ad to transaction, there remain few measures of the value of user engagement.
Yet in most areas this doesn't make online any less accountable. You just have to know what you should be counting, and the limits within which you're working. The complexity in online measurement comes with the understanding that no single metric will ever give you a complete picture, but equally that looking at too many of them is unnecessary (and bad for your eyesight). Is the right thing being measured? Often not. So there is an absolute need to establish precisely what benchmarks for effectiveness and performance you're using (and that doesn't always mean click-through) and why.
If online measurement is the sum of the parts, then one of those parts should also be human. The best approach combines the right data with human intuition, intelligence and interpretation (when does it ever not?). It's a bit like the picture above - before someone (hopefully a human rather than a robot) takes the trouble to make sense of all the different parts laid out before them, they will remain just that - parts. But together, and with some all-important human intervention, they make something far bigger, better and more useful - a car (in this case a dodgy red VW Golf). No single tool is best, but the right tools and the right human working in concert definitely are.
Online is the sum of all media. You can read it, watch it, listen to it, buy stuff on it. And because of that it requires stats which are broad and varied enough to provide insight across multiple formats. We are not yet at the stage where the device with which you access the internet reliably identifies that it is you. But surely that's only a matter of time, and when it does happen, the game will change entirely.
There are plenty of gaps in our knowledge about online effectiveness, but ultimately it is always about the audience. And so the area which is potentially the most exciting, most challenging and most valuable is how you measure interaction. Why? To paraphrase Seth Godin: "Interactions are a million times more powerful than interruptions".