Your Metrics Are Bad (and Why “Data Driven” Isn’t Enough)

Being “data driven” is all the rage these days.

We all — businesses, government entities, sensor-equipped individuals — have more and more data that can help with decisions. The era of Big Data is here, yada yada yada: you know the annoying cliches as well as I do.

There are more and better tools. Dozens of startups are working on better ways to collect data, process it, query it, visualize it.

I recently talked with an entrepreneur who, fresh off of raising a big round of funding, was told by his investors that he needed to make his company more data driven. He wasn’t sure what “more data driven” actually meant, and he told me he wasn’t sure his investors did either.

It sure sounds nice, though — doesn’t it?

Honestly, I don’t know how I’d define “data driven”, and I’m not sure I care enough about the term to really think it through. But I’m pretty sure I know what’s missing.

Very, very few companies know what questions to ask of their data. They have metrics that are beautifully plotted on their real-time data dashboards. They’re calculated in technologically scalable ways, using something that’s much simpler than SQL, and they’re accessible by everyone inside the company.

But more often than not, the metrics are superficial and poorly thought through. They’re not reflective of the health of the product or the business.

I’ve certainly been guilty of this: for months if not years at my last startup, anything other than new user registrations barely mattered to me. For Circle of Moms, getting new users was extremely important, but at times that focus distracted us from more important long-term goals.

And I see this again and again with tech companies. There’s a focus on one or two superficial metrics, rather than a deep understanding of what it will take to build out the broader ecosystem necessary to make the company successful.

I don’t want to be too negative: the understanding of these ecosystems has significantly improved in the decade-plus I’ve been in Silicon Valley. Ten years ago, entrepreneurs building consumer startups barely thought about distribution (if we build a great product, people will come to us!). Five years ago, entrepreneurs (myself included) started to realize that distribution mattered, but rarely took the next step (Facebook is the notable exception). Today, more and more entrepreneurs understand that both distribution and engagement matter, even if they can’t get at all of the underlying components.

Today, a few of the strongest consumer companies — Facebook, LinkedIn, Twitter — have built out growth and data teams that collectively measure and understand the key dynamics.

But there are still huge areas of our society — small non-tech businesses, government at all levels, medicine, academic studies, many startups — where there are lots of data, but not much understanding of what the data actually mean.

And that’s a big problem: I’ve long felt that having bad metrics is often worse than having no metrics at all.

If I were trying to gauge a basketball player’s skill level, my top preference would be to use a well-structured metric incorporating an entire season’s worth of extremely detailed, second-by-second data, looking at his impact on all aspects of the game. My second choice would be a good coach’s purely qualitative assessment of his skill. And my last choice would be a simple stat — say points per game — that was state of the art in 1950.
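To make that contrast concrete, here’s a minimal sketch in Python. The stint data, the numbers, and the crude plus-minus-per-minute calculation are all invented for illustration; the point is only that a “points per game” style stat collapses everything into personal scoring, while even a rough metric built from detailed on-court data starts to capture a player’s impact on the whole game.

```python
# Illustrative only: a 1950-style stat (points per game) versus a crude
# metric built from detailed stint-level data (plus-minus per minute).
# All numbers below are made up for the sake of the example.

from dataclasses import dataclass

@dataclass
class Stint:
    minutes: float        # minutes the player was on the floor
    team_points: int      # points his team scored while he was on
    opponent_points: int  # points the opponent scored while he was on
    points_scored: int    # points he personally scored

# A hypothetical player's stints across two games.
stints = [
    Stint(minutes=12, team_points=28, opponent_points=20, points_scored=4),
    Stint(minutes=18, team_points=35, opponent_points=38, points_scored=16),
    Stint(minutes=15, team_points=30, opponent_points=22, points_scored=6),
]
games_played = 2

# The 1950-style stat: points per game. Simple, but it ignores defense,
# passing, rebounding -- everything except personal scoring.
points_per_game = sum(s.points_scored for s in stints) / games_played

# A crude "impact" metric from the detailed data: net points per minute
# while the player is on the floor. Still simplistic, but it at least
# reflects his effect on the whole game, not just his own scoring.
net_points = sum(s.team_points - s.opponent_points for s in stints)
minutes_played = sum(s.minutes for s in stints)
plus_minus_per_minute = net_points / minutes_played

print(f"points per game:       {points_per_game:.1f}")
print(f"plus-minus per minute: {plus_minus_per_minute:+.2f}")
```

A real version would adjust for teammates, opponents, and sample size, but even this toy example shows why the question you ask of the data matters more than how quickly you can compute the answer.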

Today, most businesses are using the equivalents of the coach’s qualitative assessment and points per game to make their decisions. And quite frequently, “data driven” effectively means “we’re using points per game.”

Most of the new “Big Data” companies are focused on the relatively simple stuff: speed of processing data, ease of accessing data, beauty of data presentation. Those are all valuable, but they aren’t enough.

So how will the “bad metric” problem be solved? Certainly with some mix of better data training for everyone, plus tools that automatically discover and surface the important metrics. Both are important, and I’m not sure whether training trumps technology or technology trumps training.

Either way, if we want these new data to improve our collective decision-making, the good-metric/bad-metric problem badly needs to be solved.