Prof. Jan Bosch
Recently, I facilitated a workshop at a company that wants to become data-driven. Unlike the product companies I normally work with, this company is a service provider with a large staff offering services to customers. The workshop participants included the CEO and the head of business development, as well as several others in or close to the company’s leadership team.
In many ways, this looked to be the ideal setup: one would assume we had all the management support we needed and some of the smartest people in the company with us. This was reinforced by several people in the company sharing that they’d been working with data for quite a long time. Nevertheless, we ran into a significant set of challenges and didn’t get nearly as far as we’d hoped.
The first challenge was getting concrete about specific hypotheses to test. Even though we shared concrete examples of hypotheses and associated experiments when kicking off the brainstorming and teamwork, everyone had an incredibly hard time going from a high-level goal of improving a specific business KPI, e.g. customer satisfaction, to a specific hypothesis and an associated concrete experiment. There are many reasons for this. An obvious one is that many people feel that ‘someone’ should ‘do something’ about the thing they worry about, but never spend many brain cycles thinking about what that would actually look like.
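To illustrate the jump that proved so hard, here is a minimal sketch, in Python, of what forcing a vague KPI goal into a testable shape can look like. The field names and example values are purely illustrative assumptions on my part, not material from the workshop.

```python
# A minimal sketch of a hypothesis "card" that forces the jump from a vague
# goal ("improve customer satisfaction") to something falsifiable. All field
# names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    business_kpi: str        # the high-level goal the hypothesis serves
    belief: str              # the specific, falsifiable claim
    intervention: str        # the change we will actually make
    metric: str              # what we measure to confirm or reject the belief
    success_criterion: str   # the threshold that counts as confirmation

example = Hypothesis(
    business_kpi="customer satisfaction (NPS)",
    belief="slow first response on support tickets drives dissatisfaction",
    intervention="route new tickets to a rapid-response team for four weeks",
    metric="NPS among customers served by the rapid-response team",
    success_criterion="NPS at least 5 points above the control group",
)
```

Simply having to fill in the last two fields exposes whether ‘doing something’ has actually been thought through.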
The second challenge was that, for all the data the company had at its disposal, the data relevant to the situation at hand was frequently unavailable. Many companies I work with claim to have lots of data, and many in the organization are genuinely surprised that precisely the data they need hasn’t been recorded. On reflection, it’s obvious that this would be the case: the number of hypotheses one can formulate is virtually infinite and, consequently, the likelihood of the required data not being available is quite significant.
The third challenge we ran into was that even in the cases where the data was available, it turned out to be aggregated and/or recorded at too low a frequency to be relevant for the purpose at hand. So, we have the data, but it’s in a form that doesn’t allow for the analysis we want to do.
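A small, made-up illustration of the problem: once data has been aggregated, the pattern a hypothesis depends on may simply be gone, and no amount of analysis brings it back.

```python
# Made-up daily satisfaction scores (1-10) with a sharp mid-period dip,
# e.g. caused by a temporary staffing shortage. The aggregate that was
# actually stored -- a single period average -- hides the dip entirely.
daily_scores = [8, 9, 8, 8, 3, 2, 3, 8, 9, 8]

stored_average = sum(daily_scores) / len(daily_scores)
print(stored_average)  # 6.6 -- no trace of the collapse on days 5-7
```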
The response to these challenges is, as one would expect, to go out and collect the data we need to run the experiment and confirm or reject the hypothesis. The funny realization I had is that the more relevant and important a hypothesis is from a business perspective, the more likely it is to relate to regulatory constraints that limit what can be collected without going through a host of disclaimers and permissions. So, we ran into the situation that several of the more promising hypotheses were not testable due to legal constraints.
Finally, even when we had a specific hypothesis and associated experiment, and were able to collect the data we needed, it proved incredibly hard to scale the experiment to the point of statistical significance. Running a large-scale experiment that has a decent chance of failure, but that is very expensive and risky to run, rather defeats the purpose of experimentation.
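To see why scale bites so quickly, consider a back-of-the-envelope sample-size calculation using the standard normal-approximation formula for comparing two proportions. The baseline and target rates below are my own illustrative assumptions, not figures from the company.

```python
# Sample size per group for a two-proportion test (normal approximation).
# Detecting a lift from 10% to 12% at 95% confidence and 80% power already
# requires thousands of customers per arm -- a tall order when each data
# point is a real, costly service engagement.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # power requirement
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

print(sample_size_per_group(0.10, 0.12))  # 3839 customers in each arm
```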
Becoming a data-driven organization is one of the highest-priority goals any company should have. It allows for much higher-quality decision-making and operations while preparing for the use of AI as a key differentiator and enabler. However, going from words to action is a challenging journey where, ideally, you learn from other people’s mistakes before making new ones yourself. We need the data, but we need to be smart in execution.