A Cardinal Sin: Doing Thought Leadership Research Without Hypotheses
There we were, gathered in a hot conference room in the hotel basement: a team from a major consulting firm, summoned to analyze the results of a major global survey the firm had just conducted. Reams of printouts of Excel worksheets stood piled all around us. For the better part of two days, we pored over those documents, head-deep in analysis. We searched and searched for the “aha” moments that would yield the compelling insights needed to fuel the firm’s point of view on the topic and form the basis of a comprehensive research report the firm planned to publish on its findings.
The exercise was not inconsequential: The research was intended to provide the key messages and content that would underpin the firm’s marketing for the next 18 months. The team had to uncover something deep and differentiated to say if those marketing activities were going to generate any kind of interest from business executives.
The struggle wore on as the room got hotter until, near the end of the second day, the head of the practice the research was meant to support finally looked up and said what all of us were thinking: “What if we don’t have anything here?”
Gulp.
What went wrong? It wasn’t a lack of commitment. The firm had already invested six figures in direct costs to develop and field the survey, as well as additional money in the form of consultants’ time on the project. Nor had the firm engaged the wrong survey participants: It had targeted the right level of executives to take the survey. And it wasn’t a flaw in survey execution. A prominent and well-qualified survey house had successfully delivered the several hundred completed surveys the firm hoped to get.
No, the problem was that the firm committed one of the cardinal sins of research: Thou shalt not do research without hypotheses. In this case, the firm created a survey questionnaire that was essentially a grab bag of questions the practice’s consultants wanted answers to. It didn’t have a potential storyline in mind, nor did it fully know what it was trying to prove or disprove with the survey. One of the team members admitted as much. “We just thought we’d throw this out there and see what we get back,” he said. Well, mission accomplished!
This firm’s experience clearly, and painfully, illustrated just how important hypotheses are to effective research of any kind. At its most basic, a hypothesis is a clear statement of what a company intends to investigate. For example: “Cloud computing has helped companies significantly reduce their innovation cycle time” or “The highest-performing supply chains have a formal team dedicated to providing real-time insights into operational performance.” Hypotheses are absolutely vital for two key reasons.
They serve as formal guideposts to help ensure the research activities remain focused.
In any research effort, there’s always a temptation to try to cover far more than what’s possible, at least at any level of depth. Without hypotheses as a guide, you can end up “boiling the ocean,” which not only returns superficial or irrelevant data, but also causes survey participants to bail because they’re asked to answer far too many questions. (Then you have an entirely different problem on your hands: getting enough completed responses.) Or you can gather too little data and miss feedback that could lead to something interesting, novel and compelling.
Hypotheses provide the structure for developing the questions you’ll use to probe the topic, such as those in the survey questionnaire. By developing only questions aligned with the hypotheses (about three to five for each), you’ll stick to what’s truly important. The key is to find the right scope: Each hypothesis should not be so broad that it can’t be covered adequately by a survey, yet not so narrow that it leaves little room for new discovery.
They lay the early foundation for a story the firm can tell.
Good hypotheses force the research team to think through the logic of the research by laying out a preliminary story about the topic that the team believes to be true (a story the research will eventually support or refute). For instance, a typical research report unfolds across a few high-level sections:
- What the business problem at hand is
- How prevalent it is and how it’s affecting companies’ businesses
- How most companies have attempted to solve the problem and how those efforts have fallen short (and why)
- What characteristics, mindsets, practices, approaches and other factors set apart the companies that have solved the problem, and what benefits those companies have generated
If you develop a few strong hypotheses aligned to each of those sections, you’ll be sure to get data to support the entire storyline. That way, you won’t be scrambling during analysis to plug logic or narrative holes you didn’t realize you had. And a bonus: With hypotheses structured in this way, you’ll cut a lot of time off the back end. Analysis, point of view development, and report writing become much more efficient because you’ve already clearly articulated what you’re looking for and what you hope to say.
Think of it this way: Hypotheses are your research road map. They point you in the right direction and keep you heading there. Just as you wouldn’t set off on a long trip without knowing where you’re going or how you’re getting there, you shouldn’t conduct research without hypotheses. And that’s something the consulting firm in the hotel basement could have benefited from.
And what of that firm? It ultimately identified a number of acceptable insights and findings from the research, but not before undergoing many more weeks of analysis and meetings. And it was able to produce a research report that was received well enough by clients and prospects for the firm to establish an annual survey (with a commitment to using hypotheses going forward). Still, it’s hard not to think that, guided by good hypotheses, that first survey would have generated even more powerful insights and probably much more attention, with far less energy, time and money spent to get there.