Beyond the Numbers: The Five Facets of Good Analysis

In my experience, many organizations approach research as a necessity but remain blind to its benefit to the business. Research becomes a chore and seldom adds value to the enterprise. As a result, time and resources are wasted as data is gathered, stored, and then misused, if it is used at all. The root cause is usually poor analysis of the data.

Analysis is simply taking apart the information in order to understand its meaning and relevance. Good analysis should therefore provide a bridge between information and strategic action. In the absence of analysis, however, research is often abused. I have witnessed numerous situations where data is used to justify activity instead of to guide it. There are five facets to the process of good analysis: understanding your vision, using appropriate methods, understanding the value of the data, understanding the meaning of the data, and understanding the relevance of the research.

1. Understanding Your Vision
In business, it is always easiest to move forward with our strengths. This seems like a natural course of action, but it can lead to devastating results, especially in research. The temptation is to take a “proven” approach to finding answers with little thought to its relevance to the vision of the enterprise and the situation at hand. Standard instruments such as surveys and focus groups are familiar, but will they deliver the right answers? More importantly, are you even asking the right questions?

Analysis has to happen before research even begins. Important questions to ask include:

  • What is our vision?
  • What is keeping us from achieving our vision?
  • What do we really need to know to overcome these obstacles?
  • What questions should we be asking to understand these obstacles and how to overcome them?

2. Using Appropriate Methods
Coca-Cola’s market share had been in a steady decline from the end of World War II to the early 1980s. Coke’s chief rival, Pepsi, began to outsell the beverage maker. The ubiquitous “Pepsi Challenge” taste test campaign seemed to be working: the United States public was showing a preference for a sweeter cola. As a result, the company changed the formulation of its flagship beverage and launched “New Coke” in 1985. The launch was well supported by a battery of research results. The new, sweeter formulation consistently beat Pepsi and the old Coke formulation in taste testing. There was also a consistent vocal minority of detractors in focus groups; however, they represented only ten percent of the respondents. The data seemed to indicate that people would like “New Coke” more than Pepsi.


The launch went well for the first week; there was a noticeable increase in market share. However, by the next week market share returned to pre-launch levels. After that, public opinion turned on Coke. People began to ridicule the company and demanded a return to the old formulation. In less than three months Coca-Cola reinstated the original formulation of its classic beverage, losing millions of dollars in the process.

The real issue was not which cola tasted better, but which cola was preferred: a subtle but vital difference. The market researchers at Coca-Cola were measuring a secondary issue (taste) instead of the critical issue (preference). The real purpose of Coca-Cola was not to deliver the best-tasting cola but to sell more cola than its rivals. The leadership of Coca-Cola came to realize the importance of other issues such as branding and perception.

Once a clear understanding of the vision and issues is established, appropriate methods need to be chosen to match those issues. Simply knowing that you are going on a hunting expedition does not tell you which weapon to take: you need very different equipment for hunting bear than for hunting squirrels. The same principle is even more true of research, where the wrong tool can lead to misleading and inappropriate results.

Some good questions to ask when choosing an appropriate research tool include:

  • What is the true, core issue at hand?
  • What is the best way to understand this issue?
  • Will the data produced by this method actually answer our questions?

3. Understanding the Value of the Data
Not all data is equal. Some data sets simply have more value than others. Fortunately, mathematics gives us many tools to understand and appreciate the value of data. Unfortunately, these tools are seldom used or understood.

Most leaders treat accuracy of the data as the highest value in research. Few of them realize that accuracy is a paradox. Mathematics defines accuracy as the percentage difference between the measured value and the actual value; in other words, to know the accuracy of a measurement you must already know the true answer before you take the measurement. Accuracy is therefore more a measure of the value of a testing method than of the value of a set of data, and in most cases we perform research precisely because we do not know the “right answer.” Fortunately, the science of statistics gives us many other tools to estimate the value of data.
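To make the paradox concrete, here is a minimal Python sketch of a standard percent-error calculation (the function name and the numbers are invented purely for illustration). Notice that the true value is an input: without it, the formula cannot be evaluated.

```python
def percent_error(measured: float, actual: float) -> float:
    """Percentage difference between a measured value and the true value.

    The catch: `actual` must already be known, which is exactly what
    most research sets out to discover in the first place.
    """
    return abs(measured - actual) / abs(actual) * 100.0

# Invented numbers: a survey estimates 42% market share, and the true
# share (known only in hindsight) turns out to be 45%.
print(percent_error(measured=42.0, actual=45.0))  # prints ~6.67
```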


Many of these tools address the concept of variance, which is a measure of how scattered the numbers in a set of data are. It is commonly accepted that consistent data indicates a higher likelihood of accuracy. Regrettably, many study results ignore measurements of variance.

Executive summaries of data will typically report the average of results; however, there is often a more complicated story behind the simplified numbers. An overdependence on executive summaries is dangerous. It is imperative for leaders to understand the value of the data at hand before leaping to conclusions.
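To see how an identical average can hide very different realities, consider this minimal sketch (the sales figures are invented): two data sets share the same mean, but their standard deviations, the square root of the variance, tell very different stories.

```python
from statistics import mean, stdev

# Two invented weeks of daily sales figures.
steady   = [98, 100, 100, 100, 102]
volatile = [40, 60, 100, 140, 160]

for name, week in (("steady", steady), ("volatile", volatile)):
    print(f"{name}: mean = {mean(week):.1f}, std dev = {stdev(week):.1f}")

# steady:   mean = 100.0, std dev = 1.4
# volatile: mean = 100.0, std dev = 51.0
# A summary reporting only "average daily sales: 100" describes
# both weeks identically, hiding the scatter entirely.
```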

Some appropriate questions to ask when determining the value of a set of data include:

  • Is a simple average of the numbers a typical value for this set of data?
  • How scattered is my data?
  • Do these values really indicate any differences or anything meaningful?

4. Understanding the Meaning of the Data
We all try to find patterns in data. For example:

  • Every fatality in the United States Space program has happened within a five-day span of the calendar dates 27 January to 1 February: 27 January 1967, 28 January 1986 and 1 February 2003. Does this mean that this particular calendar week causes fatal accidents at the National Aeronautics and Space Administration (NASA)?
  • A traffic study in Spain found that yellow cars are involved in four percent fewer accidents. Will painting every car yellow reduce accidents?

Much research makes the mistake of rushing to find correlations in a set of data and stopping at that point. Technically speaking, a correlation in data simply means that there is a mathematical relationship between two sets of values: as one set of values changes, the other set of values also changes. In the NASA example above, as the calendar nears the final week of January, the number of fatalities increases. Unfortunately, many people jump to the conclusion that correlated data means that one value actually causes the other.

Consider another real-world example. The Children’s Environmental Health Center released a study claiming that second-hand smoke negatively affects the cognitive skills of children. The research showed a correlation between exposure to second-hand smoke and poor performance on basic cognitive tests. But this result does not prove that second-hand smoke lowers a child’s intelligence. It could just as easily be that parents with lower cognitive skills are more likely to be smokers and more likely to have children with lower cognitive skills.1 The research did not show how second-hand smoke would cause a lowering of cognitive skills; it only showed a correlation.
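The confounder logic is easy to demonstrate with a short simulation. The sketch below is a hypothetical model, not the Center’s data: it generates two variables that never influence each other, yet they correlate strongly because both are driven by a third, hidden factor.

```python
import random
from statistics import correlation  # available in Python 3.10+

random.seed(1)

# A hidden common cause, e.g. some underlying household factor.
hidden = [random.gauss(0, 1) for _ in range(1000)]

# Two variables with no causal link between them: each is just
# the hidden factor plus its own independent noise.
a = [h + random.gauss(0, 0.5) for h in hidden]
b = [h + random.gauss(0, 0.5) for h in hidden]

print(f"correlation(a, b) = {correlation(a, b):.2f}")  # roughly 0.8
```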

Careful analysis of the data must also be honest and logical. Correlation may indeed indicate causality, but does not necessarily prove it.

Some good questions to ask when trying to understand the meaning of a set of data would include:

  • Do any of the factors that I am considering correlate?
  • Do any of these correlations necessarily indicate any causality between them?
  • Are there other factors that were not considered in the research that could be causing these results?

5. Understanding the Relevance of the Research
In Numbers 13-14 we find that God promised to lead his chosen people to the land of Canaan. As the Hebrews approached the borders of Canaan, Moses sent out twelve men to investigate the land. When all twelve returned they related the same set of data: the land had abundant agricultural wealth, it was full of intimidating people and the cities were well fortified against an invasion. While the results were consistent, the recommendations were not. Ten of the twelve advocated abandoning the vision because a human invasion would be impossible. Moses and the other two investigators disagreed. They advocated clinging to the vision and relying on a different strategy: depending on God to give them the land instead of resorting to a standard human invasion.

You should never change your vision based on circumstances. A vision is born out of a desire to change the current context—not to conform to it. While research should never lead an enterprise to change its vision,2 it should always affect its strategies. A strategy describes the path from the current context toward the future vision. Good research is a reliable picture of the current context. Research that is not used to change strategy is wasted research.

Conclusion
Many organizations choose to cut corners and bypass some or all of the five facets above to save time and money. Any research done in this type of environment is dangerous in that it will likely become disconnected from the vision and strategic activity of the enterprise. Research can be a very powerful tool, but only when it is actually used, and used responsibly.


Endnotes

1) http://256.com/gray/thoughts/2004/20040511.html
2) The only exception would be research indicating that the vision has been achieved. This should mean that the enterprise should dissolve since its mission has been accomplished.


Scott Friderich has over twelve years of product and religious research experience in North America, Europe, Central Asia and the Pacific Rim. In July 2006 he founded Clarity Research to continue his work in research consultancy. He can be contacted at [email protected].