Research and Analytics: Better Accepted With a Pinch of Salt

Published on: 04 Feb 2016
7 min read
Category : Leadership

Nostradamus should be the patron saint of the Big Data and Analytics cult. I say this because we have hardly paused to reflect on what analytics can or cannot do. Despite warnings from Taleb, Kahneman and Dobelli, we have chosen to ignore their caution about the human proclivity for cognitive biases.

We often overstate statistical coincidences. We confuse predictive analysis with post-facto analysis meant for diagnosis or mere comprehension, which is about what happened and not what caused it. We have become foolhardy enough to assert that we can predict human behaviour without framing it in a context. An elementary understanding of behavioural science should tell us that, let alone predicting, even comprehending human behaviour without framing it in a context is impossible.

We also do not set up hypotheses and test them before we declare causal connections. Worse, our quality of data, sampling and data collection is often appalling. I say this of the best analytical firms and teams with whom I have worked, not of amateurs.
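To make the omission concrete, here is a minimal sketch, with invented numbers, of what setting up and testing a hypothesis looks like before one may claim even an association, let alone a causal connection. The scenario (a training programme and attrition) is hypothetical; the test used is a standard independence test.

```python
# A minimal sketch, with invented numbers: test a null hypothesis before
# claiming even an association, let alone a causal connection.
from scipy.stats import chi2_contingency

# H0: attending a (hypothetical) training programme and staying with the
# firm are independent. Rows: attended / did not attend; columns: stayed / left.
observed = [[180, 20],
            [160, 40]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")

if p_value < 0.05:
    # An association exists -- but confounders and selection effects must
    # still be ruled out by research design before calling it causal.
    print("Reject H0: association found; causation still unproven.")
else:
    print("Fail to reject H0: no evidence of even an association.")
```

Even a rejected null hypothesis only licenses a claim of association; causation demands the more rigorous designs discussed below.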

In my 32 years of working, with occasional exceptions, I have rarely come across analysts and researchers who understand the difference between the three types of research:

  • Exploratory

  • Descriptive Diagnostic

  • Experimental

The rigour required for each of these is very different in terms of research design and construct. The statistical methods applied can still be the same for all three, but the objective and limitations are where the differences lie. For the purpose of this article I will use the term research synonymously with analysis, because that is what analysts do.

Exploratory research can at best achieve the purpose of gaining hypothetical insights into a problem, hypothetically extrapolating trends, and/or isolating a few possible (not probable) causal factors behind a phenomenon, each of which then requires deeper and more focused research. Almost all survey-type research based on self-reported data falls into this type.

These are surveys where respondents offer an opinion or preference, confirm satisfaction or dissatisfaction, confirm a like or dislike, endorse a person, position or product, articulate a potential buy or sell orientation, promise advocacy, rank a set of subjects or variables on the basis of certain attributes, and so on.

In such research, the researcher has no means to determine the authenticity of a response, to verify it against behaviour (especially future behaviour or intention to act), or to gather secondary evidence to corroborate the self-reporting; above all, there is no way to verify the application of mind and seriousness of response for every surveyed item.

For any business to use exploratory research to make decisions on committing resources, altering strategy, launching initiatives or products, or taking calls on customer/employee behaviour or preferences is a huge leap of faith. The output from this kind of research is only one step better than broad generalisation.

90% of all the research/analysis that business organisations rely on falls into the exploratory category. And 90% of the business managers who thump the table about relying only on research for decision making have no clue about the serious limitations of the research design they are trusting.

In survey-based, self-reported research, the other serious limitation is the quality of data collection, which depends on the commitment, involvement, intelligence, grasp, discipline and integrity of the data collector. Incentives offered per respondent surveyed muddy the waters further.

Where it is an online survey, the limitation becomes the motivation, commitment, involvement, patience, focus, discipline and application of mind of the online respondent. What makes both face-to-face and online data collection unreliable is the unpredictability of the rigour with which the response is reported or recorded. This inherent limitation often plays spoilsport with the quality of data.

Where a team uses secondary data without knowing the source of the data, the authenticity of that source, the interference and modification carried out on it (popularly called massaging the data), its uniformity and comparability with respect to the time of collection, the context in which it was collected, the construct and inherent biases of the forms or instruments used to collect it, the nature of the questions which elicited it, and so on, the quality of the data is seriously degraded. Yet it is exactly this kind of data which is mined and used by the analytics teams of most organisations.

Let us examine a few examples. One of the main reasons some banks got their retail unsecured lending wrong during the 2004 to 2008 phase was the unreal faith they had reposed in their analytics to come up with what was popularly called predictive credit scoring. The same was true of credit derivatives.

The analysts in this case placed extraordinary reliance on surrogate data to predict the future credit behaviour of customers, with no reference to future contexts. They believed that white-collar employees, or those who held savings accounts with a certain level of balance in their banks, would default the least. The irony here is that assumptions were passed off as posits. There was no empirical basis for believing either of the two assumptions would hold true in reality. There was no comprehension that future contexts may not resemble the present one, especially given the cyclical nature inherent in any economy.
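This failure mode is easy to demonstrate. The following sketch runs on purely synthetic data (every figure and effect size is invented): it fits a scoring model in a "boom" context where the two surrogate assumptions hold, then scores it in a "bust" context where they no longer do, and the model's discriminating power collapses.

```python
# Purely synthetic sketch: a credit-scoring model fitted in a boom,
# then scored after the cycle turns. All data and effect sizes invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(bust: bool, n: int = 20_000):
    # Surrogates: a white-collar flag and a standardised savings balance.
    white_collar = rng.integers(0, 2, n)
    balance = rng.normal(0.0, 1.0, n)
    # Boom: the surrogates genuinely correlate with repayment.
    # Bust: job losses hit white-collar borrowers too, so the correlation
    # vanishes and the base default rate rises -- the context has changed.
    effect = 0.0 if bust else 1.0
    base = -2.0 if bust else -3.0
    logit = base - effect * (1.0 * white_collar + 0.8 * balance)
    default = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    X = np.column_stack([white_collar, balance])
    return X, default.astype(int)

X_boom, y_boom = simulate(bust=False)   # the context the model is built in
X_bust, y_bust = simulate(bust=True)    # the future context it must survive

model = LogisticRegression().fit(X_boom, y_boom)
auc_boom = roc_auc_score(y_boom, model.predict_proba(X_boom)[:, 1])
auc_bust = roc_auc_score(y_bust, model.predict_proba(X_bust)[:, 1])
print(f"AUC in the training context: {auc_boom:.2f}")   # looks strong
print(f"AUC when the cycle turns:    {auc_bust:.2f}")   # ~0.5, a coin toss
```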

In fact, business leaders were so flush with pride about this analytical miracle tool that they made case studies of it and bandied it about everywhere. As with everything claimed as research, numerous gullible admirers jumped onto this “Credit Scoring Marvel”. The fact that banks are not credit bureaus, and have no access to behavioural data beyond that of their own existing customers, was not grasped by people who are otherwise very perceptive. This kind of analytics was destined for disaster.

Of late we have become even wilder with predictive analysis. We now believe that by sitting in corporate offices and playing computer games with captive bought-out data, mixing it with proprietary data, we can direct our sales force to identify customers in the physical marketplace. The irony here is that the only thing we know about these people, who currently do not do business with us, is based on dubious, non-standard and randomly assembled bought-out secondary data. This kind of potpourri (khichdi) data is riddled with all the limitations of secondary data that I have detailed in an earlier paragraph.

We should take note that the predictive validity of psephology based on exit polls, and of weather prediction based on purely observed data, is barely moderate. If that does not humble us about playing wild with behavioural data, nothing will.

The same is true, in my experience, of what are now called “Employee Engagement Surveys”. The weights an employee assigns to the various factors that have a bearing on him differ from person to person and are also transient. That transience makes assignment of the weights difficult; the weights are also life-stage and context dependent.

When the stock markets are in the middle of a bull run and an organisation’s stock is doing well, most employees will endorse or articulate a preference for an ESOP-loaded salary mix. Unfortunately these are not year-to-year reversible decisions. The next year, if a bear chill catches the market, the same employee will dis-endorse the ESOP-loaded salary mix he had endorsed or preferred a mere 12 months before.

The same is true of subsidised loans or a company car. When interest rates are high, the study will say loans promote engagement; when rates fall, cashing out will appear in the research as the engagement-enhancing move.

We also miss that different factors have different thresholds. Take, for example, pride as an engagement driver. The threshold for high endorsement of Pride in one’s organisation is very high: endorsement levels below 85% for Pride are rare. The thresholds for Compensation, Equity, Fairness or Transparency, however, are always moderate: their highest endorsement levels will rarely exceed 65%. How would we normalise this rating idiosyncrasy? How will an engagement study help us determine which independent variables drive these dependent variables? Worse, how could a simple frequency distribution establish any causal relationship?

That we do not ask these questions is shocking to me. Would we accept a pathology report if it had these limitations or fallacies?
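To be fair, the first of those questions has a partial, mechanical answer: compare each factor not against other factors’ raw percentages, but against its own historical norm. A minimal sketch follows, with invented benchmark figures, since the actual thresholds vary by organisation.

```python
# Sketch: normalise endorsement levels against factor-specific norms
# instead of comparing raw percentages across factors. All figures invented.
import statistics

# Hypothetical endorsement levels (%) for each factor from prior years.
benchmarks = {
    "Pride":        [88, 90, 86, 91, 89],   # a high-threshold factor
    "Compensation": [58, 62, 55, 60, 57],   # a moderate-threshold factor
}
this_year = {"Pride": 84, "Compensation": 63}

for factor, score in this_year.items():
    norm = benchmarks[factor]
    z = (score - statistics.mean(norm)) / statistics.stdev(norm)
    print(f"{factor}: raw {score}% -> z = {z:+.2f} against its own norm")

# Raw scores say Pride (84%) dwarfs Compensation (63%). The z-scores say
# the opposite: Pride is unusually low for Pride, and Compensation is
# unusually high for Compensation. Note, though, that this only re-scales
# the ratings; it establishes nothing about what causes either score.
```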

The same is true of consumer research. When you ask an ICICI Bank customer who has not experienced other banks, for similar products and at the same level of transaction intensity or frequency, to rank ICICI Bank against, say, four other banks, what insight can this research offer? In a self-reporting survey, how would the person collecting data verify whether the respondent’s reporting of her experience with other banks (assuming she has any) is authentic? Research and analysis surely cannot be based on trust alone!

In my experience, when the factor or survey item of highest importance to the customer is scored high or low, the rest of the factors or items in the survey are impacted by this to varying degrees. In survey-type exploratory research, there is no way to isolate the causality. It is like a lab test which confirms a state of illness but cannot give the doctor any insight into what may be causing it. It becomes a guessing game. We do not see it as a guessing game because we gaze at numbers and tables and convince ourselves that the conclusions flow from them and not from our guesses.

The problem with behavioural survey-based research is that even confirming the presence or absence of anything is at best a generalisation, with no acceptable levels of reliability or validity. Very few researchers in these studies care to report these measures, because no two successive years use the same research design and structure, making comparability impossible. Hardly anyone questions this gross indiscipline in research.
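For the curious, “reporting reliability” is not exotic. Here is a minimal sketch computing Cronbach’s alpha, a standard internal-consistency measure, on invented responses; it is precisely the kind of figure these studies rarely publish, and which is meaningless to compare unless the design stays constant year to year.

```python
# Sketch: Cronbach's alpha, a standard internal-consistency (reliability)
# measure for a multi-item survey scale. All responses below are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering a three-item "engagement" scale (1-5).
responses = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 5],
    [2, 2, 3],
    [4, 4, 4],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# Conventionally, alpha >= 0.7 is treated as acceptable reliability. Unless
# such figures (and the design producing them) are held constant from year
# to year, cross-survey comparisons are meaningless.
```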

Is it not shocking that business leaders and boardrooms gobble up this kind of research as gospel truth or unassailable fact? Donald R. Keough, the former President of Coca-Cola, narrates in his book “The Ten Commandments for Business Failure” that when a customer survey revealed customers wanted Coke to be sweeter, Coke, like most analytics-devoted businesses, went on to change the cola formulation and make it sweeter. According to Don, it took Coke a full year to recover customer loyalty and get back on the rails. Half-baked analytics had turned Coke into Pepsi!

In 1994, we at Brooke Bond Lipton India were enlightened by market research which said that the Indian market was waiting to explode threefold in premium ice cream consumption. Paying heed to this, the organisation acquired every litre of ice cream manufacturing and marketing capacity in India and, not satisfied with that, set up a global-scale ice cream factory at Nasik. This is the story of Kwality Walls. Twenty years on, I understand the market is still waiting to explode, and the capacities acquired are not fully utilised.

The thrust of this article is not that research and analytics are a waste. It is to caution about what kind of research is needed for decision making. Descriptive diagnostic research is a must whenever we seek to establish causality. Experimental research is a must whenever we seek to predict performance or behaviour, and it is seriously expensive. Exploratory research can only help you frame hypotheses. Sadly, what most global consulting firms purvey is exploratory research, and most consumer and employee research is exploratory too.

Hence we should be aware that what we get from consulting firms and in-house analytics is hypothesis, not empirical insight. Where research is not verifiable when repeated multiple times, where its causal proof is poor, or where performance change cannot be ascertained or verified, such research cannot be the sole basis for strategy change or resource commitment.

Almost no innovation in the world is the product of analytics or exploratory research. Let me close with a few tongue-in-cheek comments. Check out what Steve Jobs thought of consumer research and its usefulness for innovation! It took the CIA 10 years to find Osama, analytics notwithstanding. Research and analytics in the behavioural area are useful, but let us take it with a pinch of salt when someone tells us it can predict future behaviour with reliability.
