Statistics denial, statistics debacles & the coming flood of statistical malfeasance

Analytics   |   Published June 22, 2015

This is the second instalment in the series of essays on Statistics Denial by Randy Bartlett, Ph.D. To read other articles in the series, click here.
There is a flood of statistical malfeasance heading this way. We are going to see more statistics debacles, like those at AIG, Fannie Mae, Moody’s, Fitch Ratings, S&P Ratings, et al.  Also, corporations are going to miss countless opportunities to better understand their customers; to design smarter products; and to escape disrupted industries.  The situation is being exacerbated by promotional hype, which at its worst minimizes the one thing we need to wade through messy data—applied statistics.

Statistics Debacles:
Here are a few of the countless failures in getting the statistics right and in integrating results into decision making.

  • AIG’s multi-million-dollar cash factory relied upon risk models.  When the models were not properly maintained, they went from ‘insuring’ tranches containing 2% subprime mortgages in 2002 to tranches containing 95% subprime mortgages in 2005.  On paper, their $57.8 billion portfolio was listed as AAA.  When the music stopped, AIG saw its credit rating downgraded and ended up in a liquidity squeeze by September 16, 2008.  (See ‘The Man Who Crashed The World’ by Michael Lewis, Vanity Fair.)
  • Fannie Mae’s risk department repeatedly told senior management that a crash was coming in the mortgage market.  Management’s incentive structure led them to stay the course.  Figure 1 illustrates the dramatic drop in Real Housing Prices.  (See ‘A Practitioner’s Guide To Business Analytics,’ McGraw-Hill, pp. 27-34.)

Figure 1: Real Housing Price Index

  • The rating agencies, Moody’s, Fitch Ratings, and Standard & Poor’s, engaged in such practices as relying only on clients’ models and not obtaining granular data.  These practices undermined their own modeling efforts.  (See ‘The Big Short: Inside The Doomsday Machine’ by Michael Lewis, W.W. Norton.)  Not everything should be privatized.
  • Arbitron could have had the Google Analytics business, but instead it failed to handle the statistical aspects of Big Data.  Now it is owned by Nielsen.
  • Google Flu Trends was unable to live up to its claim, ‘… we can accurately estimate the current level of weekly influenza activity in each region of the United States, with a reporting lag of about one day.’  It was another case of Big Data hubris; more data did not circumvent the underlying statistics problems.  (See ‘Why Google Flu Is A Failure’ by Steven Salzberg, Forbes Magazine.)

We must anticipate misunderstandings about statistics and depictions of data analysis devoid of any understanding of statistics.  Low statistics literacy IS part of what is being denied (Statistics Denial Myth #0) and makes us more susceptible to myths and misleading promotions.  As people acquire greater statistics literacy, they begin to recognize more fully the potential for a flood of statistical malfeasance.  Alternatively, we can see some of the problems by watching Statistical Review, to be discussed later.
Let us continue by sizing up this low statistics literacy, which might be unwelcome news for some.  First, for those familiar with the contrast in data quality between government sampled data (NHIS, NHANES, MEPS, ‘the Census,’ et al.) and data collected via sampling in private industry, we claim that this contrast illustrates differences in statistics literacy, especially on the consumer side.  Second, in the past, statistics was not taught until around college.  Now public schools teach descriptive statistics at around the sixth grade, and informal statistical topics arise as early as the third grade!  We tend to train people for our past.  These two points suggest the magnitude of the gap between the training we have and the training we need.
Another aspect of the literacy problem is that when people learn statistics in the context of another discipline, a small number of them have trouble disassociating the statistics from that discipline.  Think of text analytics taught in an English class.
The Coming Flood Of Statistical Malfeasance:
We are going to have more statistics debacles merely because we are going to perform more data analysis, with or without the ‘Internet of Things.’  This flood will be exacerbated by a backdrop of low statistics literacy and a foreground of promotional mania ranging from subtle positioning to carnival barkers.  The quest for false novelty and a ‘not invented here’ mentality have produced a new crop of ‘UFO sightings’ and reinforced the oldies.  These take the form of extreme mischaracterizations, rebrandings, and repackagings of statistics, which at their worst displace the main bastion protecting us from statistical malfeasance.  The first and most transparent mischaracterization is that this is really a ‘data rush’: once we spend all of the resources getting the data … all we need for data analysis are reports and data visualizations (or graphs, as we used to call them).
When statistics was repackaged as part of Six Sigma, the repackaging leveraged a false novelty that successfully sold statistics ideas to new customers.  Six Sigma wisely incorporated Deming’s process thinking and was equipped for problems common to manufacturing, such as quality control, process control, and design of experiments.  However, the repackaging adulterated statistics and left behind the breadth needed to solve problems outside of manufacturing.  Six Sigma was most successful in manufacturing, where it originated and where the engineering culture suited it; it was less successful everywhere else because it did not have a way to expand statistical thinking, tools, and best practices.
Six Sigma tried to turn everyone into a light analyst, which fared better in an engineering culture.  It did not try hard enough to incorporate applied statisticians/quants, who could be leveraged to review and raise the quality of the data analysis; to expand the breadth of practice; and to find breakthrough applications.  This combination ensured that the statistics was too light and clung too rigidly to a sequential approach or recipe.  Much of the learning about the business comes from advanced data analysis, which is accelerated by committing specialized resources to it.  When Six Sigma was least successful, it was either rejected or it created an environment that facilitated poor practice and hucksterism.
Close:
We want to ignore promotional hype that makes us run in circles with our panties on fire.  Some people have figured out how to make money from producing promotional hype; they just cannot figure out statistics.  In the coming blogs, we will describe the approach needed to avoid statistics debacles and to prepare for the coming flood of statistical malfeasance.  We just need to bravely face the ‘statistics’ and employ best practice.
We sure could use Deming right now.  Many of us who consume or produce data analysis hang out in the new LinkedIn group, About Data Analysis.