Accuracy and bias in machine learning models – Overview

Machine Learning   |   Published January 24, 2019

Machine learning workflows traditionally focus on optimization and model training. Performance parameters such as bias and accuracy determine whether one model is considered better than another, and a model is generally assumed adequate for practical use if it meets these criteria. Why the model tends to make certain predictions is rarely examined or questioned. Yet the ability to comprehend and interpret a model is critical for improving model quality, transparency, and trust, and for reducing instances of bias.

Because complex ML models are, by their nature, hard to understand, it is essential to employ approximations in order to explain how they function. One such approach is LIME (Local Interpretable Model-agnostic Explanations), a method that can provide an accessible explanation of the individual decisions a complex ML model makes.
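As a minimal sketch of how LIME is typically applied to a tabular classifier (the dataset and model below are placeholders chosen purely for illustration, not a recommendation):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from lime.lime_tabular import LimeTabularExplainer

    # Toy data and a "black box" model standing in for any trained classifier
    data = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    # LIME fits a simple local surrogate around one prediction at a time
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )
    explanation = explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=4)

    # Each tuple is a feature condition and its local weight for this prediction
    print(explanation.as_list())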

Accuracy in machine learning models

In general, a data science workflow in ML follows these steps: collect data, clean and prepare the data, train ML models, and finally select a good model based on performance criteria such as validation and test error. Usually, when a model performs well, it is deployed into production. Little thought is given to the reason behind its performance; in other words, 'why did it perform well?'
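That workflow, stripped to its bare bones, might look something like the following sketch (scikit-learn and a built-in toy dataset are used here as placeholders); note that nothing in it asks why the chosen model performs well:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score, train_test_split

    # Collect and prepare the data (a built-in toy dataset stands in here)
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42)

    # Train a candidate model
    model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

    # Accept or reject the model purely on performance numbers
    print("validation accuracy:", cross_val_score(model, X_train, y_train, cv=5).mean())
    print("test accuracy:      ", model.score(X_test, y_test))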

In machine learning there is generally a trade-off between model complexity and accuracy: the more complicated a model is, the more difficult it is to analyze and interpret. A linear model is very easy to explain, as it only captures the linear relationship between the predictors and the response. However, because it only accounts for linearity, it cannot model complex relationships, which means its prediction accuracy on test data is bound to be low. At the opposite end of the spectrum are deep neural networks. Because they stack many levels of abstraction, they can model even convoluted relationships and reach very high accuracy. But that same complexity renders them black boxes: it is difficult to understand how all the features interact to produce the model's predictions, so the developer has to use accuracy, error, and other performance criteria as a substitute for how reliable they think the model is.
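A toy illustration of that trade-off (a sketch using scikit-learn on a synthetic non-linear dataset; the models and any resulting numbers are placeholders, not a benchmark):

    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # A deliberately non-linear problem: two interleaving half-moons
    X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Linear model: its two coefficients tell the whole story, but accuracy suffers
    linear = LogisticRegression().fit(X_train, y_train)
    print("linear model accuracy:", linear.score(X_test, y_test))
    print("linear model coefficients:", linear.coef_)

    # Small neural network: higher accuracy, but no comparably simple explanation
    net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                        random_state=0).fit(X_train, y_train)
    print("neural network accuracy:", net.score(X_test, y_test))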

The usual machine learning workflow does not involve attempting to understand the predictions an ML model makes. Some of the reasons why model explanation and understanding should become part of the ML workflow are:

  • Improving the model
  • Improving Transparency & Trust
  • Determining & Eliminating bias

Among these, identifying and preventing bias is the most important. Let's take a brief look at the first two and then examine why eliminating bias is critical for an ML model.

Improving the ML model

A basic understanding of the relationships between classes, predictions, and features, and of which features were integral to a given decision, helps developers judge whether the model's behaviour makes practical sense. Moreover, with this extra information about how and on which features predictions are made, a developer is in a better position to determine whether the model has detected meaningful patterns and how well it will generalize to new cases. One common way to obtain this information is to measure how much each feature contributes to the model's performance, as in the sketch below.
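A hedged sketch of one such check, here using scikit-learn's permutation importance on a toy dataset (the dataset and model are illustrative placeholders, and this is only one of several possible techniques):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Toy dataset and model standing in for any trained classifier
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the held-out score drops;
    # features the model truly relies on cause the largest drops.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for idx in result.importances_mean.argsort()[::-1][:5]:
        print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")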

Transparency & trust

Understanding ML models is essential to improving trust in, and offering transparency about, their decisions and predictions. A machine learning model, regardless of where it is employed, be it medicine or business, can have grave consequences if it does not perform as expected. A developer with a better comprehension of their model can save time and expense in the long run.

Determining and eliminating bias

This is one of the most important reasons to look closely at and understand a machine learning model. A biased model is the result of biased training data: if the training data contains subtle biases, the ML model will learn them and turn them into a self-fulfilling prophecy. Some examples of biased machine learning models: an ML model employed to recommend sentence lengths for prisoners reproduced in its predictions the racial bias inherent in the justice system; models used in recruiting often reproduce the gender biases still embedded in our society. A simple first check for such bias is to compare performance across sensitive groups, as sketched below.
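A minimal sketch of what such a check might look like: compare a performance metric across sensitive groups (all data below is synthetic and the group labels are hypothetical, standing in for real model output and a real sensitive attribute):

    import numpy as np
    import pandas as pd
    from sklearn.metrics import accuracy_score

    # Entirely synthetic evaluation data: true labels, model predictions, and a
    # sensitive attribute per test example (placeholders for real model output)
    rng = np.random.default_rng(0)
    n = 1000
    results = pd.DataFrame({
        "group":  rng.choice(["A", "B"], size=n),
        "y_true": rng.integers(0, 2, size=n),
    })
    # Simulate a model that errs more often on group B
    flip = rng.random(n) < np.where(results["group"] == "B", 0.30, 0.10)
    results["y_pred"] = np.where(flip, 1 - results["y_true"], results["y_true"])

    # Compare the metric per group; large gaps are a cue to inspect the training data
    for group, subset in results.groupby("group"):
        print(group, round(accuracy_score(subset["y_true"], subset["y_pred"]), 3))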

Conclusion

ML models are thus a very powerful tool today, and they are set to become even more prevalent in the immediate future. It is critical that data scientists work towards identifying bias and preventing it from being reinforced.