K-Nearest Neighbors and the Curse of Dimensionality in Python (Scikit-Learn)

Machine Learning | Published March 1, 2016

What Is K-Nearest Neighbors (KNN)?

KNN is one of the simplest machine learning algorithms, and it is a lazy algorithm: it doesn't run computations on the training data until you give it a new data point you want to test.

In this tutorial, I will not only show you how to implement k-nearest neighbors in Python (scikit-learn), but also investigate the influence of higher-dimensional spaces on the classification.

The implementation will be specific to a classification problem and will be demonstrated using the digits data set.

How Does K-Nearest Neighbors Work?

Let's say you have several apples and oranges, plus one unclassified fruit. If K is 3, the algorithm looks at the 3 nearest neighbors of the unknown fruit and classifies it as an orange (as there are two oranges and one apple among those neighbors).

If K is 5, the algorithm looks at the 5 nearest neighbors and classifies the unknown fruit as an apple (3 apples and 2 oranges).

KNN classifies an unknown item based on a majority vote among its neighbors. Each neighbor can either be given an equal weight, or the vote can be weighted by distance. The similarity measure depends on the type of data: for real-valued data, the Euclidean distance can be used; for other types of data, such as categorical or binary data, the Hamming distance can be used. Since there is minimal training involved, there is a high computational cost associated with testing a new data point. I recommend reading Saravanan's blog to learn more about KNN.
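
To make the majority vote concrete, here is a minimal from-scratch sketch of the idea (this is just an illustration, not the scikit-learn implementation used later in the post; knn_predict and its arguments are names I made up). It computes the Euclidean distance to every training point and takes an unweighted majority vote among the k closest ones.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # X_train and y_train are NumPy arrays; x_new is a single feature vector
    # Euclidean distance from the new point to every training point
    distances = np.sqrt(((X_train - x_new) ** 2).sum(axis=1))
    # Indices of the k closest training points
    nearest = np.argsort(distances)[:k]
    # Unweighted majority vote among the k neighbors
    return Counter(y_train[nearest]).most_common(1)[0][0]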

Analyzing The Digits Data Set

First, I import all the required Python libraries into my IPython notebook.

[Screenshot: IPython notebook cell importing the required libraries]

Seaborn is a Python library for making attractive statistical graphics; it is built on top of matplotlib. sklearn.datasets is used to import the default data sets present in scikit-learn. sklearn.cross_validation is used to perform cross-validation on your data set, and sklearn.grid_search is used to select the best parameter K. If you don't know what is meant by parameter selection and cross-validation, please watch the week 6 videos of Coursera's machine learning course. I will explain sklearn.decomposition and sklearn.metrics later in this post.
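
The import cell itself is only shown as an image above, but based on the description it presumably looks roughly like this (using the pre-0.18 scikit-learn module layout this post relies on):

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

import sklearn.datasets          # load_digits, load_iris
import sklearn.cross_validation  # train_test_split, KFold (older scikit-learn layout)
import sklearn.grid_search       # GridSearchCV (older scikit-learn layout)
import sklearn.neighbors         # KNeighborsClassifier
import sklearn.decomposition     # TruncatedSVD
import sklearn.metrics           # accuracy_score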

[Screenshot: loading the digits data set in scikit-learn]

I then load the digits data set and store the data and target values in the X and Y variables. X has 1797 rows and 64 columns, and Y has 1797 rows and one column. You can print digits.DESCR to learn more about this data set.
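
Roughly reconstructed from the description, the loading step looks like this (the exact cell may differ):

digits = sklearn.datasets.load_digits()
X = digits.data    # shape (1797, 64): one row per image, 64 pixel features
Y = digits.target  # shape (1797,): the digit labels 0-9
print(digits.DESCR)  # prints a description of the data set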

Train-Test Split And Mean Normalization

[Screenshot: train-test split and mean normalization]

I split the data set into a train and a test set, using 33% of the samples as my test data. I then mean-normalize X_train and X_test.
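
A minimal sketch of this step, assuming the normalization simply subtracts the training-set mean (the original notebook cell may also divide by the standard deviation):

X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(
    X, Y, test_size=0.33, random_state=42)

# Mean-normalize using statistics computed on the training set only
train_mean = X_train.mean(axis=0)
X_train = X_train - train_mean
X_test = X_test - train_mean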

Projection Of Principal Components

I create a scatter plot of the projections onto the first two principal components.

[Screenshot: TruncatedSVD projection code]

[Figure: scatter plot of the first two principal components]

You can see here that I use the sklearn.decomposition.TruncatedSVD function to reduce the number of components. It performs linear dimensionality reduction very similar to PCA, but operates directly on the sample vectors instead of on the covariance matrix.
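
A sketch of the projection and scatter plot, assuming two components and one color per digit class (plotting details such as the colormap are illustrative):

# Project the mean-normalized training data onto the first two components
svd = sklearn.decomposition.TruncatedSVD(n_components=2)
X_proj = svd.fit_transform(X_train)

plt.scatter(X_proj[:, 0], X_proj[:, 1], c=Y_train, cmap='Paired')
plt.xlabel('First component')
plt.ylabel('Second component')
plt.show()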

Cross-Validation To Estimate The Optimal Value For K

I am going to do a ten-fold cross-validation to estimate the best K value. Apart from estimating the best K value, I am also interested in the influence of the number of dimensions I project the data down to. This means that I am going to optimize K for different dimensional projections of the data.

[Screenshot: compute_test function]

Implementation Of K-Nearest Neighbors

[Screenshot: implementation of k-nearest neighbors in Python]

You don't have to panic when you see the code above; I will explain it line by line. In the implementation, I set different values for K (from 1 to 20). I then put these K values into a dictionary, because GridSearchCV accepts parameter values only as a dictionary.
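
The dictionary presumably looks like this, with 'n_neighbors' (the KNeighborsClassifier parameter name) mapping to the candidate K values:

# Candidate K values from 1 to 20, wrapped in a dictionary for GridSearchCV
parameters = {'n_neighbors': list(range(1, 21))}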


In the next line of the code, I call the nearest neighbors classifier from scikit-learn: knearest = sklearn.neighbors.KNeighborsClassifier().

Don't be confused that I introduce the Iris data set here. In this section, I am going to explain what GridSearchCV does using the Iris data set.

First, I load the Iris data set and then perform a train-test split.

[Screenshot: loading the Iris data set]
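
Loading the Iris data presumably looks like this:

iris = sklearn.datasets.load_iris()
X = iris.data    # four measurements per flower
Y = iris.target  # species labels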

X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(X, Y, test_size = 0.33, random_state = 42)

If you are really curious about random_state, read this Stack Overflow thread here.

Then I fit the nearest neighbors classifier to my data set.

[Screenshot: fitting the KNeighborsClassifier with GridSearchCV]

clf = sklearn.grid_search.GridSearchCV(knn, parameters, cv = 10): here I pass my nearest neighbors classifier, the parameter dictionary, and the cross-validation value to GridSearchCV. Even if you don't understand what cross-validation is or what GridSearchCV does, don't worry about it; it just selects the best parameter K for you. This is all you have to know about GridSearchCV.

[Screenshot: clf.best_params_ output]

You can see that GridSearchCV does all the hard work for you and returns the best K parameter.
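
Putting the Iris example together, a sketch of the grid search looks like this (the exact K it reports depends on the data and the split):

knn = sklearn.neighbors.KNeighborsClassifier()
clf = sklearn.grid_search.GridSearchCV(knn, parameters, cv=10)
clf.fit(X_train, Y_train)

print(clf.best_params_)  # e.g. {'n_neighbors': ...}, the K that scored best in cross-validation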

Explaining The Effect Of Dimensions In K-Nearest Neighbors

OK! Let me continue explaining the code for my digits data set.

First, I create two empty lists and a list containing the numbers from 1 to 10.

accuracy = []

params = []

no_of_dimensions = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

I then loop over no_of_dimensions using a for loop (for d in no_of_dimensions:).

Then I call TruncatedSVD from Scikit-Learn:

svd = sklearn.decomposition.TruncatedSVD(n_components=d)

Then I fit the SVD to my training data (X_train) and apply the transform method to my test data.

if d < 64:
    X_fit = svd.fit_transform(X_train)
    X_fit_atest = svd.transform(X_test)

Now I fit my classifier to the truncated X_fit and Y_train. When you are fitting your classifier to your data set, remember to use X_fit instead of X_train.

clf.fit(X_fit, Y_train)

Understanding Accuracy Scores

In this line of code, accuracy.append(compute_test(x_test = X_fit_atest, y_test = Y_test, clf = clf, cv = 10)), I compute the accuracy score for every dimension using the compute_test function. In compute_test, sklearn.cross_validation.KFold gives the indices to do a 10-fold cross-validation split, and I then calculate the accuracy score for X_fit_atest and Y_test.
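
The compute_test function is only shown as an image above; a rough reconstruction consistent with that description (splitting the test set into cv folds with KFold and averaging the accuracy of the already-fitted classifier on each fold) might look like this:

def compute_test(x_test, y_test, clf, cv):
    # Indices for a cv-fold split of the test set (older KFold signature)
    kf = sklearn.cross_validation.KFold(len(y_test), n_folds=cv)
    scores = []
    for train_index, test_index in kf:
        predictions = clf.predict(x_test[test_index])
        scores.append(sklearn.metrics.accuracy_score(y_test[test_index], predictions))
    return np.mean(scores)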

Conclusion

The accuracy gets better as the dimensions increase. I have enough data points that the curse of dimensionality does not harm my predictions here and the additional dimensions add to the class separability.

I hope this post has given you a good idea of how k-nearest neighbors operates, and how the dimensionality of the data affects your classification accuracy.