Computers can automatically classify data using the k-nearest-neighbor algorithm.
For instance: given the sepal length and width, a computer program can determine whether the flower is an Iris Setosa, an Iris Versicolour or another type of flower.
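The idea behind k-nearest-neighbor is simple: a new sample gets the class that is most common among its k closest training samples. A minimal sketch of the algorithm in plain Python, using Euclidean distance (the points and labels below are made-up values, not the real dataset):

```python
from collections import Counter
import math

def knn_predict(train_points, train_labels, new_point, k=3):
    # distance from the new point to every training point
    distances = [math.dist(p, new_point) for p in train_points]
    # indices of the k closest training points
    nearest = sorted(range(len(train_points)), key=lambda i: distances[i])[:k]
    # majority vote among the labels of those neighbors
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# made-up sepal length/width pairs with class labels (0 = Setosa, 1 = Versicolour)
points = [(5.1, 3.5), (4.9, 3.0), (7.0, 3.2), (6.4, 3.2), (5.0, 3.6)]
labels = [0, 0, 1, 1, 0]

print(knn_predict(points, labels, (5.0, 3.4)))  # nearest neighbors are class 0
```

The scikit-learn classifier used below does the same thing, with a faster neighbor search.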
Dataset

We start with data, in this case a dataset of plants.
Each plant has unique features: sepal length, sepal width, petal length and petal width. The measurements of different plants can be taken and saved into a spreadsheet.
The type of plant (species) is also saved, which is one of these classes:
Iris Setosa (0)
Iris Versicolour (1)
Iris Virginica (2)
Putting it all together, we have a dataset:
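One way to see the dataset as a table of measurements plus species is to load it into a pandas DataFrame (this uses the standard iris data bundled with scikit-learn):

```python
from sklearn import datasets
import pandas as pd

# load the iris measurements into a table, one row per flower
iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)

# add the species class (0, 1 or 2) as a column
df['species'] = iris.target

print(df.head())
```

Each of the 150 rows holds the four measurements of one flower plus its class.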
We load the data. This is a famous dataset; it is included in the scikit-learn module. Alternatively, you can load a dataset using Python pandas.
import matplotlib
matplotlib.use('GTKAgg')

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets

# import some data to play with
iris = datasets.load_iris()

# take the first two features
X = iris.data[:, :2]
y = iris.target

print(X)
X contains the first two features: each row is one flower, and the columns are sepal length and sepal width. The y array contains the class of each flower.
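To verify what is in these arrays, you can inspect their shapes and first entries; the values in the comments are for the standard scikit-learn iris dataset:

```python
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, :2]   # sepal length and sepal width, one row per flower
y = iris.target        # class per flower: 0, 1 or 2

print(X.shape)         # (150, 2): 150 samples, 2 features
print(y.shape)         # (150,): one class label per sample
print(X[0], y[0])      # first flower's measurements and its class
```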
Plot data

We will use the two features of X to create a plot, with X[:, 0] on one axis and X[:, 1] on the other.
import matplotlib
matplotlib.use('GTKAgg')

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets

# import some data to play with
iris = datasets.load_iris()

# take the first two features
X = iris.data[:, :2]
y = iris.target
h = .02  # step size in the mesh

# put the data points into a plot
plt.figure()
plt.scatter(X[:, 0], X[:, 1])
plt.xlim(X[:, 0].min() - 1, X[:, 0].max() + 1)
plt.ylim(X[:, 1].min() - 1, X[:, 1].max() + 1)
plt.title("Data points")
plt.show()
This will output the data:
Classify with k-nearest-neighbor

We can classify the data using the kNN algorithm. We create the classifier, fit the data and plot the decision regions using:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets

n_neighbors = 15
h = .02  # step size in the mesh

iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target

# create color maps, one color per class
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

# we create an instance of the kNN classifier and fit the data
clf = neighbors.KNeighborsClassifier(n_neighbors)
clf.fit(X, y)

# build a mesh of points covering the feature space
xx, yy = np.meshgrid(np.arange(X[:, 0].min() - 1, X[:, 0].max() + 1, h),
                     np.arange(X[:, 1].min() - 1, X[:, 1].max() + 1, h))

# predict class using data and kNN classifier
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i)" % n_neighbors)
plt.show()
which outputs a plot showing the three classes:
Prediction

We can use this data to make predictions. Given the position on the plot (which is determined by the features), a point is assigned a class. We can put a new data point on the plot and predict which class it belongs to.
The code below will make a prediction based on the input given by the user:
from sklearn import neighbors, datasets

n_neighbors = 6

# import some data to play with
iris = datasets.load_iris()

# prepare data
X = iris.data[:, :2]
y = iris.target

# we create an instance of the kNN classifier and fit the data
clf = neighbors.KNeighborsClassifier(n_neighbors, weights='distance')
clf.fit(X, y)

# predict the class for the sepal length and width entered by the user
sepal_length = float(input("Enter sepal length (cm): "))
sepal_width = float(input("Enter sepal width (cm): "))
prediction = clf.predict([[sepal_length, sepal_width]])
print("Predicted class: %s" % iris.target_names[prediction[0]])
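The weights='distance' option makes closer neighbors count more heavily in the vote than distant ones; the default, weights='uniform', gives each of the k neighbors an equal vote. A quick way to compare the two variants is to score each on the same data:

```python
from sklearn import neighbors, datasets

iris = datasets.load_iris()
X = iris.data[:, :2]
y = iris.target

scores = {}
for weights in ('uniform', 'distance'):
    clf = neighbors.KNeighborsClassifier(n_neighbors=6, weights=weights)
    clf.fit(X, y)
    # mean accuracy on the data the model was fitted on
    scores[weights] = clf.score(X, y)
    print(weights, scores[weights])
```

Note that scoring on the training data flatters distance weighting (every point is its own closest neighbor); a held-out test set gives a fairer comparison.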
Load dataset and plot

You can choose the graphical toolkit; this line is optional:
matplotlib.use('GTKAgg')
We start by loading the modules and the dataset. Without data we can't make good predictions.
The first step is to load the dataset. The data will be loaded using Python pandas, a data analysis module. It will be loaded into a structure known as a pandas DataFrame, which allows for easy manipulation of the rows and columns.
We create two arrays: X (size) and Y (price). Intuitively we’d expect to find some correlation between price and size.
The data will be split into a training and a test set. Once the data is split, we can find a best fit line and make predictions.
import matplotlib
matplotlib.use('GTKAgg')

import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
import pandas as pd

# Load CSV and columns
df = pd.read_csv("Housing.csv")

Y = df['price']
X = df['lotsize']

# reshape into column vectors (a pandas Series has no reshape method)
X = X.values.reshape(len(X), 1)
Y = Y.values.reshape(len(Y), 1)

# Split the data into training/testing sets
X_train = X[:-250]
X_test = X[-250:]

# Split the targets into training/testing sets
Y_train = Y[:-250]
Y_test = Y[-250:]
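With the split done, the best fit line is found by fitting a LinearRegression model on the training set and predicting on the test set. A runnable sketch of that step, substituting made-up lot sizes and prices for Housing.csv (which is not included here):

```python
import numpy as np
from sklearn import linear_model

# made-up lot sizes and prices standing in for Housing.csv
X = np.array([3000, 4000, 5000, 6000, 7000, 8000], dtype=float).reshape(-1, 1)
Y = np.array([60000, 80000, 100000, 120000, 140000, 160000], dtype=float).reshape(-1, 1)

# hold out the last two samples for testing
X_train, X_test = X[:-2], X[-2:]
Y_train, Y_test = Y[:-2], Y[-2:]

# fit the best fit line on the training data
regr = linear_model.LinearRegression()
regr.fit(X_train, Y_train)

# predict prices for the held-out lot sizes
print(regr.predict(X_test))
```

Because the made-up data is perfectly linear (price = 20 × size), the predictions land on 140000 and 160000; on real housing data the fit will of course be noisier.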