ML | Dummy classifiers using sklearn

The following are the strategies a dummy classifier can use to predict the class label. All of them ignore the input features entirely, which is what makes them useful as no-skill baselines.

  1. most_frequent: the classifier always predicts the most frequent class label in the training data.
  2. stratified: generates predictions by respecting the class distribution of the training data. It differs from the "most_frequent" strategy in that each prediction is a random draw from that distribution, so a data point receives the majority label only with the probability of that label's frequency.
  3. uniform: generates predictions uniformly at random.
  4. constant: the classifier always predicts a user-supplied constant label; it is mainly useful for evaluating metrics on a non-majority class.
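
To make the four strategies concrete, here is a minimal sketch (an addition, not from the original walkthrough) that fits each strategy on a tiny imbalanced toy dataset and prints its predictions; X_toy and y_toy are made-up names for illustration.

import numpy as np
from sklearn.dummy import DummyClassifier

X_toy = np.zeros((6, 1))                          # dummy classifiers ignore the features
y_toy = np.array(['M', 'B', 'B', 'B', 'B', 'B'])  # 'B' is the majority class

for strategy in ['most_frequent', 'stratified', 'uniform']:
    toy_clf = DummyClassifier(strategy=strategy, random_state=0)
    toy_clf.fit(X_toy, y_toy)
    print(strategy, '->', toy_clf.predict(X_toy))

# 'constant' needs an explicit label to predict; here we force the minority class 'M'
toy_clf = DummyClassifier(strategy='constant', constant='M')
toy_clf.fit(X_toy, y_toy)
print('constant ->', toy_clf.predict(X_toy))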

Now let's see the implementation of dummy classifiers using the sklearn library:

Step 1: Import the required libraries

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier   # needed for the dummy models in Step 3
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import seaborn as sns

Step 2: Read the dataset

cd C:\Users\Dev\Desktop\Kaggle\Breast_Cancer
# Change this path to wherever the data file lives on your machine

df = pd.read_csv('data.csv')

# Separate the dependent and independent variables
y = df['diagnosis']
X = df.drop('diagnosis', axis=1)
X = X.drop('Unnamed: 32', axis=1)
X = X.drop('id', axis=1)

 

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
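
Before training the baselines it helps to look at the class balance, because the most_frequent baseline's test accuracy will simply equal the majority class's share of the test split. A quick optional check (an addition to the original steps):

print(y_train.value_counts(normalize=True))
print(y_test.value_counts(normalize=True))
# The larger of the two proportions is the accuracy a 'most_frequent' dummy should reach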

Step 3: Train the dummy models

strategies = ['most_frequent', 'stratified', 'uniform', 'constant']

test_scores = []
for s in strategies:
    if s == 'constant':
        # 'constant' needs an explicit label; 'M' (malignant) is the minority class
        dclf = DummyClassifier(strategy=s, random_state=0, constant='M')
    else:
        dclf = DummyClassifier(strategy=s, random_state=0)
    dclf.fit(X_train, y_train)
    score = dclf.score(X_test, y_test)
    test_scores.append(score)
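
Before plotting, the raw numbers can also be printed next to each strategy name; this small loop is an addition to the original code:

for strategy, score in zip(strategies, test_scores):
    print(f'{strategy:>13}: {score:.3f}')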

Step 4: Analyze the results

 

ax = sns.stripplot(x=strategies, y=test_scores)
ax.set(xlabel='Strategy', ylabel='Test Score')
plt.show()
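
As a sanity check on the plot, the baseline accuracies can be estimated from the class frequencies alone; the sketch below is an addition and assumes the diagnosis labels are 'M' and 'B':

p = (y_test == 'M').mean()            # share of the minority class in the test split
print('most_frequent ~', 1 - p)       # always predicts the majority class 'B'
print('stratified    ~', p**2 + (1 - p)**2)
print('uniform       ~', 0.5)         # two classes, chosen at random
print('constant(M)   ~', p)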

Step 5: Train the KNN model

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
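
As an optional final comparison (an addition to the original code), printing the best baseline score next to the KNN score makes the gap explicit:

print('Best dummy accuracy:', max(test_scores))
print('KNN accuracy:', clf.score(X_test, y_test))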

Comparing the score of the KNN classifier with the scores of the dummy classifiers, we can conclude that KNN is a genuinely useful classifier for this data: it clearly outperforms every no-skill baseline.