
ML | Dummy classifiers using sklearn


The following are the strategies a dummy classifier can use to predict the class label:

  1. most_frequent: The classifier always predicts the most frequent class label in the training data.
  2. stratified: Generates predictions by respecting the class distribution of the training data. It differs from "most_frequent" in that it draws random predictions whose probabilities match the training-class frequencies, rather than always returning a single label.
  3. uniform: Generates predictions uniformly at random.
  4. constant: The classifier always predicts a user-supplied constant label; this is mainly useful when evaluating metrics on a non-majority class.
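The difference between the strategies can be seen on a tiny made-up dataset (the data below is invented purely for illustration):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

# Toy imbalanced data: 8 samples of class 'B', 2 of class 'M'
X = np.zeros((10, 1))
y = np.array(['B'] * 8 + ['M'] * 2)

for strategy in ['most_frequent', 'stratified', 'uniform', 'constant']:
    kwargs = {'constant': 'M'} if strategy == 'constant' else {}
    clf = DummyClassifier(strategy=strategy, random_state=0, **kwargs)
    clf.fit(X, y)
    print(strategy, clf.predict(X))
```

Here "most_frequent" outputs only 'B', "constant" outputs only 'M', while "stratified" and "uniform" output random mixtures of the two labels.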

Now let's look at the implementation of dummy classifiers using the sklearn library.

Step 1: Import the required libraries

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import seaborn as sns

Step 2: Read the dataset

# Change this path to wherever data.csv is stored
# cd C:\Users\Dev\Desktop\Kaggle\Breast_Cancer

df = pd.read_csv('data.csv')

# Separate dependent and independent variables
y = df['diagnosis']
X = df.drop(['diagnosis', 'Unnamed: 32', 'id'], axis=1)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
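If the Kaggle CSV is not at hand, the same Wisconsin breast-cancer data ships with sklearn itself; a minimal stand-in for the loading step (feature names and the 0/1 label encoding differ from the CSV) looks like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Bundled copy of the dataset: 569 samples, 30 numeric features,
# target encoded as 0 (malignant) / 1 (benign) instead of 'M' / 'B'
X, y = load_breast_cancer(return_X_y=True)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
print(X_train.shape, X_test.shape)
```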

Step 3: Train the dummy models

strategies = ['most_frequent', 'stratified', 'uniform', 'constant']

test_scores = []
for s in strategies:
    if s == 'constant':
        # 'M' (malignant) is the minority class in this dataset
        dclf = DummyClassifier(strategy=s, random_state=0, constant='M')
    else:
        dclf = DummyClassifier(strategy=s, random_state=0)
    dclf.fit(X_train, y_train)
    score = dclf.score(X_test, y_test)
    test_scores.append(score)
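A useful property to keep in mind when reading these scores: the "most_frequent" dummy's accuracy always equals the majority-class share of the evaluated labels. A small check on synthetic data (invented here for illustration):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # features are irrelevant to a dummy
y = np.where(rng.random(100) < 0.7, 'B', 'M')  # roughly 70% 'B'

clf = DummyClassifier(strategy='most_frequent').fit(X, y)
majority_share = max(np.mean(y == 'B'), np.mean(y == 'M'))
print(clf.score(X, y), majority_share)  # equal by construction
```

Any real classifier that cannot beat this baseline has learned nothing beyond the class imbalance.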

Step 4: Analyze the results

 

ax = sns.stripplot(x=strategies, y=test_scores)
ax.set(xlabel='Strategy', ylabel='Test Score')
plt.show()

Step 5: Train the KNN model

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))

Comparing the score of the KNN classifier with those of the dummy classifiers, we conclude that KNN genuinely learns something from this data: it clearly beats every dummy baseline.
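The whole comparison can be condensed into a self-contained sketch using sklearn's bundled copy of the dataset instead of the Kaggle CSV (so the label is 0/1 rather than 'M'/'B'):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Baseline: always predict the majority class
dummy = DummyClassifier(strategy='most_frequent').fit(X_train, y_train)
# Real model: 5-nearest-neighbors
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print('dummy:', dummy.score(X_test, y_test))
print('knn:  ', knn.score(X_test, y_test))
```

On this data the KNN accuracy should come out well above the dummy baseline.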
