ML | Logistic regression using Python

User Database — this dataset contains information about users from a company's database: user ID, gender, age, estimated salary, and whether a purchase was made. We use this dataset to predict whether a user will purchase a newly launched company product or not.

Data —  User_Data

Let's create a logistic regression model that predicts whether a user will buy a product or not.

Importing the libraries

import pandas as pd

import numpy as np

import matplotlib.pyplot as plt

Loading dataset — User_Data

dataset = pd.read_csv('...User_Data.csv')

Now, to predict whether a user will purchase the product or not, we need to find the relationship between Age and Estimated Salary. User ID and Gender are not important factors for this.

# input
x = dataset.iloc[:, [2, 3]].values

# output
y = dataset.iloc[:, 4].values
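As a quick sanity check (a minimal sketch; it assumes the CSV columns are ordered User ID, Gender, Age, EstimatedSalary, Purchased, which is what the positional indices above rely on), you can confirm which columns those indices select:

# Hypothetical check of the column layout assumed by iloc above.
print(dataset.columns.tolist())
print(dataset.iloc[:, [2, 3]].head())  # expected: Age and EstimatedSalary
print(dataset.iloc[:, 4].head())       # expected: the Purchased labels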

Splitting the dataset into training and test sets: 75% of the data is used to train the model, and 25% is used to test its performance.

from sklearn.model_selection import train_test_split

xtrain, xtest, ytrain, ytest = train_test_split(
    x, y, test_size=0.25, random_state=0)
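The confusion matrix later in this article sums to 100 test samples, which implies 400 rows in total; a quick shape check (a minimal sketch under that assumption) makes the 75/25 split concrete:

# With 400 rows, a 25% test split leaves 300 samples for training.
print(xtrain.shape, xtest.shape)  # expected: (300, 2) (100, 2)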

Now it is very important to scale the features, because the Age and Estimated Salary values lie in very different ranges. If we do not scale the features, the Estimated Salary feature will dominate the Age feature whenever the model relies on distances between data points in the feature space.

from sklearn.preprocessing import StandardScaler

sc_x = StandardScaler()
xtrain = sc_x.fit_transform(xtrain)
xtest = sc_x.transform(xtest)

print(xtrain[0:10, :])

Output:

[[ 0.58164944 -0.88670699]
 [-0.60673761  1.46173768]
 [-0.01254409 -0.5677824 ]
 [-0.60673761  1.89663484]
 [ 1.37390747 -1.40858358]
 [ 1.47293972  0.99784738]
 [ 0.08648817 -0.79972756]
 [-0.01254409 -0.24885782]
 [-0.21060859 -0.5677824 ]
 [-0.21060859 -0.19087153]]

Here the age and estimated salary features have been standardized, so they are now on a comparable scale. Hence, each feature will contribute equally to the decision.
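A quick way to verify this (a minimal sketch using only the arrays defined above): after StandardScaler, each column of the training data should have mean approximately 0 and standard deviation approximately 1.

# Standardized columns: mean ~0 and standard deviation ~1.
print(xtrain.mean(axis=0))  # approximately [0. 0.]
print(xtrain.std(axis=0))   # approximately [1. 1.]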

Finally, we train our logistic regression model.

from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(random_state=0)
classifier.fit(xtrain, ytrain)

After training the model, it's time to use it to make predictions on the test data.

y_pred = classifier.predict(xtest)
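Under the hood, logistic regression passes a linear score through the sigmoid function and predicts class 1 when the resulting probability exceeds 0.5. A minimal sketch of this relationship, using the coef_, intercept_ and predict_proba attributes of the fitted scikit-learn model:

# Probability of class 1 for each test sample, as reported by the model.
proba = classifier.predict_proba(xtest)[:, 1]

# The same probabilities computed by hand: sigmoid of the linear score.
scores = xtest @ classifier.coef_.T + classifier.intercept_
manual_proba = 1.0 / (1.0 + np.exp(-scores.ravel()))

print(np.allclose(proba, manual_proba))                   # True
print(np.array_equal(y_pred, (proba > 0.5).astype(int)))  # True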

Let's check the performance of our model with a confusion matrix.

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(ytest, y_pred)
print("Confusion Matrix:", cm)

Output:

Confusion Matrix: [[65  3]
 [ 8 24]]

Out of 100 test samples:
TrueNegative + TruePositive = 65 + 24 = 89
FalsePositive + FalseNegative = 3 + 8 = 11
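Accuracy follows directly from these counts: correctly classified samples divided by the total. A minimal sketch computing it from the matrix above:

# scikit-learn's convention: rows are true classes, columns are predictions.
tn, fp, fn, tp = cm.ravel()
accuracy = (tn + tp) / (tn + fp + fn + tp)
print(accuracy)  # (65 + 24) / 100 = 0.89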

Performance measure — accuracy

from sklearn.metrics import accuracy_score

print("Accuracy:", accuracy_score(ytest, y_pred))

Output:

 Accuracy: 0.89 
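Accuracy alone can hide class-specific behavior: the test set is imbalanced (68 non-buyers versus 32 buyers, per the confusion matrix), so it can be worth also looking at per-class precision and recall. A short optional addition using scikit-learn's classification_report:

from sklearn.metrics import classification_report

# Per-class precision, recall and F1-score for the same predictions.
print(classification_report(ytest, y_pred))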

Visualizing the performance of our model on the test set.

from matplotlib.colors import ListedColormap

X_set, y_set = xtest, ytest

# Build a dense grid covering the (scaled) feature space.
X1, X2 = np.meshgrid(
    np.arange(start=X_set[:, 0].min() - 1,
              stop=X_set[:, 0].max() + 1, step=0.01),
    np.arange(start=X_set[:, 1].min() - 1,
              stop=X_set[:, 1].max() + 1, step=0.01))

# Color each grid point by its predicted class to show the decision regions.
plt.contourf(X1, X2, classifier.predict(
    np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
    alpha=0.75, cmap=ListedColormap(('red', 'green')))

plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())

# Overlay the actual test points, colored by their true class.
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)

plt.title('Classifier (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

Output:

[Plot "Classifier (Test set)": the red/green decision regions over Age (x-axis) and Estimated Salary (y-axis), with the actual test points overlaid.]

Analyzing the performance measures (accuracy, the confusion matrix, and the plot), we can say that our model performs well on this dataset.