Logistic regression is used to predict whether a given patient has a malignant or benign tumor based on attributes in a given dataset.
Code: Load Libraries
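The import block was omitted from the extract; a typical set for this walkthrough (the exact list used by the article is an assumption) might be:

```python
# Core numerics, data handling, and plotting libraries used throughout.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```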

Code: Load dataset
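The loading code is missing from the extract. The article evidently reads a local CSV (the `info()` output below, with 33 columns including `id` and `Unnamed: 32`, matches the Kaggle "Breast Cancer Wisconsin" CSV). As a runnable stand-in, an equivalent DataFrame can be built from scikit-learn's bundled copy of the same data; note the stand-in lacks the `id` and `Unnamed: 32` columns:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Build a DataFrame equivalent to the Kaggle CSV's feature columns.
raw = load_breast_cancer()
data = pd.DataFrame(raw.data, columns=raw.feature_names)
# In sklearn's encoding, target 0 is malignant and 1 is benign.
data["diagnosis"] = pd.Series(raw.target).map({0: "M", 1: "B"})
data.info()
```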

Output:
RangeIndex: 569 entries, 0 to 568
Data columns (total 33 columns):
id                         569 non-null int64
diagnosis                  569 non-null object
radius_mean                569 non-null float64
texture_mean               569 non-null float64
perimeter_mean             569 non-null float64
area_mean                  569 non-null float64
smoothness_mean            569 non-null float64
compactness_mean           569 non-null float64
concavity_mean             569 non-null float64
concave points_mean        569 non-null float64
symmetry_mean              569 non-null float64
fractal_dimension_mean     569 non-null float64
radius_se                  569 non-null float64
texture_se                 569 non-null float64
perimeter_se               569 non-null float64
area_se                    569 non-null float64
smoothness_se              569 non-null float64
compactness_se             569 non-null float64
concavity_se               569 non-null float64
concave points_se          569 non-null float64
symmetry_se                569 non-null float64
fractal_dimension_se       569 non-null float64
radius_worst               569 non-null float64
texture_worst              569 non-null float64
perimeter_worst            569 non-null float64
area_worst                 569 non-null float64
smoothness_worst           569 non-null float64
compactness_worst          569 non-null float64
concavity_worst            569 non-null float64
concave points_worst       569 non-null float64
symmetry_worst             569 non-null float64
fractal_dimension_worst    569 non-null float64
Unnamed: 32                0 non-null float64
dtypes: float64(31), int64(1), object(1)
memory usage: 146.8+ KB
Code: dropping the columns "id" and "Unnamed: 32", as they play no role in prediction.
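The drop itself was omitted; a minimal sketch (the tiny stand-in frame here only illustrates the call, the real `data` comes from the CSV above):

```python
import numpy as np
import pandas as pd

# Stand-in frame containing the two throw-away columns.
data = pd.DataFrame({"id": [1, 2], "diagnosis": ["M", "B"],
                     "radius_mean": [17.99, 20.57],
                     "Unnamed: 32": [np.nan, np.nan]})

# Neither the row id nor the all-NaN trailing column carries signal.
data.drop(["id", "Unnamed: 32"], axis=1, inplace=True)
print(list(data.columns))
```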

Code: input and output
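The input/output split was omitted from the extract. A sketch of the usual step, assuming malignant is encoded as 1 and benign as 0 (the stand-in frame is illustrative only):

```python
import pandas as pd

# Stand-in for the cleaned DataFrame.
data = pd.DataFrame({"diagnosis": ["M", "B", "B"],
                     "radius_mean": [17.99, 20.57, 12.45],
                     "texture_mean": [10.38, 17.77, 15.70]})

# Target y: malignant -> 1, benign -> 0; inputs x_data: every other column.
y = data["diagnosis"].map({"M": 1, "B": 0}).values
x_data = data.drop(["diagnosis"], axis=1)
print(y)           # [1 0 0]
print(x_data.shape)
```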

Code: normalization
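The normalization code was omitted; a sketch of min-max scaling, which squeezes every feature into [0, 1] so that wide-ranged features (like `area_mean`) do not dominate the gradient updates:

```python
import pandas as pd

# Stand-in feature frame.
x_data = pd.DataFrame({"radius_mean": [10.0, 15.0, 20.0],
                       "texture_mean": [12.0, 18.0, 24.0]})

# Min-max normalization: (value - column min) / (column max - column min).
x = (x_data - x_data.min()) / (x_data.max() - x_data.min())
print(x.values)
```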

Code: Separate data for training and testing.
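The split code was omitted. Judging by the test-set size implied by the accuracies later in the article, a 15% test fraction fits; the transposition at the end matches the from-scratch code below, which expects features in rows and samples in columns (the random arrays are stand-ins for the real data):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in arrays with the dataset's dimensions.
x = np.random.rand(569, 30)
y = np.random.randint(0, 2, 569)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.15, random_state=42)

# Transpose so that each column is one sample.
x_train, x_test = x_train.T, x_test.T
print(x_train.shape, x_test.shape)
```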

Code: Weight and bias

def initialize_weights_and_bias(dimension):
    # start every weight at a small constant and the bias at zero
    w = np.full((dimension, 1), 0.01)
    b = 0.0
    return w, b
Code: sigmoid function, which maps the linear score z to a probability.
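The function body was omitted from the extract; the standard definition is:

```python
import numpy as np

def sigmoid(z):
    # Maps any real z into (0, 1), interpreted as P(y = 1).
    return 1 / (1 + np.exp(-z))

print(sigmoid(0))  # 0.5
```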

Code: Forward and backward propagation

def forward_backward_propagation(w, b, x_train, y_train):
    # forward propagation
    z = np.dot(w.T, x_train) + b
    y_head = sigmoid(z)
    # cross-entropy loss for each sample
    loss = -y_train * np.log(y_head) - (1 - y_train) * np.log(1 - y_head)
    # x_train.shape[1] to scale by the number of samples
    cost = np.sum(loss) / x_train.shape[1]
    # backward propagation
    derivative_weight = np.dot(x_train, (y_head - y_train).T) / x_train.shape[1]
    derivative_bias = np.sum(y_head - y_train) / x_train.shape[1]
    gradients = {"derivative_weight": derivative_weight,
                 "derivative_bias": derivative_bias}
    return cost, gradients
Code: updating parameters with gradient descent
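The update loop was omitted from the extract. A sketch of the step (the helper functions are repeated here for self-containment; the exact signature of `update` is an assumption): each iteration computes cost and gradients, then moves the parameters against the gradient, scaled by the learning rate.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward_backward_propagation(w, b, x_train, y_train):
    z = np.dot(w.T, x_train) + b
    y_head = sigmoid(z)
    loss = -y_train * np.log(y_head) - (1 - y_train) * np.log(1 - y_head)
    cost = np.sum(loss) / x_train.shape[1]
    gradients = {
        "derivative_weight": np.dot(x_train, (y_head - y_train).T) / x_train.shape[1],
        "derivative_bias": np.sum(y_head - y_train) / x_train.shape[1]}
    return cost, gradients

def update(w, b, x_train, y_train, learning_rate, number_of_iterations):
    cost_list = []
    for i in range(number_of_iterations):
        cost, gradients = forward_backward_propagation(w, b, x_train, y_train)
        cost_list.append(cost)
        # gradient-descent step
        w = w - learning_rate * gradients["derivative_weight"]
        b = b - learning_rate * gradients["derivative_bias"]
        if i % 10 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
    return {"weight": w, "bias": b}, cost_list
```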

Code: Predictions
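The prediction code was omitted; a sketch of the usual thresholding step, which labels a sample malignant (1) when the predicted probability exceeds 0.5:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def predict(w, b, x_test):
    # Probability above 0.5 -> predict class 1, otherwise class 0.
    z = sigmoid(np.dot(w.T, x_test) + b)
    y_prediction = np.zeros((1, x_test.shape[1]))
    for i in range(z.shape[1]):
        if z[0, i] > 0.5:
            y_prediction[0, i] = 1
    return y_prediction

# Toy check: positive weight, so a positive feature gives class 1.
w = np.array([[2.0]])
b = 0.0
print(predict(w, b, np.array([[3.0, -3.0]])))  # [[1. 0.]]
```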

Code: logistic regression
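The wrapper that ties the pieces together was omitted; a compact, self-contained sketch (helper functions repeated, exact signature an assumption) that trains the parameters and reports accuracy in the same format as the output below:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def initialize_weights_and_bias(dimension):
    return np.full((dimension, 1), 0.01), 0.0

def forward_backward_propagation(w, b, x, y):
    y_head = sigmoid(np.dot(w.T, x) + b)
    cost = np.sum(-y * np.log(y_head) - (1 - y) * np.log(1 - y_head)) / x.shape[1]
    grads = {"derivative_weight": np.dot(x, (y_head - y).T) / x.shape[1],
             "derivative_bias": np.sum(y_head - y) / x.shape[1]}
    return cost, grads

def predict(w, b, x):
    return (sigmoid(np.dot(w.T, x) + b) > 0.5).astype(float)

def logistic_regression(x_train, y_train, x_test, y_test,
                        learning_rate, num_iterations):
    w, b = initialize_weights_and_bias(x_train.shape[0])
    for i in range(num_iterations):
        cost, grads = forward_backward_propagation(w, b, x_train, y_train)
        w = w - learning_rate * grads["derivative_weight"]
        b = b - learning_rate * grads["derivative_bias"]
    print("train accuracy: {}%".format(
        100 - np.mean(np.abs(predict(w, b, x_train) - y_train)) * 100))
    print("test accuracy: {}%".format(
        100 - np.mean(np.abs(predict(w, b, x_test) - y_test)) * 100))
    return {"weight": w, "bias": b}
```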

Output:
Cost after iteration 0: 0.692836
Cost after iteration 10: 0.498576
Cost after iteration 20: 0.404996
Cost after iteration 30: 0.350059
Cost after iteration 40: 0.313747
Cost after iteration 50: 0.287767
Cost after iteration 60: 0.268114
Cost after iteration 70: 0.252627
Cost after iteration 80: 0.240036
Cost after iteration 90: 0.229543
Cost after iteration 100: 0.220624
Cost after iteration 110: 0.212920
Cost after iteration 120: 0.206175
Cost after iteration 130: 0.200201
Cost after iteration 140: 0.194860
Output:
train accuracy: 95.23809523809524%
test accuracy: 94.18604651162791%
Code: check results with linear_model.LogisticRegression
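The comparison code was omitted; a runnable sketch using scikit-learn's bundled copy of the same dataset as a stand-in (so the accuracies will differ slightly from the article's split and preprocessing):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Same data, same min-max scaling, same 15% test split as above.
raw = load_breast_cancer()
x = MinMaxScaler().fit_transform(raw.data)
y = raw.target
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.15, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
print("test accuracy:", clf.score(x_test, y_test))
print("train accuracy:", clf.score(x_train, y_train))
```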

Output:
test accuracy: 0.9651162790697675
train accuracy: 0.9668737060041408