1. Downloading, Installing and Running Python SciPy
Install the Python and SciPy platform on your system if you haven't already. You can follow the SciPy installation guide for your operating system.
1.1 Install SciPy libraries
This tutorial runs on Python 2.7 or 3.5+.
There are 5 key libraries that you will need to install for this tutorial: scipy, numpy, matplotlib, pandas and sklearn.
1.2 Start Python and check versions
It's a good idea to make sure your Python environment was installed successfully and is working properly.
The script below will help you check the environment. It imports every library needed in this tutorial and prints the version.
Type or copy and paste the following script:
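A script along the following lines does the check; it imports each library used in the rest of this tutorial and prints its version (the exact version numbers you see will depend on your installation):

```python
# Check the versions of the libraries used in this tutorial
import sys
print('Python: {}'.format(sys.version))
import scipy
print('scipy: {}'.format(scipy.__version__))
import numpy
print('numpy: {}'.format(numpy.__version__))
import matplotlib
print('matplotlib: {}'.format(matplotlib.__version__))
import pandas
print('pandas: {}'.format(pandas.__version__))
import sklearn
print('sklearn: {}'.format(sklearn.__version__))
```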
If you get an error, stop. Now is the time to fix that.
2. Load the data
Dataset — iris flower data
This is a well-known dataset, used by almost everyone as the "hello world" dataset of machine learning and statistics.
The dataset contains 150 observations of iris flowers. There are four columns of flower measurements in centimeters. The fifth column is the species of the observed flower. All observed flowers belong to one of three species.
2.1 Importing Libraries
First, let's import all the modules, functions and objects that will be used.
A working SciPy environment is required to continue.
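A combined import block consistent with the steps below might look like this (the names all come from pandas, matplotlib and scikit-learn; which ones you actually need depends on how far you follow along):

```python
# Load the libraries used throughout the tutorial
from pandas import read_csv
from pandas.plotting import scatter_matrix
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
```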
2.2 Download dataset
The data can be loaded directly from the UCI Machine Learning Repository.
We are using pandas to load the data, and we will also use it to explore the data with descriptive statistics and visualization.
Note — the names of each column are specified when the data is loaded. This will help later during data exploration.
If you have network problems, you can download the iris.csv file into your working directory and load it the same way, changing the URL to the name of the local file.
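A sketch of the loading step, assuming the iris data file hosted in the UCI repository is still available at its usual location:

```python
from pandas import read_csv

# URL of the iris data in the UCI Machine Learning Repository
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
# Specify the column names while loading; this helps later during exploration
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)
```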
3. Summarize the dataset
Now it's time to look at the data.
In this step we will look at the data in several different ways:
3.1 Dataset Sizes
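A quick way to see how many rows (instances) and columns (attributes) the data contains is the DataFrame's shape attribute; a minimal sketch, reloading the data so the snippet stands alone:

```python
from pandas import read_csv

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# shape gives (rows, columns)
print(dataset.shape)
```

For the iris data this prints `(150, 5)`: 150 instances and 5 attributes.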
3.2 Look at the data
# head
print(dataset.head(20))

    sepal-length  sepal-width  petal-length  petal-width        class
0            5.1          3.5           1.4          0.2  Iris-setosa
1            4.9          3.0           1.4          0.2  Iris-setosa
2            4.7          3.2           1.3          0.2  Iris-setosa
3            4.6          3.1           1.5          0.2  Iris-setosa
4            5.0          3.6           1.4          0.2  Iris-setosa
5            5.4          3.9           1.7          0.4  Iris-setosa
6            4.6          3.4           1.4          0.3  Iris-setosa
7            5.0          3.4           1.5          0.2  Iris-setosa
8            4.4          2.9           1.4          0.2  Iris-setosa
9            4.9          3.1           1.5          0.1  Iris-setosa
10           5.4          3.7           1.5          0.2  Iris-setosa
11           4.8          3.4           1.6          0.2  Iris-setosa
12           4.8          3.0           1.4          0.1  Iris-setosa
13           4.3          3.0           1.1          0.1  Iris-setosa
14           5.8          4.0           1.2          0.2  Iris-setosa
15           5.7          4.4           1.5          0.4  Iris-setosa
16           5.4          3.9           1.3          0.4  Iris-setosa
17           5.1          3.5           1.4          0.3  Iris-setosa
18           5.7          3.8           1.7          0.3  Iris-setosa
19           5.1          3.8           1.5          0.3  Iris-setosa
3.3 Statistical summary
This includes the count, mean, minimum and maximum values, as well as some percentiles.
We can see that all numerical values have the same scale (centimeters) and similar ranges between 0 and 8 centimeters.
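The summary comes from pandas' describe method; a sketch, again reloading the data so the snippet is self-contained:

```python
from pandas import read_csv

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Statistical summary of each numeric attribute
print(dataset.describe())
```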
       sepal-length  sepal-width  petal-length  petal-width
count    150.000000   150.000000    150.000000   150.000000
mean       5.843333     3.054000      3.758667     1.198667
std        0.828066     0.433594      1.764420     0.763161
min        4.300000     2.000000      1.000000     0.100000
25%        5.100000     2.800000      1.600000     0.300000
50%        5.800000     3.000000      4.350000     1.300000
75%        6.400000     3.300000      5.100000     1.800000
max        7.900000     4.400000      6.900000     2.500000
3.4 Class distribution
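The number of instances (rows) belonging to each class can be obtained with a groupby; for example:

```python
from pandas import read_csv

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Number of rows per class
print(dataset.groupby('class').size())
```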
class
Iris-setosa        50
Iris-versicolor    50
Iris-virginica     50
4. Data visualization
We will use two types of plots:
4.1 One-dimensional plots
One-dimensional plots are graphs of each individual variable.
Given that the input variables are numeric, we can create box and whisker plots of each.
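A sketch of the box-and-whisker step (the 2x2 layout is a choice made here to fit the four numeric attributes on one figure):

```python
from pandas import read_csv
from matplotlib import pyplot

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# One box-and-whisker plot per numeric input variable
dataset.plot(kind='box', subplots=True, layout=(2, 2), sharex=False, sharey=False)
pyplot.show()
```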
Generate a histogram of each input variable to get an idea of the distribution.
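The histograms can be drawn in one call on the DataFrame; a minimal sketch:

```python
from pandas import read_csv
from matplotlib import pyplot

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Histogram of each numeric input variable
dataset.hist()
pyplot.show()
```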
It looks like two of the input variables have a roughly Gaussian distribution. This is useful to note, as we can use algorithms that exploit this assumption.
4.2 Multidimensional plots
Now we can look at the interactions between the variables.
First, let's look at scatter plots of all pairs of attributes. This can be helpful for spotting structured relationships between input variables.
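pandas ships a scatter_matrix helper that draws all pairwise scatter plots in one figure; a sketch:

```python
from pandas import read_csv
from pandas.plotting import scatter_matrix
from matplotlib import pyplot

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)

# Scatter plot of every pair of numeric attributes
scatter_matrix(dataset)
pyplot.show()
```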
Note the diagonal grouping of some pairs of attributes. This suggests a high correlation and a predictable relationship.
5. Evaluate some algorithms
We will create some models of the data and estimate their accuracy on unseen data.
5.1 Creating a validation dataset
We will use statistical methods to estimate the accuracy of our models on unseen data. We also want a more concrete estimate of the accuracy of the best model, obtained by evaluating it on data it has actually never seen.
Some of the data is held back as test data that the algorithms do not get to see; it provides a second, independent idea of how accurate the best model may really be.
The loaded dataset is split into two parts: 80% is used to train our models, and 20% is held back as a validation dataset.
# Split-out validation dataset
from sklearn.model_selection import train_test_split

array = dataset.values
X = array[:, 0:4]
Y = array[:, 4]
X_train, X_validation, Y_train, Y_validation = train_test_split(X, Y, test_size=0.20, random_state=1)
X_train and Y_train are the training data for preparing the models, while the X_validation and Y_validation sets are held back for use later.
5.2 Test harness
We will use 10-fold cross-validation to estimate accuracy. This splits our dataset into 10 parts: train on 9 and test on 1, repeated for all combinations of train-test splits.
The accuracy metric is used to evaluate the models. It is the number of correctly predicted instances divided by the total number of instances in the dataset, multiplied by 100 to give a percentage (for example, 95% accurate).
5.3 Build models
Evaluating 6 different algorithms:
The selected algorithms are a mixture of linear (LR and LDA) and nonlinear (KNN, CART, NB and SVM) algorithms. A random seed is reset before each run to ensure that each algorithm is evaluated using exactly the same data. This ensures that the results are directly comparable.
Model Building and Evaluation:
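The build-and-evaluate loop can be sketched as follows (the solver and gamma settings are choices made here to keep recent scikit-learn versions happy, and StratifiedKFold with a fixed random_state plays the role of the reset seed, giving every algorithm the same splits):

```python
from pandas import read_csv
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Load the data and split off the validation set as in section 5.1
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)
array = dataset.values
X, Y = array[:, 0:4], array[:, 4]
X_train, X_validation, Y_train, Y_validation = train_test_split(X, Y, test_size=0.20, random_state=1)

# Spot-check six algorithms with 10-fold cross-validation
models = [
    ('LR', LogisticRegression(solver='liblinear')),
    ('LDA', LinearDiscriminantAnalysis()),
    ('KNN', KNeighborsClassifier()),
    ('CART', DecisionTreeClassifier()),
    ('NB', GaussianNB()),
    ('SVM', SVC(gamma='auto')),
]
results, model_names = [], []
for name, model in models:
    # Same fixed seed for every model, so each is evaluated on identical splits
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy')
    results.append(cv_results)
    model_names.append(name)
    print('%s: %f (%f)' % (name, cv_results.mean(), cv_results.std()))
```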
5.4 Choose the best model
We compare the models to each other and select the most accurate. Running the example above gives raw results along the following lines:
LR: 0.966667 (0.040825)
LDA: 0.975000 (0.038188)
KNN: 0.983333 (0.033333)
CART: 0.975000 (0.038188)
NB: 0.975000 (0.053359)
SVM: 0.991667 (0.025000)
Support Vector Machines (SVMs) have the highest accuracy score.
A plot of the model evaluation results is generated, comparing the spread and the mean accuracy of each model. There is a population of accuracy measures for each algorithm, because each algorithm was evaluated 10 times (10-fold cross-validation).
The box and whisker plots are squashed at the top of the range, with many samples achieving 100% accuracy.
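One way to draw the comparison plot, assuming the cross-validation results were collected in a loop like the one in section 5.3 (recomputed here so the snippet stands alone):

```python
from pandas import read_csv
from matplotlib import pyplot
from sklearn.model_selection import train_test_split, StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = read_csv(url, names=names)
array = dataset.values
X_train, X_validation, Y_train, Y_validation = train_test_split(
    array[:, 0:4], array[:, 4], test_size=0.20, random_state=1)

models = [
    ('LR', LogisticRegression(solver='liblinear')),
    ('LDA', LinearDiscriminantAnalysis()),
    ('KNN', KNeighborsClassifier()),
    ('CART', DecisionTreeClassifier()),
    ('NB', GaussianNB()),
    ('SVM', SVC(gamma='auto')),
]
results, model_names = [], []
for name, model in models:
    kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
    results.append(cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy'))
    model_names.append(name)

# Compare spread and mean accuracy: one box-and-whisker per algorithm
pyplot.boxplot(results)
pyplot.xticks(range(1, len(model_names) + 1), model_names)
pyplot.title('Algorithm Comparison')
pyplot.show()
```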
6. Make predictions
The KNN algorithm is very simple and was an accurate model in our tests.
We run the KNN model directly on the validation set and summarize the results as a final accuracy score, a confusion matrix, and a classification report.
# Make predictions on the validation dataset
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
predictions = knn.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
The accuracy is 0.9, or 90%. The confusion matrix gives an indication of the three errors made. Finally, the classification report breaks down each class by precision, recall, f1-score and support, showing excellent results (granted, the validation dataset was small).
0.9
[[ 7  0  0]
 [ 0 11  1]
 [ 0  2  9]]
                 precision    recall  f1-score   support

    Iris-setosa       1.00      1.00      1.00         7
Iris-versicolor       0.85      0.92      0.88        12
 Iris-virginica       0.90      0.82      0.86        11

      micro avg       0.90      0.90      0.90        30
      macro avg       0.92      0.91      0.91        30
   weighted avg       0.90      0.90      0.90        30