
PySpark | Linear regression with an extended feature dataset using Apache MLlib


Ames Housing Data: The Ames Housing dataset was compiled by Dean De Cock for use in data science education as an extended version of the much-cited Boston Housing dataset. The dataset used here has 80 features and 1459 instances.
The dataset is described below:

Only a few columns are shown for demonstration; the dataset contains many more.

Examples:
Input dataset: Ames_housing_dataset

Code:

# SparkSession is now the entry point to Spark
# SparkSession can also be thought of as a gateway to the Spark libraries

import pyspark
from pyspark.sql import SparkSession

# create an instance of the SparkSession class
spark = SparkSession.builder.appName('ames_housing_price_model').getOrCreate()

df_train = spark.read.csv(r'D:\python coding\pyspark_tutorial\Linear regression'
                          r'\housing price multiple features'
                          r'\house-prices-advanced-regression-techniques'
                          r'\train.csv', inferSchema=True, header=True)
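As a quick sanity check (not part of the original listing), the schema and a few columns can be previewed right after loading; the columns selected here are standard Ames Housing fields chosen only for illustration.

# optional check: confirm the load and preview a few columns
df_train.printSchema()
print(df_train.count(), len(df_train.columns))
df_train.select('Id', 'MSZoning', 'LotArea', 'OverallQual', 'SalePrice').show(5)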

Code:

# group the column names by data type so the integer and string features
# can be inspected separately

l_int = []
for item in df_train.dtypes:
    if item[1] == 'int':
        l_int.append(item[0])
print(l_int)

l_str = []
for item in df_train.dtypes:
    if item[1] == 'string':
        l_str.append(item[0])
print(l_str)

Output:

Integer Datatypes: ['Id', 'MSSubClass', 'LotArea', 'OverallQual', 'OverallCond', 'YearBuilt', 'YearRemodAdd', 'BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', '1stFlrSF', '2ndFlrSF', 'LowQualFinSF', 'GrLivArea', 'BsmtFullBath', 'BsmtHalfBath', 'FullBath', 'HalfBath', 'BedroomAbvGr', 'KitchenAbvGr', 'TotRmsAbvGrd', 'Fireplaces', 'GarageCars', 'GarageArea', 'WoodDeckSF', 'OpenPorchSF', 'EnclosedPorch', '3SsnPorch', 'ScreenPorch', 'PoolArea', 'MiscVal', 'MoSold', 'YrSold', 'SalePrice']

String Datatypes: ['MSZoning', 'LotFrontage', 'Street', 'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType', 'MasVnrArea', 'ExterQual', 'ExterCond', 'Foundation', 'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2', 'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual', 'Functional', 'FireplaceQu', 'GarageType', 'GarageYrBlt', 'GarageFinish', 'GarageQual', 'GarageCond', 'PavedDrive', 'PoolQC', 'Fence', 'MiscFeature', 'SaleType', 'SaleCondition']
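The same two lists can also be built more compactly with list comprehensions over df_train.dtypes; this is shown only as an optional alternative to the loops above.

# optional alternative: the same grouping as a pair of list comprehensions
l_int = [name for name, dtype in df_train.dtypes if dtype == 'int']
l_str = [name for name, dtype in df_train.dtypes if dtype == 'string']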

Code:

# for each integer column, count the zero entries
# columns dominated by zeros carry little useful information

from pyspark.sql.functions import col

for i in df_train.columns:
    if i in l_int:
        ct_total = df_train.select(i).count()
        ct_zeros = df_train.filter(col(i) == 0).count()
        per_zeros = (ct_zeros / ct_total) * 100
        print('total count / zeros count / zeros_percent ' + i + ' '
              + str(ct_total) + ' / ' + str(ct_zeros) + ' / ' + str(per_zeros))

Output (percentage of zeros, truncated):

total count / zeros count / zeros_percent OpenPorchSF 1460 / 656 / 44.93150684931507
total count / zeros count / zeros_percent EnclosedPorch 1460 / 1252 / 85.75342465753425
total count / zeros count / zeros_percent 3SsnPorch 1460 / 1436 / 98.35616438356163
...
total count / zeros count / zeros_percent PoolArea 1460 / 1453 / 99.52054794520548
total count / zeros count / zeros_percent MiscVal 1460 / 1408 / 96.43835616438356
total count / zeros count / zeros_percent MoSold 1460 / 0 / 0.0
total count / zeros count / zeros_percent YrSold 1460 / 0 / 0.0
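The drop rule below also mentions NA values, but only the zero check is shown in the listing. An analogous check for the string columns is sketched here; it is not in the original listing and assumes missing values were read in as the literal string 'NA', which is how they appear in train.csv when Spark infers the schema.

# analogous check: percentage of 'NA' entries in each string-typed column
from pyspark.sql.functions import col

ct_total = df_train.count()
for c in l_str:
    ct_na = df_train.filter(col(c) == 'NA').count()
    print(c, ct_na, round(ct_na / ct_total * 100, 2))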

Code:

# the counts above give us an idea of which features are useful
# now drop the columns where zeros or NA make up more than 75% of the values

df_new = df_train.drop(*['BsmtFinSF2', 'LowQualFinSF', 'BsmtHalfBath',
                         'EnclosedPorch', '3SsnPorch', 'ScreenPorch',
                         'PoolArea', 'PoolQC', 'Fence', 'MiscFeature',
                         'MiscVal', 'Alley'])

df_new = df_new.drop(*['Id'])

# now we have cleaner data to work with
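Instead of hard-coding the list, the drop candidates could also be derived from the zero percentages computed above. This is a minimal sketch under that idea, assuming l_int and df_train from the earlier steps; the 75% threshold and the variable names threshold, cols_to_drop and df_new_auto are illustrative, not part of the original tutorial.

# hypothetical alternative: build the drop list from a zero-percentage threshold
from pyspark.sql.functions import col

threshold = 75.0
ct_total = df_train.count()
cols_to_drop = [c for c in l_int
                if df_train.filter(col(c) == 0).count() / ct_total * 100 > threshold]
df_new_auto = df_train.drop(*cols_to_drop)
print(cols_to_drop)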

Code:

# convert the string columns to numeric indexes

from pyspark.ml.feature import StringIndexer
from pyspark.ml import Pipeline

feat_list = ['MSZoning', 'LotFrontage', 'Street', 'LotShape', 'LandContour',
             'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1',
             'Condition2', 'BldgType', 'HouseStyle', 'RoofStyle',
             'RoofMatl', 'Exterior1st', 'Exterior2nd', 'MasVnrType',
             'MasVnrArea', 'ExterQual', 'ExterCond', 'Foundation',
             'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2',
             'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual',
             'Functional', 'FireplaceQu', 'GarageType',
             'GarageYrBlt', 'GarageFinish', 'GarageQual', 'GarageCond',
             'PavedDrive', 'SaleType', 'SaleCondition']

print('indexed list created')

# there are many string features to convert
# with a Pipeline, we can fit a StringIndexer for each of them in one pass

indexers = [StringIndexer(inputCol=column, outputCol=column + "_index").fit(df_new)
            for column in feat_list]
type(indexers)

pipeline = Pipeline(stages=indexers)
df_feat = pipeline.fit(df_new).transform(df_new)
df_feat.columns
# the code above adds a numeric index column for each string feature
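To see what the indexer actually did, one original column can be compared with its index. This quick check is optional and not in the original listing; the choice of MSZoning is only illustrative.

# optional check: original category vs. its numeric index
df_feat.select('MSZoning', 'MSZoning_index').show(5)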

 

from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler

# VectorAssembler combines the given list of columns into a single vector column
# inputCols: the columns to assemble
# returns a DataFrame with the assembled 'features' column

# assemble the numeric columns and the indexed string columns below into features
assembler = VectorAssembler(inputCols=['MSSubClass', 'LotArea', 'OverallQual',
                                       'OverallCond', 'YearBuilt', 'YearRemodAdd',
                                       'BsmtFinSF1', 'BsmtUnfSF', 'TotalBsmtSF',
                                       '1stFlrSF', '2ndFlrSF', 'GrLivArea',
                                       'BsmtFullBath', 'FullBath', 'HalfBath',
                                       'GarageArea', 'MoSold', 'YrSold',
                                       'MSZoning_index', 'LotFrontage_index',
                                       'Street_index', 'LotShape_index',
                                       'LandContour_index', 'Utilities_index',
                                       'LotConfig_index', 'LandSlope_index',
                                       'Neighborhood_index', 'Condition1_index',
                                       'Condition2_index', 'BldgType_index',
                                       'HouseStyle_index', 'RoofStyle_index',
                                       'RoofMatl_index', 'Exterior1st_index',
                                       'Exterior2nd_index', 'MasVnrType_index',
                                       'MasVnrArea_index', 'ExterQual_index',
                                       'ExterCond_index', 'Foundation_index',
                                       'BsmtQual_index', 'BsmtCond_index',
                                       'BsmtExposure_index', 'BsmtFinType1_index',
                                       'BsmtFinType2_index', 'Heating_index',
                                       'HeatingQC_index', 'CentralAir_index',
                                       'Electrical_index', 'KitchenQual_index',
                                       'Functional_index', 'FireplaceQu_index',
                                       'GarageType_index', 'GarageYrBlt_index',
                                       'GarageFinish_index', 'GarageQual_index',
                                       'GarageCond_index', 'PavedDrive_index',
                                       'SaleType_index', 'SaleCondition_index'],
                            outputCol='features')

output = assembler.transform(df_feat)

final_data = output.select('features', 'SalePrice')

# split the data into a training set and a test set for validation
train_data, test_data = final_data.randomSplit([0.7, 0.3])
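If the split needs to be reproducible across runs, randomSplit also accepts a seed; an optional variant of the line above (the seed value is arbitrary):

# optional: fix the seed so the 70/30 split is reproducible
train_data, test_data = final_data.randomSplit([0.7, 0.3], seed=42)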

Code:

train_data.describe().show()
test_data.describe().show()

Code:

from pyspark.ml.regression import LinearRegression

house_lr = LinearRegression(featuresCol='features', labelCol='SalePrice')
trained_house_model = house_lr.fit(train_data)
house_results = trained_house_model.evaluate(train_data)
print('Rsquared Error:', house_results.r2)

# Rsquared Error: 0.8279155904297449
# the model explains about 82% of the variance in the training data
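Besides r2, the summary object returned by evaluate() exposes other regression metrics from Spark's LinearRegressionSummary API; a brief, optional look:

# optional: additional metrics from the same evaluation summary
print('RMSE:', house_results.rootMeanSquaredError)
print('MAE :', house_results.meanAbsoluteError)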

 
# evaluate the model on test_data

test_results = trained_house_model.evaluate(test_data)
print('Rsquared Error:', test_results.r2)

# Rsquared Error: 0.8431420382408793
# a slightly better result: about 84% of the variance explained on the test set

 
# create unlabeled data from test_data by keeping only the feature vectors
# test_data.show()

unlabeled_data = test_data.select('features')
unlabeled_data.show()

Code:

predictions = trained_house_model.transform(unlabeled_data)
predictions.show()
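If the predictions should be scored against the held-out labels rather than just displayed, Spark's RegressionEvaluator can be applied to the labeled test set. This is a minimal sketch; the RMSE metric and variable names are illustrative choices, not part of the original tutorial.

# optional: score predictions on the labeled test set with RegressionEvaluator
from pyspark.ml.evaluation import RegressionEvaluator

test_predictions = trained_house_model.transform(test_data)
evaluator = RegressionEvaluator(labelCol='SalePrice', predictionCol='prediction',
                                metricName='rmse')
print('test RMSE:', evaluator.evaluate(test_predictions))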
