Face detection using Python and OpenCV with webcam



Below are the requirements for it:

  1. Python 2.7
  2. OpenCV
  3. Numpy
  4. Haar Cascade Frontal Face Classifiers
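Before going further, you can check that these libraries are importable. A minimal sketch (the cv2.face check is an assumption about how your OpenCV build was installed; the LBPH recognizer used below lives in the opencv-contrib modules):

import cv2
import numpy

print('OpenCV version: ' + cv2.__version__)
print('NumPy version: ' + numpy.__version__)

# The LBPH recognizer used later lives in cv2.face, which is only
# present when OpenCV was installed with the contrib modules
if not hasattr(cv2, 'face'):
    print('cv2.face not found - install the opencv-contrib package')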

Approach / algorithms used:

  1. This project uses a Haar cascade classifier to detect faces and the LBPH (Local Binary Patterns Histograms) algorithm to recognize them. LBPH labels the pixels of an image by thresholding the neighborhood of each pixel and treating the result as a binary number.
  2. LBPH uses 4 parameters (see the sketch after this list):
    (i) Radius: the radius used to build the circular local binary pattern, measured around the center pixel.
    (ii) Neighbors: the number of sample points used to build the circular local binary pattern.
    (iii) Grid X: the number of cells in the horizontal direction.
    (iv) Grid Y: the number of cells in the vertical direction.
  3. The model is trained on the labelled face images; when a test image is given to it, it predicts the label that matches best.
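The four parameters above correspond to the arguments of OpenCV's LBPH recognizer constructor. A minimal sketch showing them spelled out (the values shown are OpenCV's defaults; cv2.face again assumes a contrib-enabled install):

import cv2

# LBPH recognizer with its four main parameters made explicit
# (these are OpenCV's default values)
model = cv2.face.LBPHFaceRecognizer_create(
    radius = 1,      # radius of the circular local binary pattern
    neighbors = 8,   # sample points around the center pixel
    grid_x = 8,      # cells in the horizontal direction
    grid_y = 8)      # cells in the vertical direction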

How to use:

  1. Create a directory on your computer and name it (say, project).
  2. Create two Python files named create_data.py and face_recognize.py, and copy the first and second code listings below into them, respectively.
  3. Copy haarcascade_frontalface_default.xml into your project directory; you can get it from your OpenCV installation (the data/haarcascades folder) or from the OpenCV GitHub repository.
  4. You are now ready to run the following code.
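Before running the two programs, it can help to confirm that the cascade file and the webcam are actually reachable. A small sketch, assuming the XML file sits in the project directory and the webcam is device 0:

import cv2, os

haar_file = 'haarcascade_frontalface_default.xml'

# The classifier loads silently even for a missing file,
# so check both the path and the loaded object
cascade = cv2.CascadeClassifier(haar_file)
if not os.path.isfile(haar_file) or cascade.empty():
    print('Could not load %s' % haar_file)

# Try to grab one frame from the default webcam (device 0)
webcam = cv2.VideoCapture(0)
ok, frame = webcam.read()
print('Webcam frame captured: %s' % ok)
webcam.release()

If both checks pass, run the first program, create_data.py, shown below.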

# Create the database
# Captures webcam images and stores them in the datasets
# folder, under a sub-folder named after sub_data

import cv2, sys, numpy, os

haar_file = 'haarcascade_frontalface_default.xml'

# All the face data will be stored in this folder
datasets = 'datasets'

# This is the sub-folder of datasets; I used my name
# for my faces, you can change the label here
sub_data = 'vivek'

path = os.path.join(datasets, sub_data)
if not os.path.isdir(path):
    os.makedirs(path)

# Define the size of the stored face images
(width, height) = (130, 100)

# '0' is used for my webcam; if you have another
# camera attached, use '1' instead
face_cascade = cv2.CascadeClassifier(haar_file)
webcam = cv2.VideoCapture(0)

# The program loops until it has captured 30 face images
count = 1
while count < 30:
    (_, im) = webcam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 4)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
        face = gray[y:y + h, x:x + w]
        face_resize = cv2.resize(face, (width, height))
        cv2.imwrite('%s/%s.png' % (path, count), face_resize)
        count += 1

    cv2.imshow('OpenCV', im)
    key = cv2.waitKey(10)
    if key == 27:
        break
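To collect faces for more than one person, change sub_data to that person's name and run the script again; each person ends up in their own sub-folder of datasets. A small, optional variation (the command-line handling here is my own addition, not part of the original script) that takes the label from the command line instead of editing the file:

import sys

# Optional: take the dataset label from the command line, e.g.
#   python create_data.py alice
# and fall back to 'vivek' when no argument is given
sub_data = sys.argv[1] if len(sys.argv) > 1 else 'vivek'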

The following code should be run after the dataset has been created; it trains the model on those stored faces and then recognizes them in the webcam stream:


# Face recognition: identifies faces from the webcam stream

import cv2, sys, numpy, os

size = 4
haar_file = 'haarcascade_frontalface_default.xml'
datasets = 'datasets'

# Part 1: Train the LBPH recognizer on the stored face images
print('Recognizing faces. Please make sure there is sufficient light...')

# Create a list of images and a list of the matching labels/names
(images, labels, names, id) = ([], [], {}, 0)
for (subdirs, dirs, files) in os.walk(datasets):
    for subdir in dirs:
        names[id] = subdir
        subjectpath = os.path.join(datasets, subdir)
        for filename in os.listdir(subjectpath):
            path = subjectpath + '/' + filename
            label = id
            images.append(cv2.imread(path, 0))
            labels.append(int(label))
        id += 1
(width, height) = (130, 100)

# Create Numpy arrays from the two lists above
(images, labels) = [numpy.array(lis) for lis in [images, labels]]

# OpenCV trains a model from the images
# NOTE: cv2.face requires the opencv-contrib modules;
# in OpenCV 2.x the call was cv2.createLBPHFaceRecognizer()
model = cv2.face.LBPHFaceRecognizer_create()
model.train(images, labels)

# Part 2: Use the trained recognizer on the camera stream
face_cascade = cv2.CascadeClassifier(haar_file)
webcam = cv2.VideoCapture(0)

while True:
    (_, im) = webcam.read()
    gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(im, (x, y), (x + w, y + h), (255, 0, 0), 2)
        face = gray[y:y + h, x:x + w]
        face_resize = cv2.resize(face, (width, height))

        # Try to recognize the face
        prediction = model.predict(face_resize)
        cv2.rectangle(im, (x, y), (x + w, y + h), (0, 255, 0), 3)

        if prediction[1] < 500:
            cv2.putText(im, '%s - %.0f' %
                        (names[prediction[0]], prediction[1]), (x - 10, y - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0))
        else:
            cv2.putText(im, 'not recognized',
                        (x - 10, y - 10), cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0))

    cv2.imshow('OpenCV', im)
    key = cv2.waitKey(10)
    if key == 27:
        break
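model.predict() returns a pair: the numeric label of the closest match and a confidence value that is really a distance, so lower means a better match; that is why the code above treats values below 500 as recognized. If retraining on every start becomes slow, the trained recognizer can also be persisted. A sketch, assuming an OpenCV build (3.3+) where FaceRecognizer exposes write() and read():

import cv2

# After model.train(images, labels) in the script above, the trained
# LBPH state can be saved to a file (the file name is arbitrary):
# model.write('trained_model.yml')

# On a later run, recreate the recognizer and load the saved state
# instead of retraining from the image folders
model = cv2.face.LBPHFaceRecognizer_create()
model.read('trained_model.yml')

# predict() then behaves exactly as in the script above: it returns
# (label, confidence), with lower confidence meaning a closer match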

Note: The above programs will not work in an online IDE, since they need access to a webcam.

Screenshots of the Programs

Your output may look slightly different, because I have integrated the above program into a larger framework.

Running the second program gives results similar to the image below:

[Screenshot: face recognition output]

Storing datasets:

[Screenshot: the stored datasets folder]