Face Recognition Alert System and AWS automation

Hello everyone,
In this blog we are going to create an alert system and an AWS automation program.
Alert System: In the alert system we create a program that detects a face in the webcam image; as soon as a face is detected it captures the image, sends it to the admin's email address, and sends a WhatsApp message to the admin.
AWS automation: In the AWS automation part we create a program that recognizes the user's face and, based on the accuracy of the face recognition, runs Terraform code that creates an instance in AWS along with a 5 GB volume and attaches that volume to the instance.
Let's start with the code:
Prerequisites:
  • Python
  • OpenCV

Alert System
  • Libraries included are:
  • import cv2
    from PIL import Image 
    from email.message import EmailMessage
    import smtplib, ssl
    from email.mime.text import MIMEText
    from email.mime.base import MIMEBase
    from email.mime.multipart import MIMEMultipart
    from email import encoders
    import imghdr 
    import os 
    import pywhatkit # for whatsapp
    Here the pywhatkit library is used for WhatsApp; the program uses it to send the alert messages.
    The imghdr library is used to find the type of an image.
    PIL stands for Python Imaging Library and is used for image processing.
    The email and smtplib libraries provide the mail service and help send the mail.
  • load the model
  • model=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    Here the Haar cascade model is used to detect the face in the image (the sketch below shows an alternative way to locate the XML file).
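    If OpenCV was installed with pip (the opencv-python package), the Haar cascade XML files ship with the library, so the classifier can also be loaded without downloading the file manually. A minimal sketch of that alternative:
    import cv2

    # cv2.data.haarcascades is the directory holding the bundled Haar cascade files
    model = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')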
  • Now let's discuss the code for image processing
  • cap = cv2.VideoCapture(0)   # open the webcam once, before the loop
    while True:
        ret, photo = cap.read()
        faces = model.detectMultiScale(photo)
        if len(faces) == 0:
            pass                 # no face in this frame, keep looking
        else:
            # coordinates of the first detected face
            x1 = faces[0][0]
            y1 = faces[0][1]
            x2 = x1 + faces[0][2]
            y2 = y1 + faces[0][3]

            # draw a green rectangle around the face and show the frame
            aphoto = cv2.rectangle(photo, (x1,y1), (x2,y2), [0,255,0], 5)
            cv2.imshow("Image Capturing", aphoto)
            if cv2.waitKey(5)==13: #13 is the code for Enter Key
                break
    cv2.destroyAllWindows()
    cap.release()
    When the code above is run, the webcam opens and it starts detecting faces.
    # OpenCV frames are BGR, so convert to RGB before handing the array to PIL
    image = Image.fromarray(cv2.cvtColor(aphoto, cv2.COLOR_BGR2RGB))
    image.save('Alert.png')
    image_show = Image.open(r"Alert.png")
    image_crop = image_show.crop((x1,y1,x2,y2))
    image_crop.show()
    image_crop.save('Alert_face_detected.png')
    print("Image Captured")
    In the code above the captured frame is saved, and the face region is cropped out and saved as a separate image.
  • Now let's code for Email Alert
  • email_id = os.environ['my_email']
    email_receiver = os.environ['receiver_email']
    password = os.environ['my_password']

    # Sender, receiver and body of the email
    sender = email_id
    receivers = email_receiver
    body_of_email = 'Alert: intruder has been detected'

    # Build the message with sender and receiver addresses
    msg = MIMEMultipart()
    msg['Subject'] = 'Alert Intruder detected'
    msg['From'] = sender
    msg['To'] = receivers
    msg.attach(MIMEText(body_of_email))  # attach the body text

    # Attach the captured image
    part = MIMEBase('application', 'octet-stream')
    part.set_payload(open('Alert.png', 'rb').read())
    encoders.encode_base64(part)
    part.add_header('Content-Disposition', 'attachment; filename="Alert.png"')
    msg.attach(part)

    # Connect to the Gmail SMTP server and send the mail
    s = smtplib.SMTP_SSL(host='smtp.gmail.com', port=465)
    s.login(user=sender, password=password)

    s.sendmail(sender, receivers, msg.as_string())
    s.quit()
    In the code above the saved image is mailed to the admin over SMTP, the protocol used by Gmail. The credentials are read from environment variables; with Gmail you will typically need an app password rather than the normal account password.
  • Now let's code for WhatsApp alert
  • number = os.environ['phone_number']
    
    import pywhatkit
    pywhatkit.sendwhatmsg(number, 'Alert Intruder Detected ',2,29)
    In the code above, sendwhatmsg is a function from the pywhatkit library that is used to send the WhatsApp message.
    Here 'Alert Intruder Detected' is the message and 2, 29 is the time to send it (hour and minute in 24-hour format, i.e. 02:29); the sketch after this section shows how to derive that time from the current clock instead of hard-coding it.
    After the code runs, WhatsApp Web opens in the browser and the message is sent.
    So that's how, as soon as a face is detected, the code captures the image and sends it to the admin's email address along with a WhatsApp alert message.
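    The send time is easier to manage when it is computed from the current clock. A minimal sketch, assuming the phone_number environment variable holds the admin's number in international format as in the code above:
    import os
    import datetime
    import pywhatkit

    # Admin's phone number in international format, e.g. '+91xxxxxxxxxx'
    number = os.environ['phone_number']

    # Schedule the WhatsApp alert for roughly two minutes from now
    # (pywhatkit needs a little lead time to open WhatsApp Web).
    send_at = datetime.datetime.now() + datetime.timedelta(minutes=2)
    pywhatkit.sendwhatmsg(number, 'Alert Intruder Detected', send_at.hour, send_at.minute)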
    AWS automation using Terraform with the help of Face Recognition
    To create a Face Recognition program, first we need to create a dataset and then use that dataset to train the model that will do the face recognition.
  • Create Dataset
  • To create the dataset use the code below; it captures the user's face 100 times and stores the images in a directory that will later be used to train the model.
    import cv2
    import numpy as np
    
    # Load HAAR face classifier
    face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    
    # Load functions
    def face_extractor(img):
        # Function detects faces and returns the cropped face
        # If no face detected, it returns the input image
    
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
        faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    
        if len(faces) == 0:   # detectMultiScale returns an empty tuple when no face is found
            return None
    
        # Crop all faces found
        for (x,y,w,h) in faces:
            cropped_face = img[y:y+h, x:x+w]
    
        return cropped_face
    
    # Initialize Webcam
    cap = cv2.VideoCapture(0)
    count = 0
    # Collect 100 samples of your face from webcam input
    while True:
    
        ret, frame = cap.read()
        face = face_extractor(frame)   # run face detection once per frame
        if face is not None:
            count += 1
            face = cv2.resize(face, (200, 200))
            face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    
            # Save file in specified directory with unique name
            #path
            file_name_path = 'Path_of_dir/' + str(count) + '.jpg'
            cv2.imwrite(file_name_path, face)
    
            # Put count on images and display live count
            cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
            cv2.imshow('Face Cropper', face)
    
        else:
            print("Face not found")
            pass
    
        if cv2.waitKey(1) == 13 or count == 100: #13 is the Enter Key
            break
    
    cap.release()
    
    
    cv2.destroyAllWindows()
    Here you can see count == 100, which means the image will be captured 100 times; in the code above each frame is cropped so that only the face region is saved.
    In this way 100 face images are captured and stored in the directory.
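    A small precaution worth adding: make sure the dataset directory exists before the capture loop runs, because cv2.imwrite fails silently when the target directory is missing. For example:
    import os

    # 'Path_of_dir' is the same placeholder directory used in the capture code above
    os.makedirs('Path_of_dir', exist_ok=True)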
  • Train the Model
  • To train the model use the code below, providing the path of the image dataset directory created in the previous step.
    import cv2
    import numpy as np
    from os import listdir
    from os.path import isfile, join
    
    # Get the training data we previously made
    data_path_1 = 'path_of_image_dataset/'
    onlyfiles_1 = [f for f in listdir(data_path_1) if isfile(join(data_path_1, f))]
    
    
    # Create arrays for the training data and labels
    Training_Data_1, Labels_1 = [], []
    
    # Create a numpy array for training dataset 1
    for i, files in enumerate(onlyfiles_1):
        image_path = data_path_1 + onlyfiles_1[i]
        images = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        Training_Data_1.append(np.asarray(images, dtype=np.uint8))
        Labels_1.append(i)   
    
    
    Labels_1 = np.asarray(Labels_1, dtype=np.int32)
    
    
    Nitesh_model = cv2.face.LBPHFaceRecognizer_create()  # requires the opencv-contrib-python package
    Nitesh_model.train(np.asarray(Training_Data_1), np.asarray(Labels_1))
    print("Model trained successfully")
    Now our model is trained.
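    Optionally, the trained model can be saved to disk so the recognition program can reload it instead of relying on Nitesh_model still being in memory. A minimal sketch; the file name lbph_face_model.yml is just an example:
    # Save the trained LBPH model
    Nitesh_model.write('lbph_face_model.yml')

    # Later, e.g. at the top of the recognition program:
    # Nitesh_model = cv2.face.LBPHFaceRecognizer_create()
    # Nitesh_model.read('lbph_face_model.yml')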
  • Create Face Recognition program
  • To create the Face Recognition program use the code below. It recognizes the face and, based on the accuracy (confidence) of the recognition, executes the Terraform code, which creates an instance in AWS along with a 5 GB EBS volume and attaches the volume to the instance.
    import cv2
    import numpy as np
    import os
    
    
    face_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    
    def face_detector(img, size=0.5):
    
        # Convert image to grayscale
        gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
        faces = face_classifier.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            return img, []
    
    
        for (x,y,w,h) in faces:
            cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,255),2)
            roi = img[y:y+h, x:x+w]
            roi = cv2.resize(roi, (200, 200))
        return img, roi
    
    
    # Open Webcam
    cap = cv2.VideoCapture(0)
    
    while True:
    
        ret, frame = cap.read()
    
        image, face = face_detector(frame)
    
        try:
            face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)
    
            # Pass the face to the prediction model. Nitesh_model is the LBPH model
            # trained in the previous step; "results" is a tuple of (label, confidence),
            # where a lower confidence value means a closer match.
            results = Nitesh_model.predict(face)
    
            if results[1] < 500:
                confidence = int( 100 * (1 - (results[1])/400) )
                display_string = str(confidence) + '% Confident it is User'
    
            cv2.putText(image, display_string, (100, 120), cv2.FONT_HERSHEY_COMPLEX, 1, (255,120,150), 2)
    
            if confidence > 70:
                cv2.putText(image, "Hello Nitesh", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)
                cv2.imshow('Face Recognition', image)
                if cv2.waitKey(1)==13:
                    cap.release()
                    cv2.destroyAllWindows()
                    # Run Terraform (os is imported above); this applies the configuration
                    # that creates the EC2 instance, the 5 GB EBS volume and the attachment.
                    os.system("terraform init")
                    os.system("terraform apply --auto-approve")
                    break  # stop the recognition loop once the infrastructure has been created
    
    
            else:
                cv2.putText(image, "Unrecognised Face", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
                cv2.imshow('Face Recognition', image )
    
        except:
            cv2.putText(image, "Face Not Found", (220, 120) , cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
            cv2.putText(image, "Searching for Face....", (250, 450), cv2.FONT_HERSHEY_COMPLEX, 1, (0,0,255), 2)
            cv2.imshow('Face Recognition', image )
            pass
    
        if cv2.waitKey(1) == 13: #13 is the Enter Key
            break
    
    cap.release()
    cv2.destroyAllWindows()
    In the demo the confidence shown is 92%, which is above the 70% threshold set in the program, so the Terraform code is executed.
    To watch the Demo 👇
