Face Lock in your application using Python

In this post, we will discuss how to write a Python program that puts a face lock on an application. I don't think I need to explain what a face lock is: we will use our chat application for this project, and essentially we will password-protect it, with the password being our face. Let's start by installing the required packages.

Package installation

pip install face-recognition
pip install opencv-python
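
This extra step is not part of the original setup, but if the face-recognition install fails while compiling dlib (the library it is built on), you may need CMake and a C++ compiler on your machine. On most systems, installing these two packages beforehand is enough:

pip install cmake
pip install dlib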

Code Explanation

The chat application part is already explained in this tutorial, so I won't spend time on it here; I will move straight to the face lock part.

import face_recognition
import cv2
import numpy as np

So, we have installed all the packages required for this project. OpenCV (imported as cv2) will be used to capture input from the webcam, NumPy provides the array datatype that the frames and face encodings are stored in, and face_recognition is the hero of this project: it will be used to verify whether the face coming from the webcam is the same as the face we provided beforehand.
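
Before wiring everything together, a quick sanity check can save some head-scratching: make sure the webcam actually opens and the reference photo contains a detectable face. This is an optional minimal sketch, not part of the main program, assuming the reference photo is the same 1.jpg used below:

import cv2
import face_recognition

video = cv2.VideoCapture(0)
if not video.isOpened():
    raise RuntimeError("Could not open the webcam")
video.release()

reference = face_recognition.load_image_file("1.jpg")
if len(face_recognition.face_encodings(reference)) == 0:
    raise RuntimeError("No face found in 1.jpg - use a clearer, front-facing photo")
print("Webcam and reference photo look fine")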

video = cv2.VideoCapture(0)  # open the default webcam

face = face_recognition.load_image_file("1.jpg")  # reference photo that acts as the "password"
faceencoding = face_recognition.face_encodings(face)[0]  # 128-dimension encoding of that face

face_encodings_list = [faceencoding]  # all known/allowed faces (just one for now)

face_encodings = []  # encodings found in the current frame
s = True  # check flag used inside the loop
face_coordinates = []  # face locations found in the current frame

Firstly, we take input from the webcam using the VideoCapture method of OpenCV. For this project we are using the laptop's webcam, but you can take input from other sources as well (mobile camera, wireless camera module, etc.). After this, we load the image that we want to set as the lock using face_recognition's load_image_file() function and store it in the face variable, then store the encoding of that image in the faceencoding variable. Next we create a list of all the known face encodings; for now it holds only one face, but if I decide to upgrade this project later and add a multiple-profile option, I can simply add more encodings to this list and we will be good to go (see the sketch just below). Finally, we create two empty lists for the face coordinates and face encodings found in each frame, plus s, a check variable used inside the loop.
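
As mentioned above, supporting multiple profiles later would only mean loading more reference photos into face_encodings_list. A rough sketch of that idea, where 2.jpg and 3.jpg are hypothetical extra photos:

known_faces = ["1.jpg", "2.jpg", "3.jpg"]  # 2.jpg and 3.jpg are hypothetical extra profiles
face_encodings_list = []
for path in known_faces:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was detected
        face_encodings_list.append(encodings[0])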

while True:
    _,frame = video.read()

    resized_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    resized_frame_rgb = resized_frame[:, :, ::-1]

We create an infinite while loop that runs as long as the webcam is sending frames; it is broken out of once facial recognition succeeds or the exit key is pressed. First we read a frame from the webcam using the .read() method, then we resize the frame to a quarter of its original size (fx=0.25, fy=0.25) so recognition runs faster, and finally we reverse the channel order to convert the frame from BGR colour (which OpenCV uses) to RGB colour (which face_recognition expects).
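
One note on the channel flip: some combinations of face_recognition and dlib complain about the non-contiguous array produced by the [:, :, ::-1] slice. If you hit that, an equivalent conversion with cv2.cvtColor (a drop-in replacement for the two lines inside the loop) avoids it:

    resized_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    # cvtColor does the same BGR -> RGB conversion and returns a contiguous array
    resized_frame_rgb = cv2.cvtColor(resized_frame, cv2.COLOR_BGR2RGB)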

    if s:
        face_coordinates = face_recognition.face_locations(resized_frame_rgb)
        face_encodings = face_recognition.face_encodings(resized_frame_rgb, face_coordinates)
        for faces in face_encodings:
            matches = face_recognition.compare_faces(face_encodings_list, faces)
            if matches[0] == True:
                video.release()
                cv2.destroyAllWindows()
                main_program()

The initial if condition checks the s flag (in this version it stays True, so every frame is processed). We then find all the faces and face encodings in the current frame of video and store them in the face_coordinates and face_encodings variables. Next we loop over face_encodings (this is helpful when there are multiple faces in the frame) and compare our two images: the first being the unlock image and the second the face coming from the webcam. If they are the same, the matches variable holds True at index 0; we check for that with an if condition, and if it is True we release the webcam, destroy all windows, and call the function that holds our chat application. You can read the explanation of the chat application in this blog or watch the tutorial from this link.
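
compare_faces simply thresholds a distance between encodings (0.6 by default). If you want to make the lock stricter, or see how close a match was, face_recognition.face_distance exposes that raw number. A sketch of the same loop with an explicit tolerance (the 0.5 value is just an example, not from the original code):

        for faces in face_encodings:
            distances = face_recognition.face_distance(face_encodings_list, faces)
            print("distance to stored face:", distances[0])  # lower = closer match
            matches = face_recognition.compare_faces(face_encodings_list, faces, tolerance=0.5)
            if matches[0]:
                video.release()
                cv2.destroyAllWindows()
                main_program()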

    cv2.imshow('Face Scan', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()

After this, the only part left is to display the webcam stream while no matching face is in the frame, and to define an exit condition, which here is a press of the q key. Once the loop terminates, we release the webcam stream and destroy all windows. If you want, you can watch the tutorial from this link.
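
If you would like visual feedback during the scan, the face locations already computed can be drawn on the preview window. A small optional addition, placed just before the existing cv2.imshow('Face Scan', frame) line; note that face_coordinates comes from the quarter-size frame, so each value is scaled back up by 4:

    for (top, right, bottom, left) in face_coordinates:
        # face_locations returns (top, right, bottom, left) on the 0.25-scale frame
        cv2.rectangle(frame, (left * 4, top * 4), (right * 4, bottom * 4), (0, 255, 0), 2)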

Complete Code

import socket
import time
import threading
from tkinter import *
import os


def main_program():
	root=Tk()
	root.geometry("300x500")
	root.config(bg="white")

	def func():
		t=threading.Thread(target=recv)
		t.start()


	def recv():
		listensocket=socket.socket()
		port=3050
		maxconnection=99
		ip=socket.gethostname()
		print(ip)

		listensocket.bind(('',port))
		listensocket.listen(maxconnection)
		(clientsocket,address)=listensocket.accept()
		
		while True:
			sendermessage=clientsocket.recv(1024).decode()
			if not sendermessage=="":
				time.sleep(5)
				lstbx.insert(0,"Client : "+sendermessage)


	s=0

	def sendmsg():
		nonlocal s  # s is the connection variable defined in main_program above, not the module-level flag
		if s==0:
			s=socket.socket()
			hostname=''
			port=4050
			s.connect((hostname,port))
			msg=messagebox.get()
			lstbx.insert(0,"You : "+msg)
			s.send(msg.encode())
		else:
			msg=messagebox.get()
			lstbx.insert(0,"You : "+msg)
			s.send(msg.encode())


	def threadsendmsg():
		th=threading.Thread(target=sendmsg)
		th.start()




	startchatimage=PhotoImage(file='start.png')

	buttons=Button(root,image=startchatimage,command=func,borderwidth=0)
	buttons.place(x=90,y=10)

	message=StringVar()
	messagebox=Entry(root,textvariable=message,font=('calibre',10,'normal'),border=2,width=32)
	messagebox.place(x=10,y=444)

	sendmessageimg=PhotoImage(file='send.png')

	sendmessagebutton=Button(root,image=sendmessageimg,command=threadsendmsg,borderwidth=0)
	sendmessagebutton.place(x=260,y=440)

	lstbx=Listbox(root,height=20,width=43)
	lstbx.place(x=15,y=80)

	user_name = Label(root,text =" Number" ,width=10)

	root.mainloop()
	os._exit(1)





import face_recognition
import cv2
import numpy as np

video = cv2.VideoCapture(0)

face = face_recognition.load_image_file("1.jpg")
faceencoding = face_recognition.face_encodings(face)[0]

face_encodings_list = [
    faceencoding]

face_encodings = []
s = True
face_coordinates=[]


while True:
    _,frame = video.read()

    resized_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)

    resized_frame_rgb = resized_frame[:, :, ::-1]


    if s:
        face_coordinates = face_recognition.face_locations(resized_frame_rgb)
        face_encodings = face_recognition.face_encodings(resized_frame_rgb, face_coordinates)

        for faces in face_encodings:
            matches = face_recognition.compare_faces(face_encodings_list, faces)
            if matches[0] == True:
                video.release()
                cv2.destroyAllWindows()
                main_program()
    cv2.imshow('Face Scan', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()

About the author

Harshit

Hey, I'm Harshit Roy, a programmer with an obsession for learning new things. This blog is dedicated to helping people learn programming in a fun way by building projects.
