Deep Learning Project – Face Recognition with Python & OpenCV

Face Recognition with Python – Identify and recognize a person in a live, real-time video stream.

In this deep learning project, we will learn how to recognize human faces in live video with Python. We will build this project using dlib’s facial recognition network. Dlib is a general-purpose software library; using the dlib toolkit, we can build real-world machine learning applications.

In this project, we will first understand how a face recognizer works, and then build face recognition with Python.


Face Recognition with Python, OpenCV & Deep Learning

About dlib’s Face Recognition:

Python provides the face_recognition API, which is built on dlib’s face recognition algorithms. This face_recognition API allows us to implement face detection, real-time face tracking, and face recognition applications.

Project Prerequisites:

You need to install the dlib library and the face_recognition API from PyPI (building dlib requires CMake and a C++ compiler):

pip3 install dlib 
pip3 install face_recognition

Download the Source Code:

Face Recognition Project

Steps to implement Face Recognition with Python:

We will build this Python project in two parts, with a separate Python file for each:

  • First, we will take images of the person as input and create face embeddings from them.
  • Then, we will recognize that particular person in the camera frame.



First, create a file in your working directory. In this file, we will create face embeddings of a particular human face using the face_recognition.face_encodings method. Each face embedding is a 128-dimensional vector; in this vector space, vectors computed from different images of the same person lie close to each other. After creating the face embeddings, we will store them in a pickle file.
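To make “close to each other” concrete: an embedding is just a 128-dimensional NumPy vector, and face_recognition treats two faces as a match when the Euclidean distance between their embeddings is below a tolerance (0.6 by default). A quick sketch with made-up vectors:

```python
import numpy as np

# Made-up 128-dimensional "embeddings": two nearly identical vectors
# (same person) and one unrelated vector (a different person).
rng = np.random.default_rng(0)
person_a = rng.normal(size=128)
person_a_again = person_a + rng.normal(scale=0.01, size=128)
person_b = rng.normal(size=128)

def face_distance(e1, e2):
    # The same metric face_recognition.face_distance uses: Euclidean norm.
    return np.linalg.norm(e1 - e2)

TOLERANCE = 0.6  # face_recognition's default match threshold
print(face_distance(person_a, person_a_again) <= TOLERANCE)  # True: same person
print(face_distance(person_a, person_b) <= TOLERANCE)        # False: different person
```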

Paste the below code in this file.

  • Import necessary libraries:
import sys
import cv2 
import face_recognition
import pickle
  • To identify the person from the pickle file later, take their name and a unique id as input:
name = input("Enter name: ")
ref_id = input("Enter id: ")
  • Create a pickle file and dictionary to store face encodings:
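A minimal sketch of this step, assuming the pickle files are named ref_name.pkl and ref_embed.pkl; these names, and the example name/ref_id values standing in for the input() calls above, are illustrative:

```python
import pickle

# Example values standing in for the input() results above (hypothetical).
name = "example person"
ref_id = "001"

# Load the existing name dictionary if present, else start a new one,
# and record this person's name under their ref_id.
try:
    with open("ref_name.pkl", "rb") as f:
        ref_dictt = pickle.load(f)
except FileNotFoundError:
    ref_dictt = {}
ref_dictt[ref_id] = name
with open("ref_name.pkl", "wb") as f:
    pickle.dump(ref_dictt, f)

# Load the existing embedding dictionary if present, else start a new one.
try:
    with open("ref_embed.pkl", "rb") as f:
        embed_dictt = pickle.load(f)
except FileNotFoundError:
    embed_dictt = {}
```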




  • Open webcam and 5 photos of a person as input and create its embeddings:


Here, we will store the embeddings of a particular person in the embed_dictt dictionary, which we created in the previous step, using that person’s ref_id as the key.

To capture an image, press ‘s’; do this five times. If you want to stop the camera, press ‘q’:

for i in range(5):
    webcam = cv2.VideoCapture(0)
    while True:
        check, frame = webcam.read()
        cv2.imshow("Capturing", frame)
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]
        key = cv2.waitKey(1)

        if key == ord('s'):
            face_locations = face_recognition.face_locations(rgb_small_frame)
            if face_locations != []:
                face_encoding = face_recognition.face_encodings(frame)[0]
                if ref_id in embed_dictt:
                    embed_dictt[ref_id] += [face_encoding]
                else:
                    embed_dictt[ref_id] = [face_encoding]
                webcam.release()
                cv2.destroyAllWindows()
                break
        elif key == ord('q'):
            print("Turning off camera.")
            webcam.release()
            print("Camera off.")
            cv2.destroyAllWindows()
            print("Program ended.")
            sys.exit()
  • Update the pickle file with the face embedding.

Here we store embed_dictt in a pickle file so that, to recognize the person in the future, we can load their embeddings directly from this file:
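A sketch of this final save, assuming the file name ref_embed.pkl; the stand-in dictionary below only makes the snippet run on its own (in the real script, embed_dictt comes from the capture loop above):

```python
import pickle

# Stand-in for the dictionary filled by the capture loop (hypothetical data).
embed_dictt = {"001": []}

# Persist the embeddings so the recognizer script can load them later.
with open("ref_embed.pkl", "wb") as f:
    pickle.dump(embed_dictt, f)
```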


Now it’s time to execute the first part of the Python project.

Run the Python file and capture five images, providing the person’s name and ref_id:




Here we will again create the person’s embeddings from the camera frame. Then, we will match these new embeddings against the stored embeddings from the pickle file. New embeddings of the same person will be close to the stored embeddings in the vector space, and hence we will be able to recognize the person.

Now, create a new Python file and paste the below code:

  • Import the libraries:
import face_recognition
import cv2
import numpy as np
import glob
import pickle
  • Load the stored pickle files:
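A sketch of the loading step, assuming the first script saved ref_name.pkl and ref_embed.pkl as above; the seeding lines at the top are only there so this snippet runs standalone:

```python
import pickle

# Seed tiny stand-in files so this sketch runs on its own; in the real
# project the first script writes these (contents here are hypothetical).
with open("ref_name.pkl", "wb") as f:
    pickle.dump({"001": "example person"}, f)
with open("ref_embed.pkl", "wb") as f:
    pickle.dump({"001": []}, f)

# Load the stored name and embedding dictionaries.
with open("ref_name.pkl", "rb") as f:
    ref_dictt = pickle.load(f)
with open("ref_embed.pkl", "rb") as f:
    embed_dictt = pickle.load(f)
```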

  • Create two lists, one to store the embeddings and the other for the corresponding ref_ids:
known_face_encodings = []
known_face_names = []

for ref_id, embed_list in embed_dictt.items():
    for my_embed in embed_list:
        known_face_encodings += [my_embed]
        known_face_names += [ref_id]
  • Start the webcam to recognize the person:
video_capture = cv2.VideoCapture(0)

face_locations = []
face_encodings = []
face_names = []
process_this_frame = True

while True:
    ret, frame = video_capture.read()

    small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
    rgb_small_frame = small_frame[:, :, ::-1]

    if process_this_frame:

        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)

        face_names = []
        for face_encoding in face_encodings:

            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = np.argmin(face_distances)
            if matches[best_match_index]:
                name = known_face_names[best_match_index]

    process_this_frame = not process_this_frame

    for (top_s, right, bottom, left), name in zip(face_locations, face_names):
        top_s *= 4
        right *= 4
        bottom *= 4
        left *= 4

        cv2.rectangle(frame, (left, top_s), (right, bottom), (0, 0, 255), 2)

        cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), cv2.FILLED)
        font = cv2.FONT_HERSHEY_DUPLEX
        # Look up the person's name from ref_dictt; fall back to "Unknown"
        # for faces that did not match any stored embedding.
        cv2.putText(frame, ref_dictt.get(name, "Unknown"), (left + 6, bottom - 6), font, 1.0, (255, 255, 255), 1)

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()
Now run the second part of the project to recognize the person:




This deep learning project teaches you how to develop a human face recognition system with the Python libraries dlib, face_recognition, and OpenCV.

It also introduces the face_recognition API. We implemented this Python project in two parts:

  • In the first part, we saw how to capture the information describing a human face’s structure, i.e., its face embeddings, and how to store these embeddings in a pickle file.
  • In the second part, we saw how to recognize a person by comparing their new face embeddings with the stored ones.

