Automatic classification of pictures based on facial recognition: case example of a wedding with >60 attendees

I recently received a pack of ca. 700 pictures from a wedding that I wanted to distribute among the different attendees (>60 people) according to their appearance. The idea is to use the face-recognition library (MIT license) created by Adam Geitgey, together with a few lines of Python code, to orchestrate an algorithm that automatically classifies all pictures into personal folders, one per attendee. No prior training data is needed: each time an unknown face is detected, we are prompted to enter the name of that person, who from that point on is added to the list of known faces. The first part of the code imports all modules and libraries needed in this short project, written for Python 3.8.

import face_recognition # https://pypi.org/project/face-recognition/
import glob, sys, os, shutil
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib.patches import Rectangle
# Keep the raw_input name used below pointing at the Python 3 input()
if sys.version_info[0] >= 3:
    raw_input = input
from IPython.display import clear_output # clears the notebook output between prompts

We proceed by listing all pictures in the current folder (in *.jpg format in my case) and creating empty lists to store the known faces and the names of the attendees.

pictures = sorted(glob.glob('*.jpg'))
faces = []
names = []

The core of the facial recognition algorithm starts now:

  • The algorithm runs iteratively on every picture until completing the entire set.
  • We start by locating faces in any given picture. If no faces are located (such as in pictures of buildings, dresses, landscapes, decoration…), the algorithm quickly jumps to the next one in the loop to save time, and the picture remains unclassified in the parent folder (to be manually reviewed later).
  • If faces are detected, we extract their encodings and, for each of them, try to find a match by comparing against our list of known faces. The list is initially empty in our case – the algorithm will be trained by us on-the-fly!
  • If a face is not recognized, we are prompted with the corresponding image and a rectangle drawn at the precise face coordinates, and we enter the name of that person manually – only once per attendee.
  • At some point, the face-recognition neural network detected white flowers as a face in my set of pictures. I therefore introduced an exception: if the name entered is skip, the face is not added to the list of known people.
  • Each attendee gets a personal folder, into which copies of the pictures in which they actually appear are placed.
  • Once the picture is classified, it is deleted from the parent folder to save space, and the loop continues with the next picture.
for picture in pictures:
    # Find all the faces and face encodings in the current picture
    image = face_recognition.load_image_file(picture)
    face_locations = face_recognition.face_locations(image)
    # If no faces are detected, skip the picture
    if len(face_locations) == 0:
        continue
    face_encodings = face_recognition.face_encodings(image, face_locations)
    for face_no, face_encoding in enumerate(face_encodings):
        # See if the face is a match for the known face(s)
        matches = face_recognition.compare_faces(faces, face_encoding)
        # If the face is not recognized, add it to the lists of faces and names, with a prompt
        if True not in matches:
            # Face coordinates are returned as (top, right, bottom, left)
            top, right, bottom, left = face_locations[face_no]
            plt.imshow(mpimg.imread(picture))
            # Draw a rectangle around the face in the picture
            plt.gca().add_patch(Rectangle((left, top), right - left, bottom - top,
                                          edgecolor='red', facecolor='None', lw=2))
            plt.show()
            # Ask for the name
            name = raw_input("Name of the person: ")
            clear_output(wait=True)
            # If the name is 'skip', jump to the next face
            if name == 'skip':
                continue
            # Create a personal folder, if required
            if not os.path.exists(name):
                os.mkdir(name)
            # Append to the lists of names and faces, and redo the matching
            names.append(name)
            faces.append(face_encoding)
            matches = face_recognition.compare_faces(faces, face_encoding)
        # Index of the first matching face, which points to the corresponding name
        idx = matches.index(True)
        # Copy the image to the corresponding personal folder
        shutil.copyfile(picture, os.path.join(names[idx], picture))
    # Remove the image from the parent folder to save space
    os.remove(picture)

In my particular case example, it took me around 10 minutes to be prompted for all attendees and to classify the ca. 700 pictures. However, the success rate of the face-recognition library was around 80-85%. It generally fails at recognizing children (as stated in the documentation of the library). A quick review of the unclassified pictures reveals that the library also fails at recognizing people in side (profile) view; this is understandable given the training data set provided to the underlying neural network, which consists essentially of front-facing faces.
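One knob that might help with the harder shots (small or off-angle faces) is the face detector itself: face_locations defaults to a fast HOG-based model, but it also accepts a slower CNN-based model and an upsampling factor for small faces. This was not used in the case example above, so treat the following as an untested sketch rather than part of the classification code:

# Untested variant: the CNN detector plus extra upsampling tends to find smaller
# and more off-angle faces than the default HOG model, at a much higher runtime
# cost (a GPU is strongly recommended for the "cnn" model)
face_locations = face_recognition.face_locations(image,
                                                 number_of_times_to_upsample=2,
                                                 model="cnn")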

Two caveats. First, in this algorithm I assume that among the list of known faces, there’s only one possible match. For that reason I just look at

idx = matches.index(True)

which takes the index of the first “True” entry in the list of matches. If more than one known face matches, this might lead to incorrect classification of the picture. Second, in this case example I did not further fine-tune the tolerance and/or sensitivity of the face-recognition network. It is likely that by fine-tuning its hyperparameters the success rate of the classifier can be substantially improved.
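Both caveats point to the face_distance helper that the library also provides: instead of taking the first matching entry, one can pick the known face with the smallest distance to the new encoding and accept it only if that distance falls below an explicit tolerance (the library default is 0.6; lower values are stricter). A minimal sketch, not used in the classification above and with an illustrative tolerance value, could look like this:

# Stricter matching rule (sketch): pick the closest known face and accept the match
# only if its distance is below an explicit tolerance (0.6 is the library default)
TOLERANCE = 0.5  # illustrative value; would need tuning on one's own pictures
distances = face_recognition.face_distance(faces, face_encoding)
if len(distances) > 0 and distances.min() <= TOLERANCE:
    idx = int(distances.argmin())  # index of the best-matching known face
else:
    idx = None  # treat as an unknown face and prompt for a name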

Please leave a comment below if you ever use this open-source code in one of your projects!