Linux Format

A face for AI…

We’re not the prettiest, but perhaps our Pi might not take offence.


Even though our face-detection routine uses higher-level libraries, the universal OpenCV library is also required; it provides various value-added services, such as methods that convert images between various colour spaces.

Given that a manual installation of OpenCV is a high-effort task (and requires hours of compile time, as discussed at https://pimylifeup.com/raspberry-pi-opencv/), the following steps take a different approach.

We’re using the precompiled packages described at https://lindevs.com/install-precompiled-opencv-on-raspberry-pi, which can be downloaded from https://github.com/prepkg/opencv-raspberrypi. This provides an OpenCV library that is in principle ready to run and contains all elements except for GUI support, which the repository maintainer chose not to include.

Be that as it may, the actual installation of the OpenCV library is accomplished by deploying a DEB package, which is downloaded using Wget:

$ wget https://github.com/prepkg/opencv-raspberrypi/releases/latest/download/opencv_64.deb
. . .

$ sudo apt install -y ./opencv_64.deb

Machine learning libraries

Even though OpenCV integrates itself deeply into the OS, it is but one of many requirements for a successful ML pipeline. The next step involves downloading a package created by Adam Geitgey – it provides a high-level face detector API for Python developers:

~/cvspace $ pip3 install face-recognition

Don’t worry if the deployment process is stuck at:

Building wheel for dlib (PEP 517)

Pip needs to perform complex compiles before the packages are deployed. A delay of up to 15 minutes is normal; if your Pi’s power supply is of low quality or the SD card is slow, an even longer wait is not unusual.

When the compile is over, you may see warnings like:

WARNING: The scripts face_detection and face_recognition are installed in '/home/pi/.local/bin' which is not on PATH.

These are harmless – you’re more interested in important information such as the following:

Successfully installed dlib-19.24.2 face-recognition-1.3.0 face-recognition-models-0.3.0

The imutils package then needs to be deployed:

~/cvspace $ pip3 install imutils

On the shoulders of AI giants

One of the benefits of the Python community is the wide availability of code samples. There is almost no job for which a Pythonista can’t find an open source implementation. In the case of face detection, we’ll use the MIT-licensed source code developed by Caroline Dunn – her code is unnecessarily complex in that she used OpenCV both for GUI display and for image acquisition. Due to that, we need to modify the various programs before running them.

Be that as it may, the first task involves downloading the code from the repository:

~/cvspace $ git clone https://github.com/carolinedunn/facial_recognition.git
~/cvspace $ cd facial_recognition/

The next step involves modifying train_model.py. As provided by the repository, it is intended to obtain training data from an array of JPEG files found in the Pi’s filesystem. Given that our computer is connected to the camera, we can harvest the images live. For this, the first modification is an import of Picamera2:

from picamera2 import Picamera2

After that, an image acquisitio­n configurat­ion needs to be created:

picam2 = Picamera2()
config = picam2.create_preview_configuration(main={"size": (640, 480), "format": "YUV420"})
picam2.configure(config)
picam2.start(show_preview=True)

Working with the API is somewhat complex, as it follows a fully stream-driven paradigm. Our first task is therefore the creation of a camera configuration – a structure containing various parameters specifying how information from the camera sensor is to be presented to the application.

In the case of our example, the attribute "format": "YUV420" is especially important – it specifies the pixel format that will be expected by the OpenCV code. Furthermore, we invoke the method picam2.start(show_preview=True) to start the actual image-processing pipeline. By setting show_preview to True, we also spawn a preview window.

After that, the main work loop needs to be modified: while True: print(“-------”) print(“are we done? n for another one, q to

terminate”) inp = input(); if inp == “q”:

break; print(“Capturing”) yuv420 = picam2.capture_array() rgb = cv2.cvtColor(yuv420, cv2.COLOR_

YUV420p2RG­B)

We intend for each acquisition cycle to be triggered by pushing n and Return – this is accomplished by the code shown here. The method picam2.capture_array() momentarily freezes the sensor data stream and returns an array with colour information, which is then transformed into the format expected by the machine-learning payload.
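To see why the conversion step is needed, it helps to check the arithmetic of the YUV420 layout: the luma plane holds one byte per pixel, while the two chroma planes are subsampled 2x2. The numbers below are our own sketch for the 640x480 configuration used here, not code from the repository:

```python
# YUV420 (planar): a full-resolution Y plane plus quarter-size U and V planes.
width, height = 640, 480

y_bytes = width * height                # one luma byte per pixel
u_bytes = (width // 2) * (height // 2)  # chroma subsampled 2x2
v_bytes = u_bytes

total_bytes = y_bytes + u_bytes + v_bytes
rows = total_bytes // width             # rows of the flat array OpenCV receives

print(total_bytes)  # 460800 - 1.5 bytes per pixel
print(rows)         # 720 - capture_array() yields a (720, 640) array
```

This is why cv2.COLOR_YUV420p2RGB expects an array 1.5 times the image height: the chroma planes are stacked below the luma plane.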

After that, one of the predefined methods in the face detection module can be used to determine the face position:

boxes = face_recognition.face_locations(rgb, model="hog")
encodings = face_recognition.face_encodings(rgb, boxes)
for encoding in encodings:
    knownEncodings.append(encoding)
    knownNames.append("Tam")

The example at hand trains only one face. If you want to train the model on multiple faces instead, the string appended to knownNames must be modified to contain the name of the person currently occupying the space in front of the sensor.
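For multi-person training, that per-capture labelling can be wrapped in a small helper. The function below is our own sketch, not part of Caroline Dunn’s repository; it simply tags every encoding from one capture session with the sitter’s name (here with toy strings standing in for real encodings):

```python
def label_encodings(encodings, person_name, known_encodings, known_names):
    # Tag every encoding produced by one capture with the current sitter's
    # name, keeping the two lists index-aligned as train_model.py expects.
    for encoding in encodings:
        known_encodings.append(encoding)
        known_names.append(person_name)

known_encodings, known_names = [], []
label_encodings(["enc-a", "enc-b"], "Tam", known_encodings, known_names)
label_encodings(["enc-c"], "Alex", known_encodings, known_names)
print(known_names)  # ['Tam', 'Tam', 'Alex']
```

In the interactive loop shown above, a simple input() prompt before each capture round is enough to collect the name.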

Finally, the image information is to be persisted. The code is a prime example of use of the pickle persistence module – if you have issues understanding it, please consult the documentation at https://docs.python.org/3/library/pickle.html to learn more about this generally useful part of the Python standard library:

data = {"encodings": knownEncodings, "names": knownNames}
f = open("encodings.pickle", "wb")
f.write(pickle.dumps(data))
f.close()
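The round trip through pickle can be tried without any camera hardware at all; the toy lists below stand in for the real 128-dimensional encodings:

```python
import pickle

# Stand-in data: face_recognition actually produces 128-number encodings.
data = {"encodings": [[0.1, 0.2, 0.3]], "names": ["Tam"]}

blob = pickle.dumps(data)       # the bytes train_model.py writes to encodings.pickle
restored = pickle.loads(blob)   # what facial_req.py later reads back

print(restored["names"])        # ['Tam']
print(restored == data)         # True - the structure survives unchanged
```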

Harvesting the models

Once the model file is persisted, the facial_req.py script is intended to be used to perform the actual face recognition. It has to be modified – as mentioned above, the precompiled OpenCV package was compiled without the GUI stack.

Due to this, the first act involves modifying the image-acquisition pipeline so that it uses Picamera2 instead of OpenCV:

. . .
data = pickle.loads(open(encodingsP, "rb").read())
#vs = VideoStream(usePiCamera=True).start()
picam2 = Picamera2()
config = picam2.create_preview_configuration(main={"size": (640, 480), "format": "YUV420"})
picam2.configure(config)
picam2.start(show_preview=True)

After that, the main work loop must be modified to, once again, process the image data provided:

while True:
    yuv420 = picam2.capture_array()
    rgb = cv2.cvtColor(yuv420, cv2.COLOR_YUV420p2RGB)
    frame = imutils.resize(rgb, width=500)
    . . .
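The resize to width=500 is not arbitrary: imutils.resize preserves the aspect ratio, deriving the new height from the source geometry, and a smaller frame makes the HOG detector noticeably faster. The arithmetic, sketched here without the library itself:

```python
# imutils.resize(frame, width=500) keeps the aspect ratio: only the target
# width is given, and the height is scaled by the same factor.
orig_w, orig_h = 640, 480
new_w = 500
new_h = int(orig_h * (new_w / orig_w))  # same ratio as the source frame

print((new_w, new_h))  # (500, 375)
```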

In principle, the program is now ready to run – but due to the missing GUI support, errors may crop up pointing to missing elements in the OpenCV framework. If this is the case on your machine, simply comment out the offending line.

When run successfully, a result similar to the one in the screenshot (above) appears. Note that the preview window runs independently of the rest of the Python program, providing an overview of whatever is in front of the camera sensor.
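Under the hood, face_recognition declares a match when the Euclidean distance between two 128-dimensional encodings falls below a tolerance of 0.6, the library’s default. A minimal re-implementation of that test, using short toy vectors instead of real encodings:

```python
import math

def is_match(known, candidate, tolerance=0.6):
    # compare_faces boils down to this: the Euclidean distance between two
    # encodings, with 0.6 as the default cut-off below which two faces
    # count as the same person.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(known, candidate)))
    return dist <= tolerance

print(is_match([0.1, 0.2, 0.3], [0.1, 0.25, 0.3]))  # True  (distance 0.05)
print(is_match([0.1, 0.2, 0.3], [0.9, 0.8, 0.1]))   # False (distance ~1.02)
```

Lowering the tolerance makes recognition stricter; raising it produces more false positives.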

Going further

While a library-based face-detection system is convenient to use, it doesn’t go very deep into the underlying neural network theory. Should you wish to learn more about the underlying mathematics, the tutorial at https://pyimagesearch.com/2018/09/24/opencv-face-recognition/ might be useful – do, however, keep in mind that it is quite difficult.

Finally, a disclaimer is required: this code must not be used for any security-relevant applications. It can easily be tricked – for example, with a static image of the relevant person.

The face recognition is ready for action.
