The Telegram (St. John's)

Big Brother is watching

Facial recognition needs ethical regulations

BY WILLIAM MICHAEL CARTER

William Michael Carter is an assistant professor in Creative Industries at Ryerson University. This article was originally published on The Conversation, an independent and nonprofit source of news, analysis and commentary from academics.

My mother always said I had a face for radio. Thank God, as radio may be the last place in this technology-enhanced world where your face won’t determine your social status or potential to commit a crime.

RealNetworks, the global leader in technology that enables the seamless digital delivery of audio and video files across the internet, has just released its latest computer vision software: a machine learning package. The hope is that this new software will detect, and potentially predict, suspicious behaviour through facial recognition.

Called SAFR (Secure, Accurate Facial Recognition), the toolset has been marketed as a cost-effective way to blend smoothly into existing CCTV video monitoring systems. It will be able to “detect and match millions of faces in real time,” specifically within school environments.

Ostensibly, RealNetworks sees its technology as something that can make the world safer. The catchy branding, however, masks the real ethical issues surrounding the deployment of facial detection systems. Some of those issues include questions about the inherent biases embedded within the code and, ultimately, how that captured data is used.

The Chinese model

Big Brother is watching. No other country in the world has more video surveillance than China. With 170 million CCTV cameras and some 400 million new ones being installed, it is a country that has adopted and deployed facial recognition in an Orwellian fashion.

In the near future, its citizens, and those of us who travel there, will be exposed to a vast and integrated network of facial recognition systems monitoring everything from the use of public transportation, to speeding, to how much toilet paper one uses in a public toilet.

The most disturbing element so far is the recent introduction of facial recognition to monitor school children’s behaviour within Chinese public schools.

As part of China’s full integration of its equally Orwellian social credit system, an incentive program that rewards each citizen’s commitment to the state’s dictated morals, this fully integrated digital system will automatically identify a person. It can then determine one’s ability to progress in society, and by extension the economic and social status of that person’s immediate family, by monitoring behaviour the state has not sanctioned.

In essence, facial recognitio­n is making it impossible for those exposed to have the luxury of having a bad day.

Facial recognition systems now being deployed within Chinese schools are monitoring everything from classroom attendance to whether a child is daydreaming or paying attention. It is a full-on monitoring system that determines, to a large extent, a child’s future without considering that some qualities, such as abstract thought, cannot easily be detected by facial recognition, or at best are not looked upon favourably.

It also raises some very uncomfortable notions of ethics, or the lack thereof, especially towards more vulnerable members of society.

Need for public regulation

RealNetworks’ launch of SAFR comes hot on the heels of Microsoft president Brad Smith’s impassioned manifesto on the need for public regulation and corporate responsibility in the development and deployment of facial recognition technology.

Smith rightly pointed out that facial recognition tools are still somewhat skewed and have “greater error rates for women and people of colour.” The problem is twofold: first, there is an acknowledgement that the people who write the code may unconsciously embed cultural biases.

Second, the data sets currently available may lack the objective robustness required to ensure that people’s faces aren’t being misidentified or, even worse, predetermined through encoded bias, as is now beginning to happen in the Chinese school system.

In an effort to address these and myriad other related issues, Microsoft established an AI and Ethics in Engineering and Research (AETHER) Committee. The committee is also set up to help the company comply with the European Union’s newly enforced General Data Protection Regulation (GDPR) and its eventual adoption, in some form, in North America.

Smith’s ardent appeal rightly queries the current and future intended use and deployment of facial recognition systems, yet fails to address how Microsoft or, by extension, other AI technology leaders can eliminate biases within their base code or data sets from the outset.

Minority report

“The features of our face are hardly more than gestures which force of habit has made permanent.” — Marcel Proust, 1919

As with many technologies, Pandora’s box has already been opened. If you own a smartphone and use the internet, you have already opted out of any basic notion of personal anonymity within Western society.

With GDPR now fully in force in Europe, visiting a website requires you to “opt in” to the possibility that the site might be collecting personal data. Facial recognition systems have no comparable means of following GDPR rules, so we as a society are automatically “opted in” and thus completely at the mercy of how our faces are being recorded, processed and stored by governmental, corporate or even privately deployed CCTV systems.

Facial recognition trials held in England by the London Metropolitan Police have consistently yielded a 98 per cent failure rate. Tests in South West Wales have done only slightly better, with a success rate of less than 10 per cent.

Conversely, University of California, Berkeley, scientists have concluded that substantive facial variation is an evolutionary trait unique to humans. So where is the disconnect?

If, as Marcel Proust suggested, our lives and thus our personalities are uniquely identifiable by our faces, why can’t facial recognition systems easily return positive results?

The answer goes back to how the computer code is written and the data sets that code uses to return a positive match. Inevitably, the code is written to support an idealized notion of facial type.

As such, outlying variations like naturally occurring facial deformities or facial features affected by physical or mental trauma represent only a small fraction of the infinite possible facial variations in the world. The data sets assume we are homogeneous doppelgangers of each other, without addressing the micro-variations of people’s faces.

If that’s the case, we are all subject to the possibility that our faces, as interpreted by the ever-increasing deployment of immature facial recognition systems, will betray the reality of who we are.

AP PHOTO: A camera with facial recognition capabilities hangs from a wall while being installed at Lockport High School in Lockport, N.Y.
