[CAVIE-ACCI] The Pretoria-based Institute for Security Studies (ISS) is urging leaders of sub-Saharan African countries to ensure regulations are in place before the rollout of facial recognition technology, which many experts at the African Center for Competitive Intelligence see as one of the ‘most intrusive forms of surveillance’.
Karen Allen, senior research adviser on emerging threats in Africa at the ISS, warns in a recent article that the speed of digital innovation threatens to outpace the law and lawmakers. She adds that in the wake of the killing of George Floyd, several American companies withdrew their facial recognition software products amid concerns about flaws, biases and misuse. This, she contends, should have served as a wake-up call for Africa’s leaders.
Emerging biometric technologies have become ubiquitous across many parts of Africa, including facial recognition technologies in Zimbabwe, Uganda and South Africa, writes Allen.
She says high-speed internet has made it possible to collect vast amounts of data that must be recorded, analysed and stored. Although internet usage in Africa lagged behind the global average in 2017, one in five households on the continent now uses the internet, according to the World Bank, and the figure is rising.
Biometric data is being used to monitor borders, to grant access to government services such as welfare payments, and to protect commercial entities from fraud, Allen points out.
“Facial recognition, or its close cousin facial authentication technology, is used by law enforcement and private security companies for security, digital forensics and predictive policing. In business it is among the technologies deployed for access control and client registration.”
Artificial intelligence-driven one-to-one authentication, where someone’s identity is matched against an ID document or another identifier, is less susceptible to abuse as it requires prior consent, said Gur Geva, CEO of iiDENTIFii, a South African biometrics company, speaking at a June ISS webinar on the issue.
In contrast, Allen notes, facial recognition technology, where a “match” is made against a database, doesn’t depend on this.
“Therefore facial recognition technology has attracted the greatest controversy. In the extreme, it has seen US companies such as Clearview AI face legal challenges by civil liberty groups.”
The firm is accused of amassing a database of billions of faces, captured from images placed on social media platforms and other websites, and selling an app to provide access to law enforcement agencies.
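The distinction Geva and Allen draw between one-to-one authentication and one-to-many recognition can be sketched in code. This is a minimal illustration only: the toy vectors stand in for face embeddings, and the threshold and function names are hypothetical, not drawn from any real system.

```python
# Toy sketch contrasting 1:1 verification with 1:N identification.
# "Embeddings" here are hand-made vectors; real systems derive them
# from deep neural networks. All names and thresholds are illustrative.
import math

THRESHOLD = 0.9  # illustrative acceptance threshold


def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def verify(probe, enrolled):
    """1:1 authentication: compare a probe against ONE consented,
    enrolled template — the consent-based model Geva describes."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD


def identify(probe, database):
    """1:N recognition: search an entire database for the best match,
    with no prior consent from the people enrolled in it."""
    best_id, best_score = None, 0.0
    for person_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= THRESHOLD else (None, best_score)
```

The structural difference is the point: verification touches one record the subject chose to enrol, while identification scans everyone in the database, which is why the latter raises the consent questions at the heart of the Clearview AI challenge.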
Central to this challenge is the issue of presumed consent. South Africa’s Protection of Personal Information (POPI) Act of 2013 sets out the circumstances under which data can be collected, processed and stored.
Although much of the law has only recently been rolled out and is yet to be tested, such harvesting of data made public for one purpose and sold on for another would almost certainly be deemed illegal under the act.
Allen quotes Dr Brett van Niekerk, a cyber expert at the University of KwaZulu-Natal, to reinforce her view: “Cybersecurity is a huge problem because biometric technologies operate within the system of cyberspace, and the data stored, if not sufficiently secured, can be leaked, altered or stolen.”
A stolen identity could then be used almost as a digital balaclava to perpetrate further crimes, for instance by gaining access to a building, a computer network or someone’s bank accounts.
Furthermore, with data centralised, as it would be under Kenya’s proposed Huduma Namba digital ID scheme, a single point of failure creates a particular risk of hacking attacks, says Allen.
The denial of privacy is another potential harm, the senior ISS researcher points out.
“The right to privacy is enshrined in numerous international conventions and national constitutions. There are concerns that the technology, if not properly checked, is prone to ‘function creep’ and being deployed as a tool of mass surveillance.”
This could mean identifying individuals at protests, for example, or, as reported in Uganda, potentially identifying and tracking opposition politicians. There, Huawei has installed facial recognition systems in closed-circuit television cameras as part of its Safe City initiative.
The third threat, according to Allen, is algorithmic bias: repeated studies have shown facial recognition technologies to have higher error rates when identifying people of colour.
Renée Cummings, a US criminologist and advocate of ethical artificial intelligence, said at the ISS webinar that such biases had led to an “over-policing of black and brown communities in the US by law enforcement.”
This prompted a debate about whether countries such as South Africa needed to develop context-specific algorithms before the technology was deployed. This would help ensure that the database against which a face is matched is an accurate reflection of local demographics.
In South Africa, a raft of legislation, including the 2013 POPI Act and the Cybercrimes Bill of 2017 (which is yet to become law), seeks to mitigate the unintended consequences of emerging technologies that could otherwise deliver positive transformations in African states.
Allen recommends some measures for policymakers to consider: regular audits of facial recognition databases, context-specific algorithms, and checks to ensure databases are protected against intrusion by the most robust cybersecurity measures.
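One way the first two recommendations could be operationalised is a routine check that a matching database reflects local demographics, since an unrepresentative database is one driver of the error-rate disparities described above. The sketch below is hypothetical: it assumes each enrolled record carries a self-reported demographic group, and the field names, tolerance and census shares are illustrative, not drawn from any real deployment.

```python
# Hypothetical audit sketch: flag demographic groups whose share of a face
# database deviates from the local population share by more than a tolerance.
# All names, shares and thresholds are illustrative assumptions.
from collections import Counter


def demographic_audit(database_groups, census_shares, tolerance=0.05):
    """Return {group: (db_share, census_share)} for every group whose
    share of the database differs from the census share by > tolerance."""
    counts = Counter(database_groups)
    total = sum(counts.values())
    flags = {}
    for group, census_share in census_shares.items():
        db_share = counts.get(group, 0) / total
        if abs(db_share - census_share) > tolerance:
            flags[group] = (db_share, census_share)
    return flags
```

Run periodically, such a check would give auditors a concrete signal that a database has drifted away from the population it is meant to serve, before the mismatch shows up as biased identifications.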