Solicitor Ernest Aduwa discusses the need to avoid a shift towards a surveillance state, in Digital Forensics Magazine.
Ernest’s article was published in Digital Forensics Magazine on 26 July 2019.
In August 2012, Wikileaks published documents said to expose “TrapWire”, a US government spy network that used ordinary surveillance cameras to analyse data on people of interest. The TrapWire system was said to use data from a network of CCTV systems to assess the threat level in a number of locations. The Wikileaks publications inevitably raised concerns about CCTV cameras’ facial recognition capabilities, leaving people to question whether they were slipping into a total surveillance state without even realising it.
Facial recognition verifies the identity of a person using their face, capturing and comparing patterns based on facial details. Whilst the continued development and use of the biometric technology is no doubt an exciting subject, there are grave concerns surrounding its arrival and growth.
Normalising facial recognition is arguably aimed at dispelling any fears that we are slipping towards any such surveillance state. In June 2015, Google launched FaceNet, a new recognition system with impressive accuracy; the technology is incorporated into Google Photos and used to sort pictures and automatically tag them based on the people recognised. Fast forward half a decade from the TrapWire revelations and Facebook is using face recognition technology to “help protect you from a stranger using your photo to impersonate you.”
Apple is using facial recognition technology to secure devices and authorise payments through those devices; and Aella Credit, a financial services company based in West Africa, uses Amazon Rekognition’s ability to detect and compare faces. The latter can provide identity verification without any human intervention. Customers upload a profile image to the mobile app, which is then sent to Amazon Rekognition and saved in Amazon Simple Storage Service (Amazon S3), where the customer’s facial data is analysed and stored.
Using face recognition technology for social media, mobile phone devices and banking makes the technology seem like part of normal day-to-day life, so public views will eventually shift towards accepting the technology as being here to stay. The risk is that once it is deployed in public, it may become impossible for individuals to opt out of being monitored and recognised without being targeted as reclusive enemies of the state.
Facial recognition is undoubtedly an important tool in combating crime and terrorism, but the risks for individuals not engaging in any illegal activity are well documented. The General Data Protection Regulation (GDPR) provides a rigorous framework for facial recognition. The regulation ‘seeks to harmonise the protection of fundamental rights and freedoms of natural persons in respect of processing activities and to ensure the free flow of personal data between member states’ (https://www.eur-lex.europa.eu).
Dubbed the most important change in data privacy regulation in the last twenty years, the new rules governing how personal data is collected and processed came into effect in the UK on 25 May 2018 by way of the Data Protection Act 2018 (DPA 2018). The Government has indicated that the UK’s decision to leave the EU will not alter that position.
Data used to identify any individual is classed as ‘sensitive data’ under data protection law, so companies providing facial recognition technology must take considerable care when processing such data. Once data is identified as personal data, the GDPR and DPA 2018 provide a framework for how that data can be processed lawfully, and there are additional obligations when it comes to processing what is described as ‘special categories of personal data.’ Those special categories are defined as data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership; genetic data; biometric (including dactyloscopic) data processed for the purpose of uniquely identifying a natural person; and data concerning health, sex life or sexual orientation. Under the GDPR the processing of such data is prohibited unless one of the exemptions in Article 9(2) applies. Schedule 1 of the DPA 2018 sets out how those provisions are to be interpreted under UK law.
Time will tell whether the GDPR and DPA 2018 will be adequate to keep up with rapid developments in facial recognition technology. It is clear that companies offering those services will need appropriate training and policies in place to prevent data breaches, and will need to be transparent about their processes and procedures, if they are to be GDPR compliant. However, if we are to truly counterbalance any movement towards a surveillance state, it is also of vital importance that individuals are aware of their rights relating to their personal data, in particular ‘the right to be forgotten’ (Article 17 GDPR).