“Our senior management team is planning to implement AI tools across our business to monitor employee site access and attendance. One of their initial suggestions is using facial recognition software to control access to our sites and to monitor attendance and timekeeping. I have been asked to report on the HR implications. How do I handle it?”
Biometric authentication systems like facial recognition or fingerprint checks are now present in most aspects of life, from airport security gates to banking apps and Face ID. In the workplace, facial or fingerprint scans can be used to control and monitor employee site access instead of more traditional methods like keycards.
Facial recognition software is increasingly powered by artificial intelligence, making it a powerful tool for monitoring employees and analysing the huge amounts of data this produces, which could undoubtedly allow you to identify attendance, timekeeping and security issues quickly and accurately. However, using this software raises complex issues under both data protection and discrimination law.
Facial recognition checks naturally involve processing a large amount of employees’ “biometric data”, meaning data relating to someone’s physical characteristics (like their face, voice or fingerprints), which has been analysed using specific technologies that can uniquely identify them.
When biometric data is used to identify someone, it becomes “special category” biometric data and attracts a higher level of protection under the UK GDPR. This means you need to establish both a lawful basis and a “special category” condition for processing. The lawful basis will usually be either consent or legitimate interests. There are only ten available special category conditions, and the most applicable here will likely be obtaining explicit employee consent, which must be specific and fully informed.
If you use the facial recognition software to make automated decisions about employees, such as denying them site access, this is “automated decision making” under the UK GDPR. If the software can make solely automated decisions which have a legal or similarly significant effect (for example, generating attendance warnings that lead to termination of employment), you will very likely need to obtain employees’ explicit consent and provide ways for them to request human intervention or challenge the automated decisions. From an employment perspective, failing to process employee personal data in a compliant manner could breach the duty of trust and confidence which forms an implied term of the contract of employment.
More generally, you would be expected to carry out a data protection impact assessment (DPIA) before using the software and to follow the core data protection principles, primarily around minimising the personal data you use and store.
This could involve, for example, using software that immediately deletes facial scans after verifying them rather than storing them. This is known as “transient processing”, which still needs to be fully compliant but should help demonstrate that you are building in privacy protections by design.
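To make the idea concrete, here is a minimal sketch of what transient processing can look like. It is illustrative only: `capture_frame` and `embed_face` are hypothetical stand-ins for whatever your vendor’s camera and recognition stack actually provide, and the 0.8 match threshold is an assumed value.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed value; tuned by the vendor in practice

def capture_frame() -> np.ndarray:
    # Hypothetical stand-in for the camera capture step.
    return np.random.rand(128)

def embed_face(frame: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the face-embedding model
    # (image -> unit-length feature vector).
    return frame / np.linalg.norm(frame)

def verify_employee(stored_template: np.ndarray) -> bool:
    """Check a live scan against the enrolled template without
    persisting it anywhere ("transient processing")."""
    frame = capture_frame()
    embedding = embed_face(frame)
    try:
        # Dot product of unit-length vectors = cosine similarity.
        score = float(np.dot(embedding, stored_template))
        return score >= SIMILARITY_THRESHOLD
    finally:
        # The scan and its embedding exist only in memory and are
        # discarded here; only the pass/fail outcome leaves the function.
        del frame, embedding
```

The point of the structure is that only the verification outcome (and, say, a timestamp) is ever logged; the biometric data itself is never written to storage.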
Getting it wrong can have serious consequences. In February 2024, the ICO took enforcement action against Serco Leisure for using facial recognition software to monitor when its 2,000 employees clocked in and out, automatically using this data to manage their pay. Serco could not demonstrate why facial scanning was necessary or proportionate compared to less intrusive measures, like a more traditional keycard system. The system had been presented to staff as a requirement for them to be paid, meaning they could not give valid consent. The ICO ordered Serco to stop using the facial recognition system and to destroy the biometric data.
You will also need to consider the risk that facial recognition software could inadvertently discriminate against certain employees. Implementing facial recognition checks is a “provision, criterion or practice” which could be indirectly discriminatory if it places certain racial groups at a particular disadvantage compared to others.
You should not assume that using AI tools will remove this risk simply because humans are not involved. Facial recognition software uses sets of rules, or “algorithms”, to make decisions. Algorithms can perpetuate biases, for example by focusing on particular facial features which are more prominent in some racial groups than others. Algorithms are trained to be smarter and faster using training data sets, so if the facial recognition software is trained on images of predominantly white faces, it will become better at identifying white faces than those of other racial groups.
Research has shown that current software has the highest error rates when identifying black faces, and particularly black women, with error rates as high as 35% in some studies. There is a foreseeable risk that relying solely on AI software could lead to discriminatory outcomes, especially if employees and managers do not fully understand how the software works or where human oversight is most needed.
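One practical safeguard is to audit the system’s own logs for disparities. The sketch below is illustrative only and uses toy data; it assumes you can join verification outcomes to voluntarily provided equality-monitoring information, and it simply computes the failure rate per group so that a markedly higher rate for one group can be spotted and investigated.

```python
from collections import defaultdict

# Toy verification log: (group, passed_verification). In practice this
# would come from the system's logs joined to voluntarily provided
# equality-monitoring data.
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(int)
failures = defaultdict(int)
for group, passed in records:
    totals[group] += 1
    if not passed:
        failures[group] += 1

# A markedly higher failure rate for one group is a signal to pause
# automated decisions and investigate, not proof of discrimination.
for group in sorted(totals):
    rate = failures[group] / totals[group]
    print(f"{group}: {failures[group]}/{totals[group]} checks failed ({rate:.0%})")
```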
These risks were highlighted in the recent Manjang v Uber case, where the Claimant brought claims for indirect race discrimination, harassment and victimisation based on the app’s use of facial verification checks as part of its driver authentication process. Uber’s software used artificial intelligence to make automated decisions about allowing or withholding access to the platform. Mr Manjang’s access was removed after he consistently failed the facial verification check.
The case was confidentially settled but highlights the importance of ensuring a level of human oversight over decisions made by AI tools like facial recognition software, and of making any automated decision making as transparent and explainable as possible. This should reduce the risk of inadvertently using the software in a discriminatory way.
Ultimately, the question isn’t whether you can use this software, because it is readily available. Instead, the question is whether you should, given the data protection and discrimination implications. That will depend greatly on your circumstances, for example whether controlling and monitoring site access is a key priority for security, hygiene or health and safety reasons.