
Face Value

By Liz Booth

Technology has proved to be a huge enabler, allowing people and businesses to make massive strides forward. However, there is always a flipside and it now seems that the right to privacy may start to trump all other concerns when it comes to technology usage.

Take the European Union’s General Data Protection Regulation – it is aimed squarely at businesses that ignore an individual’s right to privacy. And the Europeans are not alone in introducing such safeguards. Now it seems the regulators may have another technology in their sights – facial recognition. Cameras have already proved incredibly useful. It is almost impossible to step out of your house, almost anywhere in the world, and not have your journey tracked.

They are extremely beneficial for police forces and for those wanting to minimise crime, of course. But they are also useful for the insurance profession, which routinely uses CCTV footage to validate or reject a claim. Businesses have been installing dashcams in their vehicles, monitoring both the road ahead and the behaviour of their own drivers – again, an extremely useful tool for insurers after an accident.

Now, however, the profession is going one step further and utilising facial recognition cameras. Again, there are many plus sides, such as ensuring you are who you say you are when withdrawing money from an ATM – already in situ in some US banks – and the technology is beginning to be used by motor manufacturers so that only the rightful owners can drive the car. In both instances, the insurance profession will benefit.

Another likely benefit is that the technology can also be used to pick up minute changes that could indicate a health problem. In the US, Lapetus, an insurtech startup, has developed facial recognition technology to aid insurers in pricing life insurance premiums. However, all is not rosy in the garden.
There are emerging fears about the use of this technology and whether it invades people’s privacy. Should individuals know that when they walk down a particular street, the technology is not only watching them but identifying them?

The World Economic Forum (WEF) has already waded into the debate and in September produced a report in which it warned governments to take people’s privacy into account. Kay Firth-Butterfield, head of artificial intelligence at WEF, is quoted as saying: “The problem is really twofold. Firstly, with the government use of facial recognition technology and then also with the company use of facial recognition.” The bigger issue, Ms Firth-Butterfield says, is about asking: “when does use (of facial recognition technology) by the government amount to security, compared to the invasion of our civil liberties?”
This is where insurers need to be mindful, as the first litigation is already appearing.

In the UK, for example, The Guardian reported the case of an office worker who believes his image was captured by facial recognition cameras when he popped out for a sandwich in his lunch break.
Supported by the campaign group Liberty, Ed Bridges, from Cardiff, raised money through crowdfunding to pursue the action, claiming the suspected use of the technology on him by South Wales Police was an unlawful violation of privacy. He argued that it also breached data protection and equality laws.

By August, the UK’s biometrics commissioner was also joining the debate, describing the use of facial recognition on land by King’s Cross station in London as “alarming”. The BBC reported the biometrics commissioner, Professor Paul Wiles, as saying the government needed to update the laws surrounding the technology, for both the private and the public sector. Meanwhile, in San Francisco, according to reports in The New York Times, the authorities have banned the use of the technology by the police and other agencies.

Back in the UK, the Information Commissioner’s Office (ICO) is also concerned. It has demanded a new statutory code to govern police use of “invasive” facial recognition technology. The watchdog’s investigation follows the August incident over its use at King’s Cross station, in which it determined the technology was a potential threat to the public’s privacy. Jason Tooley, chief revenue officer at biometric authentication firm Veridium, says: “Police forces across the country halting facial recognition trials due to public backlash is a huge step backwards and puts innovation at risk.”

He warns: “There is increasing concern in the community that regulators such as the ICO will take too much of a heavy-handed approach to regulating the technology and we must absolutely ensure innovation is not being stifled or stopped. It is in the public interest for police forces to have access to innovative technology such as biometrics to deliver better services and safeguard our streets.” He also suggests it is all about public perception and acceptance, saying a strategic approach, using other biometric techniques that have greater levels of acceptance such as digital fingerprinting, would ensure a higher level of public consent due to their maturity as identity verification techniques.

“Considering the rapid rate of innovation in the field, adopting an open biometric approach that enables the police to use the right biometric technique for the right scenario, taking into account varying levels of maturity, will see the benefits associated with digital policing accelerated,” he adds. The use of this technology has started with the police and Mr Tooley is confident: “If the police adopt a transparent policy on how biometric data is interpreted, stored and used, the public’s data privacy concerns can be greatly alleviated, which will in turn trigger consent and wider acceptance.”

For the insurance profession, he says: “Managing expectations around biometrics and how the technology will be used is crucial, especially in surveillance use cases. Concerns over data privacy can also be eliminated if sensitive biometric data is stored in the correct way.” Insurers will need to watch this space carefully to see how and when such technology can be adopted as well as monitor how their clients might be using the technology, or else face the prospect of rising defence claims as the liability questions mount up.

Finally, they might also want to think about how they can use the technology internally. Insurance companies in Asia are already reported to be using facial recognition to record client interviews, so they can spot when customers are lying. China’s Ping An Insurance is one firm increasingly using facial recognition technology to record the faces of customers and its own staff to verify their identities, as well as to deter fraud.

The key, as with much of technology, appears to lie in the permissions. If customers have given their authority for the technology to be used – and understand the implications of that – it is much more likely to be accepted.

Some of the questions that consumers are likely to want insurers to answer include:
  • What controls are there in terms of the questions I can be asked?
  • What controls are there in terms of where that data ends up?
  • Would it be equitable if an insurance company were to deny me health insurance based on the predictive abilities of facial recognition?