Perils of facial recognition algorithmic tools in policing
Thursday, June 20, 2019 @ 12:41 PM | By Jill Presser
This means that police are feeding images of people in Toronto, captured as we go about our daily business, into an algorithmic tool trained to identify faces. These could be photos from a crime scene caught on cellphone cameras, images captured by video surveillance cameras in department stores or public parking garages, or stills and videos caught by the myriad cameras we happen to pass every day.
We may or may not know we have been photographed. We may or may not have consented to our images being captured, memorialized and used by the police.
Toronto police are then taking these images and feeding them into a facial recognition algorithmic tool. The artificial intelligence (AI) then compares faces captured in the subject images against a database of 1.5 million mugshot photographs.
The point of this exercise is to assist with criminal investigations: to see if the algorithm can identify someone in an image as a person already known to police.
Police Chief Mark Saunders reported on the facial recognition tool to the Toronto Police Services Board in mid-May of this year. By then, the tool had already been in use for more than a year. It seems that the Toronto police quietly ran a pilot project to test facial recognition tools in Toronto from September 2014 to September 2015.
With little or no public consultation or scrutiny, Toronto police issued a request for proposals for a facial recognition tool of its own in 2017.
Two companies’ proposals were evaluated and benchmark tested. Again with little or no public consultation or scrutiny, Toronto police selected the facial recognition tool of the NEC Corporation of America.
The NEC algorithmic tool was installed in Toronto in March of 2018. The 12-month warranty period for the tool expired in March of 2019, at which time the Toronto force contracted for maintenance and support from NEC for five more years, until 2023.
Apparently, at some point, the Toronto police conducted a privacy-impact assessment in relation to the facial recognition tools it acquired.
This is not a public document, so the scope and conclusions of the privacy assessment are not easily accessible.
And just like that, our police service has a tool that helps it surveil people in their day-to-day movements around Toronto. Ottawa and Calgary police services are also using facial recognition tools.
San Francisco recently banned the use of the very same kind of facial recognition algorithmic tools that the Toronto Police Service is now using.
There, the ban was motivated by privacy and civil liberties concerns, by the potential for misuse and by the risk of state and law enforcement overreach.
These are very real concerns, and ones Torontonians should share.
Notwithstanding the law enforcement benefits that facial recognition algorithms may offer, we need to worry about facial recognition. We need to ask our legislators to consider following San Francisco’s example.
This is part one of a two-part series. Part two: Why facial recognition technology is racist, invasive, dangerous.
Jill R. Presser is a Toronto lawyer practising criminal defence, digital rights and AI law. Contact her at email@example.com.