Panel discussion: Legal issues on facial recognition from an international perspective


Article from Biometric Update:  “Facial recognition critics consider the options in OBVIA webinar”

The current legal landscape throughout North America and Europe is inadequate to deal with the risks to public rights created by increasing facial recognition use, according to panelists in a webinar hosted by Laval University’s International Observatory on the Societal Impacts of AI and Digital Technology (OBVIA).

The webinar accompanies a report by University of Ottawa Professor Céline Castets-Renard, ‘Use of facial recognition by police forces in the public space in Quebec and Canada,’ which explores the legal frameworks that guide the use of facial recognition and the technology’s implementations by police forces in Quebec and elsewhere in Canada. The report is written in French, but an English summary is available. It recommends weighing freedom and security interests against each other, strengthening provincial and federal privacy laws, and adopting specific legislation that makes any use by law enforcement agencies conditional on prior authorization.

Read the article


Following the report by Professor Céline Castets-Renard on the legal framework governing the use of facial recognition by police forces (September 2020), OBVIA is organizing an international panel bringing together European and American researchers to open the debate on the use of this technology.

Date: Wednesday, November 18, 2020 / Time: 10:00–11:30 a.m. EST / Zoom

Registration

Facial recognition systems are increasingly used by police forces in public spaces for surveillance and public security purposes. The technology is used, for example, to detect potential criminals and terrorists among spectators at large events in venues such as stadiums and concert halls. Other advantages are put forward, such as saving time and simplifying the work of police forces. However, these systems carry substantial risks of infringing on individual civil rights. The feeling of being watched can lead to a form of self-censorship among citizens, particularly with regard to their participation in public life and, more broadly, the exercise of their fundamental freedoms. The use of facial recognition can interfere with freedom of movement, expression, association and assembly. The right to privacy and the protection of personal data are also threatened. Facial recognition technology can likewise undermine the dignity of individuals and affect the right to non-discrimination. It can affect the rights of specific groups, such as children, the elderly, people with disabilities, ethnic minorities and racialized populations. In addition, although the technology continues to develop, error rates remain high, especially for certain categories of the population, such as African-Americans and Indigenous people.

About our Chair

Benoît Dupont, Scientific Director of the Smart Cybersecurity Network (SERENE-RISC)

Benoît Dupont is a Full Professor of Criminology at the Université de Montréal and the Scientific Director of the Smart Cybersecurity Network (SERENE-RISC), which he founded in 2014. He has held the Canada Research Chair in Cybersecurity since 2016 and also holds the Research Chair in the Prevention of Cybercrime. Benoît sits as an observer representing the research community on the Board of Directors of the Canadian Cyber Threat Exchange (CCTX) and on the Researchers’ Council of the New Digital Research Infrastructure Organization (NDRIO). He is part of the inaugural cohort of the College of New Scholars, Artists and Scientists of the Royal Society of Canada.

About our speakers

Céline Castets-Renard, Full Professor of Law at the University of Ottawa

Céline Castets-Renard is a Full Professor of Law at the University of Ottawa and holder of the Research Chair on Accountable Artificial Intelligence in a Global Context. She has been a professor at Université Toulouse Capitole in France since 2012, where she had previously been a lecturer in private law from 2002 to 2012. In 2015, she was appointed a junior member of the prestigious University Institute of France. She wrote a reference textbook, ‘Law of the Internet: French and European Law’, first published in 2009 and republished in 2012. Her research generally focuses on digital law and its regulation across various areas of private law, ranging from civil liability to intellectual property, personal data protection, e-commerce, ethical issues related to the regulation of self-driving cars, and cybersecurity.

Carly Kind, Director of the Ada Lovelace Institute

Carly Kind is the Director of the Ada Lovelace Institute, an independent research institute and deliberative body with a remit to ensure data and AI work for people and society. A human rights lawyer and leading authority on the intersection of technology policy and human rights, Carly has advised industry, government and non-profit organisations on digital rights, privacy and data protection, and corporate accountability in the technology sphere. She has worked with the European Commission, the Council of Europe, numerous UN bodies and a range of civil society organisations. She was formerly Legal Director of Privacy International, an NGO dedicated to promoting data rights and governance.

Caroline Lequesne-Roth, Assistant Professor of Public Law, University Côte d'Azur, France

Caroline Lequesne Roth is an Assistant Professor of Public Law at the University Côte d’Azur, in Nice. In 2018, she co-founded the Deep Law for Technologies program, which aims to bring together lawyers, data scientists and computer scientists around the challenges of deep technologies. Director of the Fablex DL4T (Artificial Intelligence Law Clinic) from 2018 to 2020, she is now in charge of the Master’s degree in Algorithmic Law and Data Governance. Her work focuses on the normative and conceptual challenges of the digitalization of public services in the age of artificial intelligence and big data. She is the author of various contributions on surveillance technologies and the relationship between science and power.

Rashida Richardson, Visiting Scholar, Rutgers Law School

Rashida Richardson is a Visiting Scholar at Rutgers Law School and the Rutgers Institute for Information Policy and Law, where she specializes in race, emerging technologies and the law, and a Senior Fellow in the Digital Innovation and Democracy Initiative at the German Marshall Fund. Rashida researches the social and civil rights implications of data-driven technologies, including artificial intelligence, and develops policy interventions and regulatory strategies regarding data-driven technologies, government surveillance, racial discrimination, and the technology sector. She has previously worked on a range of civil rights issues as Director of Policy Research at New York University’s AI Now Institute, Legislative Counsel at the New York Civil Liberties Union (NYCLU), and staff attorney at the Center for HIV Law and Policy. Rashida currently serves on the Board of Trustees of Wesleyan University, the Advisory Board of the Civil Rights and Restorative Justice Project, the Board of Directors of the College & Community Fellowship, the Advisory Board for Reveal from the Center for Investigative Reporting, the Board of Directors of Data for Black Lives, and the Advisory Board of the Electronic Privacy Information Center, and she is an affiliate and Advisory Board member of the Center for Critical Race + Digital Studies. She received her BA with honors in the College of Social Studies at Wesleyan University and her JD from Northeastern University School of Law.

References

  • Céline Castets-Renard, Rapport sur le cadre juridique applicable à l’utilisation de la reconnaissance faciale par les forces de police dans l’espace public au Québec et au Canada: Éléments de comparaison avec les États-Unis et l’Europe, OBVIA, September–November 2020 (French version / English summary).
  • Ada Lovelace Institute. ‘Beyond face value: public attitudes to facial recognition technology’, September 2019 (link).
  • Amba Kak, ed. ‘Regulating Biometrics: Global Approaches and Urgent Questions’, AI Now Institute, September 1, 2020 (link).
  • Caroline Lequesne Roth, ‘Pour un encadrement démocratique de la reconnaissance faciale’, Recueil Dalloz, 2020 (link).
  • Caroline Lequesne Roth, ‘L’encadrement des technologies de surveillance est une condition de la démocratie’, Le Monde, January 2020 (link).
  • Caroline Lequesne Roth, ed. ‘La reconnaissance faciale dans l’espace public: Une cartographie juridique européenne’, Rapport Fablex, 2020 (link).
  • Matthew Ryder and Jessica Jones, Matrix Chambers. ‘Facial recognition technology needs proper regulation – Court of Appeal’, Ada Lovelace Institute, August 2020 (link).
  • Rashida Richardson. ‘AI Now’s Richardson: “Free-range facial recognition”’, Danny in the Valley podcast (link).
  • Samuel Rowe and Jessica Jones, Matrix Chambers. ‘The Biometrics and Surveillance Camera Commissioner: streamlined or eroded oversight?’, Ada Lovelace Institute, October 2020 (link).
  • Stefanie Coyle and Rashida Richardson. ‘Bottom-Up Biometric Regulation: A Community’s Response to Using Face Surveillance in Schools’, AI Now Institute, September 1, 2020 (link).

The International Observatory on the Societal Impacts of AI and Digital Technology is made possible by the support of the Fonds de recherche du Québec.