Written by Sonali Singh for US Ignite
In response to the effects of COVID-19, communities are exploring video surveillance and artificial intelligence (AI) as tools to strengthen public health and safety risk mitigation. However, while video monitoring offers benefits as a public safety tool, there is also serious concern over the accuracy of facial recognition technology and the danger that increased surveillance poses to civil rights.
To determine how video and AI technology could be deployed responsibly and effectively going forward, US Ignite analyzed several cities that have deployed video intelligence and optical sensors – which can be programmed to detect heat or light patterns rather than identify faces – over the last five years. To prioritize privacy, ethical deployment, and community engagement in a post-COVID environment, new projects should consider these takeaways before designing and implementing optical sensor networks:
- Communicate early and often about project objectives and the impacts on city operations, residents, and businesses (see Orlando’s 2018 and 2019 facial recognition memos)
- Address the technology’s shortcomings clearly and detail policies that control for inaccuracies and guard against harms such as unlawful surveillance and racial profiling
- Create a policy that defines ownership and restricts storing and distributing of personal data (see San Diego’s recent streetlight sensor data policy)
- Prepare existing infrastructure and seek out additional resources to help analyze the continuous stream of data
- Remain cautious about the reliability of thermal sensors, which, like optical sensors, are still in development, and implement additional screening procedures (see the sketch after this list)
- Prioritize community approval and engagement, including notifying the public when thermal or optical sensors are newly installed in their area
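To make the thermal-sensor caution above concrete, here is a minimal sketch in Python of what an “additional screening procedure” might look like. The threshold values, tolerance, and sensor feed are assumptions for illustration, not part of any city’s actual deployment; the point is that an elevated or borderline reading triggers a recheck or a secondary human screening rather than an automated action, and no identity data is ever attached to a reading.

```python
# Minimal sketch: triage thermal readings without identifying anyone.
# Thresholds are illustrative assumptions, not calibrated values.

from dataclasses import dataclass, field
from datetime import datetime

FEVER_THRESHOLD_C = 38.0   # assumed cutoff; real deployments must calibrate
TOLERANCE_C = 0.5          # thermal sensors commonly drift; widen the margin


@dataclass
class ThermalReading:
    sensor_id: str          # identifies the sensor, never the person
    temperature_c: float
    timestamp: datetime = field(default_factory=datetime.utcnow)


def triage(reading: ThermalReading) -> str:
    """Classify a reading; borderline cases go to humans, not automation."""
    if reading.temperature_c >= FEVER_THRESHOLD_C + TOLERANCE_C:
        return "secondary_screening"   # confirm with a medical-grade device
    if reading.temperature_c >= FEVER_THRESHOLD_C - TOLERANCE_C:
        return "recheck"               # borderline: re-read before escalating
    return "clear"


# Example: a borderline reading is rechecked rather than acted on.
print(triage(ThermalReading("lobby-sensor-03", 37.8)))  # -> "recheck"
```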
AI is most useful when combined with alert systems and human intelligence, and it has already been successful in identifying patterns at high-traffic intersections, improving gunshot detection technology, and increasing the efficiency of data collection for city planning and public use. However, the January 2020 false arrest of a Black man in Detroit shows the danger of relying solely on automated facial recognition, especially for residents with darker complexions.
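As a rough illustration of pairing AI with alert systems and human intelligence, the sketch below routes every automated detection through a human reviewer before any action is taken. The event format, confidence floor, and reviewer function are hypothetical stand-ins; the design point is that an AI match only produces an alert, never a direct action.

```python
# Minimal sketch: an AI detection produces an alert for human review;
# only a human confirmation leads to a dispatch. All names are hypothetical.

from typing import Callable


def handle_detection(event: dict, reviewer: Callable[[dict], bool]) -> str:
    """Route an automated detection through a human reviewer.

    `event` is a hypothetical detection record, e.g.
    {"type": "gunshot", "confidence": 0.91, "location": "5th & Main"}.
    """
    if event["confidence"] < 0.8:      # assumed confidence floor
        return "logged"                # low confidence: record it, don't alert
    if reviewer(event):                # a person confirms before any dispatch
        return "dispatched"
    return "dismissed"


# Example reviewer that approves only high-confidence gunshot events.
approve = lambda e: e["type"] == "gunshot" and e["confidence"] >= 0.9
print(handle_detection(
    {"type": "gunshot", "confidence": 0.91, "location": "5th & Main"},
    approve))  # -> "dispatched"
```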
Recently, U.S. lawmakers have begun pushing to ban the use of facial recognition altogether, and some local governments have already enacted city-wide bans.
In addition to the considerations outlined above, cities should ensure that the algorithms behind any AI deployment are ethical and unbiased. This means testing algorithms comprehensively across a population diverse in race, gender, and age; making them available for ethical review; verifying product accuracy in real time; and committing to continuous product improvement.
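One concrete way to approach such testing is disaggregated evaluation: computing accuracy separately for each demographic group rather than reporting a single overall number, since an overall score can hide large gaps. The sketch below uses fabricated records purely for illustration, and `accuracy_by_group` is a hypothetical helper, not an existing library function.

```python
# Minimal sketch: compute per-group accuracy and report the worst gap.
# The evaluation records here are fabricated for illustration only.

from collections import defaultdict


def accuracy_by_group(results):
    """results: iterable of (group_label, was_prediction_correct) pairs."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, ok in results:
        total[group] += 1
        correct[group] += int(ok)
    return {g: correct[g] / total[g] for g in total}


# Illustrative records: group A at 95% accuracy, group B at 80%.
results = ([("group_a", True)] * 95 + [("group_a", False)] * 5
           + [("group_b", True)] * 80 + [("group_b", False)] * 20)

scores = accuracy_by_group(results)
gap = max(scores.values()) - min(scores.values())
print(scores)                                 # {'group_a': 0.95, 'group_b': 0.8}
print(f"worst-case accuracy gap: {gap:.0%}")  # a 15-point gap warrants review
```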
For more information on the ethical implementation of optical sensors and video intelligence, check out this ACLU report and the Beeck Center’s guidebook for data sharing governance. Or contact us to discuss your community’s strategy.