Replicating Patterns of Racism: The Way Artificial Intelligence Amplifies Systemic Discrimination

Daniella Fergusson
May 31, 2020 · 10 min read

“Artificial intelligence” (AI) is an elaborate way of saying that someone programmed a computer to follow a set of procedures (an algorithm) and spit out a result. The language makes AI sound more sophisticated than it is.[1] To explain, planners use algorithms every day. We follow the Local Government Act, Community Charter, development procedure bylaws, and other bylaws and policies. When we write zoning bylaws and official community plans, arguably, we are writing algorithms that the development community follows. The difference between a planner and artificial intelligence is intelligence. And, by intelligence I mean empathy, judgement, lateral problem solving, design thinking, creativity, context, and other grey areas that our grey matter is capable of.
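To push the analogy, here is a hypothetical zoning check written as code. The numbers and permitted uses are invented, not drawn from any real bylaw; the point is only that an algorithm follows its procedure exactly, and nothing more.

```python
# A hypothetical zoning check written as an algorithm: a fixed procedure
# applied to inputs, with no judgement, context, or empathy involved.
def permitted(use, lot_area_m2, height_m):
    return (
        use in {"single detached dwelling", "secondary suite"}
        and lot_area_m2 >= 370
        and height_m <= 9.5
    )

print(permitted("single detached dwelling", 400, 8.0))  # True
print(permitted("corner store", 400, 8.0))              # False: no grey areas, no discretion
```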

With the rise of smart cities and enormous, unregulated databases of commercially available personal information,[2] it is important that planners understand the limits of AI and its role in discrimination. Surveillance of the public realm offers an illustrative example, specifically pertaining to data, visibility, and agency.

[Image: a man wearing white and a mask walks beneath an installation of surveillance cameras]

Data

AI essentially involves feeding computers huge datasets and training an algorithm to parse those datasets according to human-provided parameters, in pursuit of specific, optimised outcomes. For example, Google Books and Google Street View gather image-based datasets. Google uses AI to identify significant objects in those images, such as words, house numbers, street names, or crosswalks. When you or I complete an “I am not a bot” CAPTCHA, we verify that the AI has correctly recognized street numbers, traffic signals, and book text.[3]
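To make the “labelled data in, predictions out” pattern concrete, here is a minimal sketch. It is a toy illustration, not Google’s actual pipeline: a simple classifier is trained on a small public dataset of labelled digit images, the kind of labels a CAPTCHA asks us to confirm, and then scored on images it has never seen.

```python
# A toy illustration of supervised learning, not any production system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 small images of handwritten digits, already labelled

# The "human-provided parameters": which data to learn from, and what to optimise.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple classifier
model.fit(X_train, y_train)                # "training" = fitting patterns in the labelled data

# The model only knows what its training data showed it.
print("Accuracy on unseen digits:", model.score(X_test, y_test))
```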

AI “intelligence” is only as good as its source data and the assumptions guiding its parameters.[4] Predictive policing offers a good example of poor decision-making based on bad data. Predictive policing uses historical data on the time, location, and nature of crime to direct policing resources to places anticipated to be crime hotspots. But AI reproduces the patterns that already exist in the data used to train it. As a result, rather than predicting future crime, the AI predicts future policing, based on previously biased policing behaviour.[5]
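The feedback loop is easy to simulate. The sketch below is a deliberately simplified, hypothetical model, not any vendor’s software: two neighbourhoods have the same underlying offence rate, but one starts with more recorded incidents because it has been patrolled more heavily. If patrols follow past records, and offences only enter the data where patrols are present, the initial disparity never corrects itself.

```python
import random

random.seed(1)

# Hypothetical toy model, not any vendor's software.
true_offences = {"A": 10, "B": 10}   # identical underlying rates per period
recorded      = {"A": 60, "B": 20}   # historical records: A was patrolled more
patrols_total = 10

for period in range(20):
    total_records = sum(recorded.values())
    for hood in ("A", "B"):
        # "Predictive" step: patrols follow past records.
        patrols = patrols_total * recorded[hood] / total_records
        # Offences only enter the data if a patrol is present to record them.
        p_detect = min(1.0, patrols / patrols_total)
        recorded[hood] += sum(
            1 for _ in range(true_offences[hood]) if random.random() < p_detect
        )

print(recorded)
# Despite identical true rates, "A" keeps accumulating roughly three times
# as many records as "B", so the model keeps sending patrols back to "A".
```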

In 2015, Toronto journalist Desmond Cole documented his experience of having been stopped and interrogated by police in Canada more than 50 times.[6] His essay in Toronto Life, “The Skin I’m In”, explains how Black Canadians are disproportionately targeted by the controversial “carding” practice, which has existed since the 1950s.[7] Young Black men in Toronto are 17 times more likely than their white counterparts to be stopped by police, and are therefore at much higher risk of arrest and imprisonment. In Vancouver from 2008 to 2017, 25% of police-conducted street checks were of Indigenous people, who comprise just 2% of the population.[8] Through the carding practice, police build a database of people’s race, height, weight, eye colour, body markings, facial hair, mobile number, and family status. This is exactly the kind of raw data that could be fed into AI, resulting in vast amounts of personal information on specifically targeted populations.

Visibility

Surveillance is about visibility and control — who feels safe to be where, when, and with whom. AI is the tool used to make the invisible visible, through facial recognition, mobile phone tracking, body language analysis, and more. The question is: visible to whom? Toronto Police has been using facial recognition to support investigations since March 2018.[9] The system compares images captured on public or private cameras to a database of 1.5 million mugshots. The Canadian Civil Liberties Association criticized this use of facial recognition technology, stating it is the equivalent of “fingerprinting and DNA swabbing everyone at Yonge and Bloor during rush hour.”[10] Because the system is based on mugshots, it targets people who have already been criminalized. In Canada, racialized communities are disproportionately targeted for policing, as documented by Cole and Robyn Maynard.[11] Any AI system built on images or data gathered through methods that disproportionately target certain members of the community will reflect back and amplify that pre-existing discrimination.
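A rough sketch of how such a match works, with made-up numbers rather than any police system’s real code: each face image is reduced to a numeric “embedding”, and a probe image is compared against every embedding in the reference database. The function name, threshold, and database here are all invented for illustration. The crucial point is that a returned match can only ever be someone who is already in the database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mugshot database: each person is represented by a face
# "embedding", a vector produced by a separate face-recognition model.
database = {f"person_{i}": rng.normal(size=128) for i in range(1_000)}

def best_match(probe, database, threshold=0.5):
    """Return the database entry most similar to the probe embedding."""
    best_name, best_score = None, -1.0
    for name, vec in database.items():
        # Cosine similarity between the probe and a stored embedding.
        score = probe @ vec / (np.linalg.norm(probe) * np.linalg.norm(vec))
        if score > best_score:
            best_name, best_score = name, score
    # Below the threshold the honest answer is "no match" -- and any match
    # returned is, by construction, someone already in the mugshot database.
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# A probe image of someone who is NOT in the database still has a "closest" entry,
# which is how false matches happen when the threshold is set too low.
probe = rng.normal(size=128)
print(best_match(probe, database, threshold=0.0))
```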

In January 2020, it was discovered that Toronto police officers had been using a U.S. facial recognition tool called Clearview AI, whose system reportedly contains 3 billion images scraped from social media.[12] The melding of commercial and criminal datasets shows who is hyper-visible and who is invisible through the lens of AI: designers make certain people visible or invisible to facial recognition depending on who they expect to see in certain situations.[13] For example, Google’s image recognition system labelled African Americans as “gorillas” in 2015. In 2018, Amazon’s facial recognition technology falsely identified 28 members of the U.S. Congress as people who had been arrested for crimes.[14] This is a result of 1) the underrepresentation of racialized people in commercial databases, and 2) the overrepresentation of racialized people in mugshot databases, both rooted in false assumptions that racialized people are more criminal as a group and less useful or profitable for commercial applications.

Tech companies are becoming more aware that the photo databases used to train commercial AI lack racialized faces. For example, a Georgia Tech study found that object-detection models, used by self-driving cars and other applications, detect people with dark skin much less accurately than people with light skin, putting dark-skinned pedestrians at greater risk of being hit by vehicles.[15] This growing awareness has led to disturbing practices, such as Google’s deception in face-scanning dark-skinned homeless people.[16]

Agency

As planners, we should ask, “What problem are data collection, surveillance, and AI trying to solve?” Vendors state that sensors and software can make urban life easier and more seamless by optimising the deployment of public services. Joy Buolamwini, an AI researcher at the MIT Media Lab, responds by asking, “Who are we optimising for?” We have already established that existing systems, which are reproduced and amplified by AI, make Black and Indigenous people hyper-visible to security tools and invisible to consumer tools. So, who is the “we” and “our” when talking about smart, seamless urban lives?

More broadly, we need to ask whether people have consented to data collection and whether they have access to and control over the data collected about them. Google’s Sidewalk Labs, which was hired by Waterfront Toronto to develop a smart city at Quayside, states that it does not intend to use facial recognition in the public realm. But it does intend to surveil in other ways. Shannon Mattern, Nabeel Ahmed, and Bianca Wylie have written extensively about the project. Two points are most relevant for this discussion: 1) the lack of any opt-in/opt-out for surveillance, and 2) the development of unfathomably large personal information databases.

To explain, Sidewalk Labs and its subsidiaries/spin-offs intend to collect data on who goes where, when, and how, in order to optimise the delivery of transportation, power, utility, housing, entertainment, health, and security services. A few examples include CommonSpace, a Gehl-inspired public realm behaviour observation tool; Collab, a civic participation tool; Coord, a curb space management tool; Flow, a traffic modeling tool; and Replica, a people-movement modeling tool. As Mattern writes in Places Journal,[17] Replica uses de-identified mobile data from commercial databases to model how people move in cities. People have not consented to their data being used this way. And, as Thompson and Warzel noted in their Times Privacy Project research, the data cannot be truly anonymized.
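Thompson and Warzel’s point about anonymization is worth making concrete. The sketch below uses entirely invented data: even when names are replaced with random device IDs, knowing just two places a person regularly visits (say, home and work) is usually enough to single out their entire movement trace.

```python
# Toy illustration with invented data: "de-identified" location pings
# keyed by a random device ID instead of a name.
pings = [
    # (device_id, place, hour_of_day)
    ("d7f3", "Elm St block", 23), ("d7f3", "City Hall", 10), ("d7f3", "Clinic", 17),
    ("a912", "Elm St block", 23), ("a912", "Harbour Cafe", 10),
    ("c044", "Oak Ave block", 23), ("c044", "City Hall", 10),
]

def reidentify(home, work, pings):
    """Find device IDs seen at a known home location at night AND a known workplace by day."""
    at_home = {d for d, place, hour in pings if place == home and hour >= 22}
    at_work = {d for d, place, hour in pings if place == work and 9 <= hour <= 17}
    return at_home & at_work

# An observer who knows where someone lives and works can pick out their
# device -- and with it, every other place that device has been (the clinic).
print(reidentify("Elm St block", "City Hall", pings))   # {'d7f3'}
```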

Rather than offer an opt-in, Sidewalk Labs is proposing an urban environment on public land where people are surveilled and merely notified about the surveillance by signs. To opt out, one would have to avoid the area completely, or at least leave the smartphone at home. In this tracked and optimised context, how fair or just is it to exclude members of the community by design? How welcome is a Black man who has been carded over 50 times going to feel in this neighbourhood? How safe is a victim of domestic violence and stalking going to feel, knowing that her every movement is being tracked in a database stored and accessed who-knows-where? Who is disproportionately helped or harmed by the data collection? How can people possibly consent to it, and does withholding consent mean being effectively banned from participating in civic life?

Implications

In Discipline and Punish, Michel Foucault describes how discipline in schools, factories, and the military creates “docile bodies” ready for the modern economic system. He uses Jeremy Bentham’s panopticon as a model for effective self-discipline without the use of excessive force. The panopticon allows for the constant surveillance of a large population, without individuals knowing whether or not they are being watched at any specific moment. The structure is designed to make surveilled people “voluntarily” change their behaviours to align with the rules, even when no one is actively watching. Structurally and functionally, broad public surveillance works the same way. Desmond Cole notes that some respondents to a Toronto Police Services Board consultation said they avoid certain areas within their own neighbourhoods for fear of police encounters.[18] Surveillance changes behaviour.

As planners, we need to see how surveillance discriminates, and how AI can amplify that. Jane Jacobs writes about the importance of having “eyes on the street”. A community regulating itself by neighbours looking out for one another is not the same as a panopticon system of public surveillance designed and led by the police or by a private ad-tech company. In the former, neighbours have agency to discuss, create, and negotiate the rules and norms that are being enforced. In the latter, enforcement is being done to people, with little to no recourse when it happens in an arbitrary, excessive, or unjust way.

On the punish side of Discipline and Punish, Foucault shows how technology has mechanized incarceration and execution to create emotional distance for enforcers and executioners. For example, the medicalized feel of lethal injection seems somehow less barbaric than a beheading. Similarly, AI creates emotional distance and what Frank Pasquale calls “a patina of legitimacy” for discriminatory design. In this way, outsourcing human “eyes on the street” to a network of sensors and algorithms has a good chance of producing easily justified discriminatory practices around who is allowed to be in public space.

As planners, it behooves us to be aware of our own history of rational comprehensive planning being used to justify discriminatory practices, such as racially restrictive covenants, redlining, and nuisance/ticketing programs that target specific communities. AI is already being used as a form of virtual redlining, making decisions about people’s insurance, credit scores, risk of recidivism, job applications, rental applications, and school admissions. Before recommending projects involving AI, we should ask whether AI is even needed and, if it is, who audits and governs it.[19]

Note: A shorter version of this article was published in the Spring 2020 issue of Planning West.

Footnotes

[1] This isn’t intended to dismiss the complexity that makes AI possible. Kate Crawford and Vladan Joler in their 2018 piece, Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources at https://anatomyof.ai/, show the human labour and natural resource extraction complexity that goes into building an Amazon Echo running Alexa.

[2] “One Nation, Tracked: An investigation into the smartphone tracking industry from Times Opinion” by Stuart Thompson and Charlie Warzel provides an analysis of one commercial database of 12 million Americans’ movements between 2016 and 2017. The series, located at https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html, illustrates the obtrusive nature of these datasets and the highly personal scope of the data they collect.

[3] In essence, we’re training self-driving cars (for free) for a multi-billion-dollar company. See: Healy, Mark. “Captcha If You Can: Every time you prove you’re human to CAPTCHA, are you helping Google’s bots build a smarter self-driving car?” Ceros. https://www.ceros.com/originals/recaptcha-waymo-future-of-self-driving-cars/

[4] Aarian Marshall and Alex Davies reported in WIRED how a self-driving Uber killed a woman because it had not been programmed to recognize pedestrians outside of crosswalks. “Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk.” 5 November 2019. https://www.wired.com/story/ubers-self-driving-car-didnt-know-pedestrians-could-jaywalk/

[5] Lum, Kristian; Isaac, William (October 2016). “To predict and serve?” Significance. 13 (5): 14–19. doi:10.1111/j.1740-9713.2016.00960.x. The authors find that communities that are historically over-policed, whether as a result of discriminatory policing or community-level racism (i.e. people reporting “suspicious activity” in the area), remain over-policed in a predictive policing environment. In 2020, the Chicago Police Department shut down the predictive policing program it had started in 2012. Police targeted specific people the algorithm deemed likely to commit a future crime, but the assessed propensity for crime was based on flawed data, such as charges and arrests that ended in dismissal or acquittal. City of Chicago Office of Inspector General. “Advisory Concerning the Chicago Police Department’s Predictive Risk Models.” Published January 2020. https://igchicago.org/wp-content/uploads/2020/01/OIG-Advisory-Concerning-CPDs-Predictive-Risk-Models-.pdf

[6] Cole, Desmond. “The Skin I’m In.” Toronto Life. 21 April 2015. https://torontolife.com/city/life/skin-im-ive-interrogated-police-50-times-im-black/

[7] CBC Firsthand. “Here’s what you need to know about carding.” https://www.cbc.ca/firsthand/features/heres-what-you-need-to-know-about-carding

[8] CBC News. “Vancouver mayor to meet with anti-carding activist Desmond Cole.” 15 November 2018. https://www.cbc.ca/news/canada/british-columbia/vancouver-mayor-to-meet-with-anti-carding-activist-desmond-cole-1.4908113

[9] Allen, Kate and Wendy Gillis. “Toronto Police have been using Facial Recognition Technology for More than a Year.” Toronto Star. 28 May 2019. https://www.thestar.com/news/gta/2019/05/28/toronto-police-chief-releases-report-on-use-of-facial-recognition-technology.html

[10] Rankin, Jim and Wendy Gillis. “Toronto police should drop facial recognition technology or risk lawsuits, civil liberties association tells board.” Toronto Star. 30 May 2019. https://www.thestar.com/news/gta/2019/05/30/toronto-police-should-drop-facial-recognition-technology-or-risk-lawsuits-civil-liberties-association-tells-board.html

[11] Robyn Maynard’s 2017 book Policing Black Lives is a must-read on 400 years of state-sanctioned surveillance, criminalization, and punishment of Black people in Canada. This is not just an American problem.

[12] Allen, Kate. “Toronto police chief halts use of controversial facial recognition tool.” Toronto Star. 13 February 2020. https://www.thestar.com/news/gta/2020/02/13/toronto-police-used-clearview-ai-an-incredibly-controversial-facial-recognition-tool.html

[13] In 2017, a Facebook employee shared footage of an automatic soap dispenser not “seeing” his hand, but successfully dispensing soap for his lighter-skinned colleague. The optical sensor had not been set up to recognize dark skin, something that would have been noticed before production had the manufacturers had a more diverse design team. Fussell, Sidney. “Why Can’t This Soap Dispenser Identify Dark Skin?” Gizmodo. Published 17 August 2017. https://gizmodo.com/why-cant-this-soap-dispenser-identify-dark-skin-1797931773.

[14] Levin, Sam. “Amazon face recognition falsely matches 28 lawmakers with mugshots, ACLU says.” The Guardian. 26 July 2018. https://www.theguardian.com/technology/2018/jul/26/amazon-facial-rekognition-congress-mugshots-aclu

[15] Samuel, Sigal. “A new study finds a potential risk with self-driving cars: failure to detect dark-skinned pedestrians.” Vox. 6 March 2019. https://www.vox.com/future-perfect/2019/3/5/18251924/self-driving-car-racial-bias-study-autonomous-vehicle-dark-skin

[16] Carrie Wong, Julia. “Google reportedly targeted people with ‘dark skin’ to improve facial recognition.” The Guardian. 3 October 2019. https://www.theguardian.com/technology/2019/oct/03/google-data-harvesting-facial-recognition-people-of-color

[17] Mattern, Shannon. “Post-It Note City.” Places Journal. https://placesjournal.org/article/post-it-note-city/

[18] Cole, Desmond. “The Skin I’m In.” Toronto Life. 21 April 2015. https://torontolife.com/city/life/skin-im-ive-interrogated-police-50-times-im-black/

[19] Pasquale, Frank. “The Second Wave of Algorithmic Accountability” Law and Political Economy Blog. 25 November 2019. https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/


Daniella Fergusson is an urban planner unpacking how we got here and where we’re going next.