Advanced surveillance technology is already intensifying racial discrimination in policing in the United States, and there’s a good chance it’s going to get worse.
Police departments across the country target communities of color with racially biased policing strategies, a fact well documented in extensive Department of Justice reviews that have unearthed a tremendous number of civil rights violations committed by officers against minority residents.
What’s less understood is how surveillance technologies employed by the police intensify the racially discriminatory strategies that already exist.
Police use of captured cellphone data, social media monitoring tools, and facial recognition technology has already been called racially biased, and these technologies aren’t likely to disappear from departments.
One of the starkest examples of police using technology to compound racially biased policing comes out of Baltimore.
Perhaps that’s no surprise.
The Department of Justice published an eye-popping report on the Baltimore Police Department in August, finding that seemingly everything the department did was tinged with racist policing. This includes the department’s use of cell site simulators.
Cell site simulators, which are commonly known as stingrays, act as cellphone towers wherever they’re deployed, so cellphones connect to them. Police can then capture data from cellphones and track their whereabouts.
Civil rights groups filed a formal complaint with the Federal Communications Commission in August, alleging that Baltimore police blanket predominantly black parts of the city with stingrays more than other areas.
“[Baltimore Police Department] operates cellular transceivers without proper authorization, causes willful interference with the cellular network, disrupts emergency calling services, and inhibits the availability of the cellular network on a racially discriminatory basis,” the groups Center for Media Justice, Color of Change and the Open Technology Institute at New America wrote in the complaint.
Stingrays snap up cellphone data, but they also disrupt cell service. When they’re heavily deployed in one area, they can cause service outages far more often than normal.
Police in Baltimore and other cities around the United States also use social media surveillance programs to monitor activists and protests, and the American Civil Liberties Union is concerned these programs are marketed to police departments in a way that hints at racial discrimination.
One such program the American Civil Liberties Union found troubling is Geofeedia. (Mashable also uses Geofeedia to unearth potential stories on social networks.) Geofeedia has been contacted for comment.
“Our records show that Geofeedia’s marketing materials, for instance, refer to unions and activist groups as ‘overt threats’ and suggest the product can be used in ways that target activists of color,” Nicole Ozer, the technology and civil liberties policy director at American Civil Liberties Union of Northern California, wrote on its website.
One email from the company invited an officer to learn how Baltimore police used Geofeedia to “stay one step ahead of the rioters” after protests against police brutality erupted in predominantly black neighborhoods there in late April 2015.
Yet perhaps the most disturbing form of technologically enhanced racial bias in policing stems from a technology that is spreading across the country. Facial recognition, according to experts, is likely to be a staple of policing in the near future, and many departments already use it.
This is a potential civil rights issue for just about everyone, but it’s a potentially massive issue for black people living in America.
Black people are already disproportionately targeted by police surveillance, meaning their identities are more likely to be catalogued in police databases than other groups of people. Once their identities are on file, they can be called up in database searches for crime suspects.
Facial recognition technology can intensify this problem, because it has proven less accurate at identifying black people than white people or members of other races, according to Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology who spoke with Mashable.
This can result in misidentification. And once facial recognition has misidentified a black person, it has a disproportionately large pool of innocent black people from which it can pull a “suspect.”
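The compounding effect described above can be sketched with simple arithmetic. All figures below are invented for illustration; none come from the article or the Georgetown research.

```python
# Hypothetical illustration: if one group is over-represented in a face
# database AND the matcher's false-match rate is higher for that group,
# the two biases multiply rather than merely add.

def expected_false_matches(enrolled: int, false_match_rate: float) -> float:
    """Expected number of innocent people returned as candidate matches
    in a single database search against `enrolled` faces."""
    return enrolled * false_match_rate

# Invented figures: group A is enrolled at twice the rate of group B,
# and the matcher's false-match rate for group A is 1.5x higher.
group_a = expected_false_matches(enrolled=200_000, false_match_rate=0.0003)
group_b = expected_false_matches(enrolled=100_000, false_match_rate=0.0002)

print(group_a)            # roughly 60 expected false candidates per search
print(group_b)            # roughly 20
print(group_a / group_b)  # roughly 3x -- the two biases compound
```

With these made-up numbers, a 2x enrollment gap and a 1.5x accuracy gap combine into a 3x gap in how often innocent members of the over-surveilled group surface as "suspects."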
All these technologies, already in use across the United States, often operate with minimal public oversight. Many departments, in fact, use them without informing the public.