Smartphones and the many technologies that keep making them smarter have had a tremendous impact on the way we communicate, organize — and mobilize. If you’ve ever led, attended, or even considered participating in a protest, you may have found the information you needed thanks to smart devices like your phone or watch, but it's also possible you've been advised to leave them at home to avoid detection.

Smartphones have helped expand access to information and educational resources through online learning tools, particularly where in-person learning is not possible or easily accessible. Mobile phones and the internet have become an important part of enjoying certain rights and freedoms, such as freedom of speech, freedom of expression, and the right to protest.

However, technologies such as facial recognition and geolocation, which allow you to operate your cell phone and some of its applications, also exist beyond your mobile devices: they power systems like traffic and security cameras and are used by public and private entities looking for data. This was demonstrated in Hong Kong, where authorities were reportedly using data collected from security cameras and social media to identify people who had taken part in protests.

Given the increased use and capabilities of artificial intelligence (AI), there is growing demand for research into the impact of these technologies on civic space and civil rights.

Dr. Mona Sloane, a senior research scientist at New York University's Center for Responsible AI, is a sociologist studying the intersection of design, technology, and society, specifically in the context of AI design and policy. Sloane explains that most AI systems are created to make everyday decision-making processes easier and quicker, but that the data behind them is flawed.

"Entities that develop and deploy AI often have a vested interest in not disclosing the assumptions that underpin a model, as well as the data it was built on, and the code that embodies it," Sloane told Global Citizen. "AI systems typically need vast amounts of data to work somewhat well. The extractive data collection processes can invade privacy. Data is always historical and will always represent historical inequities and inequalities. Using it as a basis for making decisions about what should happen in the future therefore solidifies these inequities."

Researchers like Sloane are focused on the ways these powerful technologies operate in the real world, where they can make it virtually impossible to break down systemic barriers.

Facial Recognition in Civic Space

In January 2021, Amnesty International launched its global Ban the Scan campaign, which aims to do away with "the use of facial recognition systems, a form of mass surveillance that amplifies racist policing and threatens the right to protest."

The campaign highlighted that algorithmic technologies, like facial recognition scanners, "are a form of mass surveillance that violate the right to privacy and threaten the rights to freedom of peaceful assembly and expression."

In 2019, the world watched as protesters in Hong Kong covered their faces and toppled lampposts fitted with face scanners to evade detection through facial recognition, or tried to find ways to use AI to their advantage.

"One protester in particular, this guy Colin Cheung that we found, created a facial recognition tool to try to identify police. And he didn't actually release the product, but he says because of that police actually targeted him," New York Times reporter Paul Mozur reported. "As they grabbed him, they needed access to his phone and so they tried to actually force his face in front of his phone to use the phone's facial recognition function to get it to unlock ... [He] was able to quickly disable that as he was being tackled, but it shows you how ... our biometric data has become so key to technology, it's becoming sort of weaponized in all these different sort of forms."

Luke Stark, an assistant professor in the Faculty of Information & Media Studies at Western University in London, Ontario, researches the ethical, historical, and social impacts of computational technologies like AI and machine learning. He uses the term "digital surveillance technologies" to cover data collection across media, and he questions how that data is used by governments, and what such tracking and suppression mean in parts of the world with differing legal regimes.

He argues that while this data collection is excessive, the sheer volume is also difficult to sift through, a point he says the Edward Snowden leaks demonstrated.

"The powers that be, spies and intelligence agencies, and security agencies have a lot of anxiety about going through all this data, having too much data, understanding and interpreting this data," Stark told Global Citizen.

Unfortunately, he adds, the data review doesn't need to be accurate to do harm, because it operates as "brute force" just the way it is: people are arrested or suppressed due to errors in the system, or caught inadvertently for something other than what the system was trying to catch them doing.

"I'm thinking especially of the increasing number of cases in the United States where Black men are arrested based on the purported identification of a facial recognition system," he said. "And it turns out that the facial recognition system has picked the wrong guy basically, including one case where the police department used a composite drawing from a sketch artist in the system, and then found somebody who looked like the drawing and arrested them."

Stark emphasizes that algorithmic systems have both technical and social problems, which, when put together, "have huge scope for both abuse in the terms of the law as it exists, but also abuse from a more democratic human rights standpoint."

Technology in the World of Protest

Stark warns that these technologies have a chilling effect on protests, given how they can be used.

"The more integrated they are, the more dangerous they can be. Which isn't to say they're not dangerous when they're unintegrated, [because they still] have kind of the ability to track people and identify them by facial recognition systems and then have all sorts of data about, for instance, their movements. If you're tracking things like public transit use via digital smart cards, geolocation data via cell phones, all these different kinds of digital traces, a state that is willing and able to put all that data together is going to be able to really, really crack down on dissent extraordinarily effectively," he explained. "Hong Kong, I think, is a good example of how by really cracking down on a wide array of leaders of protests, it pushes everybody else to really lay low."

He adds that in North America, too, activists advise against bringing a phone to protests so that you can't be tracked, though that also means losing a way to document the events.

The Power That Comes With Data Collection

AI affects us not only through the data it is able to collect, but also by shaping what we perceive to be true.

Algorithms can be used to determine a person's preferences, including political leanings, which can then be used to shape the political messaging that person sees on their social media feeds. One notable breach in this regard involved the consulting firm Cambridge Analytica, which harvested private data from the Facebook profiles of more than 50 million users while working for former US President Donald Trump's 2016 campaign, according to the New York Times.

Not only does it become easier to manipulate people the more you know about them, but as people realize they are being watched, they are likely to change their behaviors, according to Jamie Susskind, author of Future Politics: Living Together in a World Transformed by Tech.

"The digital is political. Instead of looking at these technologies as consumers or capitalists we need to look at them as citizens. Those who control and own the most powerful digital systems in the future will increasingly have a great deal of control over the rest of us," Susskind said in an interview with Forbes.

Algorithms also make it possible for different people to have different perceptions of reality, by showing each person media that aligns with their politics, as the sketch below illustrates.
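This is a minimal, purely illustrative sketch, not any platform's actual ranking algorithm: if posts are scored by how closely they align with a user's inferred politics, two users drawing from the same pool of posts end up seeing different realities.

```python
# Illustrative sketch (hypothetical scoring rule, not a real platform's
# algorithm): rank posts by alignment with a user's inferred politics.

posts = [
    {"title": "Policy X is working", "leaning": +1.0},
    {"title": "Policy X is failing", "leaning": -1.0},
    {"title": "Weather this weekend", "leaning": 0.0},
]

def feed(user_leaning: float, top_k: int = 2) -> list[str]:
    # Higher product = closer alignment with the user's inferred politics,
    # so sort by negated alignment to put the most aligned posts first.
    ranked = sorted(posts, key=lambda p: -(p["leaning"] * user_leaning))
    return [p["title"] for p in ranked[:top_k]]

print(feed(+1.0))  # ['Policy X is working', 'Weather this weekend']
print(feed(-1.0))  # ['Policy X is failing', 'Weather this weekend']
# Same pool of posts, two users, two divergent pictures of reality.
```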

Still, mobile technology has no doubt helped open up civic space in some circumstances, even as it creates new challenges, particularly around the tracking and monitoring of activists and protesters.

The potential for these digital surveillance technologies to suppress dissent or profile certain groups is why academics like Sloane speak out about AI.

"There is no silver bullet to make AI more fair, we need to come at this problem from all angles. Scholars need to work in a more interdisciplinary way. Social scientists can help engineers understand how social structures are deeply entangled with technologies," she said. "Affected communities and the public need to be enrolled (and compensated for) in the AI design process. We need guardrails that support and do not hinder innovation."

Stark agrees, adding that there is a lot of work to be done on how AI technologies interact with real people in the real world, including asking: "How do you know what assumptions we are making? What practices are we using to make inferences about people? What kind of inference is at play? What stories are we telling when we make those inferences?"

"These have been concerns for social scientists for a long time," he said. "And I think, as a set of technologies, [AI] really brings that problem to the fore."


This article is part of a series connected to defending advocacy and civic space, made possible thanks to funding from the Ford Foundation.

How Artificial Intelligence Is Affecting Human Rights and Freedoms

By Gugulethu Mhlungu