Soccer Fans, You’re Being Watched

Stadiums around the world, including at the 2022 World Cup in Qatar, are subjecting spectators to invasive biometric surveillance tech.
Photograph: Simon Stacpoole/Getty Images — a crowd of fans inside a soccer stadium shields their eyes from the sun, looking toward the field.

This fall, more than 15,000 cameras will monitor soccer fans across eight stadiums and on the streets of Doha during the 2022 World Cup, an event expected to attract more than 1 million football fans from around the globe.

“What you see here is the future of stadium operations,” the organizers’ chief technology officer, Niyas Abdulrahiman, proudly told AFP in August. “A new standard, a new trend in venue operations, this is our contribution from Qatar to the world of sport.”

Qatar’s World Cup organizers are not alone in deploying biometric technology to monitor soccer fan activity. In recent years, soccer clubs and stadiums across Europe have been introducing these security and surveillance technologies. In Denmark, Brøndby Stadium has been using facial recognition for ticketing verification since 2019. In the Netherlands, NEC Nijmegen has used biometric technology to grant access to Goffert Stadium. France’s FC Metz briefly experimented with a facial recognition device to identify fans banned from Saint-Symphorien Stadium. And the UK’s Manchester City reportedly hired Texas-based firm Blink Identity in 2019 to deploy facial recognition systems at Etihad Stadium.

In Spain, Atlético Osasuna uses facial recognition to monitor and control access to El Sadar Stadium, while Valencia CF signed a deal in June 2021 with biometrics company FacePhi to design and deploy facial-recognition technology at Mestalla Stadium in the upcoming season. The sports club then became a global ambassador for the company’s technology.

FacePhi’s biometric onboarding technology was already used for a pilot project to enroll Valencia CF fans in an automated access control system that allowed them to get into the stadium using a QR code via the football club’s mobile app. (A FacePhi spokesperson declined to provide details about the project but said the company is “not yet in the implementation phase with Valencia CF.”)

So how accurate are these systems? Over the years, there have been cases where things have gone wrong. At the 2017 Champions League final in Cardiff, UK, facial scanning technology mistakenly identified more than 2,000 people as possible criminals. The system was scrapped following a court decision, only to be redeployed earlier this year.

In 2019, Dutch soccer club Den Bosch, which uses smart cameras at its turnstiles, misidentified and banned a 20-year-old fan, falsely claiming that he violently confronted supporters and entered restricted areas. “In this case of mistaken identity—a serious risk of facial recognition technologies—an innocent person was wrongfully banned from his team’s stadium and even issued with a fine,” explains Ella Jakubowska, a senior policy adviser at the civil rights nonprofit European Digital Rights (EDRi) who highlighted the case in a 2021 report. “There’s very little credible evidence that even ‘traditional’ CCTV systems reduce crime; rather, they create an appearance of safety without usually having tangible benefits.”

Slowly but steadily, ubiquitous biometric technology systems have come to represent a new normal for stadium infrastructure in which “health securitization” is incorporated into systems for public safety and marketing. “These elements represent three interlinked use cases for stadium surveillance technologies, which are used interchangeably and sometimes simultaneously,” explains Brett Hutchins, a media professor at Australia’s Monash University and coauthor of a research paper on sports stadiums and the normalization of biometric monitoring.

“Public safety is a longstanding justification for the spread of biometric surveillance systems, while Covid-19 introduced a health dimension through body-temperature monitoring,” Hutchins says. “Marketing speaks to a seamless consumer experience for attendees at high-profile and high-cost events and encompasses everything from ease of movement in and out of the stadium through to minimizing queues for toilets and food and drinks.”

Is the deployment of such systems inevitable? “The problem here is the idea that the rollout of such technologies and infrastructures are unavoidable and an increasingly ‘natural’ part of the stadium experience,” says Hutchins. He stresses the importance of “clear and visible notifications for spectators that such technologies are in use.” Most importantly, he advocates for the introduction of “strong legislative and regulatory safeguards governing the introduction and use of these systems and the control and use of data.”

Indeed, European lawmakers have been attempting to regulate biometric mass surveillance. In April 2021, the European Commission submitted a proposal for an EU regulatory framework on artificial intelligence. Currently, the European Parliament is forming its opinion on the proposal, while the European Council is due to discuss the file in early December.

“The European Commission’s draft AI Act recognized that biometric identification is an inherently risky technology but bizarrely put forward a prohibition in Article 5 that is so weak, if anything it amounts to more of a blueprint for how to conduct biometric mass surveillance than a genuine ban,” explains EDRi’s Jakubowska.

Although there hasn’t been a final vote yet, members of the European Parliament have supported a full ban on remote biometric identification in publicly accessible spaces, by both public and private actors, and are likely to adopt that final position. However, such a ban would not cover emotion-recognition uses of biometric systems or biometric categorization (e.g., profiling people based on their age, gender, or ethnicity). “We think that those urgently need to be banned in the AI Act. Shockingly, in the vast majority of cases, the Commission’s text did not even make those uses high-risk,” says Jakubowska.

“There should be no exceptions to the ban, as even a supposedly narrow exception would mean that mass facial recognition infrastructure would be rolled out and primed to be switched on whenever it is deemed necessary,” she adds. By definition, these systems scan the faces or bodies of every person who passes by, so it is not technically possible to limit them to, for example, suspects or perpetrators of serious crime.

In the US, the Biden administration has proposed a blueprint for an AI Bill of Rights, which commentators consider toothless, as it does not contain clear prohibitions on AI deployments that have been most controversial, like the use of facial recognition for mass surveillance.

As Qatar prepares to roll out the red carpet, fresh reports suggest everyone traveling to the country during the World Cup will be asked to download two apps that, security experts warn, effectively hand over access to all the information on a visitor’s phone. The experts say this highlights the urgent need for privacy regulation at global sporting events.

“Without regulation, there is a tendency to hoover up all available data and hold on to it indefinitely—this creates ‘honey pots’ for hackers and also contributes to function creep: the temptation to find other uses for the data,” says Hutchins.

“Law enforcement agencies should pursue the many other tools and techniques at their disposal and that are compliant with the rule of law and human rights, rather than resorting to the use of technologies that have been widely condemned by civil society, human rights lawyers, and even UN human rights authorities,” says Jakubowska.

“Once these tools are out there, governments will argue that they should be used widely,” she summarizes. “It’s a gateway to mass surveillance.”