How Facial Recognition is Reshaping Suspicion

[Image: Digital illustration of facial recognition technology scanning a crowd]

Live facial recognition (LFR) technology is rapidly becoming a staple in modern policing, scanning millions of faces in public spaces. While authorities often claim that "human-in-the-loop" safeguards prevent misuse, a growing body of research suggests a more complex reality: the technology itself is subtly reconfiguring the very nature of suspicion.

The Shift to Algorithmic Suspicion

Traditionally, police suspicion was based on officer discretion and observation. An officer might stop someone based on their behavior or specific intelligence. LFR changes this dynamic fundamentally. Instead of initiating an interaction based on intuition or evidence, officers become intermediaries for a computer's decision.

Research indicates that when an algorithm flags an individual, it creates a "presumption to intervene." Even if officers are theoretically free to ignore a match, the psychological pressure to trust the machine—known as automation bias—is powerful. The technology doesn't just assist; it "primes" the officer to view the flagged individual as suspicious before any human assessment takes place.

Bureaucratic Suspicion and Watchlists

A critical concern lies in how "watchlists" are constructed. Unlike targeted investigations, where a specific suspect is sought, LFR watchlists often rely on broad, standardized criteria. This creates a form of "bureaucratic suspicion" in which individuals are targeted not because of their immediate behavior, but because they fit a data profile.

"The technology performs a framing and priming role in how suspicion is generated, potentially turning human-led decisions into de facto automated ones."

These lists can include a wide range of individuals, from serious offenders to those merely "of interest" or even vulnerable people. The lack of transparency in how these lists are compiled means that the "police gaze" is structurally directed towards certain groups, often reinforcing existing biases.
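To make the point concrete, here is a minimal, purely illustrative sketch in Python. The field names, categories, and filter are all hypothetical, not drawn from any real system; the aim is only to show how inclusion on a watchlist can be decided by broad, standardized categories rather than by evidence about any specific person.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A hypothetical entry in a police image database."""
    person_id: str
    category: str        # e.g. "convicted", "of_interest", "vulnerable"
    convicted: bool
    image_path: str

def build_watchlist(database: list[Record]) -> list[Record]:
    """Bureaucratic suspicion as a filter: inclusion is decided by broad,
    standardized categories, not by evidence about anyone's behavior."""
    included_categories = {"convicted", "of_interest", "vulnerable"}  # assumed criteria
    return [record for record in database if record.category in included_categories]

# Note: nothing in this filter asks why a person is "of interest", whether
# the underlying image was lawfully retained, or whether the person is
# suspected of anything at all today.
```

However simplified, the sketch captures the structural worry: the "police gaze" is set by whoever writes the inclusion criteria, long before any officer or camera is involved.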

The Erosion of "Reasonable Suspicion"

In many legal systems, police need "reasonable suspicion" to stop and detain someone. However, the mere existence of an algorithmic match is increasingly being treated as sufficient grounds for intervention. This is problematic when the underlying data—such as custody images of unconvicted individuals—may be retained unlawfully or without clear justification.
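The reduction at stake can be shown in a few lines. The following sketch is deliberately stark and entirely illustrative; the threshold value is invented, and real deployments vary and rarely publish their settings.

```python
def should_intervene(similarity_score: float, threshold: float = 0.6) -> bool:
    """'Reasonable suspicion' collapses into a single numeric comparison.
    The threshold here is a made-up illustration, not a real system's value."""
    return similarity_score >= threshold

# An officer receiving alert=True is formally free to disregard it, but the
# alert itself has already framed the flagged person as a suspect.
alert = should_intervene(similarity_score=0.63)
```

If the watchlist behind that score is itself built on questionable data, the comparison launders a contested input into an apparently objective trigger for a stop.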

As LFR technology continues to spread, it is crucial to scrutinize not just its technical accuracy, but its socio-technical impact. We must ask whether we are comfortable with a system where suspicion is no longer a human judgment, but a database query.