Vol. II No. 1: Intelligence-Gathering Tech: From National Defense to Civilian Police Departments

Kyra Sadovi, Gabriela Castellanos, Charles Lally, Genevieve Severini

I. Introduction & Background

The September 11, 2001 attacks were a turning point in the United States’ national and local approach to policing. In the years since 2001, American police have adopted sophisticated and, arguably, intrusive intelligence-gathering practices, and reoriented their policing strategies from the top down to incorporate information gleaned from these practices. This reorientation began at the national level, with reforms in the Central Intelligence Agency (CIA) and Federal Bureau of Investigation (FBI). In the aftermath of 9/11, the National Commission on Terrorist Attacks Upon the United States, more commonly known as the 9/11 Commission, noted that the Clinton and Bush administrations had attempted to tackle the growing threat of terrorist attacks against America with a legacy set of “government institutions and capabilities” (351).[i] Both presidential administrations had relied primarily on the Central Intelligence Agency’s Counterterrorist Center for information (gathered almost exclusively through the use of human intelligence) and the Directorate of Operations for operational capabilities. In the wake of 9/11, policymakers in the Bush administration had to grapple with the fact that post-Cold War human intelligence efforts had “consumed considerable time … and achieved little success” (350).

In response to the American intelligence community’s failure to prevent the attacks, Thomas Kean’s 9/11 Commission recommended a startlingly vast series of reforms. The Commission concluded that domestic intelligence efforts had stagnated after 1991, primarily as a result of a “lack of political will” in Congress to approve new intelligence programs (76).i With the question of political will resoundingly settled, Kean and the Commission sensed an opportunity to significantly expand America’s domestic intelligence apparatus. Chief among their proposed reforms was the establishment of an operationally capable, national-level intelligence fusion center.

Fusion centers are intelligence organizations that do not gather information themselves, but rather receive, analyze, and distribute information for use by other agencies. Essentially, fusion centers serve as “focal points” to facilitate coordination between agencies with pre-existing intelligence capabilities (4).[ii] Understanding the history of intelligence fusion centers, and their current role in civilian policing, is essential to any discussion of the modern American police department’s use of information technology. The existence and role of fusion centers in civilian policing were codified in the 2003 National Criminal Intelligence Sharing Plan (NCISP), under the direction of Attorney General John Ashcroft.[iii] The institutional implications of such informational bodies become clearer when ‘intelligence capability’ is defined in the context of civilian policing: for local police departments, an intelligence capability is the ability to leverage “information and analysis to direct policing efforts” (9).

In practice, every police department in the country has an intelligence capability; every agency with the ability to designate a “single point of contact … to gather tips” can contribute to fusion centers (4).ii Furthermore, the very existence of fusion centers incentivizes the expansion of intelligence capabilities. The NCISP exists explicitly to “improve, enhance and expand” the capabilities of fusion centers, in the hope that they can serve as force-multipliers for local intelligence efforts (6).ii Essentially, local agencies have come to view fusion centers as a path to “sustained federal partnership” (1).[iv]

Before fusion centers were applied to civilian law enforcement, however, they were implemented within America’s counterterrorism community. As part of its response to 9/11, independent of the Commission, the CIA created the Terrorist Threat Integration Center (TTIC) in 2003. The TTIC was envisioned as a space that could be “staffed with the representatives of many agencies,” to facilitate interagency communication and cooperation (401).i The most significant recommendation put forward by the 9/11 Commission was the simplification of “lines of operational authority” (403). Kean’s commission noted that the military, intelligence community, and the judiciary all maintained independent control over their armed forces, which presented a substantial roadblock for the fusion center concept. Kean proposed that operational distinctions should be dissolved, and that all domestic and foreign United States security forces should engage in “joint operational planning” (403).

Activist groups at the time protested that this would open the door to a significant erosion of civil liberties for citizens of the United States. The American Civil Liberties Union conceded that an intelligence apparatus “built to fight the Cold War” would be inadequate in the 21st Century, but they also noted that the Commission’s report concentrated a previously unimaginable degree of power in the White House.[v] Through the CIA, the executive branch would be able to dictate not only what information agencies like the FBI “should collect, but also what actions they should take based on that intelligence.”v The ACLU envisioned a future in which domestic security forces would be “subservient to the mindset of foreign intelligence agencies.”v The ACLU made a number of recommendations, including making the newly created office of the National Intelligence Director subject to “Senate confirmation and Congressional oversight,” independent of the executive branch, and granting the FBI autonomy from the wider intelligence community.v None of their recommendations were adopted. On the contrary, the NID became a Cabinet-level position, and the FBI under Director Robert Mueller aggressively restructured to adopt a “dual role as both a law enforcement and intelligence agency”.[vi]

The newly founded Department of Homeland Security and the Defense Intelligence Agency were quick to establish their own fusion centers. The most troubling development, from a civil rights perspective, came in the form of the FBI’s unswerving commitment to intelligence gathering at the local level. Following 9/11, the FBI was designated as the nation’s primary counterterrorism agency. The FBI under Mueller identified improvements to “intergovernmental law enforcement coordination” as one of its core objectives moving forward, and saw the fusion center model as the most viable option (47).[vii] The 9/11 Commission Act of 2007 gave the Bureau the opportunity to establish a presence within Homeland Security’s so-called State, Local, and Regional Fusion Center Initiative, and to siphon intelligence into its national network. In the coming years, the FBI would establish itself as one of several “federal partners” working in local fusion centers (6).ii The FBI would come to exert a great deal of influence at these centers through the Criminal Justice Information Service, which establishes guidelines for information security policies.

II. Intelligence-Led Policing

Concurrently with these developments, and to a certain degree as a result of them, police departments across the United States began adopting information technology-based doctrinal changes. Chief among these changes was the introduction of “intelligence-led policing” (ILP). ILP is a policing philosophy that seeks to leverage information available to police departments to prevent crime and streamline planning. The practice was developed in the United Kingdom primarily as a method to combat “active and recidivist offenders” (77).vii ILP prioritizes the “gathering and evaluation of data” from all sources; the analysis of that data into actionable information; and the use of the resulting information to guide operational and policy decisions (7).[viii] ILP is strictly hierarchical; it conflicts with traditional policing methods in that it disregards the value of police officers as the primary agents of “problem identification and resolution” (74).vii Agencies that adopted ILP, such as the Chicago and Los Angeles Police Departments, subscribed to the notion that strategy should evolve from an “objective analysis of the criminal environment” (75). What this means in practice is that, under ILP, police departments will aggressively and preemptively combat crime even when such policies “differ from the needs” of the communities that they are situated in (75). ILP applies the same philosophy to terrorism; street-level police officers should be made aware of “terrorist indicators” so that they can participate in preventing an attack (12).[ix] Fusion centers are exceptionally attractive to police departments implementing ILP because of the burden posed by evaluating the “relevance, reliability, and accuracy of information” (35).viii

The literature on fusion centers and that on ILP both conflate terrorism with low-level street crime. This is especially troubling given that while the FBI defines terrorism to be explicitly ideological, the Bureau itself has a history of construing street crime as ideological as a pretext for deploying tools inappropriate for civilian policing. The New York Police Department, an early adherent to ILP, understands “counterterrorism policing” to be “the same as crime policing” (68).vii Informational packets from the Department of Justice concerning fusion centers routinely equate terrorism with crime. For example, the DHS uses the same channels to communicate “indicators and warnings of terrorism and violent crime” to their local partners (1).[x] The DoJ extols the virtues of fusion centers as both a force-multiplier for “criminal intelligence activities” and an early warning system for “impending plots that could impact the homeland” (2).iv And the DHS’s Fusion Center Guidelines describes information about common crimes as “threat information” (8).[xi] This conflation is troubling because terrorism is fundamentally different from street crime; acts of terror are attacks “inspired by, or associated with” foreign organizations or nation-states, carried out “to further ideological goals,” as defined by the FBI itself.[xii] It is also troubling given the FBI’s history of “explicit[ly] political” conduct during the Cold War (4).[xiii] During the Cold War, the FBI relied on a smokescreen of ideological partisanship to grow an enormous civilian-intelligence apparatus “in the absence of any system of congressional legislative authorization of oversight” (4). This history should give pause to any observer concerned about the Bureau growing its intelligence apparatus to surveil civilians under the manufactured pretext of ideological crime.

III. Palantir

The preceding section has focused on the organizational structure of intelligence fusion centers and their history in the United States. To gain a greater understanding of these institutions and their role in civilian policing, it is necessary to consider the technology that enables them as well as their history outside of the country. No single individual has had a greater influence on the development of intelligence fusion centers than Peter Thiel, an American venture capitalist and founder of Palantir Technologies. Thiel and his company rose to prominence in the wake of the American military’s failure to secure decisive victories in Iraq and Afghanistan following the 2001 and 2003 invasions. Despite a dubious record overseas and a failure to secure clients in the corporate world, Palantir’s suite of mass surveillance software has become a mainstay of domestic intelligence fusion centers.

The Bush Administration promised the American public that the wars in Iraq and Afghanistan would be “clean.” The American military insisted that “Saddam Hussein and his top henchmen” could be eliminated in the opening hours of a war with Iraq, ending the conflict before it truly started.[xiv] In the years since the end of the Cold War, American war planners had assumed that an overwhelming opening salvo would leave a nation like Iraq too “demoralized and disoriented” to offer resistance.[xv] In his 2002 State of the Union address, President Bush assured the American public that the “precision weapons” being deployed in Afghanistan and trained against Iraq would “spare innocent lives.”[xvi] As thousands of those innocent lives were extinguished in the “shock and awe” strikes against Baghdad, American war planners began to reckon with the fact that they had almost none of the precision capabilities they had previously claimed.xv

As civilian deaths in Iraq mounted into the tens of thousands and the ‘Good War’ in Afghanistan ground to a halt, Peter Thiel and Palantir seemed to offer the American government exactly what it needed. Thiel and his co-founders created Palantir in 2003, and secured start-up funding from In-Q-Tel, the “venture capital branch” of the Central Intelligence Agency, in 2004.[xvii] Palantir is a big data firm; their enterprise software helps corporations build relational databases out of “unstructured data like emails, documents, images, and videos.”[xviii] The ability to glean meaning from seemingly unconnected pieces of information was extremely attractive to a military that had lost the ability to distinguish between enemy fighters and the unenthusiastic local population, so the CIA became Palantir’s only early investor.

Palantir was allowed to apply its big data approach to the insurgencies in Iraq and Afghanistan. Thiel and his engineers created tools that let intelligence workers comb through otherwise unrelated, seemingly meaningless data and weave it into detailed profiles of insurgent organizations, enemy fighters, objects and locations of interest, and so on. From “financial documents, airline reservations, cellphone records, [and] social media postings,” Palantir was able to track the locations of insurgent leaders and target them for assassination.[xix] The tools they created could sort through “the blizzard of battlefield intelligence” coming out of Iraq and Afghanistan and use it to predict, for example, the locations of roadside bombs.xix

There is some doubt whether Palantir’s products were as effective as the company claimed, and almost no chance that they contributed to a reduction in civilian casualties. By 2008 at the latest, the American military began conducting ‘signature strikes’ in Iraq and Afghanistan. A signature strike is an attack carried out, usually with a drone-launched missile, against a target whose identity has never been verified. Signature strikes are launched when drone operators become aware of people exhibiting “behavior commonly associated with militants”.[xx] It is flatly impossible to fully analyze the effect of this practice, given that the military makes no effort to determine “the precise identities of who they killed.”xx Target profiles are assembled in part out of “metadata and tracking of cellphone SIM cards,” and strikes are launched without requiring presidential approval.[xxi]

Whether or not Palantir was able to deliver on its promise of a cleaner, more surgical war in the Middle East, federal agencies at home were impressed enough to sign the company on to the fusion center project. In 2007, Palantir released a set of analytical tools for domestic use called ‘Gotham’. This software suite had a unique feature that made it especially attractive to domestic intelligence fusion centers: it could assemble the “geospatial properties” of any item of interest, from cars to individuals.[xxii] While there were some initial hiccups with this feature, modern versions of the software allow law enforcement officers to effortlessly track targeted individuals. Using Gotham, fusion center workers can begin with “almost no information” about an individual, and instantly uncover their address, vehicle information, contact information, family members, close associates, colleagues, and much more.[xxiii]

IV. Ring-Doorbell Systems

Through the creation of fusion centers, law enforcement and intelligence agencies across the United States have expanded their ability to perform surveillance and muddied the lines of authority intended to keep them accountable. Through the implementation of intelligence-led policing, law enforcement agencies have been incentivized to collect and analyze as much information as possible. Finally, through Gotham and software like it, intelligence agencies have leveraged tools of espionage against the American public. Those three developments provide the framework necessary to evaluate the impact of specific surveillance technologies on the privacy of communities where they are deployed.

One especially troubling example is Amazon’s Ring Doorbell. Ring, Inc. was founded in 2013 by American engineer Jamie Siminoff, who invented a Wi-Fi-enabled doorbell containing a camera and a microphone. Amazon bought Ring in early 2018, for more than a billion dollars, and incorporated the doorbell into its own line of smart home products. Siminoff sold Amazon on the idea that such smart doorbells created a “herd immunity” against crime.[xxiv] The idea that doorbells could deter burglaries appeared to hold water after two studies conducted in 2015 showed a reduction in break-ins in the neighborhoods where they were installed. The study conducted by Ring itself showed a 55% reduction, while a study carried out by the LAPD showed a 21% reduction, a figure that rose to 42% after accounting for crime trends in surrounding neighborhoods. In any case, the LAPD effectively endorsed the product at a press conference in 2017 and began pursuing a closer relationship with the startup.[xxv]

Ring had already offered doorbells for free to hundreds of homes in Los Angeles as part of its tests. After those tests were completed, Ring sought to “expand its Ring Neighborhoods program” through more giveaways and a closer relationship with police departments.[xxvi] By late 2019, now under Amazon’s control, Ring had established agreements with more than 400 police departments across the country to help “monitor neighborhoods” where doorbells are installed.[xxvii] These agreements can include “free or discounted” doorbells for local police departments to distribute.[xxviii] They also allow police departments to contact residents directly and request footage from their doorbells.

Ring’s relationship with police departments raises significant privacy concerns, for the users themselves as well as members of their community. Ring’s relationship with these departments is structured to make individual citizens much more accessible to the police. Digital rights group Fight for the Future noted in a 2019 open letter that these agreements create a “seamless and easily automated” method for the police to secure footage without a warrant.[xxix] Through these agreements, police departments also have the right to keep footage for as long as they want. Police departments do not need to meet any “evidentiary standard” before reaching out to individuals, and departments are never required to “delete the video[s] after a specific amount of time”.xxix Activist groups have also expressed concerns that Ring might implement facial recognition in their doorbells. While granting facial recognition capabilities to individual users would pose a significant problem for the neighborhoods where these devices are situated, the issue is largely moot where law enforcement is concerned.

There are very few limitations on what police departments can do with the video that they collect. Crucially, every department has the right to share data “at their discretion,” with anyone they choose.xxix If any of the police departments that Ring has partnered with maintain a presence at an intelligence fusion center, then those videos can already be processed through facial recognition software. The Department of Homeland Security’s facial recognition tools no longer require high definition images or still photographs; they can pull matches from lower fidelity “video surveillance or crime scene video footage”.[xxx] Whether or not the Doorbells themselves can perform facial recognition is a moot point when police departments can acquire their footage without a warrant and run it through the DHS’s facial recognition software.

V. Suspicious Persons Databases

Recording people without their knowledge takes on graver implications when fusion centers and information databases enter the equation. Given Ring’s cooperation with the LAPD, serious privacy concerns arise when footage of unwitting civilians can be submitted to, or retrieved by, police departments as part of an intelligence-gathering effort. And while civilians have varying expectations of privacy in different locations, potential Ring footage of a civilian from within their own house is certainly out of bounds.

The privacy concerns about such technologies are two-fold. First, embedded in Amazon’s 2018 patent for the doorbell is a built-in function for law enforcement to use this footage to surveil “suspicious persons” around the neighborhood and surrounding area. The patent states that the doorbell not only acts as a deterrent to those aiming to burglarize homes, but that its footage can aid law enforcement in capturing perpetrators (Sec. 0003).[xxxi]

Second, and most concerning, is what has not been advertised to the public. The doorbell constructs partial facial images, aided by multiple cameras, into a more complete composite image; in short, it leverages facial recognition technology to identify anyone captured in video footage at the entrance of a home. After a clearer image is created, “users [can] make more educated decisions of whether the person is suspicious or dangerous, and also whether or not to notify law enforcement” (Sec. 0006).xxxi The patent goes on to describe how these composite images make it easier to identify, apprehend, and convict the already-presumed perpetrator. The composite image is compared to a database of suspicious persons, and if that person is found in the database, information about them is retrieved for law enforcement. But what about the possibility that this person is neither committing a crime nor has any intention of doing so? The doorbell has no function that can confirm whether or not a crime is being committed. And what about the possibility that there is no information about this person in the “database of suspicious persons”? A request is then sent to the homeowner to determine, based on the composite image, whether the person is authorized to be at the home. If the person is deemed unauthorized, their image is then added to the database of suspicious persons (Sec. 0019).

The Ring Doorbell is marketed around home and consumer safety, but the language used in the patent betrays a broader purpose: enabling law enforcement to build this “database of suspicious persons.” This leads us to the question: how is a person deemed suspicious? Are they deemed so if they are carrying weapons or hiding in someone’s bushes? Or is the assumption made simply based on the color of their skin or the clothing that they wear? The suspicious persons database, and the ability to alert law enforcement about “suspicious persons,” could lead to even heavier surveillance of black and brown communities and allow racial profiling and paranoia to convict someone of a crime they have yet to commit.

A Ring commercial from 2018 brands the product as “home security like never before,” focusing on features such as lights with motion sensors, sensors throughout the home connected to cameras inside and out, and professional monitoring with 9-1-1 response. Ring declares itself a “whole new way of protecting your home and property.” Another advertisement, from 2017, follows the creator of Ring, Jamie Siminoff, and Shaquille O’Neal as they drive around a Los Angeles neighborhood, installing the doorbell on people’s homes and marveling at the clarity of the images. O’Neal even remarks, “I love making neighborhoods safe” as the advertisement fades out.[xxxii] Ring goes further still, broadcasting endearing moments caught on its home security cameras: in a 2019 video, an Amazon driver dances on a doorstep after seeing snacks left out for delivery drivers. It is a heartwarming, feel-good moment, surely making viewers thankful that Ring was there to capture the footage in the first place. But within these ads and videos, there is hardly any mention of the product’s law enforcement uses. There is no mention of a suspicious persons database, nor any information about its ability to recognize and identify faces.

VI. Police Departments and Tech Companies: Concerning Partnerships

As of August 2019, over 400 police forces across the country have partnered with Ring.[xxxiii] The company has offered discounts to cities and community groups that spend money on the cameras, and has even given free products to police departments to distribute throughout their communities. With these partnerships, law enforcement can request video footage from a homeowner’s cameras within a certain time and place. Concerningly, police do not need a warrant to request such footage, only the user’s permission. While a user can deny the request for law enforcement to see the footage, the terms of service state that “Ring may access, use, preserve and/or disclose your Content to law enforcement authorities, government officials, and/or third parties, if legally required to do so,” and that deleted content may be stored by Ring, accessible only with a valid court order. So, while a user can deny the request for their footage to be shared with law enforcement, that footage can still be accessed with a warrant whether the owner of the footage consents or not.

Ring officials and partners portray the network and partnership as a critical tool in assisting police investigations and protecting homes from criminals and thieves. However, the partnerships give law enforcement eyes all over residential neighborhoods and properties, with the ability to surveil an area at all times. Privacy advocates and legal experts have voiced concerns that the program could threaten civil liberties, turn residents into informants, and target innocent people flagged as “suspicious” by Ring, a potential result that will be discussed in the legal section below.[xxxiv]

While police forces cannot directly access the livestream of home security footage, Ring’s “Neighbors” app allows police to interact with people in the neighborhood who use Ring security cameras. The app lets residents discuss stolen packages, suspicious people, and more, and in effect gives law enforcement a direct line into the community’s network of home security cameras. However, this technology is far from infallible. There have been several cases of these home security cameras being hacked. In December 2019, a home security camera installed in an 8-year-old girl’s room in Mississippi was hacked: the camera’s built-in speaker started playing songs, and an unknown voice shouted racial slurs. According to the New York Times, three other similar cases had been reported in the month of December 2019 alone.[xxxv]

While we look at the technology that is available to law enforcement, it is also important to look at how that technology is being marketed and sold to police forces. In the case of Ring, Amazon is effectively using police forces as a conduit to the civilian market, having them give homeowners free Amazon products and promote an idealized vision of neighborhood safety. Ring has even forbidden police forces from using the word “surveillance” in connection with its product, admitting that the term could flag user privacy concerns.[xxxvi] In Amazon’s case, free products and tax breaks persuade police forces to use its products; meanwhile, in-person conventions showcase the most advanced and up-to-date technologies for police forces to arm themselves with.

Jay Stanley, senior policy analyst at the ACLU, discusses the International Association of Chiefs of Police conference, held in Philadelphia in 2017.[xxxvii] At the conference, over 600 vendors displayed their wares for potential law enforcement buyers. While Stanley saw the standard products for police forces (batons, badges, handcuffs), new technologies were also being marketed: body cameras able to detect gunshots, and analytic systems such as microphones, installed in city centers, that can recognize sounds like gunfire and breaking glass and even include an “aggression detector.” He writes about Axon/Taser, a prominent player in the body camera industry. The Axon/Taser booth at the convention had a giant tent with a theater, where people stood in line to view a video and a “hologram” about Axon body cams. The convention also included big tech players like Motorola, Samsung, and Microsoft trying to get in on the police technology game.

Similarly, the 2019 Milipol Paris convention offers a look into the technology being marketed to police, militaries, and even intelligence agencies. The convention targets buyers from all over the world, with technology ranging from machine guns to spyware, and displays its weapons on marble tables or hanging from the ceilings as if they were pieces of art to be admired.[xxxviii] It has been described as an “Apple store for armies,” evoking products laid out beautifully around a store, waiting to be bought, taken home, and played with. The convention also hosts controversial firms like NSO Group, whose secretive booths are shielded from the outside eye.

While these conventions don’t necessarily reveal what police forces are actually buying, they tell us what tech companies think police forces will want. Police technology isn’t being marketed as a means to an end, a tool for stopping crime and keeping the public safe; it is being marketed as an extravagant party, all bells and whistles. Considering the secretive booths at Milipol Paris, or the Axon/Taser mini-theater set up at the Philadelphia convention, what is being sold is not a product but an experience. Unfortunately, such evocative marketing tends to leave privacy and civil liberties by the wayside in the pursuit of the next big thing in law enforcement.

VII. Digital Vigilantism

Such enticing products are not being aimed solely at police departments, however. Ring’s companion app, Neighbors, has drawn in neighborhoods throughout the country with the promise of greater connectedness. However, the app makes it possible to post video captured by the cameras on the app or online. The spread of home surveillance systems like the Amazon Ring camera, paired with social media neighborhood apps like Neighbors, fosters a digital vigilantism that endangers critical rights like security and privacy. Home video surveillance systems are double-edged swords: they may make households feel more secure, but they allow visibility to be used as a weapon, decreasing the security of other civilians.

Digital vigilantism is a “process where citizens are collectively offended by other citizen activity, and respond through coordinated retaliation on digital media” (56).[xxxix] Where in the past acts of digital vigilantism encompassed national events, like the crowdsourced effort to identify the Boston Marathon bombers in 2013, home video surveillance and neighborhood apps enable digital vigilantism to become more localized. This localization creates more direct “unwanted visibility” for the people who are targeted, and misinformation in local neighborhoods has more direct ramifications. In particular, “digital vigilantism, compared to conventional vigilantism, makes use of targets’ personal information by rendering them visible to public scrutiny” (65). Social media neighborhood apps like Neighbors allow people to post videos from their Ring cameras for their neighbors to see. This places individuals under the scrutiny of whole neighborhoods instantaneously, and, if they don’t have the Neighbors app, possibly without their knowledge.

Such scrutiny raises privacy concerns about home video surveillance and the neighborhood apps that allow the sharing of footage from home security cameras. Both legal and cultural definitions of privacy implicate “control over personal information flows” (66).xxxix Living in the digital age, of course, complicates societal ideas of privacy, and, more importantly, digital vigilantism “highlights the complex nature of privacy and public space” by being a “private form of violence that at the outset marks a severe privacy violation for the targeted individual” (66). Targeted individuals have their privacy violated when they are put under scrutiny through uploads on apps like Neighbors, because they have no control over the sharing of their personal data. Here, that personal data is video footage of them, which can range from blurry visuals to clearly identifying imagery.

Misidentification in surveillance footage raises concerns about racial bias in digital vigilantism. Rekognition, the facial recognition software in Ring cameras, has proven to misidentify people of color more frequently than white people. A study of the effectiveness of Rekognition when used to identify members of Congress found that “nearly 40 percent of Rekognition’s false matches … were of people of color, even though they make up 20 percent of Congress.”[xl] Racial bias has already proven to be a problem in real time: Motherboard, Vice’s technology outlet, “reviewed more than 100 user-submitted posts in the Neighbors app between December 6 and February 5” and found that the majority of people reported as “suspicious” were people of color.[xli]

Real-life vigilantism, meanwhile, is still alive and well, and plainly prejudiced. A small town in Oregon implemented a volunteer-based citizen patrol system whose city recorder claimed that volunteers can identify criminals “just by looking at them.”[xlii] According to the city recorder, criminals can be identified by the way they dress and walk and by the items they carry, like skateboards. The volunteers for the patrol system clearly harbor stereotyped perceptions of criminals that go unchecked, both by themselves and by officials. Should those same volunteers gain access to footage collected by any Ring camera in their neighborhood, they could do real damage to innocent citizens’ reputations and civil liberties. As the ability to collect visual data en masse for use by law enforcement grows, checking racial prejudice must be a priority.

VIII. Policy Recommendations

As society continues to integrate more and more advanced technology into everyday life, it has become more difficult to adhere to traditional values of privacy. To ensure that individuals’ privacy is protected while preserving the security that home video surveillance systems provide, more regulations and restrictions must be enforced for both the cameras and the neighborhood apps. Moore’s strategic triangle is a foundational framework to consider and build upon for balancing the conflicting privacy and security concerns raised by home surveillance.[xliii] Mark Moore, Hauser Professor for Nonprofit Organizations at Harvard, studies and advocates “public value” in regulation. Home video surveillance systems, especially those that encourage partnerships with law enforcement, like Ring, should strive to prioritize the public value of their products, especially when their services create conflicts with privacy values.

Ring’s patent filings frame the camera’s public value as that of a crime-fighting tool. Public value is one of three elements in Moore’s strategic triangle; the other two are “authorizing environment” and “operational capability.”xliii The authorizing environment refers to the groups or individuals in positions to confer legitimacy on the service, and it must agree that the service provides public value. Operational capability is what is required to produce public value: sufficient resources and their correct allocation, a skilled workforce, access to useful data, effective communication, and so on. Part of operational capability is the ability to build on an entity’s existing capabilities.

For a public value strategy to succeed, all three elements must be “coherently aligned” (33).xliii The public value, or the goals of the product in this case, could be seen as increasing security and possibly increasing trust between the police force and civilians. Working with these values (security and trust) necessarily means working with privacy and transparency. The difficulty lies in dealing with the inherent contradictions between security and privacy, and in developing transparency effectively.

Transparency “serves the legitimacy of a Public Administration and the trust of civilians in the government” (35).xliii Trust and transparency are intertwined elements that determine the relationship between civilians and government forces like the police. Police departments that partner with Ring must therefore be clear and open about their use of video footage captured on the cameras. But even if police departments find ways to be transparent about their use of Ring camera footage, this would increase trust only among Ring camera owners. The privacy of recorded individuals remains unprotected no matter how transparent the police department is about its use of the footage. No level of departmental transparency can ease people’s sense of losing control over their own data, so distrust will remain among civilians subject to being recorded by the Ring camera.

The difficulty with the elements of privacy and security is balancing their paradoxical relationship: as privacy is better protected, security becomes more vulnerable. Much of the appeal of the Ring camera is the increased feeling of security it offers. People feel that the camera serves as a deterrent to crime and, in situations where it captures crime, a valuable source of evidence. The security of Ring camera owners may increase, but the privacy of pedestrians in the neighborhood plummets, all the more so with the implementation of software like Rekognition, which collects biometric data. Another security concern is the app used in conjunction with the camera. The Ring camera connects to a social-media-style app where people can share videos captured on their cameras with other Ring owners in the neighborhood. Even if police departments take precautions to protect personal data, that data is also in the hands of people who are not bound by privacy protection laws. In these situations, people can be misidentified and racially profiled, leading to unnecessary conflict in the community and harm to the accused.

Existing solutions for balancing these public value elements fail to anticipate home camera devices that put others’ personal data in the hands of everyday people. Policy measures such as stewardship, which promotes the careful and responsible management of data, cannot be applied to populations outside the workforce. Another consideration is the actual usefulness of the Ring camera in deterring crime: more studies testing that usefulness are needed to determine whether it is worth giving up our privacy for the sake of security in our neighborhoods.

IX. Legality of Facial Recognition Technology

It has been apparent for some time that technology develops faster than the law can keep up. Nowhere is that fact made clearer than in police use of facial recognition technology. Unfortunately, as of May 2020, no federal legislation has been passed, nor have any federal cases been heard, regarding the limits or constitutionality of police use of facial recognition technology in a surveillance setting. However, developments in parallel jurisdictions can be instructive.

One of the first cases heard by any government regarding police use of live facial recognition software took place in Wales. Decided in 2019, Bridges v. South Wales Police has been described as the first case in the world to deal with police use of Automatic Facial Recognition (AFR).[xliv] Such software can scan the faces of people in public areas in real time, that is, not merely in after-the-fact analysis, and can identify, collect, and store faces for an extended period. The South Wales Police began deploying the technology in 2017, using routine surveillance cameras at various public events to match the biometric information of people in the crowd against previously captured information on persons of interest. The court held that the practice was not unlawful because, “although the police’s use of AFR engaged the right to privacy under article 8 and amounted to the processing of sensitive personal data, its use was nonetheless lawful on the basis that the interference struck a fair balance and was not disproportionate.”

But it should be noted that this case is not simply about the use of facial recognition software in surveillance cameras. It is also about the storage of the data recorded in public spaces, specifically facial patterns and subsequent identity matches. The “contours” of the biometric information collected are matched against watchlists of people with warrants out against them.[xlv] In short, the technology can compile and use a database of people to find facial matches.

Concerns about false matches, racial bias, and overextension of law enforcement’s use of automatically generated databases of suspected persons drawn from public spaces were all raised in the complaint, and Bridges is appealing the Court’s decision as of May 2020. Needless to say, the decision calls into question civil liberties in public spaces and the extent to which police departments may deploy intense surveillance in casual settings: one instance of the technology’s use cited in the complaint occurred at an exhibition that raised some public safety concerns but was a public event nevertheless. While Bridges is the first concrete case regarding police use of such software and data storage, there are pieces of legislation and case law closer to home that may better indicate the direction the United States, and Illinois more specifically, is heading with regard to privacy and civil liberties.

X. Illinois Biometric Information Privacy Act

The Illinois Biometric Information Privacy Act is one of the first pieces of legislation of its kind, and one of the first pieces of state or federal legislation in the United States that explicitly protects individuals’ biometric information.[xlvi] That information includes retina/iris scans, fingerprints, and geometric facial scans (the roots of facial recognition technology). Private companies are required to notify consumers and employees that biometric information is being collected and stored, and they must also disclose a “retention schedule” describing when and how individuals’ data will be permanently deleted.

It’s important to note, however, that the Act regulates only private companies’ behavior; it does not change the type of information law enforcement may collect or how long it may be stored. But the Act still provides private citizens recourse against companies that infringe on their privacy, including grounds for damages in civil suits. What makes the Act all the more interesting is its interpretation in 2019 by the Illinois Supreme Court.

XI. Rosenbach v. Six Flags Entertainment Corp.

Rosenbach was decided by the Illinois Supreme Court in January 2019. The case centered on a minor who was required by Six Flags, an amusement park, to submit a thumbprint in order to get a season pass to the park. According to the opinion:

Neither Alexander, who was a minor, nor Rosenbach, his mother, were informed in writing or in any other way of the specific purpose and length of term for which his fingerprint had been collected. Neither of them signed any written release regarding taking of the fingerprint, and neither of them consented in writing “to the collection, storage, use, sale, lease, dissemination, disclosure, redisclosure, or trade of, or for [defendants] to otherwise profit from, Alexander’s thumbprint or associated biometric identifiers or information.”[xlvii]

The Court ruled that although Rosenbach and her son incurred no tangible “harm,” the violation of her rights was itself the harm. This is a huge step: the very nature of this kind of data collection poses a potential harm. When every house in a neighborhood is armed with a Ring camera and the Neighbors app, individuals are vulnerable to the unwitting collection of their biometric information (face contours) and its subsequent storage in a suspicious-persons database, using software that has been shown to make racially biased mistakes. The very storage of this sensitive information is dangerous because the data is used and processed without the subject’s knowledge. The Act acknowledges this reality: harm occurs at the collection of the data, because such collection and storage are an immediate and invisible infringement of privacy.

XII. Conclusion and Next Steps

This reality needs to be reflected in policymaking on technology and privacy law. While Illinois and California are two of the few states in the United States that have enacted biometric information protection legislation, there are still no federal laws of this kind. However, one such law proposed in 2019 could be a model for future ones.

The Facial Recognition Technology Warrant Act of 2019 at its core states that an agent or agency of the United States government “may not use facial recognition technology to engage in ongoing surveillance of an individual or group of individuals in a public space.”[xlviii] However, there are quite a few caveats to this prohibition, the biggest of which is simply that the technology can be used “if in the support of a law enforcement activity.” What exactly counts as a law enforcement activity would likely be left to the courts, and the scope of what it could cover is frighteningly large.

Another caveat applies if the officer has obtained a covered court order, or if the officer “reasonably determines that exigent circumstances and compelling law enforcement needs make it impractical to obtain a covered court order.” This too is concerning and will probably prompt a legal battle over what a “reasonable” officer would do (see Graham v. Connor[xlix]).

In short, what is needed is a federal law that prohibits the use of facial recognition technology without an explicit and specific purpose and that limits storage of the data to the short term. What should be avoided is permission for law enforcement and private companies to collect and store biometric information for long periods in databases used to find suspicious persons. Such use puts private citizens at risk of false positives and baseless accusations, infringing on their civil liberties and privacy rights.


Works Cited

[i] The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States: Official Government Edition (2004).

[ii] National Criminal Intelligence Sharing Plan: Building a National Capability for Effective Criminal Intelligence Development and the Nationwide Sharing of Intelligence and Information (2013).

[iii] Attorney General Ashcroft Announces Implementation of the National Criminal Intelligence Sharing Plan, (2004), https://www.justice.gov/archive/opa/pr/2004/May/04_ag_328.htm.

[iv] Department of Justice, Global Justice Information Sharing Initiative, What’s Being Said about Fusion Centers, https://it.ojp.gov/documents/whatotherssayaboutfusioncenters.pdf.

[v] Timothy H Edgar, ACLU Analysis of the 9-11 Commission’s Recommendations for Intelligence Reform American Civil Liberties Union, https://www.aclu.org/other/aclu-analysis-9-11-commissions-recommendations-intelligence-reform.

[vi] Hearings Before the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security, 108th Cong. (2004) (Testimony by John S. Pistole Executive Assistant Director for Counterterrorism/Counterintelligence Federal Bureau of Investigation).

[vii] Michele Grillo, Police Organizational Change in a Post-September 11 Environment: Rhetoric or Reality?, 2011.

[viii] Organization for Security and Co-Operation in Europe, OSCE Guidebook Intelligence-Led Policing, https://www.osce.org/chairmanship/327476, (2017).

[ix] Department of Justice, Bureau of Justice Assistance, Intelligence-Led Policing: The New Intelligence Architecture, https://www.ncjrs.gov/pdffiles1/bja/210681.pdf, (2005).

[x] Department of Homeland Security, The Role of Fusion Centers in Countering Violent Extremism, https://it.ojp.gov/documents/roleoffusioncentersincounteringviolentextremism_compliant.pdf, (2012).

[xi] Department of Homeland Security, Cyber Integration for Fusion Centers: An Appendix to the Baseline Capabilities for State and Major Urban Area Fusion Centers, https://it.ojp.gov/GIST/178/Cyber-Integration-for-Fusion-Centers–An-Appendix-to-the-Baseline-Capabilities-for-State-and-Major-Urban-Area-Fusion-Centers, (2015).

[xii] Federal Bureau of Investigation, Terrorism, https://www.fbi.gov/investigate/terrorism.

[xiii] Peter Chalk & William Rosenau, Intelligence, Police, and Counterterrorism: Assessing Post-9/11 Initiatives, https://www.rand.org/content/dam/rand/www/external/nsrd/terrpanel/additional/intelinputv2.pdf, (2003).

[xiv] Rebecca Grant, The Redefinition of Strategic Airpower, Air Force Magazine, https://www.airforcemag.com/article/1003strategic/, (2003).

[xv] Richard Sanders, The Myth of ‘Shock and Awe’: Why the Iraqi Invasion was a Disaster, https://www.telegraph.co.uk/news/worldnews/middleeast/iraq/9933587/The-myth-of-shock-and-awe-why-the-Iraqi-invasion-was-a-disaster.html (2013).

[xvi] President George W. Bush, State of the Union Address, (Jan. 29, 2002).

[xvii] Sam Biddle, How Peter Thiel’s Palantir Helped the NSA Spy on the Whole World, The Intercept, https://theintercept.com/2017/02/22/how-peter-thiels-palantir-helped-the-nsa-spy-on-the-whole-world/, (2017).

[xviii] Palantir, https://www.palantir.com/palantir-gotham/.

[xix] Peter Waldman, Lizette Chapman, & Jordan Robertson, Peter Thiel’s data-mining company is using War on Terror tools to track American citizens. The scary thing? Palantir is desperate for new customers, Bloomberg, https://www.bloomberg.com/features/2018-palantir-peter-thiel/, (2018).

[xx] Dan De Luce & Paul McLeary, Obama’s Most Dangerous Drone Tactic Is Here to Stay, Foreign Policy, https://foreignpolicy.com/2016/04/05/obamas-most-dangerous-drone-tactic-is-here-to-stay/, (2016).

[xxi] Adam Hudson, Thanks to Reliance on “Signature” Drone Strikes, US Military Doesn’t Know Who It’s Killing, TruthOut, https://truthout.org/articles/thanks-to-reliance-on-signature-drone-strikes-us-military-doesn-t-know-who-it-s-killing/, (2015).

[xxii] Palantir: An Open Source Development Success Story, DirectionsMag, https://www.directionsmag.com/article/1638, (2013).

[xxiii] Caroline Haskins, Revealed: This Is Palantir’s Top-Secret User Manual for Cops, Vice, https://www.vice.com/en_us/article/9kx4z8/revealed-this-is-palantirs-top-secret-user-manual-for-cops, (2019).

[xxiv] Mark Harris, Video doorbell firm Ring says its devices slash crime—but the evidence looks flimsy, MIT Tech Review, (2018).

[xxv] Jefferson Graham, Police Say Crime Drops with Digital Doorbells, USAToday, https://www.usatoday.com/story/tech/talkingtech/2017/03/29/crime-busting-video-doorbell-ring-expands-clones-undercut-price/99677840/, (2017).

[xxvi] Annlee Ellingson, Ring Cut Burglaries by Half in LA Neighborhood, L.A. Biz, https://www.bizjournals.com/losangeles/news/2016/03/24/ring-cut-burglaries-by-half-in-l-a-neighborhood.html, (2016).

[xxvii] Police departments in SoCal, nationwide partner with Ring to view doorbell cam footage, ABC7, https://abc7.com/5501056/, (2019).

[xxviii] Rani Molla, Activists are pressuring lawmakers to stop Amazon Ring’s police surveillance partnerships, Vox, https://www.vox.com/recode/2019/10/8/20903536/amazon-ring-doorbell-civil-rights-police-partnerships, (2019).

[xxix] Samantha Masunaga, Open letter calling on elected officials to stop Amazon’s doorbell surveillance partnerships with police, Fight for the Future, https://www.fightforthefuture.org/news/2019-10-07-open-letter-calling-on-elected-officials-to-stop/, (2019).

[xxx] Department of Homeland Security, Three-Dimensional Facial Recognition, Tech Note, https://www.dhs.gov/sites/default/files/FacialRecognition-TN_0508-508.pdf, (2008).

[xxxi] U.S. Patent App. No. 15/984,298 (Filed: May 18, 2018).

[xxxii] Lulu Chang, Home security company Ring partners with Shaquille O’Neal to keep homes safe, Digital Trends, https://www.digitaltrends.com/home/ring-doorbell-partners-with-shaquille-oneal/, (2017).

[xxxiii] Caroline Haskins, Amazon Is Coaching Cops on How to Obtain Surveillance Footage Without a Warrant, Vice, https://www.vice.com/en_us/article/43kga3/amazon-is-coaching-cops-on-how-to-obtain-surveillance-footage-without-a-warrant?xyz, (2019).

[xxxiv] Drew Harwell, Doorbell-camera firm Ring has partnered with 400 police forces, extending surveillance concerns, Washington Post, https://www.washingtonpost.com/technology/2019/08/28/doorbell-camera-firm-ring-has-partnered-with-police-forces-extending-surveillance-reach/?arc404=true, (2019).

[xxxv] Neil Vigdor, Somebody’s Watching: Hackers Breach Ring Home Security Cameras, https://www.nytimes.com/2019/12/15/us/Hacked-ring-home-security-cameras.html, (2019).

[xxxvi] Dell Cameron, Ring Gave Police Stats About Users Who Said ‘No’ to Law Enforcement Requests, Gizmodo, https://gizmodo.com/ring-gave-police-stats-about-users-who-said-no-to-law-e-1837713840, (2019).

[xxxvii] Jay Stanley, A Look at the High-Tech Gadgets Being Marketed to Police, ACLU, https://www.aclu.org/blog/privacy-technology/surveillance-technologies/look-high-tech-gadgets-being-marketed-police, (2017).

[xxxviii] Patrick Howell O’Neill, Champagne, shotguns, and surveillance at spyware’s grand bazaar, MIT Tech Review, https://www.technologyreview.com/2019/11/25/131837/champagne-shotguns-and-surveillance-at-spywares-grand-bazaar/#Echobox=1574697516, (2019).

[xxxix] Daniel Trottier, Digital Vigilantism as Weaponisation of Visibility, Philosophy & Technology 30, 55–72, https://doi.org/10.1007/s13347-016-0216-4, (2017).

[xl] Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, ACLU, https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28, (2018).

[xli] Caroline Haskins, Amazon’s Home Security Company Is Turning Everyone Into Cops, Vice, https://www.vice.com/en_us/article/qvyvzd/amazons-home-security-company-is-turning-everyone-into-cops, (2019).

[xlii] Antonia Noori Farzan, A small town can’t afford cops at night. So it’s turning to cameras watched by citizen patrols, Washington Post, https://www.washingtonpost.com/nation/2019/12/03/cave-junction-oregon-citizen-patrols-cameras-police/, (2019).

[xliii] Ronald Meijer, Peter Conradie, & Sunil Choenni, Reconciling Contradictions of Open Data Regarding Transparency, Privacy, Security and Trust, Journal of Theoretical and Applied Electronic Commerce Research 9(3), 32–44, https://dx.doi.org/10.4067/S0718-18762014000300004, (2014).

[xliv] Adam Satariano, Police Use of Facial Recognition Is Accepted by British Court, New York Times, https://www.nytimes.com/2019/09/04/business/facial-recognition-uk-court.html, (2019).

[xlv] R (Bridges) v Chief Constable of the South Wales Police [2019] EWHC 2341

[xlvi] 740 ILCS 14, Biometric Information Privacy Act.

[xlvii] Rosenbach v. Six Flags Entertainment Corporation, 2017 IL App (2d) 170317 – Ill: Appellate Court, 2nd Dist. 2017.

[xlviii] Facial Recognition Technology Warrant Act, S.2878, 2019

[xlix] 490 U.S. 386; 109 S. Ct. 1865; 104 L. Ed. 2d 443; 1989 U.S. LEXIS 2467; 57 U.S.L.W. 4513 (1989).