At Liberty Podcast
A Growing Movement for Ethical Tech
November 21, 2018
In recent weeks, hundreds of Amazon employees have spoken out to oppose the company marketing its facial recognition software for use by Immigration and Customs Enforcement. They join a chorus of voices — both in the tech world and outside of it — who are concerned about the use of artificial intelligence by law enforcement. We’re replaying a recent episode on the impact of A.I. on our civil liberties, featuring Meredith Whittaker of the AI Now Institute.
Related Content
Press Release, June 2025
Free Speech
Privacy & Technology
ACLU Comment on Supreme Court Decision in Free Speech Coalition v. Paxton
WASHINGTON – The Supreme Court dealt a blow to freedom of speech and privacy today by upholding Texas legislation that requires invasive age verification to access online content. Today’s ruling conflicts with decades of Supreme Court precedent protecting the free speech rights of adults to access sexual content online. It is, however, a limited opinion that does not permit age verification for non-sexual content online.

“The Supreme Court has departed from decades of settled precedents that ensured that sweeping laws purportedly for the benefit of minors do not limit adults’ access to First Amendment-protected materials,” said Cecillia Wang, national legal director of the ACLU. “The Texas statute at issue shows why those precedents applying strict scrutiny were needed. The legislature claims to be protecting children from sexually explicit materials, but the law will do little to block their access, and instead deters adults from viewing vast amounts of First Amendment-protected content.”

Texas’s H.B. 1181 mandates that any website where one-third or more of its content is deemed sexual in a way that is “harmful to minors” must require visitors to prove they are adults before accessing the site. The act defines “sexual material harmful to minors” as material that is obscene from the perspective of an average person considering the material’s effect on minors.

“Today’s decision does not mean that age verification can be lawfully imposed across the internet,” said Vera Eidelman, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project. “With this decision, the court has carved out an unprincipled pornography exception to the First Amendment. The Constitution should protect adults’ rights to access information about sex online, even if the government thinks it is too inappropriate for children to see.”
The Supreme Court reversed the Fifth Circuit’s holding that mere rational basis review applies, instead applying intermediate scrutiny, but it affirmed the Fifth Circuit’s ultimate conclusion that the law survives – and refused to apply strict scrutiny, as courts typically do in challenges to content-based laws. The Texas law burdens adults’ ability to access sexual materials, requiring individuals to disclose personal information vulnerable to surveillance and data breaches just to access online content. The law also ultimately fails to achieve its intended purpose. Because it applies only if one-third of a site’s content is explicit, the online sites where minors are most likely to be exposed to sexual content, like forums or social media platforms, are not affected.

“As it has been throughout history, pornography is once again the canary in the coal mine of free expression,” said Alison Boden, executive director of the Free Speech Coalition. “The government should not have the right to demand that we sacrifice our privacy and security to use the internet. This law has failed to keep minors away from sexual content yet continues to have a massive chilling effect on adults. The outcome is disastrous for Texans and for anyone who cares about freedom of speech and privacy online.”

The Supreme Court has repeatedly heard cases on this issue in the past, many of which were brought by the ACLU, and has consistently held that requiring users to verify their age to access protected content is unconstitutional where less restrictive alternatives, like filtering software, are available. The Free Speech Coalition is represented by Quinn Emanuel, the ACLU, and the ACLU of Texas. This case is part of the ACLU’s Joan and Irwin Jacobs Supreme Court Docket. The decision can be read here.

Court Case: Free Speech Coalition, Inc. v. Paxton
Affiliate: Texas
Press Release, June 2025
Privacy & Technology
Digital Identity Leaders and Privacy Experts Sound the Alarm on Invasive ID Systems
WASHINGTON – Over 80 organizations and prominent experts have come together to oppose a new surveillance feature of digital identity systems known as “Phone Home,” which allows the government to track individuals through their digital driver’s licenses or other identity documents. Signatories include the ACLU, notable privacy and civil liberties groups, as well as academics, state legislators, CEOs of digital identity companies, cryptographers, and other leading experts.

This diverse group of experts issued a statement today focusing on a vital element of identity system architecture: whether it is designed to “phone home” to the issuer when somebody verifies their identity. Currently, when somebody presents a plastic driver’s license, that interaction is between the two parties, and the government is none the wiser. But digital driver’s licenses are being built so that the system notifies the government every time an identity card is used, giving it a bird’s-eye view of where, when, and to whom people are showing their identity. That “phone home” functionality becomes especially intrusive as people start having to use digital IDs online, giving the government the ability to track their browsing history.

“Creating a system through which the government can track us any time we use our driver’s license is an Orwellian nightmare,” said Jay Stanley, senior policy analyst with the ACLU’s Speech, Privacy, and Technology Project. “There is a broad consensus among those who work, think, and innovate in the digital identity space that privacy needs to be built into any digital identity system.
This is not a partisan issue, and it’s one states must act on before it’s too late.”

Digital identity systems threaten to create serious civil liberties problems, including around privacy and accessibility, which is why digital ID systems must make use of all available privacy technologies and architectures — including, but most certainly not limited to, the “no phone home” principle highlighted in today’s letter. The ACLU has also published legislative recommendations for state legislatures outlining 12 technical characteristics and policy measures that must accompany any acceptable digital ID system. Unfortunately, a number of states are adopting such systems without thinking through the potential ramifications of this technology, including 13 that have already created digital driver’s license systems and another 21 that have passed enabling or study legislation.

Identity systems with “phone home” capability not only create the potential for tracking of people’s lives and activities — such as those of people whose political beliefs certain government officials may not like — but also make it possible for an abusive government to block people from using their IDs for some or all purposes. The experts are “call[ing] on authorities everywhere to favor identity solutions that have no phone home capability whatsoever, and to prioritize privacy and security over interoperability and ease of implementation.”
Press Release, May 2025
Privacy & Technology
National Security
ACLU and ACLU of Louisiana Sound Alarm on New Orleans Police Department’s Secret Use of Real-Time Facial Recognition
NEW ORLEANS — The ACLU and the ACLU of Louisiana are raising urgent concerns following an investigation showing that the New Orleans Police Department has secretly used real-time face recognition technology to track and arrest residents without public oversight or City Council approval. This not only flouts local law, but endangers all of our civil liberties. This is the first known instance of an American police department relying on live facial recognition cameras at scale, and it is a radical and dangerous escalation of the power to surveil people as we go about our daily lives.

According to The Washington Post, since 2023 the city has relied on face recognition-enabled surveillance cameras through the “Project NOLA” private camera network. These cameras scan every face that passes by and send real-time alerts directly to officers’ phones when they detect a purported match to someone on a secretive, privately maintained watchlist.

The use of facial recognition technology by Project NOLA and New Orleans police raises serious concerns about misidentifications and the targeting of marginalized communities. Consider Randal Reid, for example. He was wrongfully arrested based on a faulty Louisiana facial recognition match, despite never having set foot in the state. The false match cost him his freedom, his dignity, and thousands of dollars in legal fees. That misidentification happened based on a still image run through a facial recognition search in an investigation; Project NOLA’s real-time surveillance system supercharges the risks.

“We cannot ignore the real possibility of this tool being weaponized against marginalized communities, especially immigrants, activists, and others whose only crime is speaking out or challenging government policies.
These individuals could be added to Project NOLA’s watchlist without the public’s knowledge, and with no accountability or transparency on the part of the police departments,” said Alanah Odoms, executive director of the ACLU of Louisiana. “Facial recognition technology poses a direct threat to the fundamental rights of every individual and has no place in our cities. We call on the New Orleans Police Department and the City of New Orleans to halt this program indefinitely and terminate all use of live-feed facial recognition technology. The ACLU of Louisiana will continue to fight the expansion of facial recognition systems and remain vigilant in defending the privacy rights of all Louisiana residents.”

Key details revealed in the reporting include:

- Real-time tracking: More than 200 surveillance cameras across New Orleans, particularly around the French Quarter, are equipped with facial recognition software that automatically scans passersby and alerts police when someone on a “watch list” is detected.
- Privately run, publicly weaponized: The watch list is assembled by the head of Project NOLA and includes tens of thousands of faces scraped from police mugshot databases, without due process or any meaningful accuracy standards.
- Used to justify stops and arrests: Alerts are sent directly to a phone app used by officers, enabling immediate stops and detentions based on unverified purported facial recognition matches.
- Searchable database: Project NOLA also has the capability to search stored video footage for a particular face or faces appearing in the past. In other words, it could upload an image of someone’s face and then search for all appearances of that person across all the camera feeds over the last 30 days, retracing their movements, activities, and associations. Pervasive technological location tracking raises grave concerns under the Fourth Amendment to the Constitution.
- No retention, no oversight: NOPD reportedly does not retain records about the alerts it receives, and officers rarely record their reliance on Project NOLA facial recognition results in investigative reports. This raises serious questions about compliance with constitutional requirements to preserve evidence and turn it over to courts and to people accused of crimes, undermining accountability in criminal prosecutions.
- Violates city law: When the New Orleans City Council lifted the city’s ban on face recognition and imposed guardrails in 2022, it maintained a ban on the use of facial recognition technology as a surveillance tool. This system baldly circumvents that ban. It also circumvents transparency and reporting requirements imposed by the City Council; officials never disclosed the program in mandated public reports.

In 2021, the ACLU of Louisiana sued the Louisiana State Police for information about their secret deployment of facial recognition technology, despite years of officials assuring the public it wasn’t in use. Time and again, officials claim these tools are only used responsibly, but history proves otherwise. After The Washington Post began investigating this time around, city officials acknowledged the program, said they had “paused” it, and said they “are in discussions with the city council” to change the city’s facial recognition technology law to permit this pervasive monitoring.

The ACLU is now urging the New Orleans City Council to launch a full investigation and reimpose a moratorium on facial recognition use until robust privacy protections, due process safeguards, and accountability measures are in place.

“Until now, no American police department has been willing to risk the massive public blowback from using such a brazen face recognition surveillance system,” said Nathan Freed Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project.
“By adopting this system – in secret, without safeguards, and at tremendous threat to our privacy and security – the City of New Orleans has crossed a thick red line. This is the stuff of authoritarian surveillance states, and has no place in American policing.”

Affiliate: Louisiana
Press Release, May 2025
Disability Rights
Privacy & Technology
Disability Rights and Privacy Advocates Raise Concerns with Proposed Autism “Registry”
WASHINGTON – The ACLU, the Autistic Self Advocacy Network (ASAN), and 80 other disability rights, civil rights, and public health organizations sent a letter today to Secretary of Health and Human Services Robert F. Kennedy, Jr. raising significant concerns with the National Institutes of Health’s (NIH) proposal to create a national autism “registry.” The registry was detailed during an April 21 presentation by NIH Director Jay Bhattacharya, who described it as a “real-world data platform” for “developing national disease registries, including a new one for autism.” The Department of Health and Human Services (HHS) has since claimed it is not creating an “autism registry,” but the department has failed to engage with autistic people and advocates, exacerbating the lack of clarity.

“Instead of engaging with the communities this proposal would impact most, federal health agencies have taken every opportunity to shut disabled and autistic people out of the conversation, leaving unanswered questions, a sense of alarm, and deepening mistrust,” said Vania Leveille, ACLU senior legislative counsel. “Trust in federal health data requires affirmative, good-faith engagement with autistic people, appropriate safeguards for privacy, and ensuring any proposal helps – not hurts – the communities impacted.”

The letter outlines the many unanswered questions left by NIH’s data platform proposal, including what data it will collect, what sources it will rely on, and how it will anonymize and secure the data. It also highlights the increased risk of surveillance, stigmatization, and marginalization from data collection, particularly for disabled people, who have a long and troubled history with government efforts to find and track disability for the purpose of eliminating it.

“It’s no secret that this proposal has created a lot of fear and confusion in the autistic community,” said Colin Killick, executive director of the Autistic Self Advocacy Network.
“We continue to advocate for and support research into autism that autistic people want conducted, but it is critical that autistic people’s private data not be shared without our consent. We hope the administration answers our questions and shines light on how autistic people and our rights will be protected.”

The letter also lays out three key steps NIH and HHS must take to establish trust in the proposed data platform: meaningful communication with autistic people and advocates; fundamental privacy safeguards to prevent misuse and abuse; and ensuring the data platform advances the well-being of autistic people, people with disabilities, and the public health while minimizing potential harms.

The letter is here: /documents/letter-to-hhs-secretary-robert-f-kennedy-jr-on-concerns-with-proposed-autism-registry